The Test Community.Network
Author: Tim Burnett
© Tim Burnett
Description
Welcome to the Test Community Network, your go-to weekly update on everything about assessment and testing. Tim Burnett and special guests will help you find the latest chances to explore and connect in the diverse world of assessment and testing. Discover new events, webinars, and podcasts, plus get inspiration from the Test Community Network marketplace. For more information visit: https://www.testcommunity.network
79 Episodes
In this week's episode, Tim Burnett chats with Catherine Whitaker, Global Director of Online Learning at the British Council, and Louella Morton, Co-founder and Co-CEO of TestReach. They discuss:
- Authenticity and Assessment in the Age of AI: how AI is accelerating the debate around skills-based assessment versus knowledge regurgitation, and what assessment should look like in an AI-enabled world
- Cross-Sectoral Learning and Collaboration: the value of bringing together schools, higher education, and professional qualifications sectors to share challenges and opportunities, breaking down industry silos
- Digital Transformation in the Exams Industry: the rapid rate of change in assessment over the past 10-15 years, the role of organisations like the eAA in providing thought leadership, and how examining bodies can navigate innovation while managing risk aversion
Tim provides a recap of the Assessment Now conference organised by the British Council in London, featuring about 100 delegates from awarding organisations, higher education institutions, and assessment experts. Catherine shares insights on the British Council's unique position as both an assessment provider and an exam deliverer for over 140 organisations, and discusses their new AI-powered speaking assessment bot, which has received 90% positive feedback.
Louella discusses the importance of best practice and guidance as the industry navigates uncharted waters with new technologies, and the balance between innovation and the cautious approach many organisations take.
Connect with Catherine Whitaker on LinkedIn: https://www.linkedin.com/in/catherine-whitaker/
Connect with Louella Morton on LinkedIn: https://www.linkedin.com/in/louellamorton/
Connect with Tim Burnett on LinkedIn: https://www.linkedin.com/in/tburnett/
If you would like to join Tim on a future episode or sponsor the Test Community Network, get in touch with Tim to schedule a call: https://calendly.com/educationtech/15min-teams-chat
Events: https://www.testcommunity.network/upcoming-events
Podcasts: https://www.testcommunity.network/podcasts
Marketplace: https://www.testcommunity.network/marketplace
Subscribe to the Test Community Network here: https://www.testcommunity.network
In this week's episode, Tim Burnett chats with Dr. Aftab Hussain, an education technology pioneer and founder of Second Mesh. Aftab has been a teacher since the mid-1990s and recently completed his doctorate researching AI in education at Bolton College, where he conducted groundbreaking work on chatbots and safe AI systems. They discuss:
- Aftab's pioneering work developing AI chatbots for education in 2017, years before ChatGPT, including creating robust safeguarding protocols to ensure concerning conversations were properly flagged to the appropriate college teams
- The critical gaps in the DfE's AI guidelines for education: why most EdTech companies don't fully comply with the recommendations, and the fundamental challenges around data governance, privacy, and institutional oversight
- Second Mesh's mission to provide locally hosted, open-weight AI models that give schools and colleges complete control over their AI services, moving away from risky third-party commercial platforms that handle sensitive student data
Aftab shares his unique journey from putting Business Studies materials online in the 1990s to addressing the growing complexity of multiple information systems in colleges. He explains how his team at Bolton College pioneered safeguarding approaches for AI interactions that are only now becoming mainstream considerations. Through Second Mesh, he's combining product development with applied research to create AI solutions specifically designed for educational institutions, focusing on affordability, sustainability, and ensuring that education leaders maintain proper governance over AI services rather than surrendering control to external providers.
The episode also features Wall Street Journal bestselling author Sheena Yap Chan discussing her upcoming presentation at Beyond Multiple Choice 2025. In "Visible Confidence," Sheena explores how confidence, visibility, and authentic leadership must remain at the core of equitable assessment design as AI revolutionises assessment practices. Drawing from her books The Tao of Self-Confidence and Bridging the Confidence Gap, she shares how assessment designers can centre empathy and clarity whilst leveraging AI innovations, ensuring that assessment tools remain not just efficient but fair, transparent, and human-centred.
Join Sheena at BMC 2025: https://www.beyond-multiple-choice.app
Connect with Dr. Aftab Hussain on LinkedIn: https://www.linkedin.com/in/aftab-hussain-phd-7145444/
Connect with Tim Burnett on LinkedIn: https://www.linkedin.com/in/tburnett/
Learn more about Second Mesh: https://secondmesh.com
If you would like to join Tim on a future episode or sponsor the Test Community Network, get in touch with Tim to schedule a call: https://calendly.com/educationtech/15min-teams-chat
Events: https://www.testcommunity.network/upcoming-events
Podcasts: https://www.testcommunity.network/podcasts
Marketplace: https://www.testcommunity.network/marketplace
Subscribe to the Test Community Network here: https://www.testcommunity.network
In this week's episode, Tim Burnett chats with David Lay, founder of SkillSync AI and an expert in medical education assessment. With extensive experience in delivering and developing assessment software for both undergraduate and postgraduate medical education, David shares his insights on transforming portfolio-based learning. They discuss:
• The challenges with current medical ePortfolio systems and why 60-65% of trainees don't see value in them
• How AI can move beyond simple item generation to enable passive, authentic assessment approaches
• The future of medical assessment using technology like body cameras and voice-activated daily logs
David explains how medical assessment in the UK currently works, from formative workplace-based assessments and ePortfolios to summative exams and OSCEs. He highlights a critical problem: medical trainees often view ePortfolios as a chore rather than a valuable learning tool, with some even postponing their careers because of the administrative burden.
The conversation explores how AI offers opportunities to simplify assessment by allowing doctors to focus on patient care whilst passively capturing learning moments. David describes SkillSync AI's MVP, which lets trainees speak or type daily logs that automatically align to curriculum requirements and identify completed assessments. Looking ahead, David envisions using existing technologies like body cameras in A&E departments to analyse patient interactions across multiple data sources, creating authentic assessments without disrupting clinical practice.
Read the Vibe Coding Cheating article: https://www.linkedin.com/pulse/creating-cheating-apps-vibe-coding-its-so-easy-tim-burnett--vtnke/?trackingId=OKm9%2Bo1%2BRBuABJIUJUdg%2BA%3D%3D
Connect with David Lay on LinkedIn: https://www.linkedin.com/in/david-lay1/
Connect with Tim Burnett on LinkedIn: https://www.linkedin.com/in/tburnett/
If you would like to join Tim on a future episode or sponsor the Test Community Network, get in touch with Tim to schedule a call: https://calendly.com/educationtech/15min-teams-chat
Events: https://www.testcommunity.network/upcoming-events
Podcasts: https://www.testcommunity.network/podcasts
Marketplace: https://www.testcommunity.network/marketplace
Subscribe to the Test Community Network here: https://www.testcommunity.network
This debate examines whether assessment organizations should rapidly transition to fully digital delivery or maintain paper-based or hybrid approaches based on evidence and risk management.
Key Arguments For Rapid Digital Adoption:
- Security Against AI Threats: With 88% of learners using AI for exam prep, paper offers no defense; only digital platforms enable keystroke analysis, browser lockdowns, and AI detection tools.
- Accessibility and Inclusion: Digital formats provide screen readers, adjustable colors, and adaptive interfaces that paper cannot match, making assessments genuinely inclusive rather than discriminatory.
- Operational Sustainability: The massive logistical effort of printing, shipping, and storing millions of paper documents annually is environmentally and operationally unsustainable compared to digital efficiency.
- Innovation Opportunities: Digital platforms enable interactivity, rich media, and the ability to modernize outdated assessment models for the 21st century.
- Scalability: Full digital transformation is the only realistic path for large-scale, secure, and inclusive assessment delivery.
Key Arguments For Evidence-Based Caution:
- Construct-Irrelevant Variance: Rushing to digital risks measuring the wrong skills; the test format itself (typing speed, screen navigation) may interfere with accurately assessing the intended competencies.
- Subject-Specific Validity: Certain subjects like advanced mathematics and physics require complex diagrams and working calculations that are better suited to paper; forcing all subjects onto screens compromises measurement integrity.
- Privacy and Compliance Risks: GDPR concerns around biometric data collection and online proctoring are serious; AI detection tools need proven reliability and known false positive rates before high-stakes deployment.
- Infrastructure Vulnerabilities: Digital systems introduce new risks, including server failures, connectivity issues, and the need for specialized technical support that may not be universally available.
- Environmental Trade-offs: The green argument isn't clear-cut; 80% of a laptop's carbon footprint comes from manufacturing, potentially trading paper waste for serious e-waste problems.
- Need for Validation: Format decisions must be grounded in comprehensive risk assessments and validation studies comparing candidate performance across modes, not driven by technological enthusiasm.
The Central Tension:
The debate centers on whether urgency (security threats, accessibility needs, operational efficiency) justifies rapid digital transformation, or whether protecting assessment validity, candidate rights, and ensuring evidence-based implementation requires a more measured, potentially multimodal approach tailored to different subjects and contexts.
Transcript: https://testcommunity.network/debate-digital-vs-paper-assessment-delivery
Summary: The Online Proctoring Debate
This debate examines whether increasingly sophisticated remote proctoring technology represents necessary security evolution or an invasive arms race that compromises candidate privacy and equity.
Key Arguments For Enhanced Proctoring:
- Security Necessity: Escalating cheating threats, particularly AI-enabled tools, require multimodal monitoring (video, audio, device checks) to maintain certification credibility. Sophisticated cheaters use virtual machines and secondary devices, necessitating countermeasures like lockdown browsers and multi-camera systems.
- Audit Trail Requirements: Without defensible, auditable evidence of test integrity, certifications lose their value entirely.
- Usability Improvements: Providers are investing in low-bandwidth solutions and browser-native delivery to reduce technical barriers, while hybrid AI-human review systems minimize false positives and ensure fair outcomes.
Key Arguments Against the Current Trajectory:
- Surveillance Overreach: Deep proctoring systems employ continuous facial recognition, microphone monitoring, and OS-level activity detection; this constitutes surveillance rather than mere integrity assurance.
- Data Privacy Risks: Collecting biometric data, ID scans, and continuous screen captures creates a "data honeypot" with massive PII liability and GDPR compliance burdens.
- Equity and Access Issues: Requirements for specialized equipment, high-speed internet, and modern computers effectively exclude disadvantaged candidates, creating unequal access to certification.
- Candidate Stress: Constant monitoring generates anxiety and leads to false positives that force innocent candidates through stressful appeals processes.
The Central Tension:
The debate ultimately centers on balancing assessment integrity against candidate well-being, privacy, and equitable access: a challenge that will define the future of remote certification.
In this week's episode, Tim Burnett chats with Nena Hollis, Paddy McLaughlin, Hugh McNeela, and Anna Kate Phelan at the EATP conference in Dublin. Nena works in the ATP operations team, Paddy served as chair of the EATP conference, Hugh is a first-time attendee from ReadSpeaker, and Anna is an ATP volunteer and eAA Director working at Eintech, focusing on user experience design in assessment platforms. They discuss:
- The success of the EATP Dublin conference, which saw a 25% increase in attendance, and the importance of ATP's volunteer-driven model across its 15 division committees
- Accessibility and inclusion in assessments, including text-to-speech solutions, women-specific barriers in assessment, and human-centred design principles
- Innovation in the assessment sector, from AI-powered immediate feedback platforms like Surpass Tutor to virtual professional discussions and the ongoing challenges of preventing cheating in the AI era
The conversations highlight the vibrant community at EATP Dublin, with delegates sharing insights on upcoming ATP events including webinars, coffee chats, and the Innovations conference in New Orleans (1-4 March 2026). All guests emphasised the personal and professional benefits of volunteering in industry organisations like ATP. Key topics included Mary Richardson's opening keynote on the importance of humanising assessments, the challenges facing disengaged younger learners (particularly females), and how organisations are incorporating accessibility and inclusive design into their assessment workflows. Anna's presentation on "Breaking Barriers: Women in Assessment" generated significant discussion about menopause, gender parity, and the need for supportive male allies in the industry. Hugh shared insights on text-to-speech technology's role in creating fair and accessible assessments, whilst Paddy reflected on the responsibilities and rewards of conference leadership.
Connect with Anna: https://www.linkedin.com/in/annakatephelan/
Connect with Paddy: https://www.linkedin.com/in/paddy-mclaughlin/
Connect with Hugh: https://www.linkedin.com/in/hughmcneela/
Connect with Nena: https://www.linkedin.com/in/nena-hollis-mba-938994265/
Connect with Tim Burnett on LinkedIn: https://www.linkedin.com/in/tburnett/
If you would like to join Tim on a future episode or sponsor the Test Community Network, get in touch with Tim to schedule a call: https://calendly.com/educationtech/15min-teams-chat
Events: https://www.testcommunity.network/upcoming-events
Podcasts: https://www.testcommunity.network/podcasts
Marketplace: https://www.testcommunity.network/marketplace
Subscribe to the Test Community Network here: https://www.testcommunity.network
In this week's episode, Tim Burnett chats with Jim from Surpass, discussing the upcoming Surpass Virtual Conference. They discuss:
- The evolution of the Surpass Conference format and the decision to alternate between virtual and in-person events
- The impressive keynote lineup featuring John Weiner on AI and automation, Andreas Schleicher on global AI educational practices, and John Winkley with Shirley Watson on formative assessment
- The closing panel exploring the convergence of AI and humanoid robotics in vocational education and training
Tim and Jim explore the extensive programme planned for the 7th-9th October conference, including a pre-conference day dedicated to deep-diving into specific topics. Jim shares exciting updates about AI-assisted item writing at scale using Surpass Copilot, innovative offline assessment delivery on Chromebooks, and developments in the Surpass Test Centre Network. The conversation also touches on the growing use of Surpass Tutor for personalised learning and formative assessment by organisations like NCFE and SQA.
Tim also provides his weekly industry headlines update, highlighting key stories including medical assessment integrity issues in Belgium, new research on auto-scoring creative thinking, and the launch of his comprehensive AI Adoption Playbook for UK awarding bodies.
Connect with Jim and Surpass: https://www.linkedin.com/in/jim-crawford-15040966/
Register for the free Surpass Conference: https://conference.surpass.com
Connect with Tim Burnett on LinkedIn: https://www.linkedin.com/in/tburnett/
If you would like to join Tim on a future episode or sponsor the Test Community Network, get in touch with Tim to schedule a call: https://calendly.com/educationtech/15min-teams-chat
Events: https://www.testcommunity.network/upcoming-events
Podcasts: https://www.testcommunity.network/podcasts
Marketplace: https://www.testcommunity.network/marketplace
Subscribe to the Test Community Network here: https://www.testcommunity.network
🎙️ New to streaming or looking to level up? Check out StreamYard and get a $10 discount! 😍 https://streamyard.com/pal/d/5780597344829440
In this week's episode, Tim Burnett chats with Mark Gray, UK and Ireland Manager for Universal Robots. Mark brings 25 years of automation experience and 9 years with Universal Robots, where he's responsible for day-to-day operations across the UK and Ireland. They discuss:
- How collaborative robots are revolutionizing manufacturing without replacing human workers
- The flexibility of robotics technology and its potential for reshoring UK manufacturing
- Future skills requirements and training opportunities in the robotics industry
Mark shares insights into how Universal Robots' collaborative robots (cobots) and autonomous mobile robots (AMRs) are being deployed across various industries, from small 3-4 person companies to large manufacturing facilities. The conversation explores applications ranging from machine shops and welding to food packaging, medical device assembly, and even laboratory automation that can run overnight processing blood samples.
A key theme throughout the discussion is how robotics technology can address the UK's skills gap and aging workforce while making manufacturing jobs more attractive to younger generations. Mark explains how robots provide the flexibility that modern manufacturing demands, able to be reprogrammed and redeployed for different tasks as business needs change, unlike traditional hard automation.
The episode also delves into the intersection of AI and robotics, with Mark describing robots as "the physical embodiment of AI." He discusses how AI-assisted programming and vision systems are making robots more autonomous and easier to deploy, while emphasizing the continued need for human skills in both practical installation and advanced software development.
For those interested in entering the robotics field, Mark highlights training opportunities available through Universal Robots' Sheffield facility, including free courses for technical apprentices. The company also provides educational support to colleges and universities across the UK, with plans to expand their presence in UTC colleges.
Connect with Mark Gray on LinkedIn: https://lnkd.in/eDRDYUaf
Connect with Tim Burnett on LinkedIn: https://lnkd.in/ewkjcPvU
If you would like to join Tim on a future episode or sponsor the Test Community Network, get in touch with Tim to schedule a call: https://lnkd.in/e5krHwzA
Events: https://lnkd.in/gngYJ_vZ
Podcasts: https://lnkd.in/gqbFy4Hm
Marketplace: https://lnkd.in/eryjSzut
Subscribe to the Test Community Network here: https://lnkd.in/eEfqjykN
🎙️ New to streaming or looking to level up? Check out StreamYard and get a $10 discount! 😍 https://lnkd.in/eWCFYRVX
Transform your ChatGPT trials into a strategic 2-year AI plan that maintains integrity while driving operational excellence.
This session transforms everyday AI experiments into a strategic advantage. Most senior and AI-forward staff at Awarding Organisations now use AI platforms like OpenAI, Microsoft, and Google on a day-to-day basis. But are they maximising their potential? Are they taking risks? And what comes next in AI adoption?
This session brings together experts in AI, AOs, training, and skills adoption to provide both practical and strategic advice for evaluating AI solutions, building compelling business cases, and developing your adaptive 2-year roadmap.
This session covers:
- Evaluating AI solutions with integrity and compliance at the core
- Building AI literacy across your organisation
- Developing internal expertise while maintaining quality standards
- Creating adaptive roadmaps that scale from individual productivity to organisational transformation
Your expertise is invaluable; AI should amplify it, not replace it. By adopting a 2-year roadmap approach, every organisation can plan its AI journey, helping them remain competitive, manage resources, and stay true to their vision and values.
Key Focus Areas:
- Assessment integrity and fairness in AI adoption
- Regulatory compliance strategies
- Building AI literacy and internal capabilities
- Operational efficiency opportunities
Designed for: Senior leaders, heads of departments, and content specialists seeking practical frameworks to move from experimentation to strategic implementation.
Lizzie Gauntlett - AO Consultant and Assessment Specialist. Former Senior Policy Officer at the Federation of Awarding Bodies with a PhD in academic resilience. Now works independently supporting awarding organisations with policy development, risk management, and AI implementation challenges in assessment and compliance.
Adarsh Sudhindra - AI Technology Entrepreneur. Vice President at Excelsoft Technologies with 15+ years in e-learning. Computer Science graduate with experience at Adobe and IBM. TEDx speaker specialising in AI-powered educational technology and assessment solutions.
Jonathan Jacob - Founder & Lead Instructor. Founder of Northern IT Academy with 20+ years in IT training and project management. Former director at a major UK online training provider, focusing on helping organisations effectively adopt and implement new technologies.
In this week's episode, Tim Burnett explores the latest AI cheating capabilities using Microsoft's Copilot Pro Actions feature. He demonstrates:
- How Copilot Pro's new "Actions" feature can autonomously complete online assessments
- Which question types are most vulnerable to AI infiltration (multiple choice vs matching questions)
- The accessibility of this technology, available for just £19/month with free trial options
Tim provides a live demonstration of an AI agent taking a sample test, achieving 16 out of 17 correct answers whilst operating completely hands-off. He discusses the implications for assessment security, explaining how simple technology costing as little as £12 can be used to share exam screens with AI systems that provide real-time answers through audio feedback or tactile signals.
The episode highlights the urgent need for assessment providers to understand these readily available cheating tools and their potential impact on exam integrity. Tim also touches on his previous work building similar systems using Replit and discusses how retrieval-augmented generation (RAG) models can be enhanced with domain-specific knowledge to improve performance on specialised assessments.
This demonstration follows Tim's recent cheating webinar and serves as a practical exploration of how accessible AI-powered cheating tools have become for the average test-taker.
Connect with Tim Burnett on LinkedIn: https://www.linkedin.com/in/tburnett/
If you would like to join Tim on a future episode or sponsor the Test Community Network, get in touch with Tim to schedule a call: https://calendly.com/educationtech/15min-teams-chat
Events: https://www.testcommunity.network/upcoming-events
Podcasts: https://www.testcommunity.network/podcasts
Marketplace: https://www.testcommunity.network/marketplace
Subscribe to the Test Community Network here: https://www.testcommunity.network
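For readers unfamiliar with the RAG idea mentioned above, a minimal sketch may help: domain-specific passages are retrieved for a question and prepended to the model prompt. This is not Tim's actual system; the toy corpus, function names, and word-overlap scoring are invented purely for illustration (real systems typically use embedding-based retrieval).

```python
# Minimal sketch of the retrieval step in a RAG pipeline (illustrative only).
# Rank domain-specific passages by word overlap with the question, then
# build a prompt that puts the best matches in front of the question.

def tokenize(text):
    """Lowercase bag of words; a stand-in for real embeddings."""
    return set(text.lower().split())

def retrieve(question, corpus, k=2):
    """Return the k passages sharing the most words with the question."""
    q = tokenize(question)
    ranked = sorted(corpus, key=lambda p: len(q & tokenize(p)), reverse=True)
    return ranked[:k]

def build_prompt(question, corpus):
    """Prepend retrieved domain knowledge before the question."""
    context = "\n".join(retrieve(question, corpus))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

# Hypothetical domain corpus for a specialised assessment.
corpus = [
    "An OSCE is a practical exam used in medical education.",
    "Item banking stores exam questions with metadata.",
    "Cobots are collaborative robots used in manufacturing.",
]

prompt = build_prompt("What is an OSCE exam?", corpus)
```

The point of the pattern is that the generating model never needs to have memorised the specialist material; retrieval injects it at query time, which is why domain-specific knowledge can lift performance on niche assessments.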
In this week's episode, Tim Burnett chats with Kristina Preston, Head of Awarding and Skills Appointments at Peridot Partners. Kristina specialises in executive search for awarding organisations, assessment bodies, and skills providers across the not-for-profit education sector. They discuss:
- The current impact of AI on recruitment within awarding organisations and whether it poses an immediate threat to jobs in the sector
- How AI is being used by both candidates and employers in the recruitment process, including both positive applications and concerning misuse
- The challenge of maintaining entry-level pathways and career development opportunities as automation potentially eliminates traditional starting roles
Kristina shares insights on the "sticky" nature of careers in the awarding sector, driven by strong social purpose and mission-focused work. She explains how AI is currently being used as a tool to enhance recruitment processes rather than replace human judgement, which is particularly important in a sector where cultural fit and lived experience matter significantly. The conversation explores the growing demand for trustee recruitment and how external expertise drawn from other industry sectors is bringing fresh commercial and digital insights to awarding organisation boards.
A key theme emerges around the human-centred nature of the awarding sector: organisations driven by social mobility, education access, and skills development that rely heavily on values-based leadership and mission-driven professionals. While AI may streamline certain processes, the core purpose of these organisations remains deeply human, suggesting the sector may be more resilient to AI displacement than others.
Connect with Kristina Preston on LinkedIn: https://www.linkedin.com/in/kristinaprestonperidot/
Connect with Tim Burnett on LinkedIn: https://www.linkedin.com/in/tburnett/
If you would like to join Tim on a future episode or sponsor the Test Community Network, get in touch with Tim to schedule a call: https://calendly.com/educationtech/15min-teams-chat
Events: https://www.testcommunity.network/upcoming-events
Podcasts: https://www.testcommunity.network/podcasts
Marketplace: https://www.testcommunity.network/marketplace
Subscribe to the Test Community Network here: https://www.testcommunity.network
In this week's episode, Tim Burnett chats with Romana Moss (eAA Board Member & Royal College of Emergency Medicine), Graham Hudson (eAA Chair), Gareth Hopkins (Surpass Assessment and volunteer), Shoeb Mozammel (Vretta), and Jeremy Carter (Sentira XR and 2025 three-time award winner). They discuss:
- The transformative role of formative assessment in supporting learning journeys and strengthening understanding through AI enhancements
- The record-breaking success of the eAssessment Association conference, with participants from 18 countries and unprecedented energy and engagement
- The emerging accessibility focus in digital assessment and the tools making tests available to everyone
Tim provides a review of the three-day e-Assessment Association conference and awards in London, highlighting the national roundtable, AI symposium, main conference sessions, and the interactive space where 24 applications were built in just 25 minutes using no-code development tools. The episode captures insights from multiple award winners and conference organisers about the future of digital assessment.
Romana Moss emphasizes how formative assessment can "pinpoint weak points" and provide "deep understanding into why people aren't being as successful as they would like to be," particularly when enhanced with AI. Graham Hudson and Gareth Hopkins reflect on the extraordinary energy of this year's conference and the importance of accessibility tools in making assessments available to everyone. Shoeb Mozammel from Vretta shares his experience as an international participant, praising the intimate, family-like atmosphere that sets this conference apart. Jeremy Carter, who made history by winning three awards in one night, offers practical advice for future award submissions, emphasising the importance of commitment, careful analysis of judging criteria, and making submissions human rather than salesy.
The episode also showcases Tim's innovative interactive session, where participants used AI-powered no-code development tools to create functional assessment applications, including an OSCE examination platform with real-time transcription capabilities, demonstrating the rapid evolution of assessment technology.
Finally, Tim shares a clip of Gateway Qualifications receiving their award!
Connect with Romana Moss on LinkedIn: https://www.linkedin.com/in/romana-moss-6113a619/
Connect with Graham Hudson on LinkedIn: https://www.linkedin.com/in/grahamhudson/
Connect with Gareth Hopkins on LinkedIn: https://www.linkedin.com/in/garethahopkins/
Connect with Shoeb Mozammel on LinkedIn: https://www.linkedin.com/in/shoeb-mozammel-973b9b1bb/
Connect with Jeremy Carter on LinkedIn: https://www.linkedin.com/in/jeremycarter72/
Connect with Tim Burnett on LinkedIn: https://www.linkedin.com/in/tburnett/
If you would like to join Tim on a future episode or sponsor the Test Community Network, get in touch with Tim to schedule a call: https://calendly.com/educationtech/15min-teams-chat
Events: https://www.testcommunity.network/upcoming-events
Podcasts: https://www.testcommunity.network/podcasts
Marketplace: https://www.testcommunity.network/marketplace
Subscribe to the Test Community Network here: https://www.testcommunity.network
In this week's episode, Tim Burnett answers a listener question about where organisations should start with AI implementation. He discusses:
- Establishing AI policies and creating red, green, and blue team structures for AI adoption
- Recommended starting platforms including NotebookLM, Microsoft Copilot, Google Gemini, and ChatGPT Teams
- Data security considerations and the importance of paid versus free AI tools
Tim provides comprehensive guidance on beginning your organisation's AI journey, starting with policy development and team structure. He demonstrates practical tools like NotebookLM for workforce development and research, then explores enterprise solutions based on your existing tech stack.
The episode includes hands-on demonstrations of deep research capabilities and no-code app creation tools like Lovable and Replit, and emphasises crucial data security considerations when choosing between free and paid AI platforms.
Tim also highlights upcoming events, particularly the eAssessment Association's International Conference and Awards taking place in London, featuring an AI Symposium and the main conference with awards dinner: https://e-assessment.com/conference
Connect with Tim Burnett on LinkedIn: https://www.linkedin.com/in/tburnett/
If you would like to join Tim on a future episode or sponsor the Test Community Network, get in touch with Tim to schedule a call: https://calendly.com/educationtech/15min-teams-chat
Events: https://www.testcommunity.network/upcoming-events
Podcasts: https://www.testcommunity.network/podcasts
Marketplace: https://www.testcommunity.network/marketplace
Subscribe to the Test Community Network here: https://www.testcommunity.network
In this week's episode (Part 2 of 2), Tim Burnett continues his conversation with Rita Bateson, an assessment writer, curriculum developer, and Director at Eblana Learning. Rita has extensive experience working with the International Baccalaureate (IB) and specialises in the ethical evolution of assessment technology. They discuss:Cognitive offloading: How AI might be making us mentally lazy and changing our thinking habitsThe politeness paradox: Why Sam Altman's comment about "please and thank you" costing millions reveals deeper social engineering implicationsGenerational impacts: How children born into the AI world face unique challenges with privacy, mistakes, and constant surveillanceRita explores how AI is unconsciously changing human behaviour, from writing styles to social interactions. She shares insights about students reaching for AI before asking peers, the phenomenon of "memory offloading" (similar to how we stopped remembering phone numbers), and the subtle ways AI is reshaping how we think and communicate. 
The conversation reveals surprising findings about AI's ability to understand human dynamics and relationships, while addressing concerns about the first generation that "truly won't have the ability to be forgotten."
If you missed Part 1, catch up on the environmental impact of AI-powered assessments: https://www.linkedin.com/events/7328162968389435393/comments
Learn more about Rita's work at the e-Assessment Association's 2025 conference in London: https://conference2025.e-assessment.com/2025/en/page/ai-symposium-2025
Connect with Rita Bateson on LinkedIn: https://www.linkedin.com/in/rita-bateson-frsa-8741b868/
In this first part of a two-part episode, Tim Burnett chats with Rita Bateson, an assessment writer, curriculum developer, and Director at Eblana Learning. Rita has extensive experience working with the International Baccalaureate (IB) and specialises in the ethical evolution of assessment, particularly focusing on sustainability and environmental impact. In Part 1, they discuss:
- Dublin's energy crisis and the impact of data centre energy consumption on Ireland's electrical grid
- The hidden carbon cost of AI, which has already surpassed aviation in environmental impact
- Why we can't ignore the inconvenient truth about AI's impact on the environment
Rita explains how the assessment industry is sleepwalking into a massive environmental crisis. Whilst everyone pushes to move away from paper-based testing, the energy consumption of AI-powered assessments creates invisible but enormous carbon footprints. She explores why a single AI prompt can use anywhere from 1 to 15 watt-hours of energy, and warns about the scaling factor as teachers and students multiply that usage exponentially.
The conversation reveals why this is the "inconvenient truth" the education sector isn't discussing, despite our commitment to the UN Sustainable Development Goals.
Rita shares insights from Dublin, where AI energy demands have led to discussions about reopening nuclear plants and investing £800 billion in data centres.
Stay tuned for Part 2, where Rita and Tim explore whether AI could be changing how we think and socialise in ways we haven't realised.
Learn more about Rita's work at the e-Assessment Association's 2025 conference in London: https://conference2025.e-assessment.com/2025/en/page/ai-symposium-2025
Connect with Rita Bateson on LinkedIn: https://www.linkedin.com/in/rita-bateson-frsa-8741b868/
In this week's episode, Tim Burnett chats with Allison Mulford, solo practitioner at Mulford and Associates. With extensive experience in the testing community since 2014, Allison now specialises in privacy, data protection, AI governance, and corporate legal services. They discuss:
- The shifting AI regulatory landscape in the US, EU, and globally
- Practical approaches to AI governance for organisations
- Copyright considerations for AI-generated content
Allison provides valuable insights into how the US approach to AI regulation has shifted under the new administration, moving from a focus on responsible and ethical AI toward prioritising innovation over regulatory oversight. While this happens at the federal level, many states are implementing their own AI regulations, particularly around employment and consumer health data protection.
In contrast, the EU led with the comprehensive EU AI Act, similar to their approach with GDPR. However, Allison notes a recent trend where the EU appears to be "backtracking" by giving member states more control over implementation and penalties, potentially softening some of the stricter requirements to balance innovation and governance.
On the topic of copyright for AI-generated content, Allison clarifies that human intervention is essential for copyright protection.
She explains that what matters is proving human decision-making in the creative process and maintaining a proper audit trail, with disclosure of AI use now required in copyright applications.
More information on this topic can be found here: https://www.linkedin.com/pulse/beyond-compliance-unraveling-intersection-ethics-law-ai-tim-burnett--9cxee/
Connect with Allison Mulford on LinkedIn: https://www.linkedin.com/in/allison-mulford-97038111/
In this week's episode, Tim Burnett chats with Bradley Emi, CTO of Pangram Labs. Bradley is a Stanford AI researcher who previously worked as an AI scientist at Tesla before co-founding Pangram Labs, a company specialising in AI detection technology. They discuss:
- How Pangram Labs' AI detection technology identifies AI-generated content with a remarkably low false positive rate of 0.01%
- The unique "active learning" approach that helps their system achieve greater accuracy than competitors
- The challenges of creating equitable AI detection that works across multiple languages and for non-native English speakers
Bradley shares his background as a Stanford AI researcher who, along with his co-founder (also from Stanford), created Pangram Labs after working at major tech companies. Their mission is to ensure transparency around AI-generated content, particularly addressing academic integrity issues in education.
Unlike many academic integrity tools that combine plagiarism detection with AI detection, Pangram Labs focuses exclusively on identifying AI-generated text. Their deep learning model, trained on hundreds of millions of examples of human and AI text, achieves impressive accuracy with a false positive rate of just 0.01% and a false negative rate of 0.1%.
Bradley explains how their system works by identifying the subtle patterns and stylistic choices that distinguish AI writing from human writing. He notes that AI systems often use distinctive language patterns and word choices (like frequent use of em dashes or words such as "tapestry" and "delve") that result from their training process.
What sets Pangram Labs apart is their "active learning" approach, where their system actively searches for difficult examples that it struggles to classify correctly, then incorporates these into further training.
This has allowed them to reduce error rates from around 1-2% (typical for competitors) to about 0.1%.
Bradley states that, unlike many AI detectors that struggle with non-native English writing, Pangram's solution avoids using metrics like "perplexity" and "burstiness" that tend to misclassify ESL (English as a Second Language) text as AI-generated. Instead, their approach looks for patterns across diverse human language samples, making it more equitable for all writers.
The platform provides different levels of analysis depending on text length: for shorter submissions (under 400 words), it gives a simple AI-or-not determination with a confidence percentage, while for longer texts it offers paragraph-by-paragraph analysis to identify mixed content where students might have used AI for only portions of their work.
This episode provides valuable insights for educators, assessment professionals, and anyone interested in understanding the current state of AI detection technology. Particularly noteworthy is Bradley's offer of unlimited free access for academics who wish to conduct research on their tool, reflecting an open approach to scientific validation that benefits the wider educational community as we navigate the evolving landscape of AI in education.
Now run these show notes through the platform to see if they were written by AI :)
Bonus AO Forum Presentation: https://www.youtube.com/watch?v=MemOFGK1-0A
Connect with Bradley Emi on LinkedIn: https://www.linkedin.com/in/bradleyemi/
In this week's episode, Tim Burnett chats with Jim Holm, an education technology expert with nearly 30 years of experience who now works at Stukent. They discuss:
- How simternships provide students with practical, hands-on experience that bridges the gap between classroom learning and real-world job skills
- The impressive results of simternships in reducing drop-out rates and increasing student engagement
- The future of performance-based assessment in the age of AI
Jim Holm explains how Stukent's simternships offer technology-based internships that complement traditional coursework. These simulated experiences help students understand what their first job experiences will be like, focussing on practical skills rather than just theoretical knowledge. Simternships teach students the tools they'll need, how to communicate effectively in a workplace, and how to identify and work toward professional objectives.
Using the example of an accounting simternship, Jim describes how students might spend 24-40 hours working through realistic scenarios where they make journal entries, track products, and coordinate communication between different teams, all in a low-stress environment that makes learning engaging. Unlike many business simulations that focus on high-level executive decision-making, simternships prepare students for entry-level positions they're likely to encounter after graduation.
Currently, Stukent offers around 40 different simternships, primarily focussed on business topics, from digital marketing to social media and accounting. While the technology could potentially expand to other fields like chemical engineering or nursing, the company currently concentrates on business-related courses.
Looking to the future, Jim believes that as AI continues to advance, proving what you can do rather than just what you know will become increasingly important in education and assessment.
He envisions simternships evolving beyond learning tools to become sophisticated measurement instruments that evaluate how well individuals can synthesise information, communicate effectively, and make business decisions.
Stukent currently serves over 1,000 universities worldwide, with about 80% based in the US and the remainder in English-speaking countries like the UK, Canada, and Australia. While they're primarily focussed on higher education, they're expanding into high school-level career and technical education programmes.
Connect with Jim Holm on LinkedIn: https://www.linkedin.com/in/jimholm/
Join me this week for a look at the online and hybrid events taking place over the next week in the assessment community, plus an insight into my interviews with Jim Holm (simternships) and Bradley Emi (AI detection tools).
In this week's episode, Tim Burnett chats with Tom Ashmore, Digital Assessment Designer, and Elliot Spence, Senior Digital Assessment Coordinator at Birmingham City University, about their award-winning hybrid assessment model and initiatives to boost digital literacy amongst students. They discuss:
- BCU's innovative hybrid assessment model, which allows students to choose between on-campus and remote exams
- The Digital Kickstart programme to enhance digital literacy skills for new students
- The challenges of digital literacy among today's students and how BCU is addressing them
Tom and Elliot share insights about their award-winning hybrid assessment model, which gives students the flexibility to take exams either on campus or remotely using their own devices. This approach has proven particularly valuable for BCU's diverse student population, many of whom are commuters. The model ensures assessment quality remains consistent regardless of location, with remote students proctored via live recording and on-campus students supervised traditionally.
The conversation also explores their Digital Kickstart initiative, designed to boost digital literacy skills for new students. Contrary to assumptions about younger generations being digitally savvy, they've found many students struggle with basic computer operations and software like Microsoft Office. These skills gaps cut across age groups and backgrounds, including school leavers, mature students returning to education, and international students. The team offers tailored sessions ranging from 15 minutes to an hour to help students develop essential digital competencies.
They also discuss how their digital exam platform has helped address concerns about AI in assessments, with the recording element making AI use during exams easily detectable. This has increased confidence in exams, while concerns about AI in coursework have grown, leading some courses to shift from coursework to exams.
Despite nearly doubling their exam submissions in recent years (from 15,000 to approximately 20,000), their team of five has managed the increased workload efficiently, demonstrating the scalability of digital assessment systems.
Connect with Tom Ashmore on LinkedIn: https://www.linkedin.com/in/tom-ashmore-1a18a5169/
Connect with Elliot Spence on LinkedIn: https://www.linkedin.com/in/elliot-spence-330964181/




