BldgAltEntf E066: Wir kennen Sie ja!
Update: 2025-10-15
Description
We recorded this episode on 2025-10-10.
News+Alt+Entf
News+O
- O gave an introductory H5P workshop at the OERcamp in Hannover (documentation)
- As always, he brings a whole batch of H5P news:
- H5P-Happening in Hamburg
- Catharsis and the WordPress plugin
- The WP Local-User-Data plugin
- Two new content types: Chat Simulator & Idea Board
- … and a few rumors on top
News+A
- A was kept busy by the MoodleMOOT DACH in Lübeck:
- 296 participants from 20 countries, 80 BarCamp sessions, 22 DevCamp projects
- Her own session with colleague Tina on connecting DLC and mintSH via LTI Advantage
- Used SummarAIzer for the first time and reviewed the results (based on a solution by Andreas Sexauer)
- There were laser-cut items: awards, display stands, signposts
- Graphics for the BarCamp session planning
- Summer School "Feedback neu gedacht" in Hamburg
- Documentation in the DLC course
- Podcast Update Hochschule
- Methodisch Inkorrekt Live
Paper+Alt+Entf
Paper+O: Enough is enough!
Parker, Michael J.; Bunch, Matt; Pike, Andrew:
How Much is Enough? Formative Assessment Dynamics.
In: Journal of Learning Analytics, Vol. 12, No. 2, pp. 196–210, 2025, ISSN: 1929-7750, DOI: 10.18608/jla.2025.8753.
While the educational value of formative assessment is widely acknowledged, the precise amount needed to effectively predict student performance on summative assessments remains unclear. This study investigates the relationship between intermediate formative assessment performance and final exam scores, addressing the critical question of how much assessment is needed for accurate prediction. Using a large dataset encompassing over 20,000 student enrollments across 127 course runs of 15 online biomedical sciences courses, we examined the correlation between intermediate assessment scores and final exam performance. Our results show that after completing about 40% of the formative assessments in a course, student scores demonstrate a strong correlation (Pearson r > 0.7) with their final exam scores. The correlation after taking additional formative assessments reaches a maximum of approximately 0.75. This finding was consistent across different course types and lengths, suggesting that the relative amount of assessment taken, rather than the absolute number, is key. Surprisingly, we found that random sampling of assessments was even more predictive than chronological sampling, suggesting that the proportion of questions used, relative to the total number of assessment questions, is more important than their specific sequence. These findings contribute to a deeper understanding of the predictive capabilities of formative assessment, and enable educators to identify at-risk students earlier, optimize assessment design, and develop more efficient and targeted interventions.
How many of a course's practice exercises does a student actually have to work through before their final grade can be predicted automatically with high accuracy?
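The analysis described in the abstract boils down to a correlation study: take the mean score over some fraction of a course's formative assessments (sampled either chronologically or at random) and compute the Pearson correlation with the final exam score. The Python sketch below, using purely synthetic data, only illustrates that setup; it is not the authors' code, dataset, or exact methodology.

# A minimal sketch (not the authors' code) of the kind of analysis the paper
# describes: correlate students' mean score on a fraction of the formative
# assessments with their final exam score, comparing chronological vs. random
# sampling of that fraction. All data below is synthetic.
import numpy as np

rng = np.random.default_rng(42)
n_students, n_assessments = 500, 30

# A latent "ability" drives both formative and final exam scores (plus noise).
ability = rng.normal(0.7, 0.15, n_students)
formative = np.clip(ability[:, None] + rng.normal(0, 0.15, (n_students, n_assessments)), 0, 1)
final_exam = np.clip(ability + rng.normal(0, 0.1, n_students), 0, 1)

def correlation_at_fraction(fraction, how="chronological"):
    """Pearson r between the mean score on a sampled fraction of the
    formative assessments and the final exam score."""
    k = max(1, int(round(fraction * n_assessments)))
    if how == "chronological":
        idx = np.arange(k)                                      # first k assessments
    else:
        idx = rng.choice(n_assessments, size=k, replace=False)  # random k assessments
    partial_mean = formative[:, idx].mean(axis=1)
    return float(np.corrcoef(partial_mean, final_exam)[0, 1])

for frac in (0.2, 0.4, 0.6, 1.0):
    print(f"{frac:>4.0%}  chronological r={correlation_at_fraction(frac):.2f}  "
          f"random r={correlation_at_fraction(frac, 'random'):.2f}")

With synthetic data of this shape, the correlation already plateaus after a modest fraction of the assessments, which is the qualitative effect the paper reports; the actual thresholds (about 40% for r > 0.7) come from the authors' real course data, not from this toy example.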
Paper+A: Getting it wrong pays off
Sinha, Tanmay; Kapur, Manu:
When Problem Solving Followed by Instruction Works: Evidence for Productive Failure.
In: Review of Educational Research, Vol. 91, No. 5, pp. 761–798, 2021, ISSN: 1935-1046.