Evidence-Based Evaluation with Rob Brinkerhoff and Daniela Schroeter

Updated: 2021-08-17

Description

[Image: Leading Learning Podcast interviewees Rob Brinkerhoff and Daniela Schroeter]




Learning science uses evidence-based practice to support learning, and evaluation plays a critical role in providing that evidence by revealing learning’s true impact.





To help us unpack how evaluations should inform decisions about learning, we spoke with Dr. Robert Brinkerhoff and Dr. Daniela Schroeter, co-directors of the Brinkerhoff Evaluation Institute (BEI).





Rob is an internationally recognized expert with four decades of experience in evaluation and learning effectiveness, and he’s the author of several books, including The Success Case Method and Telling Training’s Story. He’s also the creator of the Success Case Method, a highly regarded and carefully crafted impact evaluation approach for determining how well educational and training programs work.





Daniela has a PhD in interdisciplinary evaluation and has spent the past 15 years providing evaluation and capacity-building services to a wide range of private, public, and nonprofit organizations around the globe. In addition to co-directing BEI, Daniela is an associate professor at Western Michigan University.





In this sixth installment in our seven-part series on learning science’s role in a learning business, we talk with Rob and Daniela about how to effectively leverage evaluations to maximize outcomes from learning. We also discuss the Success Case Method, the value of using evidence-based stories to demonstrate the impact of an offering, and why evaluation must be actionable.





To tune in, listen below. To catch all future episodes, subscribe via RSS, Apple Podcasts, Spotify, Stitcher Radio, iHeartRadio, PodBean, or any podcatcher service you may use (e.g., Overcast). And if you like the podcast, be sure to give it a tweet.





Listen to the Show









Access the Transcript





Download a PDF transcript of this episode’s audio.





Read the Show Notes





[00:19] Intro and background info about Rob and Daniela.





Flaws of Traditional Evaluation Methods





[02:13] What do you see as the primary flaws or shortcomings of traditional evaluation methods?





There’s currently an emphasis on evidence-based practice that focuses on quantitative outcome data and sophisticated methodologies. These approaches are challenging because they’re often not practical and don’t allow you to adapt a learning intervention while you’re still implementing it.





There’s also too much emphasis on comparison groups and singular outcomes rather than on the intervention as a whole and the unique environment of each individual learner. Evaluation shouldn’t focus only on the end point; it should be used from the beginning to continuously improve the program and its impacts, and to leverage learning to maximize its outcomes.





Success Case Method





[04:03] Can you briefly introduce the Success Case Method?





Rob describes how he got the idea for the Success Case Method when he realized the need for an evaluation approach that focuses on how successful something is when it’s actually used, not just on average. This matters because the average always underestimates the good.





The Success Case Method identifies the most successful users, as well as the not-so-successful users, of the initiative being evaluated. It then investigates those cases to identify what needs to be done so that more people perform as well as the few best.





[Image: The Success Case Method infographic (from www.monicawabuke.com)]
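To make the screening step concrete, here’s a minimal sketch of how that extreme-case sampling might look in code. It isn’t from the episode or from BEI; the survey data, participant names, and the 1–5 application scale are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SurveyResponse:
    participant: str
    application_score: int  # hypothetical 1-5 self-rating: 1 = never applied the training, 5 = applied it with strong results

def screen_cases(responses, n=2):
    """Rank respondents and return the n most and n least successful
    cases for follow-up interviews (the method's screening step)."""
    ranked = sorted(responses, key=lambda r: r.application_score, reverse=True)
    return ranked[:n], ranked[-n:]

responses = [
    SurveyResponse("A", 5), SurveyResponse("B", 1), SurveyResponse("C", 4),
    SurveyResponse("D", 2), SurveyResponse("E", 5), SurveyResponse("F", 1),
]
successes, non_successes = screen_cases(responses)
print([r.participant for r in successes])      # ['A', 'E']: interview to document impact
print([r.participant for r in non_successes])  # ['B', 'F']: interview to surface barriers
```

The point is the sampling strategy: interviews with both groups surface verifiable impact stories from the best cases and the barriers holding back the worst, rather than an average that describes neither.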




[08:13] Would you talk a little bit about some of the purposes that the Success Case Method can be used for?





The Success Case Method can be used to:





  • Improve learning interventions and maximize the outcomes from the learning.
  • Pilot programs to find out what works well and for whom.
  • Market to downstream audiences. Once we know what is working and for whom, we can leverage that information to push the learning to new audiences.
  • Help program deliverers tell the story. Often learning providers want to share outcomes that result from a learning experience, and a success case story provides information that can be shared.
  • Teach participants and their supervisors about the value of the learning that they’re participating in.




Too many times we’ve seen evaluation studies that are hard to interpret, hard to understand. They use a lot of statistics and a lot of jargon. And what really compels people is stories…. That sort of evidence really compels action and drives emotional response and buy-in.

Rob Brinkerhoff




Rob stresses the importance of stories. There are fictional stories, and there are evidence-based stories. We need to look for the truth of a program. Almost always there are successes, and it’s important to leverage those.





Value in Impact Evaluation: Past and Future





[11:46] Do you think impact evaluation should always have a future-facing aspect, where you’re looking to improve? Or do you see value in a purely historical look at a particular course’s impact in the past?





There’s value in summative, endpoint evaluation and in being able to provide evidence that a particular program is working and making a difference. We can learn a lot from history: understanding which programs didn’t work, and why, is valuable. But even then there’s a future orientation, because that information helps you defend why you want to continue with a program.





All evaluation should be used for learning at one point or another. While our method directly tries to focus on current learning and what we can learn now for future learning interventions…there’s also a longer-term effect in doing historical evaluation because, without looking back at the past, we cannot innovate in the future.

Daniela Schroeter




[13:56] How do you respond to people who get tripped up trying to show direct causation between an educational offering and specific results?





Learning is never the sole cause for anything other than paying the bill for having participated in it. Any change in human performance or behavior is driven by a complex nexus of causes. It’s not important to show that the training is the sole cause of an improvement or change, but it’s critical to show that the training made a worthy and necessary contribution to an individual’s success.





As a methodology, the Success Case Method gets away from looking at the average. Instead, it looks at the outliers: the best an intervention can do when it works well, and why it doesn’t work for people at the very bottom.
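As a quick illustration of why the average can mislead here, consider this sketch with invented numbers (not data from the episode):

```python
# Hypothetical per-participant gains after a training: half the group
# applied what they learned with strong results, half applied nothing.
gains = [0, 0, 0, 0, 0, 9000, 9500, 10000, 10500, 11000]

mean_gain = sum(gains) / len(gains)          # 5000.0 -- describes no one
best_case = max(gains)                       # 11000 -- what's possible when it works
non_users = sum(1 for g in gains if g == 0)  # 5 -- where improvement effort should go

print(mean_gain, best_case, non_users)
```

The mean describes no one in the group; the extremes show both what the program can achieve and how much untapped value remains.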





Too many valuable babies get thrown out in the bathwater of statistical reporting. We want to be…

