Teacher Ollie's Takeaways

Author: Ollie Lovell, a secondary school teacher and lover of learning who is passionate about all things education. @ollie_lovell


Description

A weekly podcast where Teacher Ollie summarises his key takeaways from twitter, blogs, research papers, conversations, and his classroom.
5 Episodes
Find all other episodes of Teacher Ollie’s Takeaways here, find it on iTunes here, or on your favourite podcasting app by searching ‘Teacher Ollie’s Takeaways’. You may also like to check out Ollie’s other podcast, the Education Research Reading Room, here.

Show Notes

Why minimal guidance during instruction doesn’t work

Ref: Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86.

The arguments for and against minimally guided instruction

Assertion: The most recent version of instruction with minimal guidance comes from constructivism (e.g., Steffe & Gale, 1995), which appears to have been derived from observations that knowledge is constructed by learners, and so (a) they need to have the opportunity to construct by being presented with goals and minimal information, and (b) learning is idiosyncratic, so a common instructional format or strategy is ineffective.

Response: “The constructivist description of learning is accurate, but the instructional consequences suggested by constructivists do not necessarily follow.” Learners ultimately have to construct a mental schema of the information; that is what we are trying to furnish them with, and it turns out that the less of a schema we give them (as with minimal guidance), the less complete a schema they end up with. Essentially, if we give them the full picture, it will better help them to construct the full picture!

Assertion: Another consequence of attempts to implement constructivist theory is a shift of emphasis away from teaching a discipline as a body of knowledge toward an exclusive emphasis on learning a discipline by experiencing the processes and procedures of the discipline (Handelsman et al., 2004; Hodson, 1988).
This change in focus was accompanied by an assumption, shared by many leading educators and discipline specialists, that knowledge can best be learned, or only learned, through experience based primarily on the procedures of the discipline. This point of view led to a commitment by educators to extensive practical or project work, and to the rejection of instruction based on the facts, laws, principles and theories that make up a discipline’s content, accompanied by the use of discovery and inquiry methods of instruction.

Response: …it may be a fundamental error to assume that the pedagogic content of the learning experience is identical to the methods and processes (i.e., the epistemology) of the discipline being studied, and a mistake to assume that instruction should exclusively focus on methods and processes (see Shulman, 1986; Shulman & Hutchings, 1999). This gets to the heart of the distinction between experts and novices. Experts and novices simply don’t learn the same way; they don’t have the same background knowledge at their disposal. By teaching novices in the way that experts should be taught, we’re really doing them a disservice, overloading working memories, and simply being ineffective teachers.

Drilling down to the evidence: None of the preceding arguments and theorizing would be important if there was a clear body of research, using controlled experiments, indicating that unguided or minimally guided instruction was more effective than guided instruction. Mayer (2004) recently reviewed evidence from studies conducted from 1950 to the late 1980s comparing pure discovery learning, defined as unguided, problem-based instruction, with guided forms of instruction. He suggested that in each decade since the mid-1950s, when empirical studies provided solid evidence that the then popular unguided approach did not work, a similar approach popped up under a different name, with the cycle then repeating itself.
Each new set of advocates for unguided approaches seemed either unaware of or uninterested in previous evidence that unguided approaches had not been validated. This pattern produced discovery learning, which gave way to experiential learning, which gave way to problem-based and inquiry learning, which now gives way to constructivist instructional techniques. Mayer (2004) concluded that the “debate about discovery has been replayed many times in education but each time, the evidence has favored a guided approach to learning” (p. 18).

Current Research Supporting Direct Guidance

The list is too long; here are some excerpts. Aulls (2002), who observed a number of teachers as they implemented constructivist activities… He described the “scaffolding” that the most effective teachers introduced when students failed to make learning progress in a discovery setting. He reported that the teacher whose students achieved all of their learning goals spent a great deal of time in instructional interactions with students. Stronger evidence from well-designed, controlled experimental studies also supports direct instructional guidance (e.g., see Moreno, 2004; Tuovinen & Sweller, 1999). Klahr and Nigam (2004) tested transfer following discovery learning and found that those relatively few students who learned via discovery ‘showed no signs of superior quality of learning’.

Re-visiting Sweller’s ‘Story of a Research Program’

From last week: the goal free effect, worked example effect, and split attention effect. My post from this week on trying out the goal free effect in my classroom. See the full paper here. David Geary provided the relevant theoretical constructs (Geary, 2012). He described two categories of knowledge: biologically primary knowledge, which we have evolved to acquire and so learn effortlessly and unconsciously, and biologically secondary knowledge, which we need for cultural reasons.
Examples of primary knowledge are learning to listen and speak a first language, while virtually everything learned in educational institutions provides an example of secondary knowledge. We invented schools in order to provide biologically secondary knowledge. (pg. 11)

For many years our field had been faced with arguments along the following lines. Look at the ease with which people learn outside of class and the difficulty they have learning in class. They can accomplish objectively complex tasks such as learning to listen and speak, to recognise faces, or to interact with each other, with consummate ease. In contrast, look at how relatively difficult it is for students to learn to read and write, learn mathematics or learn any of the other subjects taught in class. The key, the argument went, was to make learning in class more similar to learning outside of class. If we made learning in class similar to learning outside of class, it would be just as natural and easy. How might we model learning in class on learning outside of class? The argument was obvious. We should allow learners to discover knowledge for themselves without explicit teaching. We should not present information to learners – it was called “knowledge transmission” – because that is an unnatural, perhaps impossible, way of learning. We cannot transmit knowledge to learners because they have to construct it themselves. All we can do is organize the conditions that will facilitate knowledge construction and then leave it to students to construct their version of reality themselves. The argument was plausible and swept the education world. The argument had one flaw: it was impossible to develop a body of empirical literature supporting it using properly constructed, randomized, controlled trials. The worked example effect demonstrated clearly that showing learners how to do something was far better than having them work it out themselves.
Of course, with the advantage of hindsight provided by Geary’s distinction between biologically primary and secondary knowledge, it is obvious where the problem lies. The difference in ease of learning between class-based and non-class-based topics had nothing to do with differences in how they were taught and everything to do with differences in the nature of the topics. If class-based topics really could be learned as easily as non-class-based topics, we would never have bothered including them in a curriculum, since they would be learned perfectly well without ever being mentioned in educational institutions. If children are not explicitly taught to read and write in school, most of them will not learn to read and write. In contrast, they will learn to listen and speak without ever going to school.

Re-visit Heather Hill

I asked: Dylan Wiliam quotes you and says ‘Heather Hill’s – http://hvrd.me/TtXcYh – work at Harvard suggested that a teacher would need to be observed teaching 5 different classes, with every observation made by 6 independent observers, to reduce the role of chance enough to reliably judge a teacher.’ Heather replied:

Thanks for your question about how many observations are necessary. It really depends upon the purpose for use. 1. If the use is teacher professional development: I wouldn’t worry too much about score reliability if the observations are used for informal/growth purposes. It’s much more valuable to have teachers and observers actually processing the instruction they are seeing, and then talking about it, than to be spending their time worrying about the “right” score for a lesson.
That principle is actually the basis for our own coaching program, which we built around our observation instrument (the MQI): http://mqicoaching.cepr.harvard.edu The goal is to have teachers learn the MQI (though any instrument would do), then analyze their own instruction vis-a-vis the MQI, and plan for improvement by using the upper MQI score points as targets. So, for instance, if a teacher concludes that she is a “low” for student engagement, she then plans with her coach how to become a “mid” on this item. The coach serves as a therapist of sorts, giving teachers tools, cheering her on, and making sure she stays on course.
Find all other episodes of Teacher Ollie’s Takeaways here, find it on iTunes here, or on your favourite podcasting app by searching ‘Teacher Ollie’s Takeaways’. You may also like to check out Ollie’s other podcast, the Education Research Reading Room, here.

Show Notes

Cognitive Load Theory, John Sweller

Edit: See a blog post of mine on trying out some CLT-informed practices in my classroom here.

I’ve come to the conclusion Sweller’s Cognitive Load Theory is the single most important thing for teachers to know https://t.co/MkJJLruR8g — Dylan Wiliam (@dylanwiliam) January 26, 2017

Wiliam then posted a link to Sweller’s article entitled ‘Story of a Research Program’. The following excerpts are from that article. It starts off biographically: I was born in 1946 in Poland to parents who, apart from my older sister, were their families’ sole survivors of the Holocaust. With touches of dry humour… At school, I began as a mediocre student who slowly deteriorated to the status of a very poor student by the time I arrived at the University of Adelaide… Initially, I enrolled in an undergraduate dentistry course but never managed to advance beyond the first year. While I am sure that was a relief to the Dental Faculty, it also should be a relief to Australian dental patients. Given the physical proximity of the teeth and brain, I decided next to try my luck at psychology. It was a good choice because my grades immediately shot up from appalling back to mediocre, where they had been earlier in my academic career. I decided I wanted to be an academic. Sweller eventually ended up at UNSW. Then he details the seminal experiment: After several nondescript experiments, I saw some results that I thought might be important. I, along with research students Bob Mawer and Wally Howe, was running an experiment on problem solving, testing undergraduate students (Sweller, Mawer, & Howe, 1982).
The problems required students to transform a given number into a goal number, where the only two moves allowed were multiplying by 3 or subtracting 29. Each problem had only one possible solution, and that solution required an alternation of multiplying by 3 and subtracting 29 a specific number of times. For example, a given and goal number might require a 2-step solution consisting of the single sequence × 3, − 29 to transform the given number into the goal number. Other, more difficult problems would require the same sequence of two steps repeated a variable number of times. My undergraduates found these problems relatively easy to solve, with very few failures, but there was something strange about their solutions. While all problems had to be solved by this alternation sequence, very few students discovered the rule, that is, the solution sequence of alternating the two possible moves. Whatever the problem solvers were doing to solve the problems, learning the alternating solution sequence rule did not play a part. Cognitive load theory probably can be traced back to that experiment.

But this was an isolated case. Sweller needed to demonstrate it in an educational context. Research was taken to the fields of maths and physics education, and it did indeed show the effect. I’ll talk briefly about some of the cognitive load effects in education, and we’ll save some more for the next two or three episodes of TOT.

The Goal Free Effect: If working memory during problem solving was overloaded by attempts to reach the problem goal, thus preventing learning, then eliminating the problem goal might allow working memory resources to be directed to learning useful move combinations rather than searching for a goal. Problem solvers could not reduce the distance between their current problem state and the goal using means-ends analysis if they did not have a specific goal state.
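As an aside, Sweller's number-transformation task is simple enough to simulate. The sketch below is my own illustration, not from the 1982 paper (the function name, the search depth, and the example numbers are all mine): it brute-forces the shortest move sequence and confirms that solvable problems are solved by alternating the two moves.

```python
from collections import deque

def solve(start, goal, max_depth=8):
    """Breadth-first search for a move sequence that turns `start` into
    `goal` using only the two moves from Sweller's task: x3 and -29.
    Returns the shortest list of moves, or None if no solution is found
    within max_depth moves."""
    queue = deque([(start, [])])
    while queue:
        value, moves = queue.popleft()
        if value == goal:
            return moves
        if len(moves) >= max_depth:
            continue
        queue.append((value * 3, moves + ['x3']))
        queue.append((value - 29, moves + ['-29']))
    return None

# A 2-step problem: 15 -> (x3) 45 -> (-29) 16
print(solve(15, 16))   # the solution alternates: ['x3', '-29']
# A 4-step problem: 16 -> 48 -> 19 -> 57 -> 28
print(solve(16, 28))   # ['x3', '-29', 'x3', '-29']
```

Running this on any solvable instance makes the alternation rule jump out immediately, which is exactly the rule Sweller's problem solvers failed to notice while busy with means-ends search.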
Rather than asking learners to “find Angle X” in a geometry problem, it might be better to ask them to “find the value of as many angles as possible”.

Nice post from @mpershan on applying Sweller’s cognitive load theory to math instruction: https://t.co/HC3PoboBIs — Dylan Wiliam (@dylanwiliam) January 27, 2017

A couple of other effects are worth noting: the worked example effect and the split-attention effect.

Using Question Stems in the Classroom

Jennifer Gonzalez’s ‘Is Your Classroom Academically Safe?’ Gonzalez’s question stems to scaffold student questioning:
This is what I do understand… (summarize up to the point of misunderstanding)
Can you tell me if I’ve got this right? (paraphrasing current understanding)
Can you please show another example?
Could you explain that one more time?
Is it ______ or _________? (identifying a point of confusion between two possibilities)
I said: What is ___ in the diagram; Am I right in thinking that ___; What’s the difference between ___ and ___. Would love more suggestions.

What Would it Take to Fix Education in Australia?

Full article here, but I’ll just talk briefly about two comments made in question time. Larissa made an interesting point on the role of literacy. Following up on a question from Maxine McKew on the inclusion of Australian literature in Australian schools, she suggested that the literature studied in schools must represent the diversity of our Australian society. If we don’t do this then we’re effectively saying to vast swathes of our society ‘You do not have a place here’. Glenn: There’s a misalignment between the locus of policy making and the locus of accountability in Australia. We’ve increasingly got federal bodies making decisions that have implications for education right across the country (locus of policy making), whereas the accountability for the impacts of these decisions actually falls not at the federal level but at the state level.
Fundamentally, this is a broken feedback loop (my terminology) that undermines improvement and accountability right throughout the system. Several times whilst I was listening to this very high-level discussion on education, a quote came to mind that I heard a couple of years ago: ‘If you change what happens in your classroom, you are changing the education system.’ The post TOT #004. Cognitive Load Theory + more Twitter Takeaways appeared first on Ollie Lovell.
Find all other episodes of Teacher Ollie’s Takeaways here, find it on iTunes here, or on your favourite podcasting app by searching ‘Teacher Ollie’s Takeaways’. You may also like to check out Ollie’s other podcast, the Education Research Reading Room, here.

Show Notes

A Student Tries Out Effective Learning Strategies

GUEST POST: A Student Tries out the Six Strategies for Effective Learning https://t.co/vLfL7jy4MS pic.twitter.com/t9WKARpIjF — Robot Ollie (@OllieAutoEd) January 19, 2017

Original author: Syeda Nizami. The strategies: Spaced Practice, Retrieval Practice, Elaboration, Interleaving, Concrete Examples, Dual Coding.

“Overall, each of the six strategies had their strengths and weaknesses, and it somewhat depends on which method is preferable to you, but I think the two that are truly essential are retrieval practice and spacing. Retrieval practice was and is my preferred way of studying for a quiz or exam, but this experience made me realize how truly useful it is. To be perfectly honest, spacing was a strategy I had never tried out before, even though teachers had always stressed that cramming wasn’t effective.”

Edu Podcasts for Kids (or for inspiration!)

Wow, exciting from @cultofpedagogy. Ed Podcasts for kids https://t.co/NPiJNUbxtk — Oliver Lovell (@ollie_lovell) January 22, 2017

The Show about Science: This science interview show is hosted by 6-year-old Nate, and while it has some serious science chops, it’s also just plain adorable. Nate talks to scientists about everything from alligators to radiation to vultures, in his distinctly original interviewing style. Episode on Ants! Nate’s first interview : )

Are laptops and tablets a help or a hindrance to note taking?
The Impact of Computer Usage on Academic Performance: Evidence from a Randomized Trial at the United States Military Academy (Carter, Greenberg and Walker, 2016). We present findings from a study that prohibited computer devices in randomly selected classrooms of an introductory economics course at the United States Military Academy. Average final exam scores among students assigned to classrooms that allowed computers were 18 percent of a standard deviation lower than exam scores of students in classrooms that prohibited computers. Through the use of two separate treatment arms, we uncover evidence that this negative effect occurs in classrooms where laptops and tablets are permitted without restriction and in classrooms where students are only permitted to use tablets that must remain flat on the desk surface.

Humans can’t multitask https://t.co/rgGgxLjgHP — David Didau (@DavidDidau) January 23, 2017

One of the highlights of my day at researchED Amsterdam was hearing Paul Kirschner speak about edu-myths. He began his presentation by forbidding the use of laptops or mobile phones, explaining that taking notes electronically leads to poorer recall than handwritten notes. The benefits of handwritten over typed notes include better immediate recall as well as improved retention after 2 weeks. In addition, students who take handwritten notes are more likely not only to remember facts but also to have better future understanding of the topic. Fascinatingly, it doesn’t even matter whether you ever look at these notes – the simple act of making them appears to be beneficial.

.@DavidDidau tyranny of the 140 characters! See attached. Wld love ur thoughts.
ps: I enjoyed your recent post on reading 4 betterment : ) pic.twitter.com/0y0jhqJHIs — Oliver Lovell (@ollie_lovell) January 26, 2017

The Rise of Randomised Controlled Trials

Reviewing the evidence shows smaller effect sizes but programmes that replicate in randomised controlled trials: https://t.co/l7xJIt6zbh — Harry Fletcher-Wood (@HFletcherWood) January 23, 2017

Original article by Robert Slavin, who told us about reciprocal teaching effects in TOT001. Reports of rigorous research are appearing very, very fast. In our secondary reading review, there were 64 studies that met our very stringent standards. 55 of these used random assignment, and even the 9 quasi-experiments all specified assignment to experimental or control conditions in advance. We eliminated all researcher-made measures. But the most interesting fact is that of the 64 studies, 19 had publication or report dates of 2015 or 2016. In a recent review I did with my colleague Alan Cheung, we found that the mean effect size for large, randomized experiments across all of elementary and secondary reading, math, and science is only +0.13, much smaller than effect sizes from smaller studies and from quasi-experiments. However, unlike small and quasi-experimental studies, rigorous experiments using standardized outcome measures replicate. These effect sizes may not be enormous, but you can take them to the bank. One might well argue that the SIM findings are depressing, because the effect sizes were quite modest (though usually statistically significant). This may be true, but once we can replicate meaningful impacts, we can also start to make solid improvements. Replication is the hallmark of a mature science, and we are getting there.
If we know how to replicate our findings, then the developers of SIM and many other programs can create better and better programs over time, with confidence that once designed and thoughtfully implemented, better programs will reliably produce better outcomes, as measured in large, randomized experiments. This means a lot. Replication is the hallmark of a mature science, and we’re getting there.

A nice quote to end on: 1/2: When kids receive grades AND comments, 1st thing they look at is the grade; 2nd thing they look at is…someone else’s grade -D Wiliam — Alfie Kohn (@alfiekohn) January 19, 2017

The post TOT #003. Edu podcasts for kids + more Twitter Takeaways appeared first on Ollie Lovell.
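For readers unfamiliar with effect sizes: the "+0.13" Slavin cites is a standardized mean difference (Cohen's d), the gap between group means expressed in pooled standard deviation units. Here is a minimal sketch of the calculation, using made-up exam scores rather than anything from Slavin's data:

```python
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference: (mean_t - mean_c) / pooled SD,
    using the sample variances of the two groups."""
    nt, nc = len(treatment), len(control)
    vt = statistics.variance(treatment)
    vc = statistics.variance(control)
    pooled_sd = (((nt - 1) * vt + (nc - 1) * vc) / (nt + nc - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Invented scores, for illustration only
treatment = [72, 75, 78, 80, 74, 77]
control = [70, 74, 76, 79, 72, 75]
print(round(cohens_d(treatment, control), 2))
```

A d of +0.13 therefore means the average treated student scored 0.13 standard deviations above the average control student: modest, but, as Slavin argues, dependable when it replicates.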
Find all other episodes of Teacher Ollie’s Takeaways here, find it on iTunes here, or on your favourite podcasting app by searching ‘Teacher Ollie’s Takeaways’. You may also like to check out Ollie’s other podcast, the Education Research Reading Room, here.

Show Notes

Teaching ‘the scientific method’

Superb post from @mfordhamhistory, on how we can teach students the discipline through a curriculum of case studies: https://t.co/Akgpv6D3NT — Harry Fletcher-Wood (@HFletcherWood) January 10, 2017

Original post by Michael Fordham:
‘1. Disciplines are characterised as much by their internal differences as their similarities.
2. There is no Platonic ideal of each discipline.
3. Generalised models of disciplines rarely reflect what happens on the ground.
All of these points lead me to great scepticism about curriculum theories in history, science or other disciplines that work by distilling the ‘essence’ from those disciplines, and teaching those. I am not at all convinced that we can teach children ‘the scientific method’ in a general sense before they have learnt a number of cases of scientific research in practice. History teachers have produced numerous examples of this over the last few years. Steve Mastin, for example, designed a scheme of work in which he taught his pupils how one historian (Eamon Duffy) had worked with a particular body of source material to answer questions about the impact of the Reformation in England. Rachel Foster has a similarly well-cited example where she designed a scheme of work around the way two different historians (Goldhagen and Browning) had interpreted the same source material (a report from a police battalion involved in the Holocaust) in quite different ways. In examples such as these, children are taught about a specific example of where historians have undertaken research.
Over time, as pupils learn more and more cases of disciplinary practice, we can then teach them the similarities and differences between different approaches: we thus end with abstract ideas, rather than beginning with them. This means that I would suggest the following as an alternative way of teaching disciplinary practice to school children. Rather than distil some general, abstract ideas about ‘how the discipline works’, we would be better off specifying a range of specific cases of disciplinary practice for children to learn, from which we can as teachers tease out the similarities and differences in approach that characterise our respective disciplines.’

Is Growth Mindset a Hoax?

This is an example of the kind of education journalism that we need more of https://t.co/Ljmk3fyo6x via @tomchivers — Greg Ashman (@greg_ashman) January 14, 2017

Original article by Tom Chivers about the hype around growth mindset, which has been claimed to do everything from helping struggling students to bringing peace to the Middle East. ‘Scott Alexander, the pseudonymous psychiatrist behind the blog Slate Star Codex, described Dweck’s findings as “really weird”, saying “either something is really wrong here, or [the growth mindset intervention] produces the strongest effects in all of psychology”. He asks: “Is growth mindset the one concept in psychology which throws up gigantic effect sizes … Or did Carol Dweck really, honest-to-goodness, make a pact with the Devil in which she offered her eternal soul in exchange for spectacular study results?” The strongest evidence comes from Timothy Bates’ research… ‘Bates told BuzzFeed News that he has been trying to replicate Dweck’s findings in that key mindset study for several years. “We’re running a third study in China now,” he said. “With 200 12-year-olds. And the results are just null. “People with a growth mindset don’t cope any better with failure. If we give them the mindset intervention, it doesn’t make them behave better.
Kids with the growth mindset aren’t getting better grades, either before or after our intervention study.” Dweck told BuzzFeed News that attempts to replicate can fail because the scientists haven’t created the right conditions. “Not anyone can do a replication,” she said. “We put so much thought into creating an environment; we spend hours and days on each question, on creating a context in which the phenomenon could plausibly emerge.’ Reply by Scott Alexander. http://slatestarcodex.com/2017/01/14/should-buzzfeed-publish-information-which-is-explosive-if-true-but-not-completely-verified/ ‘it mentions a psychologist Timothy Bates who has tried to replicate Dweck’s experiments (at least) twice, and failed. This is the strongest evidence the article presents. But I don’t think any of Bates’ failed replications have been published – or at least I couldn’t find them. Yet hundreds of studies that successfully demonstrate growth mindset have been published. Just as a million studies of a fake phenomenon will produce a few positive results, so a million replications of a real phenomenon will produce a few negative results. We have to look at the entire field and see the balance of negative and positive results. The last time I tried to do this, the only thing I could find was this meta-analysis of 113 studies which found a positive effect for growth mindset and relatively little publication bias in the field.’ ‘I guess my concern is this: the Buzzfeed article sounds really convincing. But I could write an equally convincing article, with exactly the same structure, refuting eg global warming science. I would start by talking about how global warming is really hyped in the media (true!), that people are making various ridiculous claims about it (true!), interview a few scientists who doubt it (98% of climatologists believing it means 2% don’t), and cite two or three studies that fail to find it (98% of studies supporting it means 2% don’t). 
Then I would point out slight statistical irregularities in some of the key global warming papers, because every paper has slight statistical irregularities. Then I would talk about the replication crisis a lot.’

‘Again, this isn’t to say I believe in growth mindset. I recently talked to a totally different professor who said he’d tried and failed to replicate some of the original growth mindset work (again, not yet published). But we should do this the right way and not let our intuitions leap ahead of the facts. I worry that one day there’s going to be some weird effect that actually is a bizarre miracle. Studies will confirm it again and again. And if we’re not careful, we’ll just say “Yeah, but replication crisis, also I heard a rumor that somebody failed to confirm it,” and then forget about it. And then we’ll miss our chance to bring peace to the Middle East just by doing a simple experimental manipulation on the Prime Minister of Israel.’

Using private school instructional techniques in a public school

Scaling Mount Improbable: King’s Wimbledon https://t.co/Qt63V9ZLC9 via @joe__kirby — Greg Ashman (@greg_ashman) January 14, 2017

Greg Ashman pointed me to an article by Joe Kirby on how public schools can adopt some of the practices that high-achieving private schools implement, without the massive cost barriers. e.g., ‘Teaching writing is heavily guided, even up to sixth form. In History, for instance, starting point sentences are shared for each paragraph of complex essays on new material. Extensive written guidance is shared with pupils. Sub-questions within each paragraph and numerous facts are also shared.’

Does class size matter?

Class size matters a lot, research shows #ednewsoz https://t.co/qOZQQLOBK1 — TER Podcast (@TERPodcast) January 14, 2017

Original article by Valerie Strauss (read the whole article).

How does visible disadvantage impact student outcomes?
Social Class in the Classroom: Highlighting Disadvantages https://t.co/Lm2GpCS9L6 pic.twitter.com/55Wi3vttNl — Robot Ollie (@OllieAutoEd) January 18, 2017

Original post by Megan Smith. Asking students to raise their hand to signal their achievement (when they knew an answer) highlights differences in performance between students, making those differences more visible. This can lead students from lower social classes, or with lower familiarity with a task, to perform even worse than they would have. In other words, highlighting performance gaps with no explanation for the gap can make the gap even wider! However, making students aware of the fact that some are more familiar with the tasks, due to extra training, can mitigate these issues.

The post TOT #002. Teaching ‘The Scientific Method’ + more Twitter Takeaways appeared first on Ollie Lovell.
Find all other episodes of Teacher Ollie’s Takeaways here, find it on iTunes here, or on your favourite podcasting app by searching ‘Teacher Ollie’s Takeaways’. You may also like to check out Ollie’s other podcast, the Education Research Reading Room, here.

This is the first ever episode of the Teacher Ollie’s Takeaways podcast, the podcast in which I summarise my key takeaways from twitter, blogs, research papers, conversations, and even my own classroom, from the week just past. If you have any thoughts or comments after listening to this podcast, please share them with me via twitter: @ollie_lovell

Show Notes

John Hattie on Direct Instruction

“John (Hattie, 2009) defines direct instruction in a way that conveys an intentional, well-planned, and student-centered guided approach to teaching. “In a nutshell, the teacher decides the learning intentions and success criteria, makes them transparent to the students, demonstrates them by modeling, evaluates if they understand what they have been told by checking for understanding, and re-tells them what they have been told by tying it all together with closure” (p. 206).”

“When thinking of direct instruction in this way, the effect size is 0.59. Dialogic instruction also has a high effect size of 0.82. This doesn’t mean that teachers should always choose one approach over another. It should never be an either/or situation.
The bigger conversation, and purpose of this book, is to show how teachers can choose the right approach at the right time to ensure learning, and how both dialogic and direct approaches have a role to play throughout the learning process, but in different ways.”
“Precision teaching is about knowing what strategies to implement when for maximum impact.”
Some comments on my Masters Project…
“This study shows that, for under-achieving students, the bridge from mathematical challenge and disengagement to success and motivation is a fragile one, and the journey across it becomes more perilous the older a student gets. The ongoing challenge for teachers is to shore up and scaffold this fragile bridge’s structure, and to ensure that the scaffolding provided is appropriate to both the ‘who’ that is crossing, and the ‘when’ of their traverse.”
Tidbit
“Factor Game (http://www.tc.pbs.org/teachers/mathline/lessonplans/pdf/msmp/factor.pdf) in which an understanding of primes and composites was crucial to developing strategies to win”
The Mr Barton Podcast with Dylan Wiliam
Original article here.
Reciprocal Teaching
Robert Slavin: When we encourage students to help each other, whilst there are great benefits to both students, the students who learn the most are the ones who do the most explaining.
The Relevance of Problem Contexts
Jo Boaler: Q: ‘When do girls prefer football to fashion?’ A: When it’s the context of a maths question. Presented with a structurally identical maths question in two different contexts, girls do better than boys when the context is that of football (soccer). This is because they bring less irrelevant and confounding background knowledge into the solving process.
What is learning?
Paul Kirschner: Learning is a change in long-term memory. Aka: if they don’t remember it in 6 weeks, they haven’t really learnt it.
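As an aside on the Factor Game tidbit above, its scoring rule can be sketched in code. This is a minimal, hypothetical sketch assuming the standard rules described in the linked PBS lesson (a player claims a number and scores it; the opponent scores every still-unclaimed proper factor of that number); the function names and the 1–30 board size are my own illustration, not taken from the lesson:

```python
def proper_factors(n):
    """All factors of n smaller than n itself."""
    return [f for f in range(1, n) if n % f == 0]

def play_move(n, available):
    """Score one Factor Game move: the mover claims n, the opponent
    claims every still-available proper factor of n. Returns
    (mover_points, opponent_points) and removes the claimed numbers
    from `available`."""
    factors = [f for f in proper_factors(n) if f in available]
    if n not in available or not factors:
        raise ValueError("illegal move: number taken or no factors left")
    available.discard(n)
    available.difference_update(factors)
    return n, sum(factors)

board = set(range(1, 31))
mover, opponent = play_move(24, board)  # mover scores 24
# opponent scores 1+2+3+4+6+8+12 = 36 from 24's proper factors
```

This makes the primes-and-composites strategy visible: claiming a large prime such as 29 gives the opponent only its factor 1, whereas claiming a composite like 24 hands over more points than it earns.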
Relatedly… John Mason: ‘Teaching takes place in time, but learning takes place over time.’ Ref: James Mannion’s article, Learning is Meaningless.
We Don’t Actually Know What Good Teaching Looks Like!
Heather Hill: We need to stop kidding ourselves by thinking that we can pick a good or a bad teacher by observing them teach a class. Hill suggests a teacher would need to be observed in 6 different classes by 5 different observers (a total of 30 observations) to obtain a reliable rating.
Edit: I emailed Heather Hill about this, and this is what she said: “Thanks for your question. For my own instrument, it originally looked like we needed 4 observations each scored by 2 raters (see attached paper). However, Andrew Ho and colleagues came up with the 6 observations/5 observer estimates from MET data:” Ho’s paper.
Dan Goldhaber: Comparing two models of ‘good teaching’ (a fixed effect and a random effect model) based upon ‘value added’ metrics, the best 9% of teachers as rated by one model were classified as the worst teachers by the other!
Dylan concludes that we can only really comment at the extremes, i.e., ‘We can be pretty sure that a teacher who appears to be very very good is in fact not very very bad, and we can be pretty sure that a teacher who appears very very bad is in fact not very very good’, but that’s about the extent of it.
So… where to? Dylan says that team leaders should focus on one question: ‘What do you want to get better at and how can we do it?’. I’m (Ollie) a bit dubious about this, and I think that team leaders could help by guiding efforts to areas where we can be pretty sure that they’ll have a positive effect on learning (more frequent assessment and better feedback, distribution of practice, better modelling, etc.).
Thinking Hard and Distributed Practice
Robert Bjork: The harder you think about something, the better you remember it. Relatedly, the best time to study something is at the point just before you’ve completely forgotten it!
Simple Hacks to Improve Assessment
The hypercorrection effect: You get two benefits from assessment. The first comes when the testee is forced to recall the information in the first place; this strengthens the synaptic connections. The second benefit comes when they see the answer. Thus, in order to maximise learning, the best person to mark a test is the student who sat it, since marking it themselves is what delivers that second benefit.
Synoptic testing: Testing shizzle up to the point that you’re now up to!
Building Habits (NY Times article)
Charles Duhigg’s TED talk.
“The core of every habit is a neurological loop with three parts: a cue, a routine and a reward.”
The summary of this article is that you want to get to a point where the reward is internal, i.e., you don’t need any external input from yourself (or your students) to feel good about the habit that you’re trying to establish. However, the interesting thing that this NY Times article points out is that you can start off with an external reward, and use this to build the neuro-associations in such a way that the external reward will eventually no longer be required. I’ll read an excerpt from the article that provides a good example.
“If you want to start running each morning, it’s essential that you choose a simple cue (like always lacing up your sneakers before breakfast or always going for a run at the same time of day) and a clear reward (like a sense of accomplishment from recording your miles, or the endorphin rush you get from a jog). But countless studies have shown that, at first, the rewards inherent in exercise aren’t enough. So to teach your brain to associate exercise with a reward, you need to give yourself something you really enjoy — like a small piece of chocolate — after your workout. This is counterintuitive, because most people start exercising to lose weight. But the goal here is to train your brain to associate a certain cue (“It’s 5 o’clock”) with a routine (“Three miles down!”) and a reward (“Chocolate!”).
Eventually, your brain will start expecting the reward inherent in exercise (“It’s 5 o’clock. Three miles down! Endorphin rush!”), and you won’t need the chocolate anymore. In fact, you won’t even want it. But until your neurology learns to enjoy those endorphins and the other rewards inherent in exercise, you need to jump-start the process. And then, over time, it will become automatic to lace up your jogging shoes each morning. You won’t want the chocolate anymore. You’ll just crave the endorphins. The cue, in addition to triggering a routine, will start triggering a craving for the inherent rewards to come.”
The post TOT001: What is Direct Instruction? Dylan Wiliam takeaways, and Building Habits appeared first on Ollie Lovell.