Category: What Does This Look Like?

American edition of What Does This Look Like in the Classroom? with new foreword by Dylan Wiliam


Delighted to announce that the American edition of ‘What Does This Look Like in the Classroom?’ has been released this week with a new foreword by Dylan Wiliam, which you can read here.

The book is the third volume in @Learn_Sci’s Dylan Wiliam Center Collection.

In 1999, Paul Black and I were working with a group of math and science teachers. We had just completed a major review of the research on the impact of assessment on learning, and we had published our findings in a rather dense 65-page academic journal article.1 However, since we thought our findings would be of interest to practitioners and policy-makers, we also wrote a more accessible summary of our research, and its implications, which was published in Phi Delta Kappan magazine.2

One of the most surprising findings of our review, which we were sharing with the teachers, related to research on feedback. A particularly comprehensive review of feedback in schools, colleges and workplaces by two American psychologists—Avraham Kluger and Angelo DeNisi—had found that while feedback was often helpful in improving achievement, in 38% of the well-designed studies they had found, feedback actually lowered performance.3 In other words, in almost two out of every five cases, the participants in the research studies would have performed better if the feedback had not been given at all.

In trying to make sense of their findings, Kluger and DeNisi suggested that feedback was less effective when it was focused on the individual (what psychologists call “ego-involving”) and more effective when it was focused on the task in which the students were engaged (“task-involving”). We therefore suggested to the teachers that to make their feedback to students more effective, they should give task-involving rather than ego-involving feedback.

Most of the teachers seemed to find this advice useful, but one teacher, after some thought, asked, “So does this mean I should not say ‘well done’ to a student?” Paul and I looked at each other, and realized that we didn’t know the answer to the question. We knew, from the work of a number of researchers, that in the longer term, praise for effort would be more likely to be successful than praise for ability. However, without knowing more about the relationship between the teacher and the student, about the context of the work, and a whole host of other factors, we could not be sure whether “Well done” would be task-involving or ego-involving feedback.

What is ironic in all this is that we had failed to take the advice we had given teachers a year earlier in the Phi Delta Kappan article, where we said,

if the substantial rewards promised by the research evidence are to be secured, each teacher must find his or her own ways of incorporating the lessons and ideas set out above into his or her own patterns of classroom work and into the cultural norms and expectations of a particular school community. (p. 146)

The important point here is that the standard model of research dissemination, where researchers discover things, and then tell teachers about their findings, so that teachers can then implement them in their own classrooms, simply cannot work. As Carl Hendrick and Robin MacPherson point out in this book, classrooms are too complex for the results of research studies to be implemented as a series of instructions to be followed. Rather, the work that teachers do in finding out how to apply insights from research in their own classrooms is a process of creating new knowledge, albeit of a distinct, and local kind. This is why this book is so unusual and important. It is not an instruction manual on how to do “evidence-based teaching” (whatever that might mean). It is, instead, an invitation to every educator to reflect on some of the most important issues in education in general—and teaching in particular—and to think about how educational research might be used as a guide for careful thinking about, and exploration of, practice.

A second unusual—and welcome—feature of this book is the way it was put together. Carl and Robin started by identifying a number of issues that are relevant to every teacher—student discipline and behavior, motivation, grading, classroom practice, reading, inclusion, memory, technology and so on. At this point, most authors would have written advice for teachers on these issues, but of course, the danger with such an approach is that it reflects the concerns of the authors, rather than of the potential reader—what philosopher Karl Popper described as “unwanted answers to unasked questions.”4

Instead, in a novel twist, Carl and Robin decided to ask practicing teachers what were for them the most important questions in each of the areas. Then, again counter to what most writers would have done, they posed the questions to both academic experts and those with expertise in classroom practice. The result is a marvelous combination of insights into teaching that are both authoritative and immediately relevant to classroom practice.

If you have read the previous volumes in the “Dylan Wiliam Center Collection” —Craig Barton’s How I wish I’d taught maths and Tom Sherrington’s The learning rainforest—you will know that our aim has been to bring to North American readers the very best of authoritative writing on education from around the world. While the questions that were posed by the teachers that Carl and Robin worked with may appear to be focused on issues that are of particular concern to teachers in England, the extraordinary range of expertise of those responding to these questions means that the answers are relevant to every American educator. The very best writing on educational research, in an accessible form—solutions you can trust.

 

References

1 Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy and Practice, 5(1), 7-74.

2 Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139-148.

3 Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254-284.

4 Popper, K. (1992). Unended quest: An intellectual biography (p. 41). London, UK: Routledge.

 

 

‘Four Quarters Marking’ – A Workload Solution?

In our new book ‘What Does This Look Like in the Classroom?’ we interviewed Dylan Wiliam on how to implement research on assessment in the classroom.  

 

A central problem in the area of classroom assessment has been the way we often confuse marking and feedback. As Dylan Wiliam points out in our discussion, there is an extraordinary amount of energy expended by teachers on marking, and often very little to show for it in the way of student benefit. Although feedback is one of the most effective drivers of learning, one of the more surprising findings is that a lot of it actually has a negative effect on student achievement.

A set of marked books is traditionally seen as an effective proxy for good teaching but there is a lot of evidence to say that this might not always be the case. This problem is on a scale that might surprise a lot of people:

Dylan: I once estimated that, if you price teachers’ time appropriately, in England we spend about two and a half billion pounds a year on feedback and it has almost no effect on student achievement.

Certainly, students need to know where they have misconceptions or have made spelling errors, and correcting those is important. Doing so also provides a useful diagnostic for teachers to inform what they will teach next, but the written comments at the end of a piece of work are often both the most time-consuming and the most ineffective. Take, for example, the following typical comments on a GCSE English essay:

• Try to phrase your analysis of language using more sophisticated vocabulary and phrasing.
• Try to expand on your points with more complex analysis of Macbeth’s character.

This is a good example of certain assessment practices where the feedback mainly focuses on what was deficient about the work, which, as Douglas Reeves notes, is “more like a post-mortem than a medical.” The other problem is that it doesn’t really tell the student what they need to do to improve. What is more useful to the student here: receiving vague comments like these, or actually seeing sophisticated vocabulary, phrasing and analysis in action? It’s very difficult to be excellent if you don’t know what excellent looks like.

Often, teachers give both a grade and comments like those above, hoping that students will somehow improve by the time their next piece of writing comes around a week later, and then berate them when, lo and behold, they make the same mistakes again. Perhaps part of the problem here is that we have very low expectations of what students are willing to do in response to a piece of work, and do not afford them the opportunity to engage in the kind of tasks that might really improve their learning.

To address this problem, Dylan advocates a much more streamlined model of marking that is not only more manageable for teachers, but also allows students to have more ownership over the process:

Dylan: I recommend what I call ‘four quarters marking.’ I think that teachers should mark in detail 25% of what students do, should skim another 25%, students should then self-assess about 25% with teachers monitoring the quality of that, and finally, peer assessment should be the other 25%. It’s a sort of balanced diet of different kinds of marking and assessment.


Dylan Wiliam’s Four Quarters Marking (Oliver Caviglioli)
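Purely as an illustration of the arithmetic involved (and not something prescribed in the book), the short sketch below shows one way a department might operationalise that balance over a four-week cycle, rotating each class’s work through the four kinds of marking so that each kind ends up covering roughly 25% of what students produce. The class names and the week-by-week rotation are assumptions made for the example only.

# Illustrative sketch only: rotating classes through the four "quarters"
# of marking over a cycle. Class names and the rotation are hypothetical.

MODES = [
    "teacher marks in detail",
    "teacher skims",
    "students self-assess (teacher monitors quality)",
    "students peer-assess",
]

def four_quarters_rota(classes, weeks=4):
    """Map each week to a {class: marking mode} plan, rotating the modes
    so every class meets each mode once across the cycle."""
    return {
        week + 1: {
            cls: MODES[(week + i) % len(MODES)]
            for i, cls in enumerate(classes)
        }
        for week in range(weeks)
    }

if __name__ == "__main__":
    for week, plan in four_quarters_rota(["10A", "10B", "10C", "10D"]).items():
        print(f"Week {week}")
        for cls, mode in plan.items():
            print(f"  {cls}: {mode}")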

After students produce a piece of work, instead of giving them abstract, skills-based success criteria, it is probably more powerful for them to have access to a bank of exemplar essays or worked solutions, so they can see concrete examples of success against which to self-assess their own work. Marking everything in sight and leaving detailed comments is now an established cultural norm, but this practice doesn’t appear to be based on good evidence. We know, for example, that many students will look at a grade and not engage with the feedback, but is that feedback always useful anyway?

As we discuss in the book, a common issue we see again and again in using research in the classroom is the ‘Chinese whisper effect’, where, by the time evidence works its way down to the level of the classroom, it is a pale imitation of its original form. This is especially prevalent in the area of marking, where convoluted policies such as triple marking are enacted as a means of raising pupil achievement when often all they do is increase teacher workload. As Dylan Wiliam reminds us, “feedback should be more work for the recipient than the donor,” but how do you change a culture that has traditionally been the opposite?

Dylan: In terms of what we do about this, I would say first of all, headteachers should lay down clear expectations to parents and say things like, “We are not going to give detailed feedback on more than 25% of what your child does. The reason for that is not because we’re lazy. It’s because there are better uses we could make of that time. We could mark everything your child does, but that would lead to lower quality teaching and then your child will learn less.” Heads have to establish those cultural norms. If a teacher is marking everything your child does, it’s bad teaching. It is using time in a way that does not have the greatest benefit for students.

As a profession, we are, to some extent, our own worst enemies. Using marking policies that have little impact on student achievement and a negative impact on teacher workload and morale makes little sense. By adopting an approach like four quarters marking, we might go some way to addressing this issue and, at the same time, give students more ownership over their own learning.

‘What Does This Look Like in the Classroom?’ is out later this month. 

The Abilene Paradox: Why Schools Do Things Nobody Actually Wants

On an indecently hot day in Texas, professor Jerry B. Harvey was visiting his wife’s family when his father-in-law suggested they visit a new restaurant in the town of Abilene, to which his wife exclaimed, “sounds like a great idea.” Harvey had reservations, however, as a 53-mile trip in a car with no air-conditioning sounded terrible to him, but, not wanting to rock the boat, he also proclaimed this a good idea and asked his mother-in-law if she wanted to go. As she was now the only one in the group who had not yet expressed agreement with this “great idea,” she also said they should go, and so they began their journey to Abilene. However, as Harvey explains, the trip was not a success:

My predictions were fulfilled. The heat was brutal. Perspiration had cemented a fine layer of dust to our skin by the time we had arrived. The cafeteria’s food could serve as a first-rate prop in an antacid commercial.

Some four hours and 106 miles later, we returned to Coleman, hot and exhausted. We silently sat in front of the fan for a long time. Then to be sociable, I dishonestly said, “It was a great trip wasn’t it?”

No one spoke.

After a while, his mother-in-law admitted that she had never really wanted to go but had only done so because she thought everyone else wanted to and didn’t want to cause a fuss, to which his wife protested that she had never really wanted to go either, which then led to a volley of argument. Eventually his father-in-law broke the silence and exclaimed in a long Texas drawl: “Shee-it. Listen, I never wanted to go to Abilene. I just thought you might be bored. You visit so seldom I wanted to be sure you enjoyed it. I would have preferred to play another game of dominoes and eat the leftovers in the icebox.” This experience led to Harvey coining the term ‘the Abilene paradox’ to describe a curious aspect of group dynamics in which a group tacitly creates the opposite of what everyone wants, because each member thinks they are agreeing with what everyone else wants.

After the outburst of recrimination we all sat back in silence. Here we were, four reasonably sensible people who, of our own volition, had just taken a 106-mile trip across a godforsaken desert in a furnace-like temperature through a cloud-like dust storm to eat unpalatable food at a hole-in-the-wall cafeteria in Abilene, when none of us had really wanted to go. In fact, to be more accurate, we’d done just the opposite of what we wanted to do. The whole situation simply didn’t make sense.

The Abilene paradox lies in the fact that we have problems not with disagreement, but rather with agreement. It is characterised by people in an organisation privately agreeing that one course of action makes sense, failing to properly communicate that view, and then collectively stumbling towards what they think is the right course of action, or what everyone else wants. Eventually an inaccurate picture of what to do emerges and, based on that, the organisation takes steps towards actions that nobody really wants and which are ultimately counterproductive to the aims of the organisation itself.


You can witness the Abilene paradox at work in many schools. Often it takes the form of ill-considered marking policies which increase teacher workload to the point of exhaustion, endless tracking and monitoring of students, behaviour policies which punish the teacher more than the student who misbehaves, and graded lesson observations where teachers abandon what they normally do to put on a one-off, all singing, all dancing lesson for the observer, because everyone thinks that’s what inspectors want.

A lot of this can be accounted for by innate cognitive biases such as groupthink, but it can also be exacerbated either by poor evidence, as in the case of learning styles, or by a poor understanding and misappropriation of good evidence, as in the case of formative assessment. With the emergence of a solid evidence base, we might just be able to defend ourselves from these kinds of cognitive biases, if the evidence is communicated clearly and appropriated effectively as part of a broader discussion about the values of a school. At its best, good evidence can act as a bulwark against the tsunami of nonsense that has so often washed over our schools. If we fail to have these important discussions and simply go with what we think might work, then we are at risk of loading the entire staff onto the school mini-bus and heading off to Abilene.

 

This is an excerpt taken from the forthcoming book ‘What Does This Look Like in the Classroom? Bridging the Gap Between Research and Practice’.