We’ve Lost Sight of the Point of Testing

Testing in its current form is a relatively new phenomenon. In the ancient world, Socrates would ‘test’ his students through a dialogue in which there were no ‘correct’ responses, simply more questions and answers. The Socratic tradition of dialogue largely continued in Europe, with students being tested through oral responses and then essays, until around 150 years ago, when we begin to see the notion of testing as something that can be standardised in a uniform fashion.

In the 19th century, the “father of American public education” Horace Mann advocated testing to provide “objective information about the quality of teaching and learning in urban schools, monitor the quality of instruction, and compare schools and teachers within each school.” At the beginning of the 20th century psychologist Alfred Binet developed a standardized test of intelligence, which would eventually become the standard IQ test we know today.

There is no question that a standardised measure of assessment holds great value in terms of being able to compare students and schools on a national level and to flag up underachieving groups, but there is a clear sense at the moment that standardised testing has become something else, that the tail is now wagging the dog and that the model may be in need of reform, something brought to national attention in the US not only by the mass boycotting of tests but also by John Oliver on national TV.

For me there are several issues, focused not so much on testing itself as on the collateral damage it causes and on how tests are being used far beyond their intended purpose:

1. Tests are no longer part of a judgement; they now are the judgement.

One of the proposed benefits of testing in the 19th century was that it would provide a diagnostic indicator of student progress and school efficacy that would inform a wider, more balanced and measured judgement. However, what we have now is a system where test scores are the judgement – it’s not so much that the tail is wagging the dog as that the tail now is the dog. Referring to Horace Mann’s statewide roll-out of standardised tests in the mid-1800s, a US Congress report from 1992 notes that:

It is important to point out what “standardization” meant in those days. It did not mean “norm-referenced” but rather that “. . . the tests were published, that directions were given for administration, that the exam could be answered in consistent and easily graded ways, and that there would be instructions on the interpretation of results.”

The other issue is that the current model of testing risks exceeding its own mandate by being used mainly for school reform and teacher evaluation as opposed to pupil evaluation. Many opponents of standardised testing claim that the tests may not be fit for purpose even on their own terms. According to James Popham, using standardised tests in this way is like trying to “measure temperature with a tablespoon”:

“Tablespoons have a different measurement mission than indicating how hot or cold something is. Standardized achievement tests have a different measurement mission than indicating how good or bad a school is. Standardized achievement tests should be used to make the comparative interpretations that they were intended to provide. They should not be used to judge educational quality.”

Combine this with an inspectorate that is data-obsessed and you have a perfect storm as this cautionary tale from Geoff Barton illustrates.

2. Standardised tests are really just a measure of who has access to specific resources

Does standardised testing evaluate a pupil’s aptitude or general knowledge, or does it simply register their access to a particular set of resources? Resources here can mean a wide range of things, such as the school itself, a private tutor or parental support, but when a particular set of textbooks defines success, there is an issue. This is well illustrated by the case of the Pennsylvania System of School Assessment (PSSA), where Meredith Broussard discovered that success in her daughter’s 3rd grade test was inextricably linked to specific textbooks:

“Standardized tests are not based on general knowledge. As I learned in the course of my investigation, they are based on specific knowledge contained in specific sets of books: the textbooks created by the test makers.”

Regardless of all the other inequities, the cost of these textbooks is prohibitive in many districts, so the inevitable outcome is that schools in deprived areas simply cannot ‘win’ at standardised tests. Interestingly, in the 1960s the Civil Rights movement protested against standardised testing because it inevitably punished those from a certain social stratum. The Coleman Report found that a student’s home environment was the deciding factor in determining achievement (Rumberger & Palardy, 2005). In a culture of high-stakes accountability the losers will inevitably be the ones without access to the best resources.

3. Linking teacher accountability to student test scores raises difficult ethical questions

The logical end point of a high-stakes system of accountability where teachers are judged on their students’ scores will be the occurrence of dubious ethical practices somewhere along the line, whether on a small scale with teachers ‘correcting’ a student’s coursework or, at the more extreme end, with institutional malpractice. Earlier this year in Atlanta, eight educators were found guilty of organising a “criminal enterprise” under the Racketeer Influenced and Corrupt Organizations (RICO) Act for manipulating student scores on Georgia’s state standardized tests. Now, I’m not for a second suggesting that they did this simply as a result of standardised testing, but the case raises some important issues about the kinds of pressure being put on teachers at the moment. What kind of a system makes people risk years in jail to improve student test scores?

Across the US right now there is a fairly widespread movement of civil disobedience, with parents intervening in their children’s education and opting out of standardised tests. Whilst the moral and ethical dimensions of these decisions are unclear, it is evident that the current fetish for testing has engendered a string of unintended consequences.

4. “But will this be in the test?”

Probably the most dispiriting thing a teacher can hear. We all want students to ‘achieve’ academically, but should that be at the expense of intellectual curiosity and the ephemeral, often immeasurable joy of learning? The two are not mutually exclusive, of course, but when the outcomes for one far outweigh the other then something has to give, and often that is the autonomy of the teacher to go ‘off piste’ and follow a particular conversation or idea perhaps not directly related to the test. Teachers want their students to score well in tests, but what about other measures of ‘success’ – the English teacher who has engendered a lifelong love of reading in a pupil? Or the physics teacher who has sparked a student’s curiosity about the cosmos? Or the languages teacher who has opened a student’s eyes to the values and customs of another country they now want to visit? Many of these things are not testable and are being subsumed by a focus on what is prescribed by an exam board and the techniques needed to be ‘successful’ in it. Harvard Professor Daniel Koretz notes that “If you impose a simplistic numerical measure and lose sight of the other important goals of the institution, then the other goals get short shrift.”

5. College dropout rates suggest that something is wrong

In the UK, figures published by the Higher Education Statistics Agency indicated that over 32,000 students dropped out of university after a year of study in 2012/13. Of those, 7,420 transferred to another university, while 24,745 dropped out of higher education altogether. In the US the picture is even bleaker, with the “lowest college completion rate in the developed world” (Organisation for Economic Co-operation and Development).

There seems to be a lot of blame attached to the universities themselves, but what if pre-university education is simply not preparing students for the intellectual rigour, criticality and independence needed at that level? Are students effectively being herded through a set of tests to provide data that benefits policy makers and Ofsted rather than the pupils themselves? Is the ability to use a broad base of knowledge to think critically being sidelined in favour of the narrow measure of how to pass an exam?

The knock-on effect of a high-stakes testing system with increased accountability will be the limiting of both teacher and pupil agency. Quite apart from the impact on student mental health and stress levels, teachers are increasingly being asked to teach how to pass an exam as opposed to imparting knowledge and eliciting dialogue. Schools are systems of deep uncertainty and flux in which teachers are often held accountable for the unaccountable.

Part of the solution has to be a move from a high-stakes, Russian-roulette, sudden-death style of system to one where pupils are evaluated on their progress through a series of low-stakes, non-threatening tests that foster not only an appreciation of knowledge for knowledge’s sake, but also the ability to think critically and to embrace uncertainty. All of which will prepare them for the ‘tests’ they have ahead of them.


Why Poor Schools Can’t Win at Standardized Testing

Testing in American Schools: Asking the Right Questions. [Full Report.] 

Teacher Knowledge: From ‘Outside-In’ to ‘Inside-Out’

In their book ‘Inside/Outside: Teacher Research and Knowledge,’ Marilyn Cochran-Smith and Susan L. Lytle outline a fundamental problem with our profession, namely that there has been an outside-in model of knowledge creation about what effective teaching is and a ‘top-down’ model of school improvement. Teachers, they claim, have effectively been passive participants in the process of establishing what constitutes good practice, and in research terms have been mere ‘objects of study.’

“The primary knowledge source for the improvement of practice is research on classroom phenomenon that can be observed. This research has a perspective that is “outside-in”; in other words, it has been conducted almost exclusively by university based researchers who are outside of the day-to-day practice of schooling.”

With all the millions invested in education research, it’s somewhat ironic (and symptomatic of the age) that some of the biggest agents of impact recently have been various grassroots movements, driven through self-organising, informal communities connecting through social media. Forums like ResearchED now function as a sort of fourth estate to the traditional trifecta of school, academia and government; indeed, major funding is now being invested to evaluate the impact of both school-to-school support models and brokerage models of research engagement.

What is most significant about these initiatives is that they are being driven by classroom teachers at a grassroots level, working from the inside out, as opposed to the traditional top-down model of ‘experts’ dictating from the outside in.

This movement from ‘fringe to forefront’ is fuelled by a real desire from classroom teachers for not just knowledge, but practical, usable knowledge that speaks to them about their own experiences and has real, demonstrable impact on the pupils in their charge. For too long, the creation of knowledge about what makes effective teaching has been one-way traffic with researchers observing and codifying the phenomenon of the classroom, with very little input from teachers themselves.

Lawrence Stenhouse’s argument was radical: he claimed that research was the route to teacher emancipation and that “researchers should justify themselves to practitioners, not practitioners to researchers.” *

For me the single most important element in this process is the autonomy for teachers to ask their own questions and then carry out a process of collaborative, systematic enquiry to explore them. What we have had until this point has largely been characterised by teachers being given answers to questions they didn’t ask. It is vital that those questions come not just from the outside in but from the direct issues that teachers experience every day in the classroom. As Cochran-Smith and Lytle remind us:

“The unique feature of the questions that prompt teacher research is that they emanate from neither theory nor practice alone but from critical reflection on the intersection of the two.”

However, teachers can’t do it alone, given present workload levels and the lack of training and expertise in research methods. Experienced academics and education research departments can play a vital role by working alongside teachers, helping them shape their focus and providing crucial support in terms of methodology, literature and the wider evidence base, hopefully helping to create a truly collaborative model of knowledge creation about effective classroom practice that is not solely outside-in.


Works cited: Cochran-Smith, M. and Lytle, S. (1993) Inside/Outside: Teacher Research and Knowledge. New York: Teachers College Press. *Stenhouse, in Ruddock and Hopkins (1985), p. 19.

The McNamara Fallacy and the Problem with Numbers in Education

Robert McNamara was, by any standards, a wildly successful man. A Harvard graduate and president of the Ford Motor Company before rising to the heights of U.S. Secretary of Defense in the 1960s, McNamara epitomised American élan and brio. But he had one major flaw: he saw the world in numbers.

During the Vietnam War, McNamara employed a strategic method he had used successfully during his days at Ford, where he created data points for every element of production and quantified everything in a ruthless fashion to improve efficiency and production. One of the main metrics he used to evaluate progress and inform strategy in Vietnam was the body count. “Things you can count, you ought to count,” claimed McNamara; “loss of life is one.”

The problem with this method was that the Vietnam War was characterised by the unmeasurable chaos of human conflict, not the definable production of parts on a factory assembly line. Things spun out of control as McNamara’s statistical method failed to take into account numerous unseen variables, and the public turned against US involvement in the war in a cultural outcry that would change the country. Although on paper America was ‘winning’ the war, ultimately it lost.

As the war became more and more untenable, McNamara increasingly had to justify his methods. Far from providing objective clarity, his algorithmic approach gave a misleading picture of what was becoming an unfathomably complex situation. In a 1967 speech he said:

“It is true enough that not every conceivable complex human situation can be fully reduced to the lines on a graph, or to percentage points on a chart, or to figures on a balance sheet, but all reality can be reasoned about. And not to quantify what can be quantified is only to be content with something less than the full range of reason.”

While there is some merit to this approach in certain situations, there is a deep hubris in the reduction of complex human processes to statistics, an aberration which led the sociologist Daniel Yankelovich to coin the term the “McNamara fallacy”:

1. Measure whatever can be easily measured.

2. Disregard that which cannot be measured easily.

3. Presume that which cannot be measured easily is not important.

4. Presume that which cannot be measured easily does not exist.

Sadly, some of these tenets will be recognisable to many of us in education – certainly the first two are consistent with many aspects of standardised testing, inspections and graded lesson observations. This fiscal approach has been allowed to embed itself in education, with the justification often given that we are ‘using data to drive up standards.’ What we should be doing, as Keven Bartle reminds us, is using “standards to drive up data.”

The fallacy is based on the misguided notion that you can improve something simply by measuring it consistently. In the classroom, this is best illustrated by the conflation of learning and performance, which, for Robert Bjork, are two very different things – the former is almost impossible to measure, the latter much simpler. It is very easy to transpose observable performance onto a spreadsheet, and so that has become the metric used to measure pupil achievement and, concomitantly, teacher performance. In tandem with that, we have had the hugely problematic grading of lesson observations on a linear scale against often erroneous criteria, as Greg Ashman has written about here.

Two years after the Vietnam War ended, Douglas Kinnard published a significant study called The War Managers, in which almost every US general interviewed said that the metric of body counts was a totally misguided way of measuring progress. One noted that the counts were “grossly exaggerated by many units primarily because of the incredible interest shown by people like McNamara.”

In education, the ‘incredible interest’ of the few over the many is having a disastrous impact in many areas. One inevitable endpoint of a system that audits itself in terms of numbers and then makes high-stakes decisions based on that narrow measurement is the wilful manipulation of those numbers. A culture that sees pupils as numbers and reduces the complex relational process of teaching to data points on a spreadsheet will ultimately become untethered from the moral and ethical principles that are at the heart of the profession, as the recent Atlanta cheating scandal suggests.

Even in the field of education research, there is a dangerous view in some quarters that the only game in town is the randomised controlled trial (whose inherent problems have been flagged up by people like Dylan Wiliam). If the only ‘evidence’ in evidence-based practice is that which can be measured through this dollars-and-cents approach, then we again risk the kind of blind spots associated with the McNamara fallacy.

Teaching is often an unfathomable enterprise that is relational in essence, and resists the crude measures often imposed upon it. There should be more emphasis on phronesis or discretionary practitioner judgement that is informed by a deep subject knowledge, a set of ethical and philosophical principles and quality research/sustained inquiry into complex problems.

In my experience, the most important factors in great teaching are almost unmeasurable in numbers. The best teachers I know have a set of common characteristics:

1. They are not only very knowledgeable about their subject but almost unreasonably passionate about it – something which is infectious for kids.

2. They create healthy relationships with their students in a million subtle ways, which are not only unmeasurable but often invisible to those involved.

3. They view teaching as an emancipatory enterprise which informs/guides everything they do. They see it as the most important job in the world and feel it’s a privilege to stand in a room with kids talking about their passion.

Are these things measurable in numbers and is it even appropriate to do so? Are these things helped or hindered by the current league table culture?

Robert McNamara died an old man and had the opportunity to reflect on his long life, most notably in the Academy Award-winning documentary The Fog of War. His obituary in The Economist records that:

“He was haunted by the thought that amid all the objective-setting and evaluating, the careful counting and the cost-benefit analysis, stood ordinary human beings. They behaved unpredictably.”

Measuring progress is important: we need to know whether what we are doing is having an impact or whether another approach might yield better outcomes. But the current fetish for crude numerical quantification in education is misleading and fundamentally inappropriate for the unpredictable nature of the classroom. We need better ways of recording the phenomenon of the classroom, ways that capture more than test scores and arbitrary judgements on teachers, and that do not seek to impose an order where often there is none.

The ‘Outstanding’ school fallacy (It’s not one where the staff work twice as hard as the kids)

This week an insurance firm specialising in cover for teacher absence published some revealing figures showing that stress causes twice as much time off as common ailments, with the firm’s director Harry Cramer noting that “among men it is the single biggest reason.” In another (admittedly less rigorous) study, a preliminary online survey of 3,500 members of the NASUWT showed that 67% of teachers felt that the job was adversely affecting their mental health, with 76% saying they are “seriously considering” leaving the profession.

The usual suspects rolled out are Ofsted, constant curriculum changes, poor behaviour and increased bureaucracy, but one thing that’s rarely mentioned is that in many schools teachers appear to be working significantly harder than the pupils in their charge. This is not so much because the kids are lazy, but because of an institutionalised miasma that is obsessed with measuring everything (usually poorly), that privileges the spreadsheet over the individual, and that has infantilised the process of learning to such a degree that actually knowing stuff is deemed less important than merely appearing to know stuff.

What are the contributing factors here?

1. Ownership of results 

For whatever reason, the ownership of student achievement has somehow been transferred from pupil to teacher. We now talk about *our* results, not *their* results. Many teachers start off in September with a well-intentioned focus on ‘independent learning’ only to end up, between February and May, delivering a series of lessons that look more like the Gettysburg Address. If results now *belong* to the teachers, why should the students work as hard?

2. A culture of spoon-feeding

In a culture that audits itself purely in terms of readily quantifiable measures against often arbitrary targets (with very real consequences for the teacher rather than the student), the inevitable outcome is that teachers will do ‘whatever it takes’ to hit those targets. This has led to some deeply unethical practices, and yet the same schools are deemed ‘outstanding.’

3. The “shrinking of intellectual aspiration.” 

I can’t claim credit for that phrase; I heard Tony Little say it last week and it struck a chord. Something I find more alarming than the proportion of kids who lack basic foundational knowledge (in terms of culture, history and politics) is the number of teachers who think that’s OK. Too many schools are now bastions of anti-intellectualism that exist only to hit targets, where being clever and culturally aware comes second to passing an exam.

A school culture that thinks purely in financial terms will view individuals as expendable, and the inevitable outcome is a canopy of meaningless bureaucracy and stress in which teachers are more skilled at data entry than at knowing their subject. The saddest thing is that the kids then begin to think in this fiscal way too, demanding ‘painting by numbers’ style teaching in order to pass exams and one day inevitably uttering the most depressing sentence you can hear as a teacher: ‘but will this be in the exam?’

An outstanding school is not one where the teachers are working twice as hard as the kids. A school like that is a quivering house of cards, constantly on the verge of collapse.

Originally posted here on Staffrm as part of #digimeet

Engagement: Just because they’re busy, doesn’t mean they’re learning anything.

I’ve long thought that one of the weakest proxy indicators of effective learning is engagement, and yet it’s a term persistently used by school leaders (and some researchers) as one of the most important measures of quality. In fact many of the things we’ve traditionally associated with effective teachers may not be indicative of students actually learning anything at all.

At the #ascl2015 conference last Friday, the always engaging Professor Rob Coe gave a talk entitled ‘From Evidence to Great Teaching’ in which he reiterated this claim. Take the following slide – how many ‘outstanding’ lessons have been awarded on the basis of this checklist?

[Slide: Prof. Rob Coe, ‘From Evidence to Great Teaching’, ASCL, 20 March 2015]

Now, these all seem like key elements of a successful classroom, so what’s the problem? And, more specifically, why is engagement such a poor proxy indicator – surely the busier students are, the more they are learning?

This paradox is explored by Graham Nuthall in his book ‘The Hidden Lives of Learners’ (2007), in which he writes:

“Our research shows that students can be busiest and most involved with material they already know. In most of the classrooms we have studied, each student already knows about 40-50% of what the teacher is teaching.” (p. 24)

Nuthall’s work shows that students are far more likely to get stuck into tasks they’re comfortable with and already know how to do, as opposed to the more uncomfortable enterprise of grappling with uncertainty and indeterminate tasks. A good example of this, as Alex Quigley has pointed out, is that engagement in the form of the seemingly visible activity of highlighting is often “little more than colouring in.” Furthermore, teachers are more than happy to sanction that kind of activity in the name of fulfilling the all-important ‘engagement’ proxy indicator so prevalent in lesson observation forms.

The other difficulty is the now constant exhortation for students to be ‘motivated’ (often at the expense of subject knowledge and depth), but motivation in itself is not enough. Nuthall writes that:

“Learning requires motivation, but motivation does not necessarily lead to learning.” (p. 35)

Motivation and engagement are vital elements in learning, but it seems to be what they are used in conjunction with that determines impact. It is right to be motivating students, but motivated to do what? If they are being motivated to do the types of tasks they already know how to do, or to focus on the mere performance of superficial tasks at the expense of assimilating complex knowledge, then the whole enterprise may be a waste of time.

Learning is in many cases invisible, as outlined many times by David Didau, and it is certainly not linear but rather more nebulous in actuality. As Prof. Coe reminds us, ‘learning happens when people have to think hard’, but unfortunately there is no easy way of measuring this. So what does he suggest is effective in terms of evidencing quality?

Ultimately he argues that it comes down to a more nuanced set of practitioner and student skills, habits and conditions that are very difficult to observe, never mind measure: things like “selecting, integrating, orchestrating, adapting, monitoring, responding”, which are contingent on “context, history, personalities, relationships” and which all work together to create impact and initiate effective learning. So while engagement and motivation are important elements in learning, they should be seen as part of a far more complex conglomerate of factors that traditional lesson observations have little hope of finding in a 20-minute drive-by.

This is where a more robust climate of research and reflective practice can inform judgements. It’s true that more time for teachers to be critically reflective will improve judgements, but we also need to be more explicit about precisely what it is we are looking for, and accept that often the most apparent classroom element may also be the most misleading.

Slides: Prof. Rob Coe, ‘From Evidence to Great Teaching’, ASCL, 20 March 2015

Nuthall, Graham (2007). The Hidden Lives of Learners. Wellington: New Zealand Council for Educational Research Press.

Education Research: Cognitive Psychology Can’t Be The Only Game in Town

As head of research, I have spent a huge amount of time this past year reading papers and increasingly coming up against terms like “inter-rater reliability” or “box-and-whisker plot.” (The latter sounds like some sort of racy cat-based detective novel.) The majority of the papers I read seem to come from the field of cognitive psychology, and whilst they provide fascinating insights into the workings of the brain and have deeply enhanced my understanding of how we actually interpret sensory data, I feel we are losing sight of something.

“8 out of 10 education researchers prefer whiskers.”

Despite doing a course in statistical methods in the first year of my PhD in education, I’m often left cold. Whilst I can work with the abstracts and conclusions, I often struggle with the very methodological terms used to justify the claims made. Someone recently told me that to work in education research you should ideally have a degree in cognitive psychology and statistical methods. My response was that unless he had read Homer, Plato, Socrates, Shakespeare or Locke, he shouldn’t be allowed near a classroom. (Unreasonable, I know, but it sounded good at the time.)

A key element of education research is representation. You are attempting to represent a process (and an unfathomably complex one at that) and then test particular approaches or observe specific phenomena. Using a solely empirical method to represent and describe this deeply complex relational phenomenon can seem akin to “measuring a transistor to make sense of a joke in a YouTube video” (Eagleman).

In the education research arena, I find myself more and more listening to people who are not so much talking about this complex process but rather lecturing us on how they have simply measured a transistor. I’m always reminded of Chris Morris tricking Noel Edmonds on Brass Eye into telling us that the “made-up” drug “cake” affects a part of the brain known as “Shatner’s Bassoon.”

It is great that there have been so many advances in our understanding of how the brain works and its relationship to student learning, but there sometimes seems to be an absence of discourse about the ends to which this information is useful or how exactly it empowers children. There are some fantastic practitioners in cognitive psychology, such as Nick Rose and Mark Healy, who take findings from the field and apply them in insightful and erudite ways informed by other disciplines, but in the case of many others it feels as if we are becoming literally “brainwashed.”

Why is this field so dominant now? Is it commensurate with a school culture that now seems to audit itself solely in terms of quantitative, ‘measurable’ data? There is much more to be said on this, and I just wanted to put down some brief thoughts here, but to my mind there are four broad disciplines in education research: philosophy, anthropology, sociology and, of course, psychology. At the moment I worry that we seem to have placed all our bets on the last of these.

The scourge of motivational posters and the problem with pop psychology in the classroom

Fifteen years ago I watched David Brent give this masterclass in motivation. This was before I started teaching, and when I entered the profession I was horrified to learn that this kind of stuff appeared to be embedded in so much of education, from the Monday morning assembly to the top-down CPD session. I remember attending a leadership training day that featured one section which was, almost word for word, a carbon copy of the hotel role-play scene in which Brent ‘fazes’ the trainer.

Nowhere is this pseudo-profundity more alive today than on social media, and the weapon of choice for this kind of stuff is the motivational poster. More than ever, we seem to be drowning under a tidal wave of guff exhorting both pupil and teacher to ‘reach for the stars’ and ‘be all that you can be.’ While seemingly benign and well-intentioned, these missives in mediocrity signal a larger shift towards the trivial and sit alongside a set of approaches that may well be doing more harm than good.

Carol Dweck’s work on Growth Mindsets is often mentioned in relation to interventions aimed at shifting student self-perception, but as with a lot of promising areas, the transition from research to practice is often a dysfunctional one. The hallways of many schools are now festooned with the obligatory mindset motivational posters and “failure walls” (I’ve always wondered about these; they’re like a 12-step recovery programme with 11 steps missing), with whole-school assemblies exhorting kids to embrace failure and choose a more positive mindset, often reductively misrepresented as “you can achieve anything if you believe.”

This type of stuff is obviously well-intentioned, but beyond symbolising a culture that privileges the media soundbite over critical reflection, it does, I think, signify an increasing shift towards psychological interventions aimed at changing student self-perception, and it represents a somewhat base and reductive approach to an extremely complex set of issues. Done well, certain interventions can be highly effective, as in the case of coaching or the aforementioned promising field of Growth Mindsets. Done poorly, however, they can not only confuse students but also take up valuable time and resources that could go towards things that would improve student self-perception in a far more powerful way. On a more serious level, Nick Rose has written about the worrying rise of soft psychotherapy in schools and warns that these interventions may be poor substitutes for woefully inadequate mental health provision for children.

There are two central issues with these generalised attempts to manipulate students’ perceptions of themselves. Firstly, student self-concept is both multi-dimensional and hierarchical (Marsh et al., 1983; Muijs, 1997): a student might have a very positive concept of self in English but a very negative one in Maths. Secondly, student self-concept is both academic and non-academic and can be broadly categorised into seven subareas, such as physical ability/appearance and peer relations as well as academic ability (Shavelson, 1986). Trying to manipulate these domain-specific issues through ‘all-purpose’ positive interventions intended to boost general self-esteem is therefore likely to be ineffective.

The other major issue here is that we may have got things back to front. Research shows that while there is a strong correlation between self-perception and achievement, the actual effect of achievement on self-perception is stronger than the other way round (Guay, Marsh and Boivin, 2003). It may well be the case that using time and resources to improve student academic achievement is a better agent of psychological change than psychological interventions themselves. Daniel Muijs and David Reynolds (2011) note that:

“At the end of the day, the research reviewed shows that the effect of achievement on self-concept is stronger than the effect of self-concept on achievement.”

So there is a strong case for saying that focusing our efforts on teaching students well (surprise, surprise) and giving them clear and achievable paths to academic success will do more for their self-perception than unproven interventions like the pop psychology churned out in so much of school life. A key question, then, is why so much time and energy is invested in it.

One of the best initiatives I’ve seen is from a school in New York that uses blocked periods of time in the school week called ‘Lab Time’, during which both teachers and pupils are free and the onus is on the students to book appointments with particular teachers to go over work they have missed, didn’t understand or just need to improve on. This gives pupils a real sense of agency, responsibility and choice, and a series of opportunities to address their own problems. How much time do we waste on assemblies, tutorials and numerous interventions that are costly, time-intensive and ultimately ineffective? Would an approach like this not only give pupils a better chance of improving their academic achievement but also, concomitantly, their own self-perception?

Motivational posters are a “daily boost of inspiration” for some and vomit-inducing for the rest of us, but they also encourage us to take complex ideas and reduce them to something utterly trivial, seemingly life-changing and often far removed from the original premise. Complex ideas should be given the time and space for critical reflection; we should resist the urge to summarise them into a soundbite. Education research in particular shouldn’t be presented as some kind of ersatz profundity captured in a single sentence; it should embrace Keats’s notion of negative capability and seek a richer, more complex and ultimately more elegant elucidation of the difficult ideas that we hope will improve the student experience.

As my old English literature tutor Prof. Chris Baldick once quipped in a lecture “Men are from Mars, women are from Venus and pop psychology is from Uranus.”

Why has practitioner research had such little impact in schools?

One thing that has always baffled me is how school leaders have marginalised staff involved in research or practitioner inquiry. If a teacher wants to do an MA or PhD in an area related to their own professional development, they are often given little or no financial or time support. Research has certainly not been a central part of the mission of being a classroom teacher; it has in effect been seen as an expensive hobby.

Many teachers I’ve spoken to have essentially felt like rogue agents, “pale students of unhallowed arts” wielding dangerous knowledge. Their work is not aligned with a whole school focus and very little of it is even linked with their own professional development.

How many senior leadership meetings feature the phrase “What does the research say?”, or even take the position that research might be something useful? In my experience, many younger staff who want to do research are not sure exactly what it is they want to research but just want to improve and be more reflective about their practice. Why aren’t school leaders harnessing that kind of enthusiasm for whole-school improvement?

Whole school research focus.

In order to maximise the impact of school-led research, we need to move towards a model where there is a common focus of inquiry with many stakeholders. One way of doing that is to:

• Establish an issue to be solved that is aligned with the whole-school vision.
• Work with HEIs to survey the literature around this area and help design methodologically robust approaches.
• Embed opportunities for practitioner research across all departments/faculties, not just a self-selected few.
• Involve the student body in this focus, using student journal clubs.
• Establish a Research Centre to act as a conduit.
• Build in time for staff to conduct research and disseminate findings.

If we are going to maximise the impact of research in schools, it needs to be more than a clandestine bunch of mavericks practising some kind of weird alchemy that no one understands (least of all themselves).

Opportunities need to be given for practitioner-led research that is aligned with a clearly defined, well-communicated, whole-school vision of improvement and in which all staff feel they can have an impact.