Robert McNamara was, by any standards, a wildly successful man. A Harvard graduate and president of the Ford Motor Company, he rose to the heights of U.S. Secretary of Defense in the 1960s, epitomising American élan and brio. But he had one major flaw: he saw the world in numbers.

During the Vietnam War, McNamara employed a strategic method he had used successfully during his days at Ford, where he created data points for every element of production and quantified everything ruthlessly to improve efficiency and output. One of the main metrics he used to evaluate progress and inform strategy was the body count. “Things you can count, you ought to count,” claimed McNamara; “loss of life is one.”

The problem with this method was that the Vietnam War was characterised by the unmeasurable chaos of human conflict, not the definable production of parts on a factory assembly line. Things spun out of control as McNamara’s statistical method failed to take into account numerous unseen variables, and the public turned against US involvement in the war in a cultural outcry that would change the country. Although on paper America was ‘winning’ the war, it ultimately lost it.

As the war became more and more untenable, McNamara increasingly had to justify his methods. Far from providing objective clarity, his algorithmic approach gave a misleading picture of what was becoming an unfathomably complex situation. In a 1967 speech he said:

“It is true enough that not every conceivable complex human situation can be fully reduced to the lines on a graph, or to percentage points on a chart, or to figures on a balance sheet, but all reality can be reasoned about. And not to quantify what can be quantified is only to be content with something less than the full range of reason.”

While there is some merit to this approach in certain situations, there is a deep hubris in the reduction of complex human processes to statistics, an aberration which led the sociologist Daniel Yankelovich to coin the term the “McNamara fallacy”:

1. Measure whatever can be easily measured.

2. Disregard that which cannot be measured easily.

3. Presume that which cannot be measured easily is not important.

4. Presume that which cannot be measured easily does not exist.

Sadly, some of these tenets will be recognisable to many of us in education – certainly the first two are consistent with many aspects of standardised testing, inspections and graded lesson observations. This fiscal approach has been allowed to embed itself in education, often with the justification that we should ‘use data to drive up standards.’ As Keven Bartle reminds us, what we should be doing is using “standards to drive up data.”

The fallacy is based on the misguided notion that you can improve something by consistently measuring it. In the classroom, this is best illustrated by the conflation of learning and performance, which, for Robert Bjork, are two very different things – the former is almost impossible to measure, the latter much simpler. It is very easy to transpose observable performance onto a spreadsheet, and so that has become the metric used to measure pupil achievement and, concomitantly, teacher performance. In tandem with that, there is the hugely problematic grading of lesson observations on a linear scale against often erroneous criteria, as Greg Ashman has written about here.

Two years after the Vietnam War ended, Douglas Kinnard published a significant study called The War Managers, in which almost every US general interviewed said that the body count was a totally misguided way of measuring progress. One noted that counts were “grossly exaggerated by many units primarily because of the incredible interest shown by people like McNamara.”

In education, the ‘incredible interest’ of the few over the many is having a disastrous impact in many areas. One inevitable endpoint of a system that audits itself in terms of numbers and then makes high-stakes decisions based on that narrow measurement is the wilful manipulation of those numbers. A culture that sees pupils as numbers and reduces the complex relational process of teaching to data points on a spreadsheet will ultimately become untethered from the moral and ethical principles that are at the heart of the profession, as the recent Atlanta cheating scandal suggests.

Even in the field of education research, there is a dangerous view in some quarters that the only game in town is a randomised controlled trial (its inherent problems have been flagged up by people like Dylan Wiliam.) If the only ‘evidence’ in evidence based practice is that which can be measured through this dollars and cents approach then we are again risking the kind of blind spots associated with the McNamara fallacy.

Teaching is often an unfathomable enterprise that is relational in essence, and resists the crude measures often imposed upon it. There should be more emphasis on phronesis or discretionary practitioner judgement that is informed by a deep subject knowledge, a set of ethical and philosophical principles and quality research/sustained inquiry into complex problems.

In my experience, the most important factors in great teaching are almost unmeasurable in numbers. The best teachers I know have a set of common characteristics:

1. They are not only very knowledgable about their subject but they are almost unreasonably passionate about it – something which is infectious for kids.

2. They create healthy relationships with those students in a million subtle ways, which are not only unmeasurable but often invisible to those involved.

3. They view teaching as an emancipatory enterprise which informs/guides everything they do. They see it as the most important job in the world and feel it’s a privilege to stand in a room with kids talking about their passion.

Are these things measurable in numbers and is it even appropriate to do so? Are these things helped or hindered by the current league table culture?

Robert McNamara died an old man and had the opportunity to reflect on his long life, most notably in the Academy Award-winning documentary The Fog of War. His obituary in The Economist records that:

“He was haunted by the thought that amid all the objective-setting and evaluating, the careful counting and the cost-benefit analysis, stood ordinary human beings. They behaved unpredictably.”

Measuring progress is important; we need to know whether what we are doing is having an impact compared with another approach that might yield better outcomes. But the current fetish for crude numerical quantification in education is misleading and fundamentally inappropriate for the unpredictable nature of the classroom. We need better ways of recording the phenomena of the classroom – ways that capture more than test scores and arbitrary judgements on teachers, and that do not seek to impose an order where often there is none.

58 responses to “The McNamara Fallacy and the Problem with Numbers in Education”

  1. What happens if we change ‘measurable’ to ‘observable’?

    1. Thanks Peter. The question then is: what is truly observable, and of those things that can be observed, what do we think is of value? A lot of what is used as criteria in lesson obs is very contested.

    2. There are also unpredictable effects caused by being observed that are not normal – e.g. the child who suddenly acts like an angel, so it is assumed there are no behaviour problems. Also, the list of criteria for lessons has been turned into a checklist for every lesson – it’s just not feasible or desirable.

      In addition, what has actually been gained from all this? Given that we achieved a society where most people were literate and numerate without all the league tables, observations, etc., what have these added? What has been the loss? Certainly teacher attrition is at an all-time high, and here’s the thing – no evidence to support the idea that it is the ‘bad’ teachers who are leaving – as everyone knows, they go into senior management…

      1. I’m in senior management and I’m acknowledged as a good teacher.

    3. Excellent, Carl. I’ll be there in New York to grade your session

      1. julieeclarke – I feel on reflection that was a bit mean-spirited of me – I apologise. I know SLT who are good teachers and some of them have been very supportive. But unfortunately this has been the minority…

  2. Great to get a historical perspective on this issue – filing this one under the ‘what was ne’er so well expressed’ category! I’m trying to think through ways that our faculty can turn more of our focus on the students’ habits (reading / writing / participating in culture) to balance our current obsession with performance under test conditions. Thanks again

    1. Thank you. Great to hear.

  3. Carl this is brilliant – there is someone asking for you to be given a teacher award – I second that!! Brilliant analysis.

  4. An incisive, eloquent post. I’d just add that the cul-de-sac into which this mania for measurement leads is a general reconfiguration of education as blandly predictable training and it is this insidious bleaching of all that is of immeasurable value that, not surprisingly, alienates teachers from their own profession. They may “leave” (in increasing numbers) but the reality is that such a system is itself deserting them.

    1. Thank you Ray. Very eloquently put 😉

  5. theclumsylearner

    Although I’m a McNamara (unrelated I might add) I couldn’t agree with this post more. Trying to simplify complex processes into easy to measure data that is used to inform high stakes accountability has a tangled effect that damages the morale of professionals, dumbs down the goals of education and corrupts the learning process. Great post!

    1. Thank you, and well said!

  6. chrismwparsons

    Thank you very much indeed Carl – I had two blogs planned over the next few weeks which I don’t think I’ll need to bother with now! I might roll them into one and use this as a starting point.

  7. Reblogged this on The Echo Chamber.

  8. Fabio Escobar

    A bit of a strawman, no? I know of no one claiming that we should measure for measurement’s sake or that we should only value what can be currently measured. Sometimes we measure because we want to learn about something (maybe our own teaching) and the measurements end up being useless. Still, we try. Isn’t that exploratory enterprise part of education as well?

    Also: The trio of things you attribute to great teachers doesn’t receive any argument. You just present it as if it is gospel.

    Finally, how do you know other teachers have those features? I never get to see others teach, unfortunately. Is it a thing elsewhere in the world that teachers observe each other? I want in on that, because then I might measure myself against them and improve how I teach.

    1. Hi Fabio, thanks for this. I’m not sure anyone is consciously measuring for measurement’s sake, but in many cases that is exactly what is happening. Performance reviews based on numerically graded lesson observations are still de rigueur in many instances, with value-added improvement scores a key element of evaluation, and we all know about standardised testing.

      In terms of teacher subject knowledge, in my experience it is simply *assumed* that teachers know their stuff. Very little time or focus seems to be given to it, and the expectation is to do it in your own time. How many schools support teachers doing subject specific MAs or PhDs? In fact, Hattie claims that it is not a particularly important element of teaching: http://mtle.wikispaces.com/file/view/hattie-on-teacher-subject-matter.pdf

      I think you are right about observing other teachers and collaborating on learning. Have you heard of lesson study? It is a great evidence based way of doing this. Ultimately, we need to move towards a situation where a decent part of our time is ring-fenced off for that type of systematic enquiry.

    2. chrismwparsons

      Hello Fabio and Carl – please forgive me for butting-in to what is already a well addressed query. Perhaps it might give a flavour though of what I suggested further up the thread would be a future post of my own? Please shoot me down Carl if the attributions I make to you are offensive!

      I’m looking at this matter perhaps from a different perspective: If what Carl has suggested is indeed a ‘straw man’, it could actually be as a pre-emptive strike on future criticisms? I was myself planning a cautionary post about how ‘evidence-based’ practice could run into difficulties down the line. We’re acknowledging that not everything that has value in education can be measured, but if we’re going to evidence great teaching, it is inevitably going to rely on the data drawn from that which can be measured. Inevitably at some point down the line then, it is going to be highlighted that evidence-based practice can only justify itself against that which is measurable – which is by definition a limited sphere of education. Therefore a counter-culture could enthusiastically re-emerge, denouncing the need to ‘evidence’ teaching practice.

      Personally, I think that an approach such as the one Carl is taking could help to pre-empt such reactions against what is an important movement in teaching.

      Additionally – about the trio of qualities that Carl mentions – perhaps the point could be that these AREN’T easy to quantifiably evidence? Perhaps they do indeed require a certain amount of professional personal experience saying – “look these things really work – even though you can’t easily measure them!”
      Here’s my take on it: I personally think that the most powerful catalyst in education – in communication from one mature human to an impressionably immature other – is a suitably endowed ‘Sage on the Stage’. I think that one of the reasons why people have moved away from this model to the ‘Guide by the Side’ (which I still think is a really important facet in teaching) is that what they’ve had is ‘Stooges on the Stage’ who haven’t really had the knowledge and the passion to ignite the flame.

      Sorry again for the rude interruption! Please tell me to wind my neck in.

      1. That is superbly put Chris, you will have to write that blog now! I love this: “what they’ve had is ‘Stooges on the Stage’ who haven’t really had the knowledge and the passion to ignite the flame.” – I think this is an uncomfortable truth for many. Subject knowledge and an effusive passion are hugely powerful drivers of learning, but very difficult to measure and so get sidelined for things that are measurable, and that are probably less important.

  9. Hi Carl,
    I enjoyed this very much. It also reminded me of Donald Rumsfeld and his unknown unknowns, the poorly applied Carroll diagram.
    It strikes me that the profession needs to get better at qualitative research in classroom practice, as the essence of the processes you describe is the nature of human relationships with regard to passing information. That’s the thing you can almost “feel” in a high quality classroom, but no real way to measure it. It could be that some people, hopefully the majority, “have it”, but it’s impossible to go beyond the generalisations.
    However, if you want to know if they have “got it”, then a quantitative test is enough, or just asking the right questions…
    Best wishes,
    Chris

    1. Thank you Chris, yes I agree with this. I am increasingly feeling that better qualitative measures are needed to evaluate the phenomena of the classroom. It seems to me that we need better ways to represent and talk about the process; the current focus on measurable outcomes does not do that at all. This tyranny of numbers is a defining feature of contemporary education that we may well look back on in future years with utter disbelief.

  10. […] is the great post The McNamara Fallacy and the Problem with Numbers in Education by Carl Hendrick (thanks to AJ Juliani for the tip). Here’s a quotation that Hendrick […]

  11. Kimmo Kumpulainen

    Thank you for a good post Carl. What if the constant feedback of the students were to consider both aspects: how well have you shown your skills in the subject, and how have you shown your learning skills, i.e. teamwork, bravery to try out new things, focus on the work at hand, asking for help, etc.? And what if the teacher were given feedback on issues like: how well did X encourage me to learn (even from mistakes), did I understand what I learned today, did X help me to connect new information to things I know, etc.?

    I enjoyed this post from an historical perspective and found it thought-provoking with respect to education, but certainly not a complete sell on the idea that measurement is essentially a misguided activity. It is all too easy to measure things that don’t matter, and to struggle to measure those that do (I work in tertiary education, where these issues are as real as they are in primary and secondary schools). However, I think that just means that ongoing refinement and reflection is needed – not that we should all pack up and go home. The quest for the Holy Grail is just that – a quest. We wouldn’t accept lack of measurement in public health, hospital medicine, aviation, or building safety, but we all know that in spite of good measurement protocols, mishaps and errors still occur in those paradigms. Measurement doesn’t have to be perfect to be useful. One of the challenges perhaps is collection of multiple data sources and triangulation across these, to find points of concurrence as well as tensions and inconsistencies. However – some guy getting measurement wrong in relation to the Vietnam war isn’t a convincing argument, nearly half a century later, not to persist with identifying and refining measurement tools in an important endeavour like education.

  13. Jeanne Ballou

    A very thought-provoking article! I think one of Carl’s responses referenced “lesson study”, a powerful and authentic opportunity to work alongside and learn from fellow educators; this is valued in some educational cultures but probably not so much in the US because it involves creative use of scarce resources (and $$ always seems to be an object–although we don’t seem to mind spending billions on high-stakes standardized testing of questionable quality). Also “action research” can be a great tool for assessing students’ needs and progress and could, in my opinion, be an effective element of a teacher’s evaluation.

  14. […] from 4th April about the use of numbers in education (for original post see here http://chronotopeblog.com/2015/04/04/the-mcnamara-fallacy-and-the-problem-with-numbers-in-education/) Here, Hendrick talks about the McNamara Fallacy, the idea that we can improve something if we […]

  15. […] The McNamara Fallacy and the problem with numbers in Education. […]

  16. […] This is a well argued critique of the obsession within education for quantifying performance and pro… […]

  17. […] judgement. There is a lot of interesting literature in this area, I particularly enjoyed the McNamara Fallacy and the problem with Numbers in Education article by Carl Hendrick on the dangers of using data for decision making on very complex […]

  18. […] The McNamara Fallacy and the Problem with Numbers in Education. […]

  19. Reblogged this on smjc3 and commented:
    #McNamara #education #statistics #standardization #learning #performance #phronesis

  20. […] I read this post by @c_hendrick. While not dealing specifically with SATs resits, it does give an excellent, […]

  21. Yes, I would agree that one can get carried away with measuring things. However, measuring is an activity which is sometimes appropriate and sometimes not. The question is, ” what is your purpose?” Education is a word which has become lost. What does it currently refer to? Can anyone tell if it is or it is not happening? Does it have a definable purpose?

    Here in British Columbia, Canada, the purpose of education is to provide jobs. Viewed this way it is a fabulously successful activity. It employs 41,000 teachers whose union is one of the most powerful in the province. The union has no use for measuring anything related to students, least of all anything to do with any activity that can be remotely connected with a teacher doing anything at all. Teachers are just teachers and they teach. Just do not ask what that is or if it results in anything.
    Regardless of anything else, they get paid, and they pay their union dues. What more is there?

  22. […] *what my colleague Carl Hendrick calls the McNamara fallacy. […]

  23. […] story The Passive House in New York How hard it is to get across U.S. cities using only bike lanes The McNamara Fallacy and the Problem with Numbers in Education The Hugo Award silliness What do conservative policy intellectuals think about climate change? […]

  24. […] *what my colleague Carl Hendrick calls the McNamara fallacy. […]

  25. […] The worst influence of this focus on the summative judgement of ‘teacher quality’ is that political discussion about the education system falls into the ‘McNamara Fallacy’ – as described brilliantly in a recent blog by Carl Hendrick: […]

  26. …but attainment isn’t easy to measure; it takes tens of thousands of people and hundreds of millions of pounds each year!!!

  27. […] failed, so perhaps we should be brave enough to allow at least some of it to remain a mystery, to not reduce everything to numbers and seek to ‘tag and bag’ every single thing and instead celebrate our differences as […]

  28. […] The McNamara Fallacy and the Problem with Numbers in Education by Carl Hendrick […]

  29. So you dismiss the evidence that formative assessment (which is predicated on accurate measurement) is one of the most effective pedagogies available to teachers? You must, I think, not only because formative assessment depends on measurement but also because the research that shows it is so effective also depends on measurement. And if you don’t allow research based on objective measurement, how is it possible to disprove your theory that measurement is harmful? If it isn’t possible, does your (undoubtedly popular) theory have any meaning at all (cf. Karl Popper’s verificationism and A J Ayer)?

    1. Nor, in my view, does Dylan Wiliam provide any evidence in his ResearchEd talk to support the assertion in his title that quantitative research is inherently problematic, rather than merely difficult and very often done badly. It is a surprisingly common fallacy to think that something is impossible to do just because it is badly done.

    2. Thanks Crispin. Well, I would question your claim of “accurate measurement” – measurement of what, exactly? Learning or performance? As many have pointed out, the measurement of learning against often arbitrary assessment criteria (particularly in the humanities) often produces data that is not worth the paper it is printed on, and to base interventions, and worse still teacher evaluation, on such stuff is something we should all be concerned about.

      I agree with your views on formative assessment but if the data that that is founded on is questionable then we are back to the drawing board. At the very least we need to recognise the wholly imperfect enterprise for what it is. Also the adoption of linear unit measurement of productivity from business is another hugely problematic issue.

      My main point is that there are too many certainties in education right now and not enough acceptance of its mysteries, that the claims of big data are vastly overrated and they are engendering an unhealthy culture of accountability that serves no one. I would like to see more of a focus on more regular low stakes testing, using GPA to build up a more robust picture of progress and a focus on exemplar work as criteria instead of wooly assessment criteria.

      1. Hi Carl, Thanks for your reply and apologies for the length of this one. I disagree with some of what you say but there is also a lot of common ground and I think the issue is a very important one and it is useful to drill down to the real issues.

        I agree with you that much of our use of data is simplistic and counter-productive. The disagreement – which may actually be quite nuanced – is what we do about this: try and make it better or give up and retreat (as I would see it) to a model based on personal intuition, which will do nothing to resolve the massive inconsistency of performance in the teaching profession.

        I disagree with your distinction between “learning” and “performance” – but it is a distinction that is so commonly made that it will require a massive post – or probably a book – to clear up. But here is my argument in brief.

        Teachers generally use words very loosely – and terminology matters. “Learning” is one of the key offenders. What do you mean by it? Surely what we really mean by “learning” is a delta or improvement. So far, so unclear: what is it that we suppose is improving? “Understanding, knowledge, skill or attitude” you might say. But (a) this list is increasingly recognised as being intertwined, so it is unhelpful to have to list multiple facets of what is more usefully regarded as a continuum, and (b) all these things refer to internal states, which we cannot observe. All we can observe is the external performance. From the performance we *infer* (a probabilistic process) an internal state or disposition, which in turn is *predictive* of future performances. The best word, in my view, for this general disposition, is “capability”. So learning is an improvement in capability and capability is defined in respect of predicted performances (illustrated at https://edtechnowdotnet.files.wordpress.com/2014/11/capability.gif). So the dichotomy between learning and performance, which is so commonly made, is actually very unhelpful.

        I strongly agree with you on the need for “exemplar work” – and the reason for this illustrates the close connection between the capability (which constitutes our learning objective) and performance (which is what is illustrated by the exemplar). So I think you are contradicting yourself by advocating exemplars at the same time as deprecating performance. And I don’t think you can talk of exemplars “*instead of* wooly assessment criteria” but *in addition to* (now, thanks to the exemplars, more clearly defined) assessment criteria. That is implied, surely, by the use of the word “exemplar”? Without the criterion, what is the exemplar supposed to be exemplifying? You cannot dump the criteria – that is the mistake that Daisy Christodoulou (and the Assessment Without Levels Commission) made in the recommendation of national banks of question items – the items have no significance unless they are organised with reference to assessment criteria (which is another term for “learning objective” which is another term for “capability representation”).

        When people use “performative” in a pejorative sense, I think they mean a sort of tick-box approach where people are assessed to have mastered some objective because they have been observed to achieve it in a *single* performance. But my account above shows that capability is predictive and probabilistic – and so an effective assessment can only be made by repeated assessment (which is also how we learn to do something reliably – by repeated practice), often in a variety of different contexts.

        Even then, the assessment of capability must be qualified by a confidence level. While I agree with you that we should avoid excessive certainty – I would avoid the term “mystery” which suggests a retreat into some sort of pseudo-religious obscurantism. Uncertainty can and should be measured and quantified. The only reason we don’t in education has nothing to do with the statisticians (to whom this comes as second nature) but is all to do with the “educational establishment” which cannot bear to admit how little we know about the children we teach.

        In my view, we are going in exactly the wrong direction on data. The Assessment Without Levels Commission recommends that we should have *less* data and that it should have limited use, according to the purpose for which it was originally collected. This removes the possibility of corroborating data, one against the other, and assessing and incrementally increasing reliability and confidence levels. This is fundamental to what data science and analytics is all about. It is a recommendation that forces us back on the position that the data is an expression of someone’s authority (Johnny got a D because OCR says he got a D), instead of being part of a scientific process in which provisional truth claims are cross-referenced and checked. I think you would find that if my approach were taken, many of the arbitrary metrics against which teachers are currently held to account by a top-down bureaucracy would fall over immediately as being hopelessly unreliable.

        There are two other important drivers in the argument. 1. is teachers’ unwillingness to be held to account on the basis of assessment data – a valid position when the data are so bad and teachers are being held to account against simplistic and erratic metrics, but not valid if the data can be improved and the interpretation of that data is much more circumspect, in proportion to the confidence with which valid inferences can be made on the back of that data. You cannot surely be against a culture of accountability in principle, if teachers are being held to account on the right criteria? 2. is workload (and teachers’ general ignorance about statistics, which is related) – but these are easily solved if data-driven education technology systems are used to assist and/or automate data collection and analysis.

        I think at root the problem is that the job that many teachers are asked to do is nigh-on impossible and the performance of the system as a whole is often very poor. One part of the solution is to introduce more clarity into what our objectives are, and more rigour into the way in which we monitor performance against those objectives (and monitor the performance of the monitoring systems). But not much will be achieved by revealing how bad things are if we do not have the means to improve them. And so I interpret the teachers’ talk of mystery and personal intuition and their hostility to data and accountability as a (not entirely unreasonable) conspiracy of silence – because no-one knows how to solve the problem of highly inconsistent and often very poor educational performance.

        That is why I argue that the answer to the conundrum is better education technology (by which I mean a sort of pick-and-mix, heterogeneous, activity-driven, digital textbook) deployed under the control of front-line teachers (who will remain vital to good teaching) and in combination with better data-driven monitoring systems, based on assessment data that is continuously harvested from formative practice exercises. As well as satisfying what in my view are entirely reasonable government demands for higher and more consistent outcomes, I think you would find such a solution would offer teachers a dramatic improvement in working conditions – a cut in workload, higher status in society, the opportunity for better pay, a reduction in stress from being assessed against unattainable objectives, and more motivated and higher performing (spit) students.

        I think we share a similar perception of what the problem is – but propose rather different solutions.

        Thanks for hosting such a long comment, Crispin.

  30. […] but some degree of caution is in order. Can we improve something just by measuring it regularly? Sociologists use the concept of the “McNamara Fallacy” (named after Defense Secretary Ro… It’s only a few short steps from there to the conclusion that anything which cannot be […]

  31. […] to the infamous McNamara Fallacy, this analysis supports the theory that “If you can’t measure it, it doesn’t exist” because […]

  32. Hi Carl,
    Thanks for an interesting post. John Ralston Saul gives an interesting critique (and historical perspective) of McNamara’s technocratic/rational approach to solving human problems in his book “Voltaire’s Bastards” – if you ever see it, give it a read. I think you’d find it interesting.

    1. Like the article but disagree vehemently about these not being measurable (below):

      In my experience, the most important factors in great teaching are almost unmeasurable in numbers. The best teachers I know have a set of common characteristics:

      1. They are not only very knowledgeable about their subject but they are almost unreasonably passionate about it – something which is infectious for kids.

      2. They create healthy relationships with those students in a million subtle ways, which are not only unmeasurable but often invisible to those involved.

      3. They view teaching as an emancipatory enterprise which informs/guides everything they do. They see it as the most important job in the world and feel it’s a privilege to stand in a room with kids talking about their passion.

      Are these things measurable in numbers and is it even appropriate to do so? Are these things helped or hindered by the current league table culture?

      In fact, I would argue that those 3 things are quite easily measured. How about asking students, department heads, and other teachers within the subject department the following about each teacher:

      Rate the following on a scale of 1–10 (1 = No Way, 10 = Undeniably Yes):

      1) Teacher is passionate about what he/she teaches.
      2) Teacher has mastered the subject matter and is able to present it understandably to the class.
      3) Teacher is well respected by the class.
      4) I view the teacher as an exemplary role model who motivates me to learn more.

      You cannot tell me that an exemplary teacher would not score measurably higher in all categories than a poor teacher, in any school system. I do think the average or below-average teacher or coach is afraid of such metrics, because he or she knows exactly how he or she would score.

  34. “1. They are not only very knowledgeable about their subject but they are almost unreasonably passionate about it – something which is infectious for kids.

    2. They create healthy relationships with those students in a million subtle ways, which are not only unmeasurable but often invisible to those involved.

    3. They view teaching as an emancipatory enterprise which informs/guides everything they do. They see it as the most important job in the world and feel it’s a privilege to stand in a room with kids talking about their passion.”

    Maybe so, but in the real world, if you set the bar this high you would have very few teachers and classes of 200 kids. Is there a way that mere mortals can acquire these characteristics? I suspect the only way to move in that direction would be to change the work environment. Many teachers start off with some or all of these qualities but are quickly worn down; the teachers who last longest in the job seem to be those with a much less idealistic attitude.

  35. […] The McNamara Fallacy and the Problem with Numbers in Education – chronotope – blog – Carl Hendrick (5-minute read) […]

  36. Can I allow myself a moment of smugness? Writing from England here, a teacher all my life, now retired. I remember the moment I understood what you are all calling the “McNamara fallacy” all by myself! I was sitting in a cafe, on holiday with friends in a wonderful place called Aubeterre sur Dronne in France about ten years ago, explaining to them why the teaching of reading now seemed obsessed with phonics. (In my experience of teaching young children to read, phonics is limited in its usefulness, apart from sounding out the first letter to give the child a clue. It is useful a little later, when the child is beginning to write.) These friends were curious about why their little grandson came home with endless dreary phonic exercises. Suddenly it came to me: you can count how many sounds a child knows on a tick sheet! It is quite hard to assess reading in other ways. What about the child who is capable, aged 6, of reading The Times but only wants to read the same simple story again and again? What about the child who only wants you to read to them even though they can read themselves? Then there is the child who reads voraciously but cannot be bothered to sound the word out – and in any case, as soon as you tell her the word is “crocodile”, she has that word imprinted on her brain, so vivid is the picture in her head and so involved is she in the story. The subtleties of how children learn the magic art of reading cannot successfully be measured. And sadly, in emphasising phonics too early, some young children are simply bored or feel inferior, and the child who “does well” at phonics may come to view reading as a series of achievements rather than what it should be: the door to so many joyful experiences.
    At least McNamara may have seen the light towards the end of his life. The UK education system is ploughing on in true McNamara style!

  37. […] the things we can measure distort our values (known as the McNamara Fallacy, explained brilliantly here). We want to know if children are learning, but when we attempt to measure this we affect what and […]

  38. […] a fascinating exploration of the fallacy and application to education see Carl Hendrick’s blog here). Whether a student has a SMART target on their electronic pastoral record is a very easy metric to […]
