Why Measures Often Mislead

Here’s a task. Put one leg in a bucket of boiling water and the other in a bucket of freezing water. On average, the temperature is perfect. Herein lies a major issue with measurement that I commonly observe: the belief that aggregated data is an insightful measure of performance. Of course aggregation has some value as a high-level performance indicator, but without interrogating the data beneath it, it can be very misleading.


Simpson’s Paradox

As a powerful illustration, consider Simpson’s Paradox: the phenomenon in probability and statistics in which a trend appears in several groups of data but disappears or reverses when the groups are combined (a concept I always explain in assignments).

As a real example, a university in the USA was taken to court by a young woman who claimed gender bias on the basis that the annual admission data showed significantly more men than women being admitted. Sounds fair, yes?

However, analysis of the data showed that women were generally applying for the most competitive courses, whereas men were more attracted to the less competitive ones. In reality, a higher proportion of women were being admitted to both the competitive and the less competitive courses, but when the numbers were aggregated, more men were admitted overall. Simpson’s Paradox.
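The reversal is easy to reproduce with a few lines of code. The numbers below are invented for illustration (not the actual case data): women have the higher admission rate in each course, yet the lower rate once the courses are pooled.

```python
# Made-up admissions figures: (admitted, applied) per group, per course.
admissions = {
    "competitive":      {"women": (60, 600), "men": (15, 200)},
    "less_competitive": {"women": (30, 50),  "men": (300, 600)},
}

def rate(admitted, applied):
    return admitted / applied

# Within each course, women are admitted at the higher rate.
for course, groups in admissions.items():
    w, m = rate(*groups["women"]), rate(*groups["men"])
    print(f"{course}: women {w:.0%}, men {m:.0%}")

# Aggregate across courses: the trend reverses.
totals = {"women": [0, 0], "men": [0, 0]}
for groups in admissions.values():
    for group, (admitted, applied) in groups.items():
        totals[group][0] += admitted
        totals[group][1] += applied

print(f"overall: women {rate(*totals['women']):.0%}, "
      f"men {rate(*totals['men']):.0%}")
```

The driver is the uneven mix: most women applied to the course with the low admission rate, so the aggregate is dominated by that course for women and by the easier course for men.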

So whenever I am shown aggregated data (which on most scorecards I see tends to be color-coded green, as with most objectives/KPIs, but that’s another blog) I always ask, “But what does this mean?” Typically, the answer I get is “It shows we are performing well.” Then I explain that maybe it does, or maybe it doesn’t: I have no idea without looking at the underlying data.


The Importance of Analysis

And herein lies another major issue with measurement: the belief that the reported KPI score is sufficient information for decision-making. It is not. The top-level KPI number does not provide the full picture of performance; it is only an indicator of performance. Once organizations have collected data, they must analyze it before they can work out what it means and how they may need to change things to improve the likelihood of success against key strategic goals.

Too often, organizations simply collect and distribute performance data without conducting any meaningful analysis, or any analysis at all. Performance management analytics provide the tools and techniques that enable organizations to convert their performance data into relevant information and knowledge. Without them, the whole performance management exercise is of little or no value to the organization.


Basic Training

So, when I work with organizations to build scorecards, I always stress that those who work with measures need to understand at least the basics of how measures work. Hell, in assignments I often have to explain the importance of understanding confidence levels and intervals when using surveys. This is not rocket science.
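To make the survey point concrete, here is a minimal sketch of a 95% confidence interval for a survey proportion, using the standard normal approximation. The survey figures are invented for illustration.

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation confidence interval for a proportion.

    z=1.96 gives roughly a 95% confidence level; the approximation
    is reasonable when n*p and n*(1-p) are both comfortably large.
    """
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

# Hypothetical survey: 420 of 600 respondents report being satisfied.
low, high = proportion_ci(420, 600)
print(f"satisfaction: 70% (95% CI {low:.1%} to {high:.1%})")
```

The practical lesson for scorecard users: a reported 70% satisfaction score from 600 respondents is really a range of plausible values, and a quarter-on-quarter move smaller than that margin may be noise rather than a genuine change in performance.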

With at least the basics understood, I then progress to the basics of analytics. In time, organizations can mature to a more advanced understanding of measurement and analytics. But simply understanding the basics means they do not spend endless amounts of time collecting KPIs and then providing commentary that is at best of limited value or, not uncommonly, downright dangerous, as the so-called analysis leads to strategic, and often expensive, improvement interventions that do not address the problem (and perhaps exacerbate it).

It remains a mystery to me that organizations are, as is typically the case, obsessed with measurement yet do not invest the time and money to teach those who work with measures even the basics of the underpinning science. This is an issue we need to address.


As always feedback is welcome.

James Creelman
    A recognized thought-leading author, trainer and advisor specializing in Strategy Management, The Balanced Scorecard, Leadership & Culture Change, Enterprise Performance Management and Strategic Risk Management. Extensive experience of leading consulting and training assignments across the world, for both government and commercial organizations, most notably in the Gulf and Indonesia (as a resident in both) as well as Europe, North America, Australia and India. Author of numerous articles/blogs as well as 24 in-depth research-based management books, including Doing More with Less: Measuring, Analyzing and Improving Performance in the Government and Not-for-Profit Sector (Palgrave Macmillan, 2014) and Risk-Based Performance Management: Integrating Strategy and Risk Management (Palgrave Macmillan, 2013).


    Jul 28, 2017 at 10:09 AM

    I agree with your viewpoints one hundred percent. One addition I would supplement: on many occasions when measuring, we forget to ask the questions why, where, how, when, who, what, and what for? Without purpose, simply accumulating data produces redundancy.

    Jul 25, 2017 at 3:39 PM

    Agreed. Most organizations that have not implemented operational excellence systems (which in turn leads to training for all leaders at the very least) fail to recognize opportunities for improvement because they don’t know what they don’t know. Their measures don’t tell them anything useful.

    Jul 21, 2017 at 9:40 PM

    Great piece. I like it
