As HR data becomes an increasingly important asset for HR professionals, we often hear that most HR data is actually bad data. Here is why:
It is not clear whether bosses, colleagues, or even direct reports are reliable raters of anybody. Imagine you are a manager: how accurate do you think your ratings of a staff member would be on attributes such as “promotability” or “potential”?
How about more specific attributes such as “customer focus” or “learning agility”? Do you think that you are one of those people who, with enough time spent observing me, could reliably rate these aspects of my performance on a 1-to-5 scale?
These are critically important questions, because in the great majority of organisations we operate as though the answer to all of them is yes: with enough training and time, people can become reliable raters of other people. We have constructed our entire edifice of HR systems and processes on that assumption.
The boss rates your “potential” and puts this rating into a nine-box performance-potential grid, on the assumption that the boss’s rating is a valid measure of your “potential”: something we can then compare to his (and other managers’) ratings of your peers’ “potential” when deciding which of you should be promoted.
As part of your performance appraisal, the boss also rates you on the organisation’s required competencies. HR practitioners do this because we believe that these ratings reliably reveal how well you are actually doing on those competencies.
The competency gaps your boss identifies then become the basis for your Individual Development Plan for next year. The same applies to the widespread use of 360-degree surveys. HR people use these surveys because they believe that other people’s ratings of you reveal something real about you, something that can be reliably identified and then improved.
Unfortunately, this method is a mistake. The research record reveals that neither you nor any of your peers are reliable raters of anyone. And as a result, virtually all of our people data is fatally flawed.
Idiosyncratic Rater Effect
Over the last fifteen years, a significant body of research has demonstrated that each of us is a disturbingly unreliable rater of other people’s performance. The effect that ruins our ability to rate others has a name: the Idiosyncratic Rater Effect. It tells us that my rating of you on a quality such as “potential” is driven not by who you are, but by my own idiosyncrasies: how I define “potential,” how much of it I think I have, how tough a rater I usually am. The effect is resilient; no amount of training seems able to lessen it. And it is large: on average, 61% of my rating of you is a reflection of me.
In other words, when I rate you, on anything, my rating reveals to the world far more about me than it does about you.
In the world of psychometrics, this effect has been well documented. The first large study was published in 1998 in Personnel Psychology; there was a second study published in the Journal of Applied Psychology in 2000; and a third confirmatory analysis appeared in 2010, again in Personnel Psychology.
In each of the three studies, the approach was the same: first, ask peers, direct reports, and bosses to rate managers on a number of different performance competencies; then examine the ratings (more than half a million of them across the three studies) to see what explained why the managers received the ratings they did.
The researchers found that more than half of the variation in a manager’s ratings could be explained by the unique rating patterns of the individual doing the rating: 71% in the first study, 58% in the second, and 55% in the third.
No other factor in these studies — not the manager’s overall performance, not the source of the rating — explained more than 20% of the variance. When we look at a rating we think it reveals something about the ratee, but it doesn’t, not really. Instead, it reveals a lot about the rater.
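To see how a rater’s idiosyncrasies can swamp the signal about the ratee, here is a minimal simulation. The simple additive model and every parameter value below are illustrative assumptions of mine, not the methodology or data of the cited studies; it only shows how, when each rater carries a personal offset larger than the real differences between ratees, most of the variance in the ratings ends up describing the raters.

```python
import random
import statistics

random.seed(42)

N_RATERS, N_RATEES = 50, 50

# Hypothetical additive model (an illustration, not the studies' actual model):
# rating = ratee's true quality + rater's idiosyncratic offset + noise.
true_quality = [random.gauss(3.0, 0.4) for _ in range(N_RATEES)]   # ratee signal
rater_offset = [random.gauss(0.0, 0.7) for _ in range(N_RATERS)]   # rater idiosyncrasy

ratings = [[true_quality[j] + rater_offset[i] + random.gauss(0.0, 0.3)
            for j in range(N_RATEES)]
           for i in range(N_RATERS)]

all_ratings = [r for row in ratings for r in row]
total_var = statistics.pvariance(all_ratings)

# Each rater's mean rating averages away the ratee differences and the noise,
# so the spread of those means approximates the variance due to the rater.
rater_means = [statistics.mean(row) for row in ratings]
rater_var = statistics.pvariance(rater_means)

print(f"share of variance explained by who is rating: {rater_var / total_var:.0%}")
```

With these assumed parameters, the rater’s share of the variance lands in the same rough band the studies report, even though every ratee’s “true quality” is held fixed across raters: the ratings vary mostly because the raters differ, not because the ratees do.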
Perhaps you are beginning to suspect that your HR data can’t be trusted. Your suspicions are well founded. This finding must give us all pause.
It means that all of the data we use to decide who should get promoted is bad data; that all of the performance appraisal data we use to determine people’s bonus pay is imprecise; and that the links we try to show between our people strategy and our business strategy — expressed in various competency models — are spurious.
When it comes to our people within our organisations, we are all functionally blind. It is the most dangerous sort of blindness, because we are unaware of it. We think we can see.
Many of our comfortable rituals — the year-end performance review, the nine-box grid, the consensus meeting, our use of 360s — will be forever changed. We must first stop, and then redesign almost our entire suite of talent-management practices.