360 feedback reports: what rater groups can tell you

    360 degree feedback normally gathers input from a range of people who know the appraisee well and can offer a useful perspective. Typically, to ensure a proper cross section of responses, raters are chosen from among the appraisee’s colleagues, peers, direct reports, managers and perhaps customers or other external contacts.

    When the report is finally produced it usually identifies which responses came from which rater group, although our system offers the facility to suppress this information. Knowing the source of each response can be useful in some cases, but like a lot of things it needs to be treated with some caution.

    Firstly, rater groups may not be as clearly defined as you might think, and they may not be relevant at all. Working relationships in the modern workplace are often fluid, and the formal reporting structure may bear very little relation to the way people actually work together.

    Secondly, the fact that somebody gives a particular score does not necessarily mean that their formal relationship with the appraisee is a contributing factor. The score is more likely to reflect how the two people get on in real life, and that, of course, depends on a great many things.

    What rater group information can show you, however, is when one group of people takes a consistent view of a specific behaviour that is markedly different from another group’s. For example, against the question “Listens to and considers others’ views” you might find that the person’s managers gave consistently high scores while their staff gave consistently low ones. That tells a story, but only about one particular behaviour, and you would need to look in detail at the other behaviours to see whether a pattern was emerging.
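
    If you want to see what that kind of comparison looks like in practice, here is a minimal sketch in Python. It simply groups the scores for one question by rater group and prints each group’s average; the group names and scores are invented for illustration and are not drawn from any real report.

        # Sketch: per-group averages for a single question.
        # Group names and scores below are illustrative only.
        from collections import defaultdict
        from statistics import mean

        # (rater group, score on a 1-5 scale) for one question,
        # e.g. "Listens to and considers others' views"
        responses = [
            ("manager", 5), ("manager", 4),
            ("staff", 2), ("staff", 1), ("staff", 2),
            ("peer", 3), ("peer", 4),
        ]

        by_group = defaultdict(list)
        for group, score in responses:
            by_group[group].append(score)

        for group, scores in by_group.items():
            print(f"{group}: mean {mean(scores):.1f} from {len(scores)} raters")

    Run on data like this it would print a mean of 4.5 for managers against 1.7 for staff, which is exactly the kind of single-question split described above.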

    It is tempting to try to roll up the scores from the different rater groups, perhaps by averaging them, and then to look for differences. DON’T! By the time you have done the sums there will ALWAYS be differences between the groups, largely because the sample sizes are small and therefore subject to statistical variation. That does not mean the differences signify anything: variation due to chance may well swamp any difference due to the actual scoring.
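
    A quick simulation makes the point. In the sketch below, a toy model rather than anything from a real system, two groups of four raters draw their scores from exactly the same underlying distribution, so any gap between their averages is pure noise; sizeable gaps still turn up regularly.

        # Sketch: two rater groups of 4 score from the SAME underlying
        # distribution, so every gap between their averages is pure noise.
        import random

        random.seed(1)

        def group_average(n=4):
            # Each rater draws a 1-5 score with the same true tendency.
            return sum(random.choice([2, 3, 3, 4, 4, 5]) for _ in range(n)) / n

        gaps = [abs(group_average() - group_average()) for _ in range(1000)]

        print(f"average gap between identical groups: {sum(gaps) / len(gaps):.2f}")
        print(f"trials with a gap of 1+ points: {sum(g >= 1 for g in gaps) / 10:.0f}%")

    With only four raters per group, gaps of a full point or more on a five-point scale appear in a sizeable minority of trials even though the groups are, by construction, identical.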

    To extract any useful meaning from rater group information you need to look closely at the answers to individual questions, and then look for clear and consistent patterns.
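
    If you wanted to automate that hunt for patterns, one possible rule of thumb, and it is only one interpretation of what “clear and consistent” might mean, is to flag a question when the gap between two groups’ means is large and each group’s own scores are tightly clustered. The question names, data and thresholds below are all invented for illustration.

        # Sketch: flag questions where two rater groups disagree clearly AND
        # each group is internally consistent. Thresholds are arbitrary choices.
        from statistics import mean, pstdev

        # question -> {group: [scores]}; illustrative data only
        scores = {
            "Listens to and considers others' views": {
                "manager": [5, 4, 5], "staff": [2, 1, 2],
            },
            "Meets deadlines": {
                "manager": [4, 3, 4], "staff": [4, 4, 3],
            },
        }

        GAP = 1.5      # minimum difference between group means
        SPREAD = 0.8   # maximum standard deviation within a group

        for question, groups in scores.items():
            (g1, s1), (g2, s2) = groups.items()
            clear = abs(mean(s1) - mean(s2)) >= GAP
            consistent = pstdev(s1) <= SPREAD and pstdev(s2) <= SPREAD
            if clear and consistent:
                print(f"Worth a closer look: '{question}' ({g1} vs {g2})")

    On this toy data only the listening question is flagged: the two groups disagree strongly and each is internally consistent, whereas the deadlines question shows no real gap at all.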