Making the grade

As the practice of forced ranking comes under the spotlight, Keith Rodgers finds that it needs to be used with other measurement tools to be truly effective

If you’re in the bottom 5 per cent of performers at Siebel Systems, the Silicon Valley-based computer software company, you’d do well to start refreshing your resumé. Every six months, using data aggregated from an ongoing performance appraisal process, the company culls its lowest-ranking employees. Taking its lead from a process evangelised by Jack Welch, the former head of GE, Siebel effectively forces its managers to face up to tough questions: which employees really add value to the organisation, and which are a drain?

This process of ‘forced ranking’, adopted by a number of US companies, has come under the spotlight over the last year as the economic downturn forced companies to pay closer attention to their bottom-line costs. Criticised in some quarters for taking a mathematical approach to a complex human issue, ranking is in many companies evolving into a highly sophisticated measurement activity, supported by a growing array of software tools and business processes. More importantly, however, it’s now being viewed not as a standalone activity that can make or break individual careers, but as one part of an extensive HR portfolio that incorporates techniques such as competency profiling and e-learning. Carried out as an isolated management activity, forced ranking is only as good as the metrics and management disciplines that underpin it; used in association with other business measurement and workforce improvement tools, however, it offers organisations the chance to really leverage their human capital assets.

At a basic level, the processes behind forced ranking are deceptively simple. Each employee is set objectives against a specific timeframe; at the end of the period, they’re judged on a scale of one to five (or ‘a’ to ‘e’) according to how effectively they hit their targets.
That ranking is used both in formal appraisal processes and to determine performance-related compensation. In organisations like GE, the data is also aggregated to provide a checklist of which employees are failing to make the grade. In theory, by culling the bottom performers, the company improves the average level of performance, raising the stakes for the rest of the workforce when the next review period comes round.

In practice, however, the process is far from simple. To begin with, judging employees collectively assumes a level playing field that rarely exists. Managers in different departments may set objectives that vary widely in terms of how difficult they are to achieve, and measurement is rarely standardised. If two employees are told to improve their sales presentation skills, for example, one may be judged merely on how they were ranked in a training session, another on whether they delivered a predetermined number of live presentations and how the clients responded. Those are two different goals and, more importantly, two very different sets of measurement: one a formalised training process, the other a live sales scenario.

The playing field is further distorted by market and geographic conditions. In customer-facing functions such as sales and marketing, the relative performance of individuals operating within the same division can be affected by numerous regional factors; expand that to a multinational scale and the differences are greater still. Those variables have to be taken into account by managers as they set objectives, bringing a degree of individual autonomy to a process that theoretically should be standardised.

Finally, the scientific framework that underlies forced ranking takes little account of the realities of people management, a point stressed by Mark Geary, managing director of Hong Kong-based AsiaNet Consultants and a former senior HR executive at companies such as ICI, Ladbroke and Inchcape.
He believes that the system can often be undermined because of the implications of poor ranking. “Most managers are loath to rank people lower than ‘c’ because they don’t want to demotivate them,” he says.

“Also, if the manager’s doing their job, they shouldn’t have to wait for an appraisal system to see someone’s a ‘c’. And if they end up rating someone as an ‘e’, what are they doing as a manager? That’s the weakness. So you end up tolerating under-performance. The whole area is a real can of worms.”

Proponents of forced ranking, however, argue that if the right infrastructure is put in place, many of these anomalies can be ironed out. Anthony Deighton, director of Siebel’s Employee Relationship Management (ERM) division in the US, argues that successful employee performance measurement rests on a combination of business processes and software tools, driven by well-understood business objectives cascaded from the top down. Siebel, which markets an ERM software suite built on the back of its own internal employee relationship management applications, has established a top-to-bottom ranking process internally that includes a series of management checks and balances. Company-wide consistency in the metrics deployed by managers is enforced through three processes: training and support from the HR department, executive review, and load-balancing analytics that spotlight variances (see below). The fact that an employee is ranked low doesn’t necessarily reflect on the manager, he argues; it may simply mean that an individual is in the wrong job.

More importantly, ranking also has to be seen in a wider context. Leaving aside negative attitudes, personality clashes and other ‘character’ issues, the most common explanations for poor performance are that individuals have either been badly trained or that their skillsets don’t match the requirements of their role.
By linking the appraisal procedure to learning, competency assessment and career development processes, organisations can tackle both the causes and effects of underachievement.

Learning Management Systems, for example, provide the IT infrastructure for self-paced training and internet-based virtual classrooms, and allow organisations to monitor which individuals have taken which courses. Used in conjunction with other management tools, they can provide the basis for more extensive performance analysis. Managers can link improvements in individual ranking, for example, to the training courses undertaken by those employees, establish patterns and use that data to determine whether to extend the training programmes to other members of their team.

There are caveats to this kind of cause-and-effect analysis, of course. While there may be a correlation between sales staff who’ve been on a particular training course and an increase in closed deals, the number of variables is high: on the one hand, the sales may have closed anyway; on the other, failure to close a deal may reflect more on the customer’s budgetary constraints than on the quality of the sales pitch. That said, early adopters of this kind of HR analytics in the US argue that the correlations thrown up have value simply because they raise questions: finding the answers may require trial and error, but in many cases those answers wouldn’t have been sought without the software application. As Deighton argues, real value comes when analytics are translated into action: if a specific training course appears to be achieving results, roll it out elsewhere and validate the proposition.
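By way of illustration only (the article names no specific tool, and none of the figures below come from it), the kind of training-versus-ranking comparison described above might be sketched as follows; the employee records, rankings and course flag are all invented:

```python
# Hypothetical sketch: compare average ranking improvement for staff who
# took a training course against those who did not. All data is invented.
from statistics import mean

# Each record: (previous ranking, new ranking, took_course), where the
# rankings use the article's one-to-five scale (higher is better).
records = [
    (2, 4, True), (3, 4, True), (3, 3, True),
    (3, 3, False), (2, 2, False), (4, 3, False),
]

def avg_improvement(rows):
    # Mean change in ranking across a group of records
    return mean(new - old for old, new, _ in rows)

trained = [r for r in records if r[2]]
untrained = [r for r in records if not r[2]]

print(f"Trained:   {avg_improvement(trained):+.2f}")
print(f"Untrained: {avg_improvement(untrained):+.2f}")
# A positive gap suggests a correlation worth investigating, not proof of
# cause and effect, for exactly the reasons given above.
```

As the caveats above make clear, a gap in these averages is only a prompt for questions, not an answer in itself.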
“You need an organisational culture which allows people the flexibility to make changes, where they can test different things. You can’t create a culture where people are so scared to act that they can’t do anything.”

The training data gleaned from Learning Management Systems also forms part of the information set needed to build competency profiles, which again link back to the appraisal process. Typically, organisations define at a broad level the skillsets or profiles required for particular generic roles; these are then customised by local managers for the specific requirements of the positions in their department. The skillsets of the employees who fill each post are then matched against the checklist of requirements, highlighting disparities in competency levels and providing guidelines for future training programmes, recruitment needs and career development.

Populating the initial profile database can be a daunting task: one US mobile telephone operator estimates that it would take two people six months to build the templates required for a 34,000-strong workforce. But the implementation timescales can be radically reduced if employees are encouraged to build their own skills profiles, monitored by their line-of-business manager; that typically requires an internet-based IT infrastructure giving controlled access to relevant parts of the central competency database. Organisations like Hewlett-Packard, the Silicon Valley-based IT systems and services company, have already rolled out this kind of competency profiling system to their most senior employees, covering some 10 per cent of the total workforce (see web feature).
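The matching step described above, comparing an employee’s skillset against a role’s checklist of requirements, can be illustrated with a minimal sketch; the role profile, skill names and proficiency levels here are invented for the purpose of the example:

```python
# Hypothetical sketch: match an employee's skill levels against a role's
# required competency profile and report the gaps. All names and levels
# below are invented, not drawn from any real profiling system.
role_profile = {"presentation": 4, "negotiation": 3, "product_knowledge": 4}
employee_skills = {"presentation": 2, "negotiation": 3, "crm_software": 5}

# A gap exists wherever the employee's level (0 if the skill is absent)
# falls below the level the role requires.
gaps = {
    skill: required - employee_skills.get(skill, 0)
    for skill, required in role_profile.items()
    if employee_skills.get(skill, 0) < required
}
print(gaps)
```

Aggregated across a department, the same comparison yields the gap-analysis and workforce-planning data discussed below.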
While profiling has clear value at an individual level, the aggregate data is also critical for gap analysis and workforce planning, providing senior management with an understanding of organisational weaknesses and an insight into the company’s capacity to expand its business or move into new markets. Again, if the competency management process is linked to a forced ranking system, the data will reflect not only skillsets, but also how effectively employees deploy those skills in their day-to-day roles. As each element of the HR function is integrated in this way, the combined value of the analytical output increases exponentially.

Ultimately, this integrated approach to employee management extends beyond the HR function and reaches right to the heart of business performance measurement. “The appraisal isn’t something that takes place on an annual basis – it should be continuous,” argues Geary. “Do it the simple way. You don’t need to do a full, big review which takes an hour or two per individual, but you should be doing a 15-minute review of the objectives that forms part of the quarterly business review. It’s people that deliver on the company goals. Business performance consists of financial and people performance, and the two need to go hand-in-hand.”

Case study
Siebel: forcing the issues

Siebel Systems, the US-based developer of customer and employee management software, has built its forced ranking system on the back of corporate objectives that cascade down from the top of the company. On the first day of each quarter, chairman and CEO Tom Siebel publishes his corporate objectives, generated from an off-site executive meeting. By day three, senior managers will have reviewed the objectives and created their own targets for their specific divisions. By day 15, all 8,000 employees of the company will have created their own sets of objectives in conjunction with their managers.
According to Anthony Deighton, director of Siebel Employee Relationship Management (ERM), these objectives are reviewed on a frequent basis through the quarter at both an individual and a team level.

At the end of the quarter, employees write a self-assessment and discuss how effectively they hit target with their line manager; their performance is measured against each objective, culminating in a one-to-five overall ranking. Managers have the ability to override the automated ranking calculation to take into account specific factors that may have influenced performance, such as extended sickness.

In addition to the formal ranking, the review also covers a range of other factors, including soft measures that are not objective-based.

Siebel employs three techniques to ensure the ranking process is carried out as consistently as possible across the company. The HR department supplies relevant documentation, web-based training and an employee helpdesk in an effort to standardise objectives and measurement techniques. Additionally, all objectives are reviewed by the next layer of management. Finally, the company’s ERM software generates a ratings and distribution report, which highlights bands and trends.

“If someone has given everybody five, you make them justify it,” says Deighton. “If the manager sees something is skewed, they can drill down, see details and reject a review.”

This ranking system forms the basis of Siebel’s six-monthly ‘cull’ of the bottom 5 per cent of employees.

“We do the analytics, get the names, and then go and interview them to find out if this is the right 5 per cent, or if there is a different set,” says Deighton. “This is not maths, it is people’s lives – that 5 per cent is a blurred boundary.”

Although the process may seem ruthless, Deighton argues that it is ultimately constructive. Few people who fail to make the grade are ‘bad’ employees: maybe one-quarter or half a per cent of an organisation, he believes.
Most of them, however, are simply in the wrong job for their skillsets, and it may be that there is no suitable alternative opening within the organisation.

“There has always got to be a bottom performer. You are forcing managers to think about their people – who is more of a drain than a plus? It is certainly seen as positive by the people who remain. If you do not do it, the star performers will get frustrated and leave.”

On 2 Apr 2002 in Personnel Today
