[Figure omitted: reproduced from TVRLS workbook on 360 Degree Feedback.]




By critical I mean taking a scientific look. Science means verifiable and predictable, valid and reliable. The use of 360 Degree Feedback and Assessment Centers has gone up in India in the last five years. It is said that more than 90% of Fortune 100 companies use 360 Degree Feedback (McLean, 2002), and similar figures are claimed for Fortune 500 and Fortune 1000 companies. In India the number of organizations using 360 DF has certainly increased enormously in the last three years, as has the number of service providers. CEOs and HR Managers, however, often do not differentiate among providers or understand the significance of 360 DF and the turmoil it can generate. Normally the service provider is evaluated in terms of the following:

·         Cost
·         Credibility
·         Proximity
·         KAS (Knowledge, Attitude and Skills)
·         Experience


Cost: The cost is quite often the deciding factor. As Dave Ulrich observes, in the absence of other factors the efficiency of HR Managers and HR Departments is judged on the basis of cost reduction. Hence HR Managers treat their ability to engage the lowest-charging facilitator or consultant, or to bargain costs down, as an indicator of effective performance. It is therefore understandable that anyone who can offer the services at a low cost is sometimes considered good enough to do 360. No special competencies are thought to be required; the most important competence becomes the ability to collect information, analyze the data and present it in attractive form using graphs and bar diagrams. Instruments are easily available, as is common practice in India where copyright is interpreted as the fundamental right to copy. Several of the tools are available on the web and are not difficult to download. The University Associates handbooks, Pareek (2002) and Pareek and Rao (1981) all provide a number of instruments that are copyright free. In this light, the main qualification of the 360 Degree facilitator is his ability to offer services at a low cost: providing the tool, administering it, collecting the anonymous data and presenting the individual feedback with profiles.
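As a minimal illustration of that data-handling step (pandas assumed; all column names and figures are hypothetical), the sketch below averages anonymous ratings per competency within each rater category, which is the aggregation that typically sits behind a 360 profile and its bar diagrams.

```python
import pandas as pd

# Hypothetical anonymous ratings for one candidate (5-point scale).
ratings = pd.DataFrame({
    "rater_category": ["self", "boss", "peer", "peer", "subordinate"],
    "communication":  [4, 3, 4, 2, 3],
    "delegation":     [5, 3, 3, 3, 2],
})

# Average per competency within each rater category, so that no
# individual assessor can be identified from the profile.
profile = ratings.groupby("rater_category").mean().round(1)
print(profile)  # one row per category: the basis for a bar diagram
```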

Credibility: The credibility of the service provider is used as another criterion. Understandably, any HR consultant who has credibility with the company and has handled HR assignments satisfactorily in the past is considered credible. Nothing can seem more scientific than one's own experience itself, and on this count firms do not seem to err much. However, the credibility factor in 360 has many dimensions. The following are its constituents:

1.      Track record: The past record of the consultant or facilitator in terms of the deliverables (quality, time, commitments, seriousness with which the assignment is carried out etc.)

2.      Confidentiality: The credibility of the facilitator in 360 is highly dependent on his ability to maintain the confidentiality of the data. He should not, under any circumstances, part with the original assessments to the candidate or to anyone else. They need to be destroyed after a specified period of time. Similarly, the feedback profiles should be kept strictly confidential and available only to the parties agreed upon from the beginning, with the candidate being a party to that understanding. In most cases the organisation desires that the feedback is given only to the individual and not to anyone else. In some cases the CEO would like to have the profiles, and in a few cases only action plans are sought.

3.      Ethics and values: The ability of the facilitator to point out the limitations of 360 and use it only for the purposes for which it is meant. The ability of the facilitator to guide the client in ways that benefit the client, without a bias towards the commercial benefits the facilitator himself may gain.

4.      Knowledge credibility: The facilitator's awareness of the limitations of 360 and of the research studies available on it. The ability of the facilitator to maintain a research mindset and keep in touch with the growing literature in the field.

5.      Research and follow-up: Though this is in the hands of the assessee, it is the responsibility of the facilitator to ensure that adequate follow-up mechanisms are built into the feedback process. These may include a re-administration for the candidate, discussion with the assessors where appropriate, etc.

Normally, credibility is judged on the basis of track record alone, and the other aspects may not be adequately attended to.

Proximity: This is a very important consideration. It is associated with costs as well as with the easy availability of the facilitator at various stages of the process. A proximate consultant is always preferable, as the user can call him up as and when needed. This is becoming less of an issue as telephones, cell phones, video conferences, etc. have become common. For example, the authors of this paper have addressed the issue by using video CDs to reach remote locations all over the world and by using tele-coaching sessions. Nevertheless, the proximate availability of 360 coaches or facilitators helps a great deal in making the process smooth and convenient.

KAS: This is an important factor and the most neglected at this point of time. 360 DF is a sensitive process. T-Group training, which is a similar process, is to be facilitated only by trained behavioral scientists, who go through a three-phased training from ISABS, National Training Laboratories or other similar bodies like ISISD. Similarly, 360 facilitators should be trained counselors or psychologists, or specially trained in the facilitation process of 360. The most important requirements are knowledge (of the purposes of 360, the process of 360, applications of 360, limitations of 360, common mistakes, feedback, leadership models, research in this area, etc.), attitudes (empathy, listening, trust, helpfulness, patience, flexibility, etc.), and skills (coaching, follow-up and research skills).

Experience: Experience in the field is another criterion for choosing the facilitator. An experienced facilitator brings with him many lessons from good practices elsewhere and transmits the same. This experience base may bring down the cost and enhance the ROI. An experienced coach and facilitator may prevent you from making mistakes: he has already discovered the wheel and need not rediscover it at your cost. He ensures that the entire process is smooth. He also knows how to deal with problems and issues; he anticipates them and gives you preventive medicine.

Lessons from the past

The following are some of the lessons we have learnt from the past on 360 degree Feedback in India.

Purposes:

  1. It is best suited as a development tool, and most firms are not yet ready to use it as a performance measure. It can be a good tool for performance management but not for performance measurement. Don't use it to measure performance, but definitely use it to develop performance.
  2. We are not yet ready to link it with incentives and rewards. 360 DF itself can be rewarding for an executive if the firm invests in his development. This investment includes not merely the provision of the 360 feedback profile but also follow-up support. The least a corporation can do is to provide coaching help, preferably from a coach external to the candidate's department or unit. Follow-up support may also include asking the candidate for a development plan and sponsoring further 360 administrations.
  3. Even if you desire to link it with the performance management system, enough preparation is needed, and it is better to wait until its credibility as an objective tool is built and the process is institutionalized.
  4. It is a good methodology for identifying potential candidates for select positions. When the author was appointed in BEML to head the HRD Department in 1978, a similar method (it was not known as 360 degree feedback at that time) was used to identify the potential future HRD Chief. The ONGC experience indicates the same.
Readiness

  1. Some organizations feel that the firm should be ready before taking up 360 degree feedback, and there are instruments like "Are you ready for 360 DF?" While it is important to assess your readiness, it is equally important to create readiness. It is true that some firms are more ready than others to take 360 feedback and institutionalize it; the same is true of individuals, some of whom are more ready than others. However, the experiences of organizations like the Aditya Birla Group indicate that a good HR Manager can make the firm ready by doing the necessary preparation.
  2. It is always useful to create readiness by introducing it in phases. The phases recommended are: Phase 1: voluntary and exploratory; Phase 2: purely developmental and individual-focused; Phase 3: institutionalized developmental; and Phase 4: open and organizational, with linkages to other HR systems. It should be externally administered at first; as the comfort level goes up, it can be administered in-house as one progresses to Phase 3.
  3. Don't go web-based until you institutionalize the process and provide a really good experience to your employees.
  4. Always prepare the assessors and assessees for the process.
Instrument

  1. There are many tools and models available; there are an estimated 100-plus vendors of 360 tools in the USA. These tools are based on the leadership models with which each company works, and it is useful to be aware of these models. However, a leadership-model-based 360 is useful only for those firms that have adopted the leadership competency model fully, institutionalized it and are committed to it. Otherwise the tool and the feedback on it become academic and of limited practical relevance. The choice of the tool is important: choose the tool that suits you best and that can give you multiple results.
    1. The tool should address the objectives of the 360; not just any tool will do. It is the heart of the 360.
    2. The tool should focus on the competencies needed by the candidate.
    3. The competencies needed by the company and the department, and the culture, norms and values the firm would like to promote, should also be covered.
    4. It should be easily understandable and should use the language commonly used in the company.
    5. For development purposes the tools can be changed every now and then and used as part of training interventions. For example, Udai Pareek's Role Efficacy Scale or other such tools can be converted into 360 tools, but these are for one-time administration and not regular use.
    6. Reliability and validity are always issues, and one cannot wait for them to be fully established. If you understand the concept and limitations of 360, they become lesser issues. Face validity and content validity are suggested at best. Test-retest reliability, with a gap of about one to two weeks preceding feedback, is a useful aim. Internal homogeneity can also be checked (item-total correlations etc.); a minimal sketch of these checks appears after this list.
  2. It is recommended to use tailor-made tools for each firm. Some organizations are developing tools for each role, with some common items and a few role-specific items.
  3. The tool can be lengthier in the first administration, which gives a comprehensive assessment; it can then be shortened. In any case it should not take too much time: anything over an hour brings down the efficiency and effectiveness of the tool, and 30 to 40 minutes is ideal.
  4. Open-ended items seeking qualitative feedback are the most useful parts.
  5. The best tool is the one that takes into account the expectations, views and opinions of the assessees and assessors.
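As a minimal sketch of the reliability checks suggested in item 6 above (pandas and numpy assumed; the ratings are fabricated purely for illustration), the code below computes corrected item-total correlations for internal homogeneity and a test-retest correlation across two administrations:

```python
import numpy as np
import pandas as pd

def item_total_correlations(ratings: pd.DataFrame) -> pd.Series:
    """Corrected item-total correlation: each item vs. the sum of the others."""
    return pd.Series({
        item: ratings[item].corr(ratings.drop(columns=item).sum(axis=1))
        for item in ratings.columns
    })

def test_retest(first: pd.DataFrame, second: pd.DataFrame) -> float:
    """Correlation of total scores across two administrations."""
    return first.sum(axis=1).corr(second.sum(axis=1))

# Fabricated 5-point ratings: 30 respondents x 10 items, two rounds
# administered one to two weeks apart (the gap suggested above).
rng = np.random.default_rng(42)
t1 = pd.DataFrame(rng.integers(1, 6, size=(30, 10)).astype(float),
                  columns=[f"item_{i}" for i in range(1, 11)])
t2 = (t1 + rng.normal(0, 0.5, size=t1.shape)).clip(1, 5)  # retest with noise

print(item_total_correlations(t1).round(2))
print(f"test-retest r = {test_retest(t1, t2):.2f}")
```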
Process

  1. The administration should be preceded by firm-wide education explaining the purpose and other details. Without educating the assessees and assessors, the quality of feedback may be questionable.
  2. The feedback should always be given after explaining the scope and limitations of feedback. Prepare the candidate in terms of creating readiness and receptivity to feedback.
  3. Always provide scope for the candidate to have a dialogue with the coach or facilitators to help him understand and interpret appropriately the feedback.
  4. Make it clear that 360 Degree Feedback can be as biased as any other feedback, and give enough scope for reflection on the feedback.
  5. Identify common issues and provide for the participants to debate and discuss them, as well as to come up with organizationally acceptable and manageable action plans.
  6. Appoint mentors wherever possible
  7. Provide benchmarks but treat them with caution
  8. Warn individuals not to attempt to identify the assessors on the basis of language etc., and quote the experiments and experiences that are available. TVRLS research indicates that only 18% of guesses made on the basis of language and individual ratings are correct; there is a more than 80% chance of mistaken identity. Hence the feedback should be used for reflection, with the onus on the candidate to satisfy himself whether it is true or false. The feedback giver and his views should be respected but not blindly taken for granted; it is always for the candidate to prove to himself that he possesses the strengths and does not have the weaknesses indicated.



Follow Up

  1. Most 360 exercises don't give the ROI because, as with PMS, the organisation thinks its job is over once the 360 feedback is provided. Very little is done to follow up or to provide the needed follow-up support. The follow-up support may take the following forms:
    1. Asking the candidate to share his development plans, insights or lessons drawn; only what he desires to share.
    2. Asking the consultant to give development suggestions in the form of training needs etc. and offering the same. These should be sought not by name but for the entire organisation, as frequencies and without names (see Chawla and Mishra, 2003).
    3. Conducting a follow-up workshop after three months, six months or a year
    4. Re-administration to see the profile changes (see the sketch after this list)
    5. Offering on-line coaching
    6. Making mentors and coaches available
    7. Encouraging the candidates to create their own tools and seek feedback on their own
    8. Integrating it into PMS through KPAs etc.
  2. HR should play an active role, and task forces may be used
  3. Documenting success experiences and sharing them to inspire each other
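A minimal sketch of the re-administration check (item 4 above): compare per-competency averages across two rounds to see how the profile has shifted. The data and names are hypothetical; pandas is assumed.

```python
import pandas as pd

# Hypothetical per-competency averages from two administrations.
round_1 = pd.Series({"communication": 3.1, "delegation": 2.4, "risk_taking": 3.8})
round_2 = pd.Series({"communication": 3.6, "delegation": 3.0, "risk_taking": 3.7})

# Positive values indicate improvement between rounds.
change = (round_2 - round_1).round(1)
print(change.sort_values(ascending=False))
```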


Building Internal Competencies

  1. 360 is a great tool, but its potential and limitations need to be understood. There is no substitute for developing the internal competencies of line managers and HR facilitators for administering and using it. Those who engage in this will not only contribute to developing leaders but may also improve their own leadership competencies. They are facilitating the building of intellectual capital.
  2. Appendix 1 presents summaries of two articles as they appeared on the internet recently.



Assessment and Development Centers

Assessment centers were first recommended for introduction in India in 1975, when the first HRD Department was established in L&T. It was proposed that L&T should be able to introduce assessment centers within about three years of starting work on them. By 1980 L&T had identified critical attributes (now called competencies) and established its own competency definitions. However, it took another twenty years for L&T to take this concept forward. Why did an organization like L&T take twenty years to introduce Assessment and Development Centers? Why is it that, in spite of the experience available across the world, many Indian organizations have not gone ahead and introduced assessment centers? While 360 Degree Feedback is used in some form or other, why are not many Fortune 500 organizations known to use assessment centers? Why is it that, in spite of the repeated cautions of psychometricians, several organizations continue to use the MBTI and other such psychometric tests for recruitment? These are some of the issues that need to be thought through. Answers to these questions may provide insights to new users of assessment centers.

Issues of Validity

The most important form of validity for an assessment center is predictive validity. Predictive validity is the extent to which those identified through assessment centers are later found suitable or successful in the jobs for which they were assessed. There are many scientific issues here, and the answers may not be simple. The most important is: can anyone predict future success mainly on the basis of competency assessment data? First, not all those found competent, or who pass the test, get promoted to higher-level jobs: the positions available are always limited compared with the number of candidates aspiring or being put through assessment centers. So there is always a restriction. Secondly, the success or failure of a candidate on a future job depends on many things besides the competencies of that individual. Even assuming the best fit between the individual and the competency requirements of the job, there is no way of ensuring that all the situational factors can be predicted beforehand. Most importantly, in Indian firms success or failure also depends on the kind of boss you will have; the chemistry should click. While the likelihood of the chemistry clicking is assessed through role plays and other such techniques, these always make certain assumptions about the candidate's boss which may or may not be valid. Hence the difficulty in validation studies. Low observed validity need not mean that the predictive ability of the assessment center is weak.
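The promotion problem noted above also biases validation statistics: if only top scorers are promoted, the validity coefficient can only be computed on a range-restricted sample and is attenuated. The simulation below (fabricated data; numpy assumed) is a minimal sketch of this effect, not a claim about any real assessment center.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
ac_score = rng.normal(0, 1, n)                         # assessment center rating
performance = 0.6 * ac_score + rng.normal(0, 0.8, n)   # later job performance

# Validity in the full candidate pool (true r is about 0.6 here).
full_r = np.corrcoef(ac_score, performance)[0, 1]

# But only the top 20% get promoted, so only they can be followed up.
promoted = ac_score > np.quantile(ac_score, 0.8)
restricted_r = np.corrcoef(ac_score[promoted], performance[promoted])[0, 1]

print(f"validity in full pool:   {full_r:.2f}")
print(f"validity among promoted: {restricted_r:.2f}")  # noticeably lower
```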

Competency mapping is a serious exercise. Spencer (1993) estimates that mapping one competency using BEI and similar techniques may take as much as two months. Indian firms have neither the resources nor the patience for such thorough analysis and mapping. In addition, if the tests used in Assessment and Development Centers are to be validated, it will take a year or more, and even that is not enough time for those identified as potential candidates to prove their worth. In the absence of this, one may resort to standard validation techniques such as validating against performance appraisal reports and superior ratings; but the inadequacy of these is the very reason for going to assessment centers in the first place, so they cannot be relied upon as criteria. Under these circumstances, experts have no alternative but to make best estimates and introduce tests and assessment systems that have not been standardized and that are based on informed hunches. Face validity and content validity are the main forms of validity used in assessment centers. This basic limitation in the science of assessment centers needs to be noted and appreciated.

Our work with the Assessment and Development centers has indicated the following to be kept in mind:

Purposes:

  1. Given the difficulties in ascertaining the validity and reliability of assessment centers, it is useful to limit their application to competency assessment and development rather than making them the primary basis for promotion decisions. They may at best be used as tools for assessment and development, not as decision-making tools. The traditional methods of assessment, especially past performance, should not be totally discarded. BEI and its variations are a good way to combine the advantages of assessment data with past records. In other words, more research and validation studies are required in India before assessment centers are used for anything other than development purposes.
  2. We strongly recommend the use of ADCs for development purposes and as additional data providers for promotion and placement purposes.
Competency Mapping:

  1. The establishment of a good assessment center should be based on the identification of the competencies of the position or set of positions, and such competencies should be identified using scientific methods. Cost-effectiveness is always an issue, as a scientific assessment of competencies is a time-consuming process. However, the least anyone can do is to use the expert panel method or similar methods described by Spencer (1993), or the task analysis approach described by Pareek (1990).
  2. A scientific study of competency requirements is cost-effective when there are multiple positions or persons doing the same job. Where only a small number of persons are involved, the least that should be done is to have a specialist group or the role-set group come up with the competency requirements (see Rao, 1975; Pareek and Rao, 1981 and 2003).
Developing Multiple Methods of Assessment



  1. Assessment centers use multiple methods: in-baskets, presentations, leaderless group discussions, role plays, simulation exercises, behavior event interviews, etc. The development of these methods should be based on a thorough understanding of the competencies resulting from the competency mapping exercises. The tests themselves should be constructed by specialists. However, test development is a time-consuming exercise, and wherever possible standard tests may be used. Such standard tests may not be culture-free, so it is important to ensure that the tests used are appropriate and suited to the organisational and geography-specific culture.
  2. Psychometric tests should be used with caution. Unlike in-baskets, role plays and simulation exercises, psychometric tests are standardized, and most standardized tests are available in the market; however, in most cases they are available only to specialists. Users should ascertain the appropriateness of these psychometric tests. Any psychometric test will be of interest to a respondent: respondents enjoy psychometric tests and feedback on them. But respondents or candidates are not necessarily the best parties to judge the appropriateness of the instruments; their relevance should be indicated by the assessment center specialist and their linkage to the competencies established. As Spencer's iceberg model indicates, it is the basic attitudes, motives and traits that lie beneath the surface, and psychometric tools may have the potential to bring them out. At the same time, psychometric tests are fakable in the sense that they are subject to guessing; hence the need to ensure that they are not decisive inputs. This and other issues in assessment centers were discussed on the internet recently by Promotional Assessment Skills Service (2003) and are reproduced in Appendix 2.
  3. In sum: where possible, develop and standardize new tools that suit the requirements of the role and the competencies mapped; otherwise use standardized tests. In any case both approaches have limitations, and this should be recognized.
Assessors:

  1. We have experimented with five-day training programs for assessors. Experience indicates that this is perhaps the most time any organisation can spare its senior employees for training as assessors; it also indicates that it is far from sufficient. There are always dilemmas in training assessors. The assessment center is meant to test those at a level below the assessors, and yet in training the assessors are themselves tested on those very competencies, using the very exercises meant for their juniors two levels below them who are being considered for jobs one level below them. This sounds inappropriate, but there are not many alternatives. The best training assessors can get is to participate in an assessment center under the supervision of expert assessors or facilitators.
  2. Another factor most organisations have neglected is the continuing education and training of the assessors. Some organisations are so impatient that they cannot wait for the training to be complete before putting the assessors into action.
  3. In addition, there are sensitivities associated with dropping any assessor as not competent to be an assessor. One may be an excellent and successful manager and yet not have assessment skills. In such cases it is useful to combine A-grade assessors with B-grade ones and train them up. However, it is better to avoid assessors who are not competent. At this point of time there is such a dearth of competent assessors that organisations have very little choice.
 

Combining with Other Methods, Especially 360 Degree Feedback

 

  1. Our own research studies have indicated that there are some factors on which 360 can give feedback of a quality similar to that of the ADCs. There are other competencies which can be tested and assessed only in assessment centers. For example, while communication skills can be assessed through 360 DF, risk taking, decision making and delegation may not be accurately assessed through it. This is because the individual's current chemistry may not have allowed him to delegate (for example, if his own boss is not a delegating type). In such cases ADCs are good tools. However, the experience of those who have been working with the individual for several months or even years cannot be totally ignored. Hence the use of 360 degree feedback and performance management data as supplements to ADCs is strongly recommended.
  2. Data from other sources, such as employee commitment studies, satisfaction surveys and OCS studies, are additional inputs. Such multiple methods help corroborate the conclusions on the competencies.
Development Plans and Other follow up

 

  1. The processes of giving feedback and assisting the candidate in preparing development plans are the most neglected parts of assessment centers in India today. After investing so much in a candidate, firms make serious mistakes by skimping on small investments of time, failing to ask the right questions, and failing to ensure good follow-up on developmental needs.
  2. On the basis of the competency gaps observed in the ADCs the firm should prepare a development action plan and offer training and other OD interventions.
When some of these principles are followed and we learn from experience, the quality of ADCs will improve and their ROI is likely to be higher.


Appendix 1

Two Views of 360-Degree Feedback




The 360-degree feedback evaluation program uses worker surveys to create a snapshot of company performance. But should managers be permitted to review their subordinates' ratings? And just how effective is 360-degree feedback, when all is said and done?

Below we present two Leadership in Action articles about 360-degree feedback. The first maintains that employee ratings must remain confidential, while the second argues that reports of 360-degree feedback leading to decreased organizational performance indicate only that employees were inadequately prepared for the program.

"Should Managers Be Able to Review the Ratings Their Subordinates Receive from 360-Degree Feedback Instruments?" Leadership in Action, 23 (2): 13 (May/June 2003): Cynthia McCauley (Center for Creative Leadership; Colorado Springs, CO)

If the primary purpose of a 360-degree feedback process is leader development, then managers should not expect to review the ratings their subordinates receive. Confidential feedback is a necessary ingredient for development. To take a hard look at themselves and commit to self-improvement, people need a safe environment.

For co-workers to give honest feedback, they need to know that it will go only to the recipient, and not worry about who else might see the data and what implications that could have for the recipient. A 360-degree assessment instrument is an ideal tool for providing developing leaders with this feedback.

This does not mean that managers should not seek assessments from multiple sources as input for evaluations of their subordinates. Managers regularly undertake these evaluations as part of the process of making administrative decisions about the people who report to them—that is, decisions about which jobs to give them, how much to reward them, and what kind of support they need.

Managers would be foolish to rely only on their own assessments in making these administrative decisions. They do not have the opportunity to observe many of the skills and behaviors they need to assess, and they cannot know what it is like to be these people's peers, customers, or subordinates.

Any organization that is moving toward more distributed or empowered decision-making processes is going astray if it continues to use a strongly hierarchical process for making decisions about pay, promotions, and job assignments.

So instead of relying on the data from a developmental 360-degree feedback process to get assessments from others, managers should seek input from multiple sources as part of the organization's formal performance appraisal process. If this is done, participants in each process—developmental and performance appraisal—will be clear about the purpose each set of data will serve. . . .

Data collected for development—ratings on a broad range of skills and behaviors from a wide variety of co-workers—are not the best kind for performance appraisals and administrative decision making. Data used for these decisions need to focus on the dimensions of performance most relevant to the individual's job and need to come from co-workers who have the most and best opportunities to observe such performance.

For example, as a manager you might want data on a subordinate's collaborative behavior to come from peers in other departments, and data on how well the subordinate keeps his or her staff informed about organizational issues to come. . . .

Qualitative examples—rarely sought by 360-degree assessment instruments—are particularly useful for performance appraisals. Also, if data collection instruments are used for performance appraisal, they need to be short so that a number of employees can be assessed at the same time. (Assessment instruments designed for developmental purposes are often too long for practical use in performance appraisals.)

The greatest danger of gathering data from multiple sources for use in performance appraisals is that coworkers, knowing that their feedback will not be strictly confidential, will not provide honest input, and decisions could then be made based on inaccurate data.

Managers can reduce this possibility by demonstrating that they use the data fairly and responsibly, by setting clear performance expectations and standards, and by helping employees to understand that providing co-workers with straightforward feedback is a necessary ingredient for improving their collective performance.
--------------------------------------------------------------------------------------------------------

"News Flash: 360-Degree Feedback Is Alive and Well," Leadership in Action, 23 (2): 22-23 (May/June 2003) Craig Chappelow (Center for Creative Leadership; Greensboro, NC)

I was sitting in the waiting area at my mechanic's shop drinking charred coffee while he finished the oil change. Spread out on the institutional table in front of me was the typical reading fare: a six-month-old copy of Sports Illustrated, one of the local shoppers, and a newsletter that included an eye-catching headline, "Will Your New Tires Kill You?" I had recently replaced all eight tires on our family cars, so I read the article eagerly.

As it turned out the newsletter article suggested that if you buy new tires and they are not properly installed, it could cause a potentially fatal accident. For example, if the technician mounts the tires correctly but forgets to tighten all the lug nuts on one of the wheels, the wheel can come off, resulting in a crash. The article extolled the virtues of using only service centers that employ certified mechanics, such as the ones employed at the garage where I was waiting for my oil change to be done.

A more accurate headline for the article would have been, "If Improperly Mounted, Tires Can Be Hazardous." But if it hadn't been for the sensational title, I probably wouldn't have read the piece.

Down the Garden Path

I often fall for this shock tactic used by publications, and it happened again when I saw an article in the June 2002 HR Magazine with the headline, "Does 360-Degree Feedback Negatively Affect Company Performance?" The headline captured my attention, particularly because my employer is in the 360-degree-feedback business, and I read the article with great interest.

Like the headline on the article about tires, the headline on the HR Magazine piece is misleading. Even though it is phrased as a question, it's a rhetorical one, and the implied answer is clear. A better and more accurate title would have been, "360-Degree Feedback, When Poorly Implemented, Can Be Hazardous."

The article makes two main points. The first is an attempt to correlate the use of 360-degree-feedback programs with negative organizational performance. The second is a list of potential pitfalls that organizations should avoid when using 360-degree feedback.

The coauthors, Bruce Pfau and Ira Kay, are national practice leaders at Watson Wyatt Worldwide, an international consulting firm that focuses primarily on the human resource areas of employee benefits and compensation but also deals with specialized areas such as HR-related technology. Pfau and Kay are also the coauthors of The Human Capital Edge: 21 People Management Practices Your Company Must Implement (or Avoid) to Maximize Shareholder Value (McGraw-Hill, 2002).

In that book, they maintain that specific human capital management practices can add to—or subtract from—an organization's bottom line to the tune of millions of dollars. In the HR Magazine article, Pfau and Kay zero in on one of these practices—360-degree feedback—and the picture they paint isn't pretty.

Their conclusions are based on Watson Wyatt's 2001 Human Capital Index, which was compiled after the consulting firm explored and correlated the HR practices and financial performances of 750 companies in North America and Europe in 1999 and again in 2001. Pfau and Kay then compared the scores and financial data of fifty-one companies that participated in both studies, in an effort to determine which HR practices drive positive financial results and which don't.


One of their findings was that 360-degree feedback processes are associated with a decrease in financial performance. According to the 2001 Human Capital Index report, companies in which employees evaluate their managers have a market value that is 5.7% lower than the market value of companies that don't use employee review of managers, and companies in which peers evaluate one another have a market value that is 4.9% lower than that of companies that don't use peer review.

Suspect Conclusion

The problem with this, as I see it, is that shareholder value is an iffy way to measure leader impact in a window of less than three years. I have always understood shareholder value to be a long-haul proposition (and for the sake of my currently anemic 401(k) plan, I certainly hope it is).

Jerry Donini, an HR consultant and contributor to the Washington Business Journal, writes in an article in the November 15, 2002, issue of that publication that "a more direct and reliable measure of the influence of HR practices is one that compares them to a combination of outcomes like top-line revenues, earnings growth, return on assets, net margin and return on equity."

Consider Owens Corning, a company that I consider to be an example of best practices in using 360-degree feedback tools responsibly and strategically. In September 1998, I sat in the back of a seminar room at the company's headquarters in Toledo, Ohio. Glen Hiner, then the chairman and CEO of the company, was addressing a group of forty high-potential leaders.

He was there to communicate the importance of the activity they were about to pursue: getting feedback through Benchmarks®, a comprehensive, 360-degree leadership assessment tool from CCL, and using that feedback to improve their effectiveness in the organization. On that day Owens Corning stock was trading at $36 a share. Exactly two years later it was selling for $3.94 a share. As I write this, Owens Corning stock is listed at 17 cents a share.

In this example, there was no causal relationship between the use of 360-degree-feedback instruments and the company's dramatic decline in shareholder value. The reason the stock price dropped so precipitously is that on October 5, 2000, Owens Corning filed for Chapter 11 bankruptcy protection as a result of a multibillion-dollar liability from asbestos litigation claims.

This is just one example, but consider the dot-com boom—and subsequent bust—and decide for yourself the risk of using shareholder value as the sole measure of an organization's viability.

The second part of the HR Magazine article is more helpful. The authors point to the following issues related to the use of 360-degree-feedback programs in organizations.

  • Multiple views may cause confusion for the participants when there is disagreement between different rater groups.
  • Unless everyone participating in the process is trained for that role, the process may lead to uncertainty and confusion.
  • There may be a gap between an organization's business objectives and what a 360-degree-feedback instrument measures.
  • Time and cost may be stumbling blocks.
  • Participants and managers may fail to follow up after feedback is received.
The authors are right to point out these potential pitfalls, and I share their concerns. However, even though their points are accurate, none is new. In CCL's Handbook of Leadership Development (Jossey-Bass, 1998), I address each of these challenges in the chapter on using 360-degree-feedback instruments, and I make specific recommendations on how to avoid these problems.

So do other books by experts in research and practice, including 360-Degree Feedback: The Powerful New Model for Employee Assessment and Performance (AMACOM, 1996), by Mark R. Edwards and Ann J. Ewen, and Maximizing the Value of 360-Degree Feedback: A Process for Successful Individual and Organizational Development (Jossey-Bass, 1998), by Walter W. Tornow, Manuel London, and CCL associates.

The thing that all of us—including Pfau and Kay—seem to agree on is that a great deal of a 360-degree-feedback program's success depends on responsible planning, use, and follow-up. What many organizations neglect is the amount of work that has to be done long before the surveys are distributed.

External Factors

Many organizations figured this out a long time ago. In addition to Owens Corning, I would point out Microsoft and Pfizer as companies that do a solid job of planning, implementing, and following up on their 360-degree feedback programs to avoid the pitfalls mentioned in the HR Magazine article.

By the way, if we look at Microsoft and Pfizer strictly through a shareholder-value lens, Microsoft shares are worth less than half what they were three years ago, and Pfizer's stock is down about 40% over the same period. My guess is that those declines have more to do with the overall downturn in the economy, saturation in the computer industry, and drugs coming off patent than they do with the use of any specific HR practice such as 360-degree feedback.

I suggest that organizations should continue with the responsible use of 360-degree-feedback instruments. Just make sure to tighten the lug nuts before starting out.


This excerpt is presented with the kind permission of the publisher. Copyright © 2003 John Wiley & Sons, Inc. (reproduced from Internet)




Appendix 2


Some Common Questions About Assessment Centers (Source: Promotional Assessment Skills Service, 2034 Lambert Street, Atco, NJ 08004-2111)

What is an assessment center? An assessment center is commonly defined as a method and not a place. Simply stated, an assessment center is a systematic process that evaluates an individual's capabilities of performing certain critical knowledge, skills or abilities that are deemed critical to the successful performance of job tasks that have also been identified as critical to the job in question.

How is a modified assessment center different from an assessment center? Professional standards and supporting validation research only support a process defined as an assessment center. A modified assessment center, or another variation such as a "mini-assessment center," an "assessment center type process," or "assessment labs" (terms that in some way indicate a variation of an assessment center), has not been professionally defined. Therefore, the exact nature of these other processes is specific to the situation and to the individual who has developed them.

While there is nothing wrong with using an assessment procedure that is not an assessment center, care should be taken to ensure that it is not erroneously identified as an assessment center.

The Guidelines specifically state that any procedure that does not adhere to those Guidelines should not be identified as an assessment center, nor present itself as one by using the term "assessment center" as part of its descriptive title.

However, the Guidelines do recognize that there is a difference between an assessment center and the assessment center methodology, and that many times it is appropriate to use the assessment center methodology while not using the assessment center.

Therefore, the fact that some variation of the assessment center methodology is being used does not mean that it is not a proper assessment procedure. It simply means that it is not an assessment center.

How can I best prepare to take an assessment center? Many persons try to prepare for an assessment center by learning the "tricks." The general consensus of test developers seems to be that specific preparation for an assessment center has little impact on assessment center performance, and could even result in a lower score because the individual tries to perform in a manner different from the way they believe is correct or would otherwise perform on the job. However, skilled professional preparation courses with extensive knowledge of the assessment methodology contradict the test developers' consensus. Learning to take assessment tests is an art based on teaching the skills necessary to do the job being tested. If an individual can say and do the critical and most important elements of the job, then the assessment results should, and do, reflect these abilities. Research confirms this. Those candidates who practice the learned skills on the job, volunteer for special assignments related to the job, watch others use the skills, are able to pick out inappropriate behaviors, train others in the skills, practice skills outside of the job (people skills, e.g., sales, EMS, part-time human relations jobs, etc.), and cross-train in the position being tested are usually successful in the assessment center.

The best way to prepare for an assessment center is to change one's way of thinking. To prepare for promotion or selection, look into the critical and important elements of the job or position. This is best done by determining what types of capabilities are necessary for successful performance on the job, and then developing yourself in those capabilities.

For example, virtually all positions at or above that of a first-line supervisor (e.g., sergeant, lieutenant, captain, etc.) require oral communication skills. This particular attribute is measured in virtually every assessment center. Therefore, the individual must determine what degree of proficiency in oral communication the job requires.

Having determined that, it is then necessary to assess one's own oral communication abilities and pursue a developmental course to achieve a sufficient level of proficiency in oral communication. (Joining a Toastmaster's organization is probably the single best method that one can undertake to make drastic improvements in oral communications skills, especially public speaking, in a short period of time.)

Obviously, the amount of time necessary to improve one's skills in oral communication is highly individual. If a person already has extensive experience in public speaking, participating in staff meetings and interviewing, then the amount of time involved may not be as extensive as for a person with limited experience. On the other hand, extensive experience does not necessarily equate with good oral communication skills.

Negative habits associated with oral communication may actually detract from one's ability in this area, and may therefore require more time to offset or unlearn than would be needed by a person who has little or no experience.

A similar process must be undertaken for all the other skills that are a part of the job.

Shouldn't the assessors always be one level above the position for which the test is being conducted? Not necessarily! While it might be helpful to have an assessor team that is at least one level above the target position, other considerations are much more important. It is more important to choose assessors based upon their capabilities as assessors than to select them merely because of their position in the organization. In addition, carried to the extreme, this hypothesis eventually runs its course.

For example, if we are selecting a chief, we usually use other chiefs as assessors. In this case, peers are being used to select the highest position in the organization.

If peers can be used to select the highest ranking person in the organization, there is little reason to challenge the notion that peers can be used at lower levels within the organization. Thus, a lieutenant is equally capable of selecting a lieutenant as a chief is in selecting other chiefs.

Isn't law enforcement or fire officer a unique job that can only truly be measured by those with experience in that field? The answer to this question is closely related to the answer to the previous question and answers here may also apply to the previous question. Again, the answer is no!

First, it must be remembered that assessment centers measure generic supervisory and management skills, not knowledge specific to an occupation. Simply stated, oral communication skills for a police sergeant are no different from oral communication skills for a fire officer or a foreman in a factory or a first line supervisor of social workers. Techniques of effective delegation are the same for anyone who delegates. Employee counseling techniques are the same for anyone who must counsel employees concerning their work performance.

Furthermore, in many assessment centers, constraints are placed upon the selection of assessors that are often artificial and serve no valid purpose. Requirements that assessors must have the same occupational experience or be one level above the position being tested are two prime examples. Research has not established that either condition has any bearing on assessment center validity or the reliability of assessor ratings. Indeed, some research has shown that psychologists are better assessors than non-psychologists. This research would therefore argue against a police officer serving as an assessor when a psychologist with no police background might be available as an assessor.

Only when the technical aspects of the job are assessed, such as a fire scene scenario given verbally or specific laws being violated, should the team include at least one technically skilled assessor to answer questions other assessors may have about the job.

How is experience measured in an assessment center? Experience can be measured in one of three ways, and it is always measured to some degree.

First, there can be an indirect measure of experience. Indeed, all testing, to some degree, measures indirectly one's experience and education. For example, a doctor who has attended medical school has his education measured at the time that he takes a test for licensing by his state medical association. Indirectly, a test of knowledge measures what a person has learned, although it does not directly measure the amount of formal education that they may have achieved.

Similarly, the assessment center indirectly measures what one has learned through experience and education. How one became effective at delegation, or effective in written communication, is not really relevant; the relevant measure is one's level of competency as it relates to the job being tested for. Consequently, if one candidate has formal education that has contributed to strong written communication skills, he will be assessed accordingly. Likewise, if another candidate has no formal education but is equally competent in written communication because of other experiences, he may be rated equally on his written communication ability.

There is a direct way of assessing experience and training in the assessment center through an interview process known as the "background interview." The background interview is a form of structured interviewing in which the participant completes a rather lengthy questionnaire prior to the assessment center. The assessor prepares questions specific to the individual's background relating to the dimensions being measured, and then conducts the interview. This interview becomes a direct measure of one's experience and training and is considered in making final assessment center scores.

Another way of measuring one's experience and training is as a part of the assessment procedure. An interview separate from the assessment center, similar to the background interview, or an evaluation of training and experience, are two common ways in which one's experience and training can be directly assessed as part of the assessment procedure, even though they may not necessarily be a part of the assessment center.

As can be seen then, an assessment center that does not directly measure one's training or experience is more a matter of design rather than a limitation on the assessment center process.

Doesn't a candidate who goes through an exercise later than other candidates have an advantage in that he or she may find out about the exercise content? This is a problem common to all examination procedures, and is certainly not unique to assessment centers. It has been an issue for all types of procedures in which candidates do not all take the examination simultaneously. Interviews, which have been used for many years, have often had this potential problem. Of course, the problem only exists if one candidate conspires with another to cheat on the examination.

That is, a candidate who has gone through the process earlier must disclose to another candidate who has yet to go through the process what the content of the assessment process is. This, of course, is often a direct violation of civil service rules, and in some cases may even be criminal misconduct.

Aside from this, however, such disclosure has minimal impact in an assessment center. While one can compromise an interview by disclosing specific questions, it is much more difficult to disclose information about an assessment center exercise. Furthermore, how one candidate sees a particular exercise may be different from how another candidate sees it. In a group discussion, how one group deals with and interacts with each other may be totally different from how another group sees the problem and interacts. Thus, assessment center exercises are situation-specific and, because of that, are difficult to compromise.

Furthermore, assessment center exercises often contain a great deal of information. It is difficult for an individual who is concentrating on analyzing the available information and coming to a decision or course of action to retain all of the innuendo and finer points that may exist in a particular problem.

The inherent danger to a candidate going into an exercise with pre-knowledge is blurting out information the role player has not released. Under the pressure of the moment and a high motivation to do well, the candidate exposes information he wasn't given. Both the assessor and the role player generally pick that up; so will a videotape. More commonly, the candidate has a preconceived way of resolving a situation even when all the facts have not been given by the role player, and his score therefore reflects his poor judgment and analysis.

However, it must be remembered that assessment center exercises assess several dimensions. Merely getting the "right answer" is not sufficient to perform well. If there is a written analytical problem, then the candidate's total score within the exercise is based not only on ascertaining the exact nature of the problem and its solution, but on explaining and perhaps defending it.

Even assuming that a candidate has the ability to retain much of this information and explain his own approach, this still does not provide another candidate an advantage. Assuming that the person passing on the information is very capable and has performed well on that exercise, some benefit may accrue. However, if the person receiving the information is not equally capable of analyzing and presenting the information on their own, they are not likely to perform as well. And if they are capable of performing as well, they probably don't need the information in the first place.

In summary, the nature of assessment center exercises makes it extremely difficult to compromise them and it is unlikely that any one candidate can gain a significant advantage in an assessment center simply by taking the test at a later time and perhaps gleaning some tidbits of information about the exercise content. Where this is a possibility, it is possible to provide sufficient information about the content of the exercises to all candidates prior to the assessment center so that they all operate on an equal basis.

 

References

 

Chappelow, Craig. "News Flash: 360-Degree Feedback Is Alive and Well," Leadership in Action, 23 (2): 22-23, May/June 2003. Center for Creative Leadership, Greensboro, NC.


McCauley, Cynthia. "Should Managers Be Able to Review the Ratings Their Subordinates Receive from 360-Degree Feedback Instruments?" Leadership in Action, 23 (2): 13, May/June 2003. Center for Creative Leadership, Colorado Springs, CO.

McLean, Gary. Multi-rater Feedback. Presentation made at the First Asian Conference on HRD, IIM Bangalore, October 2002.

Mishra, Shishir and Chawla, Nandini. Deriving Training Needs from 360 Degree Feedback. Ahmedabad: TVRLS, 2003.

Pareek, Udai and Rao, T. V. Designing and Managing Human Resource Systems. New Delhi: Oxford & IBH, 1981; third edition 2003.

Pareek, Udai and Rao, T. V. Pioneering Human Resource System in L&T, Ahmedabad: Academy of Human Resources Development. Consulting report of 1975 and 1976 published in 1997.

Pareek, Udai. Handbook of HRD Instruments, New Delhi: Tata McGraw-Hill, 2002.

Pareek, Udai. Task Analysis for Human Resources Development. In University Associates Annual Handbook for Group Facilitators, 1990.

Promotional Assessment Skills Service, 2034 Lambert Street, Atco, NJ 08004-2111. http://www.pass-prep.com/overview/faq

Spencer, L. M. and Spencer, S. M. Competence at Work. New York: John Wiley & Sons, 1993.