Appendix: reproduced from TVRLS workbook on 360 Degree Feedback. 




 By critical I mean taking a scientific look. Science means verifiable and predictable, valid and reliable. The use of 360 Degree Feedback and Assessment Centers has gone up in India in the last five years. It is said that more than 90% of Fortune 100 companies use 360 Degree Feedback (McLean, 2002); the figure is sometimes extended from the Fortune 500 to the Fortune 1000. Certainly in India the number of organizations using 360 DF has increased enormously in the last three years, and so has the number of service providers. CEOs and HR Managers often do not differentiate among providers or understand the significance of 360 DF and the turmoil it can generate. Normally the service provider is evaluated in terms of the following:

·         Cost
·         Credibility
·         Proximity
·         KAS (Knowledge, Attitude and Skills)
·         Experience


Cost: The cost is quite often the deciding factor. As Dave Ulrich observes, in the absence of other factors the efficiency of HR Managers and HR Departments is judged on the basis of cost reduction. Hence HR Managers use their ability to find the lowest-charging facilitator or consultant, or to bargain and bring down costs, as an indicator of effective performance. It is therefore understandable that anyone who can offer the services at a low cost is sometimes considered good enough to do 360. No competencies seem to be required beyond the ability to collect information, analyze the data and present it in attractive form using graphs and bar diagrams. Instruments are easily available, as is common in India, where copyright is interpreted as the fundamental right to copy. Several of the tools are available on the web and are not difficult to download. University Associates publications, Pareek (2002) and Pareek and Rao (1981) all give a number of instruments that are copyright free. In this light, the main qualification of the 360 Degree facilitator becomes the ability to offer services at a low cost: providing the tool, administering it, collecting the anonymous data and presenting the individual feedback with profiles.

Credibility: The credibility of the service provider is another criterion. Understandably, any HR consultant who has credibility with the company and has handled past HR assignments satisfactorily is considered credible. Nothing seems more scientific than one's own experience. On this count firms don't seem to err much. However, the credibility factor in 360 has many dimensions. The following are its constituents:

1.      Track record: The past record of the consultant or facilitator in terms of the deliverables (quality, time, commitments, seriousness with which the assignment is carried out etc.)

2.      Confidentiality: The credibility of the facilitator in 360 depends heavily on his ability to maintain the confidentiality of the data. He should not under any circumstances part with the original assessments to the candidate or to anyone else; they should be destroyed after a specified period of time. Similarly, the feedback profiles should be kept strictly confidential and made available only to the parties agreed upon from the beginning, with the candidate being a party to that understanding. In most cases the organisation desires that the feedback is given only to the individual and not to anyone else. In some cases the CEO would like to have the profiles, and in a few cases only action plans are sought.

3.      Ethics and values: The ability of the facilitator to point out the limitations of 360 and to use it for the purposes for which it is meant; the ability to guide the client in ways that benefit the client, without a bias towards the commercial benefits the facilitator may derive.

4.      Knowledge credibility: The facilitator's awareness of the limitations of 360 and of the research studies available on it; a research mindset and the habit of keeping in touch with the growing literature in the field.

5.      Research and follow up: Though this is in the hands of the assessee, it is the responsibility of the facilitator to ensure that adequate follow-up mechanisms are built into the feedback process. These may include a re-administration initiated by the candidate, discussion with the assessors where appropriate, etc.

Normally, credibility is judged on the basis of the track record, and the other aspects may not be adequately attended to.

Proximity:  This is a very important consideration. It is associated with cost as well as with the easy availability of the facilitator at the various stages of facilitation. A proximate consultant is always preferable, as the user can call him up as and when needed. This is becoming less of an issue as telephones, cell phones, video conferences etc. have become common. For example, the authors of this paper have used video CDs to reach remote locations all over the world and have run tele-coaching sessions. Nevertheless, the proximate availability of the 360 coach or facilitator helps a great deal in making the process smooth and convenient.

KAS: This is an important factor and the most neglected at this point of time. 360 DF is a sensitive process. T-Group training, which is a similar process, is meant to be facilitated only by trained behavioral scientists, who go through a three-phase training from ISABS, National Training Laboratories or similar bodies like ISISD. 360 facilitators should likewise be trained counselors or psychologists, or specially trained in the facilitation process of 360. The most important requirements are knowledge (of the purposes, process, applications and limitations of 360, common mistakes, feedback, leadership models, research in this area etc.), attitudes (empathy, listening, trust, helpfulness, patience, flexibility etc.), and skills (coaching, follow-up and research skills).

Experience: Experience in the field is another criterion for choosing the facilitator. An experienced facilitator brings lessons from good practices elsewhere and transmits them. This experience base may bring down the cost and enhance the ROI. An experienced coach and facilitator may prevent you from making mistakes, as he has already reinvented the wheel and need not do so again at your cost. He ensures that the entire process is smooth, knows how to deal with problems and issues, anticipates them, and offers preventive medicine.

Lessons from the past

The following are some of the lessons we have learnt from the past on 360 degree Feedback in India.

Purposes:

  1. It is best suited as a development tool; most firms are not yet ready to use it as a performance measure. It can be a good tool for performance management but not for performance measurement. Don't use it to measure performance, but definitely use it to develop performance.
  2. We are not yet ready to link it with incentives and rewards. 360 DF itself can be rewarding for an executive if the firm invests in his development. This investment includes not mere provision of the 360 feedback or profile but follow-up support, preferably from a coach external to the candidate's department or unit. The least a corporation can do is to provide coaching help; follow-up support may also include asking for a development plan and sponsoring further rounds of 360.
  3. Even if you desire to link it with the performance management system, enough preparation is needed; it is better to wait until its credibility as an objective tool is built and the process is institutionalized.
  4. It is a good methodology for identifying potential candidates for select positions. When the author was appointed in BEML to head the HRD Department in 1978, a similar method (it was not known as 360 degree feedback at that time) was used to identify the potential future HRD Chief. The ONGC experience indicates the same.
Readiness

  1. Some organizations feel that the firm should be ready before taking up 360 degree feedback, and there are instruments like "Are you ready for 360 DF?" etc. While it is important to assess your readiness, it is equally important to create readiness. It is true that some firms are more ready than others to take up 360 feedback and institutionalize it; the same is true of individuals. However, the experiences of organizations like the Aditya Birla Group have indicated that a good HR Manager can make the firm ready by doing the necessary preparation.
  2. It is always useful to create readiness by proceeding in phases. The phases recommended are: Phase 1: voluntary and exploratory; Phase 2: purely developmental and individual-focused; Phase 3: institutionalized developmental; and Phase 4: linked with other HR systems, open and organizational. It should be externally administered at first; when the comfort level goes up, it can be administered internally as one progresses to phase three.
  3. Don’t go web based until you institutionalize the process and provide a real good experience to your employees.
  4. Always prepare the employees, both assessors and assessees, for the same.
Instrument

  1. There are many tools and models available; there are an estimated 100-plus vendors of 360 tools in the USA. They are based on the leadership models each company deals with, and it is useful to be aware of these models. However, a leadership-model-based 360 is useful only for firms that have followed the leadership competency model fully, institutionalized it and are committed to it. Otherwise the tool and the feedback on it become academic and of limited practical relevance. The choice of the tool is important: choose the tool that suits you best and that can give you multiple results.
    1. The tool should address the objectives of the 360; not just any tool will do. It is the heart of the 360.
    2. The tool should focus on the competencies needed by the candidate,
    3. It should also cover the competencies needed by the company and the department, and the culture, norms and values the firm would like to promote.
    4. It should be easily understandable and should use the language commonly used in the company.
    5. For development purposes, tools can be changed every now and then and used as part of training interventions. For example, Udai Pareek's Role Efficacy Scale or other tools can be converted into 360 tools, but these are for one-time administration and not regular use.
    6. Reliability and validity are always issues, and one cannot wait for them to be settled; if you understand the concept and limitations of 360 they become lesser issues. Face validity and content validity are suggested at best. Test-retest reliability, with a gap of about one to two weeks preceding feedback, may be useful to aim at. Internal homogeneity could also be assessed (item-total correlations etc.).
  2. It is recommended to use tailor-made tools for each firm. Some organizations are developing tools for each role, with some common items and a few role-specific items.
  3. The tool can be lengthier in the first administration, which gives a comprehensive assessment; it can be shortened thereafter. In any case it should not take too much time: more than an hour brings down the efficiency and effectiveness of the tool, and 30 to 40 minutes is ideal.
  4. Open-ended items seeking open-ended feedback are the most useful parts.
  5. The best tool is the one that takes into account the expectations, views and opinions of the assessees and assessors.
Process

  1. The administration should be preceded by firm-wide education explaining the purpose and other details. Without educating the assessees and assessors, the quality of feedback may become questionable.
  2. The feedback should always be given after explaining the scope and limitations of feedback. Prepare the candidate in terms of creating readiness and receptivity to feedback.
  3. Always provide scope for the candidate to have a dialogue with the coach or facilitators to help him understand and interpret appropriately the feedback.
  4. Point out that 360 degree feedback can be as biased as any other feedback, and give enough scope for reflection on the feedback.
  5. Identify common issues and provide for the participants to debate and discuss as well as come up with organizationally acceptable and managed action plans.
  6. Appoint mentors wherever possible
  7. Provide benchmarks, but treat them with caution.
  8. Warn the individuals not to attempt to identify the assessors on the basis of the language etc., and quote the experiments and experiences that are available. TVRLS research indicates that only 18% of guesses made on the basis of language and individual ratings are correct; there is an 82% chance of mistaken identity. Hence the feedback should be used to reflect and to prove to oneself whether it is true or false. The feedback giver and his views should be respected but not blindly taken for granted; the onus is always on the candidate to prove to himself that he possesses the strengths and does not have the weaknesses.



Follow Up

  1. Most 360 experiences don't give the ROI because, as with PMS, the organisation thinks its job is over once the 360 feedback is provided. Very little is done to follow up or to provide the needed follow-up support. The follow-up support may take the following forms:
    1. Asking the candidate to share his development plans, insights or lessons drawn (only what he desires to share).
    2. Asking the consultant to give development suggestions in the form of training needs etc. and offering the same. These are to be asked not by name but for the entire organisation, with frequencies and without names (see Chawla and Mishra, 2003).
    3. Conducting a follow-up workshop after three months, six months or a year
    4. Re-administration and observing the profile changes
    5. Offering on-line coaching
    6. Making mentors and coaches available
    7. Encouraging the candidates to create their own tools and seek feedback on their own
    8. Integrating it into PMS through KPAs etc.
  2. HR should play an active role, and task forces may be used
  3. Documenting the success experiences and sharing the same to inspire each other


Building Internal Competencies

  1. 360 is a great tool, and its potential and limitations need to be understood. There is no substitute for developing the internal competencies of line managers and HR facilitators for administering and using it. Those who engage in this will not only contribute to developing leaders but may also improve their own leadership competencies. They are facilitating the building of intellectual capital.
  2. Appendix 1 presents a summary of two articles as they appeared on the internet recently.



Assessment and Development Centers

Assessment centers were first recommended in India in 1975, when the first HRD Department was established in L&T. It was proposed that L&T should be able to introduce Assessment Centers within about three years from the time it started work on them. By 1980 L&T had identified critical attributes (now called competencies) and established its own competency definitions. However, it took another twenty years for L&T to take this concept forward. Why did an organization like L&T take twenty years to introduce Assessment and Development Centers? Why, in spite of the experience available across the world, have many Indian organizations not gone ahead to introduce Assessment Centers? While 360 Degree Feedback is used in some form or other, why are not many Fortune 500 organizations known to use Assessment Centers? Why, in spite of the continuous suggestions of psychometricians, do several organizations continue to use MBTI and similar psychometric tests for recruitment? These are some of the issues that need to be thought through. Answers to these questions may provide insights to new users of Assessment Centers.

Issues of Validity

The most important form of validity for an assessment center is predictive validity: the extent to which those identified through assessment centers are later found suitable or successful in the jobs for which they are assessed. There are many scientific issues here and the answers may not be simple. The most important question is: can anyone predict future success mainly on the basis of competency assessment data? First, not all those found competent get promoted to higher-level jobs; the positions available are always limited compared with the number of candidates aspiring to them or put through assessment centers. So there is always a problem. Second, the success or failure of a candidate on a future job depends on many things besides the competencies of the individual. Even assuming the best fit between the individual and the competency requirements of the job, there is no way to ensure that all the situational factors can be predicted beforehand. Most importantly, in Indian firms success or failure also depends on the kind of boss one gets: the chemistry should click. While the likelihood of the chemistry clicking is assessed through role plays and similar techniques, these always make assumptions about the candidate's boss which may or may not be valid. Hence the difficulty in validation studies. A low validity coefficient need not mean that the predictive ability of assessment centers is weak.

Competency mapping is a serious exercise. Spencer (1993) estimates that mapping one competency using BEI and similar techniques may take as much as two months. Indian firms have neither the resources nor the patience for such thorough analysis and mapping. In addition, if the tests to be used in the Assessment and Development Centers need to be validated, it will take a year or more, and even that is not enough time for those identified as potential candidates to prove their worth. In the absence of this, one may resort to standard techniques of validation, such as validating against performance appraisal reports and superior ratings; but these are the very measures whose limitations prompted the move to assessment centers, and therefore they cannot be relied upon. Under these circumstances experts have no alternative but to make best estimates and introduce tests and assessment systems that have not been standardized and are based on hunches. Face validity and content validity are the main forms of validity used in Assessment Centers. This basic limitation in the science of assessment centers needs to be noted and appreciated.
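As a simple illustration of what a predictive validity study computes, the Python sketch below (with invented, hypothetical scores, not real study data) correlates assessment-center competency scores taken at selection time with supervisor performance ratings collected later; the resulting coefficient is the predictive validity estimate discussed above:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: assessment-center scores at selection time, and
# supervisor performance ratings (1-5 scale) collected a year later.
ac_scores    = [72, 65, 80, 58, 75, 69, 83, 61]
perf_ratings = [3.8, 3.2, 4.1, 3.0, 3.6, 3.9, 4.4, 2.9]

validity = pearson(ac_scores, perf_ratings)
print(f"predictive validity coefficient: {validity:.2f}")
```

A real study would of course need a much larger sample, corrections for range restriction (only those selected get performance ratings), and significance testing before any conclusion is drawn.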

Our work with the Assessment and Development centers has indicated the following to be kept in mind:

Purposes:

  1. Given the difficulties in ascertaining the validity and reliability of Assessment Centers, it is useful to limit their application to competency assessment and development rather than making them the primary basis for promotion decisions. They may at best be used as tools for assessment and development, not as decision-making tools. The traditional methods of assessment, especially past performance, should not be totally discarded. BEI and its variations are a good way to combine the advantages of assessment data with past records. In other words, more research and validation studies are required in India before assessment centers are used for other than development purposes.
  2. We strongly recommend the use of ADCs for development purposes and as additional data providers for promotion and placement purposes.
Competency Mapping:

  1. A good assessment center should be based on identification of the competencies of the position or set of positions, and such competencies should be identified using scientific methods. Cost effectiveness is always an issue, as scientific assessment of competencies is a time-consuming process. However, the least anyone can do is to use the expert panel method or similar methods described by Spencer (1993), or the task analysis approach described by Pareek (1990).
  2. Scientific study of competency requirements is cost effective if there are multiple positions or persons doing the same job. Where only a small number of persons are involved, the least that should be done is to have a specialist group or the role-set group come up with the competency requirements (see Rao, 1975; Pareek and Rao, 1981 and 2003).
Developing Multiple Methods of Assessment



  1. Assessment centers use multiple methods: in-baskets, presentations, leaderless group discussions, role plays, simulation exercises, Behavioral Event Interviews etc. Development of these methods should be based on a thorough understanding of the competencies resulting from the competency mapping exercises, and the tests themselves should be constructed by specialists. However, test development is a time-consuming exercise; wherever possible, standard tests may be used. Such standard tests may not be culture free, so it is important to ensure that the tests used are appropriate to the organisational and geography-specific culture.
  2. Psychometric tests should be used with caution. Unlike in-baskets, role plays and simulation exercises, psychometric tests are standardized, and most of the standardized tests are available in the market, though in most cases only to specialists. The users should ascertain the appropriateness of these psychometric tests. Almost any psychometric test will be of interest to a respondent: respondents enjoy psychometric tests and feedback on them. But respondents or candidates are not necessarily the best parties to judge the appropriateness of the instruments; their relevance should be indicated by the assessment center specialist and their linkage to the competencies established. As Spencer's iceberg model indicates, it is the basic attitudes, motives and traits that lie beneath the surface, and psychometric tools may have the potential to bring them out. At the same time, psychometric tests are fakable in the sense that they are subject to guesses. Hence the need to ensure that they are not decisive inputs. This and other issues in assessment centers were discussed recently on the internet by promotional Assessment Skills service (2003) and are reproduced in Appendix 2.
  3. In sum: where possible, develop and standardize new tools which suit the requirements of the role and the competencies mapped; otherwise use standardized tests. Both approaches have limitations, and this should be recognized.
Assessors:

  1. We have experimented with five-day training programs for assessors. Experience indicates that this is perhaps the most time an organisation can spare for its senior employees to be trained as assessors; it also indicates that this is far from sufficient. There are always dilemmas in training assessors: the assessment center is meant to test those below them, yet the assessors themselves are trained on the very competencies, and using the very exercises, meant for juniors two levels below them who are being considered for jobs one level below them. It sounds inappropriate, but there are not many alternatives. The best training assessors can get is to participate in an assessment center under the supervision of expert assessors or facilitators.
  2. Another factor most organisations have neglected is the continuing education and training of the assessors. Some of the organisations are so impatient that they cannot wait for the training to be complete and put the assessors in to action.
  3. In addition, there are sensitivities associated with dropping an assessor as not competent. One may be an excellent and successful manager and still lack assessment skills. In such cases it is useful to pair A-grade assessors with B-grade ones and train them up, though it is better to avoid assessors who are not competent. At this point of time there is such a dearth of competent assessors that organisations have very little choice.
 

Combining with Other Methods, Especially 360 Degree Feedback

 

  1. Our own research studies have indicated that on some factors 360 can give feedback of a quality similar to ADCs. There are other competencies which can be tested and assessed only in assessment centers. For example, while communication skills can be assessed through 360 DF, risk taking, decision making and delegation may not be accurately assessed that way, because the individual's current circumstances may not have allowed him to delegate (for example, if his own boss is not the delegating type). In such cases ADCs are good tools. However, the experience of those who have worked with the individual for several months, even years, cannot be totally ignored. Hence the use of 360 degree feedback and performance management data as supplements to ADCs is strongly recommended.
  2. Data from other sources, like employee commitment studies, satisfaction surveys and OCS studies, are additional inputs. Such multiple methods strengthen the conclusions on the competencies.
Development Plans and Other follow up

 

  1. The process of giving feedback and assisting the candidate in preparing development plans is the most neglected part of assessment centers in India today. Having invested so much in a candidate, firms make serious mistakes by skimping on the small additional investments of time needed to ask the right questions and ensure good follow-up on developmental needs.
  2. On the basis of the competency gaps observed in the ADCs the firm should prepare a development action plan and offer training and other OD interventions.
When some of these principles are followed and we learn from experience, the quality of ADCs will improve and their ROI is likely to be higher.


Appendix 1

Two Views of 360-Degree Feedback




The 360-degree feedback evaluation program uses worker surveys to create a snapshot of company performance. But should managers be permitted to review their subordinates' ratings? And just how effective is 360-degree feedback, when all is said and done?

Below we present two Leadership in Action articles about 360-degree feedback. The first maintains that employee ratings must remain confidential, while the second argues that reports of 360-degree feedback leading to decreased organizational performance indicate only that employees were inadequately prepared for the program.

"Should Managers Be Able to Review the Ratings Their Subordinates Receive from 360-Degree Feedback Instruments?" Leadership in Action, 23 (2): 13 (May/June 2003): Cynthia McCauley (Center for Creative Leadership; Colorado Springs, CO)

If the primary purpose of a 360-degree feedback process is leader development, then managers should not expect to review the ratings their subordinates receive. Confidential feedback is a necessary ingredient for development. To take a hard look at themselves and commit to self-improvement, people need a safe environment.

For co-workers to give honest feedback, they need to know that it will go only to the recipient, and not worry about who else might see the data and what implications that could have for the recipient. A 360-degree assessment instrument is an ideal tool for providing developing leaders with this feedback.

This does not mean that managers should not seek assessments from multiple sources as input for evaluations of their subordinates. Managers regularly undertake these evaluations as part of the process of making administrative decisions about the people who report to them—that is, decisions about which jobs to give them, how much to reward them, and what kind of support they need.

Managers would be foolish to rely only on their own assessments in making these administrative decisions. They do not have the opportunity to observe many of the skills and behaviors they need to assess, and they cannot know what it is like to be these people's peers, customers, or subordinates.

Any organization that is moving toward more distributed or empowered decision-making processes is going astray if it continues to use a strongly hierarchical process for making decisions about pay, promotions, and job assignments.

So instead of relying on the data from a developmental 360-degree feedback process to get assessments from others, managers should seek input from multiple sources as part of the organization's formal performance appraisal process. If this is done, participants in each process—developmental and performance appraisal—will be clear about the purpose each set of data will serve. . . .

Data collected for development—ratings on a broad range of skills and behaviors from a wide variety of co-workers—are not the best kind for performance appraisals and administrative decision making. Data used for these decisions need to focus on the dimensions of performance most relevant to the individual's job and need to come from co-workers who have the most and best opportunities to observe such performance.

For example, as a manager you might want data on a subordinate's collaborative behavior to come from peers in other departments, and data on how well the subordinate keeps his or her staff informed about organizational issues to come. . . .

Qualitative examples—rarely sought by 360-degree assessment instruments—are particularly useful for performance appraisals. Also, if data collection instruments are used for performance appraisal, they need to be short so that a number of employees can be assessed at the same time. (Assessment instruments designed for developmental purposes are often too long for practical use in performance appraisals.)

The greatest danger of gathering data from multiple sources for use in performance appraisals is that coworkers, knowing that their feedback will not be strictly confidential, will not provide honest input, and decisions could then be made based on inaccurate data.

Managers can reduce this possibility by demonstrating that they use the data fairly and responsibly, by setting clear performance expectations and standards, and by helping employees to understand that providing co-workers with straightforward feedback is a necessary ingredient for improving their collective performance.
--------------------------------------------------------------------------------------------------------

"News Flash: 360-Degree Feedback Is Alive and Well," Leadership in Action, 23 (2): 22-23 (May/June 2003) Craig Chappelow (Center for Creative Leadership; Greensboro, NC)

I was sitting in the waiting area at my mechanic's shop drinking charred coffee while he finished the oil change. Spread out on the institutional table in front of me was the typical reading fare: a six-month-old copy of Sports Illustrated, one of the local shoppers, and a newsletter that included an eye-catching headline, "Will Your New Tires Kill You?" I had recently replaced all eight tires on our family cars, so I read the article eagerly.

As it turned out the newsletter article suggested that if you buy new tires and they are not properly installed, it could cause a potentially fatal accident. For example, if the technician mounts the tires correctly but forgets to tighten all the lug nuts on one of the wheels, the wheel can come off, resulting in a crash. The article extolled the virtues of using only service centers that employ certified mechanics, such as the ones employed at the garage where I was waiting for my oil change to be done.

A more accurate headline for the article would have been, "If Improperly Mounted, Tires Can Be Hazardous." But if it hadn't been for the sensational title, I probably wouldn't have read the piece.

Down the Garden Path

I often fall for this shock tactic used by publications, and it happened again when I saw an article in the June 2002 HR Magazine with the headline, "Does 360-Degree Feedback Negatively Affect Company Performance?" The headline captured my attention, particularly because my employer is in the 360-degree-feedback business, and I read the article with great interest.

Like the headline on the article about tires, the headline on the HR Magazine piece is misleading. Even though it is phrased as a question, it's a rhetorical one, and the implied answer is clear. A better and more accurate title would have been, "360-Degree Feedback, When Poorly Implemented, Can Be Hazardous."

The article makes two main points. The first is an attempt to correlate the use of 360-degree-feedback programs with negative organizational performance. The second is a list of potential pitfalls that organizations should avoid when using 360-degree feedback.

The coauthors, Bruce Pfau and Ira Kay, are national practice leaders at Watson Wyatt Worldwide, an international consulting firm that focuses primarily on the human resource areas of employee benefits and compensation but also deals with specialized areas such as HR-related technology. Pfau and Kay are also the coauthors of The Human Capital Edge: 21 People Management Practices Your Company Must Implement (or Avoid) to Maximize Shareholder Value (McGraw-Hill, 2002).

In that book, they maintain that specific human capital management practices can add to—or subtract from—an organization's bottom line to the tune of millions of dollars. In the HR Magazine article, Pfau and Kay zero in on one of these practices—360-degree feedback—and the picture they paint isn't pretty.

Their conclusions are based on Watson Wyatt's 2001 Human Capital Index, which was compiled after the consulting firm explored and correlated the HR practices and financial performances of 750 companies in North America and Europe in 1999 and again in 2001. Pfau and Kay then compared the scores and financial data of fifty-one companies that participated in both studies, in an effort to determine which HR practices drive positive financial results and which don't.


One of their findings was that 360-degree feedback processes are associated with a decrease in financial performance. According to the 2001 Human Capital Index report, companies in which employees evaluate their managers have a market value that is 5.7% lower than the market value of companies that don't use employee review of managers, and companies in which peers evaluate one another have a market value that is 4.9% lower than that of companies that don't use peer review.

Suspect Conclusion

The problem with this, as I see it, is that shareholder value is an iffy way to measure leader impact in a window of less than three years. I have always understood shareholder value to be a long-haul proposition (and for the sake of my currently anemic 401(k) plan, I certainly hope it is).

Jerry Donini, an HR consultant and contributor to the Washington Business Journal, writes in an article in the November 15, 2002, issue of that publication that "a more direct and reliable measure of the influence of HR practices is one that compares them to a combination of outcomes like top-line revenues, earnings growth, return on assets, net margin and return on equity."

Consider Owens Corning, a company that I consider to be an example of best practices in using 360-degree feedback tools responsibly and strategically. In September 1998, I sat in the back of a seminar room at the company's headquarters in Toledo, Ohio. Glen Hiner, then the chairman and CEO of the company, was addressing a group of forty high-potential leaders.

He was there to communicate the importance of the activity they were about to pursue: getting feedback through Benchmarks®, a comprehensive, 360-degree leadership assessment tool from CCL, and using that feedback to improve their effectiveness in the organization. On that day Owens Corning stock was trading at $36 a share. Exactly two years later it was selling for $3.94 a share. As I write this, Owens Corning stock is listed at 17 cents a share.

In this example, there was no causal relationship between the use of 360-degree-feedback instruments and the company's dramatic decline in shareholder value. The reason the stock price dropped so precipitously is that on October 5, 2000, Owens Corning filed for Chapter 11 bankruptcy protection as a result of a multibillion-dollar liability from asbestos litigation claims.

This is just one example, but consider the dot-com boom—and subsequent bust—and decide for yourself the risk of using shareholder value as the sole measure of an organization's viability.

The second part of the HR Magazine article is more helpful. The authors point to the following issues related to the use of 360-degree-feedback programs in organizations.

  • Multiple views may cause confusion for the participants when there is disagreement between different rater groups.
  • Unless everyone participating in the process is trained for that role, the process may lead to uncertainty and confusion.
  • There may be a gap between an organization's business objectives and what a 360-degree-feedback instrument measures.
  • Time and cost may be stumbling blocks.
  • Participants and managers may fail to follow up after feedback is received.
The authors are right to point out these potential pitfalls, and I share their concerns. However, even though their points are accurate, none is new. In CCL's Handbook of Leadership Development (Jossey-Bass, 1998), I address each of these challenges in the chapter on using 360-degree-feedback instruments, and I make specific recommendations on how to avoid these problems.

So do other books by experts in research and practice, including 360-Degree Feedback: The Powerful New Model for Employee Assessment and Performance (AMACOM, 1996), by Mark R. Edwards and Ann J. Ewen, and Maximizing the Value of 360-Degree Feedback: A Process for Successful Individual and Organizational Development (Jossey-Bass, 1998), by Walter W. Tornow, Manuel London, and CCL associates.

The thing that all of us—including Pfau and Kay—seem to agree on is that a great deal of a 360-degree-feedback program's success depends on responsible planning, use, and follow-up. What many organizations neglect is the amount of work that has to be done long before the surveys are distributed.

External Factors

Many organizations figured this out a long time ago. In addition to Owens Corning, I would point out Microsoft and Pfizer as companies that do a solid job of planning, implementing, and following up on their 360-degree feedback programs to avoid the pitfalls mentioned in the HR Magazine article.

By the way, if we look at Microsoft and Pfizer strictly through a shareholder-value lens, Microsoft shares are worth less than half what they were three years ago, and Pfizer's stock is down about 40% over the same period. My guess is that those declines have more to do with the overall downturn in the economy, saturation in the computer industry, and drugs coming off patent than they do with the use of any specific HR practice such as 360-degree feedback.

I suggest that organizations should continue with the responsible use of 360-degree-feedback instruments. Just make sure to tighten the lug nuts before starting out.


This excerpt is presented with the kind permission of the publisher. Copyright © 2003 John Wiley & Sons, Inc. (reproduced from Internet)




Appendix 2


Some Common Questions About Assessment Centers (Source: Promotional Assessment Skills Service, 2034 Lambert Street, Atco, NJ 08004-2111)

What is an assessment center? An assessment center is commonly defined as a method, not a place. Simply stated, an assessment center is a systematic process that evaluates an individual's capabilities in the knowledge, skills, or abilities deemed critical to the successful performance of tasks that have been identified as central to the job in question.

How is a modified assessment center different from an assessment center? Professional standards and supporting validation research only support a process defined as an assessment center. A modified assessment center, or another variation such as a "mini-assessment center," "assessment center type process," or "assessment lab," has not been professionally defined. Therefore, the exact nature of these other processes is specific to the situation and to the individual who has developed them.

While there is nothing wrong with using an assessment procedure that is not an assessment center, care should be taken to ensure that it is not erroneously identified as an assessment center.

The Guidelines specifically state that any procedure that does not adhere to them should not be identified as an assessment center, nor should it use the term "assessment center" as part of its descriptive title.

However, the Guidelines do recognize that there is a difference between an assessment center and the assessment center methodology, and that many times it is appropriate to use the assessment center methodology while not using the assessment center.

Therefore, the fact that some variation of the assessment center methodology is being used does not mean that it is not a proper assessment procedure. It simply means that it is not an assessment center.

How can I best prepare to take an assessment center? Many persons try to prepare for an assessment center by learning the "tricks." The general consensus among test developers seems to be that specific preparation for an assessment center has little impact on performance, and could even result in a lower score, because the individual tries to perform in a manner different from what they believe is correct or would otherwise do on the job. However, skilled professional preparation courses with extensive knowledge of the assessment methodology contradict the test developers' consensus. Learning to take assessment tests is an art based on teaching the skills necessary to do the job being tested. If an individual can say and do the critical and most important elements of the job, then the assessment results should, and do, reflect these abilities. Research confirms this. Candidates who practice the learned skills on the job, volunteer for special assignments related to the job, watch others use the skills, are able to pick out inappropriate behaviors, train others in the skills, practice the skills outside the job (people skills, e.g., sales, EMS, human-relations part-time jobs, etc.), and cross-train in the position being tested are usually successful within the assessment center.

The best way to prepare for an assessment center is to change one's way of thinking. To prepare for promotion or selection, look into the critical and important elements of the job or position. This is best done by determining what types of capabilities are necessary for successful performance on the job, and then developing those capabilities in yourself.

For example, virtually all positions at or above that of a first-line supervisor (e.g., sergeant, lieutenant, captain) require oral communication skills. This particular attribute is measured in virtually every assessment center. Therefore, the individual must determine what degree of proficiency in oral communication the job requires.

Having determined that, it is then necessary to assess one's own oral communication abilities and pursue a developmental course to achieve a sufficient level of proficiency. (Joining a Toastmasters organization is probably the single best way to make dramatic improvements in oral communication skills, especially public speaking, in a short period of time.)

Obviously, the amount of time necessary to improve one's skills in oral communication is highly individual. If a person already has extensive experience in public speaking, participating in staff meetings, and interviewing, the amount of time involved may not be as great as for a person with limited experience. On the other hand, extensive experience does not necessarily equate with good oral communication skills.

Negative habits associated with oral communication may actually detract from one's ability in this area, and unlearning those habits may require more time than a person with little or no experience would need.

A similar process must be undertaken for all the other skills that are a part of the job.

Shouldn't the assessors always be one level above the position for which the test is being conducted? Not necessarily! While it might be helpful to have an assessor team that is at least one level above the target position, other considerations become much more important. It is more important to choose assessors based on their capabilities as assessors than to select them merely because of their position in the organization. In addition, carried to the extreme, this hypothesis eventually runs its course.

For example, if we are selecting a chief, we usually use other chiefs as assessors. In this case, peers are being used to select the highest position in the organization.

If peers can be used to select the highest-ranking person in the organization, there is little reason to challenge the notion that peers can be used at lower levels as well. Thus, a lieutenant is as capable of selecting a lieutenant as a chief is of selecting other chiefs.

Isn't law enforcement or fire officer a unique job that can only truly be measured by those with experience in that field? The answer is closely related to that of the previous question, and the points made here apply there as well. Again, the answer is no!

First, it must be remembered that assessment centers measure generic supervisory and management skills, not knowledge specific to an occupation. Simply stated, oral communication skills for a police sergeant are no different from oral communication skills for a fire officer or a foreman in a factory or a first line supervisor of social workers. Techniques of effective delegation are the same for anyone who delegates. Employee counseling techniques are the same for anyone who must counsel employees concerning their work performance.

Furthermore, in many assessment centers, constraints are placed upon the selection of assessors that are often artificial and serve no valid purpose. Requirements that assessors have the same occupational experience, or that they be one level above the position being tested, are two prime examples. Research has not established that either condition has any bearing on assessment center validity or the reliability of assessor ratings. Indeed, some research has shown that psychologists are better assessors than non-psychologists. This research would therefore argue against a police officer serving as an assessor when a psychologist with no police background might be available.

Only when the technical aspects of the job are assessed, such as a fire-scene scenario given verbally or specific laws being violated, should the team include at least one technically skilled assessor to answer questions the other assessors may have about the job.

How is experience measured in an assessment center? Experience can be measured in one of three ways, and it is always measured to some degree.

First, there can be an indirect measure of experience. Indeed, all testing, to some degree, indirectly measures one's experience and education. For example, a doctor who has attended medical school has his education measured at the time he takes a licensing test from his state medical association. Indirectly, a test of knowledge measures what a person has learned, although it does not directly measure the amount of formal education that person has achieved.

Similarly, the assessment center indirectly measures what one has learned through experience and education. How one became effective at delegation, or effective in written communication, is not really relevant; the relevant measure is one's level of competency as it relates to the job being tested for. Consequently, if one candidate has formal education that has contributed to strong written communication skills, he will be assessed accordingly. Likewise, if another candidate has no formal education but is equally competent in written communication because of other experiences, he may be rated equally on his written communication ability.

There is a direct way of assessing experience and training in the assessment center through an interview process known as the "background interview." The background interview is a form of structured interviewing in which the participant completes a rather lengthy questionnaire prior to the assessment center. The assessor prepares questions specific to the individual's background relating to the dimensions being measured, and then conducts the interview. This interview becomes a direct measure of one's experience and training and is considered in making final assessment center scores.

Another way of measuring one's experience and training is as a part of the assessment procedure. An interview separate from the assessment center, similar to the background interview, or an evaluation of training and experience, are two common ways in which one's experience and training can be directly assessed as part of the assessment procedure, even though they may not necessarily be a part of the assessment center.

As can be seen, then, an assessment center that does not directly measure one's training or experience reflects a matter of design rather than a limitation of the assessment center process.

Doesn't a candidate who goes through an exercise later than other candidates have an advantage in that he or she may find out about the exercise content? This is a problem common to all examination procedures, and it is certainly not unique to assessment centers. It has been an issue for all types of procedures in which candidates do not take the examination simultaneously. Interviews, which have been used for many years, have often had this potential problem. Of course, the problem only exists if one candidate conspires with another to cheat on the examination.

That is, a candidate who has gone through the process earlier must disclose to another candidate who has yet to go through the process what the content of the assessment process is. This, of course, is often a direct violation of civil service rules, and in some cases may even be criminal misconduct.

Aside from this, however, such disclosure has minimal impact in an assessment center. While one can compromise an interview by disclosing specific questions, it is much more difficult to disclose information about an assessment center exercise. Furthermore, how one candidate sees a particular exercise may differ from how another candidate sees it. In a group discussion, how one group deals with the problem and interacts may be totally different from how another group sees it and interacts. Thus, assessment center exercises are situation-specific and, because of that, are difficult to compromise.

Furthermore, assessment center exercises often contain a great deal of information. It is difficult for an individual who is concentrating on analyzing the available information and coming to a decision or course of action to retain all of the nuances and finer points that may exist in a particular problem.

The inherent danger for a candidate going into an exercise with prior knowledge is blurting out information the role player has not released. Under the pressure of the moment, and with high motivation to do well, the candidate exposes information he was not given. Both the assessor and the role player generally pick that up, as will a videotape. More commonly, the candidate has a preconceived way to resolve a situation even when all the facts have not been given by the role player, and his score therefore reflects his poor judgment and analysis.

However, it must be remembered that assessment center exercises assess several dimensions. Merely getting the "right answer" is not sufficient to perform well. If there is a written analytical problem, the candidate's total score within the exercise is based not only on ascertaining the exact nature of the problem and its solution, but on explaining and perhaps defending it.

Even assuming that a candidate has the ability to retain much of this information and explain his own approach, this still does not give another candidate an advantage. Assuming that the person passing on the information is very capable and has performed well on the exercise, some benefit may accrue. However, if the person receiving the information is not equally capable of analyzing and presenting the information on their own, they are not likely to perform as well. And if they are capable of performing as well, they probably don't need the information in the first place.

In summary, the nature of assessment center exercises makes them extremely difficult to compromise, and it is unlikely that any one candidate can gain a significant advantage simply by taking the test at a later time and perhaps gleaning some tidbits of information about the exercise content. Where this is a concern, sufficient information about the content of the exercises can be provided to all candidates before the assessment center so that they all operate on an equal basis.

 

References

 

Craig Chappelow "News Flash: 360-Degree Feedback Is Alive and Well," Leadership in Action, 23 (2): 22-23 (May/June 2003) (Center for Creative Leadership; Greensboro, NC)


Cynthia McCauley "Should Managers Be Able to Review the Ratings Their Subordinates Receive from 360-Degree Feedback Instruments?" Leadership in Action, 23 (2): 13 (May/June 2003): (Center for Creative Leadership; Colorado Springs, CO)

McLean, Gary. Multi-rater feedback. Presentation made at the First Asian Conference on HRD, IIM Bangalore, October 2002.

Mishra, Shishir and Chawla, Nandini. Deriving Training Needs from 360 Degree Feedback, Ahmedabad: TVRLS, 2003.

Pareek, Udai and Rao, T. V. Designing and Managing Human Resource System, New Delhi: Oxford & IBH, 1981 and third edition 2003.

Pareek, Udai and Rao, T. V. Pioneering Human Resource System in L&T, Ahmedabad: Academy of Human Resources Development. Consulting report of 1975 and 1976 published in 1997.

Pareek, Udai. Handbook of HRD Instruments, New Delhi: Tata McGraw-Hill, 2002.

Pareek, Udai. Task Analysis for Human Resources Development, University Associates Annual Handbook for Group Facilitators, 1990.

Promotional Assessment Skills Service, 2034 Lambert Street, Atco, NJ 08004-2111. http://www.pass-prep.com/overview/faq

Spencer, L. M. and Spencer, S. M. Competencies at Work, New York: John Wiley & Sons, 1993.


 

       The spirit of 360 Degree Feedback in India goes back centuries, to when good kings used to go about in disguise to find out how their people perceived their rule: their style, the impact of their decisions, how people were living and feeling, and what their people needed. Even in the Ramayana, Lord Rama used spies (goodacharis) to find out how people felt, and when even one person said something bad, Lord Rama acted on it rather than punish the person who said it: he punished himself and sent away his wife. As recently as a few hundred years ago, Emperor Akbar is said to have gone about in disguise to find out how people were living and what impact his decisions were having.
Modern organizations did not exist in those days. If they had, perhaps India would have been rated the most innovative country in terms of its HR processes. Today it looks outside India for HR technologies. There is nothing wrong in borrowing good practices, but to think that people management practices from the West are the best, while ignoring our own traditions and experience, is perhaps to ignore the treasures we have within. As Nitin Sawadekar (2002) indicated in his book on Assessment Centres, the Assessment Centre methodology is known to have been used, or at least recommended for use, by kings in India at least 1,500 years ago, as mentioned in Kautilya's Arthashastra. The Arthashastra mentions different methods of assessing a candidate for ministerial positions, including observation, performance appraisal, assessment by those who know him, interviewing, and other forms of testing.
                                         
                     I stumbled on the methodology of 360 Degree Feedback on my own. Since I started my career in 1968 as a Lecturer in Psychology, I have been using psychometric tools. As students at Osmania University we went through a number of psychometric tests: the TAT, the Allport-Vernon-Lindzey values scale, the DAT, and various other personality tests were taken by most of us in 1966-68, and I used to teach the same. Some time in the 70s I was introduced to FIRO-B and other tools. In the Achievement Motivation Laboratories we conducted at NIHAE, Delhi, at the University of Udaipur, and later at IIMA, we used these tools extensively. Tests were not as commercialized then as they are today: they used to be sold only to psychology degree holders, and if you were a psychologist you were licensed to use them. Bodies like ISABS developed a new era of behavioral science professionals who were trained and encouraged to use such tools. I have extensively used tools to measure work values, locus of control, interpersonal trust, tolerance for ambiguity, etc. We conducted many executive development programs at IIMA, and as a part of ISABS, using a variety of tests. I even developed a Psychosocial Maturity Scale with Abigail Stewart (now at the University of Michigan Department of Psychology) using the TAT; David McClelland invited me to do this after he looked at the work I was doing on entrepreneurship.
                                         
                      It was in one of these programs at IIMA, when we were using these tools, that some participants suggested the tools would be even more useful if they had some way of knowing how people thought of them. It is this suggestion that led me to start a program at IIMA in 1986. As soon as I returned from XLRI, I proposed this program to the OB Area, which promptly approved it. Pradip Khandwalla encouraged this by joining and lending his tool on management styles; it also gave him an opportunity to use the tool he had developed in Canada, which measured ten different styles of the top management as a group. I used the leadership styles tool I had developed based on the work we did with McClelland on Indian managers. With J. P. Singh joining us with his tools on decision making, we launched the first program, which required the participants to register three months in advance and supply us the names of about 15 to 20 persons with whom they had interacted in the last few years: juniors, colleagues, and seniors, as well as friends and acquaintances whose views they valued. We were surprised to get around 60 nominations to the program when we had not expected more than 15 to 20. We did not want to take them all, as the program was emotionally involving and many tests were involved. We designed the program as a three-day workshop. We used a number of tests to measure styles, roles, decision making, delegation, interpersonal behavior, etc.; the tests required over a couple of hours to answer. In our first program we had top-level managers from all over the country, like K. L. Chug, Mahendra Agarwal, Sinha from SRF, Arora from Reliance, Anil Sachdev from Eicher, and so on. The first day was devoted to explaining the tools based on self-assessment, the concepts behind them, and their significance to leadership. The program itself was titled "Leadership Styles and Organizational Effectiveness".
The participants were eager to know how their styles were assessed and the impact these had made on organizational effectiveness. The second day was devoted to giving them feedback, tool by tool. On the third day they were required to choose one or two behaviors they would like to change or further develop. We focused more on weak areas than on strengths. We also created simulations on the third day in which the candidates were to experiment with the new behaviors. Not all of them had an opportunity, but a few did. For example, we created meeting-management situations to test how they would conduct meetings and improve them. The group would give them feedback.

                         The program was a great success and we repeated it the next year with the first batch of sixty participants who had registered. Hrishikesh Mafatlal later sent all his top management. Prof. Ramnarayan joined the team at this point and we started conducting in-house programs. Little did we know at that time that this methodology would be christened in the USA as 360 Degree Feedback. Once we knew it was called 360 Degree Feedback, we continued to use this term without changing our philosophy.

                         When I look back on the last 25 years since we started this program and methodology, I am left with a sense of satisfaction that we made some difference to those who would like to make a difference. We have stuck to our methodology and tried to counter the intrusion from other parts of the world into our philosophy and methodology.
Once I started my company, TVRLS, I invited Larry Cippola from CCI, a Minnesota-based company, to come to India and share their approach and methodology. Larry offered a few joint programs with TVRLS in 1998: one was held in Hyderabad and another in Mumbai, and Larry also addressed the National HRD Network at their conference in Delhi in 1998. Larry used to introduce his firm as a vendor of 360 tools, and I used to feel a little strange; the term "vendor" did not sit well with our philosophy that knowledge is not for sale. But now we have learnt, perhaps the hard way. We still maintain that knowledge is for sharing and for developing the society around you. My continuous struggle to discourage corporations from the tendering process is in tune with this philosophy. We sell knowledge to those who can afford it, so that we can build more from the money we collect, but we give it free to those who cannot. For example, we offer 360 DF for teachers and headmasters with little or no investment, while for profit-making industry we do charge. Sometime in early 2002 we were invited by one corporation to conduct a 360 DF program based on a tool they had imported from abroad, a Leader-Manager tool developed in the UK. We assisted them with our process and later also contacted the tool vendor in Australia (Ronald Forbes of 360 Degree Facilitated).

                           Today I learn that there are many people conducting 360 Degree Feedback. We have learnt many lessons from our work on 360 DF. These lessons are summarized in our latest book, Life After 360, published by Excel Publications and edited by me, Prof. S. Ramnarayan, and Nandini Chawla.
  • All assessments of people by other people are subjective. Hence, 360-degree feedback can be as subjective as any other assessment. However, it is the aggregate feedback and consistency in feedback that tends to make it more objective.
  • 360 DF should be used as indicative and reflected upon.
  • 360 DF could also be provocative. The candidate should use this for review, reflection and action.
  • The action plans worked out as an outcome of the feedback, should primarily be directed at empowering self and changing oneself where necessary.
  • Even if one has to change others, it requires change in oneself: one's approach, attitude, communication etc.
  • 360 DF should be used to empower the self, and the enhanced awareness should be used to become a more effective leader.
(As given in the manual for Leadership development through 360 Degree Feedback by TVRLS)
                      We don’t believe in 360 Degree Appraisals. We believe only in Feedback for development. In my view those organizations that use 360 Degree Feedback or Appraisal for rewards and promotions or for increments etc. are undermining the process and are likely to create new forms of politicking and manipulation in organizations. 360 is an individual process.

A caution for those who are facilitating 360 Degree Feedback:
  1. Please understand human psychology. You need the right background and skill to give feedback. Today there are many ways of acquiring the skills to provide 360 DF services: ISABS, Sumedhas, Coaching Foundation of India and TVRLS, to name a few, and many others offer programs to develop facilitation skills.
  2. You must be sensitive to feelings and Indian mind set. We (in India) are still not good at giving and receiving feedback and hence the feedback needs to be interpreted with caution.
  3. People should be helped to use it as an empowering tool. Please read some books and literature before you conduct 360 DF. Just because you are an MBA in HR or an HR Consultant, it is not right to declare yourself a 360 specialist unless you yourself have experienced the same.

For those who are choosing 360 Tools:
  1. Choose the tool to suit your purpose.
  2. You need not keep using the same tool again and again. It is good enough if you use it the first time, a second time after a gap of six months to a year, and then at three-year and five-year intervals.
  3. However, the tool you use could be shorter, and you may have to keep changing the tool depending on your needs.
  4. Some tools are based on well-researched constructs: for example, the Leader-Manager tool, the RSDQ tools etc. Many tools have only face validity.
  5. I think some of the off-the-shelf tools are free and available on the net. They are good tools for an interested person to take the first time; it is always good to have a first experience. However, for systematic leadership development, guided learning might do some good, and if your corporation facilitates the same, it is still better.
  6. Ask questions on validity, reliability etc. for tools that are based on constructs. Make sure that your executives understand the constructs easily. Some tools that use factors are more difficult to assimilate and use. The items are more important than the constructs. The items should be easily understood and usable.
  7. While graphic presentations of feedback go a long way in communicating feedback, it is not wise to choose a tool on how well the feedback is presented. Some tools are not rich in content but extremely well presented with graphs etc.
  8. Simple tools don’t require sophisticated validity and reliability coefficients. Usability is more important than the psychometrics. If you show the tool to a few of your executives before administration and they modify the tool or choose items of relevance to them, you are already doing a “face validity” check.
  9. The issue of reliability is difficult, as 360 is expected to bring change. If you administer it a second time and the answers are different, it is not fully correct to conclude that the tool is unreliable. If anything, the tool may have worked. Hence your interpretation of the psychometric properties of 360 tools needs to be done cautiously.

Way Ahead
                       A good 360 DF should be followed by action plans, sharing of action plans and reviewing of action plans. Corporations will get better returns on their investments if they hold follow-up workshops and coaching sessions. Just doing a 360 survey and leaving it at that is not a wise idea.


Indian References:

Rao, T. V. and Rao, Raju. The Power of 360 Degree Feedback. New Delhi: Response Books, Sage India, 2005.

Ramnarayan, S. and Rao, T. V. Organization Development: Accelerating Learning and Transformation. New Delhi: Response Books, Sage India, 2011.

Rao, T. V. and Rao, Raju (editors). 360 Degree Feedback and Performance Management Systems (Revised). New Delhi: Excel Publications, 2003.

Rao, T. V., Mahapatra, Gopal, Rao, Raju and Chawla, Nandini (editors). 360 Degree Feedback and Performance Management Systems, Volume 2. New Delhi: Excel Publications, 2002.

Sawardekar, Nitin. Assessment Centers. New Delhi: Sage Response Books, 2002.

Sharma, Radha R. 360 Degree Feedback, Competency Mapping and Assessment Centers: For Personal and Business Development. New Delhi: Tata McGraw Hill, 2002.

Rao, T. V., Ramnarayan, S. and Chawla, Nandini (editors). Life After 360 Degree Feedback. New Delhi: Excel Publications, 2010.



 
Picture
Measuring the performance of employees at senior levels objectively has always posed a problem to most organizations. Comparing the return on investment (ROI) on each senior employee has not been an easy task, particularly as managers are placed in various departments that are amenable to varying degrees of quantification. For example, how do you compare the output of a production manager with that of a maintenance manager, a personnel manager, a finance manager, or a manager in charge of security or town administration? TVRLS has come up with a methodology to assess the input costs (the “I” in ROI, or the investment made by the company) of managers, which can then be compared with a reasonable degree of objectivity. The CTC (Cost to the Company) of the individual is taken as the “I” for that individual employee or manager. Thus, if a general manager gets a salary of Rs. 6 lakhs per annum excluding housing and other benefits and community facilities, his CTC may come to Rs. 12 lakhs in the public sector. This estimate is normally based on the housing costs, community facilities like schools and hospitals, and other welfare expenditure.

KTR International  Ltd. (KTRIL)

KTR International Ltd. (KTRIL) is a construction company involved in infrastructure projects. It currently has a turnover of Rs. 10 billion and intends to multiply it five times in the next three years. Its people costs are estimated at Rs. one billion. The General Manager heading the Cement unit carries a CTC of Rs. one million. There are eight HODs who are Deputy General Managers reporting to him, looking after Materials, Quality, Maintenance, Marketing, Personnel, Finance, IT & Logistics, and Planning. Each DGM is in the salary bracket of Rs. 6,00,000; since the company, being in the private sector, has also made investments in community services, the CTC of each DGM is calculated to be Rs. 8 lakhs. The company prescribes 2,000 working hours to be put in by each employee in a year, as the managers all follow a five-day week. Every manager gets 25 days off in a year if he punches in 2,000 hours of work in that year.
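A minimal sketch of the case arithmetic in Python (all figures are taken directly from the case; the hourly rates follow from dividing CTC by the prescribed 2,000 working hours):

```python
# Hourly cost of time for KTRIL managers, using the figures in the case.

ANNUAL_HOURS = 2000            # prescribed working hours per year

gm_ctc = 10_00_000             # GM CTC: Rs. one million (10 lakhs)
dgm_ctc = 8_00_000             # DGM CTC: Rs. 8 lakhs
turnover = 10_000_000_000      # annual turnover: Rs. 10 billion
people_costs = 1_000_000_000   # annual people costs: Rs. one billion

gm_rcot = gm_ctc / ANNUAL_HOURS    # real cost of GM time: Rs. 500 per hour
dgm_rcot = dgm_ctc / ANNUAL_HOURS  # real cost of DGM time: Rs. 400 per hour
ocf = turnover / people_costs      # Opportunity Cost Factor = 10

print(gm_rcot, dgm_rcot, ocf)      # 500.0 400.0 10.0
```

These three numbers (Rs. 500/hour, Rs. 400/hour, OCF of 10) are all that Exercise 1 below requires.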


 

Exercise 1

 Please calculate the following:

1. How much is the opportunity cost of a one-hour meeting if the GM decides to convene a meeting of all HODs to discuss whether tea should be served at the tables or at a common place?

Your Answer: 8 x 400 + 500 = Rs. 3,700
O-COT = Rs. 37,000

2. What is the cost of a 15-minute telephone conversation between the GM and his DGM Marketing?

Your Answer: Rs. 225 (+ telephone cost)
O-COT = Rs. 2,250

3. If the GM has a habit of coming 30 minutes late to the production meeting, and all the HODs are on time and wait for his arrival, how much of the company’s money is he wasting, and at what opportunity cost?

 

Your Answer: 8 x 200 = Rs. 1,600
O-COT = Rs. 16,000

 

4. What is the annual cost of interactions between two GMs for an hour a day? And the opportunity cost?

Your Answer: Rs. 2,50,000
Opp. cost = 2.5 lakhs x 10 = Rs. 25,00,000 (twenty-five lakhs)


5. Two of the HODs do not get along well with each other; they have been found to send notes to each other on even trivial matters, and often the GM has had to intervene. In one year it was found that there were 800 such e-mail transactions recorded between them. Assuming that on an average each mail takes 15 minutes to compose and send, what is the cost to the company?

 Your Answer: (200 hours x 400) = Rs. 80,000/- per year
Opp. Cost = Rs. 8,00,000 (+ GM's time)


6. The GM and DGM HR felt that two of their HODs (DGM Marketing and DGM Production) would fully benefit, and their conflicts would reduce, if they attended a program on conflict management together. If they are sponsored for a five-day training program in Conflict Management at IIM Ahmedabad, the program fee is Rs. 20,000 and the travel cost for each of them is Rs. 10,000. After how many weeks does the company start getting its return on the training investment, assuming that the conflict is reduced by 50% after the program? (Returns begin after the costs are recovered.)

Your Answer: Cost =  2x 40 x 400 = Rs. 32,000

Program cost = 20,000
Travel = 10,000
Total = Rs 62,000
Opp cost = 6,20,000

 
7. What is the opportunity cost of the daily production meetings if the plant has a record of meeting every day for 90 minutes on an average, and the plant is shut down for only two weeks in a year for annual maintenance, during which period the production meetings are not held?

Your Answer:
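The pattern behind the worked answers above can be checked with a short Python sketch. It assumes the rates that follow from the case: Rs. 500/hour for the GM, Rs. 400/hour per DGM, and OCF = turnover/people costs = 10.

```python
# Meeting cost = sum of attendees' R-COT x duration (hours); O-COT = cost x OCF.

GM, DGM, OCF = 500, 400, 10    # hourly rates and opportunity cost factor

def meeting_cost(hourly_rates, hours):
    """Real cost of time consumed by a meeting or interaction."""
    return sum(hourly_rates) * hours

# Q1: GM + 8 DGMs for one hour.
q1 = meeting_cost([GM] + [DGM] * 8, 1)      # Rs. 3,700
# Q2: GM + DGM Marketing on the phone for 15 minutes.
q2 = meeting_cost([GM, DGM], 0.25)          # Rs. 225
# Q3: 8 DGMs waiting 30 minutes for a late GM.
q3 = meeting_cost([DGM] * 8, 0.5)           # Rs. 1,600

print(q1, q1 * OCF)    # 3700 37000
print(q2, q2 * OCF)    # 225.0 2250.0
print(q3, q3 * OCF)    # 1600.0 16000.0
```

The remaining questions follow the same formula, extended over days or weeks of the working year.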


How to calculate Cost of Time (COT) using TVRLS methodology

Step 1: Calculate your CTC (all direct costs in terms of salaries and perks + indirect costs incurred by the company for getting you, your maintenance, socialization, guidance and development + investments made or likely to be made to enable you to perform your current role or likely future roles well). Let this be figure X (the sum of x1 + x2 + x3).

Step 2: Estimate the number of hours you are expected to work, or normally work, in a year (the number of working days, after deducting the holidays and leave you are formally eligible for annually, multiplied by the number of hours per day you are likely to work on average, excluding your travel time to and from work). This may range anywhere between 2,000 and 2,400 hours. Let this be figure T.

Step 3: Divide X by T. This will give you the real cost per hour of your time. Let this be real Cost of Time R-COT per hour = X/T

Step 4: Calculate the Opportunity Cost Factor (OCF) for the company. Opportunity cost is the return you are expected to give the company as a result of its investment in your time. While this varies from job to job, and expectations may push it up, there is a crude way of calculating it. Take the annual turnover of the company in financial terms as targeted for the current year (AT). Also find out the annual estimated people costs (salaries + perks + all other people costs, including welfare costs etc.). Your HR department or Finance department will give you an approximate figure; the previous year's balance sheet will also give you some idea. Divide the annual turnover (targeted top line) by the annual people costs estimated for the year. You will get the Opportunity Cost Factor: OCF = AT/ACTC, or OCF = AT/∑X. Normally, in a manufacturing set-up the OCF varies from 8 to 10; in IT firms and consulting companies it may be around 3 to 4.

Step 5: Your opportunity cost of time (O-COT) is arrived at by multiplying your R-COT by the OCF. That is, if your R-COT is Rs. 1,000 per hour and the OCF is 3, your opportunity cost for the company is O-COT = Rs. 3,000 per hour; if the OCF is 10, then your O-COT is Rs. 10,000 per hour.

Note: R-COT and O-COT are always expressed in terms of rupee costs per hour.

 Examples: A Senior Vice President HR working in an IT firm has a CTC of Rs. 25 lakhs. He is expected to give about 2,000 hours of work. His R-COT is 25,00,000/2,000 = Rs. 1,250 per hour, and his per-day cost is therefore Rs. 10,000 if he is expected to work about 8 hours a day and 250 days in a year. If the firm's OCF is 3, then his O-COT is Rs. 3,750 per hour, or Rs. 30,000 per day.

In a manufacturing set-up, a Senior Vice President may have a CTC of Rs. 24 lakhs and be expected to work 200 hours a month: his R-COT is 24,00,000/2,400 = Rs. 1,000 per hour. His O-COT is Rs. 8,000 per hour if the OCF of that company is 8.
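The five steps can be written as small functions (a minimal sketch; the figures in the sample runs are the two worked examples above):

```python
# TVRLS Cost of Time methodology, Steps 1-5, as plain Python functions.

def r_cot(ctc, annual_hours):
    """Step 3: real cost of time per hour, X / T."""
    return ctc / annual_hours

def ocf(annual_turnover, annual_people_costs):
    """Step 4: Opportunity Cost Factor, AT / sum of all CTCs."""
    return annual_turnover / annual_people_costs

def o_cot(ctc, annual_hours, opportunity_cost_factor):
    """Step 5: opportunity cost of time per hour, R-COT x OCF."""
    return r_cot(ctc, annual_hours) * opportunity_cost_factor

# SVP-HR in an IT firm: CTC Rs. 25 lakhs, 2,000 hours, OCF 3.
print(r_cot(25_00_000, 2000))      # 1250.0 per hour
print(o_cot(25_00_000, 2000, 3))   # 3750.0 per hour

# Manufacturing SVP: CTC Rs. 24 lakhs, 2,400 hours, OCF 8.
print(o_cot(24_00_000, 2400, 8))   # 8000.0 per hour
```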


 

Exercise 2:

An Executive Vice-President Operations of a machine tools company holds weekly meetings of all his HODs. Each meeting lasts for two hours. There are six HODs who attend the meeting. Two of the HODs (VP Manufacturing and VP Logistics and Systems) come from two different plants located outside the HO; their travel time is one hour from the HO. The four others sit in the same building as the EVP Operations. The R-COT of the various HODs is given below. Calculate the R-COT and O-COT for the meeting, and give your rating of the appropriateness of each of the agenda items given in Table 1 below.

EVP Operations (R-COT = Rs. 2,000)

VP HR (R-COT = Rs. 1,000)

VP Marketing and Corporate Affairs (R-COT = Rs. 1,250)

VP Logistics and Systems (R-COT = Rs. 1,500)

VP Manufacturing (R-COT = Rs. 1,250)

VP Finance (R-COT = Rs. 1,000)

VP Sales and Distribution (R-COT = Rs. 1,500)

Answer: R-COT of the meeting = (9,500 + 2,750) x 2 = Rs. 24,500/- (all seven attendees' rates for the two meeting hours, plus the two travelling VPs' rates for their two travel hours)

If the OCF of the company is 8, the O-COT = Rs. 1,96,000/-
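The meeting arithmetic can be reproduced in a short Python sketch (rates as listed above; the assumption, matching the answer, is that the two out-station VPs each spend two hours travelling in addition to the two meeting hours):

```python
# Weekly HOD meeting: two meeting hours for everyone, plus two travel
# hours each for the two out-station VPs.

rates = {
    "EVP Operations": 2000,
    "VP HR": 1000,
    "VP Marketing and Corporate Affairs": 1250,
    "VP Logistics and Systems": 1500,
    "VP Manufacturing": 1250,
    "VP Finance": 1000,
    "VP Sales and Distribution": 1500,
}
travellers = ["VP Logistics and Systems", "VP Manufacturing"]

meeting = sum(rates.values()) * 2                  # Rs. 9,500/hr x 2 hrs = 19,000
travel = sum(rates[v] for v in travellers) * 2     # Rs. 2,750/hr x 2 hrs = 5,500
r_cot_meeting = meeting + travel                   # Rs. 24,500

print(r_cot_meeting, r_cot_meeting * 8)            # 24500 196000 (at OCF = 8)
```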

Table 1

Evaluate which of the agenda items are worth taking up in this meeting:

Use the following scale:

4= Very appropriate, take it up

3 = Useful, take it up

2 = Can be postponed, or someone else can take this up

1 = Not appropriate. Drop it from this meeting and use other methods like delegation etc.



Exercise 3

Evaluate your own R-COT and O-COT.

Determine how much it costs you to accompany your spouse to buy vegetables, or for your spouse to pick up your child from school or the school bus.

What cost do you have to pay if you don't do the above? Short-term costs? Long-term costs?

 

Everything can't be reduced to cost. Many of these are investments: investments that may bring returns in the very purpose for which we earn money, such as happiness, a good life and healthy living. There are many things we should keep investing in to have a good life, particularly relationships that build our future and bring happiness, or that take us closer to achieving the purpose of life.

 






 
Picture


With the shortage of talent and the increased focus on the need to have talented employees, a number of organizations want to have ADCs (Assessment and Development Centres). This is a good trend. We wish the following to be noted.






What is required for Conducting an ADC?

  1. Define your purpose. (Potential assessment for promotion decisions; or development decisions alone, like preparation for future roles; or identification of fast movers or high-potential employees to put them on a fast track; or a substitute for performance appraisal, given the limitations of the existing performance appraisal system in assessing competencies.)
  2. Identify the competencies required for assessment. (This depends on the purpose. If the purpose is to promote employees, the competencies should relate to the future jobs they are expected to perform after promotion. If it is a substitute for current performance appraisal, the competencies defined in the PMS can be taken up, or alternately new competencies suited to the purpose should be defined and stated. If it is for fast-track identification, the competencies needed across various levels to qualify as a potential fast tracker need to be identified.)
  3. Identify tools. The tools will depend on the competencies. Several tools are available in the market. They are priced per candidate, and the price may range from $20 to $100 per candidate depending on the nature of the tools: simple role plays are less expensive, while sophisticated management games and in-baskets are expensive. Role plays and management games can be accessed from the books of Pfeiffer and Jones or Pfeiffer Associates (available in India from Multimedia publishers, Mumbai). There are several books on role plays.
  4. Identify assessors. (The assessors should be trained in ADCs and should have skills in observation, recording, classification, assessment and feedback. Assessor training programs are conducted by various agencies; such training may range from about 30 to 40 hours depending on the prior exposure of the assessors, their skill background and their occupation. If in-house assessors are chosen, they should be at least two levels above the participants in the hierarchy, should not have supervised the assessees in the past, and should not have any prior opinions of, or biases about, the candidates.)
  5. Train the assessors in the tools chosen. This may require a one- to two-day orientation. Alternately, if the assessors are experienced, they may be given the tools along with the scoring systems and be oriented for 3 to 4 hours. Please note that untrained assessors should not be used.
  6. Design the ADC. The principles of ADC design should be kept in mind.
  7. Provide feedback to candidate and help in developing Personal Development Plan on the basis of the feedback received.
  8. Interpret the data from the ADC with care when using it for administrative decisions. Giving feedback to the candidate and taking his viewpoint are important development tools and are highly recommended. Remember that while the data generated from ADCs are helpful, they have not yet demonstrated a very high predictive validity and have been found to be not necessarily related to past performance. Also, the competencies demonstrated by the candidate may vary from exercise to exercise, and the results should be interpreted and used cautiously.




Knowledge and Skills Required for Designing and Managing ADCs

1 Competencies required (K, A, S) to be a participant in an ADC

This will normally be decided by the company on the basis of the purpose and the levels for which the ADC is meant.

2 Competencies required (K, A, S) to be an assessor in ADCs

  • Conceptual Knowledge of ADCs (history, purpose, methods, tools, assessments, validity, reliability, process, rules etc.)
  • Knowledge of competency mapping and assessment tools and techniques.
  • Knowledge of various tools and their potential and limitations
  • At least one experience of going through an ADC and using the tool for which he/she is an assessor
  • Skills of Observation, Recording, Classification and Assessment
  • Skills of giving feedback
3 Competencies required (K, A, S) to design and conduct an ADC

  • All the above.
  • Knowledge of the tools and their sources of availability
  • Knowledge of the organization and its requirements
  • Knowledge of the limitations of ADC and their appropriateness for various purposes
  • Knowledge of the principles of conducting an ADC
  • Ability to design the ADC in terms of sequencing of the exercises, allocation of assessors, use of assessment sheets, spacing between the exercises, dynamics of participants etc.
4 Competencies required (K, A, S) to manage an ADC

  • Administration skills
  • IT skills
  • Logistics management
5 Competencies required (K, A, S) to develop tools required for an ADC

  • Tool development skills. These are intensive skills to develop, and training in developing various tools requires varying durations. The easiest to develop may be role plays, which can be learnt through a two- to three-day skills training on role plays.
  • In-basket development skills can be developed similarly over two- to three-day skill development exercises.
  • Behavior simulations and psychometrics may require wider exposure and insights into human behavior. The MBTI requires a week's training; the 16 PF requires two to three days of training and a psychology background; Thomas Profiling requires a two-day training; and so on. Each tool has its own requirements. Even Udai Pareek's Handbook of HR Instruments needs training in its use.
  • TVRLS offers special programs in psychometric testing
6 Competencies required (K, A, S) to use ADC data for development decisions, promotions etc.

  • Knowledge of ADCs, their reliability, validity and limitations
  • Ability to prepare development plans from ADC data. Most often ADC data are sketchy and numeric. This requires an understanding of behavior indicators and seeking the right information from the assessors on the meaning of the competencies etc.
  • Understanding of the organization and its requirements.
  • Ability to use multiple sources of information and take HR decisions.



When should I go for an ADC?

 ADCs are resource (expert) intensive and hence expensive. A one-day ADC may require a minimum of four assessors and may cost a minimum of eight consulting days of charges (4 assessors x 2 days each: one day for conducting the ADC and one day for processing the data and preparing reports; the first ADC may involve more days if tools have to be identified). This is one of the reasons why ADCs have not become popular in India, or even in the US or UK. They are gaining popularity nowadays as talent has become scarce and organizations are repeatedly required to spot talent within their own company or outside.
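A rough budgeting sketch of the arithmetic above: only the four-assessors-times-two-days structure comes from the text; the consulting day rate, tool price and candidate count below are hypothetical placeholders to be replaced with your own figures.

```python
# Rough ADC budget: four assessors x two days each (one day conducting,
# one day processing and report writing), plus per-candidate tool charges.
# day_rate, tool_price and candidates are illustrative assumptions only.

assessors = 4
days_per_assessor = 2          # one day for the ADC, one for processing
day_rate = 50_000              # hypothetical consulting fee per day (Rs.)
tool_price = 5_000             # hypothetical per-candidate tool cost (Rs.)
candidates = 12

consulting = assessors * days_per_assessor * day_rate   # 8 consulting days
tools = tool_price * candidates
total = consulting + tools

print(total, total / candidates)   # total cost and cost per candidate
```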

Look at the benefits of the ADC and calculate the ROI. The development needs identified in the ADC may require further investments by the company beyond the investment in the ADC itself, and they are often not easy to meet. For example, some organizations have identified achievement motivation, change management and leadership skills as development needs after an ADC. These skills require not merely sponsoring a candidate for a program, but following up and continuously providing an environment for practicing the skills.

Low Cost Assessments or Substitute ADCs

There can be many innovative and cheaper methods of assessment.

A well-designed biographic form or career history form can itself be a good assessment tool.

Similarly, a well-conducted interview using the principles of the Behavioral Event Interview can be a good assessment tool. This, combined with psychometric tests conducted in an atmosphere of development and trust, can go a long way in identifying candidates with potential.

If we add to the above a case discussion and a group discussion or an in-house presentation, it can be a deadly combination.

Add to this some external assessment by trained experts, and it may solve the issue for a number of organizations.

However, remember that in such a case each candidate needs to be assessed on the basis of three to four hours of inputs given to a team of experts. In a day, the team may at best be able to assess four to six candidates.

There are always costs involved in assessment. Ascertain the ROI before you undertake the assessment task and choose your methodology.