
Reengineering the Graphic Rating Scale


REENGINEERING THE GRAPHIC RATING SCALE


Sven Aelterman
Troy State University / Hogeschool Gent
TSU Box # 821292, Troy, AL, 36082
1-334-670-4487
sven@aelterman.com


Dr. Hank Findley
Troy State University
200 Bibb Graves, Troy, AL, 36082
1-334-670-3200
hfin655887@aol.com

ABSTRACT
It appears that few Human Resource Management topics are as controversial as performance
management and appraisal. This article is not meant to discuss the merits of performance
appraisal; rather, it attempts to propose two changes to one of the most widely used, if not
the single most widely used, performance appraisal methods: the graphic rating scale. The
first proposed change is a completely new one and attempts to provide a method for identifying
high performers more clearly while at the same time identifying poor performers more easily.
The second part of the paper advocates the use of automated systems to store and analyze
performance appraisals.


REENGINEERING THE GRAPHIC RATING SCALE
INTRODUCTION
Performance Appraisals
Performance appraisals are among the most controversial topics in Human Resource
Management. For every article advocating the use of performance appraisal and/or management,
there seems to be another article that argues against using performance appraisals. TQM
advocates in particular argue against performance appraisals. Their main concern is that
performance appraisals are contrary to the total quality principle, where striving for quality
is an ongoing effort (Allender, 1995).
There is a need for depicting accurate performance. Many human resource decisions are
made based on performance, such as pay raises, promotions and demotions, and even
termination. Evaluating recruitment results is also often tied to performance. Identifying areas
where staff development is needed can also be based, at least partially, on performance data.
Research even suggests that formal appraisals help in creating a competitive edge (Longenecker
& Fink, 1999). So at least for now, performance appraisals remain a necessity, for several
reasons, including legal and motivational (Findley, Amsler & Ingram, 1999). Therefore, it is
important to continue to improve the way performance appraisals are performed.
Graphic Rating Scales
Graphic rating scales have come under scrutiny because of several issues related to their
use in performance appraisal. The most cited problems with rating scales are halo and
leniency/strictness (Chiu & Alliger, 1990). The halo effect is a problem that occurs when a rating
on one job dimension affects the rating on another (Solomonson & Lance, 1997). Leniency
or strictness occurs when a rater tends to rate all ratees high or low, respectively.
However, the graphic rating scales also have important advantages. Graphic rating scales
are easy to develop, administer, and interpret. Furthermore, graphic rating scales yield results
that allow comparison across ratee groups (Chiu & Alliger, 1990). Moreover, graphic rating
scales are generally recognized as the most widely used method in performance appraisal.
Recent research has also found that graphic rating scales are as good as or better than two
other methods that are generally deemed superior. Tziner, Joanis, and Murphy (2000) concluded
that, especially in terms of ratee attitudes and goal characteristics, the graphic rating scale
outperforms behaviorally anchored rating scales (BARS) and behavior observation scales (BOS).
For all of the reasons above, reengineering should aim to combine the best factors of the
existing methods while trying to eliminate the weak elements of each method. Because of their
widespread use, it is worth spending time trying to improve the graphic rating scales. This text
will try to do exactly that, by suggesting two ways to eliminate problems with the graphic rating
scale. Each method will be explained in detail, after which possible cost issues and fairness
issues will be discussed.
Typical Problems with the Current Situation
Throughout this text, the same cases will be used to illustrate the proposed
improvements. These cases will be introduced here. The first case looks at the performance
appraisal problems in a sales environment. The second case deals with the appraisal of hourly
workers at a major US manufacturer of toys.

Case 1
One salesperson's interest in the firm's products has declined to the point where it
starts interfering with his ability to sell the product effectively. His manager has noticed this
and has rated him accordingly on the appropriate job dimension, Product Knowledge. The full
performance appraisal can be found in the appendix (Figure 4: Performance appraisal of Jason
Borman). Mathematically, he ends up being rated average (using the values from the graph in
Figure 1: Graph showing the current relation between performance dimensions and their
mathematical values). The fact that he has been rated Unacceptable on Product Knowledge is
not reflected in the overall performance.
Case 2
The supervisor of a work team in a large manufacturing plant is faced with the feared,
traditional year-end performance appraisal. The company uses the graphic rating scale to
perform the performance appraisal for its hourly workers. This particular supervisor supervises a
team of 14 in the main manufacturing plant. As for many supervisors, with the year's end comes
what he sees as a major annoyance: performance appraisal.

In this particular plant, the year's end also means a lot of extra work because of the holiday
season. Two weeks ago, one employee made a mistake that caused the team to have to put in
some serious overtime to catch up on lost production time. That employee put in the most
overtime because she felt guilty. The remainder of the year has been uneventful. The supervisor
failed to note, however, that she went to great lengths to increase the team's overall
productivity, and succeeded.
Her performance form can be found in the appendix (Figure 6: Performance appraisal of
Erika De Wit.).

This case presents two of the problems described above: a lack of objectivity and recency
error. As can be seen from the performance appraisal form, John rated Erika down on Quality of
Work (although there have been no other incidents) and Teamwork. A fairer review would
probably have given Erika an Above Average rating for Teamwork and, depending on the outcome
of an investigation regarding the mistake, Average for Quality of Work. This would have caused
Erika to be rated Above Average on the overall score.

This performance appraisal shows neither objectivity nor diligence. The supervisor
clearly let the recent mistake weigh too heavily on the appraisal (halo and recency effects),
while not taking into account the overall performance the employee exhibited.
FIRST PROPOSAL TO IMPROVE GRAPHIC RATING SCALES
The first proposed improvement will target the identification of poor and high
performers. Identifying poor and high performers is important in maintaining a motivated
workforce.
Problem Background
The traditional relationship between performance dimensions and their mathematical
values is an interval scale, as is shown in the graph below:

[Figure omitted in this rendering: a graph mapping the performance labels Unacceptable, Below Average, Average, Above Average, Excellent, and Outstanding to the values 1 through 6.]

Figure 1: Graph showing the current relation between performance dimensions and their
mathematical values.
The interval between each performance level is equal, in this case one. So someone
with an unacceptable performance (either overall or on one single job dimension) is deemed to
be exactly as far from average as someone with an excellent performance, just on the other side
of the scale. This does not match reality. However, no one really knows what the difference
between Unacceptable and Below Average on the one hand, and Below Average and Average on
the other, really is.
It is important to note that an unacceptable rating on one or more job dimensions
constitutes a cost to the company. That cost is hard to determine, but can be very high. Think
about an employee getting an Unacceptable rating on Safety in an oil refinery. Therefore, it is
very important to identify poor performers.
Proposed Solution
The solution to this issue would be to use some type of a logarithmic scale, as presented
in the graph below:
[Figure omitted in this rendering: a graph mapping the same performance labels to values on a scale running from -2 to 5.]

Figure 2: First proposed graph with an alternative relationship between performance dimensions
and their mathematical value.
The graph shows clearly that the difference between Unacceptable and Below Average
now is considerably greater than the difference between Below Average and Average. So, if a
person was rated Unacceptable on a certain job dimension, this rating of Unacceptable would
weigh far more heavily on the overall performance than in the existing use of graphic rating
scales. At the same time, for employees rated average, this proposal has no influence on
their overall rating, which increases the perceived fairness of the method, which, as indicated
above, is important.
Does this graph come close to depicting the real difference between Unacceptable and
Below Average on one hand and Below Average and Average on the other? As stated before,
that difference has not been determined. The recommendation here is to use a common policy
throughout the company, again to increase the perceived fairness.
Refining the Solution
To continue on the issue of perceived fairness: high performers may find the use of the
graph above unfair because their high performance is now less recognized than the performance
of above average employees. The solution to this issue would be to change the direction of the
curve at a certain point, to allow a greater distance between Outstanding performance and
Excellent performance. This is shown in the graph below:

[Figure omitted in this rendering: a graph like the previous one, but with the curve changing direction near the top of the scale so that Outstanding lies farther above Excellent.]

Figure 3: Graph displaying the relationship between performance dimensions and their
mathematical value.
The point of inflection in the curve can occur sooner, so that the difference between
Above Average and Excellent would be greater. Furthermore, the curvature of the graph is also
variable. The only requirement is that within a company or department, the same graph is used to
project performance ratings. If not, employees may perceive it to be easier for some job
categories to get an Outstanding overall rating. This will affect the perception of fairness and
therefore citizenship behavior (Chan Kim & Mauborgne, 1997).
The tables below show an example performance rating using graphic rating scales before
and after the use of the first proposed improvement. The table uses traditional performance
dimensions, such as Quality of Work and Quantity of Work. This example is not meant to
provide a sample performance appraisal form; rather, it is used to show how this proposal works.
Below each performance level (i.e., Unacceptable, etc.) is the numerical value assigned to it,
according to the graphs above.
Scale values: Unacceptable (1), Below Average (2), Average (3), Above Average (4), Excellent (5), Outstanding (6)

Job Dimension          Rating
Quality of Work        Above Average (4)
Quantity of Work       Average (3)
Job Knowledge          Unacceptable (1)
Attendance             Above Average (4)
Reliability            Average (3)
Safety                 Above Average (4)
Overall Performance    3.167

Table 1: Performance appraisal form using the traditionally assigned values.
Scale values: Unacceptable (-1.5), Below Average (1.9), Average (3), Above Average (3.75), Excellent (4), Outstanding (5)

Job Dimension          Rating
Quality of Work        Above Average (3.75)
Quantity of Work       Average (3)
Job Knowledge          Unacceptable (-1.5)
Attendance             Above Average (3.75)
Reliability            Average (3)
Safety                 Above Average (3.75)
Overall Performance    2.625

Table 2: Performance appraisal form using the weighted performance dimensions.
As the example above shows, by weighting the performance dimensions, different results
can be obtained.
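For concreteness, the overall ratings in the two tables above can be reproduced with a short script. This is an illustrative sketch only: the dictionary and function names are our own, and the numeric values are the ones shown in the table headers.

```python
# Sketch: computing an overall rating under the traditional linear scale
# versus the proposed nonlinear scale. Names are illustrative only.

LINEAR = {"Unacceptable": 1, "Below Average": 2, "Average": 3,
          "Above Average": 4, "Excellent": 5, "Outstanding": 6}

# Nonlinear values from Table 2: Unacceptable sits far below the rest,
# so a single Unacceptable rating drags the overall score down sharply.
NONLINEAR = {"Unacceptable": -1.5, "Below Average": 1.9, "Average": 3,
             "Above Average": 3.75, "Excellent": 4, "Outstanding": 5}

def overall(ratings, scale):
    """Average the mapped values across all job dimensions."""
    return sum(scale[r] for r in ratings) / len(ratings)

# Ratings from the example appraisal (Quality of Work through Safety).
appraisal = ["Above Average", "Average", "Unacceptable",
             "Above Average", "Average", "Above Average"]

print(round(overall(appraisal, LINEAR), 3))     # 3.167, as in Table 1
print(round(overall(appraisal, NONLINEAR), 3))  # 2.625, as in Table 2
```

With the traditional values the Unacceptable rating on Job Knowledge barely registers; with the nonlinear values, the same appraisal drops below Average.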
This refinement can also have a drawback. Assigning a higher value to Outstanding
might undo the effect of assigning the lower value to Unacceptable in those cases where an
employee has been rated Outstanding and Unacceptable. The table below illustrates this.

Scale values: Unacceptable (-1.5), Below Average (1.9), Average (3), Above Average (3.75), Excellent (4), Outstanding (5)

Job Dimension          Rating
Quality of Work        Outstanding (5)
Quantity of Work       Average (3)
Job Knowledge          Unacceptable (-1.5)
Attendance             Outstanding (5)
Reliability            Above Average (3.75)
Safety                 Excellent (4)
Overall Performance    3.208

Table 3: If the employee has both Unacceptable and Outstanding job dimensions.
As can be seen from the table above, the fact that this particular employee scores low on
job knowledge, but average to high on the other job dimensions, causes the advantage of the
solution to be lost. When returning to the values of the original graph (Figure 2: First proposed
graph with an alternative relationship between performance dimensions and their mathematical
value.), we get this result:
Scale values: Unacceptable (-1.5), Below Average (1.9), Average (2.9), Above Average (3.5), Excellent (3.9), Outstanding (4)

Job Dimension          Rating
Quality of Work        Outstanding (4)
Quantity of Work       Average (2.9)
Job Knowledge          Unacceptable (-1.5)
Attendance             Outstanding (4)
Reliability            Above Average (3.5)
Safety                 Excellent (3.9)
Overall Performance    2.800

Table 4: Using the values from the original improvement proposition.
In this example, the assigned ratings remained unchanged, but the mathematical values
assigned to each performance dimension have been taken from Figure 2. Using these values, the
overall rating falls back into the Below Average category. We can conclude that organizations
must make a choice between using a curve with a point of inflection and using a regular
exponential-like curve. When using a graph with a point of inflection, it may be advisable to
further increase the difference between Unacceptable and Below Average.
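This trade-off can be checked numerically. The sketch below (illustrative names; scale values taken from Tables 3 and 4) computes the same appraisal under both curves.

```python
# Comparing the inflected curve (Table 3 values) with the original
# Figure 2 curve (Table 4 values) for an employee rated both
# Outstanding and Unacceptable. Names are illustrative only.
INFLECTED = {"Unacceptable": -1.5, "Below Average": 1.9, "Average": 3,
             "Above Average": 3.75, "Excellent": 4, "Outstanding": 5}
ORIGINAL = {"Unacceptable": -1.5, "Below Average": 1.9, "Average": 2.9,
            "Above Average": 3.5, "Excellent": 3.9, "Outstanding": 4}

# Quality of Work, Quantity of Work, Job Knowledge, Attendance,
# Reliability, Safety.
ratings = ["Outstanding", "Average", "Unacceptable",
           "Outstanding", "Above Average", "Excellent"]

def overall(scale):
    return round(sum(scale[r] for r in ratings) / len(ratings), 3)

print(overall(INFLECTED))  # 3.208: the Unacceptable rating is masked
print(overall(ORIGINAL))   # 2.8: the overall falls back below Average
```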
Cost
The added cost of adopting this method is minimal. Initially, time must be spent finding
fair weights to apply to the performance dimensions. However, once these numbers have been
found and introduced on the performance appraisal forms or, better yet, introduced in the
performance appraisal software, the use of this proposal becomes completely transparent. It may
be necessary to spend time explaining the new method to the raters and ratees, in order to make
sure that the new scales are perceived as being fair.
Fairness
This method is probably not entirely free of criticism. For example, it is necessary to
inform employees of the use of this method. If not, employees may consider it unfair that
unacceptable performance is treated differently from performance rated below average. It is to
be expected that poor performers especially would perceive this method as less fair, while high
performers are expected to think positively of it.
It is also important to mention that this method can be combined with an established
method: weighting the job dimensions. If a company finds that safety is three times as important
as quantity of work, the performance rating for safety is multiplied by three before the total is
computed. This proposal complements and possibly enhances the weighting of job dimensions. That
is because the use of a negative number for an unacceptable performance that is also weighted
will increase the influence of that unacceptable rating (through the negative number) even more.
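A small sketch of how dimension weights combine with the nonlinear values; the dimensions and the threefold weight follow the safety example above, while the code itself is our own illustration.

```python
# Hypothetical example: Safety weighted three times as heavily as
# Quantity of Work, using the nonlinear scale values from Figure 2.
values = {"Safety": -1.5, "Quantity of Work": 3.0}   # mapped scale values
weights = {"Safety": 3.0, "Quantity of Work": 1.0}   # Safety is 3x as important

# Weighted average: the negative Unacceptable value, amplified by its
# weight, dominates the overall score.
overall = sum(values[d] * weights[d] for d in values) / sum(weights.values())
print(overall)  # -0.375
```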
Example Solution Using the First Proposal
To provide another example of the use of this proposal, Case 1 (see above) will be
reviewed using the improved graphic rating scale. The employee was rated average using the
traditional method of assigning linearly increasing numbers to the performance dimensions. In
the appendix, the performance appraisal form using weighted values for the performance
dimensions can be found (Figure 5: Performance appraisal of Jason Borman using the proposed
improvement.). Note that in this instance, different values have been assigned to the performance
dimensions than those shown in the previous examples.
SECOND PROPOSAL TO IMPROVE GRAPHIC RATING SCALES
The second improvement will attempt to effectively combat recency errors and leniency
or strictness. Recency errors occur when the raters only include recent events and performance
while doing the performance appraisal. Research has shown that with paper-based systems, only
performance of the last 6 to 12 weeks is taken into account (Dutton, 2001). When performing
annual or semi-annual performance reviews, raters should strive to include the performance of the
entire affected period, i.e., either 12 or 6 months, and more importantly, to weight each performance
incident equally, regardless of when it occurred. Strictness or leniency occurs when raters have a
tendency to rate either low or high and occurs more with graphic rating scales than with a
method that uses ranking.

Problem Background
Often, performance appraisal is done at the end of the year. Many managers consider the
performance appraisal to be a necessary evil and want to get it over with as soon as possible. On
top of that, the end of the year may be a busy period for the firm, causing the issue of
performance appraisal and review to be taken lightly. Research shows that because of these
factors, objectivity suffers and raters tend to use the most recent performance during the
performance appraisal. Usually, paper-based performance appraisals don't take raters back more
than 12 weeks (Dutton, 2001). This last factor is known as recency error. Other negative effects
affecting performance appraisal are halo and rating inflation, as discussed above.
"The reason managers dread doing performance appraisals and employees dread getting
them is that they see them as an event and not a process," says Dick Grote, author of "The
Complete Guide to Performance Appraisals" (Grote, 1996).
Proposed Solution
To avoid recency errors, companies must consider automating their performance
management system. While many businesses have automated many of their administrative
processes, the automation of performance management in general and performance appraisal in
particular has only recently attracted attention. Through the use of new software, it becomes
easier to implement an automated system. Some of the applications that are available now are
also accessible to small and medium-sized businesses.
While a discussion of the different software packages that are available for performance
management is beyond the scope of this paper, it is worth mentioning the advantages that come
with such software. Afterwards, some of the changes that would need to be made to the
performance management practices in a company will be highlighted.

Advantages of Automating Performance Appraisal Management
Lately, 360-degree performance appraisals have gained acceptance. While adding
reviewers to the appraisal process may be beneficial to the outcome of the appraisal, it does
create a significant overhead for the manager involved. By automating the actual recording of the
performance appraisal, a great deal of time can be saved. Instead of having each reviewer fill
out a paper form and submit it to the responsible manager, who then has to compile the
information, an automated system could let the reviewers log on to an internal web site and fill
out the form electronically [1]. The responsible manager can then check on the progress of the review
and simply request the final, compiled data from the computer system.
Of equal usefulness is the ability to keep a history of performance appraisals without the
need for an archiving system and storage space to store the paper documents. Tied to keeping an
extended history without added cost or effort is the ability to search that history quickly. Keeping
a history of performance appraisals is useful because that data can be analyzed to predict future
performance of new hires, an important aspect of the selection process.
A third advantage of using automated systems for performance appraisals is that
information is a very important asset that must be safeguarded. Electronic data can be backed up
more easily than paper records. And while electronic data is usually viewed as less secure than
paper-based records, implementing access controls for electronic records is actually more
feasible than doing the same thing with paper records.
Keeping an electronic history not only makes the data easier to analyze than paper files
do; it also allows different views of the data to be created with ease. The performance
appraisal history can be viewed by department, by age, by education level, etc. With the
technologies that are available today, non-technical managers can create those reports
themselves, without being dependent on an IS department to create the necessary queries.

[1] While at the same time receiving assistance from the software on how to fill out the form appropriately!
Since compiling the performance appraisal data is so easy and quick, it becomes possible
to schedule performance reviews more than once a year. This has been advocated by many
performance appraisal specialists as a way to improve the entire performance management
process.
Finally, using software to perform performance appraisals ensures that the same policies
and procedures are used company-wide. Having an enforced company-wide policy helps ensure
the validity and the fairness of the whole process.
Disadvantages As Well
While automating the performance appraisal process clearly does have a lot of tangible
advantages, care should be given to possible pitfalls, the first of which is data security and
privacy. As has been mentioned above, electronic data is often regarded as being less secure. But
by taking the necessary precautions, it is possible to create an electronic system that is both
secure and efficient. This is shown in practice by Red Hat Corp., a distributor of Linux operating
system software. Red Hat has installed an internet-enabled performance management software
package, and has so far successfully secured the data (Dutton, 2001). More than that, while it
may be hard to limit access to paper-based performance appraisals, especially inside the HR
department, the software allows for strict rules determining the access permissions of users.
Beyond security issues, there is the issue of cost. Acquiring or implementing software that
is capable of the functions listed above is costly. Then there is the added cost of training, support,
and maintenance. However, in the total cost of performance appraisals, the cost of the
technology is minimal. Research shows that the annual cost of performance appraisal per
employee can be as high as $3,200. The majority of the costs are in preparing the appraisal,
conducting reviews, designing the appraisal system, etc. (Dutton, 2001). An automated system
will actually help reduce the time spent on the most costly parts of performance management,
thereby directly giving a return on investment.
Companies must also be alert to over-automating the process. In the words of Robert
Bacal (1999): "Performance appraisal is an interpersonal communication process." While
software may assist reviewers in gathering and analyzing the data, the performance review
sessions do remain a human affair.
Effective Use of Software
To use software effectively for performance appraisals, it should offer an employee log
functionality. The employee log could be redesigned to include an immediate evaluation of
critical incidents as soon as they have been fully investigated. This solution stems from the
belief that when evaluating performance based on critical incidents, the evaluation tends to be
different after time has passed. In general, the evaluation tends to reduce extreme performance
(good or bad) to average performance (Mitchell & James, 2001).
When entering new critical incidents in the employee log, the rater should be given
maximum support from the information system. For example, companies that currently use the
graphic rating scale and have coupled it with descriptions of what performance level relates to
what behavior [2] may choose to have the rater select from a list of behaviors instead of a list of
numbers. In that case, the rater is relieved of the duty of having to rate someone Average,
Below Average, or Excellent. Instead, he selects the behavior that was exhibited by the employee
from a list. The software assigns the actual rating in the background. Then, when the time for the
performance appraisal comes, it suffices to retrieve the list of critical incidents with their
associated rating.

[2] Instead of Graphic Rating Scales (GRS), these scales are Behaviorally Anchored Rating Scales (BARS).
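A minimal sketch of such an employee log, assuming a hypothetical mapping from observed behaviors to ratings; the behavior descriptions, their values, and all names are invented for illustration and do not come from any real package.

```python
from datetime import date

# Hypothetical employee log: the rater picks an observed behavior and
# the software assigns the numeric rating in the background.
BEHAVIOR_RATINGS = {
    "Helped teammates meet a deadline": 4,  # Above Average
    "Met the production quota": 3,          # Average
    "Caused avoidable rework": 1,           # Unacceptable
}

log = []  # entries: (date the incident occurred, behavior observed)

def record(incident_date, behavior):
    """Log a critical incident as soon as it has been investigated."""
    log.append((incident_date, behavior))

def appraise():
    """Weight every incident equally, regardless of when it occurred,
    to counteract recency error."""
    ratings = [BEHAVIOR_RATINGS[behavior] for _, behavior in log]
    return sum(ratings) / len(ratings)

record(date(2001, 3, 4), "Helped teammates meet a deadline")
record(date(2001, 12, 1), "Caused avoidable rework")
print(appraise())  # 2.5: the March incident counts as much as the December one
```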
Performance appraisals today are based on more than critical incidents. For example, the
goals that are set for employees and work units should also be entered in the system. When a
goal is met or not met, this should be appropriately entered into the system. The system can then
rate the performance of the individual or team. Rating the achievement of goals is a separate
appraisal method, Management by Objectives (MBO). However, it is possible to combine
graphic rating scales with MBO. The advantage of using software when combining different
appraisal methods is that the different methods can be transparent to the user.
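How such a combination might look in software is sketched below; the anchor values assigned to met and missed goals are assumptions made for illustration, not values from the paper.

```python
# Hypothetical sketch: folding MBO goal results into a graphic-rating-
# scale appraisal so the combination stays transparent to the user.
grs_scores = {"Quality of Work": 3.75, "Safety": 3.0}  # scale values already assigned
goals = {"Increase line throughput": True,             # goal met
         "Zero lost-time accidents": False}            # goal missed

# Assumed anchors: a met goal counts as Above Average (4),
# a missed goal as Below Average (2).
goal_scores = [4 if met else 2 for met in goals.values()]

all_scores = list(grs_scores.values()) + goal_scores
overall = sum(all_scores) / len(all_scores)
print(overall)  # 3.1875
```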
This continuous use of software is consistent with the belief that performance appraisals
should be an ongoing assessment instead of a once-in-a-year event (Fandray, 2001). While the
performance review with the employee may still take place only once a year, it facilitates and
encourages a continuous evaluation of employee performance. This proposal also achieves three
of the six improvements suggested by Weizmann, namely "Link the performance-management
calendar to the organization's business calendar," "Conduct a mid-year review," and "Don't get
bogged down in paperwork" (Weizmann, 2001).
Fairness
This proposal has the potential to increase employees' perception of fairness in
performance appraisals. Since an unbiased computer system actually does the rating, issues with
performance appraisals such as discrimination based on sex or race can be avoided. Since
software-based performance appraisals tend to focus on results and actions rather than
personality traits, employees are more likely to view them as fair (Dutton, 2001).

To use this solution effectively, it is necessary for supervisors to include every critical
incident promptly. This is the only way to ensure that all behavior is logged and will be taken
into account during the performance appraisal interview. Companies can enforce the use of this
type of employee log by training managers to recognize and rate behavior and by regularly
checking if the employee logs are filled out conscientiously.
Example Solution Using the Second Proposal
By revisiting Case 2, we can show that if records of performance had been kept diligently
for each employee, this employee would not have been rated Below Average. Her efforts would
have been recognized properly. Although she did cause overtime, something that must
undoubtedly be included in the performance appraisal, it should not reflect on other job
dimensions and it should not be the only event included in the appraisal.
As described above, using performance appraisal software has the potential to reduce
recency and objectivity errors when using graphic rating scales. As always, introducing a new
method should be followed by training of both supervisors and employees.
RECOMMENDATION
While other solutions may exist to improve the effectiveness of the graphic rating scale,
the solutions presented in this paper may very well tackle the problems more effectively than
others, since they attempt to reduce the influence of human behavior more than other
solutions do.
The reader should also be aware that no research has been conducted as to the
feasibility of the proposed improvements. The ideas presented in this paper are purely
theoretical. However, the example cases present frequently occurring problems with
performance appraisals, and in those cases, using the proposed improvements definitely helps.
These errors may be reduced greatly by improving the training
for the supervisors and managers performing the performance appraisal. However, the need for
training supervisors has been known for a long time (Buzzotta, 1988; Eyres, 1989). One would
expect that companies taking performance appraisal seriously have already implemented training
and awareness programs for their supervisors and managers. And yet, performance appraisals
continue to be a source of controversy.
One final remark: as with any proposed solution to a problem, when first implementing it,
care should be taken to combine the guidelines described in this paper with common sense. This
is especially important in this case because, as stated before, no research has been conducted
as to the actual usefulness of the proposals.

APPENDIX
Performance Appraisal Forms for Case 1
Figure 4: Performance appraisal of Jason Borman.
General Manufacturers

Performance Appraisal Form for: Jason Borman    Job Title: Sales Representative
Date: 12/10/2001
Prepared by: Belinda Gomez    Job Title: Sales Manager

Scale values: Unacceptable (1), Below Average (2), Average (3), Above Average (4), Excellent (5), Outstanding (6)

Category              Rating
Quality of Work       Below Average (2)
Quantity of Work      Above Average (4)
Teamwork              Above Average (4)
Attendance            Above Average (4)
Product Knowledge     Unacceptable (1)
Overall               3

Comments:

Employee has seen and read this performance appraisal form on 12/11/2001:

Jason Borman (signature of employee)

Figure 5: Performance appraisal of Jason Borman using the proposed improvement.

General Manufacturers

Performance Appraisal Form for: Jason Borman    Job Title: Sales Representative
Date: 12/10/2001
Prepared by: Belinda Gomez    Job Title: Sales Manager

Scale values: Unacceptable (-1.5), Below Average (1.5), Average (3), Above Average (4.5), Excellent (6), Outstanding (8)

Category              Rating
Quality of Work       Below Average (1.5)
Quantity of Work      Above Average (4.5)
Teamwork              Above Average (4.5)
Attendance            Above Average (4.5)
Product Knowledge     Unacceptable (-1.5)
Overall               2.7

Comments:

Employee has seen and read this performance appraisal form on 12/11/2001:

Jason Borman (signature of employee)

Performance Appraisal Forms for Case 2

Figure 6: Performance appraisal of Erika De Wit.

Toys Corp.

Performance Appraisal Form for: Erika De Wit    Job Title: Production Line
Date: 12/15/2001
Prepared by: John Smith    Job Title: Manufacturing Supervisor

Category              Rating
Quality of Work       Unacceptable (1)
Quantity of Work      Average (3)
Teamwork              Below Average (2)
Attendance            Above Average (4)
Safety                Average (3)
Overall               2.6

Comments: Caused team to put in overtime

Employee has seen and read this performance appraisal form on 12/17/2001:

Erika De Wit (signature of employee)

REFERENCES

Allender, H. D. (1995). Reengineering performance appraisals the TQM way. Industrial
Management, Nov/Dec 1995, 10.

Bacal, R. (1999). Seven Stupid Things Human Resource Departments Do To Screw Up
Performance Appraisals. Retrieved January 13, 2002, from
http://www.work911.com/performance/particles/stuphr.htm

Buzzotta, V. R. (1988). Improve your performance appraisal. Management Review, Aug. 1988,
Vol. 77, 40-44.

Chan Kim, W. & Mauborgne, R. (1997). Fair process: Managing in the knowledge economy.
Harvard Business Review, July/August 1997, 65-76.
Chiu, C. & Alliger, G.M. (1990). A proposed method to combine ranking and graphic rating in
performance appraisal: The quantitative ranking scale. Educational and Psychological
Measurement, Fall 1990, Vol. 50, Issue 3, 493-505.
Dutton, G. (2001). Making reviews more efficient and fair. Workforce, April 2001, Vol. 80, Issue
4, 76-82.
Eyres, P. S. (1989). Legally defensible performance appraisal systems. Personnel Journal, July
1989, 58-62.
Fandray, D. (2001). The new thinking in performance appraisals. Workforce, May 2001, Vol. 80,
Issue 5, 36-40.
Findley, H. M., Amsler, G. M. & Ingram, E. (1999). Reengineering the performance appraisal.
National Productivity Review, Winter 2000, 39-42.
Longenecker, C. O. & Fink, L. S. (1999). Creating effective performance appraisals. Industrial
Management, Sep/Oct 1999, 18-23.
Mitchell, T. R. & James, L. R. (2001). Building better theory: Time and the specification of
when things happen, The Academy of Management Review, October 2001, Vol. 26, Issue
4, 530-547.
Solomonson, A. & Lance, C. (1997). Examination of the relationship between true halo and halo
error in performance ratings. Journal of Applied Psychology, Vol. 82, Issue 5, 665-674.
Tziner, A., Joanis, C. & Murphy, K. R. (2000). A comparison of three methods of performance
appraisal with regard to goal properties, goal perception and ratee satisfaction. Group and
Organization Management, Vol. 25 No. 2, June 2000, 175-190.
Weizmann, J. (2001). Quote found in Fandray, D. (2001).

The companies, people, and events presented in this paper are fictitious. Any resemblance to actual companies,
people, or events is entirely coincidental.
