A Comprehensive Faculty Performance Evaluation Model using Expert Systems

1 Introduction

In the midst of competitive turbulence, organizations are facing an unprecedented need to restructure and re-engineer in order to remain competitive and to survive (Grote, 1996; Stivers and Joyce, 2000). Technical educational institutions (TEIs) today face extreme competitive pressure in terms of ratings. With the changing paradigms of the way TEIs manage people, issues of performance evaluation have risen to the top of every decision-making agenda, and performance evaluation systems have come to play an indispensable role in helping TEIs achieve their goals. In TEIs, faculty forms the core of the human resource, and faculty performance plays an important role in the overall evaluation of an institution. Institutions cannot survive with unmotivated and uncommitted faculty in the continually changing global competitive academic environment. An effective faculty performance evaluation (fpe) system is a major component of an organization: it allows every employee to feel that his/her contribution has had an impact on the success of the organization and fosters the desire to add to that success (Boice and Kleiner, 1997). Consequently, faculty performance evaluation, or faculty appraisal, is an important component of the HR policy of TEIs. A comprehensive fpe system is necessary for any academic institution to maintain high standards of excellence, effectiveness and accountability (Aubrecht, 1984). Experience and studies indicate that the absence of up-to-date performance evaluation systems has been one of the main reasons for difficulties and delays in reviews of recognition, rewards and promotions, as well as in planning for employee development. Effective performance appraisal systems help to create a motivated and committed workforce; to be effective, they require the support of top management to translate organizational goals and objectives into personalized, employee-specific objectives (Boice and Kleiner, 1997).
When implemented and managed properly, fpe systems are a value-added service that gives the organization a competitive edge by helping achieve its organizational strategies (Allan, 1994; Kotter and Hesketh, 1992; Martin and Bartol, 1998; Orpen, 1997). Performance appraisal can be simply defined as "an effort to determine worth"; more formally, it is defined as "progress towards previously stated objectives". A fpe process is a formal management procedure used in the performance appraisal of employees. The objective of the fpe is to examine the work performance of the employee with a view to identifying strengths and weaknesses. In most organizations, fpe results are used to determine reward outcomes for the better-performing employees, as well as to identify opportunities for improvement and development for the weaker or poor performers. It is important to link fpe systems directly to recognition as well as to consequences, in the absence of which employees will discount the appraisal process. Such linkage allows employees to see how successful or unsuccessful completion of objectives affects them directly (Boice and Kleiner, 1997). The objectives of a fpe system should reflect the organizational goals and provide a linkage between employee and organizational performance (Boice and Kleiner, 1997). The organizational goals


should be translated into individual goals as the employee's personal performance targets and standards by which performance will be evaluated. A fpe system must ensure objective, transparent, distinct and fair evaluations, and make the performance evaluations the key input to recognition decisions and personnel development plans. It must comprise an integrated procedure linking mandates, objectives, strategies, activities and employee performance agreements and standards as a basis for both sound staff performance appraisal and more effective overall organizational performance (Othman, 1994). A well-defined performance appraisal system supports an integrated human resource strategy which enables the attainment of organizational goals (Dattner, n.d.). A faculty performance evaluation system has a dual purpose: employee development and key decision making.

Dual Purpose of Faculty Performance Evaluation

Key Decision Making
o Support to retention and promotion
o Establishment and measurement of goals
o Promotion for faculty development and production (Centra, 1977)
o Fair grant of recognition, awards and compensation
o Succession planning
o Clarification of job responsibilities and expectations
o Recognition of expertise in critical areas for job assignments
o Faculty training programs and QIPs

Faculty Development
o Assessment of individual performance (Centra, 1977)
o Positive feedback for self improvement (Arreola and Raoul, 2000)
o Motivation towards superior performance
o Counseling for poor performance
o Identification of training and development needs (Boice and Kleiner, 1997)
o Realization of faculty potential
o Faculty self assessment and plans for self improvement and career development

The rest of the paper is organized as follows. Section 2 presents related published work in the field of performance appraisals; it also discusses work done in applying expert systems to multi-criteria decision making. Section 3 describes the faculty performance evaluation scenario in TEIs. Section 4 categorizes the faculty activity domains and the metrics for evaluation. Section 5 discusses the characteristics of fpe systems and focuses on why most fpe systems are not successful. Section 6 proposes a model for fpe and decision making based on the faculty activity domains and institutional objectives for faculty evaluation. Section 7 shows the implementation of the fpe model using expert systems. Section 8 concludes the paper.

2 Related Literature


A) Performance Evaluation System

Performance appraisal is a means to evaluate an employee's current performance based on the performance standards expected from the employee and his/her past performance (Dessler, 2000). Boice and Kleiner (1997) explained the need for appropriate training for supervisors, raters and employees, a system for frequent review of performance, accurate record keeping, a clearly defined measurement system and a multi-rater group to perform the appraisal. Various methods have been used by organizations to evaluate employee performance. Yee and Chen (2009) and Vicky (2002) pointed out some of these methods: ranking, trait scales, critical incident narrative, criteria-based appraisal, management by objectives (MBO), 360° appraisal and peer review. Yee and Chen (2009) concluded that since different techniques have their own advantages and disadvantages, most organizations mix and match different techniques to develop their performance evaluation system based on their organizational needs and objectives. According to Othman (1994), most performance evaluation efforts are not successful because they are based on subjective measurement of characteristics and traits of the employees rather than their actual performance and work accomplished. Such systems become meaningless for career development, rewards and recognition and provide no corrective actions for ineffective and poor performance. Samarakone (2010) proposed a real-time talent management system where managers can easily and continuously document an employee's performance and provide coaching, recognition, awards and more based on actual performance results, information on job-related actions or behaviors and employee feedback, allowing for improved management and decision making. Based on the concepts of multi-criteria value measurement, Costa (n.d.) proposed a faculty evaluation model that addresses a whole range of academic activities, with each criterion integrating the quantitative and qualitative components of an academic activity. The model had a two-level hierarchical structure, with the areas of activity at the first level and an exhaustive, concise, non-redundant and independent set of evaluation criteria at the second level. Arreola and Raoul (2000) emphasized the linkage of faculty evaluation to faculty development, with the purposes of providing feedback for self improvement and data for personnel decisions. The proposed system showed the use of a faculty evaluation system to develop an overall comprehensive rating score to aid in promotion and tenure, merit pay and post-tenure decisions. Hassna and Raza (n.d.) identified teaching, scholarly endeavor and service to the University as the three major components of the appraisal system, which need to work in harmony to achieve the desired excellence. They proposed a conceptual model to study the relationship among the three components, and the results of testing the model


indicated the presence of a significant positive relation between faculty teaching and service performances and no relationship between scholarly endeavor and either teaching or service performance. Zhao and Li (2006) presented a scientific evaluation index system for performance evaluation of faculty in higher educational institutions, with two major indexes (a teaching evaluation index and a scientific research evaluation index) and 18 sub-indexes. The system is based on the principles of being scientific, measured, objective and easy to use. Wright and Cheung (2007) investigated how managers see, interpret and make sense of their performance management system experiences and recommended the way forward for the policy and practice of appraisal systems using the repertory grid technique. The study provided a cross-sectional view of the current state of managerial cognitions and opened up new ways of thinking and doing things in appraisal research and practice; the findings provided very meaningful insights into what managers look for in appraisal system effectiveness. Pemberton, Hoskins and Boninti (n.d.) proposed a Human Performance Technology (HPT) model to identify and address employee performance issues through measurable performance indicators which would be used to set standards of performance for employees. Soltani et al. (2006) addressed the question of quality-driven organizations adjusting their performance appraisal systems to integrate total quality management (TQM) requirements. Empirical research based on information gathered through questionnaires and interviews showed that by acquiring relevant knowledge and understanding of contextually appropriate performance appraisal and management, practitioners would be able to translate and mediate TQM requirements into performance appraisal criteria to maintain the integrity of organizational change initiatives aimed at long-term business excellence.

3 Faculty Performance Evaluation in TEIs

There have been major transformations in TEIs in recent years in terms of the performance and accountability of the people concerned and the transparency of decisions and policies related to faculty appraisal. A sound fpe system is seen as a motivator and an opportunity for strategically aligning the performance of faculty with institutional goals and objectives. Performance appraisals are important from both the employer's and the employee's point of view, and a systematic framework ensures that performance appraisal is fair and consistent (Boice and Kleiner, 1997). An organization must be firmly committed to the fundamental way in which it values and recognizes performance; the key is to have a clear strategy and timetable for putting in place an effective performance management system (Othman, 1994). Faculty perform various roles across the domains of TEIs. The performance evaluation system should be in alignment with the various roles performed by faculty as well as their personal skills and behavioral characteristics. A fpe system has higher acceptability in an


organization with shared objectives reflecting organizational goals and an understanding of the roles and responsibilities of each member (Crane, 1991). This implies that the performance of faculty should be viewed and evaluated based on multiple criteria addressing a whole range of academic and administrative activities and their personal traits, while at the same time respecting their criticality.

4 Metrics for Faculty Performance Evaluation

The authors studied the existing performance appraisal systems in TEIs, engineering colleges in particular. They interviewed senior employees such as Heads of institutions, Deans and Heads of departments, who are responsible for evaluating their juniors, as well as employees who are rated by their seniors. The authors collected appropriate information from the interviewees on the faculty activity domains and the performance metrics used to evaluate faculty. A series of group meetings, brainstorming sessions and workshops was conducted to extract the knowledge. They also assessed the relative importance of the metrics or performance indicators towards the overall institutional strategy and its performance objectives. The meetings with top management gave insight into the objectives with which TEIs conduct faculty performance evaluation and the actions taken based on the results. Based on their study, the authors have identified four major domains (teaching, research and consultancy, administrative services and personal skills) as the starting point for faculty evaluation. The authors have approached fpe from a multi-dimensional perspective based on the roles performed by the faculty in these domains and the organizational objectives pertaining to them. Kaplan and Norton (1992), Chakravarthy (1986), Harrington (1991), Maisel (1992) and others argued for a comprehensive evaluation system that measures along multiple dimensions of performance. Anthony (1988) stressed "goal congruence" by linking the overall organizational strategy and objectives with performance evaluation. The authors have developed a performance evaluation model that integrates the qualitative and quantitative components of faculty performance in a TEI. The model describes the domains of faculty activity in a TEI along with the descriptors for each domain. The descriptors are in alignment with the goals and objectives of the TEI and the roles performed by faculty.
Further, it defines the specific indicators for each descriptor. All the descriptors or roles should be defined in terms of observable achievements, products or performances that can be documented (Arreola and Raoul, 2000). The descriptors describe the objects/functionality within a domain, and the indicators contribute the specific metrics on which performance is evaluated. A well-crafted system of metrics will lead to better decision making (Gates, 1991). The study has brought forward a wide range of descriptors and indicators to measure faculty performance. However, all descriptors and performance indicators may not match the institutional goals and objectives of all TEIs, so TEIs need to be careful in designing the performance indicators. Performance evaluation systems are not generic; they must be tailor-made to match the employee and organizational characteristics (Henderson, 1984). The descriptors, along with the indicators that emerged from our study and survey of related literature, are given in the tables.


Teaching Performance Evaluation: Descriptors and Indicators

Descriptor: Course Portfolio
o Course objectives
o Maintenance of course files and their closeness and mapping with course objectives
o Effective instructional design and delivery (Arreola and Raoul, 2000)
o Adequate preparation (Glassick, Huber, and Maeroff, 1997)
o Appropriate methods and teaching aids
o Quality of questions in assignments and tests
o Quality of test papers
o Quality of evaluation work
o Course management (Arreola and Raoul, 2000)
o Conduct of laboratories
o Development of new experiments

Descriptor: Student Feedback (Kreber, 2002)
o Coverage of subject curriculum
o Preparation and organization of the lectures
o Presentation and communication skills
o Planning of tutorials and quality of tutorial assignments
o Uniformity of pace in teaching
o Familiarity with latest developments in the field
o Availability and accessibility to students
o Query handling / doubt clearing

Descriptor: Results
o Pass percentage
o Class average
o Students with honors

Descriptor: Peer Rating (Ory, 1991)
o Experienced faculty visiting classes (Arreola and Raoul, 2000)
  - Subject knowledge
  - Quality of topic covered
  - Interaction with students
  - Presentation and communication skills
  - Quality of teaching material / handouts

Descriptor: Student Project Guidance
o Quality of projects and their mapping with program objectives and industry needs
o Level of guidance
o Project completion

Research and Consultancy Activity Evaluation: Descriptors and Indicators

Descriptor: Publications
o Number of publications
o First author / co-author
o Journals / Conferences
o Publisher
o Grants
o Reviewer
o Editor

Descriptor: Book Publication
o Number of books
o Publisher
o Text / Reference

Descriptor: Industrial Consultancy and Projects
o Number of projects
o Project domain
o Funding agency (national/state/private sector/sponsoring trust or society)

Descriptor: Research Guidance
o Number of scholars
o Post graduate / doctoral

Descriptor: Seminar Presentation
o Topic selection
o Level of knowledge
o Quality of presentation

Descriptor: Attending Seminars, Workshops, Training Programs, Conferences
o Number of seminars, workshops, training programs, conferences attended
o International / national / corporate / institutional

Administrative Services Evaluation: Descriptors and Indicators

Descriptors: Departmental Responsibilities, College Responsibilities, University Responsibilities
o Number of responsibilities
o Fulfillment
o Commitment

Descriptor: Mentoring Students
o Academics
o Behavioral
o Personal problems

Descriptor: Organizing Seminars, Workshops, Training Programs, Conferences
o Number of seminars, workshops, training programs, conferences organised
o Team leader / team member
o Commitment
o Team work

Personal Skill Evaluation: Descriptors and Indicators

Descriptor: Initiative
o Self starter
o Ability to work without constant supervision

Descriptor: Responsibility
o Understands duties
o Accepts responsibilities readily

Descriptor: Timeliness
o Punctuality
o Availability to students
o Compliance to schedules
o Meeting deadlines

Descriptor: Loyalty
o Tenure of service
o Support and compliance to institutional policies and guidelines

Descriptor: Communication Skills
o Oral communication
o Written communication

Descriptor: Team Work
o Effectiveness in a team
o Leadership

Descriptor: Interpersonal Skills
o Relations with seniors
o Relations with peers
o Relations with students

The authors have not suggested specific descriptors or indicators to be used as a set of metrics by TEIs. Activity domains, organizational hierarchies, job profiles, faculty competencies, the competitive situation and other factors create a unique performance evaluation environment for each TEI that requires a customized fpe system. It is unlikely that a single set of metrics capturing every nuance of every institution even exists; the authors have only suggested a common set of evaluation criteria for the development of a "good" fpe system.

6 Proposed Model

The paper proposes a fpe model based on four major domains: teaching, research and consultancy, administrative services and personal skills. It is important that these components work in harmony so that the institution can achieve the desired excellence; the application of specific criteria under these broad headings and their weighting may vary among academic units and among faculty members (Hassna and Raza, n.d.). The proposed model is based on a rating system that produces an overall comprehensive rating score for the faculty across the activity domains of the institution. The metrics are associated with weights, and a mathematical evaluation of ratings and weights computes the faculty score. The weights are based on the importance of the indicators. The faculty evaluation score obtained from the fpe will aid in key decisions on faculty development, faculty recognition and promotions, as well as facilitate self assessment and improvement of faculty. According to Bacal (n.d.), a rating system compares employee performance to some predefined set of criteria and produces a number or letter grade that represents the employee's level of performance. In educational environments, ratings are obtained from superiors, peers and students.
Figure: Faculty Performance Evaluation Model. The input domains (teaching, research and consultancy, administrative services, personal skills) feed the evaluation model, whose outputs are scores and prescribed actions.

The model proposes a rating system for fpe to obtain quantifiable results. An n-point scale is assigned to each indicator for a descriptor within a domain.


The ratings are given on an n-point scale, with the value of n based on the range of ratings determined by the needs of the institution. Each rating is associated with a performance level, as shown in the table below for a 5-point rating scale; two alternative sets of performance-level labels are shown. The definitions of the performance levels are given by TEIs based on the requirements of the fpe.

Rating Scale   Performance Level      Performance Level
5              Excellent              Very High
4              Very Good              High
3              Good                   Moderate
2              Average                Low
1              Unsatisfactory         Very Low

A clear definition of each level of performance must be provided and disseminated to all employees (Boice and Kleiner, 1997); this will help employees to clearly understand the measurement system. The evaluation system assigns ratings to the indicators for the descriptors within the domains. Each indicator is further assigned a weighting factor, or weight, based on the importance of the indicator in faculty performance and institutional strategy. Weights are the key in multi-dimensional comprehensive evaluation (Zang, et al., 2008). A simple procedure for assigning the weights is to arrange the indicators in the order of their importance and assign the weights accordingly, the sum of the weights for a descriptor being 100% or 1. The weights may vary with institutions, rankings and even departments, depending upon the evaluation purpose and policy. The most appropriate method for determining the weights is assessment by experienced and mature experts. The score for the ith indicator, xi, is obtained by multiplying the rating on the n-point scale by the weight assigned to the indicator, i.e. xi = r * wi, where r is the rating on the n-point scale and wi is the weight attached to the ith indicator. The fpe model first aggregates the scores for the descriptors based on the indicator scores; next, the scores for the specific domains are aggregated by summation of the descriptor scores; lastly, the total score for the faculty is evaluated by summation of the domain scores.

The descriptor score ds is given as

    ds = (wd / m) * Σ(i=1..m) xi

where
xi – score for each indicator of the descriptor
wd – total weight of the descriptor
m – total number of indicators for the descriptor

The score ds obtained for each descriptor lies in the range 0.1 to 0.n, where n is the number of rating scales. The descriptor scores are converted to performance levels as follows:

    Pl = ds * 10

Descriptor score   Performance level
0.5                5
0.4                4
0.3                3
0.2                2
0.1                1

In this case, n = 5. The domain score Ds is evaluated as follows:

    Ds = (ws / m) * Σ ds

where
ds – descriptor score for each descriptor in the domain
m – total number of descriptors for the domain
ws – total weight of the domain

Similar to the descriptor scores, the domain scores are also converted to performance levels. The aggregate score for the faculty is the summation of the individual scores for all the domains:

    Score = D_teaching + D_research + D_administration + D_personal_skills

A report is then generated describing the overall ratings for the domains, the "critical" descriptors where immediate attention for improvement is needed, and the descriptors where performance is exceptionally good. A sample performance rating may be described as shown in the table below.

Rating Scale   Performance Level
5              Very High
4              High
3              Moderate
2              Low
1              Very Low
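The aggregation formulas above can be sketched directly in code. The ratings and weight values below are illustrative assumptions chosen only to produce a worked number; the functions implement the stated formulas literally:

```python
def descriptor_score(indicator_scores, wd):
    """ds = (wd / m) * sum(xi), where m is the number of indicators."""
    m = len(indicator_scores)
    return wd / m * sum(indicator_scores)

def domain_score(descriptor_scores, ws):
    """Ds = (ws / m) * sum(ds), where m is the number of descriptors."""
    m = len(descriptor_scores)
    return ws / m * sum(descriptor_scores)

# Worked example (hypothetical values): three indicators rated 5, 4 and 3
# on a 5-point scale with weights 0.5, 0.3 and 0.2, so xi = r * wi.
xi = [5 * 0.5, 4 * 0.3, 3 * 0.2]      # [2.5, 1.2, 0.6], sum = 4.3
ds = descriptor_score(xi, wd=0.3)     # (0.3 / 3) * 4.3 = 0.43
pl = round(ds * 10)                   # performance level Pl = ds * 10 -> 4

# A domain containing this single descriptor, with domain weight ws = 1.
D = domain_score([ds], ws=1.0)
```

Note that with these example weights the descriptor score lands in the 0.1–0.5 range the text describes; the institution's choice of wd and ws determines where the scores actually fall.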


Performance Rating 5
o Excellent
o Recommendations for recognition as per institutional policies

Performance Rating 4
o Very Good
o Descriptors where there is scope for improvement

Performance Rating 3
o Good
o Descriptors where improvement is required

Performance Rating 2
o Average
o Descriptors where improvement is necessary

Performance Rating 1
o Unsatisfactory
o "Critical" descriptors
o Suggestions for self improvement
o Suggestions for faculty development initiatives
o Recommended actions for continued non-performance
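The report-generation step that follows the ratings table can be sketched as a simple lookup. The wording of the actions is abridged from the table above, and the mapping itself is illustrative rather than the paper's expert-system implementation:

```python
# Hedged sketch of report generation: map a performance rating to the
# level and prescribed action from the sample table (wording abridged).
REPORT = {
    5: ("Excellent", "Recommend recognition as per institutional policies"),
    4: ("Very Good", "Note descriptors with scope for improvement"),
    3: ("Good", "Note descriptors where improvement is required"),
    2: ("Average", "Note descriptors where improvement is necessary"),
    1: ("Unsatisfactory", "Flag 'critical' descriptors; suggest self improvement "
                          "and faculty development initiatives"),
}

def report_line(descriptor, rating):
    level, action = REPORT[rating]
    return f"{descriptor}: {level} - {action}"

print(report_line("Student Feedback", 2))
```

Each descriptor's line in the report thus carries both the qualitative level and the prescribed follow-up, which is what makes the ratings actionable for decision making.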

At the same time, the model also suggests the domains where the faculty qualifies for due recognition and rewards as per the institutional policies. An important function of performance evaluation is to help employees develop so that they can contribute more effectively (Bacal, n.d.); most existing rating systems do not fulfill this requirement. In order to develop and learn, employees need to know what they need to change, where they have fallen short and what they need to do. The organization needs to know what the employees must do to overcome their deficiencies, and to give recognition where it is due. The proposed model caters to all these requirements. The concepts of "floor" and "ceiling" have been added to the model. The floor facilitates an "alert" in case a score falls below a threshold value for a descriptor or a domain; the alert is an indication for the faculty and the institution to initiate the necessary corrective action. The ceiling ensures that a single descriptor or domain does not score beyond a threshold value, so that overwhelmingly high performance in one area does not overshadow weaker performance in other areas. The domain scores of the faculty and the overall score facilitate key decision making by the management and the initiation of self improvement and career development plans by the faculty. The fpe model may evaluate an impressive overall score for a faculty member who is nevertheless deficient in certain domains or descriptors; it is important to highlight these deficient areas, so the fpe model evaluates domain-wise and descriptor-wise scores for independent analysis.
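The floor and ceiling concepts can be sketched as follows. The threshold values here are hypothetical (the paper leaves them to the institution); the logic simply raises an alert below the floor and caps scores at the ceiling:

```python
# Sketch of the "floor" and "ceiling" checks (threshold values are assumptions;
# the paper leaves their choice to the institution).
FLOOR, CEILING = 0.2, 0.45   # example thresholds on a 0.1-0.5 score range

def apply_floor_ceiling(name, score, alerts):
    """Raise an alert if the score is below the floor; cap it at the ceiling."""
    if score < FLOOR:
        alerts.append(f"ALERT: {name} below floor ({score:.2f} < {FLOOR})")
    return min(score, CEILING)

alerts = []
low = apply_floor_ceiling("research", 0.12, alerts)    # triggers an alert
high = apply_floor_ceiling("teaching", 0.50, alerts)   # capped at the ceiling
```

Applying the same check at both the descriptor and the domain level, as the model prescribes, prevents a single strong area from masking a weak one in the aggregate score.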


Figure: Flow of the fpe model.
1. Identify the activity domains of faculty.
2. Identify the descriptors in each domain.
3. Identify the indicators for each descriptor.
4. Obtain scores for the indicators.
5. Compute the score for each descriptor as the summation of the scores for its indicators.
6. Normalize the descriptor score. If the descriptor score exceeds the "ceiling" threshold, cap it; if it falls below the "floor" threshold, sound an "alert".
7. Compute the score for each domain as the summation of the scores for its descriptors.
8. Normalize the domain score, applying the same ceiling and floor checks.
9. Compute the total score as the summation of the domain scores.
10. Use the scores for key decision making and faculty development.

References

[1] Adams, M.R. (1989), "Tenuring and Promoting Faculty", Thought and Action, Vol. 5, No. 2, pp. 55-60.
[2] Allan, P. (1994), "Designing and implementing an effective performance appraisal system", Review of Business, Vol. 16, No. 2, p. 3.
[3] Anastassiou, T.I., Doumpos, M. (2000), "Multicriteria Evaluation of the Performance of Public Enterprises: The Case of Greece", Investigaciones Europeas de Direccion y Economia de la Empresa, Vol. 6, No. 3, pp. 11-24.
[4] Anthony, R.N. (1988), The Management Control Function, Cambridge, MA: Harvard Business School Press.
[5] Arreola, Raoul (2000), Developing a Comprehensive Faculty Evaluation System: A Handbook for College Faculty and Administrators on Designing and Operating a Comprehensive Faculty Evaluation System, 2nd Ed., Bolton, MA: Anker Publishing, ISBN 1-882982-32-0.
[6] Aubrecht, J.D. (1984), "Better faculty evaluation systems", in P. Seldin, Changing Practices in Faculty Evaluation: A Critical Assessment and Recommendations for Improvement, pp. 85-91, San Francisco: Jossey-Bass.
[7] Bacal, R. (n.d.), "Performance Appraisal - Why Ratings Based Appraisals Fail", Work911/Bacal & Associates Business & Management Supersite, available at http://work911.com/performance/particles/rating.htm
[8] Bhimani, A. (1993), "Performance measures in UK manufacturing companies: the state of play", Management Accounting, Vol. 71, No. 11, pp. 20-22.
[9] Boice, D.F., Kleiner, B.H. (1997), "Designing effective performance appraisal systems", Work Study, Vol. 46, Iss. 6, pp. 197-201.
[10] Caplice, C., Sheffi, Y. (n.d.), "A Review and Evaluation of Logistics Performance Measurement Systems", The International Journal of Logistics Management, pp. 61-74.


[11] Centra, J.A. (1977), "How Universities Evaluate Faculty Performance: A Survey of Department Heads", Princeton, NJ: Educational Testing Service, GRE Board Research Report, No. 75-5bR.
[12] Chakravarthy, B.S. (1986), "Measuring Strategic Performance", Strategic Management Journal, Vol. 7, pp. 437-458.
[13] Coelho, J.F., Moy, D. (2003), "The new performance evaluation methodology and its integration with management systems", The TQM Magazine, Vol. 15, No. 1, pp. 25-29.
[14] Costa, C.A.B.E. (n.d.), "Faculty evaluation using multicriteria value measurement", Advances in Mathematical and Computational Methods, ISBN 978-960-474-243-1, pp. 287-290.
[15] Crane, J.G. (1991), "Getting the performance you want", The American Society of Association Executives, February, pp. 25-30.
[16] Dattner, B. (n.d.), "Performance Appraisal", Dattner Consulting, LLC, available at http://www.dattnerconsulting.com/presentations/performanceappraisal.pdf
[17] Dessler, G. (2000), Human Resource Management (8th Edition), New Jersey: Pearson Education, Inc.
[18] Hassna, L.O., Raza, S. (n.d.), "An assessment of the relationship between the faculty performance in teaching, scholarly endeavor, and service at Qatar University", Research in Higher Education Journal, available at www.aabri.com/manuscripts/10614.pdf
[19] Gates, A. (1991), "The smartest way to give a performance review", Working Woman, May, pp. 65-68.
[20] Glassick, C., Huber, M., Maeroff, G. (1997), Scholarship Assessed: Evaluation of the Professorate, San Francisco, CA: Jossey-Bass.
[21] Goff, S.J., Longenecker, C.O. (1990), "Why performance appraisals still fail", Journal of Compensation and Benefits, November/December, pp. 36-41.
[22] Gomes, C.F., Yasin, M.M. (2011), "Performance Measurement Practices in Manufacturing Firms Revisited", International Journal of Operations and Production Management, Vol. 31, No. 1, pp. 5-30.
[23] Grote, D. (1996), The Complete Guide to Performance Appraisals, Amacom, New York, NY.
[24] Harrington, J.H. (1991), Business Process Improvement: The Breakthrough Strategy for Total Quality, Productivity and Competitiveness, New York: McGraw-Hill Inc.


[25] Henderson, R.I. (1984), Practical Guide to Performance Appraisal, Reston Publishing, Virginia.
[26] Kaplan, R.S., Norton, D.P. (1992), "The Balanced Scorecard: Measures that Drive Performance", Harvard Business Review, Vol. 70, No. 1, pp. 71-79.
[27] Kaplan, R.S., Norton, D.P. (1993), "Putting the Balanced Scorecard to Work", Harvard Business Review, Vol. 70, No. 5, pp. 134-142.
[28] Kotter, J., Hesketh, J. (1992), Corporate Culture and Performance, Free Press, New York, NY.
[29] Kreber, C. (2002), "Teaching excellence, teaching expertise, and the scholarship of teaching", Innovative Higher Education, Vol. 27, pp. 5-23.
[30] Levine, P., Pomerol, M.J., Saneh, R. (2002), "Rules integrate data in a multicriteria decision support system", IEEE Transactions on Systems, Man and Cybernetics, ISSN 0018-9472, Vol. 20, Issue 3, pp. 678-686.
[31] Maisel, L.S. (1992), "Performance Measurement: The Balanced Scorecard Approach", Journal of Cost Management, Vol. 6, No. 22, pp. 47-52.
[32] Manoochehri, G. (1999), "The road to manufacturing excellence: using performance measures to become world-class", Industrial Management, March-April, pp. 7-13.
[33] Martin, D.C., Bartol, K.M. (1998), "Performance appraisal: maintaining system effectiveness", Public Personnel Management, Vol. 27, No. 2, pp. 223-231.
[34] Morris, M.L. (2009), "The Need for a Results-based Performance Management System: Employee Appraisal and Development", available at http://www.slideshare.net/Jackie72/department-head-workshop-on-faculty-performancereview
[35] Orpen, C. (1997), "Performance appraisal techniques, task types and effectiveness: a contingency approach", Journal of Applied Management Studies, Vol. 6, No. 2, pp. 139-148.
[36] Ory, J.C. (1991), "Changes in evaluating teaching in higher education", Theory into Practice, Vol. 30, pp. 30-36.
[37] Othman, K.I. (1994), "Toward a new system of performance appraisal in the United Nations Secretariat: Requirements for successful implementation", JIU/REP/94/5, June 1994.


[38] Patterson, D.W. (2010), Introduction to Artificial Intelligence and Expert Systems, India: PHI Learning P. Ltd.
[39] Pemberton, A., Hoskins, J., Boninti, C. (n.d.), "Minding the Gap: Identifying Performance Issues Using the Human Performance Technology Model", Emerald Group Publishing Limited.
[40] Samarakone, P. (2010), "Improving performance appraisals using a real-time talent management system", Human Resource Management International Digest, Vol. 18, No. 4, pp. 35-37, Emerald Group Publishing Limited.
[41] Soltani, E., et al. (2006), "The compatibility of performance appraisal systems with TQM principles: evidence from current practice", International Journal of Operations & Production Management, Vol. 26, No. 1, pp. 92-112, Emerald Group Publishing Limited.
[42] Stivers, B.P., Joyce, T. (2000), "Building a balanced performance management system", SAM Advanced Management Journal, Vol. 65, No. 2, pp. 22-29.
[43] Terrence, H.M., Joyce, M. (2004), Performance Appraisals, ABA Labor and Employment Law Section, Equal Employment Opportunity Committee.
[44] Tsang, A.H.C., Jardine, A.K.S., Kolodny, H. (1999), "Measuring maintenance performance: a holistic approach", International Journal of Operations & Production Management, Vol. 19, No. 7, pp. 691-715.
[45] Vladimir, M.O. (1988), "Some issues in designing an expert system for multiple criteria decision making", Acta Psychologica, Vol. 68, Issues 1-3, pp. 237-253.
[46] Vicky, G. (2002), Performance Appraisals, Loss Control Services, Texas Association of Counties.
[47] Wright, R.P., Cheung, F.K.K. (2007), "Articulating Appraisal System Effectiveness Based on Managerial Cognitions", Personnel Review, Vol. 36, No. 2, pp. 206-230, Emerald Group Publishing Limited.
[48] Yee, C.C., Chen, Y.Y. (2009), "Performance Appraisal System Using Multifactorial Evaluation Model", World Academy of Science, Engineering and Technology, pp. 231-235.
[49] Zang, Z., et al. (2008), "A Grey Model for Evaluating Research Performance of University Researchers in China", IEEE, pp. 2039-2044.


[50] Zhao, J.H., Li, X.H. (2006), "Investigation on Comprehensive Evaluation of University Teachers' Achievements", Proceedings of the Fifth International Conference on Machine Learning and Cybernetics, Dalian, 13-16 August 2006.

