www.emeraldinsight.com/0309-0590.htm
The evaluation of HRD: a critical study with applications
Eduardo Tomé
Universidade Lusíada – Vila Nova de Famalicão, Vila Nova de Famalicão, Portugal
Received 19 September 2008
Revised 19 December 2008
Accepted 8 April 2009
Abstract
Purpose – The purpose of this paper is to analyze critically the most important methods that are
used in the evaluation of human resource development (HRD).
Design/methodology/approach – The approach is to ask two questions: What are the methods
available to define the impact of HRD in the economy? How can we evaluate the evaluations that have
been made?
Findings – There are two main perspectives from which to evaluate any program: by results (counting occurrences) and by impacts (calculating the difference the investment makes to society). The first type of method does not find the impact of the program; the second type does.
Research limitations/implications – The analysis is limited by existing studies on HRD. The implications are that the conditions that underlie the existence of HRD programs define the type of evaluation that is used.
Originality/value – The results of this paper put the evaluation problem in a new perspective. The paper explains the difference between the methodologies (results and impacts) and the scientific fields used (public administration, social policy, HRD, KM, IC, microeconomics, HR economics) by the type of person responsible: public administrator, private manager, HRD expert, knowledge manager, IC expert, microeconomist. The differences between the applications of those methodologies based on the type of funding – private, public, external – are also explained.
Keywords Human resource development, Knowledge management, Business policy, Lifelong learning,
Financing
Paper type Conceptual paper
Introduction
In the economy of the twenty-first century, it is well understood that human resources
(HR) are a major factor of economic and social wellbeing. In world terms, the Education
Index, computed by the United Nations Development Program (UNDP) shows the clear
relation between education, income and human development (Table I). The Human
Development Report Education Index (HDREI) is obtained by combining the adult literacy rate (percentage of the population aged 15 years and above) with the combined gross enrolment ratio, in percentage, for primary, secondary and tertiary education. We must note that the HDREI is not only a measure of the level of education in countries, but also the result of decades of investment in education and in basic Human Resource Development (HRD) in any country.

The author thanks colleagues who were present at the 2008 AHRD Europe Conference session in Lille, when a first version of this paper was presented, and a very enthusiastic anonymous referee for comments and support.

Journal of European Industrial Training, Vol. 33 No. 6, 2009, pp. 513-538. © Emerald Group Publishing Limited, 0309-0590. DOI 10.1108/03090590910974400
In what concerns specifically training, it is well known that the more developed countries are also those that invest more in private training (funded by companies and by individuals) and in public training (funded by the State). Usually, those investments are measured by the share of adults that participate in lifelong learning, by the share of individuals that undergo training, or by the funds spent on active labour market policies (see Table II). The data usually relate to developed countries (OECD, 2006; Eurostat, 2007; UNESCO, 2006). However, even if the investment in HRD is seen as an
Theoretical background
Concepts
The two main concepts that will be used in this paper are those of human resources (HR) and human resource development (HRD). We assume that those two concepts are intrinsically linked but different from each other. We will explain those differences in a moment. We use a very broad and generic definition of HR grounded in HR Economics (Bannock et al., 2005). For us, HR are all the characteristics a human being has that may be valued in the labour market. These include physical characteristics like health and beauty, and non-physical characteristics such as education, training, competences, and skills. For us, HRD is any process, formal or informal, made by the individual or by an organization, that tries to increase, adapt and improve HR (McLean, 2005). As a final note to this sub-section we must add that, continuing to use the economist's perspective, and in line with Gramlich's analysis (Gramlich, 1990, pp. 160-164), we will view HRD as an investment, and therefore we will assume an economic perspective. We will also assume that all HRD initiatives may be assimilated to social policies (Tomé, 2005).
Theories
The positive influence of HRD on the situation of individuals, companies, and nations has been noticed by the most important thinkers in the economic world since the dawn of Economics. Indeed, Adam Smith was the first to notice that skills and competences are an important factor in the development of countries (Paul, 1989). The other most important classical economists (Marshall (Paul, 1989), Stuart Mill (Shackleton, 1995), Pigou (Stevens, 1996)) agreed with Smith's idea. Keynes also gave competences and skills an important place in the economic process (Keynes, 1990). But it is commonly accepted that the basic economic theory that is nowadays applied to HRD saw the light with the works of neo-classical economists in the 1950s (Becker, 1962, 1976, 1980; Ben Porath, 1967; Mincer, 1958, 1962, 1974; Oi, 1962; Schultz, 1971), who built the foundations of the Human Capital Theory (HCT). According to the HCT, HRD activities should improve labour productivity, and in consequence the wages of qualified people should be higher than those of non-qualified people. HRD should also improve the employment possibilities of those involved. And HRD should contribute to improving product quality and the social climate of companies. Finally, at a national level, HRD should generate exports, income and ultimately social cohesion.
The HCT analysis was quite strong and valid. But in the last decades, HCT theories have been challenged in a significant number of ways. Some of those challenges relate to:
. the explanation of how discriminative factors like race, gender and company dimension may be incorporated in the analysis;
. the public intervention in the labour market; and
. the role of knowledge and intellectual capital in the economy.
Evaluation methods
Economists and managers have developed many important evaluation methods in
order to evaluate HRD. Those methods apply to programs, and therefore may be
analyzed in the context of the evaluation of social policies (Tomé, 2005).
Accordingly, some of those methods relate basically to occurrences (they may be
called results methods), and some of those methods relate to the perceived
consequences of the training programs (they may be called impact methods). We
will present result methods and impact methods in order to analyze the methods
used in evaluations.
Table IV. Questions that might be addressed when evaluating a program

Question          Meaning
How?              Methodology
When?             Timing
Where?            Sampling, target group, organization, country
Why?              Rationale
Who?              Planner, evaluator
What?             Objectives
Whom?             Decision maker
What decisions?   Decisions
Which criteria?   Criteria
Which benefit?    Beneficiary

Source: Chinapaj and Miron (1990)
Result methods
The easiest way to evaluate HRD investments or policies is to count the level of investment in money terms (financial indicator) and the number of people that participate in the operation, namely the trainees (physical indicator). This is the easiest way of describing a program in administrative terms. Usually, the analysis of every program begins with this sort of data. However, result indicators only analyze costs and participants, and are not related to benefits. At least in theory, it is absolutely possible that a large program (regarding costs and participants) may have very low or even negative consequences for the people involved. But result methods tend to be used to evaluate policies because they are cheaper, less risky, less random and much easier to command than impact methods (Tomé, 2005). Sometimes these methods analyze the accessibility of the program, relating the number of participants to the total number of possible users. In other cases the reports also check the employability of the program, six or more months after its conclusion.
It is quite interesting to note that in the public administrators’ perspective the best
way of evaluating a program is to use basic result methods. Quite crucially, the public
administrators are concerned with the execution of the program and the correctness of
the planning. To execute means to spend the money that is available and to support a
large number of people and organizations. If the operation was received by a large
number of persons and organizations, this should mean it was a success. Costs should
be compensated by accessibility. Needs should have been met. Furthermore, result
methods provide the administrator with the description of the operation in financial
and physical terms, as administrators want. Results methods provide the
administrator with a simple and ready to use measure of the “public good” that was
originated from the operations. Accordingly, to this way of thinking:
. the final report should include financial and physical results;
. the public administrator may also sense the importance of assessing the employability of the program; and
. the definition of the impact of the program is only a second thought in the administrator's mind, in particular if it involves analyzing the reality outside the program (as happens with the impact methods, in particular with the "with-without" analysis).
Finally, result methods are not random, in the sense that different methods do not provide different figures (we only have to count people, organizations or expenses), and therefore they are much preferred by the administrators.
Impact methods
This type of method tries to define the difference the program makes to the society or organization in which it is implemented (Tomé, 2005). This difference may be detected in a "with-without" perspective or in a "before-after" perspective. In any case, impact methods are very important because they are the finest way to analyze programs. This happens because impact studies do not only take into account costs (as result methods do when they refer to financial indicators) but also try to define the benefits of the program and to compare them with the costs. However, as a counterpart, impact studies present quite a number of difficulties: they are expensive (in money, time, resources, decisions), they are risky (because they may expose the program's and the administrators' failures), and they are random, because they often require the use of random based statistical techniques and because there are different techniques available that may originate different estimates (Tomé, 2005).
We will analyze these six types of studies in succession. The six perspectives are
summarized in Table V.
Economics. According to this point-of-view an HRD operation may be analyzed in
microeconomic or in macroeconomic terms. The first type of analysis is adapted to
situations in which we can detect a difference between the participants and the
non-participants in the program. Heckman et al. (1999) called this analysis the
“treatment on the treated” scheme. The idea behind this analysis is very simple: the
These two situations imply that public programs of HRD tend to be evaluated by legitimistic indicators, which do not question the program and, quite on the contrary, tend to elude the eventual shortcomings or failures that the program might have. The legitimistic approach is also palpable in the fact that the evaluations that are made are inward looking and tend to be made after the end of the program. We may elaborate further on that idea:
. First, the programs are submitted to a procedural analysis in order to measure the gaps between the reality and the expectations; but frequently no analysis of the consequences of that reality is made (Laffan, 1988);
. Second, the evaluation that is done tends to be concerned with the participants; the question of comparing with the non-participants and looking outside the program to find differences or similarities between the program and the external environment is secondary, if it exists at all (European Union, 2002);
. Third, the economic evaluations tend to be made after the conclusion of the program, and not after the end of the first economic year of the program (European Union, 2002).
The management approach. This approach is used when the HRD operation is funded by a private company. According to this approach, HRD programs must be evaluated by the traditional accountability and financial mechanisms that exist to evaluate any investment of any private company. This amounts to "putting numbers on people". Because all humans are different, and because competences are immaterial and their outputs difficult to measure, this is not at all an easy task. Anyway, attempts have been made to estimate the return on investment (ROI) of HRD operations (Fitz-enz, 2000).
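The ROI logic just mentioned can be sketched numerically. This is a minimal illustration with hypothetical figures, not a reproduction of any published ROI procedure; producing the monetary estimate of benefits is precisely the hard part of "putting numbers on people":

```python
# Illustrative return-on-investment (ROI) for an HRD operation:
# ROI (%) = (estimated benefits - program costs) / program costs * 100.
# The benefit figure must itself come from some prior monetization exercise.

def hrd_roi(program_costs, estimated_benefits):
    """Percentage return of an HRD investment (hypothetical sketch)."""
    return (estimated_benefits - program_costs) / program_costs * 100

# Hypothetical training program: costs 200,000, estimated benefits 260,000.
print(hrd_roi(200_000, 260_000))  # 30.0
```

A positive ROI only states that estimated benefits exceed costs; it says nothing about how reliably those benefits were estimated, which is where the evaluation methods discussed in this paper come in.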
The intellectual capital approach. However, in the last two decades, the management world has begun to understand that a considerable and important difference exists between the market value (MV) and the book value (BV) of any company. Furthermore, it has been considered that the difference between the MV and the BV is due to intangible assets. HR are considered to be an important part of those intangible assets, together with the organizational routines, the R&D, the brands, and the relations with customers (Sveiby, 1998). This perspective was above all compounded in the Balanced Scorecard analysis (Kaplan and Norton, 1996). This approach generated some "new management practices", related to Knowledge Management (see below). Accordingly, the Balanced Scorecard methodology has been used to assess investments in KM. Furthermore, this approach was consigned in recent guidelines on accountability issued by the European Commission. Finally, in recent years, new methods have been proposed to address the difference between MV and BV (Pike and Roos, 2007; Volkov and Garanina, 2008).
The knowledge management perspective. Knowledge became such an important scientific subject in the last two decades that a scientific field developed around the concept (Nonaka and Takeuchi, 1995). In organizational terms, that event resulted in the emergence of studies on increasingly important phenomena related to knowledge creation, knowledge sharing, knowledge renewal (Kianto, 2008), knowledge transfer, and also learning and unlearning (Cegarra-Navarro and Sanchez-Polo, 2008). The relation between those analyses and the HRD analysis is immense, and indeed we consider that KM and HRD are two faces of the same coin. From a KM point-of-view, HRD operations are evaluated on the impact they have on knowledge and on the various aspects in which knowledge is analyzed: sharing, transfer, creation, renewal, etc. Those impacts have been assessed using complex factor analysis.
The HRD science perspective. Finally, HRD operations may be analyzed from the perspective of the HRD science itself. This perspective may also be called the HRM perspective. The basic model that underlies this approach was made by Kirkpatrick (1998). This model defines four stages in which the HRD operation may have an impact on the organization. The stages are:
(1) reaction;
(2) learning;
(3) behaviour; and
(4) results.
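As an illustrative sketch only, the four stages can be paired with the kind of evidence each typically collects. The measures named here are common examples assumed for illustration; they are not prescribed by Kirkpatrick's model itself:

```python
# Kirkpatrick's four evaluation levels, each paired with an illustrative
# (assumed, not prescribed) kind of evidence an evaluator might collect.
KIRKPATRICK_LEVELS = {
    1: ("reaction",  "trainee satisfaction questionnaires"),
    2: ("learning",  "pre/post knowledge or skill tests"),
    3: ("behaviour", "observed change in on-the-job behaviour"),
    4: ("results",   "organizational outcomes such as quality or sales"),
}

for level, (stage, evidence) in KIRKPATRICK_LEVELS.items():
    print(f"Level {level}: {stage}: {evidence}")
```

Note that only the fourth level approaches the "impact" idea discussed in this paper; the first three remain largely inward looking.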
Finally, the evolution of the control or comparison group (D) has to be taken into account. The impact of the program (E) is given by B – D.
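This with-without logic can be sketched numerically. The wage figures below are hypothetical; B and D follow the definitions just given (the participants' evolution and the control group's evolution):

```python
# Hypothetical with-without impact estimate: E = B - D, where B is the
# participants' observed change and D is the control group's change.

def program_impact(treated_before, treated_after, control_before, control_after):
    b = treated_after - treated_before   # evolution of participants (B)
    d = control_after - control_before   # evolution of control group (D)
    return b - d                         # impact of the program (E)

# Hypothetical wages: trainees rise from 100 to 120, controls from 100 to 110,
# so only 10 of the trainees' 20-unit gain is attributable to the program.
print(program_impact(100, 120, 100, 110))  # 10
```

The sketch makes the contrast with result methods concrete: counting the 20-unit gross gain (or the trainees themselves) would overstate the program's contribution, which only the comparison with non-participants reveals.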
How to evaluate an evaluation
The problem
We think that nowadays evaluation is a common procedure in developed countries and in conscious and prosperous institutions. To our knowledge, evaluations have already been made by national bodies in more than 50 countries (www.internationalevaluation.com/members/national_organizations.shtml).
However, not all the efforts made to evaluate training and HRD have the same quality (see www.cgu.edu/pages/4085.asp) and, therefore, the evaluation of the evaluation is a very important scientific topic.
The model
In this context, a very interesting model to evaluate evaluations was presented some years ago (Gramlich, 1981, pp. 171-5; Gramlich, 1990, pp. 159-64). In this model, the benefit derived from an evaluation is defined as a function of three variables:

BEN = p * q * PV    (1)

The meaning of each one of those variables is the following:
. p is the probability of discovering the true impact of the program;
. q is the probability that the government/private authority will pay attention to the result of the evaluation and take it into account; and
. PV is the program cost measured in monetary units.
It is also interesting to understand (Burtless and Orr, 1986) that the benefit of an evaluation can be measured as the costs (F) that are saved if the program is stopped, minus the benefits that cease to exist if the program is stopped (G), plus the evaluation costs (H). It is clear that if F is higher than G + H the evaluation is socially beneficial. Therefore, high values of p, q and PV may generate a high F, which will ensure that G plus H are offset. In this circumstance the evaluation is worthwhile. However, a costly evaluation (with a high H) and a high F may identify an even higher G; in this hypothesis, H is justified because it showed that G exists and that G is an important and positive social benefit.
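Equation (1) and the Burtless and Orr condition can be sketched together in a short numerical illustration. Every figure below is hypothetical; p, q, PV, F, G and H follow the definitions given above:

```python
# Sketch of Gramlich's evaluation-benefit model, equation (1), and the
# Burtless and Orr cost-saving condition. All figures are hypothetical.

def evaluation_benefit(p, q, pv):
    """BEN = p * q * PV: expected benefit of evaluating a program."""
    return p * q * pv

def evaluation_socially_beneficial(f, g, h):
    """True if the costs saved by stopping the program (F) exceed the
    lost benefits (G) plus the evaluation's own cost (H)."""
    return f > g + h

# Hypothetical program of 10 million monetary units, with a 50% chance the
# evaluation discovers the true impact and a 50% chance it is acted upon.
ben = evaluation_benefit(p=0.5, q=0.5, pv=10_000_000)
print(ben)  # 2500000.0

# Stopping the program would save 10m (F) but forgo 7m of benefits (G);
# the evaluation itself costs 0.5m (H). Since 10 > 7 + 0.5, it pays off.
print(evaluation_socially_beneficial(f=10_000_000, g=7_000_000, h=500_000))  # True
```

The sketch also makes the paper's three conclusions below easy to see: BEN grows with each of p, q and PV, so fine methods, receptive administrators and large programs all raise the value of evaluating.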
The conclusions
Using the method of evaluating evaluations just exposed, we arrive at the following three conclusions concerning the priorities regarding program evaluations:
(1) It is preferable to evaluate using very fine, even if extremely costly impact
methods (for which p is large). This happens because impact methods are the
correct way of obtaining a correct estimation of the program benefits. The
alternative is to use results methods, which overlook the estimation of benefits
and only deal with the costs. This happens even if there is some scientific
controversy about what are the best impact methods.
(2) It is preferable to evaluate programs when there is doubt about the program's usefulness (and therefore q is large). This happens because, by logic, if the administrator (private or public) has no doubts about the program, he will never evaluate it, and will also tend not to take into account the results of the evaluation.
(3) It is preferable to evaluate large programs (for which PV is large). This happens because, given that evaluation studies are a scarce resource, it makes more sense to use them to analyse very important (and therefore big) programs than to use them to study very small ones.
Discussion
However, there are open questions and controversies regarding all three conclusions:
(1) It is not scientifically clear which is the best econometric estimator to define the program impact (Heckman et al., 1999): the debate on this topic is continuing; some authors defend experimental studies (Burtless and Orr, 1986), and others try to put the different methods in perspective (Heckman and Smith, 1995; Heckman et al., 1998), pointing out that non-experimental studies, using cross-section or time-series data, are also valuable.
(2) The readiness of the administrator and political agent to listen to the evaluators' ideas is fundamental; from public choice analysis (Stiglitz, 2000) it is well known that politicians may have an interest in their own programs. This interest may be reflected in the evaluation procedures. Administrators may only evaluate programs if they think the evaluation suits their interests. The timing of the evaluation procedures may be chosen to suit the same interests. And, finally, evaluations may be made in short time spans in order to legitimize the programs.
(3) It may be very useful to evaluate a pilot study, whose value PV is small, in order to obtain insights for the making of a bigger program.
Table VI. Elements of the model and relevant proxies

PV   Physical indicators; financial indicators
Q    Political discourse; type of methodology put in place
P    Detailed analysis of the methodologies used

Source: Own analysis of Gramlich (1981, pp. 171-175) and Gramlich (1990, pp. 159-164)
categories of methods we presented. Basically, the American experience of evaluation of HRD programs has three basic characteristics:
(1) On one hand, some programs were funded by public funds and were analyzed with social concerns, using microeconomic control group methods that would expose their impact;
(2) On the other hand, some programs were based in companies, funded by companies and evaluated because of the companies' own concerns;
(3) For the two cases just mentioned, p, q and PV had high values, contributing to the great importance of the evaluative activity in the USA.
Table VII. Evaluation experiences according to Gramlich's model
(Columns: Country; Funding; Programs; PV; P; Q; BEN; Sources)

USA, public. Programs: MDTA, CETA, WIA. PV: high (1 to 3 million dollars spent each year). P: high (impact studies). Q: very high (concern over evaluation reaches the Academy of Sciences). BEN: very high (billions of dollars invested and assessed). Sources: LaLonde (1995); White House (2008).

USA, private. Programs: main private companies and organizations; small companies when concerned with HR development. PV: very high (many billions of dollars each year). P: very high (a plethora of scientific methods in four scientific fields: microeconomics, HRD, knowledge management, and accounting). Q: high (concern with impact in order to justify investment; return calculated using different methods). BEN: very high (huge investment assessed on a daily basis). Sources: McLean (2005); Smith and Piper (1990).

Western Europe, public (EU), until 1990. PV: increasing (from 1 percent to 7 percent of the EU budget). P: small (results and absorption calculations). Q: small (spending and absorption are the indicators of benefits). BEN: very small (weak evaluation process; only one real evaluation study was performed: YTS (Bradley, 1995)). Sources: EU (1997a); Laffan (1983); Bradley (1995).

Western Europe, public (EU), from 1990 to 2000. PV: increasing slightly (7 to 8 percent of the EU budget). P: rare (Spain, Germany, Ireland, UK). Q: medium (legal concern with evaluation changes). BEN: very small (in practice almost nothing). Sources: EU (1997b); Tomé (2001b); Ballart (1998); Saez and Toledo (1996); Bradley (1995); Lechner (1998); O'Connell and McGinnitty (1996).

Western Europe, public (EU), since 2000. PV: still augmenting. P: increasing (impact studies made on the European Strategy for Employment for Spain, Greece, France, Austria, the Netherlands, Sweden). Q: high (legal concern with evaluation; evaluation reports). BEN: medium (countries that use impact studies are not the majority). Sources: EU (2002).

Western Europe, public (national), since WWII and mainly since the 1980s (Sweden, Norway, Denmark, Austria, Belgium, the Netherlands, Finland, France). P: high (impact studies in a growing number of cases). Q: high (need to assess scientifically the use of public money and its economic impact). BEN: high (investment scientifically assessed). Sources: Bjorklund (1994); Torp (1994); Neilson et al. (1993); Heckman et al. (1999); EU (1997b).

Western Europe, private. Programs: major private companies and organizations; some small companies. PV: very high (billions since WWII). P: high (economic methods, even if not using social based impact studies; studies based on companies). Q: high (evaluation is a need, even if with company based concerns; case studies). BEN: high (investment frequently assessed). Sources: Russell (2003).

Eastern Europe, public. PV: high (training was a widespread phenomenon). P: small (existing as a form of positive evaluation). Q: very small (taking for granted that the program was positive). BEN: very small (almost impossible to detect any impact). Sources: ILO (2008); Barro and Lee (2000); Tovstiga and Tulugorava (2007); Volkov and Garanina (2008).

Other cases, public, private and other (World Bank and NGOs). Programs: Brazil, Canada, Australia, Japan, New Zealand. PV: high (billions since WWII, in particular in Canada and Australia; increasing in LDCs in recent years). P: public, high (economic impact studies); private, high (company based methods); other, low (result methods). Q: public, high (concern with public money); private, high (concerns about usefulness); other, small (beneficial instincts dominate). BEN: public, high (policy guidance); private, high (guidance of private investments); other, small (international aid concerns prevail). Sources: Hum (2002); Middleton (1992); Ordonez (2002); Johanson (2006); Guthrie (2007); Bontis (2002).
in 1998. This program has not yet been evaluated by impact studies (White House, 2008).
Privately funded HRD programs
Private training and HRD programs have been made in the USA by the most influential companies and organizations since World War II (Table VII, line 3). As a consequence, in the USA the evaluation of private HRD programs is a common occurrence. A wide web of consultants and associations for evaluation research exists to evaluate HRD programs (www.eval.org). HRD operations have been evaluated using a plethora of methods, namely the following (McLean, 2005; Smith and Piper, 1990):
. questionnaires; testing and grading; the Kirkpatrick four levels;
. trained observers; interviews; return on investment methods; impact microeconomic methods; and, more recently, Balanced Scorecard and systemic approaches.
Those methods cover the various types of evaluation perspectives indicated in Table V. In fact, the various types of evaluators compete for the market of the evaluation of HRD programs. Therefore, when related to the Gramlich methodology, the values of p, q and PV regarding privately funded HRD programs in the USA have been high because, respectively, the methodologies used have been scientific, the private administrators have shown concern over the outcomes, and the value of the programs analyzed was very high.
The only cases we know of programs funded by the ESF and evaluated with impact methods between 1990 and 2000 relate to Spain (Ballart, 1998; Saez and Toledo, 1996), the UK (Bradley, 1995), Germany (Lechner, 1998) and Ireland (O'Connell and McGinnitty, 1996). Therefore, the value of p was not considerably augmented, even if it was said and announced that the effect of the programs was going to be detected. So, the potential value of the evaluations continued to be quite small (see Table VII, line 5).
However, in a clear sign that the evaluation of HRD was an important matter, in 1999 the European Commission published the collection "MEANS", on the "Evaluation of Socio-Economic Programmes" (European Union, 1999). That collection was composed of five volumes, in which the European Commission summarized the various types of methods that were available to study the Structural Funds. As a consequence, it was to be expected that those methods would be extensively put to use in the third programming phase (2001-2006). And indeed, for some countries (namely Spain, Greece, France, the Netherlands, Austria, Sweden) the investment in HRD by the EU was evaluated in 2002 using impact studies, as a part of the European Strategy for Employment, and in some cases (Spain, Greece and Austria) the impact was found to be positive (European Union, 2002). Therefore, the benefit that might be derived from the evaluations increased in the last decade, at least in some cases (see Table VII, line 6).
Additionally, we must note that one very interesting characteristic of the evaluations made of the programs funded by the ESF is that, as a rule, they are made ex-post on the whole program and not ex-post on one of the years of the program. This characteristic in turn implies that it is almost impossible to perform a meaningful impact study, because impact studies should be made while the program is still operating (European Union, 2002).
In consequence of what was just written, when we apply the Gramlich method to the HRD programs funded by the European Union, we arrive at the following conclusions (see Table VII, lines 4, 5 and 6, taken into consideration at the same time):
. The value of PV has been increasing steadily since 1958, and that increase was a consequence of the increasing importance that HR and HRD have had in the European economy.
. However, the value of p has not increased very much in all the countries, because the methodologies of evaluation that were used in most of the cases were not able to detect the true impact of the program in socio-economic terms. In most cases those methodologies analyzed the program itself and not its consequences.
. Finally, regarding q, the political discourse was seldom translated into practice. Even if the concerns of the administrators rose with time, and if evaluation became one of the phases of every operational program, in practice the change was small: the methodologies used in the evaluation were basically basic result ones; the evaluations compared the situation of the participants in the program in a before-after comparison; and the immediate effect of the evaluations was severely diminished by the timing in which they had to be made.
Therefore, we think that, as a rule, the potential value that might be extracted from the evaluations made of the programs funded by the EU was small. This fact might in turn be explained because the funds awarded by Brussels usually covered more than 50 percent of the program costs, and therefore those programs were considered by the national recipients as "goods" or even "gifts", and money that should be absorbed. Absorption was seen by the European administration and by the national administrations as the first and essential signal of success.
In consequence, and in what relates to Gramlich's model, it seems evident that the value of the PV variable has been increasing with the passage of time, because the economic changes that have happened since 1973 meant that the public presence in the labour market, regarding training and HRD, should be widened and reinforced in order to support the people and organizations in need. At the same time, the values of p and q increased because, on the one hand, the shortage of public money called for better impact studies and, on the other hand, that same shortage made the public authorities more open and sensitive to the results of the evaluations.
Privately funded programs
HRD has been a frequent practice in Western Europe since the end of WWII, and mainly since the 1960s (Table VII, line 8). Most of the privately funded programs made in Europe are and have been submitted to simple forms of evaluation, namely result methods. Many programs have been evaluated on pedagogical and reactive grounds. Some programs have been evaluated with ROI techniques and with Balanced Scorecard techniques (Russell, 2003). Most of the evaluations made were related to the company's own results. It seems that European private companies rely on the fact that HRD is useful, and they expect that usefulness to be shown in terms of competences, financial results, routines or consumer satisfaction.
Other cases
Outside Western and Eastern Europe and the USA, evaluations of HRD have been
essentially made in developed countries like Japan (Johanson et al., 2006), Australia
(Guthrie et al., 2007) and Canada (Bontis, 2002a). However, in the last ten years, with
the globalization of knowledge and the widespread diffusion of good practices, the
first manifestations of HRD evaluations have been made in Brazil and South America,
in Russia, in China and in India (Hum and Simpson, 2002; Middleton et al., 1992;
Ordóñez de Pablos, 2002).
Quite strikingly, the evaluation procedures implemented to analyze those programs
varied with the funding source, namely:
. Important HRD programs funded by the public administration took place
outside the USA and Europe in countries like Brazil (SENAI, www.senai.br/br/
Publicacoes/snai_ind_pub.aspx), Canada (with the Canada’s Job Strategy) and
Australia (Australian Government, 2008).
. In some Less Developed Countries (LDCs), HRD programs have been carried
out with the support of the World Bank and other Non-Governmental
Organizations (NGOs) (www.worldbank.org/aid/). Those programs were usually
evaluated using result indicators and competence-based indicators, on the
assumption that their existence was positive for the recipient societies. We do
not know of any deep and fine-grained socio-economic impact analysis
regarding those HRD programs.
. Finally, privately funded training and HRD was also a common characteristic of
developed countries, like Japan, Canada, Australia and New Zealand. As
happened in the USA and in Europe, those programs had an important
dimension, and were frequently evaluated using complex scientific techniques
(see above). That probably happened because the companies were concerned
with the return on the investment they had made.
Concluding comments
Conclusions
We derived eight important findings from this paper:
(1) Evaluations of HRD experiences have spread worldwide, as a function of
economic development. In less developed societies HRD investments are very
small, and HRD evaluations are also almost non-existent.
(2) The first finding holds even if, in some cases, like the programs that are funded
by private companies in the developed world, the evaluation is not done because
the administrators believe that the operation is profitable.
(3) The public programs on HRD tend to be large and oriented towards
disadvantaged people. They tend to pursue social goals. As a consequence,
they may be evaluated by two main sorts of evaluation techniques: results
methods and microeconomic impact methods.
(4) In the first hypothesis indicated in point (3), the evaluation is more of the social
policy type. This situation tends to exist when the funds are perceived to come
from abroad (as in the EU case) or if the conviction of the administrator is so
strong that he feels there is no need to evaluate the impact of the program; this
last hypothesis is analogous to the case of the private manager mentioned in
point (2).
(5) The private programs on HRD tend to be made by profit-seeking companies.
They are meant to generate large future profits. Those programs tend to be
evaluated in terms of competences and learning (in an HRD perspective), in
terms of knowledge (in a KM perspective), in terms of the difference between
the market value and the book value (in an IC analysis) and in terms of
ROI (using the traditional management methodology). This situation is
consistent with the situation described in point (2).
(6) By geographic area, the USA has been the country in which the most HRD
programs have been evaluated, and evaluated best; it was followed by the
Nordic countries, the UK, Central Europe, Japan, Canada and Australia.
(7) In the twenty-first century, benefiting from the possibilities that the internet
and IT technologies provide, HRD evaluations have been made in many other
regions, like Southern Europe, Latin America, the Muslim World, Africa and
South East Asia. Nowadays, HRD and HRD evaluations are worldwide
phenomena.
(8) Quite crucially, the BRICs (Brazil, Russia, India and China) have all made
significant advances in HR and HRD in the last two decades.
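The distinction between result methods and impact methods that runs through these findings can be sketched numerically: a result method counts occurrences among participants, while an impact method nets out what a comparison group achieved without the program. The figures below are hypothetical.

```python
# Illustrative sketch (hypothetical figures): result indicator vs impact estimate.

# Result method: simply count occurrences among trainees,
# e.g. how many are employed after the program.
trainees_employed, trainees_total = 70, 100
result_rate = trainees_employed / trainees_total            # 0.70

# Impact method: subtract the employment rate of a comparison
# group that did not receive the training.
comparison_employed, comparison_total = 55, 100
comparison_rate = comparison_employed / comparison_total    # 0.55

impact = result_rate - comparison_rate
print(f"result = {result_rate:.2f}, impact = {impact:.2f}")
```

The gap between the two numbers is precisely why result methods cannot, by themselves, establish the difference a program made to society.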
Limitations
We used the internet to find the cases that illustrate most of our study. That is in
itself a huge limitation. We should get in touch with authors in each of the four
geographical regions (USA, Western Europe, Eastern Europe, other cases) we
identified, and we should try to find more detailed and precise information for each
case.
Practical implications
We think we have made clear that the type of evaluation put in place for HRD-related
programs relates to the type of administrator/user and to the type of funding. That
fact in itself helps to put HRD efforts in perspective. The different types of
administrators should be aware of the alternative methods available to evaluate HRD.
For scholars this finding is also important because it helps to build bridges between
apparently separate scientific fields.
References
Alcock, P., Erskine, A. and May, M. (2003), The Student’s Companion to Social Policy, 2nd ed.,
Blackwell Publishing, London.
Ashenfelter, O. (1978), “Estimating the effect of training programs on earnings”, Review of
Economics and Statistics, Vol. LX, pp. 47-57.
Ashenfelter, O. and LaLonde, R. (Eds.) (1996), The Economics of Training, Volume I, Theory and
Measurement, Edward Elgar Publishing, Aldershot.
Australian Government (2008), “New National training system”, available at: www.dest.gov.au/
sectors/training_skills/policy_issues_reviews/key_issues/nts
Ballart, X. (1998), “Spanish evaluation practice versus program evaluation theory”, Evaluation,
Vol. 4 No. 2, pp. 149-70.
Bannock, G., Baxter, R. and Davis, E. (2005), The Penguin Dictionary of Economics, 6th ed.,
Penguin, Harmondsworth.
Barro, R. and Lee, J-W. (2000), “International data on educational attainment: updates and
implications”, CID Working Paper No. 42, Cambridge, MA.
Bartel, A. (1995), “Training, wage growth and job performance – evidence from a company
database”, Journal of Labour Economics, Vol. 13 No. 3, pp. 401-25.
Becker, G. (1962), “Investment in human capital: a theoretical analysis”, The Journal of Political
Economy, Vol. LXX No. 5, pp. 9-49.
Becker, G. (1976), The Economic Approach to Human Behaviour, The University of Chicago
Press, Chicago, IL.
Becker, G. (1980), “Investment in human capital: effects on earnings”, in Bernstein, M. and
Crosby, F. (Eds), Human Capital: A Theoretical and Empirical Analysis with Special
Reference to Education, University of Chicago Press, Chicago, IL, pp. 15-44.
Ben Porath, Y. (1967), “The production of human capital and the life cycle of earnings”, Journal of
Political Economy, Vol. 75, August, pp. 352-65.
Bjorklund, A. (1994), “Evaluations of labour market policies in Sweden”, International Journal of
Manpower, Vol. 15 No. 5, pp. 16-31.
Bontis, N. (2002), “Intellectual capital disclosure in Canadian corporations”, Journal of Human
Resource Costing and Accounting, Vol. 14, pp. 9-20.
Booth, A.L. and Snower, D.J. (1996), Acquiring Skills, Cambridge University Press, Cambridge.
Bradley, S. (1995), “The Youth Training Scheme: a critical review of the evaluation literature”,
International Journal of Manpower, Vol. 16 No. 4, pp. 30-56.
Burtless, G. and Orr, L. (1986), “Are classical experiments needed for manpower policy?”, Journal
of Human Resources, Vol. 21 No. 4, pp. 606-39.
Card, D. and Sullivan, D. (1988), “Measuring the effect of subsidized training programs on
movements in and out of employment”, Econometrica, Vol. 56, pp. 497-530.
Cegarra-Navarro, J. and Sanchez-Polo, M. (2008), “Linking unlearning and relational capital
through organisational relearning”, International Journal of Human Resources
Development and Management, Vol. 7 No. 1, pp. 37-52.
Chapman, P. (1993), The Economics of Training, Harvester, Manchester.
Chinapah, V. and Miron, G. (1990), Evaluating Educational Programmes and Projects: Holistic
and Practical Considerations, Socio-Economic Studies Vol. 15, UNESCO, Paris.
Deacon, B. (2000), “Eastern European welfare states: the impact of the politics of globalization”,
Journal of European Social Policy, Vol. 10 No. 2, pp. 146-61.
Doeringer, P.B. and Piore, M.J. (1985), Internal Labour Markets and Manpower Analysis,
M.E. Sharpe, Armonk, NY.
Dougherty, C. and Tan, J.-P. (1997), “Financing training – issues and options”, International
Journal of Manpower, Vol. 18, pp. 29-62.
Esping-Andersen, G. (1990), The Three Worlds of Welfare Capitalism, Princeton University
Press, Princeton, NJ.
European Union (1997a), Fonds Social Européen: Vue d’ensemble de la période de programmation
1994-9, European Union, Luxembourg.
European Union (1997b), Labour Market Studies, European Union, Luxembourg.
European Union (1999), Evaluation of Socio-economic Programs Collection, MEANS,
Luxembourg.
European Union (2002), Impact Evaluation of the European Employment Strategy: Technical
Analysis, available at: http://ec.europa.eu/employment_social/employment_strategy/eval/
papers/technical_analysis_complete.pdf
Eurostat (2007), Labour Market Policy Expenditure in Active Measures by Type, Eurostat,
Luxembourg.
Ferrera, M., Hemerijck, A. and Rhodes, M. (2000), The Future of Social Europe. Recasting Work
and Welfare in the New Economy, Celta Editora, Oeiras.
Fitz-enz, J. (2000), The ROI of Human Capital: Measuring the Economic Value of Employee
Performance, AMACOM, New York, NY.
Gramlich, E. (1981), Benefit-Cost Analysis of Government Programs, Prentice-Hall, Englewood
Cliffs, NJ.
Gramlich, E. (1990), A Guide to Cost Benefit Analysis, 2nd ed., Prentice-Hall, Englewood Cliffs, NJ.
Guthrie, J., Petty, R. and Ricceri, P. (2007), Intellectual Capital Reporting: Lessons from Hong
Kong and Australia, The Institute of Chartered Accountants of Scotland, Glasgow.
Heckman, J. and Hotz, J. (1989), “Choosing among alternative nonexperimental methods for
estimating the impact of social programs: the case of manpower training”, Journal of the
American Statistical Association, Vol. 84 No. 408, pp. 862-74.
Heckman, J. and Smith, J. (1995), “Assessing the case for social experiments”, Journal of
Economic Perspectives, Vol. 9 No. 2, pp. 85-110.
Heckman, J., Lalonde, R. and Smith, J. (1999), “The economics and econometrics of active labour
market programs”, in Ashenfelter, O. and Lalonde, R. (Eds), Handbook of Labour
Economics, Vol. 3A, Chapter 31, North Holland, Amsterdam, pp. 1865-2097.
Heckman, J., Ichimura, H., Smith, J. and Todd, P. (1998), “Characterizing selection bias using
experimental data”, Econometrica, Vol. 66, pp. 1017-98.
Hum, D. and Simpson, W. (2002), “Public training programs in Canada: a meta-evaluation”,
Canadian Journal of Program Evaluation, Vol. 17 No. 1, pp. 119-38.
Ignjatovic, M. and Svetlik, I. (2003), “European HRM clusters”, EKK Toim, Vol. 17, Autumn,
pp. 25-39.
ILO (2008), Operational Programme for Human Resource Development – Czech Republic,
available at: www.ilo.org/public/english/employment/skills/hrdr/init/cze_2.htm
Johanson, U., Koga, C., Skoog, M. and Henningsson, J. (2006), “The Japanese Government’s
intellectual capital reporting guideline: what are the challenges for firms and capital
market agents?”, Journal of Intellectual Capital, Vol. 7, pp. 474-91.
Kaplan, R. and Norton, D. (1996), The Balanced Scorecard, Harvard Business School Press,
Boston, MA.
Katz, E. and Ziderman, A. (1990), “Investment in general training: the role of information and
labour mobility”, Economic Journal, Vol. 100 No. 403, pp. 1147-58.
Keynes, J. (1990), Teoria Geral do Emprego, do Juro e da Moeda [The General Theory of
Employment, Interest and Money], Editora Atlas, São Paulo (originally published in 1936).
Kianto, A. (2008), “Assessing organizational renewal capabilities”, International Journal of
Innovation and Regional Development, Vol. 1 No. 2, pp. 115-29.
Kirkpatrick, D. (1998), Evaluating Training Programs: The Four Levels, 2nd ed., Berrett-Koehler,
San Francisco, CA.
Laffan, B. (1988), “Policy implementation in the European community: the European social fund
as a case study”, Journal of Common Market Studies, Vol. 21 No. 4, pp. 393-408.
LaLonde, R. (1995), “The promise of public sponsored training programs”, Journal of Economic
Perspectives, Vol. 9 No. 2, pp. 149-68.
Lechner, M. (1998), Training in the East German Labour Force: Microeconometric Evaluations of
Continuous Vocational Training after Unification, Physica-Verlag, Heidelberg.
McLean, G. (2005), “Examining approaches to HR evaluation”, Strategic Human Resources, Vol. 4
No. 2.
Middleton, J., Ziderman, A. and Adams, V. (1992), “The World Bank’s policy paper on vocational
and technical education and training”, Prospects, Vol. XXII No. 2.
Mincer, J. (1958), “Investment in human capital and personal income distribution”, Journal of
Political Economy, Vol. 66 No. 4, pp. 281-302.
Mincer, J. (1962), “On-the-job training: costs, returns, and implications”, Journal of Political
Economy, Vol. LXX No. 5, pp. 50-80.
Mincer, J. (1974), Schooling, Experience and Earnings, NBER Columbia University Press, New
York, NY.
Nielsen, W., Pedersen, P., Jensen, P. and Smith, N. (1993), “The effects of labour market training
on wages and unemployment: some Danish results”, in Bunzel, H. (Ed.), Panel Data and
Labour Market Dynamics, North Holland, Amsterdam.
Nonaka, I. and Takeuchi, H. (1995), The Knowledge Creating Company, Oxford University Press,
Oxford.
O’Connel, P. and McGinnity, F. (1996), “What works, who works? The impact of active labour
market programmes on the employment prospects of young people in Ireland”, Discussion
paper, Social Science Research Centre, Berlin.
OECD (2006), Education at a Glance, OECD, Paris.
Oi, W. (1962), “Labour as a quasi-fixed factor”, The Journal of Political Economy, Vol. 70,
pp. 538-55.
Ordóñez de Pablos, P. (2002), “Intellectual capital measuring and reporting in leading firms:
evidences from Asia, Europe and the Middle East”, in Bontis, N. (Ed.), Proceedings of the
5th World Congress on Intellectual Capital, Butterworth-Heinemann, Oxford.
Orr, L.L., Bloom, H.S., Bell, S.H., Doolittle, F., Lin, W. and Cave, G. (1996), Does Training for the
Disadvantaged Work? Evidence from the National JTPA Study, The Urban Institute Press,
Washington, DC.
Paul, J. (1989), La relation formation – emploi: un défi pour l’ économie, Economica, Paris.
Pike, S. and Roos, G. (2007), “Recent advances in the measurement of intellectual capital: a critical
survey”, paper presented at the European Conference on Knowledge Management
Consorci Escola Industrial de Barcelona (CEIB), Barcelona, 6-7 September.
Russell, R. (2003), “The international perspective: Balanced Scorecard usage in Europe”,
Balanced Scorecard Report, Vol. 5 No. 3, pp. 13-14.
Saez, F. and Toledo, M. (1996), “Formation, Mercado de Trabajo e Empleo”, Economistas, Vol. 69,
pp. 351-7.
Schellhaass, H.M. (1991), “Evaluation strategies and methods with regard to labour market
programs – a German perspective”, Evaluating Labour Market and Social Programs – The
State of a Complex Art, OECD, Paris, pp. 89-106.
Schultz, T. (1971), Investment in Human Capital: The Role of Education and of Research, Free
Press, New York, NY.
Shackleton, J. (1995), Training and Unemployment in Western Europe and the United States,
Edward Elgar, Aldershot.
Smith, A. and Piper, J. (1990), “The tailor-made training maze: a practitioner’s guide to
evaluation”, Journal of European Industrial Training, Vol. 14 No. 8.
Snower, D. (1994), “The low-skill, bad-job trap”, IMF Working Paper 94/83, available at: http://
ssrn.com/abstract=883810
Spence, A.M. (1973), “Job market signaling”, Quarterly Journal of Economics, Vol. 87 No. 3,
pp. 355-74.
Stiglitz, J. (1975), “The theory of screening, education, and the distribution of income”, American
Economic Review, Vol. 65 No. 3, pp. 283-300.
Stiglitz, J. (2000), The Economics of the Public Sector, W.W. Norton & Company, New York, NY.
Stevens, M. (1996), “Transferable training and poaching externalities”, in Booth, A.L. and
Snower, D.J. (Eds), Acquiring Skills: Market Failures, their Symptoms, and Policy
Responses, Cambridge University Press, Cambridge, pp. 19-40.
Sveiby, K.-E. (1998), “Measuring intangibles and intellectual capital – an emerging first
standard”, available at: www.sveiby.com/Portals/0/articles/EmergingStandard.html
Tomé, E. (2001a), “The evaluation of vocational training: a comparative analysis”, Journal of
European Industrial Training, Vol. 25 No. 7, pp. 380-8.
Tomé, E. (2001b), “The European social fund in Portugal: a study in a perspective of evaluation”,
PhD thesis, ISEG, Lisbon, pp. 531-50 (in Portuguese).
Tomé, E. (2005), “Evaluation methods in social policy”, Bits – Boletín Informativo de Trabajo
Social, No. 8, Universidad de Castilla-La Mancha, Cuenca.
Tomé, E. (2009), “HRD in a Multipolar World: an introductory study”, paper to be presented, if
accepted, at the AHRD 2009 European Congress, Newcastle.
Torp, H. (1994), “The impact of training in employment: assessment a Norwegian labour market
program”, Scandinavian Journal of Economics, Vol. 96 No. 4, pp. 531-50.
Tovstiga, G. and Tulugurova, E. (2007), “Intellectual capital practices and performance in
Russian enterprises”, Journal of Intellectual Capital, Vol. 8 No. 4, pp. 695-707.
UNESCO (2006), Participation in Formal Technical and Vocational Education Training
Programmes Worldwide. An Initial Statistical Study, UNEVOC International Centre for
Technical and Vocational Education and Training, Bonn.
United Nations (2008), United Nations Human Development Report 2007/8 – Fighting Climate
Change: Human Solidarity in a Divided World, United Nations, New York, NY.
Volkov, D. and Garanina, T. (2008), “Value creation in Russian companies: the role of intangible
assets”, Electronic Journal of Knowledge Management, Vol. 6 No. 1.
White House (2008), Program Assessment, Workforce Investment Act, Youth Activities, available
at: www.whitehouse.gov/omb/expectmore/summary/10000342.2003.html
Corresponding author
Eduardo Tomé can be contacted at: eduardo.tome@clix.pt