
The current issue and full text archive of this journal is available at

www.emeraldinsight.com/0309-0590.htm

The evaluation of HRD: a critical study with applications

Eduardo Tomé
Universidade Lusíada – Vila Nova de Famalicão, Vila Nova de Famalicão, Portugal

Received 19 September 2008
Revised 19 December 2008
Accepted 8 April 2009

Abstract
Purpose – The purpose of this paper is to analyze critically the most important methods that are used in the evaluation of human resource development (HRD).
Design/methodology/approach – The approach is to ask two questions: What are the methods available to define the impact of HRD in the economy? How can we evaluate the evaluations that have been made?
Findings – There are two main perspectives from which to evaluate any program: by results (counting occurrences) and by impacts (calculating the difference the investment made in the society). The first type of method does not find the impact of the program; the second type does.
Research limitations/implications – The analysis is limited by existing studies on HRD. The implications are that the conditions that underlie the existence of HRD programs define the type of evaluation that is used.
Originality/value – The results of this paper put the evaluation problem in a new perspective. The paper explains the difference between the methodologies (results and impacts) and scientific fields used (public administration, social policy, HRD, KM, IC, microeconomics, HR economics) by the type of person responsible: public administrator, private manager, HRD expert, knowledge manager, IC expert, microeconomist. The differences between the applications of those methodologies based on the type of funding – private, public, external – are also explained.
Keywords Human resource development, Knowledge management, Business policy, Lifelong learning, Financing
Paper type Conceptual paper

Introduction
In the economy of the twenty-first century, it is well understood that human resources (HR) are a major factor of economic and social wellbeing. In world terms, the Education Index, computed by the United Nations Development Program (UNDP), shows the clear relation between education, income and human development (Table I). The Human Development Report Education Index (HDREI) is obtained by combining the adult literacy rate (per cent of the population aged 15 years and above) with the combined gross enrolment ratio for primary, secondary and tertiary education, in percentage terms. We must note that the HDREI is not only a measure of the level of education in countries, but also the result of decades of investment in education and in
basic Human Resource Development (HRD) in any country.

The author thanks colleagues who were present at the 2008 AHRD Europe Conference session in Lille, when a first version of this paper was presented, and a very enthusiastic anonymous referee for comments and support.

Journal of European Industrial Training, Vol. 33 No. 6, 2009, pp. 513-538
© Emerald Group Publishing Limited, 0309-0590
DOI 10.1108/03090590910974400
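The HDREI computation described above can be sketched in code. Under the pre-2010 Human Development Report convention, the index is a weighted average of the adult literacy rate (weight two-thirds) and the combined gross enrolment ratio (weight one-third); the weighting and the sample figures below are assumptions for illustration, not values taken from Table I.

```python
def education_index(adult_literacy_pct: float, gross_enrolment_pct: float) -> float:
    """Pre-2010 UNDP-style education index: a weighted average of the
    adult literacy rate (weight 2/3) and the combined gross enrolment
    ratio (weight 1/3), both expressed as percentages in [0, 100]."""
    return (2 / 3) * (adult_literacy_pct / 100) + (1 / 3) * (gross_enrolment_pct / 100)

# Illustrative (hypothetical) figures, not taken from the Human Development Report:
print(round(education_index(99.0, 93.0), 3))
```

A country with perfect literacy and enrolment would score 1.0, which matches the scale of the Education Index column in Table I.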
In what concerns specifically training, it is well known that the more developed countries are also those that invest more in private training (funded by companies and by individuals) and in public training (funded by the State). Usually, those investments are measured by the share of adults who participate in lifelong learning, by the share of individuals who undergo training, or by the funds spent on active labour market policies (see Table II). The data usually relate to developed countries (OECD, 2006; Eurostat, 2007; UNESCO, 2006). However, even if the investment in HRD is seen as an

Table I. Education, income and human development – 2006

Countries        Income    Human development    Education index
High income      33,082    0.936                0.937
Middle income     7,416    0.776                0.843
Low income        2,631    0.570                0.589
World             9,543    0.743                0.750

Notes: The countries are grouped according to their value of income. Income is measured in terms of Gross Domestic Product in Purchasing Power Parity (GDP PPP)
Source: United Nations (2008)

Table II. Investment in training: basic statistics

Country           Lifelong learning (2006) (%)   Public funds (2005) (million euros)
Belgium            7.5                              608
Bulgaria           1.3                               14
Czech Republic     5.6                               13
Denmark           29.2                            1,059
Germany            7.5                            5,580
Estonia            6.5                                4
Ireland            7.3                              342
Greece             1.9                               71
Spain             10.4                            1,334
France             7.6                            4,927
Italy              6.1                            2,839
Latvia             6.9                               13
Lithuania          4.9                               11
Hungary            3.8                               34
Netherlands       15.6                              723
Austria           13.1                              800
Poland             4.7                              250
Portugal           4.2                              431
Romania            1.3                               10
Slovenia          15.0                               12
Slovakia           4.1                                9
Finland           23.1                              581
Sweden            32.0                              981
United Kingdom    26.6                            1,559
Norway            18.7                              891

Notes: Lifelong learning refers to the percentage of the population aged 25 to 64 participating in education and training over the four weeks prior to the survey. Public funds refers to the public expenditure on training for unemployed people and groups at risk. The figures relate to EU countries only
Source: Eurostat (2007)
obvious necessity, the evaluation of that investment poses many crucial questions. First of all, it is well known in Economics that resources are scarce and must be used with the best return possible. Second, in order to guide policies, it is essential to evaluate investments. That evaluation might be made in microeconomic terms, in macroeconomic terms, or in both perspectives, depending on the type of impacts the program is judged to have: partial, national, or both. Therefore, it is crucial to know which are the best methods available to evaluate HRD. The evaluation problem has a particular significance, because we may assume that with the emergence of a knowledge-based economy (KBE) the societal needs for HRD worldwide are not decreasing; quite on the contrary, they are increasing. Finally, we may consider that the evaluation methods put in place are dependent on the assumptions and feelings the administrators have about the programs. Putting it very basically, we only evaluate when we have doubts; we don't evaluate if we are sure that the program is very good or very bad.
Given the ideas that were just expressed, this paper addresses two basic research questions:
RQ1. What are the methods available to define the impact of HRD in the economy?
RQ2. How can we evaluate the evaluations that have been made?
Accordingly, to deal with those two questions, we divided the paper into four parts. In the first part of the paper we define the theoretical background. Namely, we mention the basic concepts that underlie the analysis, review the basic economic theories on HRD, introduce the evaluation problem and describe the most important evaluation methods on the topic, dividing them into result methods and impact methods. In the second part of the paper we present a model (Gramlich, 1990, pp. 160-164) that we consider fundamental to answering the two questions we put. In the third part we describe the data we collected on evaluation experiences from some of the most important HRD programs we know about, and we analyze those data according to the methodology defined in the second part. We describe the American cases, the Western European case, the Eastern European case, and the other cases. In the fourth part, we present the paper's conclusion, limitations and practical implications, and we suggest avenues for further research.

Theoretical background
Concepts
The two main concepts that will be used in this paper are those of human resources (HR) and human resource development (HRD). We assume that those two concepts are intrinsically linked but are different from each other. We will explain those differences in a moment. We use a very broad and generic definition of HR grounded in HR Economics (Bannock et al., 2005). For us, HR are all the characteristics a human being has that may be valued in the labour market. These include physical characteristics like health and beauty, and non-physical characteristics such as education, training, competences, and skills. And for us, HRD is any process, formal or informal, undertaken by the individual or by an organization, that tries to increase, adapt and improve HR (McLean, 2005). As a final note to this sub-section we must add that, continuing to use the economist's perspective, and in line with Gramlich's analysis (Gramlich, 1990, pp. 160-164), we will view HRD as an investment, and therefore we will assume an economic perspective. We will also assume that all HRD initiatives may be assimilated to social policies (Tomé, 2005).
Theories
The positive influence of HRD on the situation of individuals, companies, and nations has been noticed by the most important thinkers in the economic world since the dawn of Economics. Indeed, Adam Smith was the first to notice that skills and competences are an important factor in the development of countries (Paul, 1989). The other most important classical economists (Marshall (Paul, 1989), Stuart Mill (Shackleton, 1995), Pigou (Stevens, 1996)) agreed with Smith's idea. Keynes also gave competences and skills an important place in the economic process (Keynes, 1990). But it is commonly accepted that the basic economic theory that is nowadays applied to HRD saw the light with the works of neo-classical economists in the 1950s (Becker, 1962, 1976, 1980; Ben Porath, 1967; Mincer, 1958, 1962, 1974; Oi, 1962; Schultz, 1971), who built the foundations of the Human Capital Theory (HCT). According to the HCT, HRD activities should improve labour productivity, and in consequence the wages of qualified people should be higher than those of non-qualified people. HRD should also improve the employment possibilities of those involved. And HRD should contribute to improving product quality and the social climate of companies. Finally, at a national level, HRD should generate exports, income and ultimately social cohesion.
The HCT analyses were quite strong and valid. But in the last decades, HCT theories have been challenged in a significant number of ways. Some of those challenges relate to:
. the explanation of how discriminative factors like race, gender and company dimension may be incorporated in the analysis;
. the public intervention in the labour market; and
. the role of knowledge and intellectual capital in the economy.

We will describe these three contributions succinctly next:


(1) Regarding the discriminative factors, critical analysts explained why some classes (among them ethnic minorities, adult women, and disabled persons) are given fewer HRD opportunities: because they are perceived beforehand (screened) by companies as being less productive. On the contrary, people with diplomas and "politically correct" status are given all the training opportunities by organizations because they are perceived as productive. As a consequence, inefficiencies and inequalities originate (Spence, 1973; Stiglitz, 1975). Another important fact is that not all the companies that act in the labour market are equal. Quite on the contrary, there is a tendency for the market to be quite dualistic, with SMEs having difficulties training employees, and big companies offering solid career perspectives and ample training opportunities. As a consequence, training tends to be better remunerated in big companies than in small ones (Chapman, 1993; Doeringer and Piore, 1985).
(2) On what concerns the public intervention in the labour market: for HCT theorists, HR investments were mainly private investments, made by workers and companies; but quite a number of studies have since suggested that public intervention in the HR arena should be made, to eliminate some market failures related to externalities, market imperfections, weak private capacity to train (Middleton et al., 1992, p. 35), funding difficulties (Katz and Ziderman, 1990), transferable training (Stevens, 1996), the bad job, low skills trap (Snower, 1994), and individual myopia (Dougherty and Tan, 1997, p. 38). In fact, HRD theorists believe there is a positive externality associated with HR, meaning that the possessor of HR has a positive effect on its surroundings, by transferring knowledge and good practices, etc.; as a consequence, HRD activities should be subsidized. Also, the market is not perfect, and people who get unfairly screened should be supported. Furthermore, if the private sector is too weak, the State should come to its support. Additionally, in the case of transferable training, some companies may "poach" workers from the companies that trained them, creating a big disincentive to train. The "bad job, low skills trap" means that companies that think people are not qualified offer bad jobs, inducing a vicious cycle of low qualification, because individuals don't feel encouraged to invest. Those studies also pointed out that in case of equity problems the public intervention could be justified (Booth and Snower, 1996). Of course, as a general golden rule, public intervention in the HRD market should only be made if the dimension of the government failure it carries with it is smaller than the dimension of the market failure that is to be solved (Booth and Snower, 1996). It is also very curious to note that several philosophic schools exist in social policy, and we can assume that they differ in the relative dimension they give to market failures versus government failures (Alcock et al., 2003; Deacon, 2000; Esping-Andersen, 1990; Ferrera et al., 2000). An application of those differences of conception to the human resource management reality has already been made (Ignatovic and Svetlik, 2003). Those analyses generate Table III. According to the table, different forms of social systems (column 1) are based on different types of ideas on market failures (column 2) and government failures (column 3), which have very significant consequences for the HRD market (column 4). Examples are given in column 5. The liberal conception is based on the idea that market failures are minimal and government failures maximal; as a consequence HRD should be privatized; this system exists in the Anglo Saxon countries. For conservatives, market failures are moderate and government failures considerable; therefore the public intervention should be subsidiary; this system exists basically in Central Europe. Social Democrats assume that market failures are considerable and government failures moderate; as a consequence the HRD market should be State led, as happens in the Nordic countries. Finally, according to the Socialist conception, market failures are maximal and government failures minimal, and therefore HRD should be nationalized, as happened in the Eastern European countries between World War II and the fall of the Berlin Wall, and as still occurs in China now. For a more detailed analysis of the problem see Tomé (2009).

Table III. Public intervention versus private intervention in HRD

Type of social system   Market failure   Government failure   Consequence for HRD market   Example
Liberal                 Minimal          Maximal              Privatized                   UK, USA
Conservative            Moderate         Considerable         Public subsidiary            Central Europe
Social Democratic       Considerable     Moderate             State led                    Nordic countries
Socialist               Maximal          Minimal              Nationalization              Communist regimes

Sources: Own analysis of Alcock et al. (1997); Deacon (2000); Esping-Andersen (1990); and Ferrera et al. (2000)
(3) Finally, a vast number of studies have been done on intellectual capital and knowledge management, following the studies of Kaplan and Norton (1996) and Nonaka and Takeuchi (1995), which in some way extend the analysis of the use of HR in companies. We will address these studies again when analyzing the impact methods of evaluation.

Quite crucially, as a consequence of the mentioned critical contributions that challenged the HCT, the corpus of economic theories on HRD grew in stature.

The problem of evaluation
Program evaluation is a complex issue. At least ten questions can be put when evaluating a program, as shown in Table IV (Chinapah and Miron, 1990). We think that the whole process of evaluation leads to question number 8: What decisions? And we consider that the evaluation process only makes sense if we may decide to continue, to stop or to change the program evaluated. But in order to make good decisions we think it is important to answer other questions correctly, like How? (methodology), When? (timing), Where? (organization, country), Who? (evaluator), Whom? (decision maker), and Which benefit? (beneficiary). The fourth question (Why?) was analyzed in the subsection about theories.

Evaluation methods
Economists and managers have developed many important methods to evaluate HRD. Those methods apply to programs, and therefore may be analyzed in the context of the evaluation of social policies (Tomé, 2005). Some of those methods relate basically to occurrences (they may be called result methods), and some relate to the perceived consequences of the training programs (they may be called impact methods). We present both types below in order to analyze the methods used in actual evaluations.

Table IV. Questions that might be addressed when evaluating a program

Question          Meaning
How?              Methodology
When?             Timing
Where?            Sampling, target group, organization, country
Why?              Rationale
Who?              Planner, evaluator
What?             Objectives
Whom?             Decision maker
What decisions?   Decisions
Which criteria?   Criteria
Which benefit?    Beneficiary

Source: Chinapah and Miron (1990)
Result methods
The easiest way to evaluate HRD investments or policies is to count the level of investment in money terms (financial indicator) and the number of people that participate in the operation, namely the trainees (physical indicator). This is the easiest way of describing a program in administrative terms. Usually, the analysis of every program begins with this sort of data. However, result indicators only analyze costs and participants, and are not related to benefits. At least in theory it is absolutely possible that a large program (regarding costs and participants) may have very low or even negative consequences for the people involved. But result methods tend to be used to evaluate policies because they are cheaper, less risky, less random and much easier to command than impact methods (Tomé, 2005). Sometimes these methods analyze the accessibility of the program, relating the number of participants to the total number of possible users. In other cases the reports also check the employability of the program, six or more months after its conclusion.
It is quite interesting to note that in the public administrators' perspective the best way of evaluating a program is to use basic result methods. Quite crucially, public administrators are concerned with the execution of the program and the correctness of the planning. To execute means to spend the money that is available and to support a large number of people and organizations. If the operation reached a large number of persons and organizations, this should mean it was a success. Costs should be compensated by accessibility. Needs should have been met. Furthermore, result methods provide the administrator with the description of the operation in financial and physical terms, as administrators want. Result methods provide the administrator with a simple and ready to use measure of the "public good" that originated from the operations. According to this way of thinking:
. the final report should include financial and physical results;
. the public administrator may also sense the importance of assessing the employability of the program; and
. the definition of the impact of the program is only a second thought in the administrator's mind, in particular if it involves analyzing the reality outside the program (as happens with the impact methods, in particular with the "with-without" analysis).

Finally, result methods are not random, in the sense that different methods do not provide different figures (we only have to count people, organizations or expenses), and therefore they are much preferred by the administrators.

Impact methods
This type of method tries to define the difference the program makes to the society or organization in which it is implemented (Tomé, 2005). This difference may be detected in a "with-without" perspective or in a "before-after" perspective. In any case, impact methods are very important because they are the finest way to analyze programs. This happens because impact studies do not only take into account costs (as result methods do when they refer to financial indicators) but try to define the benefits of the program and to compare them with the costs. However, as a counterpart, impact studies present quite a number of difficulties: they are expensive (in money, time, resources, decisions), they are risky (because they may expose the programs' and the administrators' failures), and they are random, because they often require the use of random based statistical techniques and because there are different techniques available that may originate different estimates (Tomé, 2005).
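The "before-after" and "with-without" perspectives, and the way a comparison group corrects both, can be illustrated with a minimal numerical sketch; the outcome figures below are hypothetical, chosen only to show how the three estimates differ.

```python
from statistics import mean

# Hypothetical post-training outcomes (e.g. monthly wages) for programme
# participants and a comparison group, with pre-programme baselines.
treated_before, treated_after = [900, 950, 1000], [1000, 1050, 1100]
control_before, control_after = [900, 940, 980], [930, 970, 1010]

# "Before-after": ignores what would have happened without the programme.
before_after = mean(treated_after) - mean(treated_before)

# "With-without": ignores pre-existing differences between the groups.
with_without = mean(treated_after) - mean(control_after)

# Combining both ("difference in differences") nets out the control trend.
impact = before_after - (mean(control_after) - mean(control_before))

print(before_after, with_without, impact)
```

With these invented numbers the three estimates disagree (100, 80 and 70 respectively), which is exactly the "randomness" of impact methods referred to above: different techniques produce different figures for the same programme.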

Different perspectives of the evaluation
In fact, to our knowledge, currently there are at least six different scientific fields which use different methodologies to analyze HRD programs. Those scientific fields are:
(1) economics;
(2) social policy;
(3) management;
(4) intellectual capital;
(5) knowledge management; and
(6) HRD science.

We will analyze these six types of studies in succession. The six perspectives are summarized in Table V.
Economics. According to this point-of-view an HRD operation may be analyzed in
microeconomic or in macroeconomic terms. The first type of analysis is adapted to
situations in which we can detect a difference between the participants and the
non-participants in the program. Heckman et al. (1999) called this analysis the
“treatment on the treated” scheme. The idea behind this analysis is very simple: the

Table V. A typology of evaluation methods

Perspective            User                       Problem                          Methods                                      Classification
Social policy          Public administrator       Public good                      Progress reports                             Results
Economics              Human resource economist   Impacts on society               Control group; input-output methods;         Impact
                                                                                   supply and demand methods
Management             Private manager;           Impact on the organization       Return on investment                         Impact
                       traditional accountant
Intellectual capital   New accountants            Impacts on the organization      Balanced scorecard: finances, structural     Impact
                                                                                   capital, social capital, human capital
Knowledge management   Knowledge manager          Impacts on the organization      Knowledge sharing, transfer, creation,       Impact
                                                                                   learning and unlearning
HRD science            HRD expert                 Impact for the agents involved   Interviews, questionnaires, participant-     Impact
                                                                                   observer, Kirkpatrick 4 stages

Source: Own analysis of impact methods as related to the sections on result and impact methods
HRD operation should have a (positive or negative) impact on the participants, relative to the evolution of the non-participants after the operation, and that impact should be essential to define the benefit of the program. The analysis is based on the HCT with some extensions, like the screening hypothesis, the dualism in the labour market, and the public intervention in HRD, as referred to in the subsection on theories. The "treatment on the treated" idea is very simple but easily becomes very complicated, for at least three reasons:
(1) First, many types of differences may be observed between the participants and the non-participants, which may bias the estimation of the impacts (Burtless and Orr, 1986);
(2) Second, many different types of methods may be used to estimate the impact, which may originate different estimates of the benefit; among the big differences in methods is the divide between experimental methods (in which the selection for the program is made randomly, generating acceptance problems among the non-selected) and non-experimental methods (in which the two groups are not selected at random) (Heckman and Hotz, 1989);
(3) Third, if the program is big enough in relation to the society in which it is applied, it may well have not only partial equilibrium effects, but it may also affect the whole society. Therefore, in this case, general equilibrium analyses have to be made.
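Reason (1), the selection bias that separates non-experimental from experimental estimates, can be demonstrated with a small simulation. The numbers are entirely hypothetical: ability is drawn from a normal distribution, the true programme effect is fixed at 50, and in the non-experimental case the more able are screened into training.

```python
import random

random.seed(0)
TRUE_EFFECT = 50  # assumed impact of the programme on the outcome

# Hypothetical population: the outcome is "ability" plus the effect if trained.
ability = [random.gauss(1000, 100) for _ in range(10_000)]

# Non-experimental selection: companies screen the more able into training.
trained = [a + TRUE_EFFECT for a in ability if a > 1000]
untrained = [a for a in ability if a <= 1000]
naive_estimate = sum(trained) / len(trained) - sum(untrained) / len(untrained)

# Experimental selection: random assignment over the same population.
half = len(ability) // 2
exp_trained = [a + TRUE_EFFECT for a in ability[:half]]
exp_control = ability[half:]
experimental_estimate = (sum(exp_trained) / len(exp_trained)
                         - sum(exp_control) / len(exp_control))

# The naive comparison confounds the effect with the pre-existing ability gap;
# the randomized comparison recovers something close to the true effect.
print(f"{naive_estimate:.0f} vs {experimental_estimate:.0f}")
```

Here the naive with-without comparison roughly quadruples the true effect, because it attributes the ability gap between the two groups to the programme itself.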
Quite interestingly, in the microeconomic day to day analysis of HRD programs, when the programs are funded by the public sector, the benefits are to be found in terms of wages, of probability of employment, and of the duration of the unemployment spells (Card and Sullivan, 1988; LaLonde, 1995). But when the programs are funded by private companies, the benefits are usually counted in terms of the impact on the productivity of employees (Bartel, 1995). The mentioned general equilibrium analyses are of macroeconomic scope and may be done with supply and demand functions of labour or with input-output models. As a final note, we must emphasize that in order for the economic evaluation of the program to have an immediate effect, it must be completed before the end of the program, and not be done only after the end of the program.
The social policy approach. This point-of-view relates essentially to programs that are funded by public administrations. Those programs are usually made to solve some market failures that exist in the HRD market. Those market failures were described in the subsection on theories. According to this point-of-view, the most important thing is to analyze whether the program occurred in line with the expectations that existed in relation to it. This implies that the monitoring of the program during the operational phase is an essential part of the evaluation process. This also implies that the administrator should put down the targets of their ambitions in order to fulfil them more easily when the program is made (Schellhaass, 1991). Another aspect that is very important from this point-of-view is the legality of the program, a question that calls for a strict pedagogic and financial auditing. This, in turn, may generate a misleading confusion between the impact of the program and the costs of the program. Indeed, all costs may be legal, but the impact of the program, as measured in microeconomic terms, may be negative.
Furthermore, from the administrator's point-of-view, it is very important to analyze the programs' results, meaning that it is very important to stress the "public good" that sprang from the program. And that "public good" must be immediately and undisputedly perceived. It is for this reason that administrators, in particular public administrators, tend to favour result methods instead of impact methods to evaluate the programs they have in charge (Tomé, 2005). Therefore, public administrators tend to measure HRD programs by the following indicators: number of supported people and organizations; accessibility rate (number of participants versus number of potential participants); employment situation of the participants a few months after the operations; and program costs (European Union, 1997a). If all those variables are high, public administrators tend to think the program is worthwhile.
Finally, we must not forget that public HRD programs, even when they are made to prevent market failures, may fall into several public failures (Stiglitz, 2000):
(1) The administrator may have a personal interest in pursuing the program even if its impacts are low or negative, because this means that he will maintain his job; this situation may lead to the perpetuation of programs, almost in dynastic terms, for decades; and
(2) The administrator may have a personal interest in augmenting the costs of the program over the socially efficient level, if that augmentation may increase the perceived importance of the program.

These two situations imply that public programs of HRD tend to be evaluated by legitimistic indicators, which do not question the program and which, quite on the contrary, tend to elude the eventual shortcomings or failures that the program might have. The legitimistic approach is also palpable in the fact that the evaluations that are made are inward looking and tend to be made after the end of the program. We may elaborate further on that idea:
. First of all, the programs are submitted to a procedural analysis in order to measure the gaps between the reality and the expectations; but frequently no analysis of the consequences of that reality is made (Laffan, 1988);
. Second, the evaluation that is done tends to be concerned with the participants; the question of comparing with the non-participants and looking outside the program to find differences or similarities between the program and the external environment is secondary, if existent (European Union, 2002);
. Third, the economic evaluations tend to be made after the conclusion of the program, and not after the end of the first economic year of the program (European Union, 2002).

The management approach. This approach is used when the HRD operation is funded by a private company. According to this approach, HRD programs must be evaluated by the traditional accounting and financial mechanisms that exist to evaluate any investment of any private company. This amounts to "putting numbers on people". Because all humans are different, and because competences are immaterial and their outputs difficult to measure, this is not at all an easy task. Anyway, attempts have been made to estimate the return on investment (ROI) of HRD operations (Fitz-enz, 2000).
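A minimal sketch of the ROI logic just mentioned, assuming the usual net-benefit-over-cost definition; the programme figures are invented for illustration.

```python
def hrd_roi(benefits: float, costs: float) -> float:
    """ROI as commonly reported for training programmes:
    net benefits divided by costs, expressed as a percentage."""
    return (benefits - costs) / costs * 100

# Hypothetical programme: 120,000 of measured benefits against 80,000 of costs.
print(hrd_roi(120_000, 80_000))  # → 50.0
```

The arithmetic is trivial; the hard part, as the text notes, is producing the `benefits` figure for something as immaterial as competences.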
The intellectual capital approach. However, in the last two decades, the management world has begun to understand that a considerable and important difference exists between the market value (MV) and the book value (BV) of any company. And furthermore, it has been considered that the difference between the MV and the BV is due to intangible assets. HR are considered to be an important part of those intangible assets, together with the organizational routines, R&D, brands, and the relations with customers (Sveiby, 1998). This perspective was above all compounded in the Balanced Scorecard analysis (Kaplan and Norton, 1996). This approach generated some "new management practices", related to Knowledge Management (see the next subsection). Accordingly, the Balanced Scorecard methodology has been used to assess investments in KM. Furthermore, this approach was consigned in recent guidelines on accounting issued by the European Commission. Finally, in recent years, new methods have been proposed to address the difference between MV and BV (Pike and Roos, 2007; Volkov and Garanina, 2008).
The knowledge management perspective. Knowledge became such an important scientific subject in the last two decades that a scientific field developed around the concept (Nonaka and Takeuchi, 1995). In organizational terms, that event resulted in the emergence of studies on increasingly important phenomena related to knowledge creation, knowledge sharing, knowledge renewal (Kianto, 2008), knowledge transfer, and also learning and unlearning (Cegarra-Navarro and Sanchez-Polo, 2008). The relation between those analyses and the HRD analysis is immense, and indeed we consider that KM and HRD are two faces of the same coin. From a KM point-of-view, HRD operations are evaluated on the impact they have on knowledge and on the various aspects in which knowledge is analyzed: sharing, transfer, creation, renewal, etc. Those impacts have been assessed using complex factor analysis.
The HRD science perspective. Finally, HRD operations may be analyzed from the perspective of the HRD science itself. This perspective may also be called the HRM perspective. The basic model that underlies this approach was made by Kirkpatrick (1998). This model defines four stages in which the HRD operation may have an impact in the organization. The stages are:
(1) reaction;
(2) learning;
(3) behaviour; and
(4) results.

A note on the definition of impact


Regarding all the methodologies just presented, it is important to follow Schellhaass
(1991) and stress some important differences between the optimal impact of the
program (A), its real effects (B), the predicted effects (C), the evolution of the
comparison group (D) and the impact of the program (E). Thus:
. A tends to be higher than B, because A requires optimal circumstances to happen.
. However, for political and administrative purposes the predicted effects (C) may
be estimated by default. As a consequence, the real effects tend to be higher than
the predicted ones (C < B).

Finally, the evolution of the control or comparison group (D) has to be taken into
account. The impact of the program (E) is given by B - D.
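As an illustration only, the distinction above can be sketched numerically; the figures and the helper name `program_impact` are ours (hypothetical), chosen merely to respect the stated inequalities:

```python
# Illustrative sketch of Schellhaass's (1991) distinction, with
# hypothetical numbers chosen only to respect the stated inequalities.

def program_impact(real_effects: float, comparison_evolution: float) -> float:
    """E = B - D: real effects net of what the comparison group achieved anyway."""
    return real_effects - comparison_evolution

A = 12.0  # optimal impact, requiring optimal circumstances
B = 9.0   # real effects observed among participants
C = 7.0   # predicted effects, estimated by default for political purposes
D = 4.0   # evolution of the control/comparison group

E = program_impact(B, D)
assert A > B > C  # A tends to exceed B; predictions tend to fall short of B
print(E)          # the program impact, B - D
```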
How to evaluate an evaluation
The problem
We think that nowadays evaluation is a common procedure in developed countries and
in conscious and prosperous institutions. To our knowledge, evaluation practices have
already been carried out by national bodies in more than 50 countries (www.internationalevaluation.com/members/national_organizations.shtml).
However, not all the efforts made to evaluate training and HRD have the same
quality (see www.cgu.edu/pages/4085.asp) and, therefore, the evaluation of the
evaluation is a very important scientific topic.

The model
In this context, a very interesting model to evaluate evaluations was presented some
years ago (Gramlich, 1981, pp. 171-5; Gramlich, 1990, pp. 159-64). In this model the
benefit derived from an evaluation is defined as a function of three variables:

BEN = p * q * PV (1)

The meaning of each one of those variables is the following:
. p is the probability of discovering the true impact of the program;
. q is the probability that the government/private authority will pay attention to
the result of the evaluation and take it into account; and
. PV is the program cost measured in monetary units.

It is also interesting to note (Burtless and Orr, 1986) that the benefit of an
evaluation can be measured as the costs (F) that are saved if the program is stopped,
minus the benefits that cease to exist if the program is stopped (G), plus the evaluation
costs (H). It is clear that if F is higher than G + H, the evaluation is socially beneficial.
Therefore, high values of p, q and PV may generate a high F that ensures that G
plus H are offset. In this circumstance the evaluation is worthwhile. However, a costly
evaluation (with a high H) and a high F may identify an even higher G; in this
hypothesis, H is justified because it showed that G exists and that G is an important and
positive social benefit.
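The two formulas just presented can be combined in a small numerical sketch; the function names and all monetary figures are ours, purely hypothetical:

```python
# Sketch of the evaluation-benefit logic; all monetary figures hypothetical.

def expected_benefit(p: float, q: float, pv: float) -> float:
    """Gramlich: BEN = p * q * PV."""
    return p * q * pv

def socially_beneficial(f: float, g: float, h: float) -> bool:
    """Burtless and Orr: an evaluation pays off when the costs saved by
    stopping the program (F) exceed the forgone benefits (G) plus the
    evaluation costs (H), i.e. F > G + H."""
    return f > g + h

# A fine impact study (high p) of a large program (high PV) whose results
# the administrator is likely to heed (moderate q):
ben = expected_benefit(p=0.8, q=0.5, pv=10_000_000)
print(ben)  # 4000000.0

# Stopping the program would save 5M, forgo 3M of benefits; evaluation costs 0.5M:
print(socially_beneficial(f=5_000_000, g=3_000_000, h=500_000))  # True
```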
The conclusions
Using the method of evaluating evaluations just exposed, we arrive at the following
three conclusions concerning the priorities regarding program evaluations:
(1) It is preferable to evaluate using very fine, even if extremely costly, impact
methods (for which p is large). This happens because impact methods are the
correct way of obtaining a correct estimation of the program benefits. The
alternative is to use results methods, which overlook the estimation of benefits
and only deal with the costs. This holds even if there is some scientific
controversy about which impact methods are best.
(2) It is preferable to evaluate programs when there is doubt about the program's
usefulness (and therefore q is large). This happens because, by logic, if the
administrator (private or public) has no doubts about the program, he will never
evaluate it and will also tend not to take into account the results of the
evaluation.
(3) It is preferable to evaluate large programs (for which PV is large). This happens
because, given that evaluation studies are a scarce resource, it makes more
sense to use them to analyse very important (and therefore big) programs than
to use them to study very small ones.

Discussion
However, there are open questions and controversies regarding all three conclusions:
(1) It is not scientifically clear which econometric estimator best defines the
program impact (Heckman et al., 1999): the debate on this topic is
continuing; some authors defend experimental studies (Burtless and Orr,
1986), and some others try to put the different methods in perspective (Heckman
and Smith, 1995; Heckman et al., 1998), pointing out that non-experimental
studies, using cross-section or time-series data, are also valuable.
(2) The readiness of the administrator and political agent to listen to the evaluators'
ideas is fundamental; from public choice analysis (Stiglitz, 2000) it is well
known that politicians may have an interest in their own programs. This interest
may be reflected in the evaluation procedures. Administrators may only evaluate
programs if they think the evaluation suits their interests. The timing of the
evaluation procedures may be chosen to suit the same interests. And, finally,
evaluations may be made over short time spans in order to legitimize the programs.
(3) It may be very useful to evaluate a pilot study, whose value PV is small, in order
to obtain insights for the making of a bigger program.

Known experiences of evaluation analyzed and evaluated

In this section we will evaluate the most important experiences on HRD we know of,
according to Gramlich's method. We will separate the evaluations on HRD
geographically: first, we will describe the American cases; then we will deal with the
Western European cases; in the section after that we will address the Eastern
European cases; and finally we will address the other countries' cases. The variables
that we will use to study PV, q and p are described in Table VI.

The American case

The USA was the place in which HRD evaluations began earliest, regarding both
privately funded and publicly funded programs. American managers were pioneers in
the ways they assessed the programs they funded. This happened with all the

Table VI. Elements of the model and relevant proxies

Concept  Relevant proxies
PV       Physical indicators; financial indicators
q        Political discourse; type of methodology put in place
p        Detailed analysis of the methodologies used

Source: Own analysis of Gramlich (1981, pp. 171-5) and Gramlich (1990, pp. 159-64)
categories of methods we presented. Basically, the American experience of evaluation
of HRD programs has three basic characteristics:
(1) on the one hand, some programs were funded by public money and were analyzed
with social concerns, using microeconomic control-group methods that would expose
their impact;
(2) on the other hand, some programs were based in companies, funded by
companies and evaluated because of the companies' own concerns; and
(3) for the two cases just mentioned, p, q and PV had high values, contributing to
the great importance of the evaluative activity in the USA.

These three ideas will be developed in the next two sub-sections.

Publicly funded HRD programs

Regarding the public programs made in the USA, an important effort was made to
increase p, q and PV by, respectively, implementing fine impact studies,
discussing the findings in scientific forums and increasing the dimension of the
programs. Therefore, the expected value of the evaluation could be very high. This
basic idea is grounded in the following evidence (Table VII, line 2):
(1) The US public administration of public HRD programs was also very advanced
in the evaluation of the programs it funded, often setting the standards by
which other countries measured themselves: the programs were assessed
from their beginnings from a social policy perspective (Ashenfelter, 1978), but the
discussion on the correct economic impact methods for the CETA program
generated a major academic movement. The debate reached the Academy of
Sciences of the United States (LaLonde, 1995).
(2) Very important evaluations of publicly funded HRD programs have been made in
the USA since the 1960s (Tomé, 2001a). Initially, those evaluations were above
all made using result studies (Tomé, 2001a), but since the final years of the
1970s impact studies began to be performed on those programs
(Ashenfelter, 1978). The programs that were evaluated include those made
under the Manpower Development Training Act (MDTA, from 1962 to 1972)
and the three Acts that followed that piece of legislation in succession: the
Comprehensive Employment Training Act (CETA, from 1973 to 1982), the Job
Training Partnership Act (JTPA, from 1982 to 1997) and the Workforce
Investment Act (WIA, since 1998). A detailed list of the evaluations made of the
training programs in the United States is included in LaLonde (1995).
(3) It is quite interesting to study the evolution of the evaluation of those programs,
especially the four major ones, because in the mid-1980s the CETA
program was subjected to a theoretical discussion in the Academy of Sciences
of the United States (Ashenfelter and LaLonde, 1996). The most important part
of the discussion related to the fact that very different methods of
non-experimental evaluation had produced very different estimates of the
program impact. That troubling finding had a crucial consequence: the JTPA
program was evaluated with an experimental study based on 1988 data (Orr et al.,
1996). The impact estimates obtained were positive for adults but negative or
null for young people (LaLonde, 1995). The JTPA was substituted by the WIA
Table VII. Evaluation experiences according to Gramlich's model

Country: USA. Funding: public. Programs: MDTA, CETA, JTPA, WIA. PV: high (1 to 3 million dollars spent each year). P: high (impact studies). Q: very high (concern over evaluation reaches the Academy of Sciences). BEN: very high (billions of dollars invested and assessed). Sources: LaLonde (1995); White House (2008).

Country: USA. Funding: private. Programs: main private companies and organizations; small companies when concerned with HR development. PV: very high (many billions of dollars each year). P: very high (plethora of scientific methods in four scientific fields: microeconomics, HRD, knowledge management and accounting). Q: high (concern with impact in order to justify investment; return calculated using different methods). BEN: very high (huge investment assessed on a daily basis). Sources: McLean (2005); Smith and Piper (1990).

Country: Western Europe. Funding: public (EU), until 1990. PV: increasing (from 1 percent to 7 percent of the EU budget). P: small (results and absorption calculations). Q: small (spending and absorption taken as indicators of benefits). BEN: very small (weak evaluation process; only one real evaluation study was performed: the YTS (Bradley, 1995)). Sources: EU (1997a); Laffan (1988); Bradley (1995).

Country: Western Europe. Funding: public (EU), from 1990 to 2000. PV: increasing slightly (7 to 8 percent of the EU budget). P: rare (Spain, Germany, Ireland, UK). Q: medium (legal concern with evaluation changes). BEN: very small (in practice almost nothing). Sources: EU (1997b); Tomé (2001b); Ballart (1998); Saez and Toledo (1996); Bradley (1995); Lechner (1998); O'Connell and McGinnitty (1996).

Country: Western Europe. Funding: public (EU), since 2000. PV: still augmenting. P: increasing (impact studies made on the European Strategy for Employment for Spain, Greece, France, Austria, the Netherlands, Sweden). Q: high (legal concern with evaluation; evaluation reports). BEN: medium (countries that use impact studies are not the majority). Sources: EU (2002).

Country: Western Europe. Funding: public (national), since WWII and mainly since the 1980s. Programs: Sweden, Norway, Denmark, Austria, Belgium, the Netherlands, Finland, France. P: high (impact studies in a growing number of cases). Q: high (need to assess scientifically the use of public money and its economic impact). BEN: high (investment scientifically assessed). Sources: Bjorklund (1994); Torp (1994); Nielsen et al. (1993); Heckman et al. (1999); EU (1997b).

Country: Western Europe. Funding: private. Programs: major private companies and organizations; some small companies. PV: very high (billions since WWII). P: high (economic methods, even if not social impact studies; studies based on companies). Q: high (evaluation is a need, even if with company-based concerns; case studies). BEN: high (investment frequently assessed). Sources: Russell (2003).

Country: Eastern Europe. Funding: public. PV: high (training was a widespread phenomenon). P: small (existing as a form of positive evaluation). Q: very small (taking for granted that the program was positive). BEN: very small (almost impossible to detect any impact). Sources: ILO (2008); Barro and Lee (2000); Tovstiga and Tulugurova (2007); Volkov and Garanina (2008).

Country: other cases. Funding: public, private, World Bank and NGOs. Programs: Brazil, Canada, Australia, Japan, New Zealand; increasing in LDCs in recent years. PV: high (billions since WWII, in particular in Canada and Australia). P: public, high (economic impact studies); private, high (company-based methods); other, low (result methods). Q: public, high (concern with public money); private, high (concerns about usefulness); other, small (beneficial instincts dominate). BEN: public, high (policy guidance); private, high (guidance of private investments); other, small (international aid concerns prevail). Sources: Hum (2002); Middleton (1992); Ordonez (2002); Johanson (2006); Guthrie (2007); Bontis (2002).
in 1998. This program has not yet been evaluated by impact studies (White
House, 2008).
Privately funded HRD programs
Private training and HRD programs have been run in the USA by the most
influential companies and organizations since World War II (Table VII, line 3). As
a consequence, in the USA the evaluation of private HRD programs is a common
occurrence. A wide web of consultants and associations for evaluation research exists
to evaluate HRD programs (www.eval.org). HRD operations have been evaluated using
a plethora of methods, namely the following (McLean, 2005; Smith and Piper, 1990):
. questionnaires; testing and grading; Kirkpatrick's four levels;
. trained observers; interviews; return on investment methods; microeconomic
impact methods; and, more recently, Balanced Scorecard and systemic
approaches.

Those methods cover the various types of evaluation perspectives indicated in Table V.
In fact, the various types of evaluators compete in the market for the evaluation of HRD
programs. Therefore, in terms of the Gramlich methodology, the values of p, q and
PV regarding privately funded HRD programs in the USA have been high because,
respectively, the methodologies used have been scientific, the private administrators
have shown concern over the outcomes, and the value of the programs analyzed was
very high.

The Western European case

The programs funded by the European Union
The EU began funding training and HRD in 1960; however, until 1989 the programs
funded by the European Social Fund (ESF) were only evaluated by result indicators
(European Union, 1997a; Tomé, 2001a). This happened because, from the perspective of
the European Union (at the time known as the European Community), the simple fact of
awarding money to the less developed regions of the EU, and to some target groups,
was seen as beneficial to those regions and persons. Therefore, physical and financial
indicators were seen as good indexes of the benefits derived from the programs. This
“logic of absorption” was extremely important in the management of the ESF.
However, different phases existed in those 30 years (Tomé, 2001b):
. Until 1972, each country knew the difference between the money received and the
money spent paying for the ESF operations, and the operations were only funded
ex-post, meaning that there was a direct link between the money spent and the
level of employability: only successful operations were funded.
. From 1973 to 1989, with the establishment of ex-ante payments, funds were
awarded according to some “Orientation guidelines”. In consequence, when the
demanded amounts exceeded the available amounts, the proposed value of the
funds was reduced in percentage even if the operations were eligible. The
evaluation of the operations was made by accounting for the number of persons
supported and the money spent. Almost no impact studies were performed
(Laffan, 1988). The only program funded by the ESF that was submitted to a true
evaluation of impact was the Youth Training Scheme in the UK (Bradley, 1995).
. The ESF also inquired into the situation of the trainees six months after the
operations, but those data were not made public. Calculations were possible on
the relation between the amount of money awarded in the candidature phase and
the amount of money finally spent. It was also possible to analyze the number of
trainees per 1,000 inhabitants at the EU level and the number of ECUs per
inhabitant at the EU level. That way of acting may be called the “absorption
analysis” (Tomé, 2001b).
In consequence, we think that before 1989, regarding the EU, PV was increasing, but p
was small, and q was even smaller. Therefore, the potential benefit derived from the
type of evaluation that was put in place was very small (see Table VII, line 4).
However, in 1989, the European Commission tried to reform the Structural Funds
(SFs). As a consequence, a new implementation procedure was defined. The SFs
began to be organized in programs. For each program, four operational phases were
defined:
(1) ex-ante appreciation;
(2) monitoring;
(3) control; and
(4) ex-post evaluation.

In consequence, the importance of the evaluation of programs was reinforced, at least
in theory. And we may say that from 1990 to 1999, during the first two
“programming” phases of the Structural Funds reform (phase I, 1990-1993; phase II,
1994-1999), the value of PV increased because the funds available for the ESF were
augmented, and the value of q also increased because the concern with the
evaluation of programs was clearly shown in the legislation. However, in practice, the
desire for evaluation was expressed in evaluation reports that had the following
structure:
. a description of the situation before the program and a justification for the program;
. a description of the program and of the differences between expectations and
reality; and
. an inquiry of the people involved (trainees and trainers) and an analysis of the
evolution of the situation “before-after”.

The only cases we know of programs funded by the ESF and evaluated with impact
methods between 1990 and 2000 relate to Spain (Ballart, 1998; Saez and Toledo, 1996),
the UK (Bradley, 1995), Germany (Lechner, 1998) and Ireland (O'Connell and
McGinnitty, 1996). Therefore, the value of p was not considerably augmented, even if it
was said and announced that the effect of the programs was going to be detected. So,
the potential value of the evaluations continued to be quite small (see Table VII, line 5).
However, in a clear sign that the evaluation of HRD was an important matter, in 1999
the European Commission published the collection “MEANS”, on the “Evaluation of
Socio-Economic Programmes” (European Union, 1999). That collection was composed of
five volumes, in which the European Commission summarized the various types of
methods that were available to study the Structural Funds. As a consequence, it was to
be expected that those methods would be extensively put to use in the third programming
phase (2001-2006). And indeed, for some countries (namely Spain, Greece, France, the
Netherlands, Austria and Sweden) the investment in HRD by the EU was evaluated in
2002 using impact studies, as part of the European Strategy for Employment, and in
some cases (Spain, Greece and Austria) the impact was found to be positive (European
Union, 2002). Therefore, the benefit that might be derived from the evaluations
increased in the last decade, at least in some cases (see Table VII, line 6).
Additionally, we must note that one very interesting characteristic of the
evaluations made of the programs funded by the ESF is that, as a rule, they are made
ex-post on the whole program and not only ex-post on one of the years of the program.
This characteristic in turn implies that it is almost impossible to perform a meaningful
impact study, because impact studies should be made when the program is still
operating (European Union, 2002).
In consequence of what was just written, when we apply the Gramlich method to the
HRD programs funded by the European Union, we arrive at the following conclusions
(see Table VII, lines 4, 5 and 6, taken into consideration at the same time):
. The value of PV has been increasing steadily since 1958, and that increase was a
consequence of the increasing importance that HR and HRD have had in the
European economy.
. However, the value of p has not increased very much in all the countries, because
the methodologies of evaluation that were used in most of the cases were not able
to detect the true impact of the programs in socio-economic terms. In most cases
those methodologies analyzed the program itself and not its consequences.
. Finally, regarding q, the political discourse was seldom translated into practice.
Even if the concerns of the administrators rose with time, and even if
evaluation became one of the phases of every operational program, in practice
the change was small: the methodologies used in the evaluation were basically
result ones (the evaluations compared the situation of the participants in
the program in a before-after comparison), and the immediate effect of the
evaluations was severely diminished by the timing in which they had to be made.

Therefore, we think that, as a rule, the potential value that might be extracted from the
evaluations made of the programs funded by the EU was small. This fact might in
turn be explained by the fact that the funds awarded by Brussels usually covered
more than 50 percent of the program costs, and therefore those programs were
considered by the national recipients as “goods” or even “gifts”, money that should
be absorbed. Absorption was seen by the European administration and by the national
administrations as the first and essential signal of success.

Programs funded by the national budgets

Furthermore, many programs have occurred in the last 60 years in the Western
European countries that have implemented the most advanced forms of Welfare States,
namely the Nordic countries, the UK, France, Germany and Austria, and the Benelux
(see Table VII, line 7). These programs were subjected to a large set of different types
of evaluation:
. Basic result methods were widely applied in the generality of the programs put
in place, as an essential part of the description of the program.
. Since the 1980s more sophisticated evaluation methods have been implemented.
Microeconomic impact methods were used in some cases, in different countries:
Sweden, Norway, Denmark, Austria, Belgium, the Netherlands, Finland, and
France (see Bjorklund, 1994; Torp, 1994; Nielsen et al., 1993; Heckman et al., 1999;
European Union, 1997b).

In consequence, and in what relates to Gramlich's model, it seems evident that the
value of the PV variable has been increasing with the passage of time, because the
economic changes that have happened since 1973 meant that the public presence in the
labour market, regarding training and HRD, had to be widened and reinforced in order
to support the people and organizations in need. But at the same time the values of p
and q increased because, on the one hand, the shortage of public money called for
better impact studies and, on the other hand, that same shortage made the public
authorities more open and sensitive to the results of the evaluations.
Privately funded programs
HRD has been a frequent practice in Western Europe since the end of WWII, and
mainly since the 1960s (Table VII, line 8). Most of the privately funded programs
made in Europe are, and have been, submitted to simple forms of evaluation, namely
result methods. Many programs have been evaluated on pedagogical and reactive
grounds. Some programs have been evaluated with ROI techniques and with Balanced
Scorecard techniques (Russell, 2003). Most of the evaluations made related to the
company's own results. It seems that the European private companies rely on the fact
that HRD is useful, and they expect that usefulness to be shown in terms of
competences, financial results, routines or consumer satisfaction.

The Eastern European case

As concerns the Eastern European countries, it is well known that in
the Socialist regimes training was part of the educational system, and that in those
countries the level of training and education was high when compared to the world
average (ILO, 2008) (Table VII, line 9). The average level of schooling of the
citizens of Eastern European countries was about ten years, comparable to the value
registered for the more developed European countries (Barro and Lee, 2000). Also,
those operations were essentially made and funded by public organizations,
because they were meant to be beneficial for the countries in which they occurred.
However, no impact studies were performed on those programs. When the regimes
changed, the training schemes were also radically transformed: it was necessary to
adjust the healthy and culturally advanced population to the needs of the profit-making
companies that had to operate in those countries. In evaluation procedures, the Eastern
countries began to converge with Western Europe (Tovstiga and Tulugurova, 2007;
Volkov and Garanina, 2008).

Other cases
Outside Western and Eastern Europe and the USA, evaluations of HRD have been
essentially made in developed countries like Japan (Johanson et al., 2006), Australia
(Guthrie et al., 2007) and Canada (Bontis, 2002a). However, in the last ten years, with
the globalization of knowledge and the widespread diffusion of good practices, the
first manifestations of HRD evaluations have appeared in Brazil and South America,
in Russia, in China and in India (Hum and Simpson, 2002; Middleton et al., 1992;
Ordóñez de Pablos, 2002).
Quite strikingly, the evaluation procedures implemented to analyze those programs
varied with the funding source, namely:
. Important HRD programs funded by the public administration took place
outside the USA and Europe in countries like Brazil (SENAI, www.senai.br/br/
Publicacoes/snai_ind_pub.aspx), Canada (with Canada's Job Strategy) and
Australia (Australian Government, 2008).
. In some less developed countries (LDCs), HRD programs have been carried out
with the support of the World Bank and other Non-Governmental
Organizations (NGOs) (www.worldbank.org/aid/). Those programs were usually
evaluated using result indicators and competence-based indicators, on the
assumption that their existence was positive for the recipient societies. We
do not know of any deep and fine socio-economic impact analysis regarding
those HRD programs.
. Finally, privately funded training and HRD was also a common characteristic of
developed countries, like Japan, Canada, Australia, and New Zealand. As
happened in the USA and in Europe, those programs had an important
dimension and were frequently evaluated using complex scientific techniques
(see above). That fact probably happened because the companies were concerned
with the return on the investment they had made.

Concluding comments
Conclusions
We derived eight important findings from this paper:
(1) Evaluations of HRD experiences have spread worldwide, as a function of
economic development. In less developed societies HRD investments are very
small, and HRD evaluations are also almost non-existent.
(2) The first finding holds even if, in some cases, like the programs funded
by private companies in the developed world, the evaluation is not done because
the administrators believe that the operation is profitable.
(3) Public programs on HRD tend to be large and oriented to disadvantaged
people. They tend to achieve social goals. As a consequence,
they may be evaluated by two main sorts of evaluation techniques: results
methods and microeconomic impact methods.
(4) In the first hypothesis indicated in point (3), the evaluation is more of the social
policy type. This situation tends to exist when the funds are perceived to come
from abroad (as in the EU case) or if the conviction of the administrator is so strong
that he feels there is no need to evaluate the impact of the program; this
last hypothesis is analogous to the case of the private manager mentioned in point (2).
(5) The private programs on HRD tend to be made by profit-seeking companies.
They are meant to generate large future profits. Those programs tend to be
evaluated in terms of competences and learning (in an HRD perspective), in
terms of knowledge (in a KM perspective), in terms of the difference between
the market value and the book value (in terms of IC analysis) and in terms of
ROI (using the traditional management methodology). This situation is true and
compatible with the situation described in point (2).
(6) By geographic areas, the USA has been the country in which the most HRD
programs have been evaluated, and evaluated best; it was followed by the
Nordic countries, the UK, Central Europe, Japan, Canada and Australia.
(7) In the twenty-first century, benefiting from the possibilities that the internet and
IT technologies give, HRD evaluations have been made in many other regions,
like Southern Europe, Latin America, the Muslim world, Africa, and South East
Asia. Nowadays, HRD and HRD evaluations are worldwide phenomena.
(8) Quite crucially, the BRICs (Brazil, Russia, India and China) all made significant
advances in HR and HRD in the last two decades.

Limitations
We used the internet to find the cases that illustrate most of our study. That is
in itself a huge limitation. We should get in touch with authors in each one of the four
geographical regions (USA, Western Europe, Eastern Europe, other cases) we
identified, and we should try to find more detailed and precise information for each
case.

Practical implications
We think we have made clear that the type of evaluation that is put in place for
HRD-related programs relates to the type of administrator/user and to the type of
funding. That fact in itself helps put the HRD efforts in perspective. The different
types of administrators should be aware of the alternative methods available to
evaluate HRD. For scholars this finding is also important because it helps build
bridges between apparently separate scientific fields.

Suggestions for further research

It would be quite interesting to follow this paper with an international and worldwide
atlas of HRD practices. This atlas could be constructed in two ways (see Tomé,
2009):
(1) by methodology, following the six types of methods that were described in
Table V; and
(2) by geographic zone, dividing the world into eleven major regions: Europe, North
America, South America, Japan, Africa, Asia, the Arab world, China, India, Russia,
and Oceania.

References
Alcock, P., Erskine, A. and May, M. (2003), The Student’s Companion to Social Policy, 2nd ed.,
Blackwell Publishing, London.
Ashenfelter, O. (1978), “Estimating the effect of training programs on earnings”, Review of
Economics and Statistics, Vol. LX, pp. 47-57.
Ashenfelter, O. and LaLonde, R. (Eds.) (1996), The Economics of Training, Volume I, Theory and
Measurement, Edward Elgar Publishing, Aldershot.
Australian Government (2008), “New National training system”, available at: www.dest.gov.au/
sectors/training_skills/policy_issues_reviews/key_issues/nts
Ballart, X. (1998), “Spanish evaluation practice versus program evaluation theory”, Evaluation,
Vol. 4 No. 2, pp. 149-70.
Bannock, G., Baxter, R. and Davis, E. (2005), The Penguin Dictionary of Economics, 6th ed.,
Penguin, Harmondsworth.
Barro, R. and Lee, J-W. (2000), “International data on educational attainment: updates and
implications”, CID Working Paper No. 42, Cambridge, MA.
Bartel, A. (1995), “Training, wage growth and job performance – evidence from a company database”, Journal of Labour Economics, Vol. 13 No. 3, pp. 401-25.
Becker, G. (1962), “Investment in human capital: a theoretical analysis”, The Journal of Political
Economy, Vol. LXX No. 5, pp. 9-49.
Becker, G. (1976), The Economic Approach to Human Behaviour, The University of Chicago
Press, Chicago, IL.
Becker, G. (1980), “Investment in human capital: effects on earnings”, in Bernstein, M. and
Crosby, F. (Eds), Human Capital: A Theoretical and Empirical Analysis with Special
Reference to Education, University of Chicago Press, Chicago, IL, pp. 15-44.
Ben Porath, Y. (1967), “The production of human capital and the life cycle of earnings”, Journal of
Political Economy, Vol. 75, August, pp. 352-65.
Bjorklund, A. (1994), “Evaluations of labour market policies in Sweden”, International Journal of Manpower, Vol. 15 No. 5, pp. 16-31.
Bontis, N. (2002), “Intellectual capital disclosure in Canadian corporations”, Journal of Human
Resource Costing and Accounting, Vol. 14, pp. 9-20.
Booth, A.L. and Snower, D.J. (1996), Acquiring Skills, Cambridge University Press, Cambridge.
Bradley, S. (1995), “The Youth Training Scheme: a critical review of the evaluation literature”,
International Journal of Manpower, Vol. 16 No. 4, pp. 30-56.
Burtless, G. and Orr, L. (1986), “Are classical experiments needed for manpower policy?”, Journal
of Human Resources, Vol. 21 No. 4, pp. 606-39.
Card, D. and Sullivan, D. (1988), “Measuring the effect of subsidized training programs on
movements in and out of employment”, Econometrica, Vol. 56, pp. 497-530.
Cegarra-Navarro, J. and Sanchez-Polo, M. (2008), “Linking unlearning and relational capital
through organisational relearning”, International Journal of Human Resources
Development and Management, Vol. 7 No. 1, pp. 37-52.
Chapman, P. (1993), The Economics of Training, Harvester, Manchester.
Chinapah, V. and Miron, G. (1990), Evaluating Educational Programmes and Projects: Holistic
and Practical Considerations, Socio-Economic Studies Vol. 15, UNESCO, Paris.
Deacon, B. (2000), “Eastern European Welfare States: the impact of the politics of globalization”,
Journal of Social European Policy, Vol. 10 No. 2, pp. 146-61.
Doeringer, P.B. and Piore, M.J. (1985), Internal Labour Markets and Manpower Analysis, M.E. Sharpe, Armonk, NY.
Dougherty, C. and Tan, J.-P. (1997), “Financing training – issues and options”, International
Journal of Manpower, Vol. 18, pp. 29-62.
Esping-Andersen, G. (1990), The Three Worlds of Welfare Capitalism, Princeton University
Press, Princeton, NJ.
European Union (1997a), Fonds Social Européen: Vue d’ensemble de la période de programmation
1994-9, European Union, Luxembourg.
European Union (1997b), Labour Market Studies, European Union, Luxembourg.
European Union (1999), Evaluation of Socio-economic Programs Collection, MEANS, Luxembourg.
European Union (2002), Impact Evaluation of the European Employment Strategy: Technical Analysis, available at: http://ec.europa.eu/employment_social/employment_strategy/eval/papers/technical_analysis_complete.pdf
Eurostat (2007), Labour Market Policy Expenditure in Active Measures by Type, Eurostat, Luxembourg.
Ferrera, M., Hemerijck, A. and Rhodes, M. (2000), The Future of Social Europe. Recasting Work
and Welfare in the New Economy, Celta Editora, Oeiras.
Fitz-enz, J. (2000), The ROI of Human Capital: Measuring the Economic Value of Employee Performance, AMACOM, New York, NY.
Gramlich, E. (1981), Benefit-Cost Analysis of Government Programs, Prentice-Hall, Englewood
Cliffs, NJ.
Gramlich, E. (1990), A Guide to Cost Benefit Analysis, 2nd ed., Prentice-Hall, Englewood Cliffs, NJ.
Guthrie, J., Petty, R. and Ricceri, P. (2007), Intellectual Capital Reporting: Lessons from Hong
Kong and Australia, The Institute of Chartered Accountants of Scotland, Glasgow.
Heckman, J. and Hotz, J. (1989), “Choosing among alternative nonexperimental methods for
estimating the impact of social programs: the case of manpower training”, Journal of the
American Statistical Association, Vol. 84 No. 408, pp. 862-74.
Heckman, J. and Smith, J. (1995), “Assessing the case for social experiments”, Journal of Economic Perspectives, Vol. 9 No. 2, pp. 85-110.
Heckman, J., Lalonde, R. and Smith, J. (1999), “The economics and econometrics of active labour
market programs”, in Ashenfelter, O. and Lalonde, R. (Eds), Handbook of Labour
Economics, Vol. 3A, Chapter 31, North Holland, Amsterdam, pp. 1865-2097.
Heckman, J., Ichimura, H., Smith, J. and Todd, P. (1998), “Characterizing selection bias using experimental data”, Econometrica, Vol. 66, pp. 1017-98.
Hum, D. and Simpson, W. (2002), “Public training programs in Canada: a meta-evaluation”,
Canadian Journal of Program Evaluation, Vol. 17 No. 1, pp. 119-38.
Ignjatovic, M. and Svetlik, I. (2003), “European HRM clusters”, EKK Toim, Vol. 17, Autumn, pp. 25-39.
ILO (2008), Operational Programme for Human Resource Development – Czech Republic,
available at: www.ilo.org/public/english/employment/skills/hrdr/init/cze_2.htm
Johanson, U., Koga, C., Skoog, M. and Henningsson, J. (2006), “The Japanese Government’s
intellectual capital reporting guideline: what are the challenges for firms and capital
market agents?”, Journal of Intellectual Capital, Vol. 7, pp. 474-91.
Kaplan, R. and Norton, D. (1996), The Balanced Scorecard, Harvard Business School Press, Boston, MA.
Katz, E. and Ziderman, A. (1990), “Investment in general training: the role of information and
labour mobility”, Economic Journal, Vol. 100 No. 403, pp. 1147-58.
Keynes, J. (1990), Teoria Geral do Emprego, do Juro e da Moeda (The General Theory of Employment, Interest and Money), Editora Atlas, São Paulo (Portuguese translation of the Macmillan Press, Basingstoke, 1973 edition).
Kianto, A. (2008), “Assessing organizational renewal capabilities”, International Journal of
Innovation and Regional Development, Vol. 1 No. 2, pp. 115-29.
Kirkpatrick, D. (1998), Evaluating Training Programs: The Four Levels, 2nd ed., Berrett-Kohler,
San Francisco, CA.
Laffan, B. (1988), “Policy implementation in the European community: the European social fund
as a case study”, Journal of Common Market Studies, Vol. 21 No. 4, pp. 393-408.
LaLonde, R. (1995), “The promise of public sponsored training programs”, Journal of Economic Perspectives, Vol. 9 No. 2, pp. 149-68.
Lechner, M. (1998), Training in the East German Labour Force: Microeconometric Evaluations of Continuous Vocational Training after Unification, Physica Verlag, Heidelberg.
McLean, G. (2005), “Examining approaches to HR evaluation”, Strategic Human Resources, Vol. 4
No. 2.
Middleton, J., Ziderman, A. and Adams, V. (1992), “The World Bank’s Policy Paper on Vocational and Technical Education and Training”, Prospects, Vol. XXII No. 2.
Mincer, J. (1958), “Investment in human capital and personal income distribution”, Journal of Political Economy, Vol. 66 No. 4, pp. 281-302.
Mincer, J. (1962), “On-the-job training: costs, returns, and implications”, Journal of Political
Economy, Vol. LXX No. 5, pp. 50-80.
Mincer, J. (1974), Schooling, Experience and Earnings, NBER Columbia University Press, New
York, NY.
Nielsen, W., Pedersen, P., Jensen, P. and Smith, N. (1993), “The effects of labour market training
on wages and unemployment: some Danish results”, in Bunzel, H. (Ed.), Panel Data and
Labour Market Dynamics, North Holland, Amsterdam.
Nonaka, I. and Takeuchi, H. (1995), The Knowledge Creating Company, Oxford University Press,
Oxford.
O’Connell, P. and McGinnity, F. (1996), “What works, who works? The impact of active labour
market programmes on the employment prospects of young people in Ireland”, Discussion
paper, Social Science Research Centre, Berlin.
OECD (2006), Education at a Glance, OECD, Paris.
Oi, W. (1962), “Labour as a quasi-fixed factor”, The Journal of Political Economy, Vol. 70,
pp. 538-55.
Ordóñez de Pablos, P. (2002), “Intellectual capital measuring and reporting in leading firms: evidences from Asia, Europe and the Middle East”, in Bontis, N. (Ed.), Proceedings of the 5th World Congress on Intellectual Capital, Butterworth-Heinemann, Oxford.
Orr, L.L., Bloom, H.S., Bell, S.H., Doolittle, F., Lin, W. and Cave, G. (1996), Does Training for the
Disadvantaged Work? Evidence from the National JTPA Study, The Urban Institute Press,
Washington, DC.
Paul, J. (1989), La relation formation – emploi: un défi pour l’économie, Economica, Paris.
Pike, S. and Roos, G. (2007), “Recent advances in the measurement of intellectual capital: a critical
survey”, paper presented at the European Conference on Knowledge Management
Consorci Escola Industrial de Barcelona (CEIB), Barcelona, 6-7 September.
Russell, R. (2003), “The international perspective: Balanced Scorecard usage in Europe”,
Balanced Scorecard Report, Vol. 5 No. 3, pp. 13-14.
Saez, F. and Toledo, M. (1996), “Formación, Mercado de Trabajo y Empleo”, Economistas, Vol. 69, pp. 351-7.
Schellhaass, H.M. (1991), “Evaluation strategies and methods with regard to labour market
programs – a German perspective”, Evaluating Labour Market and Social Programs – The
State of a Complex Art, OECD, Paris, pp. 89-106.
Schultz, T. (1971), Investment in Human Capital: The Role of Education and of Research, Free
Press, New York, NY.
Shackleton, J. (1995), Training and Unemployment in Western Europe and the United States,
Edward Elgar, Aldershot.
Smith, A. and Piper, J. (1990), “The tailor-made training maze: a practitioner’s guide to evaluation”, Journal of European Industrial Training, Vol. 14 No. 8.
Snower, D. (1994), “The low-skill, bad-job trap”, IMF Working Paper 94/83, available at: http://ssrn.com/abstract=883810
Spence, A.M. (1973), “Job market signaling”, Quarterly Journal of Economics, Vol. 87 No. 3, pp. 355-74.
Stevens, M. (1996), “Transferable training and poaching externalities”, in Booth, A.L. and Snower, D.J. (Eds), Acquiring Skills: Market Failures, their Symptoms, and Policy Responses, CEPR/Cambridge University Press, Cambridge, pp. 19-40.
Stiglitz, J. (1975), “The theory of screening, education, and the distribution of income”, American Economic Review, Vol. 65 No. 3, pp. 283-300.
Stiglitz, J. (2000), The Economics of the Public Sector, W.W. Norton & Company, New York, NY.
Sveiby, K.-E. (1998), “Measuring intangibles and intellectual capital – an emerging first
standard”, available at: www.sveiby.com/Portals/0/articles/EmergingStandard.html
Tomé, E. (2001a), “The evaluation of vocational training: a comparative analysis”, Journal of
European Industrial Training, Vol. 25 No. 7, pp. 380-8.
Tomé, E. (2001b), “The European social fund in Portugal: a study in a perspective of evaluation”,
ISEG Lisbon, PhD thesis, pp. 531-50 (in Portuguese).
Tomé, E. (2005), “Evaluation methods in social policy Bits Boletin informativo de trabajo social”,
No. 8, Universidad de Castilla la Mancha, Cuenca.
Tomé, E. (2009), “HRD in a Multipolar World: an introductory study”, paper to be presented, if
accepted, at the AHRD 2009 European Congress, Newcastle.
Torp, H. (1994), “The impact of training on employment: assessing a Norwegian labour market program”, Scandinavian Journal of Economics, Vol. 96 No. 4, pp. 531-50.
Tovstiga, G. and Tulugurova, E. (2007), “Intellectual capital practices and performance in
Russian enterprises”, Journal of Intellectual Capital, Vol. 8 No. 4, pp. 695-707.
UNESCO (2006), Participation in Formal Technical and Vocational Education Training
Programmes Worldwide. An Initial Statistical Study, UNEVOC International Centre for
Technical and Vocational Education and Training, Bonn.
United Nations (2008), United Nations Human Development Report 2007/8 – Fighting Climate
Change: Human Solidarity in a Divided World, United Nations, New York, NY.
Volkov, D. and Garanina, T. (2008), “Value creation in Russian companies: the role of intangible assets”, Electronic Journal of Knowledge Management, Vol. 6 No. 1.
White House (2008), Program Assessment, Work Investment Act, Youth Activities, available at: www.whitehouse.gov/omb/expectmore/summary/10000342.2003.html

Corresponding author
Eduardo Tomé can be contacted at: eduardo.tome@clix.pt
