

M. B. A. IV Core CC205

Balanced Scorecard in Infosys Limited

Ms. Jyoti Ghanchi

Patel Sumit D (45)
Prajapati Hitesh K (48)
Prajapati Narendra (49)
Prajapati Hemang K (46)





Balanced Scorecard:

A balanced scorecard is a performance metric used in strategic management to identify and
improve various internal functions of a business and their resulting external outcomes. It is used
to measure and provide feedback to organizations. Data collection is crucial to
providing quantitative results, as the information gathered is interpreted by managers and
executives, and used to make better decisions for the organization.

How can you ensure that the right data is gathered for effective measurement of business
objectives? When objectives are not being met, how can you determine the reasons, then
implement systems and business processes to put the company back on track? These questions,
and many more, can be answered through a Balanced Scorecard (BSC), a framework of strategic
measurement-based management that was selected by Harvard Business Review as one of the
most important management practices of the past 75 years.

Robert S. Kaplan and David P. Norton initially conceived the basic concepts in the early 1990s,
culminating with their definitive book The Balanced Scorecard: Translating Strategy Into Action
(Harvard Business Press, 1996). They described a Balanced Scorecard as a multidimensional
framework for describing, implementing, and managing change at all levels of an enterprise by
linking objectives, initiatives, and measures to an organization's strategy. "What you measure is
what you get," Kaplan wrote. "Senior executives understand that their organization's
measurement system strongly affects the behavior of managers and employees. Executives also
understand that traditional financial accounting measures, like return-on-investment and
earnings-per-share, can give misleading signals for the continuous improvement and innovation
activities today's competitive environment demands. The traditional financial performance
measures worked well for the industrial era, but they are out of step with the skills and
competencies companies are trying to master today."

The Balanced Scorecard concept of performance management quickly gained favor with
organizations that sought a more well-rounded, forward-looking approach to guiding their
businesses. The interest continues: in its 1999 executive survey of management practices, Bain &
Company reported that 55 percent of surveyed companies in the United States and 45 percent in
Europe claimed to be using a Balanced Scorecard to direct key activities.

The Balanced Scorecard complements measures of past performance (typically called "lagging
indicators") with measures of the drivers of future performance ("leading indicators"). The
objectives and measures of the Scorecard are derived from an organization's vision and strategy.
These objectives and measures provide a view of an organization's performance from four
perspectives. With proper attention to these four perspectives, the BSC approach reduces the
danger of over-dependence on past financial results by ensuring that companies take regular
measurements of their customer base, internal business processes, and levels of internal learning
and growth as strategic objectives are pursued.

Implementation to Business Strategy

Corporate mantras such as "become a world-class company" and "set the industry agenda" adorn
the walls of many aspiring organizations. But how many of these companies have executed an
action oriented framework to help them achieve a technology-led business transformation, or
developed a method to ensure that their information systems support and uphold corporate goals
over time? In other words, what systems do they have in place to tie the company's vision and
strategy to the technology objectives of the organization?

EBSS, a joint venture launched by Toshiba, Accenture and Oracle Japan in 2001, serves as a
good example of how a Balanced Scorecard (BSC) can serve as a foundation for launching new
business practices and setting the direction of a company. EBSS is deriving substantial value
from its BSC effort, as managers create a framework for aligning their technology
implementation initiatives with overall business strategy.

Here is a real-world example of how one company is using the Balanced Scorecard to create a
framework for aligning technology implementation initiatives with overall business strategy.

Putting Concepts into Action

A Balanced Scorecard begins with the premise that traditional financial measures are not
sufficient to manage an organization. In other words, while financial measures tell the story of
past events, they are not helpful to guide the creation of future value through investments in
customers, suppliers, employees, technology, or innovation. Corporate officers need to know
precisely what investments in people, systems, and procedures are necessary to propel their firms
down the road to profitability. EBSS is taking an intelligent approach to building a complex
business: the implementation of Oracle E-Business applications in the Asia/Pacific region. EBSS
is offering complete enterprise solutions for the electrical, electronics, mechanical and
automobile markets. Soon after EBSS was formed, consultants from Infosys Technologies Ltd.
helped the new firm develop a Balanced Scorecard to determine how they could best create value
for customers. EBSS worked with Infosys to develop the basic Balanced Scorecard principles:
financial considerations followed by customer/market considerations followed by capabilities.

"Most discussions of BSC precepts are theoretical, favoring high-level concepts rather than
low-level details," explains Mr. Yoshimi Shimano, President, Enterprise Business System Solutions
Corporation. "Infosys helped us establish key strategic initiatives and prioritize our objectives. Its
consultants played an important role in helping us articulate our business strategy and create a
framework to align technology implementations."

Balanced Scorecard Overview

Driving New Business

EBSS worked with Infosys to identify owners, schedules and resources; to set up strategic
performance measures; and to review the progress and results of the strategy implementation to
measure its effectiveness. The primary market consideration was ensuring EBSS had the right
systems, business processes, and strategy to acquire large customers in selected domestic
markets. Capabilities-planning exercises included ensuring that the company could meet
aggressive sales targets through its newly defined sales channels and utilize capacity
effectively over designated time periods.

Phase 1 of the Scorecard defined the implementation plan, with attention to medium- and long-term
horizons, from formation of a core implementation team to discussions with senior
management to conceptualization and finalization of the Scorecard. This phase of the BSC
project was completed in September 2001 using a combination of onshore and offshore
consulting services, as shown in Figure 2. Blue boxes represent activities performed at EBSS
headquarters in Tokyo and gray boxes represent activities performed by Infosys consultants in India.

"Infosys successfully incorporated the principles of its global delivery model (GDM) to bring in
relevant expertise and reduce costs," says Shimano. "In some cases, work was performed on-site
at EBSS. In other cases, we were able to leverage their extensive staff resources in India."

Overview of EBSS Business Process Flow

The next task facing EBSS was to implement Oracle Applications internally. Figure 3 reveals the
steps that are currently being taken to complete this implementation in a timely, cost-effective
manner. It defines which modules are being implemented, how the product teams blend, and
which BSC measures are supported by which systems.

Here again, the Balanced Scorecard was used to help align business strategy with technology
implementation activities. On the financial side, this includes revenue, profitability, and
return-on-investment (ROI). The Balanced Scorecard is also helping EBSS establish important
business parameters. By carefully delineating the inputs, objectives, activities, and
deliverables, the Scorecard serves as a bridge between the technology implementation, the
business requirements, and the business model.

History and Development of the Balanced Scorecard


The Balanced Scorecard was introduced twenty years ago as a first set of principles for the
balanced setting and measurement of strategic objectives and measures/KPIs. The originators of
the Balanced Scorecard are Dr. Robert S. Kaplan, Baker Foundation Professor at Harvard Business
School, and Dr. David P. Norton, the founder of the consulting team that has contributed over the
past two decades to the development of the Balanced Scorecard into today's integrated and aligned
management system. That company is now called the Palladium Group and is based in
Massachusetts, near the Harvard Business School.

Since their first article about the Balanced Scorecard, published in 1992 in the Harvard Business
Review, Drs. Kaplan and Norton have published five books that have marked the evolution of
the BSC framework and of its implementation methodology, starting with The Balanced
Scorecard in 1996 and ending with The Execution Premium in 2008 (now also translated into
Romanian, soon to be published).

Over the years many consulting and educational organizations have tried to add to the initial
Balanced Scorecard concepts, in some cases attempting to build a better mousetrap, but few have
succeeded in bringing any significant theoretical or practical value to the BSC framework and
implementation methodology, which has been developed continuously by Drs. Kaplan and Norton and
their team at Palladium, based on thousands of implementation projects across most industries,
geographies worldwide, and organization sizes, in both the public and private sectors.

Chapter No. 2
Research Methodology:

Sample size

A stratified sampling method was used to design the sample. From the selected organizations, the
respondents were categorized into managers or executives working in corporate/industry and
teachers working in academic organizations. Random sampling was used to collect the data:
lists of persons working in the organizations were taken and respondents were selected at
random.

(a) Sample Plan for Corporate/Industry: The total number of companies selected is 28.
From each company, a random selection of 5 executives or managers was made, so data were
collected from 140 respondents in total. The designations of these five respondents are
senior manager, manager, executive, senior supervisor, and supervisor; one respondent from
each designation was selected.

(b) Sample Plan for Academic Organizations: The total number of academic organizations
selected is 15. From each organization, a random selection of 8 respondents was made, so data
were collected from 120 respondents in total. The designations of these respondents are
Director, Professor, Assistant Professor, and Lecturer.

Data Collection Tools:

Two comprehensive structured questionnaires (see Appendix I & II), one each for the managers
and executives working in corporate/industry and in academic organizations, were designed.
Keeping in view the time limitations of the respondents, and to ensure speedy responses, the
questionnaires were administered personally to all respondents. The questionnaires were framed
so that the maximum information relating to the objectives could be extracted from the
respondents on various aspects of HR performance drivers. Most of the queries were to be
answered on various scales. All the questions were close-ended, so that the respondents' time
could be used optimally. A few of the questions were framed so that the respondents could mark
one or more options. To cross-check the responses, some questions were repeated with different
wording. A few dummy questions were also asked to keep the respondents at ease.

Questionnaire I was designed to study the impact of HR performance drivers on achieving
organizational excellence and whether organizational culture impacts employee motivation,
whereas Questionnaire II was designed to study the impact of cultural acceptance of change on
employee performance. The efficiency of the questionnaires (schedules) was tested on a small
group of executives working in corporate/industry (30) and academic organizations (30)
separately, and the necessary modifications were made on the basis of the feedback received
from these respondents. The modified questionnaires were used for collecting the data. The
questions were framed so as to cover all the dimensions of the study.

Validation of Questionnaire
Validity of a questionnaire refers to the degree to which we are measuring what we
think we are measuring (Kerlinger, 1973). Insufficient validity indicates a research error:
the research design is not able to accomplish what is required to be done. A high degree of
validity reflects an accurate approximation to the real value.

Face Validity
This refers to the degree of fit between the researcher's perception and the concept of the
variables, which are operationalized through the questionnaire: on its face, the questionnaire
appears to measure the concept under study. Experts' opinions were taken to establish their
viewpoints, wordings, and suggestions. Final validation was done through a number of validation
sessions after revision/refining of the questions.

Criterion Related Validity

Criterion-related validity refers to the degree to which the measurements obtained with the
questionnaire are meaningfully related to the objectives of the questionnaire. This validation
was also done with the active involvement of the experts, and the language/wording of the
questions was revised accordingly.

Content Validity
Content validation is guided by the question: "Is the content of this measure
representative of the content, or the universe of content, of the property being measured?"
(Kerlinger, 1973). Content validation is essentially judgmental. The experts examined the
content of the questions with a view to the variables and objectivity, and re-sequencing of the
questions was also guided by the experts. Accordingly, the questions were revised/refined to
meet these two aspects, i.e. variables and objectivity.

The questionnaires have high content validity for the following reasons: i) Identification of items
was based on the logical analysis of literature available on performance measurement techniques
(Six Sigma and Balanced Scorecard). ii) Framing of questions was done by involving the
professional judgment of knowledgeable persons with vast experience in the particular area.

Construct Validity
In attempting to evaluate construct validity, both the theory and the measuring
instrument being used are considered. For example, if we are interested in measuring the effect
of ceremony on organizational culture, the way in which ceremony is operationally defined
would have to correspond to an empirically grounded theory. Once it is assured that the construct
is meaningful in a theoretical sense, the adequacy of the instrument is investigated. Attitude
scales and aptitude and personality tests are the concepts that generally fall into this
category. Although this situation is much more difficult, some assurance is still needed that the
measurement has an acceptable degree of validity.

Testing of Questionnaire
On completion of questionnaire validation, it was subjected to pre-testing on
a small sample of respondents (12). Pre-testing respondents are generally selected from the
same population from which the actual survey is done (Thakur, 1993). The object of the
pre-testing was to ensure easy understandability and to eliminate any confusion or
misunderstanding.

Reliability refers to the extent to which a scale produces consistent results if repeated
measurements are made. Systematic sources of error do not have an adverse impact on reliability,
because they affect the measurement in a constant way and do not lead to inconsistency. In
contrast, random error produces inconsistency, leading to lower reliability. Reliability can thus
be defined as the extent to which measures are free from random error, XR; if XR = 0, the measure
is perfectly reliable. Approaches for assessing reliability include the test-retest,
alternative-forms, and internal-consistency methods.

Test-retest reliability is an approach in which respondents are administered identical sets of
scale items at two different times under conditions as nearly equivalent as possible.
Alternative-forms reliability requires two equivalent forms of the scale to be constructed; the
same respondents are then measured with both forms at two different times. Internal-consistency
reliability assesses the consistency of the set of items when several items are summated in
order to form a total score for the scale.

The scientific requirements of a project call for the measurement process to be
reliable and valid, while the operational requirements call for it to be practical. Practicality has
been defined as economy, convenience, and interpretability. While this definition refers to the
development of educational and psychological tests, it is meaningful for business measurements
as well.

Duration of the Survey

Data was collected from January 2009 till November 2009.

Data Analysis Techniques

Filled-in questionnaires were examined for their correctness, and observed gaps were addressed
through follow-up with the respondents. To reach meaningful inferences, data analysis was done
using various statistical techniques such as tests of significance (t-test), correlation, and
multiple regression. A one-sample t-test was applied to analyze the difference between the
performance of employees before training and after training. Thus the null hypothesis H0 was
formed and a significance level of α = 0.05 was selected.

t = (X̄ - μ) / SX,  where SX = s / √n is the standard error of the mean

The degrees of freedom for the t statistic to test a hypothesis about one mean are n - 1.
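As an illustration, the t statistic can be computed directly from a sample. A minimal sketch in Python; the post-training scores and the comparison mean of 70 are hypothetical, for illustration only:

```python
import math

def one_sample_t(sample, mu):
    """One-sample t statistic: t = (mean - mu) / (s / sqrt(n))."""
    n = len(sample)
    mean = sum(sample) / n
    # sample standard deviation (n - 1 in the denominator)
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    t = (mean - mu) / (s / math.sqrt(n))
    return t, n - 1  # t statistic and its degrees of freedom

# hypothetical post-training scores tested against a pre-training mean of 70
t, df = one_sample_t([72, 75, 71, 78, 74, 69, 76, 73], mu=70)
```

The computed t is then compared against the critical value of the t distribution with n - 1 degrees of freedom at the chosen significance level.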
Correlation was applied to explore the following associations:

Whether organizational culture impacts employee motivation; in other words, whether
performance is correlated with motivation.
Whether leadership is correlated with participation.
Whether interchangeability is correlated with flexibility/adaptability.

In statistics, correlation is used to summarize the strength of association between two metric
variables. The product moment correlation is the most widely used method. From a sample of
n paired observations on X and Y, the product moment correlation r is calculated.
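The product moment correlation can be sketched in a few lines of Python; the paired motivation and performance scores below are hypothetical:

```python
import math

def pearson_r(xs, ys):
    """Product moment correlation r for n paired observations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical motivation scores vs. performance scores
r = pearson_r([3, 4, 5, 6, 7], [50, 55, 62, 64, 70])
```

An r close to +1 indicates a strong positive association, r close to -1 a strong negative one, and r near 0 little linear association.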
Multiple regression analysis was used to analyze:

Whether dynamic organizations work on Balanced Scorecard principles to identify key HR
performance drivers for achieving organizational excellence.
The impact of HR performance drivers on achieving organizational excellence.
The impact of motivation on performance, taking performance as the dependent variable and
motivation factors as independent variables.
Whether organizational culture impacts employee motivation.

R is a measure of the correlation between the observed and predicted values of the
criterion variable. R Square (R²) is the square of this measure and indicates the
proportion of the variance in the criterion variable accounted for by the model.
However, R Square tends to somewhat over-estimate the success of the model when applied to
the real world, so an adjusted R Square value is calculated which takes into account the number
of variables in the model and the number of observations (participants) the model is based on.

In general, then, multiple regression procedures estimate a linear equation of the form:

Y = a + b1*X1 + b2*X2 + ... + bp*Xp + e

where Y is the dependent variable, the b's are the regression coefficients for the corresponding X
(independent) terms, a is the constant or intercept, and e is the error term reflected in the
residuals.
Question 1. Is the study descriptive or is the study comparative?

Descriptive studies include surveys to assess prevalence, needs assessments, chart reviews, etc.
that have as a main aim the estimation of rates, proportions and means in a population with a
secondary aim being to examine whether the rates are related to demographic variables (i.e. a
correlational analysis). For example, a survey may be undertaken to assess the extent of
doughnut consumption in Belltown. The main results to be reported might be the percentage of
residents who consume doughnuts on a daily basis, or the mean number of doughnuts consumed
by a resident per week. Follow-up analysis might examine whether the consumption rates depend
on the sex or age of the resident. Sample size determination for descriptive studies is based on
confidence intervals; that is, the level of precision required in providing estimates of the rates,
proportions and means. Comparative studies include case-control designs, randomized clinical
trials, etc. where a comparison between two or more groups is the key analysis. The main aim
here is to establish whether there are statistically significant differences between groups with
respect to some key outcome variable. Sample size determination for comparative studies is
based on hypothesis tests and power, that is, the probability of being able to find differences
when they do, in fact, exist. In brief, the first question asks, "Are P-values relevant here?"
Note that descriptive studies often lead to comparative studies; in fact, post hoc analysis often
involves informal inference and model-building to examine relationships among variables. The
sample size estimation should also take this into account and be sufficiently large for analysis of
future questions.

Question 2. Is the primary outcome variable a measurement variable (a.k.a. interval or

continuous) or is the primary outcome variable a categoric variable?
Choosing a primary outcome variable is analogous to choosing dessert in a restaurant. You may
have many favorites, but when the server comes to take your order you have to settle on one,
although you can probably procure a taste of everyone else's dessert. Everyone else's desserts
are analogous to secondary outcome variables. You will be able to assess them too, but your
main assessment, what determines whether the research question was answered or whether the
dessert was a success, depends on a single variable. A measurement variable is one where the
characteristic is assessed on a scale with many possible values representing an underlying
continuum (e.g. age, height, blood pressure, pain on a visual analogue scale). It involves a
measuring process and usually requires some sort of instrumentation (e.g. ruler, stopwatch,
biochemical analysis, psychometric tool). Measurement variables are usually summarized with a
mean or median.
A categoric variable involves classification of subjects into one of a number of categories on the
basis of a characteristic. There can be two categories only (binary variable), multiple categories
where order does not matter (nominal variable) or multiple categories where order does matter
(ordinal variable). Categoric variables are usually summarized with proportions.
Sample Size Estimation for Descriptive Studies

If the answer to Question 1 in the previous section was "descriptive study," you're in the right
place. No further questions need to be answered. For descriptive studies, sample size is based
on the margin of error, E, in confidence intervals:

Categoric:     N = 1 / E², for P near 0.5
               N = (2 / E)² P(1-P), for P near 0 or 1
Measurement:   N = (2S / E)², where S is the standard deviation of the variable

The following sections derive these formulas.
A confidence interval is a range of likely or plausible values of the population characteristic of
interest. For example, a sample survey can be used to give a range of values that the true
proportion or population mean is expected to lie within. The intervals can be constructed to
provide greater or lesser levels of confidence; however, the usual choice is 95% (with 90% and
99% useful in certain situations). For more information on confidence intervals, see the Modules
Analysis I and II.
Confidence intervals usually take the form:

(Point estimate) ± (Margin of error)

The point estimate is a value computed from the sample; for example, the sample
mean or sample proportion. The margin of error (or "plus or minus" number) is a value computed
from a variety of components: the level of confidence (e.g. 95%), the variability in the outcome
variable, and the sample size.
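For example, a 95% confidence interval for a proportion can be assembled from exactly these components. A small Python sketch with hypothetical survey numbers:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Point estimate ± margin of error for a proportion."""
    p = successes / n                         # point estimate
    margin = z * math.sqrt(p * (1 - p) / n)   # margin of error
    return p - margin, p + margin

# hypothetical survey: 120 of 400 respondents report the characteristic
low, high = proportion_ci(120, 400)
```

Here the point estimate is 0.30 and the interval runs from about 0.255 to 0.345.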
Confidence intervals are used to estimate sample sizes as follows.
>>> When interest is in a population mean (i.e. the primary outcome variable is
measurement/continuous), the total number of subjects required (N) is:

N = 4 z² S² / W²

where S is the standard deviation of the variable, W is the width of the confidence interval
(equal to twice the margin of error), and z is a value from the normal distribution representing
the confidence level (equal to 1.96 for 95% confidence).
The table in Appendix 6.D (p 90, Hulley) provides the sample size for common values of W/S and
three choices of confidence level.
The formula can be rewritten as:

N = (z S / E)²

where E is the margin of error (half the width, W).
As an approximation, for 95% confidence, use the value of 2 for z (instead of 1.96); remember
that this is an approximation, after all! Then the formula is very concise and easily remembered:

N = (2S / E)²

That is, twice the standard deviation over the margin of error, all squared.
Where does the value of S come from? There are a number of sources, including previously
published research or a pilot study. When these sources fail, as in the case of brand-new research,
with a new instrument or a new population under study, a rough approximation can be made
using the six-sigma rule for bell-shaped distributions; the standard deviation is approximately the
range (maximum minus minimum) divided by six.
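Putting the shortcut formula and the six-sigma rule together, a small Python sketch; the score range of 40 to 100 and the desired margin of ±2 are hypothetical:

```python
import math

def n_for_mean(s, margin, z=1.96):
    """Subjects needed to estimate a mean to within ±margin: N = (zS/E)²."""
    return math.ceil((z * s / margin) ** 2)

# no prior S available: six-sigma rule, S ≈ (max - min) / 6
s_guess = (100 - 40) / 6        # hypothetical score range 40..100 gives S ≈ 10
n = n_for_mean(s_guess, margin=2.0)
```

Rounding up with ceil ensures the planned sample never undershoots the required size.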
>>> When interest is in a population proportion (i.e. the primary outcome variable is categoric,
specifically binary), the total number of subjects required (N) is:

N = 4 z² P(1-P) / W²

where P is the expected proportion who have the characteristic of interest, W is the width of the
confidence interval (equal to twice the margin of error), and z is a value from the normal
distribution representing the confidence level (equal to 1.96 for 95% confidence).
Note that this formula looks like the one for measurement data except that S² has been replaced by
P(1-P). The table in Appendix 6.E (p 91, Hulley) provides the sample size for common choices
of P and W, and three choices of confidence level.
The formula can be rewritten as:

N = (z / E)² P(1-P)

where E is the margin of error (half the width, W).
As an approximation, for 95% confidence, use the value of 2 for z (instead of 1.96); remember
that this is an approximation, after all! Also, use the most conservative value of P, which is 0.5.
Then the formula is very concise and easily remembered:

N = 1 / E²

That is, one over the square of the margin of error.
This formula can also be easily rearranged to give E = 1 / √N; that is, the margin of error is
one over the square root of the sample size.
For example, if the sample size is 100, the margin of error is 10%; for a sample size of 400, the
margin of error is 5%; and for a sample size of 1000, the margin of error is about 3%.
Note that doubling the sample size from 1000 to 2000 only reduces the margin of error to 2%,
not much improvement in precision for double the effort. That explains why so many national
opinion polls are about 1000 in size.
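The shortcut E = 1/√N is easy to check in code; the sketch below reproduces the margins quoted above for samples of 100, 400, 1000, and 2000:

```python
import math

def margin_of_error(n):
    """Conservative margin of error for a proportion: E = 1 / sqrt(N)."""
    return 1 / math.sqrt(n)

# margins as whole percentages for common sample sizes
margins = {n: round(100 * margin_of_error(n)) for n in (100, 400, 1000, 2000)}
```

The diminishing returns are visible immediately: quadrupling the sample from 100 to 400 halves the margin, but doubling it from 1000 to 2000 shaves off only one point.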
If the expected proportion is more than half, then plan the sample size based on the proportion
expected NOT to have the characteristic. That is, switch the roles of P and 1-P.
If P or 1-P is very close to 0 or 1 (i.e. the characteristic of interest is rare or happens most
of the time), the sample size formula N = 1 / E² is not appropriate. Instead you need to use the
fuller version seen earlier:

N = (2 / E)² P(1-P)
[Note for the obsessive-compulsives among you: These formulas assume that the population is
infinite (i.e. very large) in comparison to the sample. There is a finite population correction
factor that will come into play when the final confidence intervals are being constructed. But it
can be safely ignored in calculating sample size for a survey.]
A confidence interval for the mean should be based on at least twelve observations: the width of
a confidence interval, which depends on the estimate of variability and the sample size,
decreases rapidly until 12 observations are reached and then decreases less rapidly.

Sample Size Estimation for Comparative Studies

If the answer to Question 1 in the previous section was "comparative study," you're in the right
place. This section will present:

A review of hypothesis testing
Baseline information
Questions to be answered for comparative studies:
    Question 1: What is an acceptable significance level (alpha)?
    Question 2: How large a power is needed?
    Question 3: How large is the variability in the effect of interest?
    Question 4: What is the smallest detectable effect of interest?
Calculating the sample size for comparative studies
A statistical test always challenges some hypothesis. A new treatment is investigated by
testing that the given treatment has no effect. A comparative study tests a hypothesis that two
groups under different treatment exhibit no differences in responses. We describe the results as
significant or positive when such a challenge has been successful and the tested hypothesis
overthrown (i.e. the null hypothesis is rejected). Significance refers to the events and data that
were actually observed, but which had small probability (P-value) according to the null
hypothesis (so the null hypothesis is rejected as being incompatible with the data).
Before proceeding to sample size estimation we need to review the basic concepts of hypothesis
testing.

Review of Basic Concepts of Hypothesis Testing

Hypothesis testing requires, first of all, and not surprisingly, hypotheses! That is, two
competing claims about a parameter or parameters (characteristics of a population). In the
context of sample size estimation the parameters are usually the mean or proportion of the key
outcome variable of interest. The null hypothesis is the status quo hypothesis, the position of
no difference, no effect, or no change. The alternative hypothesis is often referred to as the
research hypothesis. It represents a difference between groups, a real effect, and an abandonment
of the status quo. A hypothesis test culminates with a conclusion about which of the two
hypotheses is supported by the available data. The conclusion can either be correct or incorrect.
And statisticians, who have their ignorance better organized than ordinary mortals, have
classified the ways in which the conclusion can be correct or incorrect. Errors in the conclusion
are imaginatively called either Type I or Type II.
A Type I error occurs when the null hypothesis is rejected, but in fact the null hypothesis is
actually true. That is, the conclusion is that there is a significant difference when in fact
there really isn't. A Type I error can be thought of as a false positive.

A Type II error occurs when the null hypothesis is accepted, but in fact the null hypothesis is
actually false. That is, the conclusion is that there is no difference when in fact there really
is a difference. A Type II error can be thought of as a false negative.
Next we define alpha (α) as the probability of making a Type I error. It is also known as the
significance level. Usually α is set at 0.05 (keeping it consistent with 1 − α, or .95 or 95%, in the
context of confidence intervals). And we define beta (β) as the probability of making a Type II
error. Although β doesn't have another name, 1 − β does. It is known as power.
Power is the probability of correctly rejecting the null hypothesis; for example, concluding that
there was a difference when, in fact, there really was one! Sample size calculations are often
called power calculations, which tells you how crucial the concept of power is to the whole
exercise. Aside: a Type III error has been referred to as "getting the right answer to the wrong
question", or answering a question nobody asked! The following two-by-two table summarizes the
previous concepts and quantities.

                    Truth: No Difference    Truth: Difference
Study: Accept Ho    1 − α                   β
Study: Reject Ho    α                       1 − β
A useful analogy is to our Western legal system. In our system a defendant is innocent until
proven guilty. The null hypothesis is "not guilty"; the alternative hypothesis is "guilty". The
onus is on the investigator (i.e. the prosecution) to present the evidence to convince the judge or
jury to abandon the null hypothesis in favour of the alternative. If the data are convincingly more
consistent with the alternative hypothesis, the judge or jury (barring legal technicalities and
theatrics) must conclude that the defendant is guilty. The conclusion, whichever way it goes, may
be the right one or the wrong one. Convicting the guilty or acquitting the innocent are correct
decisions. However, convicting an innocent person is a Type I error, while acquitting a guilty
person is a Type II error. Neither of these errors is desirable (in this case a Type I error is the
worse of the two, but there are other situations where a Type II error is worse). We would
rather not make any errors. Notice however the problem that this presents. In order not to make
ANY Type I errors we would have to acquit everyone, which would lead to a high rate of Type II
errors. In order not to make ANY Type II errors we would have to convict rather a lot of innocent
people along the way. Hypothesis testing, therefore, tries to keep both error rates under control,
and this is accomplished by collecting more and more evidence (what a non-legal researcher
would call data). Sample size estimation concerns ensuring enough data to keep the probabilities
of Type I and Type II errors (α and β) at suitable levels.

Question 1: What is an acceptable significance level (alpha)? Convention chooses .05 (or, if
you like percentages more than proportions, 5%), but some situations dictate a different choice of
alpha. Alpha is also known by another name; it is the probability of making a Type I error
(discussed earlier in Review of Hypothesis Testing). Why 5%? Sir Ronald Fisher suggested this
as an appropriate threshold level. However, he meant that if the p-value from an initial
experiment were less than .05 then the REAL research should begin. This has been corrupted to
such an extent that at the first sign of a p-value under .05 the researchers race to publish.
Question 2: How large a power (i.e. probability of detection)? Convention chooses power of
.80 or 80%. Note that this assumes that the risk of a Type II error can be four times as great as the
risk of a Type I error. Why 80%? According to Streiner and Norman, this was because Jacob
Cohen [who wrote the landmark textbook on Statistical Power Analysis] surveyed the literature
and found that the average power was barely 50%. His hope was that, eventually, both α and β
would be .05 for all studies, so he took β = .20 as a compromise and thought that, over the years,
people would adopt more stringent levels. It never happened.
Question 3: How large will be the variability in estimating the effect or difference of
interest? For measurement outcome variables this means estimating the population standard
deviation.
Question 4: What is the smallest effect or non-null difference that the researcher wants to
detect? That is, what is the magnitude of the clinical difference of interest? Low magnification
on a microscope may fail to detect something. Too high a magnification may make unimportant
details look large. Finding a needle in a haystack is difficult, but finding an elephant in a
haystack is comparatively easy! The answers to Questions 3 and 4 are the key ingredients of the
formulas for sample size estimation.
Sample Size Estimation; Answering THE QUESTION!
The basic formulas require the four components discussed above:
an acceptable alpha
an acceptable power (or beta)
the population standard deviation of the outcome variables, and
the magnitude of the clinical difference of interest.

Comparison of two means (independent):

We begin with a basic formula for sample size. Start with two groups, a continuous measurement
endpoint, a two-sided alternative, normal distributions with the same variances and equal sample
sizes. The basic formula is:
N = 16 / Δ², where Δ = (μ0 − μ1) / σ = δ / σ
Note: This is the sample size for EACH group.
Δ can be thought of as the standardized difference between means, measured in units of the
standard deviation. The magnitude of clinical difference of interest and the standard deviation are
combined into a single quantity, and this quantity has a famous name: it is known as the Effect
Size (ES). As a guideline, Jacob Cohen classified effect sizes as small, moderate, and large (0.2,
0.5, and 0.8 for two-group comparisons); you can use these as a starting point. In the one-sample
case, the numerator is 8, instead of 16; that is, N = 8 / Δ². This situation occurs when a single
sample is being compared with an external population value (i.e. a target). Note that the sample
size for a one-sample case is one-half the sample size for each sample in a two-sample case. But
since there are two samples, the total in the two-sample case will therefore be four times that of
the one-sample case.
Example: If the standardized treatment difference is expected to be 0.5, then 16/(0.5)² = 64
subjects per treatment will be needed. Hence a total of 128 subjects are required. If the study
only requires one group then 32 subjects will be needed; this is one-fourth of the number in the
two-sample scenario.
This illustrates the rule that the two-sample scenario requires four times as many observations as
the one-sample scenario. The reason is that in the two-sample situation two means have to be
estimated, doubling the variance, and, additionally, two groups are required. Note that the two key
ingredients are the difference to be detected (μ0 − μ1) and the inherent variability of the
observations, indicated by σ.

Note also that the equation can be inverted to allow you to calculate the detectable difference for
a given sample size N.
Δ = 4 / √N, or (μ0 − μ1) = 4σ / √N
For a one-sample case, replace 4 by 2.
This rule is very robust and useful. Many sample size questions can be formulated so that this
rule can be applied.
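The rule of 16 and its inversion are easy to sketch in code. A minimal Python illustration (the function names are my own, not from the source):

```python
def n_per_group(effect_size):
    """Rule-of-16: subjects per group for a two-sample comparison of
    means (two-sided alpha = .05, power = .80)."""
    return 16 / effect_size ** 2

def detectable_effect(n):
    """Inverted rule: smallest standardized difference detectable
    with n subjects per group."""
    return 4 / n ** 0.5

print(n_per_group(0.5))       # 64.0 subjects per group
print(detectable_effect(64))  # 0.5
```

For a one-sample comparison against a fixed external target, halve the numerator (8 instead of 16), as described above.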
"Where does the multiplier of 16 come from?", I hear you asking. The full formula is:
N = 2 (zα + zβ)² / (δ/σ)²
For α = .05 (two-sided), zα = 1.96; for β = .20, zβ = 0.84. Hence 2 (zα + zβ)² = 2(1.96 + 0.84)² = 15.68 ≈ 16

What if you want other values of α and β? The multipliers for various values of β, with a
two-sided α of .05, are easily tabulated.
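These multipliers can be recomputed directly from normal quantiles using only the Python standard library; a sketch (the helper name is illustrative, not from the source):

```python
from statistics import NormalDist

def multiplier(alpha=0.05, beta=0.20):
    """2 * (z_alpha + z_beta)^2: the numerator of the per-group
    sample size formula N = multiplier / (delta/sigma)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided alpha
    z_b = NormalDist().inv_cdf(1 - beta)
    return 2 * (z_a + z_b) ** 2

# Multipliers for a two-sided alpha of .05 at several levels of power;
# power .80 gives 15.7, which rounds to the 16 used above.
for power in (0.80, 0.90, 0.95):
    print(f"power {power:.2f}: multiplier {multiplier(beta=1 - power):.1f}")
```

Higher power (lower β) inflates the multiplier, and hence the required sample size, roughly a third larger at 90% power and two thirds larger at 95%.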

Comparison of two means (dependent):

If your design has paired observations (e.g. before-and-after), then what seems to be a two-group
test is really just a one-group test of whether the average change is different from zero. (See
paired t-test in the Analysis II Module). Another description of this situation is that each subject
is serving as his/her own control. In this case the sample size formula is:

N = (zα + zβ)² / (δ/σ)²
It looks very similar to the two-sample formula, but with two important changes. First, there is
no multiplier of 2. Second, the σ is the standard deviation of the differences within pairs, not
the standard deviation of the original measurements. This is almost never known in advance, but
as Streiner and Norman say, "On the brighter side, this leaves more room for optimistic …"
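A short Python sketch of the paired-design calculation (the helper name is my own; note that the effect size here must use the standard deviation of the within-pair differences):

```python
import math
from statistics import NormalDist

def n_pairs(effect_size, alpha=0.05, beta=0.20):
    """Number of pairs for a paired design: (z_a + z_b)^2 / ES^2,
    where ES is measured against the SD of the within-pair
    differences (no multiplier of 2)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(1 - beta)
    return (z_a + z_b) ** 2 / effect_size ** 2

# A standardized within-pair difference of 0.5 needs about 32 pairs
print(math.ceil(n_pairs(0.5)))  # 32
```

Compare this with the 64-per-group (128 total) requirement of the unpaired design above: letting each subject serve as his or her own control can substantially reduce the required sample, provided the within-pair differences really are less variable than the raw measurements.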

Comparison of two proportions:

Next, we estimate the sample size required to compare two proportions. To compare two
proportions p0 and p1, use the formula:

N = 16 p̄(1 − p̄) / (p0 − p1)², where p̄ = (p0 + p1)/2.

For example, if p0 = .30 and p1 = .10, then p̄ = .20, so the required sample size per group is 64.
As with the comparison of two means, the multiplier of 16 can be changed. The same values as
in the previous table apply here too.
An upper limit on the required sample size occurs when p̄ = .50; then the formula becomes
N = 4 / (p0 − p1)². This is very conservative and works best when the proportions are centered
around .50.
When the proportions are less than .05, use the square-root (Poisson) version: N = 4 / (√p0 − √p1)²
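The proportions formula can be checked numerically; a short Python sketch reproducing the worked example above (the helper name is my own):

```python
def n_per_group_props(p0, p1):
    """Rule-of-16 analogue for two proportions: 16 * pbar * (1 - pbar)
    over the squared difference, per group (two-sided alpha = .05,
    power = .80)."""
    p_bar = (p0 + p1) / 2
    return 16 * p_bar * (1 - p_bar) / (p0 - p1) ** 2

# Reproduces the example: p0 = .30, p1 = .10 -> 64 subjects per group
print(round(n_per_group_props(0.30, 0.10)))  # 64
```

Note that when p̄ = .50 the factor 16 p̄(1 − p̄) collapses to 4, which is exactly the conservative upper-limit formula given above.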

More complicated designs:

For situations comparing more than two groups (where analysis of variance will be the technique
of analysis), the simplest (and justifiable) approach is to focus on the two groups you are most
interested in being able to compare and then use the two-sample formulas above. For repeated
measures designs with more than two measurements per subject, once again, focus on the two
measurements of most interest (e.g. baseline and final follow-up) and use the formula for paired
measurements. For categoric data with more than two rows and/or more than two columns,

analysis is usually based on chi-squared tests. Sample size procedures are not worked out for
these situations. Instead, find the two key groups to be compared, collapse the outcome to binary,
and use the procedures for comparing two proportions. Note also that from theoretical
considerations, a chi-square test of independence requires at least five observations per cell to
give fully valid results, so plan accordingly. That is, find the key crosstabulation, determine the
number of rows and columns (i.e. the number of levels of the two variables), compute the
number of cells and then multiply by 5; that should be the minimum sample size for this analysis.
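The five-per-cell bookkeeping is simple arithmetic; a tiny Python helper (illustrative only):

```python
def min_n_for_chisq(rows, cols, per_cell=5):
    """Minimum sample size so an r x c crosstabulation can average at
    least `per_cell` observations in every cell."""
    return rows * cols * per_cell

# A 3 x 4 table has 12 cells, so plan on at least 60 observations
print(min_n_for_chisq(3, 4))  # 60
```

This is a floor, not a target; if the marginal distributions are skewed, some cells will fall below the average, so a larger sample is prudent.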

For correlation and regression:

Rarely is a study aiming only to establish a single pairwise correlation between two variables. So
the formula for sample size to achieve a significant correlation is not very useful. As well, a test
of whether a correlation is significant really only tests whether the correlation is zero or not; it
doesn't say anything about the strength of the correlation. Correlation is usually just the first
step in model-building, especially regression models.

For multiple regression:

It is impossible to estimate regression coefficients before doing the research and data collection,
so power studies aren't really relevant here. Instead, just ensure that the number of data
points (i.e. observations or cases) is considerably more than 5 to 10 times the number of
variables. (A reference for this rule of thumb is: Kleinbaum, Kupper and Muller (1988).)

For logistic regression:

The same rule of thumb applies, but I would suggest aiming for a sample size of 10 times the
number of variables (rather than 5), because the outcome variable is binary rather than continuous.

For factor analysis (a useful tool for instrument development):

There are no power tables available to say how many subjects to use. Once again we go with a
rule of thumb, based on conventional wisdom and simulations, that says we should have an
absolute minimum of five subjects per variable, with the added constraint that we have at least
100 subjects in total. This rule will give useful results only in the best of circumstances. To be on
the safe side, double everything! Unfortunately, a huge percentage of the factor analyses performed
each year are in flagrant violation of these guidelines!

For rank tests (e.g. Mann-Whitney, Wilcoxon, Kruskal-Wallis):

There are also no formulas for sample size calculations. What to do? Streiner and Norman
advise, "Determine the sample size from the equivalent parametric test and leave it at that. Recent
work has shown that you don't lose any power with these tests and, when the data aren't normal,
they may even be more powerful than their parametric equivalent."

External Assessment:

This company is based in India, which increases its competitive advantage.

The Indian economy has low labor costs, although its economic indicators are quite weak, with an
increasing rate of inflation.

The workforce is high skilled in Information Technology.

Infosys is in a strong financial position as its business turned over more than $4 billion in 2008.
This strong financial position shows that its capital is expanding, and it provides a base from
which to attract potential investors.

The company has 44 global development centers, most of which are located in India. The
company has offices in many developed and developing nations. This means that Infosys is
becoming a global brand with the capability to support global operations, which it carries out
for its multinational clients.


Infosys struggles in the US markets on different occasions, and has particular problems in
securing United States Federal Government contracts in North America. As these contracts are
very profitable and can be continued for long periods of time, Infosys is losing ground in this
lucrative business.
This company is considered a big IT company when compared to its Indian competitors, but
Infosys is much smaller than its global competitors.


Poised to benefit from the acquisition of Lodestone Holding


The company needs to fulfill increasing demand for cloud computing services.

The outlook for the enterprise mobility market is positive for the company.

The company can benefit from the growth of big data in the IT industry.


The company has to face intense competition in the local markets as various local players
provide their services at cheap rates.

Employee attrition may increase personnel costs of the company.

Another great threat is the instability of the economic environment. The company can face
pressure on its business from pricing and low employee utilization.

Research Design
We conducted a nationwide questionnaire-based survey to capture the issues in the design and
applications of the performance scorecard. The universe of companies selected for this study
consisted of the BT-500 private sector companies and the 75 most valuable PSUs, which is a fair
representation of corporate India. The subsidiaries of multinational corporations (MNCs) form a
major constituent of the Indian corporate sector. Based on value judgment, four such companies
from the automobile, engineering, and software sectors were included in the sample.

We developed the draft questionnaire based on the review of literature and circulated it to a
group of prominent academicians and chief financial officers (CFOs) for feedback as a part of
the pilot study. Based on their suggestions, we revised the questionnaire. The final questionnaire
on performance scorecard contained seven questions with 106 sub-parts. The survey asked the
CFOs to respond on a Likert scale of 0 to 5 (where 0 means "not used", 1 means "unimportant",
and 5 means "most important").

We sent the questionnaire to the CFOs of 579 companies in batches during the week from
October 30, 2002 to November 7, 2002. We also sent two reminders (one in December, 2002 and
the second in February, 2003) for follow-up in order to maximize the response rate. We had
indicated to the CFOs that the identity of the respondent companies and the respondents would
be kept strictly confidential and only aggregate generalizations would be published.
Fifty-three completed questionnaires were received by June 9, 2003. All the four MNCs
responded. In addition, 49 out of the remaining 575 companies, a response rate of 8.52 per cent,
returned the duly filled questionnaires. These 53 companies constitute the sample for deriving
inferences for the present study.

Limitations of the Methodology

In any such survey, it is likely that the firms that do not respond on time may introduce a
non-response bias. Whatever the respondents say is believed to be true and, hence, no statistical
test is performed to study the non-response bias and the consistency of the individual responses.
Another limitation of the survey methodology is that it measures belief and not necessarily
actual practice.

1 No Right to Continued Employment

Nothing contained in the Plan shall give any Employee the right to be retained in the
employment of the Participating Company or Affiliate or affect the right of any such employer to
dismiss any Employee with or without cause. The adoption and maintenance of the Plan shall not
constitute a contract between any Participating Company and Employee or consideration for, or
an inducement to or condition of, the employment of any Employee. Unless a written contract of
employment has been executed by a duly authorized representative of a Participating Company,
such Employee is an employee at will.
2 Payment on Behalf of Payee
If the Plan Administrator finds that any person to whom any amount is payable under the Plan is
unable to care for such person's affairs because of illness or accident, or is a minor, or has died,
then any payment due such person or such person's estate (unless a prior claim therefor has been
made by a duly appointed legal representative) may, if the Plan Administrator so elects, be paid
to such person's spouse, a child, a relative, an institution maintaining or having custody of such
person, or any other person.
3 Nonalienation

No interest, expectancy, benefit, payment, claim or right of any Participant or Beneficiary under
the Plan shall be (a) subject in any manner to any claims of any creditor of the Participant or
Beneficiary, (b) subject to the debts, contracts, liabilities or torts of the Participant or Beneficiary
or (c) subject to alienation by anticipation, sale, transfer, assignment, bankruptcy, pledge,
attachment, charge or encumbrance of any kind. If any person attempts to take any action
contrary to this Section, such action shall be null and void and of no effect.
4 Missing Payee
If the Plan Administrator cannot ascertain the whereabouts of any person to whom a payment is
due under the Plan, and if, after five years from the date such payment is due, a notice of such
payment due is mailed to the last known address of such person, as shown on the records of the
Plan…
5 Required Information
Each Participant shall file with the Plan Administrator such pertinent information concerning
himself or herself, such Participant's Beneficiary, or such other person as the Plan Administrator
may specify; and no Participant, Beneficiary, or other person shall have any rights or be entitled
to any benefits under the Plan unless such information is filed by or with respect to the
Participant, Beneficiary, or other person.
The obligations of such Participating Company to make payments hereunder constitute a
liability of such Participating Company to a Participant or Beneficiary, as the case may be. Such
payments shall be made from the general funds of the Participating Company; and the
Participating Company shall not be required to establish or maintain any special or separate fund,
or purchase or acquire life insurance on a Participants life, or otherwise to segregate assets to
assure that such payment shall be made; and neither a Participant nor a Beneficiary shall have
any interest in any particular asset of the Participating Company by reason of its obligations
hereunder. Nothing contained in the Plan shall create or be construed as creating a trust of any
kind or any other fiduciary relationship between any Participating Company and a Participant or
any other person, it being the intention of the parties that the Plan be unfunded for tax purposes
and for Title I of ERISA.
7 Binding Effect
Obligations incurred by any Participating Company pursuant to the Plan shall be binding upon
and inure to the benefit of such Participating Company, its successors and assigns, and the
Participant and the Participant's Beneficiary.

8 Merger or Consolidation
In the event of a merger or a consolidation by any Participating Company with another
corporation, or the acquisition of substantially all of the assets or outstanding stock of a
Participating Company by another corporation, then and in such event the obligations and
responsibilities of such Participating Company under the Plan shall be assumed by any such
successor or acquiring corporation, and all of the rights, privileges and benefits of the
Participants and Beneficiaries hereunder shall continue.
9 Entire Plan
This document, any elections provided for in the Plan, any written amendments hereto and the
Exhibits attached hereto contain all the terms and provisions of the Plan and shall constitute the
entire Plan, any other alleged terms or provisions being of no effect.

10 Withholding
Each Participating Company shall withhold from benefit payments all taxes required by law.
11 Compliance with Section 409A of the Code
The Plan is intended to comply with Section 409A of the Code. Notwithstanding any provision
of the Plan to the contrary, the Plan shall be interpreted, operated and administered consistent
with this intent.
12 Construction
Unless otherwise indicated, all references to articles, sections and subsections shall be to the Plan
as set forth in this document. The titles of articles and the captions preceding sections and
subsections have been inserted solely as a matter of convenience of reference only and are to be
ignored in any construction of the provisions of the Plan. Whenever used herein, unless the
context clearly indicates otherwise, the singular shall include the plural and the plural the
singular.
The Plan shall be governed and construed in accordance with the laws of the State of Delaware,
except to the extent such laws are preempted by the laws of the United States of America.

Chapter No. 3
Literature Review
This chapter embarks upon the concept of Balanced Scorecard. It discusses the evolution and
history of Balanced Scorecard. Further, this chapter undertakes a review of existing literature by
various scholars on Balanced Scorecard. It also summarizes the studies on Balanced Scorecard
carried out in different sectors and countries. The chapter culminates with a discussion on the
research gap identified within the existing literature on change management and Balanced
Scorecard.
To assess the merits of a particular strategy, a need for performance measuring tools arises. The
past two decades have witnessed a dramatic shift in this process of performance measurement.
3.1.1 From Shareholder Value to Stakeholder Theory

There are several ways to consider the strategy of the firm and each has different implications in
reporting organizational performance. The key performance measurement processes are
shareholder theory and stakeholder theory (Owen, 2006; Brown & Fraser, 2006). In the 1980s,
any firm was viewed as belonging to the shareholders. Shareholder theory used shareholder
return to measure overall organization performance and thus dominated organizational
performance measurement systems (Porter, 1980).
3.1.2 Stakeholder Theory: The Balanced Scorecard
Early 1990s witnessed a shift to a more stakeholder-based view. The firm was considered as
having responsibilities to a wider set of groups including shareholders (Freeman, 1984; Reich,
1998; Post et al., 2002; Brown & Fraser, 2006; Steurer, 2006). Other stakeholders may include
employees, customers, suppliers, governments, industry bodies and local communities.
Stakeholder theory assesses organization performance against the expectations of a variety of
stakeholder groups that have particular interests in the organization's activities. The stakeholder
theory perspective of organizational performance includes shareholder value and recognizes
shareholders as a single group of stakeholders.
The Balanced Scorecard performance measurement system by Kaplan and Norton
(1992a) is based on stakeholder theory. Balanced Scorecard is primarily a tool for measuring
external and internal economic value. The original Balanced Scorecard model did not incorporate
employee, supplier or community perspectives (Mooraj et al., 1999). Kaplan and Norton
originally suggested that a Balanced Scorecard should have a total of 14-16 performance
measures, divided into four quadrants with not more than 4-6 performance measures in each
quadrant. The researchers argued that these measures could be integrated and linked by means of
cause and effect (Figge et al., 2002). However, most organizations have neither developed causal
links between the factors nor found a systematic and consistent way of incorporating either new
or less tangible organizational performance measures, such as those related to environmental
responsibility or community relationships.

3.1.3 Stakeholder Theory: The Triple Bottom Line

Around the same time that firms began adopting the Balanced Scorecard, public, media and
community groups started paying more attention to the effect of organizations on the natural
environment and society as a whole. Several countries started holding firms accountable for more
than creating economic value. In 1997, the triple bottom line (Elkington, 1997) emerged as a new
tool for measuring organizational performance. Although based on stakeholder theory, it carries a
wider perspective of the stakeholders influence on the organization when compared to Balanced
Scorecard. The triple bottom line is essentially based on the idea that a firm should measure its
performance in relation to stakeholders as well as local communities and governments. The
stakeholders may not only be those with whom the firm maintains direct relationships, such as
employees, suppliers and customers, but also a much wider population to which the firm is related
indirectly, such as the local community and environment. A successful performance measurement
needs to include the view of these unconventional stakeholders.

The triple bottom line implies that responsibilities of organizations are much wider than simply
those related to the economic aspects of producing products and services. It adds social and
environmental measures of performance to the economic measures. Environmental performance
refers to the amount of resources, such as energy, land and water, a firm uses in its operations. It
also includes the by-products created by an organization, like waste, air emissions and chemical
residues. Social performance refers to the impact of a firm and its suppliers on the communities
in which it functions. Economic measures developed by one organization are readily transferable
to others, whereas social and environmental performance measures are unique to each
organization. Unlike the
Balanced Scorecard, the triple bottom line has not been successful in penetrating organizational
performance systems, as organizations are reluctant to accept the influence these performance
measures have on actual economic production. The Balanced Scorecard, by contrast, seeks to
integrate strategy, organization framework and vision into management systems, and to translate
the long-term strategy and innovation of customer value into operational activities. It also
balances the competitiveness and short-term fortunes of stockholders through a blending of
traditional and modern indicators (Talbot, 1999).
3.2.1 Defining Balanced Scorecard
The Balanced Scorecard was originally a one-year multi-company study (Kaplan & Norton,
1992a). The study concluded that in increasingly complex business environment, dependence on
only financial measures was no longer adequate for managing organizations, especially where
intellectual capital and knowledge-based assets were critical for success. Kaplan and Norton
(1996c) defined Balanced Scorecard as a framework that helps organizations translate strategy
into operational objectives that drive both behavior and performance. The Balanced Scorecard
strategic management system is comprised of "a framework, core principles and processes that
translate an organization's mission and strategy into a comprehensive set of performance
measures strategically aligned with initiatives" (Inamdar et al., 2002, p. 21). The measures and
objectives are viewed across four dimensions of performance: financial, customer, internal
business process and learning and growth. Each perspective has its respective objectives for
three to five years that are communicated throughout the organization and presented on a
strategy map (Kaplan & Norton, 1996c). The word balanced in the term 'Balanced Scorecard' is
indicative of the balanced consideration given to long and short-term objectives, financial and
non-financial measures, leading and lagging indicators and external and internal performance
perspectives (Kaplan & Norton, 1996b, 1996c; Hendricks et al., 2004). Rimar and Garstka
(1999) highlighted the need to articulate organization's strategy. Even a clearly stated vision and
strategy can be interpreted differently by individual members in an organization. Developing a
Balanced Scorecard clarifies the significance of the strategy and translates it into terms that are
considered meaningful by the people involved. It focuses on fundamentally changing the way the
organization is strategically led, not on keeping organizational score. Kanji and Moura
(2001) concluded, "the Balanced Scorecard is more than a performance measurement system. It
is commonly adopted as a strategic management system to describe the organization's vision of
the future and create shared understanding; clarify and update corporate strategy; communicate
strategic objectives throughout the organization; align customer need and business objectives;
work as a holistic model of strategy allowing all employees to see how they contribute to
organizational success; link strategic objectives to targets and budgets; build a reward system
that is geared to achieving targets; and obtain feedback on the effectiveness of the strategic view"
(p. 898).
3.2.2 The Four Perspectives of Balanced Scorecard

The Balanced Scorecard allows the manager to look at the business from four important
perspectives.

Financial Perspective: How do we look to shareholders?
The Balanced Scorecard retains the financial perspective since financial measures are valuable in
summarizing the measurable economic consequences of actions already taken (Kaplan & Norton,
1996c). Financial performance measures indicate whether the company's strategy,
implementation and execution contribute to bottom-line improvement (Kaplan & Norton, 1992a).
Typical financial goals relate to profitability measured by operating income, return on capital
employed or economic value added (Kaplan & Norton, 1996c). Kaplan and Norton (1992a)
identified three stages of business's life cycle: growth, sustain and harvest. During growth
businesses are at the early stage of their life cycle. They possess products or services with
significant growth potential. In order to capitalize on this potential, organizations have to employ
resources to develop and improve new products and services: construct and expand production
facilities, build operating capabilities, invest in systems, infrastructure and distribution networks
and foster and develop customer relationships. Organizations at the sustain stage still attract
investment and reinvestment, but are required to earn good returns on invested capital. These
businesses are likely to maintain their existing market share and grow the business. When
business units reach a mature phase of their life cycle, the organization harvests the investments
made in the previous two stages. These businesses no longer need significant investment, only
enough to maintain existing equipment and capabilities. The key objective during this phase is
to maximize cash flow to the corporation (Kaplan & Norton, 1996c).

The Customer Perspective-Core Measures


Internal Business Process Perspective: What must we excel at?

Kaplan and Norton (1996c) recommend that managers should define a comprehensive internal-
process value chain, beginning with the innovation process (identifying existing and future
customer needs and developing new solutions to meet those needs), continuing through the
operations process (delivering existing products and services to existing customers) and ending
with after-sale service (offering services after the sale that add to the value customers receive
from a company's product and service offerings). The process of deriving objectives and measures
for the internal business process perspective represents one of the major distinctions between the
Balanced Scorecard and traditional performance measurement systems. Traditional performance
measurement systems focus on controlling and improving existing responsibility centers and
departments (Kaplan, 1984; Howell et al., 1987; Johnson & Kaplan, 1987; Kaplan, 1990). Most
organizations nowadays supplement financial measurements with measures of quality, yield
and cycle time (Nanni et al., 1988; Cross & Lynch, 1989; Lessner, 1989; Nanni et al., 1990).

Learning and Growth Perspective: Can we continue to improve and create value?
Kaplan and Norton (1992a) advocated that organizations must make continual improvements to
their existing products and processes and gain the ability to introduce entirely new products with
expanded capabilities. The focus lay on investing for the future, such as in new equipment and in
product research and development (Kaplan & Norton, 1996a). They asserted that
through the ability to launch new products, generate more value for customers and enhance
operating efficiencies continually, organizations can enter new markets and increase revenues
(Kaplan & Norton, 1992a). Argyris and Schon (1978) defined organizational learning as
"experience-based improvement in organizational task performance" (p. 323). It develops from
three principal sources: people, systems and organizational procedures. The financial,
customer and internal business process objectives on the Balanced Scorecard reveal large gaps
between the existing capabilities of people, systems and procedures and the required competence
to realize breakthrough performance. To close these gaps, organizations need to invest in re-
skilling employees, enhancing information technology and systems, and aligning organizational
procedures and routines. These objectives are articulated in the learning and growth perspective
of the Balanced Scorecard (Kaplan & Norton, 1996c).
3.2.3 Strategic Processes of Balanced Scorecard
Through the Balanced Scorecard system, Kaplan and Norton (1996a, 1996b, 1996c, 1996d,
2001a) introduce four management processes that, separately and in combination, link long-term
strategic objectives with short-term actions. The four processes involved in implementing a
Balanced Scorecard are depicted in Exhibit
The first process, clarifying and translating the vision, helps managers build a consensus
around the organization's vision and strategy (Kaplan & Norton, 1996d). To act on the words in
vision and strategy statements, the statements must be expressed as an integrated set of
objectives and measures, agreed upon by all senior executives. To set financial goals, the team
must consider whether to emphasize revenue and market growth, profitability or cash flow
generation. For the customer perspective, the management must be explicit about the customer
and market segments in which it has to compete. Further, the objectives and measures for
internal business process represent one of the principal innovations and benefits of the Balanced
Scorecard approach. The Balanced Scorecard highlights those processes that are most critical for
achieving breakthrough performance for customers and shareholders. Finally, the linkage to
learning and growth objectives reveals the rationale for significant investments in re-skilling
employees, information technology and systems, and organizational procedures. These investments
generate major innovation and improvement for internal business processes, customers and
shareholders (Kaplan & Norton, 1996c). The second process, communicating and linking

strategic objectives and measures, enables managers to communicate their strategy up and down
the organization and link it to departmental and individual objectives. The Balanced Scorecard
gives managers a way to ensure that all levels of organization understand the long-term strategy
and that both departmental and individual objectives are aligned with it (Kaplan & Norton,
1996d). Kaplan & Norton (1996c) reported that the strategic objectives and measures of the
Balanced Scorecard are communicated throughout an organization using company newsletters,
bulletin boards, videos and even electronic media such as groupware and networked personal
computers. The Balanced Scorecard also encourages a dialogue between business units,
corporate executives and board members, not only regarding short-term financial objectives, but
also the formulation and implementation of a strategy. The third process, business planning,
setting targets and aligning strategic initiatives, enables companies to integrate their
business and financial plans. Managers find it difficult to integrate diverse initiatives to achieve
strategic goals, which often results in frequent disappointments. Kaplan & Norton (1996d) advised
managers to use the ambitious goals for Balanced Scorecard measures as the basis for allocating
resources and setting priorities. This allows undertaking and coordinating only those initiatives
that move them toward the long-term strategic objectives. To meet ambitious financial
objectives, managers need to identify stretch targets for their customer, internal business process
and learning and growth objectives. Once targets for the three perspectives are established,
managers can align their strategic quality, response time and reengineering initiatives for
accomplishing the breakthrough objectives. The targets for the strategic initiatives are derived
from Balanced Scorecard measures such as dramatic time reductions in order fulfillment cycles,
shorter time-to-market in product development processes and enhanced employee capabilities
(Kaplan & Norton, 1996c). The fourth process, enhancing strategic feedback and learning, is
considered to be the most innovative aspect of the entire Balanced Scorecard framework. It
offers companies the capacity for strategic learning. Existing feedback and review processes
depend on whether the business, its departments or its individual employees have met their
budgeted financial goals. Kaplan & Norton (1996c) recognized that with the help of Balanced
Scorecard a company can monitor short-term results for customers, internal business
processes and learning and growth while evaluating strategy in the light of recent performance.
The first three strategic processes of the Balanced Scorecard are vital for implementing strategy but
not sufficient in an unpredictable environment. Together they form an important single-loop
learning process, in which the objective remains constant and any deviation from the planned route
is considered an error to be corrected. This single-loop process does not involve re-
examination of the strategy and techniques employed. Many organizations function in a turbulent
environment with complex strategies that lose their validity with changes in business
conditions. Kaplan and Norton (1992a) suggested that companies must become capable of
double-loop learning, learning that produces a change in people's assumptions and theories about cause-
and-effect relationships.

Evolution of the Balanced Scorecard


Through the years, the Balanced Scorecard has evolved, from the performance measurement tool
originally introduced by Kaplan and Norton (1992a), to a tool for implementing strategies
(Kaplan & Norton, 1996a, 1996b, 1996c, 1996d) and a framework for determining the alignment
of an organization's human, information and organization capital with its strategy (Kaplan &
Norton, 2004a). In 1997, the Harvard Business Review listed the Balanced Scorecard as one of the
75 most influential ideas of the 20th century (Bible et al., 2006). This shift has prompted
companies to view the Balanced Scorecard as a strategic communication and management
system. Pandey (2005) reported that since the early 1980s the traditional performance measurement
system, based solely on financial ratios, has been criticized by various practitioners
and management theorists for its lack of strategic focus. Research and development, employee
training, brand building and after-sales service are a few examples where the
short-term profit objective conflicts with long-term customer satisfaction. Newer performance
measurement systems therefore included a combination of financial and non-financial measures. This
led to the introduction of performance drivers, instead of the simple performance measurement
(Seminogovas & Rupsys, 2006). Kaplan and Norton's (1992a) Balanced Scorecard has emerged
as a well-accepted performance management tool that provides managers with mechanisms to
develop performance objectives and measures linked to strategy. Sanger (1998) highlighted that
the Balanced Scorecard acknowledges the deficiencies in many business performance
measurement systems, which often depend totally on financial measures and attempts to
overcome the deficiencies of existing measurement systems by measuring and analyzing results
across a range of activities. Although the Balanced Scorecard identifies only three
stakeholders: shareholders (financial performance), customers (customer relations) and
employees (internal business process and learning and growth) (Kaplan & Norton, 1992a; Wright
et al., 1999; Hsu, 2005), and ignores two other significant stakeholders, the environment and social
matters (Brignall, 2002), it has changed managers' view of performance
because it carries both financial and non-financial metrics. Whittaker (2001) recognizes
that an effective Balanced Scorecard will articulate the strategic direction of the company, the
motivation for that strategic direction and how it will progress the organization's performance.
Following Kaplan and Norton's publications prior to 1997, a significant change is observed in
Balanced Scorecard thinking during the mid to late 1990s, which influenced the illustration of
Balanced Scorecards. The Balanced Scorecard now has at least the following attributes:
> A mixture of financial and non-financial measures (Kaplan & Norton, 1992a, 1993, 1996a,
1996c).
> A limited number of measures (Kaplan & Norton, 1992a): between 15 and 20 (Kaplan & Norton,
1993) or 20 and 25 (Kaplan & Norton, 1996c).
> Measures clustered into four groups called perspectives (Kaplan & Norton, 1992a, 1993,
1996a, 1996c), originally named "financial", "customer", "internal process" and "innovation and
learning". Later, the last two perspectives were renamed "internal business process" and
"learning and growth".
> Measures chosen to relate to specific strategic goals, usually documented in tables with one
or more measures associated with each goal (Kaplan & Norton, 1992a, 1993, 1996a, 1996c).
> Measures should be chosen in a manner that gains the active support of the senior managers of
the organization. The selection should reflect their privileged access to strategic information and also the
significance of their endorsement and support of the strategic communications that may flow
from the Balanced Scorecard once developed (Kaplan & Norton, 1992a, 1993, 1996a, 1996c).

> Kaplan and Norton (1996a) illustrated and discussed the need to show causal links between
measures across the Balanced Scorecard perspectives in a fashion that anticipates second-
generation Balanced Scorecard features. But later in 1996, they suggested that causality should
be between "performance driver [lead]" measures and "outcome [lag]" measures (Kaplan &
Norton, 1996c).
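The attributes listed above can be illustrated with a small data model. The following Python sketch is purely hypothetical (the objective names, measures and `drives` links are invented for illustration, not taken from Kaplan and Norton); it shows a limited set of objectives clustered into the four perspectives, each carrying lead (performance driver) or lag (outcome) measures, with simple causal links between objectives:

```python
from dataclasses import dataclass, field
from typing import List

PERSPECTIVES = ("financial", "customer",
                "internal business process", "learning and growth")

@dataclass
class Measure:
    name: str
    kind: str  # "lead" (performance driver) or "lag" (outcome)

@dataclass
class Objective:
    name: str
    perspective: str                                  # one of PERSPECTIVES
    measures: List[Measure] = field(default_factory=list)
    drives: List[str] = field(default_factory=list)   # objectives this one supports

def by_perspective(objectives):
    """Cluster objectives into the four perspectives."""
    groups = {p: [] for p in PERSPECTIVES}
    for obj in objectives:
        groups[obj.perspective].append(obj)
    return groups

# Hypothetical causal chain running up the perspectives toward a financial outcome
objectives = [
    Objective("Grow operating income", "financial",
              [Measure("operating income", "lag")]),
    Objective("Increase customer loyalty", "customer",
              [Measure("repeat-purchase rate", "lead")],
              drives=["Grow operating income"]),
    Objective("Shorten order fulfillment cycle", "internal business process",
              [Measure("order fulfillment time", "lead")],
              drives=["Increase customer loyalty"]),
]
groups = by_perspective(objectives)
```

The `drives` links sketch the lead-to-lag causality the attributes describe: an internal-process improvement drives a customer objective, which in turn drives a financial outcome.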

First Generation Balanced Scorecard

The Balanced Scorecard has evolved from a mere two-by-two matrix approach for performance
measurement through at least another two generations (Cobbold & Lawrie, 2002). Balanced
Scorecard was initially described as a simple, "4 box" approach to performance measurement
(Kaplan & Norton, 1992a). In addition to financial measures, managers were encouraged to
look at measures drawn from three other "perspectives" of the business: learning and
growth, internal business process and customer, chosen to represent the major stakeholders in
a business (Mooraj et al., 1999). Definition of what comprised a Balanced Scorecard was
sparse and focused on the high-level structure of the device. Simple 'causality' between the four
perspectives was illustrated but not used for any specific purpose. Kaplan and Norton (1992a)
focused on the selection and reporting of a limited number of measures in each of the four
perspectives. They suggested the use of attitudinal questions
relating to the vision and goals of the organization to help in the selection of measures to be
used. Kaplan and Norton (1992a) mentioned little about how measure selection activity could be
done, beyond general assertions about the design philosophy; for example, "Companies should
also attempt to identify and measure the company's core competencies, the critical technologies
needed to ensure continued market leadership" (Kaplan & Norton, 1992a, p. 176). However, the
design challenges presented by first-generation Balanced Scorecard design were severe as
indicated by practitioners in the literature and authors during their personal experience working
in the field (Butler et al., 1997; Ahn, 2001; Irwin, 2002; Radnor & Lovell, 2003). Also, various
researchers highlighted the adverse effects of poor measure selection on the effectiveness and
adoption rates of the Balanced Scorecard (Lingle & Schieman, 1996; Schneiderman, 1999; Malina &
Selto, 2001). The first generation Balanced Scorecards were primarily promoted as a control tool
for managers with the 'red, yellow, green' reporting of achievement of targets where green
indicated a job well done, yellow meant scope for improvement and red needed urgent attention.
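The 'red, yellow, green' target-achievement reporting described above can be sketched as a small classification function. This is an illustrative sketch only; the 10 percent tolerance band for yellow is an invented assumption, not a threshold from the text:

```python
def rag_status(actual, target, tolerance=0.10):
    """Classify target achievement as in first-generation scorecard reporting:
    green = target met (job well done), yellow = within the tolerance band
    below target (scope for improvement), red = needs urgent attention.
    The tolerance band width is a hypothetical assumption."""
    if actual >= target:
        return "green"
    if actual >= target * (1 - tolerance):
        return "yellow"
    return "red"
```

For example, with a target of 100 units, an actual of 95 would report yellow under the assumed 10 percent band, while 80 would report red.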
Despite the Balanced Scorecard's success, limitations have been raised in both the academic and
practitioner literature: questionable key assumptions and relationships (Norreklit, 2000); not providing
direction as to how to improve performance to achieve the desired strategic results (Gautraeu &
Kleiner, 2001); uncertainty about the finality of cause and effect (Norreklit, 2000); being costly in
terms of cash and time (Lipe & Salterio, 2000; Gautraeu & Kleiner, 2001); and the risk that the volume of data
may overload human decision-makers (Lipe & Salterio, 2002). Butler et al. (1997) argued that
the greatest threat in using the Balanced Scorecard is that managers select the wrong measures,
group them into the four proposed perspectives and focus on the wrong issues.

Second Generation Balanced Scorecard

The practical difficulties related to the design of first generation Balanced Scorecards were
noteworthy, since the definition of a Balanced Scorecard was originally vague. Two significant
areas of concern were filtering (the process of choosing specific measures to report) and
clustering (deciding how to group measures into perspectives). Discussions relating to clustering
continue to be rehearsed in the literature (Butler et al., 1997; Kennerley et al., 2000), but
discussions relating to filtering are less common, and often appear as part of descriptions of
methods of Balanced Scorecard design (Kaplan & Norton, 1996; Olve et al., 1999). Between 1992
and 1996, Kaplan and Norton focused on researching means to demonstrate causality between
measures (Newing, 1995). Measure-based linkages offered a richer model of causality than
before, but also presented conceptual problems (Brewer, 2002; Clinton et al., 2002). Mooraj et
al. (1999) identified cause-and-effect relationships as a significant attribute of the Balanced
Scorecard when selecting appropriate indicators. At the same time they are difficult to integrate
with the need for the Balanced Scorecard design to reflect the consensus views of the potential
users. However, the idea of strategic linkage became an increasingly important constituent of
Balanced Scorecard design methodology as it could be used to specify the critical elements and
their linkages for an organization's strategy (Kaplan & Norton, 2001a). Measures were chosen to
relate to specific strategic objectives. The design identified about 20-25 strategic objectives each
associated with one or more measures and assigned to one of four perspectives (Olve et al., 1999;
Kaplan & Norton, 2000a). In the mid-1990s, Balanced Scorecard documentation began to present
linkages between the strategic objectives graphically (Kaplan & Norton, 1993), with causality
linking across the perspectives on the way to key objectives related to financial performance. The
attempt to visually document the major causal relationships between strategic objectives laid out
the results in a strategic linkage model or strategy map (Olve et al., 1999; Kaplan & Norton,
1996a, 1996c, 2000a). The second generation Balanced Scorecard moved away from an
attitudinal approach to measure selection, toward measures with an explicit link between strategic
objectives and measures. Cobbold and Lawrie (2002) referred to Balanced Scorecards that
incorporate these developments as 'Second Generation Balanced Scorecards'. Kaplan and Norton
(1996d) reported that these changes enabled Balanced Scorecard to evolve from an improved
measurement system to a core management system. The authors realized that early adopters
found that the Balanced Scorecard could help their organizations implement and control strategy.
The second generation Balanced Scorecard therefore replaced simplistic causality
between perspectives with identifiable cause-and-effect relationships, linking strategic management
with performance management. One consequence of this change was increased pressure
on the design process to accurately reflect the organization's strategic goals. Another
consequence was more awareness of the need to reflect differences in management agenda
within differing parts of organizational structures. As a result, attention was given to developing
'strategic alignment' between management units by developing Balanced Scorecards as part of a
'cascade' at the business unit level (Kaplan & Norton, 1996c; Olve et al., 1999). The
representation of causality between strategic objectives, known initially as the 'Strategic Linkage
Model', was later considered an important part of Balanced Scorecard design (Kaplan &
Norton, 2000a). Researchers suggested that for many organizations the cause-and-effect model is
inappropriate because it leaves out one or more important clusters (Kennerley & Neely, 2000;
Brignall, 2002) or because the causality links cannot be justified (Norreklit, 2000). The common
thread among these concerns is the need to increase confidence that the Balanced Scorecard
accurately reflects the strategic objectives of the organization and that the linkages shown are
significant. Organizations developing second-generation Balanced Scorecards found noteworthy
practical problems both with measure selection and target setting (Barney et al., 2004) and with
challenges in rationally cascading high-level Balanced Scorecards (Lawrie & Cobbold, 2004).

Third Generation Balanced Scorecard

The third generation Balanced Scorecard model is based on a refinement of second generation
design characteristics and mechanisms, offering a greater appreciation of strategy through improved
functionality in modeling, studying and coordinating holistic relationships over time (Olve et
al., 1999; Cobbold & Lawrie, 2002). These developments stem from issues related
to the validation of strategic objective selection and target setting, which triggered the growth in
the late 1990s of a further design element, the destination statement. Destination statements were
initially created towards the end of the design process by challenging the managers involved to
imagine the impact of the accomplishment of the strategic objectives. This integrative process
helped recognize inconsistencies in the profile of objectives preferred (Kennerley & Neely, 2000;
Neely et al., 2000; Brignall, 2002) and the final document was found to be useful in validating
the targets chosen. It was quickly realized that management teams were able to discuss, create,
and relate to the "destination statement" easily, even without reference to the selected objectives. As
a result, the design process was reversed, creating the destination statement at the beginning of an
activity rather than at the end. Further, it was found that by working from destination statements, the
selection of strategic objectives and articulation of hypotheses of causality was much easier and
consensus within a management team could be attained more quickly (Shulver et al., 2000;
Cobbold & Lawrie, 2002; Lawrie et al., 2004). Proper planning and awareness, in relation to
Balanced Scorecard software systems, was found to be another way to overcome shortfalls
(Gautraeu & Kleiner, 2001). Third generation Balanced Scorecards seek to drive
transformational change and breakthrough results throughout the organization. Kaplan and
Norton (2001a) have asserted that the Balanced Scorecard can be used as an organizational
framework for successful strategy implementation. Cobbold and Lawrie (2002) identified the key
components of a third generation Balanced Scorecard as below:

> Destination Statement: A description, ideally including quantitative detail, of what the
organization, or the part of the organization managed by the Balanced Scorecard users, is likely
to look like at an agreed future date (Olve et al., 1999; Guidoum, 2000; Shulver et al.,
2000; Shulver & Antarkar, 2001; Cobbold & Lawrie, 2002; Lawrie et al., 2004; Barney et
al., 2004). The destination statement is sub-divided into descriptive categories that serve a
similar purpose to the perspectives in first and second generation Balanced Scorecards
(Barney et al., 2004). To decide rationally about organizational activity, an enterprise
needs to develop a clear idea of what the organization is planning to achieve
(Senge, 1990; Kotter, 1995).

> Strategic Objectives: The destination statement offers a clear and shared picture of an
organization at some point in the future, but does not present a suitable focus for
management attention between now and then. The organization needs to set up objectives
to reach its destination on time. By representing the selected objectives on a strategic
linkage model, the design team is encouraged to apply systems thinking (Senge, 1990;

Senge et al., 1999) to identify cause-and-effect relationships between the selected objectives.


> Strategic Linkage Model and Perspectives: The chosen strategic objectives are
spread across the four perspectives. In this design, the internal business process and learning and
growth perspectives were replaced by a single "activity" perspective, and the financial and
customer perspectives were replaced by a single "outcome" perspective (Cobbold & Lawrie, 2002;
Lawrie et al., 2004).

> Measures and Initiatives: Once objectives have been decided, measures can be
identified. Measures are introduced to support management's ability to monitor
the organization's progress towards achievement of its goals (Olve et al., 1999). Niven
(2002) reported that initiatives have a start and end date and are mapped to strategic
objectives. Third generation Balanced Scorecards showed material benefits to
organizations resulting from improved functionality as a strategic management tool. As a
result, they attained an enhanced ability to support a more flexible and engaging approach to planning
and development within complex organizations (Cobbold & Lawrie, 2002).


Criticisms of the Balanced Scorecard

There have been many criticisms of the Balanced Scorecard (Van Tassel, 1995; Dinesh &
Palmer, 1998; Norreklit, 2000; Bessire & Baker, 2004; Bourguignon et al., 2004; Bourne, 2008),
including the lack of a human relations dimension and the tendency to treat organizations as
mechanistic systems (Bessire & Baker, 2004; Dinesh & Palmer, 1998). Kaplan and
Norton's use of the jet plane as a metaphor to explain the behavior of organizations is also said to
be misleading (Othman, 2007). Davis and Albright (2004) reported that 77 percent of Balanced
Scorecard adopters in the United States of America failed to build up a causal model of their
strategy which is regarded as a central idea in the Balanced Scorecard. Related results are
depicted in studies on adoption of Balanced Scorecard in Finland, Austria, Malaysia and
Germany (Malmi, 2001; Othman, 2006; Speckbacher et al., 2003). The Balanced Scorecard
recommends a strategy map to provide a graphical illustration of the causal model of an
organization's strategy (Kaplan & Norton, 1996a; Mooraj et al., 1999; Norreklit, 2000). But some
researchers argue that there is no specific method to help organizations develop the causal model
of their strategy (Malmi, 2001; Speckbacher et al., 2003). Kaplan and Norton (1996a, 2001a)
discussed only large organizations and neglected small and medium-sized firms, giving
researchers a "one-size-fits-all" impression which may not be appropriate in all
conditions (Fernandez et al., 2005; Johanson et al., 2006). Meyer (2002) reported that the larger a
firm, the more difficult it is to implement a Balanced Scorecard and measure non-financial
indicators. The Balanced Scorecard offers good coverage of performance measure dimensions,
but provides no mechanism for building and maintaining the relevance of defined measures
(Hudson et al., 2001). The framework has also been criticized for being static and ignoring the
external environment (Atkinson et al., 1997; Brignall, 2002; Norreklit, 2003; Voelpel et al.,
2006). While taking into account the external environment, Kaplan and Norton (1996c, 2001b)
mention only customers. This is in conflict with Porter's (1980, 1985) five forces and neglects

the influences of suppliers, unions and others (Bontis et al., 1999; Halachmi, 2005). Functions
such as human resources and information technology play a major role in successful
implementation of Balanced Scorecard, yet Kaplan and Norton fail to adequately develop this
relationship (Halachmi, 2005; Papalexandris et al., 2005; Ismail, 2007). Since the Balanced
Scorecard is specific to companies and their specific competitive environments, it has been an
issue whether the Balanced Scorecard is truly a generalizable concept (Halachmi, 2005;
Papalexandris et al., 2005; Likierman, 2006). According to Meyer (2002), there are no
performance measures that are leading indicators, even though the Balanced Scorecard is based on
a balance of leading and lagging measures. This results in uncertainty about the reliability of
performance measures to predict future economic value. He also argued that because of the time
lag between causes and their effects, the time dimension should be considered in the Scorecard
map. 'Achieving Measurable Performance Improvement in a Changing World' (2001), KPMG's
performance measurement white paper, outlined several drawbacks to Kaplan and Norton's
Balanced Scorecard. The paper argued that the four perspectives of Balanced Scorecard are
limiting. It also highlighted the lack of consideration in the existing perspectives for knowledge
creation processes and intellectual capital. Although the Kaplan and Norton model is compact,
focusing on a limited number of strategic issues, a few organizations add a fifth perspective,
human resources, to the Balanced Scorecard. It helps these organizations focus on performance drivers
that originate from human capital. Firestone (2006) discussed highly visible challenges to the
Balanced Scorecard, such as disappointment among employees, lack of concentration, inadequate
measurement modeling and imperfect assessment research. Challenges to Balanced Scorecard
underlined by various researchers are discussed below. Table 3.1 summarizes the challenges to
Balanced Scorecard.
Dissatisfaction, Perceived Failure and Lack of Impact
Reviewers of Balanced Scorecard implementations have reported dissatisfaction,
perceived failure or lack of impact. Lewy and Dumee (1998) cited results of Lewy's survey
of Dutch companies which showed a management dissatisfaction rate of 70 percent.
Hendricks et al. (2004) studied 42 Canadian firms that adopted Balanced Scorecard but failed to
find sufficient longitudinal data to draw firm conclusions about post-intervention performance.
According to The Hackett Group's 2004 Finance Book of Numbers research, nearly two-thirds of
typical companies have a Balanced Scorecard in place or in development. But only 17 percent of
these developed Balanced Scorecards that rely on a mix of financial and operational metrics,
indicating the complexity of implementing the Balanced Scorecard in a large number of
organizations (Answerthink, 2004).
Lack of Concreteness in Strategic Targets
It has been recognized that the strategic vision formed as part of the scorecarding process was
too vague for people to understand strategic goals. In order to validate previous work done in
developing objectives for strategy maps, some scorecard practitioners responded to this issue
by having stakeholders construct destination statements, much more concrete
specifications of an organization's strategic vision (Cobbold & Lawrie, 2002; Lawrie & Cobbold,
2004). An even smaller number began to have stakeholders construct destination statements
prior to constructing strategy maps, modifying the Balanced Scorecard framework and deciding
on indicators as measures of objectives. Destination statements turned out to be the solution to
the problem of lack of concreteness in strategic targets, but this remedy has yet to be completely
adopted in Balanced Scorecard practice.
Conceptual Incompleteness

Since the early days of the Balanced Scorecard there have been questions about the adequacy of
the framework. Kaplan and Norton (1996) answered these queries by stating that the original
four perspectives do not represent a "straitjacket" that must be imposed. Niven (2003a, b)
offered examples of frequent revisions of the framework in specific areas covered in his
books. Cobbold (2004) underlined that public sector managers were happy to reduce the
four-perspective framework to a two-perspective 'activity' and 'outcome' framework. Presently
the four-perspective framework is used as a starting point in the Balanced Scorecard design
process. The Balanced Scorecard's four-perspective framework suggests an orientation to
thinking about the sorts of indicators that might be included in the Scorecard. If the framework is
unrepresentative, it may bias one's thinking even when destination statements are used to
structure the modeling and selection of objectives (Firestone, 2006).
Weakness in Measurement Modeling
One of the major issues in the Balanced Scorecard literature has always been regarding the
constraints to be placed on the quantity of indicators used in Balanced Scorecard systems.
Kaplan and Norton (1992a, 1996c) emphasized the idea that relatively few indicators should be
used initially. Firestone (1996) and Schneiderman (1999) laid
stress on the significance of controlling the number of indicators and argued that too many indicators
in a Balanced Scorecard are a major cause of failure in Balanced Scorecard interventions. The
limit on the number of indicators is an effort to address the problem of focus in the Balanced
Scorecard. It is associated with the idea that such systems ought to present a simple dashboard that
executives can use to drive the organization. Balanced Scorecard practitioners are found
to be committed to the idea that the dashboard they build for executives must be as economical
as possible in the number of indicators it includes.
Impact Modeling and Evaluation Research Weaknesses
The use of impact modeling to predict and measure the effects of Balanced Scorecard
interventions and of changes in strategies and policies on organizational performance is
still in its early stages (Ittner & Larcker, 1998; Malina & Selto, 2001; Salterio & Webb,
2003; Hendrick et al., 2004). To address this concern, Cavaleri and Sterman (1997) focused on
the use of system dynamics and statistical analysis, advocating that system dynamics models
be designed in advance in order to perform impact evaluation. Sloper et al. (1999) developed a dynamic
feedback framework for public sector performance management, specifying how system
dynamics and the Balanced Scorecard can be combined. Wolstenholme (1998) specified three
ways in which system dynamics could be used to develop Balanced Scorecard systems: it can
be used to model relationships between components of a strategic vision in strategy maps; it can
be used to develop dynamic relationships in sub-models; and it can be used to model
specific but still high-level relationships dealing with trade-offs among performance measures.

Chapter 4

Most companies will not have a single strategic plan; rather, they will have a number of integrated
strategies set at different levels (Doyle, 1998). Strategic planning is the process of determining
objectives (what you want to accomplish), deciding on strategies (how to accomplish those objectives),
and implementing the tactics (which make the plan come to life). This process occurs within a
specified timeframe (Wells et al., 2000). Three major levels of strategy dominate most large
organizations (Hutt et al., 2001):

1) Corporate strategy
2) Business- level strategy
3) Functional strategy

Business strategic planning is described as a three-tiered process. It starts with a business
strategic plan, then moves on to functional plans such as a marketing plan or a financial
plan, and ends with specific plans within each function. For marketing, then, the business might have
specific plans for advertising or product development (Wells et al., 2000). Marketing
communications strategy cannot exist in isolation from marketing strategy, which in turn is
directly linked to corporate strategy.