
UGC-NET MANAGEMENT

Contents
Instructions
Organizational behavior
Management Thoughts
Human resource management
Business statistics
Marketing environment and environment scanning
Corporate Strategy
Values and ethics in management
Corporate governance
    Fundamental and Ethics Theories of Corporate Governance
Production management
Financial management
    Different types of transactions in the Foreign Exchange Market
    Risk management
Cash management
    Cash management services generally offered
Inventory
    Inventory Management
    Business inventory
        The reasons for keeping stock
        Special terms used in dealing with inventory
        Typology
        Inventory examples
    Principle of inventory proportionality
        Purpose
        Applications
        Roots
    High-level inventory management
    Accounting for inventory
        Role of inventory accounting
        FIFO vs. LIFO accounting
        Standard cost accounting
        Theory of constraints cost accounting
    National accounts
    Distressed inventory
    Inventory credit
Cash conversion cycle
    Definition
        Derivation
Question bank
Job enrichment
    Contents
    Techniques
    What are Management Information Systems?
Advantages & Disadvantages Of Information Management Systems
    Advantages
    Better Planning and Control
    Aid Decision Making
    Disadvantages
    Constant Monitoring Issues

Instructions
SCHEME AND DATE OF TEST:

(i) The Test will consist of three papers. All the three papers will be held on 26th December,
2010 in two separate sessions as under:

Session     Paper     Marks     Duration
First       I         100       1¼ Hours (09.30 A.M. to 10.45 A.M.)
First       II        100       1¼ Hours (10.45 A.M. to 12.00 NOON)
Second      III       200       2½ Hours (01.30 P.M. to 04.00 P.M.)

Paper-I shall be of general nature, intended to assess the teaching/research aptitude of the
candidate. It will primarily be designed to test reasoning ability, comprehension, divergent
thinking and general awareness of the candidate. UGC has decided to provide choice to the
candidates from the December 2009 UGC-NET onwards. Sixty (60) multiple choice questions of
two marks each will be given, out of which the candidate would be required to answer any fifty
(50). In the event of the candidate attempting more than fifty questions, the first fifty questions
attempted by the candidate would be evaluated.
Paper-II shall consist of questions based on the subject selected by the candidate. Each of these
papers will consist of a Test Booklet containing 50 compulsory objective type questions of two
marks each.
The candidate will have to mark the responses for questions of Paper-I and Paper-II on the
Optical Mark Reader (OMR) sheet provided along with the Test Booklet. The detailed
instructions for filling up the OMR Sheet will be sent to the candidate along with the Admit
Card.
Paper-III will consist of only descriptive questions from the subject selected by the candidate.
The candidate will be required to attempt questions in the space provided in the Test Booklet.

The structure of Paper-III has been revised from June, 2010 UGC-NET and is available on the
UGC website www.ugc.ac.in.

Paper-III will be evaluated only for those candidates who are able to secure the minimum
qualifying marks in Paper-I and Paper-II, as per the following table:
MINIMUM QUALIFYING MARKS
CATEGORY       PAPER-I    PAPER-II    PAPER-I + PAPER-II
GENERAL        40         40          100 (50 %)
OBC/PH/VH      35         35          90 (45 %)
SC/ST          35         35          80 (40 %)

The minimum qualifying criteria for award of JRF are as follows:


MINIMUM QUALIFYING MARKS
CATEGORY       PAPER-I    PAPER-II    PAPER-I + PAPER-II    PAPER-III
GENERAL        40         40          100 (50 %)            100 (50 %)
OBC/PH/VH      35         35          90 (45 %)             90 (45 %)
SC/ST          35         35          80 (40 %)             80 (40 %)

However, the final qualifying criteria for Junior Research Fellowship (JRF) and Eligibility for
Lectureship shall be decided by UGC before declaration of result.
(ii) For Visually Handicapped (VH) candidates thirty minutes extra time shall be provided
separately for paper-I and Paper-II. For paper-III, forty five minutes extra time shall be
provided. They will also be provided the services of a scribe who would be a graduate in a
subject other than that of the candidate. Those Physically Handicapped (PH) candidates who
are not in a position to write in their own hand-writing can also avail these services by making

prior request (at least one week before the date of UGC-NET) in writing to the Co-ordinator of
the test centre. Extra time and facility of scribe would not be provided to other Physically
Handicapped candidates.
(iii) Syllabus of Test: Syllabi for all NET subjects can be downloaded from the UGC Website
www.ugc.ac.in and are also available in the libraries of all Indian universities. UGC will not send
the syllabus to individual candidates.
(iv) In Paper III, the candidate has the option to answer either in Hindi or in English in all
subjects except the languages where the candidate is required to write in the concerned
language only. In case of Computer Science & Applications, Electronic Science and
Environmental Sciences, the question papers have to be answered in English only.
(v) In case of any discrepancy found in the English and Hindi versions, the
questions in English version shall be taken as final.

Latest Structure of paper-III


To be implemented from June 2010 UGC-NET
Section-1:

Essay writing: two questions with internal choice may be given on general themes and on
topics of contemporary, theoretical or disciplinary relevance. The candidate is expected to
write up to 500 words for each question of 20 marks (2Q X 20 M = 40 marks). In case the
questions are based on electives, the choices should be of a general nature, common to all
candidates. In case of science subjects like Computer Science etc., two questions carrying
20 marks each may be given in place of essay type questions. The questions in this section
should be numbered as 1 and 2.

Section-2:

Three extended answer based questions to test the analytical ability of the candidates are
to be asked on the major specializations/electives. Questions will be asked on all major
specializations/electives and the candidates may be asked to choose one specialization/
elective and answer the three questions. There is to be no internal choice. Each question is
to be answered in up to 300 words and shall carry 15 marks (3Q X 15 M = 45 marks). Where
there is no specialization/elective, 3 questions may be set across the syllabus. The questions
in this section should be numbered 3 to 5.

Section-3:

Nine questions may be asked across the syllabus. The questions will be
definitional or seeking particular information and are to be answered in up to 50
words each. For Science subjects as mentioned in Section-1, short
numerical/computational problems may be considered. Each question will carry
10 marks (9Q X 10 M =90 Marks). There should be no internal choice. The
questions in this section should be numbered 6 to 14.

Section-4:

It requires the candidates to answer questions from a given text of around 200-300
words taken from the works of a known thinker/author. Five carefully considered
specific questions are to be asked on the given text, requiring an answer in up to
30 words each. This section carries 5 questions of 5 marks each (5Q X 5M =25
Marks). In the case of science subjects, a theoretical/numerical problem may be set. These
questions are meant to test critical thinking and the ability to comprehend and apply the
knowledge one possesses. Questions in this section should be numbered 15 to 19.

Section / Type of questions             Test of                                 No. of      Words per   Total    Marks per   Total
                                                                                questions   answer      words    question    marks
1. Essay                                Ability to dwell on a theme at          2           500         1000     20          40
                                        an optimum level
2. Three analytical/evaluative          Ability to reason and hold an           3           300         900      15          45
   questions                            argument on the given topic
3. Nine definitional/short answer       Ability to understand and               9           50          450      10          90
   questions                            express the same
4. Text based questions                 Critical thinking, ability to           5           30          150      5           25
                                        comprehend and formulate the concept
Total                                                                           19                      2500                 200

Here are some of the tips and techniques to score well in UGC NET examination.
Follow them at the best. Good Luck.
1. Writing skills matter a lot in the NET Examination. Most of the candidates appearing for the
NET examination have a lot of knowledge, but lack writing skills. You should be able to present
all the information/knowledge in a coherent and logical manner, as expected by the examiner.
For example: Quoting with facts and substantiating your answer with related concepts and
emphasizing your point of view.
2. Preparations for NET examination should be done intensively.
3. Prepare standard answers to the question papers of the previous years. This will also make
your task easier in the UGC examination.
4. Do not miss the concepts. Questions asked are of Master's level. Sometimes
the questions are conceptual in nature, aimed at testing the comprehension levels of the basic
concepts.
5. Get a list of standard textbooks from successful candidates or other sources, along with
selective good notes. The right choice of reading material is important and crucial. You should
not read all types of books just because others recommend them.
6. While studying the subjects, keep in mind that there is no scope for selective study in the
UGC examination. The whole syllabus must be covered thoroughly. Equal stress and weight
should be given to all the sections of the syllabus.
7. Note that in the ultimate analysis both subjects carry exactly the same amount of maximum
marks.
8. Go through the unsolved question papers of previous years and solve them to simulate the
atmosphere of the examination.
9. Stick to the time frame. Speed is the very essence of this examination. Hence, time
management assumes crucial importance.
10. For developing the writing skills, keep writing model answers while preparing for the NET
examination. This helps get into the habit of writing under time pressure in the Mains
examination.
11. Try not to exceed the word limit, as far as possible. Sticking to the word limit will save
time. Besides, the number of marks you achieve is not going to increase even if you exceed the
word limit. It's the quality that matters, not the quantity.

12. Highlight the important points.


13. Follow paragraph writing rather than essay form. A new point should start with a new
paragraph.
14. If the question needs answer in point format give it a bullet format.
15. Keep sufficient space between two lines.
16. Give space and divide it by a dividing line between two questions.
17. Above all, be patient and believe in yourself and in God.
*****************
Unit 1
*****************
managerial economics-demand analysis
production function
cost-output relations
market structures
pricing theories
advertising
macro-economics
national income concept
infrastructure-management and policy
business environment
capital budgeting
*****************
Unit 2
*****************
The concept and significance of organisational behaviour- skills and role in an organisation- classical, neo-classical and modern theories of organisational structure- organisational design- understanding and managing individual behaviour: personality, perception, values, attitudes, learning, motivation.
Understanding and managing group behaviour, processes- inter-personal and group dynamics- communication- leadership- managing change- managing conflicts. Organisational development.
*****************
Unit 3
*****************
Concepts and perspectives of HRM, HRM in a changing environment, Human resource planning- objectives, process and techniques.
Job analysis- job description
Select human resources
Induction, training and development
Exit policy and implications
performance appraisal and evaluation

Wage determination
Industrial relations and trade unions
Dispute resolution and grievance management
Labour welfare and social security measures
*****************
Unit 4
*****************
financial management-nature and scope
valuation concepts and valuation of securities
Capital budgeting decision-risk analysis
Capital structure and cost of capital
Dividend policies-determinants
Long term and short term financing instruments
mergers and acquisitions
*****************
Unit 5
*****************
Marketing environment and environment scanning, marketing information systems and
marketing research, understanding consumer and industrial markets, demand measurement and
forecasting, market segmentation-targeting and positioning, product decisions, product mix,
product life cycle, new product development, branding and packaging, pricing methods and
strategies.
Promotion decisions-promotion mix, advertising, personal selling, channel management, vertical
marketing system, evaluation and control of marketing effort, marketing of services, customer
relationship management.
Uses of internet as a marketing medium- other related issues like branding, market development,
advertising and retailing on the net.
New issues in marketing.
*****************
Unit 6
*****************
Role and scope of production management, facility location, layout planning and analysis,
production planning and control- production process analysis, demand forecasting for operations,
determinants of product mix, production scheduling, work measurement, time and motion study,
statistical quality control.
role and scope of operations research, linear programming, sensitivity analysis, transportation
model, inventory control, queuing theory, decision theory, markov analysis, PERT/CPM
*****************
Unit 7
*****************
Probability theory, probability distributions- binomial, Poisson, normal and exponential,
correlation and regression analysis, sampling theory, sampling distributions, tests of hypothesis,
large and small samples, t, z, F, chi-square tests.
use of computers in managerial applications, technological issues and data processing in
organisations.
Information systems, MIS and decision making, system analysis and design, trends in
information technology, Internet and internet-based applications.

*****************
Unit 8
*****************
Concept of corporate strategy, components of strategy formulation, Ansoff's growth vector, BCG
model, Porter's generic strategies, competitor analysis, strategic dimensions and group mapping,
industry analysis, strategies in industry evolution, fragmentation, maturity and decline,
competitive strategy and corporate strategy, transnationalisation of the world economy,
managing cultural diversity, global entry strategies, globalisation of the financial system
and services, managing international business, competitive advantage of nations, RTP and WTO.
*****************
Unit 9
*****************
Concepts- types, characteristics, motivation, competencies and their development, innovation and
entrepreneurship, small business- concepts.
Government policy for promotion of small and tiny enterprises, process of business opportunity
identification, detailed business plan preparation, managing small enterprises, planning for
growth, sickness in small enterprises, rehabilitation of sick enterprises,
intrapreneurship (organisational entrepreneurship).
*****************
Unit 10
*****************
Ethics and management systems, ethical issues and analysis in management, value based
organisations, personal framework for ethical choices, ethical pressure on individuals in
organisations, gender issues, ecological consciousness, environmental ethics, social
responsibilities of business, corporate governance and ethics.
************************
Elective-I
************************
Human Resource Management (HRM) - Significance; Objectives; functions; a diagnostic model;
External and Internal environment
Forces and Influences; organizing HRM function
Recruitment and selection-sources of recruits; recruiting methods; selection procedure; selection
tests; Placement and follow-up.
Performance appraisal system-importance and objectives; techniques of appraisal system; new
trends in appraisal system.
Development of personnel- objectives; determining needs; methods of training and development
programs; evaluation.
Career planning and development-concept of career; career planning and development methods.
Compensation and benefits- job evaluation techniques; wage and salary administration; fringe
benefits; human resource records and audit.

Employee discipline- importance; causes and forms; disciplinary action; domestic enquiry.
Grievance management- importance; process and practices; employee welfare and social security
measures.
Industrial relations- importance; industrial conflicts; causes; dispute settlement machinery
Trade union- importance of unionism; union leadership; national trade union movement
Collective bargaining- concept; process; pre-requisite; new trends in collective bargaining
Industrial democracy and employee participation- need for industrial democracy; pre-requisite
for industrial democracy; employee participation objectives; forms of employee participation.
Future of Human Resource Management.
************************
Elective-II
************************
Marketing-Concepts; Nature and scope; Marketing myopia; Marketing mix; Different
environments and their influences on marketing; understanding the customer and competition.
Role and relevance of Segmentation and positioning; Static and dynamic understanding of BCG
matrix and Product Life Cycle; Brands-Meaning and role; Brand building strategies; Share
increase strategies.
Pricing objectives; pricing concepts; Pricing methods
Product- Basic and augmented stages in new product development
Test marketing concepts
Promotion mix- Role and relevance of advertising
Sales promotion- media planning and management
Advertising- Planning, execution and evaluation
Different tools used in sales promotion and their specific advantages and limitations
Public relations- concept and relevance
Distribution channel hierarchy; role of each member in the channel; Analysis of business
potential and evaluation of performance of the channel members
Wholesaling and retailing- Different formats and the strength of each one; Emerging issues in
different formats of retailing in India

Marketing research- Sources of information; Data collection; Basic tools used in data analysis;
structuring a research report
Marketing to organisations- Segmentation models; Buyer behavior models; Organisational buying
process
Consumer behavior theories and models and their specific relevance to marketing managers
Sales function- Role of technology in sales function automation
Customer relationship management including the concept of Relationship marketing
Use of internet as a medium of marketing; Managerial issues in researching consumer/
organization through internet
Structuring and managing marketing organizations, Export
Marketing- Indian and global context
************************
Elective-III
************************
Nature and scope of financial management
valuation concepts-risk and return, valuation of securities, pricing theories- capital asset pricing
model and arbitrage pricing theory
Understanding financial statements and analysis thereof
Capital budgeting decision, risk analysis in capital budgeting and long-term sources of finance
Capital structure- theories and factors, cost of capital
Dividend policies -theories and determinants
Working capital management- determinants and financing, cash management, inventory
management, receivables management
Elements of derivatives
Corporate risk management
mergers and acquisitions
International financial management
************************
Elective-IV
************************
India's Foreign Trade and Policy; Export promotion policies; Trade agreements with other
countries; Policy and performance of Export zones and Export-oriented units; Export incentives.
International marketing logistics; International logistic structures; Export Documentation
framework; Organisation of shipping services; Chartering practices; Marine cargo insurance.

International financial environment; Foreign exchange markets; Determination of exchange


rates; Exchange risk measurement; International investment; International capital markets;
International credit agencies and implications of their ratings.
WTO and Multilateral trade agreements pertaining to trade in goods; trade in services and
TRIPS; Multilateral environment agreements(MEAs); International Trade Blocks- NAFTA,
ASEAN, SAARC, EU, WTO and Dispute Settlement Mechanism.
Technology Monitoring; Emerging opportunities for global business.

Managerial economics
Definition of managerial economics
Nature and characteristics of managerial economics
Scope of managerial economics
Difference between managerial economics and economics
Economic tools used in managerial economics
Decision criteria
Managerial economics is the study of economic theories, logic and methodology
which are generally applied to seek solutions to the practical problems of
business.
Nature and characteristics of managerial economics
Scope of managerial economics
Demand analysis
Demand curve
Demand schedule
Elasticity of demand
Demand forecasting
Relationship between Average and Marginal cost:

When the average cost and marginal cost curves are drawn together, it will be seen that so
long as the average cost is falling, the marginal cost is less than the average cost. In the
same way, when the average cost is rising, the marginal cost is above the average cost.

The marginal-average relation is a mathematical truism which holds under all conditions. We
can understand it easily with the help of an example. Suppose the batting average of a batsman
is 60 after the first innings. If he scores more than 60 in the next innings, his average
rises; if he scores less than 60, his average falls. In the same way, when marginal cost is
below average cost it pulls the average down, and when it is above average cost it pushes the
average up.
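The same relation can be stated formally. A minimal derivation (not in the original notes),
writing total cost as C(q), average cost as AC(q) = C(q)/q and marginal cost as MC(q) = C'(q):

\[
\frac{d}{dq}\,AC(q) \;=\; \frac{d}{dq}\!\left(\frac{C(q)}{q}\right)
\;=\; \frac{C'(q)\,q - C(q)}{q^{2}}
\;=\; \frac{MC(q) - AC(q)}{q}.
\]

So average cost is falling exactly when MC < AC and rising exactly when MC > AC, which is what
the curves and the batting-average example illustrate.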

Market structure              Seller entry barrier    Seller number    Buyer entry barrier    Buyer number
Perfect competition           No                      Many             No                     Many
Monopolistic competition      No                      Many             No                     Many
Oligopoly                     Yes                     Few              No                     Many
Monopoly                      Yes                     One              No                     Many
Monopsony                     No                      Many             Yes                    One

Market structure                                  Perfect competition    Monopolistic competition    Monopoly
Goal of firms                                     Maximize profit        Maximize profit             Maximize profit
Rule for maximizing                               MR = MC                MR = MC                     MR = MC
Can earn economic profit in the short run?        Yes                    Yes                         Yes
Price takers?                                     Yes                    No                          No
Price                                             P = MC                 P > MC                      P > MC
Produces welfare-maximizing level of output?      Yes                    No                          No
Number of firms                                   Many                   Many                        One
Entry in long run?                                Yes                    Yes                         No
Can earn economic profit in long run?             No                     No                          Yes

Features that all three market structures share: profit maximization using the rule MR = MC,
and the ability to earn economic profit in the short run.
Features that monopolistic competition shares with monopoly: firms are not price takers, price
exceeds marginal cost, and output is below the welfare-maximizing level.
Features that monopolistic competition shares with perfect competition: many firms, entry in
the long run, and no economic profit in the long run.

Cost and revenue schedule under perfect competition (market price = 6):

quantity   total   average   marginal   average   total     marginal   profit   change in
           cost    cost      cost       revenue   revenue   revenue             profit
0          3       -         -          -         0         -          -3       -
1          5       5         2          6         6         6          1        4
2          8       4         3          6         12        6          4        3
3          12      4         4          6         18        6          6        2
4          17      4.25      5          6         24        6          7        1
5          23      4.6       6          6         30        6          7        0
6          30      5         7          6         36        6          6        -1
7          38      5.43      8          6         42        6          4        -2
8          47      5.875     9          6         48        6          1        -3

Cost and revenue schedule under monopoly (price falls as output rises):

quantity   total   average   marginal   price     total     marginal   profit   change in
           cost    cost      cost       (AR)      revenue   revenue             profit
0          3       -         -          -         0         -          -3       -
1          5       5         2          10        10        10         5        8
2          8       4         3          9         18        8          10       5
3          12      4         4          8         24        6          12       2
4          17      4.25      5          7         28        4          11       -1
5          23      4.6       6          6         30        2          7        -4
6          30      5         7          5         30        0          0        -7
7          38      5.43      8          4         28        -2         -10      -10
8          47      5.875     9          3         24        -4         -23      -13
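The two schedules above can be used to check the MR = MC rule numerically. Below is a small
illustrative script (not part of the original notes); the cost figures and the monopoly demand
pattern are taken from the tables, and the function name is ours:

# Finding the profit-maximizing output for the two schedules tabulated above.
total_cost = [3, 5, 8, 12, 17, 23, 30, 38, 47]        # total cost for q = 0..8

# Perfect competition: the firm sells every unit at the market price of 6.
pc_revenue = [6 * q for q in range(9)]
# Monopoly: price falls from 10 to 3 as output rises from 1 to 8.
mono_revenue = [0] + [(11 - q) * q for q in range(1, 9)]

def best_output(revenue, cost):
    """Return (quantity, profit) giving the highest profit in the schedule."""
    profits = [r - c for r, c in zip(revenue, cost)]
    q = max(range(len(profits)), key=lambda i: profits[i])
    return q, profits[q]

print(best_output(pc_revenue, total_cost))    # (4, 7): profit also equals 7 at q = 5
print(best_output(mono_revenue, total_cost))  # (3, 12): the monopolist restricts output

In both cases the best output is the last quantity at which marginal revenue still covers
marginal cost, which is the MR = MC rule from the comparison table.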

Macro-Economics
Macro-economics is also known as the theory of income and employment or
simply income analysis. It is concerned with the problems of unemployment,
economic fluctuations, inflation or deflation, international trade and economic
growth.
Macro-economics is the study of aggregates or averages covering the entire economy, such as
total employment, national income, national output, total investment, total consumption and
total saving; aggregate supply and aggregate demand; and the general price level, wage level
and cost structure.
In other words, it is aggregate economics which examines the inter-relations among the various
aggregates, their determination and the causes of fluctuations in them. Thus, in the words of
Professor Ackley, "Macro-economics deals with economic affairs in the large; it concerns the
overall dimensions of economic life. It looks at the total size and shape and functioning of
the elephant of economic experience, rather than the working or articulation or dimensions of
the individual parts. It studies the character of the forest independently of the trees which
compose it."
Scope and Importance of Macro-economics:
As a method of economic analysis, macro-economics is of much theoretical and practical
importance:
1. To understand the working of the economy
2. In economic policies
3. In general employment
4. In national income
5. In economic growth
6. In monetary problems
7. In business cycles
8. For understanding the behavior of individual units

Limitations of Macro-economics:
1. Fallacy of composition
2. To regard the aggregate as homogeneous
3. Aggregate variables may not necessarily be important
4. Indiscriminate use of macro-economics can be misleading
5. Statistical and conceptual difficulties

Capital Budgeting
Meaning
Capital budgeting decisions pertain to fixed/long-term assets which, by definition, are in
operation and yield a return over a period of time, usually exceeding one year. They therefore
involve a current outlay, or a series of outlays, of cash resources in return for an anticipated
flow of future benefits. In other words, the system of capital budgeting is employed to evaluate
expenditure decisions which involve current outlays but are likely to produce benefits over a
period of time longer than one year. These benefits may be either in the form of increased
revenues or reduced costs. Capital expenditure management, therefore, includes the addition,
disposition, modification and replacement of fixed assets. The features of capital budgeting
are as follows:
1. Potentially large anticipated benefits
2. A relatively high degree of risk
3. A relatively long time period between the initial outlay and the anticipated returns

The term capital budgeting is used interchangeably with capital expenditure decision, capital
expenditure management, long-term investment decision, management of fixed assets and so on.
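Because these decisions trade a current outlay against benefits spread over several years, they
are normally appraised with discounted cash flow techniques such as net present value (NPV).
The following is a minimal sketch, not from the original notes; the cash flows and the 10%
discount rate are illustrative assumptions:

def npv(rate, cash_flows):
    """Net present value: cash_flows[0] is the outlay at time 0 (negative),
    later entries are the anticipated yearly benefits."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: outlay of 100 now, benefits of 40 a year for 4 years.
project = [-100, 40, 40, 40, 40]
print(round(npv(0.10, project), 2))   # 26.79 -> positive NPV, the project adds value

A positive NPV means the discounted future benefits exceed the current outlay, which is the
basic accept/reject signal in capital budgeting.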
Importance of Capital Budgeting

Capital budgeting decisions affect the profitability of a firm and determine its future
destiny. A few wrong decisions and the firm may be forced into bankruptcy.

BUSINESS CYCLE
Business Cycle refers to fluctuations in economic activity. It is also known as the economic
cycle. The business cycle has four distinct phases that revolve around its long-term growth
trend:
- Contraction: a slowdown in economic activity.
- Trough: the lower turning point of the business cycle, where contraction shifts to expansion.
- Expansion: growth in economic activity.
- Peak: the upper turning point of the business cycle.
There are four phases of the business cycle:
1. Peak/boom
2. Recession
3. Trough
4. Recovery
Let's briefly discuss each phase now:
1. Peak/Boom: This is the stage when the business activity is at its
maximum, although this level of activity is temporary.
2. Recession: After operating at maximum activity, the business goes into
the recession phase. This phase witnesses a decrease in total output,
employment and trade. Recession may last for about 6 months or more.
3. Trough: At this stage, output and employment are at their lowest. This is
also referred to as the stage of depression. This stage may be short term
or may be long term depending on circumstances and market conditions.
4. Recovery: The recovery stage, as the name suggests is the rise in output,
employment and trade after the depression stage. The employment levels
increase till maximum employment is reached.
These stages are often depicted as a graph. The graph would look like a wave.
The peak of the wave is the boom phase, the decreasing slope is recession, the
rock bottom of the wave is the trough/depression, and recovery phase is shown
as the increasing slope after the trough. The vertical axis measures real output
and the horizontal axis measures time.
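A quick way to reproduce that wave-shaped picture is sketched below (illustrative only, not from
the original notes; the trend and cycle numbers are arbitrary assumptions):

import numpy as np
import matplotlib.pyplot as plt

# Real output fluctuating as a wave around a long-term growth trend:
# rising slope = expansion/recovery, top = peak/boom,
# falling slope = recession/contraction, bottom = trough/depression.
t = np.linspace(0, 12, 500)              # horizontal axis: time
trend = 100 + 2 * t                      # long-term growth trend
cycle = 8 * np.sin(2 * np.pi * t / 6)    # recurring wave around the trend
plt.plot(t, trend + cycle, label="Real output")
plt.plot(t, trend, "--", label="Long-term trend")
plt.xlabel("Time")
plt.ylabel("Real output")
plt.legend()
plt.show()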

Organizational behavior
Organizational theory is a set of interrelated constructs (concepts), definitions and
propositions that present a systematic view of behavior of individuals, groups, and subgroups
interacting in some relatively patterned sequence of activity, the intent of which is goal
directed.
Some important organizational theories are:
(a) Classical theory
    a. Scientific management theory
    b. Administrative theory
(b) Neo-classical theory
(c) Modern theory
Concept of organizational behavior
Role of organizational behavior
Organizational theories
Appraisal of Classical theory
Systems approach
Contingency or situational approach
*************
Organisational structure
Mechanism of designing structure
Departmentation
Choosing a basis of departmentation
Span of management
Delegation of authority
Centralisation and decentralization
*************
Personality
Determinants of personality
Personality and behavior
Organisational applications of personality
Personality:
Personality is the dynamic organization within the individual of those psychological systems
that determine his unique adjustments to his environment.

Personality theories:

Psychoanalytic theory:
The Id, the Ego and the Superego are the parts of the personality.
The Id:
The Id proceeds unchecked to satisfy life instincts and death instincts.
The Ego:
The Ego keeps the Id in check through the realities of the external environment, through
intellect and reason.
Socio-psychological theory
Social variables and not the biological instincts, are the important determinant in
shaping personality. There is an interaction between the society and the
individual.
Trait theory
The trait approach to personality is one of the major theoretical areas in the
study of personality. The trait theory suggests that individual personalities are
composed of broad dispositions. Consider how you would describe the personality
of a close friend. Chances are that you would list a number of traits, such as
outgoing, kind and even-tempered. A trait can be thought of as a relatively
stable characteristic that causes individuals to behave in certain ways.
Unlike many other theories of personality, such as psychoanalytic or humanistic
theories, the trait approach to personality is focused on differences between
individuals. The combination and interaction of various traits forms a personality
that is unique to each individual. Trait theory is focused on identifying and
measuring these individual personality characteristics.
Gordon Allport's Trait Theory
In 1936, psychologist Gordon Allport found that one English-language dictionary alone contained
more than 4,000 words describing different personality traits. He categorized these traits into
three levels:
* Cardinal Traits: Traits that dominate an individual's whole life, often to the point that the
person becomes known specifically for these traits. People with such personalities often become
so known for these traits that their names are often synonymous with these qualities. Consider
the origin and meaning of the following descriptive terms: Freudian, Machiavellian, narcissism,
Don Juan, Christ-like, etc. Allport suggested that cardinal traits are rare and tend to develop
later in life.
* Central Traits: These are the general characteristics that form the basic
foundations of personality. These central traits, while not as dominating as
cardinal traits, are the major characteristics you might use to describe another
person. Terms such as intelligent, honest, shy and anxious are considered
central traits.
* Secondary Traits: These are the traits that are sometimes related to
attitudes or preferences and often appear only in certain situations or under
specific circumstances. Some examples would be getting anxious when speaking
to a group or impatient while waiting in line.
Raymond Cattell's Sixteen Personality Factor Questionnaire
Trait theorist Raymond Cattell reduced the number of main personality traits
from Allport's initial list of over 4,000 down to 171, mostly by eliminating

uncommon traits and combining common characteristics. Next, Cattell rated a


large sample of individuals for these 171 different traits. Then, using a statistical
technique known as factor analysis, he identified closely related terms and
eventually reduced his list to just 16 key personality traits. According to Cattell,
these 16 traits are the source of all human personality. He also developed one of
the most widely used personality assessments known as the Sixteen Personality
Factor Questionnaire (16PF).
Eysenck's Three Dimensions of Personality
British psychologist Hans Eysenck developed a model of personality based upon
just three universal traits:
1. Introversion/Extraversion:
Introversion involves directing attention on inner experiences, while
extraversion relates to focusing attention outward on other people and the
environment. So, a person high in introversion might be quiet and reserved,
while an individual high in extraversion might be sociable and outgoing.
2. Neuroticism/Emotional Stability:
This dimension of Eysenck's trait theory is related to moodiness versus even-temperedness.
Neuroticism refers to an individual's tendency to become upset or emotional, while stability
refers to the tendency to remain emotionally constant.
3. Psychoticism:
Later, after studying individuals suffering from mental illness, Eysenck added
a personality dimension he called psychoticism to his trait theory. Individuals
who are high on this trait tend to have difficulty dealing with reality and may be
antisocial, hostile, non-empathetic and manipulative.
The Five-Factor Theory of Personality
Both Cattell's and Eysenck's theories have been the subject of considerable research, which has
led some theorists to believe that Cattell focused on too many traits, while Eysenck focused on
too few. As a result, a new trait theory often referred to as the "Big Five" theory emerged.
This five-factor model of personality represents five core traits that interact to form human
personality.
While researchers often disagree about the exact labels for each dimension, the
following are described most commonly:
1. Extraversion
2. Agreeableness
3. Conscientiousness
4. Neuroticism
5. Openness
Assessing the Trait Approach to Personality

While most agree that people can be described based upon their personality
traits, theorists continue to debate the number of basic traits that make up
human personality. While trait theory has an objectivity that some personality
theories lack (such as Freud's psychoanalytic theory), it also has weaknesses.
Some of the most common criticisms of trait theory center on the fact that traits
are often poor predictors of behavior. While an individual may score high on
assessments of a specific trait, he or she may not always behave that way in
every situation. Another problem is that trait theories do not address how or why
individual differences in personality develop or emerge.
Self theory
Self-concept: The composite of ideas, feelings, and attitudes that a person has about his or
her own identity, worth, capabilities, and limitations. A person's self-concept gives him a
sense of meaningfulness and consistency.
There are four factors in self-concept.
Self-image: A person's self image is the mental picture, generally of a kind that
is quite resistant to change, that depicts not only details that are potentially
available to objective investigation by others (height, weight, hair color, gender,
I.Q. score, etc.), but also items that have been learned by that person about
himself or herself, either from personal experiences or by internalizing the
judgments of others.
In short, self-image is the way one sees oneself.
Ideal-self: The ideal self denotes the way one would like to be.
Looking Glass-self: The perception of a person about how others are
perceiving his qualities and characteristics.
Real-Self: The real-self is what one really is.
Determinants of personality:
- Biological factors
- Family and social factors
- Individual and group factors
- Cultural factors
- Situational factors

Personality and behavior


Self-concept and Self-esteem
Self-concept is the way individuals define themselves as to who they are and
derive their sense of identity. Self-esteem denotes the extent to which they consistently
regard themselves as capable, successful, important and worthy individuals.
Need pattern
Achievement, affiliation, autonomy, dominance.
Machiavellianism refers to the manipulation of others as a primary way of achieving one's
goals.
Locus of control means whether people believe that they are in control of
events or events control them.
Type A and B personality
Type A people always feel a sense of time urgency, are highly achievement-oriented, exhibit a
competitive drive and are impatient when their work is slowed down for any reason.
Type B people are easy going, do not have urgency for time and do not
experience the competitive drive.
Tolerance of ambiguity
Introversion and extroversion
Work-ethics orientation
Organizational application of personality:
1. Matching jobs and individuals
2. Designing motivation system
3. Designing control system

*************
Perception
Perception is defined as a process by which individuals organize and interpret their sensory
impressions in order to give meaning to their environment.
perceptual process
Perceptual selectivity
Perceptual organization
Interpersonal perception
Managerial application of perception

*************
Attitudes
Attitude is the persistent tendency to feel and behave in a favourable or unfavourable way
towards some object, person or idea.
Concepts of attitudes
Attitudes and values
Theories of attitude formation
Factors in attitude formation
Attitude measurement
Attitude change
Methods of attitude change
*************
Values
Values are global beliefs that guide actions and judgment across a variety of situations.
Values and attitudes
Values and behavior
Factors in value formation
Types of values
*************
Learning

Learning is a relatively enduring change in behavior brought about as a consequence of
experience.
Learning theories
conditioning theories
Cognitive learning theory
Social learning theory
Integrating various learning theories
Reinforcement
Types of reinforcement
Administering reinforcement

*************
Organizational behavior modification

*************
Motivation
Motivation refers to the way in which urges, drives, desires, aspirations, strivings or needs
direct, control or explain the behavior of human beings.
Motivation and behavior
Motivation

The term motivation comes from the Latin word "movere", which means "to move".
Motivation as the base-building block of human action has been studied
extensively. Studies on motivation broadly refer to two areas:
(a) Motivating self, and
(b) Motivating others
The concept motive refers to the purpose underlying all goal directed actions.
All motives, however, may not be equally important in the context of a goal. Some
actions arise from biological or physiological needs, over which people do not
have much control. Such motives are common to the entire animal kingdom. But
there are certain crucial and other higher order needs which are common to
human beings. The distinctly human motives are largely unrelated to biological
and survival needs. These are related to feelings of self esteem, competency,
social acceptance, etc.
Psychologists have defined the term motivation as:

- The immediate influence on the direction, vigor and persistence of action;
- The process of arousing action, sustaining the activity in progress and regulating the
  pattern of activity;
- An inner state that energizes activities and directs or channels behavior towards goals;
- How behavior gets started, is energized, is sustained, is directed, is stopped, and what
  kinds of subjective reactions are present in the organism while all this is going on;
- Steering one's actions towards certain goals and committing a certain part of one's
  energies to reaching them.

Motivation Process
Motivation is essentially a process. It may be illustrated with the help of a
generalized model.

Inner state of disequilibrium (need, desire or expectancy, accompanied by anticipation)
-> Behavior or action -> Incentive or goal -> Modification of inner state (feedback)

Fig.: Motivation process


The important aspects of the model are:
1. Needs or expectations
2. Behavior
3. Goal, and
4. Some form of feedback

Achievement motivation
Achievement motivation is also termed "n-Ach", the need to achieve, or in common parlance the
urge to improve. If a man spends his time thinking about doing his job better, accomplishing
something unusual and important or advancing his career, the psychologist says he has a high
need for achievement. He thinks not only about the achievement goal, but also about how it can
be attained, what obstacles or blocks might be encountered, and how he would overcome the
obstacles in achieving his goal.

Attitude Formation
In Social Psychology attitudes are defined as positive or negative evaluations of
objects of thought. Attitudes typically have three components.

- The cognitive component is made up of the thoughts and beliefs people hold about the object
  of the attitude.
- The affective component consists of the emotional feelings stimulated by the object of the
  attitude.
- The behavioral component consists of predispositions to act in certain ways toward an
  attitude object.
The object of an attitude can be anything people have opinions about. Therefore,
individual people, groups of people, institutions, products, social trends,
consumer products, etc. all can be attitudinal objects.

Attitudes involve social judgments. They are either for or against, pro or con, positive or
negative; however, it is possible to be ambivalent about the attitudinal object and have a mix
of positive and negative feelings and thoughts about it.
Attitudes involve a readiness (or predisposition) to respond; however, for a variety of reasons
we don't always act on our attitudes.
Attitudes vary along dimensions of strength and accessibility. Strong
attitudes are very important to the individual and tend to be durable and
have a powerful impact on behavior, whereas weak attitudes are not very
important and have little impact. Accessible attitudes come to mind
quickly, whereas other attitudes may rarely be noticed.
Attitudes tend to be stable over time, but a number of factors can cause
attitudes to change.
Stereotypes are widely held beliefs that people have certain
characteristics because of their membership in a particular group.
A prejudice is an arbitrary belief, or feeling, directed toward a group of
people or its individual members. Prejudices can be either positive or
negative; however, the term is usually used to refer to a negative attitude
held toward members of a group. Prejudice may lead to discrimination,
which involves behaving differently, usually unfairly, toward the members
of a group.

Psychological factors involved in Attitude Formation and Attitude Change
1. Direct Instruction involves being told what attitudes to have by
parents, schools, community organizations, religious doctrine,
friends, etc.
2. Operant Conditioning is a simple form of learning. It is based on
the Law of Effect and involves voluntary responses. Behaviors
(including verbal behaviors and maybe even thoughts) tend to be
repeated if they are reinforced (i.e., followed by a positive
experience). Conversely, behaviors tend to be stopped when they
are punished (i.e., followed by an unpleasant experience). Thus, if
one expresses, or acts out, an attitude toward some group, and this
is reinforced by one's peers, the attitude is strengthened and is

likely to be expressed again. The reinforcement can be as subtle as


a smile or as obvious as a raise in salary. Operant conditioning is
especially involved with the behavioral component of attitudes.
3. Classical conditioning is another simple form of learning. It
involves involuntary responses and is acquired through the pairing
of two stimuli. Two events that repeatedly occur close together in
time become fused and before long the person responds in the
same way to both events. Originally studied by Pavlov, the process
requires an unconditioned stimulus (UCS) that produces an
involuntary (reflexive) response (UCR). If a neutral stimulus (NS) is
paired, either very dramatically on one occasion, or repeatedly for
several acquisition trials, the neutral stimulus will lead to the same
response elicited by the unconditioned stimulus. At this point the
stimulus is no longer neutral and so is referred to as a conditioned
stimulus (CS) and the response has now become a learned response
and so is referred to as a conditioned response (CR). In Pavlov's
research the UCS was meat powder, which led to a UCR of
salivation. The NS was a bell. At first the bell elicited no response
from the dog, but eventually the bell alone caused the dog to
salivate.
Advertisers create positive attitudes towards their
products by presenting attractive models in their ads. In this case
the model is the UCS and our reaction to him, or her, is an
automatic positive response. The product is the original NS which
through pairing comes to elicit a positive conditioned response. In a
similar fashion, pleasant or unpleasant experiences with members
of a particular group could lead to positive or negative attitudes
toward that group. Classical conditioning is especially involved with
the emotional, or affective, component of attitudes.
4. Social (Observational) Learning is based on modeling. We
observe others. If they are getting reinforced for certain behaviors
or the expression of certain attitudes, this serves as vicarious
reinforcement and makes it more likely that we, too, will behave in
this manner or express this attitude.
Classical conditioning can
also occur vicariously through observation of others.
5. Cognitive Dissonance exists when related cognitions, feelings or
behaviors are inconsistent or contradictory. Cognitive dissonance
creates an unpleasant state of tension that motivates people to
reduce their dissonance by changing their cognitions, feeling, or
behaviors. For example, a person who starts out with a negative
attitude toward marijuana will experience cognitive dissonance if
they start smoking marijuana and find themselves enjoying the
experience.
The dissonance they experience is thus likely to
motivate them to either change their attitude toward marijuana, or
to stop using marijuana. This process can be conscious, but often
occurs without conscious awareness.
6. Unconscious Motivation. Some attitudes are held because they
serve some unconscious function for an individual. For example, a
person who is threatened by his homosexual feelings may employ
the defense mechanism of reaction formation and become a
crusader against homosexuals. Or, someone who feels inferior may
feel somewhat better by putting down a group other than her own.
Because it is unconscious, the person will not be aware of the
unconscious motivation at the time it is operative, but may become
aware of it at some later point in time.
7. Rational Analysis involves the careful weighing of evidence for,
and against, a particular attitude. For example, a person may
carefully listen to the presidential debates and read opinions of

political experts in order to decide which candidate to vote for in an


election.
Consumer Attitude Formation / Change

Attitude Formation / Change: Importance to Marketers

As consumers, we have a wide variety of products and services to choose from


when purchasing almost anything. Our attitudes towards certain products,
services, brands, or advertisements can and do affect whether or not we will
purchase a certain product or service. Attitudes will also affect whether or not we
become a loyal customer and whether or not we recommend it to a friend. As
marketers, it is important to understand how an attitude is formed and whether there is
any way of changing consumers' attitudes.

Attitude Formation / Change: What is attitude?

What is an attitude exactly? It is defined as a learned predisposition to behave
in a consistently favorable or unfavorable way with respect to a given object.
There are two different types of models that have been found to explain
consumers' attitudes:

Attitude Formation / Change: 1st is Tricomponent

The first one is the tricomponent attitude model. It is comprised of three components: the
Cognitive component, the Affective component, and the Conative component. The Cognitive
component is made up of knowledge and perceptions that are acquired through direct experience
with an object and related information from other sources. The Affective component is one's
emotions or feelings about a particular product or brand. The Conative component is the
likelihood that the consumer will take an action or behave in a certain way.
Attitude Formation / Change: 2nd Model is Multiattribute

There are also multiattribute attitude models. The first one is the attitude-toward-object
model. This is when one's attitude toward a product or brand is a function of the presence, or
absence, and evaluation of certain product-specific beliefs and/or attitudes. The second is the
attitude-toward-behavior model. This is when the individual's attitude toward behaving or acting
with respect to an object, rather than the attitude toward the object itself, corresponds more
closely to actual behavior than does the attitude-toward-object model. The theory of reasoned
action model is a comprehensive integration of attitude components designed to lead to both
better explanation and better predictions of behavior. It incorporates subjective norms that
influence intention. This assesses normative beliefs attributed to others and motivation to
comply with others.
Attitude Formation / Change: Theory Of Trying to Consume and Attitude towards
ad

Two last models were formed to look at consumers' attitudes from a different perspective. There
is the theory of trying-to-consume model, which reflects instances in which the action or
outcome is not certain but instead reflects the consumer's attempts to consume. And our final
model is the attitude-toward-the-ad model, in which the consumer sees an ad and forms certain
feelings and judgments as a result of the ad. These feelings and judgments in turn affect the
consumer's attitude toward the ad and beliefs about the brand. Then these two things combined
influence his or her attitude toward the brand.
Attitude Formation / Change: Attitudes are Learned

Attitudes are learned. This learning process is the shift from having no attitude
about a product to having an attitude. For example, new technology is always
coming out, and until something is invented we have no attitudes toward it. An
attitude can follow the purchase or consumption of a product or it can come
before the purchase, perhaps from something as simple as viewing an
advertisement for that product. Things that may influence one's attitude are personal
experience, influence of family and friends, direct marketing, mass media, and the Internet.
Attitudes that have been formed from direct experiences are more confidently held, and
therefore stronger, than attitudes formed from an indirect experience. A consumer's personality
will also have an effect on how they perceive an advertisement. People with a high need for
cognition enjoy lots of product information, whereas those low in need for cognition respond
better to celebrities or attractive models.
Attitude Formation / Change: Methods used to change attitudes

Consumers' attitudes can be changed, however. There are five methods for attempting to alter
the attitudes of consumers. They are: (1) changing the consumer's basic motivational function,
(2) associating the product with an admired group or event, (3) resolving two conflicting
attitudes, (4) altering components of the multiattribute model, and (5) changing consumer
beliefs about competitors' brands.

Attitude Formation / Change: Functional Approach to Change

The functional approach to changing attitudes says that there are four classifications of
attitudes. They are the utilitarian function, the ego-defensive function, the value-expressive
function, and the knowledge function. The utilitarian function is when an attitude is held due
to the brand's utility. A way to change this attitude is to show a utility or purpose of the
brand that the consumer might not have considered. The next is the ego-defensive function,
which expresses people's desire to protect their self-image. Showing how a product can boost
people's self-esteem and ease feelings of self-doubt is one way of changing their attitude in
this situation.
Attitude Formation / Change: Value Expressive

The value-expressive function says that consumers' attitudes are a product of their lifestyle, beliefs, and outlook on life. Knowing the attitudes of a specific segment can help better reflect these characteristics in ads. The knowledge function says that people have a desire to know information and details about products they encounter. Comparing one's product to other products and explaining its benefits and advantages could be one way of appealing to this side of people.

There is also the idea of combining several of the above functions to appeal to
different groups of people who may use the same product but for different
reasons.
Another way to change attitudes is to associate a product with an admired group
or event, such as a charitable cause. One example of this is Gap's (RED) campaign.
Half the profit made from the Red clothing goes to the Global Fund, which helps
women and children in Africa who are affected by AIDS/HIV.
Attitude Formation / Change: Using negative attitudes

Showing consumers that their negative attitude toward a product, brand, etc. is
not in conflict with another attitude, may make them inclined to change their
negative opinion of the brand. This is just one more way of changing consumers' attitudes.
Another solution is altering components of the multiattribute model. One way of altering this model is changing the relative evaluation of attributes. It is easier to persuade customers to cross over to another product when the two are similar; they can be encouraged to shift their favorable attitude toward another version of the product. Another way of altering the model is by changing brand beliefs, that is, changing perceptions or beliefs about the brand itself. Information suggested about your brand, however, must be compelling and repeated enough to overcome a consumer's natural tendency to stick with previously held attitudes. Adding an attribute can be another option for changing attitudes. Adding a previously un-thought-of attribute, or one that shows improvement or technological innovation, will also shift attitudes. An example would be advertising that yogurt has more potassium than a banana. Another possibility is eliminating a characteristic or feature, such as making unscented products. One last route is to change the overall brand rating. This is an attempt to alter a consumer's overall assessment of a brand, such as mentioning that it is the most popular brand.

Attitude Formation / Change: Changing Beliefs about Competitors' Brands

The final way of changing an attitude is by changing beliefs about competitors' brands. Many brands do this, but one example would be a Ziploc bag commercial showing a store-brand or competitor's bag leaking as it is turned upside down. This gives the viewer a negative connotation of the competitor's bag, thereby improving their attitude toward Ziploc.

Now that we have discussed how attitudes are formed and how they can be altered, we will go into how attitudes affect the actions that consumers take, or vice versa. Consumers' behavior can either precede or follow their attitude formation.
Two explanations as to why behavior may precede attitude formation are cognitive dissonance theory and attribution theory. Cognitive dissonance theory concerns the discomfort or dissonance that occurs when a consumer holds conflicting thoughts about a belief or an attitude object. An example would be post-purchase dissonance, where the consumer thinks about the unique, positive qualities of the brands they did not select. An ad may help assure the consumer that they made the right decision and ease this dissonance. Attribution theory explains how people assign blame or credit to events on the basis of either their own behavior or the behavior of others. They may ask themselves why they made a decision. The process of making inferences is a major part of attitude formation and change.
There are different perspectives on attribution theory, which include self-perception theory, attributions toward others, attributions toward things, and how we test our attributions.
Attitude Formation / Change: Self Perception Theory

Self-perception theory concerns individuals' inferences or judgments as to the causes of their own behavior. Attitudes develop as consumers look at and make judgments about their own behavior. Included in this are internal attributions (giving credit to oneself for the outcome or results of using a product), external attributions (attributing positive results to factors beyond one's control), and defensive attribution, which says consumers will often accept personal credit for success and attribute failure to others or to outside causes.
Attitude Formation / Change: Attributions are Opinions

Attributions toward others and attributions toward things are the opinions people have of things with which they come into contact. For example, when talking to a salesperson at a store, a consumer will try to determine if the salesperson is knowledgeable, trustworthy, and reliable. The same can be said of attributions toward things. Consumers will judge a product's performance and form attributions in an attempt to find out why the product meets or fails to meet their expectations.
Testing attributions is an important step for consumers. They want to test firsthand whether the attributions they have made toward a certain product, service, or person are correct. People want conviction about a particular observation and will collect additional information in order to gain it. They may use the following criteria: distinctiveness, consistency over time, consistency over modality, and consensus.
Attitude Formation / Change: Distinctiveness

Distinctiveness is attributing an action to a particular product or person if the action occurs only when that product or person is present and not in its absence. In order to have consistency over time, each time the person or product is present the consumer's inference must be the same. In measuring consistency over modality, the inference or reaction must be the same even when the situation varies. Finally, consensus is when the action is perceived in the same way by other consumers.
What is Organization Design?
A process for improving the probability that an organization will be successful.
More specifically, Organization Design is a formal, guided process for integrating
the people, information and technology of an organization. It is used to match
the form of the organization as closely as possible to the purpose(s) the
organization seeks to achieve. Through the design process, organizations act to
improve the probability that the collective efforts of members will be successful.
Typically, design is approached as an internal change under the guidance of an
external facilitator. Managers and members work together to define the needs of
the organization then create systems to meet those needs most effectively. The
facilitator assures that a systematic process is followed and encourages creative
thinking.

Hierarchical Systems
Western organizations have been heavily influenced by the command and control structure of ancient military organizations, and by the turn-of-the-century introduction of Scientific Management. Most organizations today are designed as
a bureaucracy in which authority and responsibility are arranged in a hierarchy.
Within the hierarchy rules, policies, and procedures are uniformly and
impersonally applied to exert control over member behaviors. Activity is
organized within sub-units (bureaus, or departments) in which people perform
specialized functions such as manufacturing, sales, or accounting. People who
perform similar tasks are clustered together.
The same basic organizational form is assumed to be appropriate for any
organization, be it a government, school, business, church, or fraternity. It is
familiar, predictable, and rational. It is what comes immediately to mind when
we discover that ...we really have to get organized!
As familiar and rational as the functional hierarchy may be, there are distinct
disadvantages to blindly applying the same form of organization to all purposeful
groups. To understand the problem, begin by observing that different groups
wish to achieve different outcomes. Second, observe that different groups have
different members, and that each group possesses a different culture. These
differences in desired outcomes, and in people, should alert us to the danger of
assuming there is any single best way of organizing. To be complete, however,
also observe that different groups will likely choose different methods through
which they will achieve their purpose. Service groups will choose different
methods than manufacturing groups, and both will choose different methods
than groups whose purpose is primarily social. One structure cannot possibly fit
all.
Organizing on Purpose
The purpose for which a group exists should be the foundation for everything its members do, including the choice of an appropriate way to organize. The idea
is to create a way of organizing that best suits the purpose to be accomplished,
regardless of the way in which other, dissimilar groups are organized.
Only when there are close similarities in desired outcomes, culture, and methods
should the basic form of one organization be applied to another. And even then,
only with careful fine tuning. The danger is that the patterns of activity that help
one group to be successful may be dysfunctional for another group, and actually
inhibit group effectiveness. To optimize effectiveness, the form of organization
must be matched to the purpose it seeks to achieve.
The Design Process
Organization design begins with the creation of a strategy: a set of decision guidelines by which members will choose appropriate actions. The strategy is derived from clear, concise statements of purpose and vision, and from the organization's basic philosophy. Strategy unifies the intent of the organization
and focuses members toward actions designed to accomplish desired outcomes.

The strategy encourages actions that support the purpose and discourages those
that do not.
Creating a strategy is planning, not organizing. To organize we must connect
people with each other in meaningful and purposeful ways. Further, we must
connect people with the information and technology necessary for them to be
successful. Organization structure defines the formal relationships among people
and specifies both their roles and their responsibilities. Administrative systems
govern the organization through guidelines, procedures and policies. Information
and technology define the process(es) through which members achieve
outcomes. Each element must support each of the others and together they
must support the organization's purpose.
Exercising Choice
Organizations are an invention of man. They are contrived social systems
through which groups seek to exert influence or achieve a stated purpose.
People choose to organize when they recognize that by acting alone they are
limited in their ability to achieve. We sense that by acting in concert we may
overcome our individual limitations.
When we organize we seek to direct, or pattern, the activities of a group of
people toward a common outcome. How this pattern is designed and
implemented greatly influences effectiveness. Patterns of activity that are
complementary and interdependent are more likely to result in the achievement
of intended outcomes. In contrast, activity patterns that are unrelated and
independent are more likely to produce unpredictable, and often unintended
results.
The process of organization design matches people, information, and technology
to the purpose, vision, and strategy of the organization. Structure is designed to
enhance communication and information flow among people. Systems are
designed to encourage individual responsibility and decision making. Technology
is used to enhance human capabilities to accomplish meaningful work. The end
product is an integrated system of people and resources, tailored to the specific
direction of the organization.

Conflict
Conflict is any situation in which two or more parties feel themselves in
opposition. It is an interpersonal process that arises from disagreements over the
goals or the methods to accomplish those goals.
Following are the important features of conflict:
1. Conflict arises because of the incompatibility of two or more aspects of an element; it may be goals, interests, methods of working, or any other feature.
2. Conflicts occur when an individual is not able to choose among the available courses of action.
3. Conflict is a dynamic process, as it indicates a series of events; each conflict is made up of a series of interlocking conflict episodes.
4. Conflict must be perceived and expressed by the parties to it. If no one is aware of a conflict, it is generally agreed that the conflict does not exist, even though there may be incompatibility in some respect.

There are four basic issues which may be involved in a conflict. These are:
(1) Facts: Conflicts may occur because of disagreement that the persons have over the definition of a problem, relevant facts related to the problem, or their authority and power.
(2) Goals: Sometimes there may be disagreement over the goals which the two parties want to achieve. The relationship between the goals of the parties may be viewed as incompatible, with the result that one goal may be achieved only at the cost of the other.
(3) Methods: Even if goals are perceived to be the same, there may be differences over the methods, procedures, strategies, tactics, etc. through which the goals may be achieved.
(4) Values: There may be differences over values such as ethical standards, considerations of fairness, justice, etc. These differences are of a more intrinsic nature in persons and may affect the choice of goals or the methods of achieving them.
Types of conflict:
Individual level conflicts (goal conflicts and role conflicts)
Interpersonal conflict
Vertical conflict
Horizontal conflict
Group level conflicts
Organizational level conflicts

Organization Development Interventions


Organization Development (OD) intervention techniques are the methods created by OD professionals and others. No single organization or consultant uses all of the interventions; they are applied depending upon the need or requirement. The most important interventions are:
1. Survey feedback
2. Process consultation
3. Sensitivity training
4. The managerial grid
5. Goal setting and planning
6. Team building and management by objectives
7. Job enrichment, changes in organizational structure, participative management, quality circles, ISO, and TQM
Survey feedback: This intervention provides data and information to the managers. Information on the attitudes of employees about wage levels and structure, hours of work, working conditions and relations is collected, and the results are supplied to the top executive teams. They analyse the data, find out the problems, evaluate the results and develop the means to correct the problems identified. The teams are formed with employees at all levels in the organization hierarchy, i.e., from the rank and file to the top level.
Process consultation: The process consultant meets the members of the department and work teams, observes their interactions, problem identification skills, problem-solving procedures, etc. He feeds the information collected through observation back to the team, and coaches and counsels individuals and groups in moulding their behavior.
Goal setting and planning: Each division in an organization sets the goals or formulates the plans for profitability. These goals are sent to the top management, which in turn sends them back to the divisions after modification. A set of organization goals thus emerges thereafter.
Managerial grid: This identifies a range of management behavior based on the different ways in which production/service orientation and employee orientation interact with each other. The managerial grid is also called instrumental laboratory training, as it is a structured version of laboratory training. It consists of individual and group exercises with a view to developing awareness of individual managerial style, interpersonal competence and group effectiveness. Thus grid training is related to leadership styles. The managerial grid focuses on observations of behaviour in exercises specifically related to work. Participants in this training are encouraged and helped to appraise their own managerial style.
There are 6 phases in grid OD:
First phase is concerned with studying the grid as a theoretical knowledge to
understand the human behavior in the Organization.
Second phase is concerned with teamwork development. A seminar helps the members in developing each member's perception of, and insight into, the problems faced by various members on the job.
Third phase is inter group development. This phase aims at developing the
relationships between different departments
Fourth phase is concerned with the creation of a strategic model for the
organization where Chief Executives and their immediate subordinates
participate in this activity.
Fifth phase is concerned with implementation of the strategic model. Planning teams are formed for each department to identify the available resources and the required resources, procure them if required, and implement the model.
Sixth Phase is concerned with the critical evaluation of the model and making
necessary adjustment for successful implementation.
Starting from the top of a company, the six stages of Management by
Objectives (MBO) are:
1. Define corporate objectives at board level
2. Analyze management tasks and devise formal job specifications, which
allocate responsibilities and decisions to individual managers
3. Set performance standards
4. Agree and set specific objectives
5. Align individual targets with corporate objectives
6. Establish a management information system to monitor achievements
against objectives

Management by Objectives (MBO) is a successful philosophy of management. It replaces the traditional philosophy of management by domination. MBO led to systematic goal setting and planning. Peter Drucker, the eminent management guru, first propagated the philosophy in 1954; since then it has become a movement.
MBO is a process by which managers at different levels and their subordinates
work together in identifying goals and establishing objectives consistent with
Organizational goals and attaining them.
Team building is an application of various techniques of Sensitivity training to the
actual work groups in various departments. These work groups consist of peers
and a supervisor.
Sensitivity training is called a laboratory as it is conducted by creating an
experimental laboratory situation in which employees are brought together. The
Team building technique and training is designed to improve the ability of the
employees to work together as teams.
Job enrichment is currently practiced all over the world. It is based on the assumption that, in order to motivate workers, the job itself must provide opportunities for achievement, recognition, responsibility, advancement and growth. The basic idea is to restore to jobs the elements of interest that were taken away. In a job enrichment program the worker decides how the job is performed, planned and controlled, and makes more decisions concerning the entire process.
Organizational Development
Organizational Structures
Organizational Development Cycles
Diagnosis and Intervention Strategies:
Team Building
Goal Setting
Survey Feedback
Strategic Planning
Sensitivity Training (T-Groups)
Grid Training
Organizational Culture within Organizational Development
International Culture and Organizational Development
Organizational Development in Response to Technological Change

Organizational development
Burke (1982) defined organizational development (OD) as "a planned process of change in an
organization's culture through the utilization of behavioural science, technology, research
and theory." It refers to the management of change and the development of human resources.
It is a response to change (Bennis, 1969). OD is a complex educational strategy intended to
change the beliefs, attitudes, values and structure of the organization so that the organization
can better adapt to new technologies, markets and challenges.
A variety of forces cause changes in the modern organization (Hellriegel, Slocum and
Woodman, 1983). Some of these are:
technological change;
the knowledge explosion;
product and service obsolescence; and
social change.
Environment, resources and technology perform a decisive role in determining organizational
policies. If any one of these determinants changes, the policies need to be re-examined to
determine if a different organizational design would be better suited.
Approaches to OD
The major schools of thought in OD are considered in the following paragraphs.
Group Dynamics
This is a historical and traditional method of OD based on the assumption that OD activities
are process consultation (Albrecht, 1983). In this approach, an expert works at a small-group
level, using group methods, sensitivity training and other related approaches.
The Behaviour Modification School
The 'be-mod' school of OD (based on the various works of Skinner) attempts to rearrange the
reward system in the organization so as to strengthen selected 'target' behaviour on the part of
employees.
The Systems Approach
This approach aims at enhancing the overall effectiveness of the organization. The system
can be defined as having:

some components that comprise it;


functions and processes performed by various components;
relationship among the components that make them a system; and
an organizational principle, which gives the system a purpose.
This approach is based on the assumption that an organization is composed of four
interlocking systems (Albrecht, 1983), namely:
a technical system, referring to the elements, activities and relationships that make up the
primary productive axis of the organization. It includes physical facilities, machinery, special
equipment, work processes, work methods, work procedures, work-oriented information and
various means of handling;
a social system, referring to the people in the organization and the activities in which they
are engaged. It includes the intra-group roles and relationships, the form of power hierarchy,
values and norms for behaviour in the organization, and the reward and punishment
processes;
an administrative system, which refers to the policies, procedures, instructions, reports, etc.,
which are required to operate the organization. It also includes those who operate the
technical and administrative systems; and
a strategic system, which is the steering function of the organization. Its components
include the management team from the chief executive down to the lowest supervisor, the
chain of command, reporting relationships, and the power values of the leaders of the
organization. It also includes plans, the planning process and the procedures used in
governing the organization and adapting it to changing needs.
The systems approach has four sequential stages: assessment, problem solving,
implementation and evaluation.
The Socio-Technical Approach
The socio-technical approach (Pasmore, 1988) views an organization as made up of people (a social system) using tools, techniques and knowledge (a technical system) to produce goods or services valued by customers (who are part of the organization's external environment).
The Environment Approach
The environment is an agent of change. Environmental changes are the primary incitement
and stimulus for organizational betterment. The socio-technical arrangements in the
organization must change according to changes in the environment. The environment can
change in both predictable and unpredictable ways. The external environment can be
relatively stable or rapidly changing.
Thus, the environment, the technical system and the social system are three basic elements
which play a crucial role in any organization's design, re-design or development. The
efficiency and effectiveness of the organization depend upon the equilibrium between the
needs of these determinant elements.

The OD process
The OD process entails various activities at different levels in the organization. Through
these activities, interventions are made in the ongoing organization to change the structure,
processes, behaviour or values of individuals and groups. Golembiewski, Prochl and Sink
(1981) categorized these interventions under eight headings:
Process Analysis Activities, referring to applications of behavioural science perspectives to fathom complex and dynamic situations;

Skill-building Activities, involving various designs for eliciting behaviours in congruence


with OD values. This includes giving and receiving feedback, listening, and settling conflicts;
Diagnostic Activities, including process analysis to generate data through interviews,
psychological instruments or opinion surveys;
Coaching or Counselling Activities to help in resolving conflicts through third-party
consultation;
Team Building Activities, enhancing the efficiency and effectiveness of task groups;
Inter-group Activities, attempting to create effective and satisfying linkages between two or
more task groups or departments in the organization;
Techno-Structural Activities, aiming at building need-fulfilling roles, jobs and structures;
and
System-Building or System-Renewal Activities, seeking exhaustive changes in a large
organization's climate and values using combinations of the various OD interventions listed
above.

Socio-technical systems approach for organization re-design


Socio-technical systems design is better suited to meet the requirements of a changing
external environment in comparison with traditional designs. It endeavours to re-design the
organization's structure, processes and functions to create a balance between the organization
and its changing external environment. It could involve the following steps (Foster, 1967;
Cummings, 1976; Pasmore, 1988):
defining the scope of the system to be re-designed;
defining the environmental demands;
evolving a vision statement;
enlightening organizational members;
developing the change structure;
conducting socio-technical analysis;
preparing re-design proposals;
implementing recommended changes; and
evaluating the changes or re-design.

OD techniques
Techniques used for OD are considered below.
Sensitivity training
This has many applications and is still used widely, even though new techniques have
emerged (Lewin, 1981). Sensitivity training (Benny, Bradford and Lippitt, 1964) basically
aims at:
growth in effective membership;
developing ability to learn;
stimulating members to give help; and
developing insights to be sensitive to group processes.
These process variables - in a systems sense - interact and are interdependent.
Grid Training
Blake and Mouton's Managerial Grid
The treatment of task orientation and people orientation as two independent dimensions was a
major step in leadership studies. Many of the leadership studies conducted in the 1950s at the
University of Michigan and the Ohio State University focused on these two dimensions.

Building on the work of the researchers at these universities, Robert Blake and Jane Mouton (1960s) proposed a graphic portrayal of leadership styles through a managerial grid (sometimes called the leadership grid). The grid depicts two dimensions of leader behavior: concern for people (accommodating people's needs and giving them priority) on the y-axis and concern for production (keeping tight schedules) on the x-axis, with each dimension ranging from low (1) to high (9), thus creating 81 different positions in which a leader's style may fall.

(Figure: the managerial grid plots concern for production on the x-axis against concern for people on the y-axis.)

The five resulting leadership styles are as follows:
1. Impoverished Management (1, 1): Managers with this approach are low on both the
dimensions and exercise minimum effort to get the work done from subordinates. The leader
has low concern for employee satisfaction and work deadlines and as a result disharmony and
disorganization prevail within the organization. The leaders are termed ineffective wherein
their action is merely aimed at preserving job and seniority.
2. Task Management (9, 1): Also called the dictatorial or produce-or-perish style. Here leaders are more concerned about production and have less concern for people. The style is based on McGregor's Theory X. The employees' needs are not taken care of and they are simply a means to an end. The leader believes that efficiency can result only through proper organization of work systems and through elimination of people wherever possible. Such a style can definitely increase the output of the organization in the short run, but due to the strict policies and procedures, high labour turnover is inevitable.
3. Middle-of-the-Road (5, 5): This is basically a compromising style wherein the leader
tries to maintain a balance between goals of company and the needs of people. The leader
does not push the boundaries of achievement resulting in average performance for
organization. Here neither employee nor production needs are fully met.
4. Country Club (1, 9): This is a collegial style characterized by low task and high people
orientation where the leader gives thoughtful attention to the needs of people thus providing
them with a friendly and comfortable environment. The leader feels that such a treatment
with employees will lead to self-motivation and will find people working hard on their own.
However, a low focus on tasks can hamper production and lead to questionable results.
5. Team Management (9, 9): Characterized by high people and task focus, the style is based
on the theory Y of McGregor and has been termed as most effective style according to Blake
and Mouton. The leader feels that empowerment, commitment, trust, and respect are the key
elements in creating a team atmosphere which will automatically result in high employee
satisfaction and production.
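As a rough illustration of how the two grid scores relate to these five styles, the sketch below classifies a (concern for production, concern for people) pair into one of the named styles. The cut-off values used here are an assumption made for illustration only; they are not part of Blake and Mouton's model.

# Hypothetical sketch: map managerial-grid scores (1-9 on each axis)
# to the five named styles. The cut-offs are illustrative assumptions.

def grid_style(production: int, people: int) -> str:
    """Classify a (concern for production, concern for people) pair."""
    if production <= 2 and people <= 2:
        return "Impoverished Management (1,1)"
    if production >= 8 and people <= 2:
        return "Task Management (9,1)"
    if production <= 2 and people >= 8:
        return "Country Club (1,9)"
    if production >= 8 and people >= 8:
        return "Team Management (9,9)"
    return "Middle-of-the-Road (5,5)"

print(grid_style(9, 1))  # Task Management (9,1)
print(grid_style(5, 5))  # Middle-of-the-Road (5,5)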
Advantages of Blake and Mouton's Managerial Grid
The Managerial or Leadership Grid is used to help managers analyze their own leadership styles through a technique known as grid training. This is done by administering a questionnaire that helps managers identify how they stand with respect to their concern for production and people. The training is aimed at helping leaders reach the ideal state of (9, 9).
Limitations of Blake and Mouton's Managerial Grid
The model ignores the importance of internal and external constraints and of the specific context and situation. Also, there are further aspects of leadership that could be covered but are not.
Grid training is an outgrowth of the managerial grid approach to leadership (Blake and Mouton, 1978). It is an instrumental approach to laboratory training. Sensitivity training is
supplemented with self-administered instruments (Benny, Bradford and Lippitt, 1964). The
analysis of these instruments helps in group development and in the learning of group
members. This technique is widely used and has proved effective.
Grid training for OD is completed in six phases. They are:
laboratory-seminar training, which aims at acquainting participants with concepts and
material used in grid training;
a team development phase, involving the coming together of members from the same department to chart out how they will attain a (9, 9) position on the grid;
inter-group development aims at overall OD. During this phase, conflict situations between
groups are identified and analysed;
organization goal setting is based on participative management, where participants
contribute to and agree upon important goals for the organization;

goal attainment aims at achieving goals which were set during the phase of organizational
goal setting; and
stabilization involves the evaluation of the overall programme and making suggestions for
changes if appropriate.
Survey Feedback
Survey feedback is based on the study (survey) of the unit of analysis (such as work group, a
department or a whole organization) by using questionnaires (Taylor and Bowers, 1972). The
resulting data are then used to identify and analyse problems and propose a suitable action
plan to overcome them. A typical survey questionnaire would generate information on
leadership, organizational climate and satisfaction (Table 1).
(Table 1 lists typical factors covered in a survey research questionnaire, such as satisfaction with pay and satisfaction with the work group.)
Reducing involvement and terminating: this is the mutual agreement to cease the consultation.
Third Party
The third-party peace-making technique attempts to settle inter-personal and inter-group
conflicts using modern concepts and methods of conflict management. This technique

analyses the processes involved, discerns the problem on the basis of the analysis, and
suitably manages the conflict situation.
Team building
Team building has been considered the most popular OD technique in recent years, so much
so that it has replaced sensitivity training. It aims at improving overall performance, tends to
be more task-oriented, and can be used with family groups (members from the same unit) as
well as special groups (such as task forces, committees and inter-departmental groups).
There are five major elements involved in team building (French and Bell, 1978):
problem solving, decision making, role clarification and goal setting for accomplishing the
assigned tasks;
building and maintaining effective inter-personal relationships;
understanding and managing group processes and culture;
role analysis techniques for role clarification and definition; and
role negotiation techniques.
Transactional Analysis
Transactional analysis is widely used by management practitioners to analyse group
dynamics and inter-personal communications. It deals with aspects of identity, maturation,
insight and awareness (Berne, 1964). As a tool for OD, it attempts to help people understand
their egos - both their own and those of others - to allow them to interact in a more
meaningful manner with one another (Huse, 1975). It attempts to identify peoples' dominant
ego states and help people understand and analyse their transactions with others. It is quite
effective if applied in the early stage of the diagnostic phase.
Team Building
Teams typically develop through the stages of Forming, Storming, Norming, Performing, and Adjourning.

Leadership vs. Management


What is the difference between management and leadership? It is a question
that has been asked more than once and also answered in different ways. The
biggest difference between managers and leaders is the way they motivate the people who work for them or follow them, and this sets the tone for most other aspects of what they do.
Many people, by the way, are both. They have management jobs, but they
realize that you cannot buy hearts, especially to follow them down a difficult
path, and so act as leaders too.
Managers have subordinates
By definition, managers have subordinates - unless their title is honorary and
given as a mark of seniority, in which case the title is a misnomer and their
power over others is other than formal authority.
Authoritarian, transactional style

Managers have a position of authority vested in them by the company, and their
subordinates work for them and largely do as they are told. Management style is
transactional, in that the manager tells the subordinate what to do, and the
subordinate does this not because they are a blind robot, but because they have
been promised a reward (at minimum their salary) for doing so.
Work focus
Managers are paid to get things done (they are subordinates too), often within
tight constraints of time and money. They thus naturally pass on this work focus
to their subordinates.
Seek comfort
An interesting research finding about managers is that they tend to come from
stable home backgrounds and led relatively normal and comfortable lives. This
leads them to be relatively risk-averse and they will seek to avoid conflict where
possible. In terms of people, they generally like to run a 'happy ship'.
Leaders have followers
Leaders do not have subordinates - at least not when they are leading. Many
organizational leaders do have subordinates, but only because they are also
managers. But when they want to lead, they have to give up formal authoritarian
control, because to lead is to have followers, and following is always a voluntary
activity.
Charismatic, transformational style
Telling people what to do does not inspire them to follow you. You have to appeal to them, showing how following you will lead to their hearts' desire. They must want to follow you enough to stop what they are doing and perhaps walk into danger and situations that they would not normally consider risking.
Leaders with a stronger charisma find it easier to attract people to their cause.
As a part of their persuasion they typically promise transformational benefits,
such that their followers will not just receive extrinsic rewards but will somehow
become better people.
People focus
Although many leaders have a charismatic style to some extent, this does not
require a loud personality. They are always good with people, and quiet styles
that give credit to others (and take blame on themselves) are very effective at
creating the loyalty that great leaders engender.
Although leaders are good with people, this does not mean they are friendly with
them. In order to keep the mystique of leadership, they often retain a degree of
separation and aloofness.
This does not mean that leaders do not pay attention to tasks - in fact they are
often very achievement-focused. What they do realize, however, is the
importance of enthusing others to work towards their vision.

Seek risk
In the same study that showed managers as risk-averse, leaders appeared as
risk-seeking, although they are not blind thrill-seekers. When pursuing their
vision, they consider it natural to encounter problems and hurdles that must be
overcome along the way. They are thus comfortable with risk and will see routes
that others avoid as potential opportunities for advantage and will happily break
rules in order to get things done.
A surprising number of these leaders had some form of handicap in their lives
which they had to overcome. Some had traumatic childhoods, some had
problems such as dyslexia, others were shorter than average. This perhaps
taught them the independence of mind that is needed to go out on a limb and
not worry about what others are thinking about them.
In summary
This table summarizes the above (and more) and gives a sense of the differences
between being a leader and being a manager. This is, of course, an illustrative
characterization, and there is a whole spectrum between either ends of these
scales along which each role can range. And many people lead and manage at
the same time, and so may display a combination of behaviors.

Subject | Leader | Manager
Essence | Change | Stability
Focus | Leading people | Managing work
Have | Followers | Subordinates
Horizon | Long-term | Short-term
Seeks | Vision | Objectives
Approach | Sets direction | Plans detail
Decision | Facilitates | Makes
Power | Personal charisma | Formal authority
Appeal to | Heart | Head
Energy | Passion | Control
Culture | Shapes | Enacts
Dynamic | Proactive | Reactive
Persuasion | Sell | Tell
Style | Transformational | Transactional
Exchange | Excitement for work | Money for work
Likes | Striving | Action
Wants | Achievement | Results
Risk | Takes | Minimizes
Rules | Breaks | Makes
Conflict | Uses | Avoids
Direction | New roads | Existing roads
Truth | Seeks | Establishes
Concern | What is right | Being right
Credit | Gives | Takes
Blame | Takes | Blames

Rensis Likert
Management Systems and Styles
Dr. Rensis Likert has conducted much research on human behavior within
organizations, particularly in the industrial situation.
He has examined different types of organizations and leadership styles, and he
asserts that to achieve maximum profitability, good labor relations and high
productivity, every organization must make optimum use of their human assets.
The form of organization which will make the greatest use of human capacity, Likert contends, is one of highly effective work groups linked together in an overlapping pattern by other similarly effective groups.

Organizations at present have widely varying types of management style, and Likert has identified four main systems:
Management Styles

The exploitive - authoritative system, where decisions are imposed on


subordinates, where motivation is characterized by threats, where high levels of
management have great responsibilities but lower levels have virtually none,
where there is very little communication and no joint teamwork.
The benevolent - authoritative system, where leadership is by a
condescending form of master-servant trust, where motivation is mainly by
rewards, where managerial personnel feel responsibility but lower levels do not,
where there is little communication and relatively little teamwork.
The consultative system, where leadership is by superiors who have
substantial but not complete trust in their subordinates, where motivation is by
rewards and some involvement, where a high proportion of personnel, especially
those at the higher levels feel responsibility for achieving organization goals,
where there is some communication (both vertical and horizontal) and a
moderate amount of teamwork.
The participative - group system, which is the optimum solution, where leadership is by superiors who have complete confidence in their subordinates, where motivation is by economic rewards based on goals which have been set in
participation, where personnel at all levels feel real responsibility for the
organizational goals, where there is much communication, and a substantial
amount of cooperative teamwork.
This fourth system is the ideal for the profit-oriented and human-concerned organization, and Likert says (The Human Organization, McGraw-Hill, 1967) that all organizations should adopt this system. Clearly, the changes involved may be painful and long-winded, but they are necessary if one is to achieve the maximum rewards for the organization.
To convert an organization, four main features of effective management must be
put into practice:
Features of Effective Management

The motivation to work must be fostered by modern principles and


techniques, and not by the old system of rewards and threats.

Employees must be seen as people who have their own needs, desires and
values and their self-worth must be maintained or enhanced.

An organization of tightly knit and highly effective work groups must be


built up which are committed to achieving the objectives of the
organization.

Supportive relationships must exist within each work group. These are
characterized not by actual support, but by mutual respect.

The work groups which form the nuclei of the participative group system, are
characterized by the group dynamics:

Members are skilled in leadership and membership roles for easy


interaction.

The group has existed long enough to have developed a well established
relaxed working relationship.

The members of the group are loyal to it and to each other since they
have a high degree of mutual trust.

The norms, values and goals of the group are an expression of the values
and needs of its members.

The members perform a "linking-pin" function and try to keep the goals of
the different groups to which they belong in harmony with each other.

Max Weber (1864-1920) was a German academic and sociologist who provided another approach in the development of classical management theory. As a German academic, Weber was primarily interested in the reasons behind employees' actions and in why people who work in an organization accept the authority of their superiors and comply with the rules of the organization.
Legitimate Types of Authority by Max Weber

Weber made a distinction between authority and power. According to Weber, power educes obedience through force or the threat of force, which induces individuals to adhere to regulations. In contrast, legitimate authority entails that individuals accept that authority may be exercised over them by their superiors. Weber goes on to identify three types of legitimate authority:
Traditional authority: Traditional authority is readily accepted and unquestioned by individuals, since it emanates from deeply set customs and tradition. Traditional authority is found in tribes and monarchies.
Charismatic authority: Charismatic authority is gained by those individuals who have earned the respect and trust of their followers. This type of authority is exercised by a charismatic leader in small and large groups alike.
Rational-legal authority: Rational-legal authority stems from the set-up of an organization and the position held by the person in authority. Rational-legal authority is exercised within the stipulated rules and procedures of an organization.

The Key Characteristics of a Bureaucracy


Weber coined this last type of authority with the name of a bureaucracy. The
term bureaucracy in terms of an organization and management functions refers
to the following six characteristics:

Management by rules. A bureaucracy follows a consistent set of rules that


control the functions of the organization. Management controls the lower
levels of the organization's hierarchy by applying established rules in a
consistent and predictable manner.
Division of labor. Authority and responsibility are clearly defined and officially sanctioned. Job descriptions are specified with responsibilities and lines of authority. All employees thus have clearly defined roles in a system of authority and subordination.
Formal hierarchical structure. An organization is organized into a hierarchy
of authority and follows a clear chain of command. The hierarchical
structure effectively delineates the lines of authority and the subordination
of the lower levels to the upper levels of the hierarchical structure.
Personnel hired on grounds of technical competence. Appointment to a
position within the organization is made on the grounds of technical
competence. Work is assigned based on the experience and competence
of the individual.
Managers are salaried officials. A manager is a salaried official and does not own the administered unit. All elements of a bureaucracy are defined with clearly specified roles and responsibilities and are managed by trained and experienced specialists.
Written documents. All decisions, rules and actions taken by the organization are formulated and recorded in writing. Written documents ensure that there is continuity of the organization's policies and procedures.

Advantages and Disadvantages of Weber's Bureaucracy

Weber's bureaucracy is based on logic and rationality, which are supported by trained and qualified specialists. The elements of a bureaucracy offer a stable and hierarchical model for an organization. Nevertheless, Weber's bureaucracy does have its limitations, since it is based on the roles and responsibilities of the individuals rather than on the tasks performed by the organization. Its rigidity implies a lack of flexibility to respond to the demands of change in the business environment.

Management Thoughts
There are a few people in every age who produce new, paradigm-shifting ideas.
Sometimes these ideas don't catch on right away, but as time passes, their worth
becomes more evident. The art of management is an old one, but it was a fairly
static one until about 150 years ago, when changes in technology, e.g. railroads
and telegraph, changed our economy quite dramatically, and at the same time
changed the discipline of management. We don't really have much perspective
yet. Without it, it's hard to say what ideas will endure, and who the real pioneers
will turn out to be. But, guessing, here are some of the people in our
management heroes gallery.

* George Box
* Philip Crosby
* W. Edwards Deming
* John Dewey
* Fredrick Herzberg

Kaoru Ishikawa

Kaoru Ishikawa wanted to change the way people think about work. He urged managers to
resist becoming content with merely improving a product's quality, insisting that quality
improvement can always go one step further. His notion of company-wide quality control
called for continued customer service. This meant that a customer would continue receiving
service even after receiving the product. This service would extend across the company itself
in all levels of management, and even beyond the company to the everyday lives of those
involved. According to Ishikawa, quality improvement is a continuous process, and it can
always be taken one step further.
With his cause and effect diagram (also called the "Ishikawa" or "fishbone" diagram) this
management leader made significant and specific advancements in quality improvement.
With the use of this new diagram, the user can see all possible causes of a result, and
hopefully find the root of process imperfections. By pinpointing root problems, this diagram
provides quality improvement from the "bottom up." Dr. W. Edwards Deming -- one of Ishikawa's colleagues -- adopted this diagram and used it to teach Total Quality Control in Japan as early as World War II. Both Ishikawa and Deming use this diagram as one of the first tools in the quality management process.
Ishikawa also showed the importance of the seven quality tools: the cause-and-effect diagram, control chart, run chart, histogram, scatter diagram, Pareto chart, and flowchart. Additionally, Ishikawa explored the concept of quality circles -- a Japanese philosophy which he drew from obscurity into worldwide acceptance. Ishikawa believed in the importance of support and leadership from top-level management. He continually urged top-level executives to take quality control courses,
knowing that without the support of the management, these programs would ultimately fail.
He stressed that it would take firm commitment from the entire hierarchy of employees to
reach the company's potential for success. Another area of quality improvement that Ishikawa
emphasized is quality throughout a product's life cycle -- not just during production.
Although he believed strongly in creating standards, he felt that standards were like
continuous quality improvement programs -- they too should be constantly evaluated and
changed. Standards are not the ultimate source of decision making; customer satisfaction is.
He wanted managers to consistently meet consumer needs; from these needs, all other
decisions should stem. Besides his own developments, Ishikawa drew and expounded on
principles from other quality gurus, including those of one man in particular: W. Edwards
Deming, creator of the Plan-Do-Check-Act model. Ishikawa expanded Deming's four steps
into the following six:

Determine goals and targets.

Determine methods of reaching goals.

Engage in education and training.

Implement work.

Check the effects of implementation.

Take appropriate action.

* Joseph M. Juran
* Kurt Lewin
* Lawrence D. Miles
* Alex Osborne
* Walter Shewhart
* Genichi Taguchi
* Frederick Winslow Taylor
* J. Edgar Thomson

Human resource management


The Maternity Benefit Act, 1961: Maternity benefit is an indemnity for the loss of wages incurred by a woman who voluntarily before child-birth, and compulsorily thereafter, abstains from work in the interest of the child and herself. The I.L.O. Maternity Protection Convention of 1919 was revised in various details in 1952.
Its purposes are to:
(a) Enable the women employee to abstain from work during the 6 weeks
preceding the expected date of her confinement;
(b) Oblige her to abstain from work during the 6 weeks following her
confinement;
(c) Provide her, with free attendance by a doctor or certified mid-wife;
(d) Provide her out of public funds or by means of insurance, with a cash
benefit sufficient for the full and healthy maintenance of herself and child
during the said period of abstention from work;
(e) Prohibit her dismissal during the said periods or a subsequent period of sickness; and
(f) Enable her to suckle her baby twice a day during working hours.
The main purposes of the Maternity Benefit Act, 1961 are:
(a) To regulate the employment of women employees in certain establishments for certain specified periods before and after child-birth.
(b) To provide for the payment of maternity benefits to women workers at the rate of the average daily wage, calculated on the basis of the wages payable to her for the days on which she has worked during the three calendar months immediately preceding the date from which she absents herself on account of maternity (a worked example follows this list).
(c) To provide certain benefits in case of miscarriage, premature birth, or
illness arising out of pregnancy.
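The following is a minimal sketch of that average-daily-wage calculation. The wage figures, the number of days worked, and the length of the benefit period are all invented for illustration; they are assumptions made for this example, not figures prescribed by the Act.

# Hypothetical sketch: maternity benefit at the rate of the average daily wage,
# computed over the three calendar months preceding the absence.
# All figures below are invented for illustration only.

wages_last_3_months = 45_000   # total wages payable for days actually worked
days_worked = 75               # days worked in those three calendar months
benefit_days = 12 * 7          # assumed benefit period: 6 weeks before + 6 weeks after

average_daily_wage = wages_last_3_months / days_worked
maternity_benefit = average_daily_wage * benefit_days

print(round(average_daily_wage, 2))   # 600.0
print(round(maternity_benefit, 2))    # 50400.0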
Dispute resolution and grievance management:
According to the Industrial Disputes Act, 1947, industrial disputes mean any
dispute or difference between employers and employees, or between employers
and workmen, or between workmen and workmen, which is connected with the
employment or non-employment or with the conditions of labor of any person.
Causes of dispute:
1. Wages
2. Union rivalry
3. Political interference
4. Unfair labor practices
5. Multiplicity of labour laws
6. Others: industrial relations managers sometimes stoke the fire and then try to extinguish it, all to justify their own existence in the organization.

Methods of resolving disputes:
1. Collective bargaining
2. Code of discipline
3. Grievance procedure
4. Arbitration
5. Conciliation
6. Adjudication
7. Consultative machinery

Labour welfare and social security measures


According to Arthur James Todd, "Labour welfare means anything done for the comfort and improvement, intellectual or social, of the employees over and above the wages paid, which is not a necessity of the industry."
Labour welfare is a part and parcel of Social Welfare.
Labour welfare:
(1) May be introduced by the Central government, State governments, employers, trade unions or by any charitable organisation.
(2) Provides a better life and physical and mental health to the workers.
(3) Makes them happy, satisfied and efficient.
(4) Relieves workers from industrial fatigue and improves their intellectual, cultural and material condition.
(5) The measures are in addition to regular wages and other economic benefits available to workers due to legal provisions and collective bargaining.
(6) New measures are added to the existing ones from time to time.
(7) It helps the workers devote greater attention to their work. The gain is in terms of productivity and quality.
(8) It helps in maintaining peace with the employees' unions.
(9) Employers get a stable labour force.
(10) Absenteeism is lower.
(11) Attracts talented workers.
(12) Social evils like gambling and drinking are reduced.
Agencies of Labour Welfare:
(a) Central Government
(b) State Government
(c) Employers
(d) Trade Unions
(e) Charitable Organisations
Types of Welfare services:
(a) Economic Services- pension, life insurance, credit facilities.
(b) Recreational Services- indoor games, outdoor games, reading rooms,
libraries, radios, T.V etc

(c) Facilitative Services- canteen, rest room, lunch room, housing facility(rent
free or loan), Medical facility, Washing facility, Education facility, Leave
travel concession
Social security:
According to the ILO, "Social security is that security that society furnishes, through appropriate organisation, against certain risks to which its members are exposed."
The term social security originated in the U.S.A. In 1935, the Social Security Act was passed there and a Social Security Board was established to govern and administer the scheme of unemployment, sickness and old-age insurance.

Social security schemes include health insurance, maternity benefits, compensation for employment injury, workers' family pension-cum-insurance schemes, compulsory and voluntary social insurance, provident fund schemes and public health services.
Though social security programs differ from country to country, they have three characteristics in common:
(i) They are established by law;
(ii) They provide some form of cash payment to individuals to compensate for at least a part of the income lost due to such contingencies as unemployment, maternity, work injury, invalidism, industrial diseases, old age, burial, widowhood and orphanhood; and
(iii) The benefits or services are provided in three ways:
(a) Social insurance: contributions are made by the employers and the government.
(b) Social assistance: non-contributory benefits.
(c) Public service: financed directly by the government from its general revenue.

Social security in India:
India is a welfare state, as envisaged in her Constitution. There are several Acts and schemes which provide security to workers:
1. The Employees' State Insurance Act, 1948
2. The Employees' Provident Funds Act, 1952
3. Gratuity Scheme
4. Employees' Deposit-Linked Insurance Scheme
5. Group Life Insurance
6. The Workmen's Compensation Act, 1923
7. The Maternity Benefit Act, 1961

David C. McClelland has contributed to the understanding of motivation by
identifying three types of basic motivating needs. He classified them as the need
for power, the need for affiliation and the need for achievement. Considerable research
has been done on methods of testing people with respect to these three types of
needs, and McClelland and his associates have done substantial research,
especially on the need for achievement.

Discuss the different steps involved in the pricing procedure.
The following are the major steps involved in the pricing procedure:
1. Identify the target customer segments and draw up their profile.
2. Decide the market position and price image that the firm desires for the brand.
3. Determine the extent of price elasticity of demand of the product, and the extent of price sensitivity of the target groups.
4. Take into account the life-cycle stage of the product.
5. Analyse competitors' prices.
6. Analyse other environmental factors.
7. Choose the pricing method to be adopted, taking all the above factors into account.
8. Select the final price.
9. Periodically review the pricing method as well as the procedures.

Discuss the assumptions under the law of returns to a variable factor.
According to Prof. Watson, the law of returns to a variable factor states that when the total output of a commodity is increased by adding units of a variable input while the quantities of other inputs are held constant, the increase in total production becomes, after some point, smaller and smaller.
The assumptions behind this law are given below (a small numerical sketch follows the list):
1. It is possible to make changes in the factor proportions.
2. All units of the variable factor are homogeneous.
3. One factor is variable and the others are fixed factors.
4. There is no change in the technique of production and organization.
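To make the law concrete, here is a minimal numerical sketch in Python; the output figures are assumed purely for illustration and are not taken from the text.

# Illustrative only: hypothetical output figures for one variable input
# (labour) added to fixed factors, showing diminishing marginal returns.
labour_units = [1, 2, 3, 4, 5, 6]
total_product = [10, 22, 36, 46, 52, 55]  # assumed total output per level of labour

previous = 0
for units, tp in zip(labour_units, total_product):
    marginal_product = tp - previous   # extra output from the last unit of labour
    print(f"Labour {units}: total product {tp}, marginal product {marginal_product}")
    previous = tp
# Marginal product first rises (10, 12, 14) and then falls (10, 6, 3),
# which is the pattern the law of returns to a variable factor describes.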

Business statistics
Define the term probability
Probability measures provide the decision-maker with the means for quantifying
the uncertainties which affect the choice of appropriate actions. Understanding
probability and taking decisions in its light minimizes risk.
What is the a priori approach to probability?
The a priori approach assumes that all the possible outcomes of an experiment are mutually exclusive and equally likely.
"Equally likely" conveys the notion of equally probable, and "mutually exclusive" means that if one event occurs, the other cannot occur. For example, in a single toss of a fair coin the outcomes head and tail are equally likely, so the a priori probability of a head is 1/2.
What are the steps involved in fitting a Binomial Distribution?
When a binomial distribution is to be fitted to observed data, the following procedure is adopted:
1. Determine the values of p and q. If one of these values is known, the other can be found from the simple relationships p = (1 - q) and q = (1 - p). When p and q are equal, the distribution is symmetrical.
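The fitting idea can be illustrated with a minimal Python sketch; the observed frequencies and the number of trials below are assumed for illustration only, and expected frequencies are computed as N * C(n, x) * p^x * q^(n-x).

from math import comb

# A minimal sketch of fitting a binomial distribution to observed data.
# Assumed data: number of heads in n = 4 coin tosses, repeated N = 80 times.
observed = {0: 6, 1: 20, 2: 28, 3: 20, 4: 6}
n = 4
N = sum(observed.values())

# Estimate p from the sample mean: mean = n * p  =>  p = mean / n
mean = sum(x * f for x, f in observed.items()) / N
p = mean / n
q = 1 - p          # q = (1 - p)

# Expected binomial frequencies: N * C(n, x) * p^x * q^(n-x)
for x in range(n + 1):
    expected = N * comb(n, x) * (p ** x) * (q ** (n - x))
    print(f"x = {x}: observed {observed[x]}, expected {expected:.1f}")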

Discuss the characteristics of the Poisson distribution.
The characteristics of the Poisson distribution are as follows:
1. The occurrences of the event are independent. That is, the occurrence of an
event in an interval of space or time has no effect on the probability of a
second occurrence of the event in the same or any other interval.
2. Theoretically, an infinite number of occurrences must be possible in the
interval.
3. The probability of single occurrence of the event in a given interval is
proportional to the length of the interval.
4. In any infinitesimal (extremely small) portion of interval, the probability of
two or more occurrences of the event is negligible.
Attributes of an Exponential Experiment
The exponential pdf has no shape parameter, as it has only one shape.
The exponential pdf is always convex and is stretched to the right as λ decreases in value.
The value of the pdf is always equal to the value of λ at T = 0 (or at T = γ for the two-parameter form).
The location parameter, γ, if positive, shifts the beginning of the distribution by a distance of γ to the right of the origin, signifying that chance failures start to occur only after γ hours of operation and cannot occur before this time.
The scale parameter is 1/λ.
As T increases, the pdf approaches zero.
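As a rough illustration of the two-parameter exponential pdf described above, the sketch below evaluates f(T) = λe^(-λ(T - γ)); the parameter values are assumed, not taken from the text.

import math

# A minimal sketch of the two-parameter exponential pdf:
# f(T) = lam * exp(-lam * (T - gamma)) for T >= gamma.
lam = 0.5      # rate (the scale parameter is 1/lam)
gamma = 2.0    # location parameter: shifts the start of the distribution

def exponential_pdf(t, lam, gamma=0.0):
    if t < gamma:
        return 0.0              # no failures can occur before gamma hours
    return lam * math.exp(-lam * (t - gamma))

# At T = gamma the pdf equals lam; it then decays towards zero as T grows.
for t in (2.0, 3.0, 5.0, 10.0):
    print(f"T = {t}: f(T) = {exponential_pdf(t, lam, gamma):.4f}")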

Attributes of a Poisson Experiment
A Poisson experiment is a statistical experiment that has the following properties:
The experiment results in outcomes that can be classified as successes or failures.
The average number of successes (μ) that occurs in a specified region is known.
The probability that a success will occur is proportional to the size of the region.
The probability that a success will occur in an extremely small region is virtually zero.
Note that the specified region could take many forms. For instance, it could be a length, an area, a volume, a period of time, etc.
Notation
The following notation is helpful when we talk about the Poisson distribution:
e: a constant equal to approximately 2.71828. (Actually, e is the base of the natural logarithm system.)
μ: the mean number of successes that occur in a specified region.
x: the actual number of successes that occur in a specified region.
P(x; μ): the Poisson probability that exactly x successes occur in a Poisson experiment, when the mean number of successes is μ.

Poisson Distribution
A Poisson random variable is the number of successes that result from a Poisson experiment. The probability distribution of a Poisson random variable is called a Poisson distribution.
Given the mean number of successes (μ) that occur in a specified region, we can compute the Poisson probability from the following formula:
Poisson Formula. Suppose we conduct a Poisson experiment in which the average number of successes within a given region is μ. Then, the Poisson probability is:
P(x; μ) = (e^-μ)(μ^x) / x!
where x is the actual number of successes that result from the experiment, and e is approximately equal to 2.71828.
The Poisson distribution has the following properties:
The mean of the distribution is equal to μ.
The variance is also equal to μ.

Example 1
The average number of homes sold by the Acme Realty company is 2 homes per day. What is the probability that exactly 3 homes will be sold tomorrow?
Solution: This is a Poisson experiment in which we know the following:
μ = 2, since 2 homes are sold per day, on average.
x = 3, since we want to find the likelihood that 3 homes will be sold tomorrow.
e = 2.71828, since e is a constant equal to approximately 2.71828.
We plug these values into the Poisson formula as follows:
P(x; μ) = (e^-μ)(μ^x) / x!
P(3; 2) = (2.71828^-2)(2^3) / 3!
P(3; 2) = (0.13534)(8) / 6
P(3; 2) = 0.180
Thus, the probability of selling 3 homes tomorrow is 0.180.
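The same calculation can be reproduced with a few lines of Python (a minimal sketch using only the standard library):

import math

# Reproduces the worked example above: mu = 2 homes sold per day on average,
# probability of exactly 3 sales tomorrow.
def poisson_pmf(x, mu):
    # P(x; mu) = e^(-mu) * mu^x / x!
    return math.exp(-mu) * (mu ** x) / math.factorial(x)

print(round(poisson_pmf(3, 2), 3))   # 0.18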
Cumulative Poisson Probability

A cumulative Poisson probability refers to the probability that the Poisson random variable
is greater than some specified lower limit and less than some specified upper limit.
Example 1
Suppose the average number of lions seen on a 1-day safari is 5. What is the probability that tourists will see fewer than four lions on the next 1-day safari?
Solution: This is a Poisson experiment in which we know the following:
μ = 5, since 5 lions are seen per safari, on average.
x = 0, 1, 2, or 3, since we want to find the likelihood that tourists will see fewer than 4 lions; that is, we want the probability that they will see 0, 1, 2, or 3 lions.
e = 2.71828, since e is a constant equal to approximately 2.71828.
To solve this problem, we need to find the probability that tourists will see 0, 1, 2, or 3 lions. Thus, we need to calculate the sum of four probabilities: P(0; 5) + P(1; 5) + P(2; 5) + P(3; 5). To compute this sum, we use the Poisson formula:
P(x ≤ 3; 5) = P(0; 5) + P(1; 5) + P(2; 5) + P(3; 5)
P(x ≤ 3; 5) = [(e^-5)(5^0) / 0!] + [(e^-5)(5^1) / 1!] + [(e^-5)(5^2) / 2!] + [(e^-5)(5^3) / 3!]
P(x ≤ 3; 5) = [(0.006738)(1) / 1] + [(0.006738)(5) / 1] + [(0.006738)(25) / 2] + [(0.006738)(125) / 6]
P(x ≤ 3; 5) = [0.0067] + [0.03369] + [0.084224] + [0.140375]
P(x ≤ 3; 5) = 0.2650
Thus, the probability of seeing no more than 3 lions is 0.2650.
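The cumulative calculation above can likewise be checked with a short Python sketch:

import math

# mu = 5 lions per safari on average; probability of seeing at most 3 lions
# (x = 0, 1, 2, 3) is the sum of the individual Poisson probabilities.
def poisson_pmf(x, mu):
    return math.exp(-mu) * (mu ** x) / math.factorial(x)

prob_at_most_3 = sum(poisson_pmf(x, 5) for x in range(4))
print(round(prob_at_most_3, 4))   # approximately 0.265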
The weekly wages of 2,000 workers in a factory are normally distributed with a mean of
Rs. 200 and a standard deviation of Rs. 20.
Estimate the lowest weekly wage of the 200 highest paid workers and the highest weekly wage
of the 200 lowest paid workers.
[Given Φ(1.28) ≈ 0.90]
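A worked sketch of this problem in Python, taking the conventional value z = 1.28, for which the upper-tail area of the standard normal curve is about 200/2000 = 0.10:

# The 200 highest paid workers form the top 10%, so the cut-off is the
# 90th percentile; the 200 lowest paid form the bottom 10%.
mean, sd = 200, 20
z = 1.28

lowest_wage_top_200 = mean + z * sd      # lowest wage among the 200 highest paid
highest_wage_bottom_200 = mean - z * sd  # highest wage among the 200 lowest paid

print(lowest_wage_top_200)      # 225.6  -> about Rs. 225.60
print(highest_wage_bottom_200)  # 174.4  -> about Rs. 174.40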

Marketing environment and environment scanning
In marketing environment analysis, a firm gathers relevant information relating to the environment, studies it in detail, takes note of the changes in each element of the environment and forecasts the future position of each of them.
Under the mega/macro environment the firm studies:
1. The geographic environment
2. The political environment
3. The socio-cultural environment
4. The economic environment
5. The natural environment
6. The technological environment
7. The legal environment (business legislation)
As regards the environment that is specific to the given business, the firm studies:
1. The market demand
2. The consumer
3. The industry
4. The competition
5. Government policies (specific to the business concerned)
6. Supplier-related factors
Marketing information system
Benefits:
1. In marketing planning
2. In marketing implementation
3. In marketing control
Classification of marketing information:
1. Classification based on the end use/purpose
2. Classification based on the subject matter
Steps involved in designing and developing an MIS:
1. Defining information needs
2. Classifying information appropriately and identifying whether it is for planning, implementation or control purposes
3. Evaluating the cost of collecting and processing the information and comparing the costs against the benefits
4. Identifying the sources of the information
5. Designing the mechanism/procedure for gathering, processing, storing and retrieving the information
6. Processing, analyzing and interpreting the information and disseminating it to the right persons at the right time in the right capsule
7. Monitoring, maintaining, reviewing and improving the system
Requisites of a good MIS
Must be a unified system
Should be conceived and used as a marketing decision support system
Must be compatible with the culture and level of sophistication of the firm, and of the marketing organization in particular
Must be user-oriented and user-friendly; must also secure users' involvement
Must involve the suppliers of the information as well
Must be economical; the value-cost position of the information should be favourable
Must meet the principle of selectivity
Must be capable of smoothly absorbing changes that may become necessary
Must be fast
Characteristics of Good Marketing Information
Relevance to decision making
Clarity
Completeness
Confidentiality
Precision
Cost reasonableness
Reliability (from genuine sources)
Accuracy
Timeliness
Objectivity
Authenticity
Strategic value
Inputs from the MIS

Periodic reports
Triggered reports
Demand reports
Plan reports

Specialized databases
Customer database
Marketing intelligence
Data mining and data warehousing
Stages in the family life cycle and buying behaviour:
1. Bachelor stage: young, single, not living at home; few financial burdens; fashion opinion leaders; recreation oriented. Buy: basic home equipment, furniture, cars, equipment for the mating game, vacations.
2. Newly married couples: young, no children; highest purchase rate and highest average purchase of durables. Buy: cars, appliances, furniture, vacations.
3. Full nest I: youngest child under six; home purchasing at peak; liquid assets low; interested in new and advertised products. Buy: washers, dryers, TV, baby food, chest rubs and cough medicines, vitamins, dolls, wagons, sleds, skates.
4. Full nest II: youngest child six or over; financial position better; less influenced by advertising; buy large-size packages and multiple-unit deals. Buy: many foods, cleaning materials, bicycles, music lessons, pianos.
5. Full nest III: older married couples with dependent children; financial position still better; some children get jobs; hard to influence with advertising; high average purchase of durables. Buy: new, more tasteful furniture, auto travel, unnecessary appliances, boats, dental services, magazines.
6. Empty nest I: older married couples, no children living with them, head of household in the labour force; home ownership at peak; most satisfied with financial position and money saved; interested in travel, recreation and self-education; make gifts and contributions; not interested in new products. Buy: vacations, luxuries, home improvements.
7. Empty nest II: older married couples, no children living at home, head of household retired; drastic cut in income; keep home. Buy: medical appliances, medical-care products.
8. Solitary survivor I: in the labour force; income still good, but likely to sell home.
9. Solitary survivor II: retired; same medical and product needs as the other retired group; drastic cut in income; special need for attention, affection and security.
Marketing research
Marketing research is a systematic, objective and exhaustive search for and study of the facts relating to any problem in the field of marketing. - Richard Crisp
Classification of marketing research jobs:
Based on the subject of the research: research on the consumer; research on market/demand; research on product/brand; research on competition; research on distribution; research on price; research on advertising and promotion; research on sales methods.
Other classifications: routine problem analysis and research on non-routine problems; research on short-term and long-term problems.
Process of marketing research
What is a service?
Kotler and Bloom defined a service as: "A service is any act or performance that one party can offer to another that is essentially intangible and does not result in the ownership of anything. Its production may or may not be tied to a physical product."
Characteristics of service products
Intangibility: refers to the aspect not associated with any physical form or characteristics. It is very pronounced in pure service elements like the lecture given by a professor.
Inseparability: means that the production and consumption of the service are inextricably intertwined. Hence, the consumer's presence is in most cases necessary at the time of production. Goods are usually produced, sold and then consumed; whereas services are usually sold first and then produced and consumed.
Heterogeneity: the services offered are not similar at all times to all customers. This feature of a service is called heterogeneity. The quality of a service depends on the person who provides the service and the time when it is provided. Even though standard systems may be used to handle a flight reservation or book a car for service, each unit of service differs from other units.
Perishability: this means that service units cannot be stocked. If a seat is unfilled when the plane leaves or the play starts, it cannot be stored and sold the next day or the next week; that revenue is lost forever.
Difference between physical goods and services:
1. Physical goods are tangible; services are intangible.
2. Physical goods are homogeneous; services are heterogeneous.
3. For goods, production and distribution are separated from consumption; for services, production, distribution and consumption are simultaneous processes.
4. A good is a thing; a service is an activity or process.
5. For goods, core value is produced in the factory; for services, core value is produced in buyer-seller interactions.
6. Customers do not participate in the production process of goods; customers participate in the production of services.
7. Goods can be kept in stock; services cannot be kept in stock.
8. Goods involve transfer of ownership; services involve no transfer of ownership.
Market development process


A process for developing sales - new business and new markets

This process is effective for developing all types of business, and delivers business growth
via:

new products or services to existing customers,

existing products or services to new customers, or

new products or services to new customers.

Market development process:


1. Establish market development aims and targets.
2. Identify target market(s), sectors and niches.
3. Assess your existing sales organisation and develop it as necessary.
4. Source/utilise a suitable prospect database - ensure data is clean and up
to date, and strategic decision-makers are identified.
5. Develop and agree your strategic proposition(s) - with reference to USP's,
UPB's, competitors, positioning, product mix, margins, etc.
6. Design your communication(s) and method(s) to generate enquiries.
7. Design your response and sales processes and establish or provide
required capabilities.
8. Design and provide your required monitoring, measurement and reporting
systems.
9. Implement your sales development activity and reinforce it through
coaching, training, meetings, executive endorsement, etc.
10.Follow-up the activity: coach as required, review, monitor, seek customer
and prospect feedback (successful and unsuccessful) and report on
performance.
11.Make changes and improvements and continue your activity at the
appropriate stage.

Vertical Marketing System


A vertical marketing system (VMS) is one in which the main members of a distribution
channel (producer, wholesaler, and retailer) work together as a unified group in order to
meet consumer needs. In conventional marketing systems, producers, wholesalers, and
retailers are separate businesses that are all trying to maximize their profits. When the effort
of one channel member to maximize profits comes at the expense of other members, conflicts
can arise that reduce profits for the entire channel. To address this problem, more and more
companies are forming vertical marketing systems.
Vertical marketing systems can take several forms. In a corporate VMS, one member of the
distribution channel owns the other members. Although they are owned jointly, each

company in the chain continues to perform a separate task. In an administered VMS, one
member of the channel is large and powerful enough to coordinate the activities of the other
members without an ownership stake. Finally, a contractual VMS consists of independent
firms joined together by contract for their mutual benefit. One type of contractual VMS is a
retailer cooperative, in which a group of retailers buy from a jointly owned wholesaler.
Another type of contractual VMS is a franchise organization, in which a producer licenses a
wholesaler to distribute its products.
The concept behind vertical marketing systems is similar to vertical integration. In vertical
integration, a company expands its operations by assuming the activities of the next link in
the chain of distribution. For example, an auto parts supplier might practice forward
integration by purchasing a retail outlet to sell its products. Similarly, the auto parts supplier
might practice backward integration by purchasing a steel plant to obtain the raw materials
needed to manufacture its products. Vertical marketing should not be confused with
horizontal marketing, in which members at the same level in a channel of distribution band
together in strategic alliances or joint ventures to exploit a new marketing opportunity.
As Tom Egelhoff wrote in an online article entitled "How to Use Vertical Marketing
Systems," VMS holds both advantages and disadvantages for small businesses. The main
advantage of VMS is that your company can control all of the elements of producing and
selling a product. In this way, you are able to see the whole picture, anticipate problems,
make changes as they become necessary, and thus increase your efficiency. However, being
involved in all stages of distribution can make it difficult for a small business owner to keep
track of what is happening. In addition, the arrangement can fail if the personalities of the
different areas do not fit together well.
For small business owners interested in forming a VMS, Egelhoff recommended starting out
by developing close relationships with suppliers and distributors. "What suppliers or
distributors would you buy if you had the money? These are the ones to work with and form a
strong relationship," he stated. "Vertical marketing can give many companies a major
advantage over their competitors."

Corporate Strategy
Key words: Strategic management,

Discuss the formulation of a strategic plan.
The main steps in the strategy formulation process are given below:
1. Spelling out the business mission and objectives: Mission is the overall purpose of an organization and the expressed reason for its existence. The mission should be clearly expressed and effectively communicated to the members of the organization. It serves as a reference point from which objectives can be derived for managerial decision-making. Mission provides unity of purpose and specifies the identity of the firm.
2. Environmental scanning: identifying the opportunities and threats in the firm's environment.
3. Organizational analysis: In the case of an established and on-going enterprise, a thorough appraisal of its current position is essential to identify its strengths (internal capabilities) and weaknesses (deficiencies). A detailed analysis and evaluation of the functional areas of the enterprise will throw up a profile of its abilities and disabilities. For example, the enterprise may have a sound distribution network and state-of-the-art technology but may be deficient in its communication system and control mechanism. Analysis of the internal environment is popularly known as corporate appraisal or self-appraisal. It should cover marketing, finance, operations, human resources, organisational culture, etc. Once the strengths and weaknesses of the enterprise are identified, each of them should be assigned a weight according to its degree of importance, so that management can identify the areas that need immediate attention.
4. Developing strategic alternatives.
5. Evaluation of strategic alternatives: each strategic alternative has its own merits and demerits.

Fig. Strategy Making Process: spelling out the organization's mission and objectives → environmental scanning (opportunities and threats) → organizational analysis (strengths and weaknesses) → developing strategic alternatives → evaluation of strategic alternatives → choice of strategy.

Choice of Strategy: Once the available strategic alternatives are


evaluated and compared, management selects the strategic
alternatives that will maximize the long-run effectiveness of the
organization. Selection of overall strategy is both the right and duty of
top management but the resulting choice permeates deeply into the
organization. In order to make an effective strategic choice, top
management must have a clear shared conception of the firm and its
future. The strategic choice must be clear and unambiguous.
Commitment to a given choice often limits future strategy; the decision
must be thoroughly researched and evaluated. Several factors
influence the strategic choice:
(i) Degree of risk acceptable to management
(ii) Knowledge of past strategy
(iii) Response of owners
(iv) Values and preferences of top management
(v) Timing of the decision

Ansoff's Growth Vector


Growth Strategies:
One of the objectives of the firms is to continuously increase their sales and
profit. At some point of time, a firm faces a situation that the expected sales and
profit from its existing business do not reach the desired levels. The firms need
to adopt suitable strategies to fill this strategic gap. The firms can adopt three
possible approaches:

Identify opportunities for further growth within the existing businesses


(Intensive growth)
Identify opportunities to build or acquire businesses related to the existing
businesses (Integrative growth)
Identify opportunities to add attractive businesses, unrelated to the existing businesses (Diversification growth)

You need to run faster and faster to remain at the same place

H. Igor Ansoff first published the now well-known growth vector matrix, or product-market matrix, in the Harvard Business Review in the Sep/Oct edition of 1957. The matrix also appeared in the book written later by Ansoff and published in 1965, Corporate Strategy. Although the matrix was published a long time ago, it still remains one of the most popular matrices and is used to identify the basic alternative strategies that are options for a firm wanting to grow.
Ansoff developed the matrix out of his realization that a firm needs a well-defined scope and growth direction. For most companies growth is often the prerequisite for survival.
Ansoff felt that many of the theorists had too broad a concept of business and that the traditional identification of a firm with a particular industry had become too narrow. This was because many firms had acquired a diverse range of products through policies of vertical and horizontal integration to protect their existing markets, and also through new product development, done to exploit technological innovations and to develop new markets with opportunities for growth.
The vector matrix is based on joint consideration of the implication of change in
the product (technology) and/ or the market and is perhaps the simplest and
most basic statement of the strategic alternatives open to the firm who desires
growth.

Fig: Market/product expansion grid
Existing products, existing markets - Market penetration strategy: (1) more purchase and usage from existing customers; (2) gain customers from competitors; (3) convert non-users into users.
New products, existing markets - Product development strategy: (1) product modification via new features; (2) different quality levels; (3) new products.
Existing products, new markets - Market development strategy: (1) new market segments; (2) new distribution channels; (3) new geographic areas.
New products, new markets - Diversification strategy: (1) organic growth; (2) joint ventures; (3) mergers; (4) acquisitions/take-overs.


This grid may be also used to make an analysis of the marketing personality/
outlook of the individual/ firm
Current products and current market: market penetration

Market penetration: the firm seeks to:


a. Maintain or increase its share of the current market with current products.
b. Secure dominance of growth markets.
c. Restructure a mature market by driving out competitors.
d. Increase usage by existing customer.
Present products and new markets: market development
a. New geographical areas and export markets
b. Different package sizes for food and other domestic items so that those who
buy in bulk and small quantities are catered for.
c. New distribution channels to attract new customers (e.g. organic foods sold in
supermarkets not just specialist shops)
d. Differential pricing policies to attract different types of customer and create
new market segments.

New products to present markets: product development


a. Advantages: product development forces competition to innovate, and newcomers to the market might be discouraged.
b. The drawbacks include the expense and the risk.
New products and new markets: diversification
Diversification occurs when a company decides to make new products for new
markets. It has to have a clear idea of what it hopes to gain from diversification.
There are two types of diversification, related and unrelated diversification.
a. Growth - new products and new markets should be selected which offer
prospects for growth, which the existing product market mix does not.
b. Investing surplus funds not required for other expansion needs: but the
funds could be returned to shareholders.
c. The firm's strengths match the opportunity, as when outstanding new products
have been developed by the company's research and development department.
The profit opportunities from diversification are high.
Related diversification
Horizontal integration refers to development into activities which are competitive with or directly complementary to a company's present activities. Sony, with its PlayStation, started to compete in computer games.
Vertical integration occurs when a company becomes its own:
a. supplier of raw materials, components or services (backward vertical integration)
b. distributor or sales agent (forward vertical integration), for example where a manufacturer of synthetic yarn begins to produce shirts from the yarn instead of selling it to other shirt manufacturers.

Advantages of vertical integration


a. To secure supply of components or raw materials with more control. Supplier
bargaining power is reduced.
b. Strengthen the relationships and contacts of the manufacturer with the final
consumer of the product.
c. Win a share of the higher profits.
d. Pursue a differentiation strategy more effectively.
e. Raise barriers to entry.
Disadvantages of vertical integration
a. Over-concentration: a company places too many bets on the same end-market product.
b. The firm fails to benefit from any economies of scale or technical advances in
the industry to which it has diversified. This is why in the publishing industry
most printing is subcontracted to the specialist printing firms, who can work
machinery to capacity by doing work for many firms.

Unrelated diversification - conglomerate diversification


Unrelated diversification or conglomerate diversification is very unfashionable
now but it has been a key strategy for many companies in Asia.

Advantages of conglomerate diversification
a. Risk spreading: entering new products into new markets offers protection against failure of current products and markets.
b. High profit opportunities: the ability to move into high-growth, profitable industries, especially important if the current industry is in decline.
c. Escape from the present business if competition is too hot.
d. Better access to capital markets.
e. No other way to grow: expansion in the existing industry might lead to monopoly and government investigation.
f. Use surplus cash.
g. Exploit under-utilized resources.
h. Obtain cash or other financial advantages.
i. Use a company's image and reputation in one market to build products and services in another market.
Disadvantages of conglomerate diversification
a. The dilution of shareholders' earnings if diversification is into growth industries with high P/E ratios.
b. Lack of a common identity and purpose in a conglomerate organization. A conglomerate will be successful only if it has a high quality of management and financial ability at head office, where diverse operations are brought together.
c. Failure in one business will drag down the rest.
d. Lack of management experience.
e. No good for shareholders: shareholders can spread risk themselves by buying shares in companies in different industries.
Product Mix Optimization for Maximum Profitability
Decision-making is based on complex combinations of dynamic factors (a small optimization sketch follows this list):
Supply chain performance management
Profits
Manufacturing capacity
Capabilities and costs
Supply chain constraints
Market opportunities
Products might not be independent
Customer loyalty might be lost when a product is not available
Price elasticity of demand
Labour may be specialized and unable to switch between products
Population increase
Changes in the level of income of buyers
Marketing influences
Finance influences
Production influences
Management ability and effort
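A common way to formalize such a product-mix decision is as a small linear programme. The sketch below uses scipy's linprog with wholly assumed profit and capacity figures; it illustrates the idea only and is not a method prescribed by the text.

from scipy.optimize import linprog

# Assumed data: two products with unit profits of 40 and 30, sharing limited
# machine and labour hours. linprog minimises, so profits are negated.
profit = [-40, -30]                      # maximise 40*x1 + 30*x2
A_ub = [[2, 1],                          # machine hours used per unit
        [1, 3]]                          # labour hours used per unit
b_ub = [100, 90]                         # available machine and labour hours

result = linprog(profit, A_ub=A_ub, b_ub=b_ub,
                 bounds=[(0, None), (0, None)], method="highs")
print(result.x, -result.fun)             # optimal quantities and total profit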

Product strategies
Marketing strategies which are based on the product element are called product strategies. Product strategies are of two types:
(a) Strategies based on the product mix: product modification, product elimination, diversification
(b) Strategies based on the product life cycle
Product life cycle: stages, characteristics and marketing mix
Stages: Introduction, Growth, Maturity, Decline.
Sales: low in introduction, fast growth in the growth stage, slow growth in maturity, and decline in the decline stage.
Other characteristics tabulated for each stage: profit, competition, differential advantage, strategic thrust and customer targets.
Marketing-mix elements tabulated for each stage: product, price, promotion, advertising focus and distribution.

Fig: Analysis of the marketing personality/outlook of the individual/firm
Existing products, existing markets: immobile, non-innovative.
New products, existing markets: immobile, innovative.
Existing products, new markets: mobile, non-innovative.
New products, new markets: mobile, innovative.

To portray alternative corporate growth strategies, Igor Ansoff presented a matrix that focused on the firm's present and potential products and markets (customers). By considering ways to grow via existing products and new products, in existing and new markets, there are four possible product-market combinations. Ansoff's matrix thus provides four different growth strategies.

BCG Model
The BCG matrix plots business growth rate (high/low) against relative market share (high/low), giving four cells: Stars (high growth, high share), Question marks (high growth, low share), Cash cows (low growth, high share) and Dogs (low growth, low share).
Porter's Generic Strategies
Strategic dimensions and group mapping
Competitor analysis
Industry analysis
Fragmentation, maturity and decline
Competitive strategy
Grand strategies: stability strategies, expansion strategies, retrenchment strategies, combination strategies
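As a simple illustration of the BCG logic, the sketch below classifies a business unit from its market growth rate and relative market share; the 10% growth and 1.0 relative-share cut-offs are conventional assumptions, not values from the text.

# A minimal sketch (assumed thresholds) of classifying business units into
# BCG quadrants from market growth rate and relative market share.
def bcg_quadrant(growth_rate_pct, relative_market_share,
                 growth_cutoff=10.0, share_cutoff=1.0):
    high_growth = growth_rate_pct >= growth_cutoff
    high_share = relative_market_share >= share_cutoff
    if high_growth and high_share:
        return "Star"
    if high_growth:
        return "Question mark"
    if high_share:
        return "Cash cow"
    return "Dog"

print(bcg_quadrant(15, 1.8))   # Star
print(bcg_quadrant(4, 0.4))    # Dog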

Corporate restructuring
Transnationalisation of world economy
World trade organization
The Uruguay Round of trade negotiations, after more than seven years of deliberation, was wrapped up on 14th Dec, 1993 and was formalized by more than 120 countries on April 15, 1994. The WTO came into existence on Jan 1, 1995.
Functions:
1. To facilitate the implementation, administration and operation of Uruguay
round agreements.
2. To review national trade policies
3. To provide a forum for negotiations among member countries on their
multilateral trade relations
4. To cooperate with other international institutions, especially the IMF and
the World Bank in order to ensure more meaningful compatibility in global
economic policies
5. To administer the trade dispute settlement procedures

Agreement on agriculture:
The tariffs resulting from the transformation of non-tariff barriers, as well as other tariffs on agricultural products, are to be reduced on an average by 36% in the case of developed countries and by 24% in the case of developing countries.
Balanced Scorecard
The balanced scorecard is a strategic planning and management system that is
used extensively in business and industry, government, and nonprofit
organizations worldwide to align business activities to the vision and strategy of
the organization, improve internal and external communications, and monitor
organization performance against strategic goals. It was originated by Drs.
Robert Kaplan (Harvard Business School) and David Norton as a performance
measurement framework that added strategic non-financial performance
measures to traditional financial metrics to give managers and executives a
more 'balanced' view of organizational performance. While the phrase balanced
scorecard was coined in the early 1990s, the roots of this type of approach
are deep, and include the pioneering work of General Electric on performance
measurement reporting in the 1950s and the work of French process engineers
(who created the Tableau de Bord, literally a "dashboard" of performance
measures) in the early part of the 20th century.
The balanced scorecard has evolved from its early use as a simple performance
measurement framework to a full strategic planning and management system.
The new balanced scorecard transforms an organization's strategic plan from
an attractive but passive document into the "marching orders" for the
organization on a daily basis. It provides a framework that not only provides

performance measurements, but helps planners identify what should be done


and measured. It enables executives to truly execute their strategies.
This new approach to strategic management was first detailed in a series of
articles and books by Drs. Kaplan and Norton. Recognizing some of the
weaknesses and vagueness of previous management approaches, the balanced
scorecard approach provides a clear prescription as to what companies should
measure in order to 'balance' the financial perspective. The balanced scorecard
is a management system (not only a measurement system) that enables
organizations to clarify their vision and strategy and translate them into action. It
provides feedback around both the internal business processes and external
outcomes in order to continuously improve strategic performance and results.
When fully deployed, the balanced scorecard transforms strategic planning from
an academic exercise into the nerve center of an enterprise.
Kaplan and Norton describe the innovation of the balanced scorecard as follows:
"The balanced scorecard retains traditional financial measures. But financial
measures tell the story of past events, an adequate story for industrial age
companies for which investments in long-term capabilities and customer
relationships were not critical for success. These financial measures are
inadequate, however, for guiding and evaluating the journey that information
age companies must make to create future value through investment in
customers, suppliers, employees, processes, technology, and innovation."

Adapted from Robert S. Kaplan and David P. Norton, Using the Balanced
Scorecard as a Strategic Management System, Harvard Business Review
(January-February 1996): 76.
Perspectives

The balanced scorecard suggests that we view the organization from four
perspectives, and to develop metrics, collect data and analyze it relative to each
of these perspectives:
The Learning & Growth Perspective
This perspective includes employee training and corporate cultural attitudes
related to both individual and corporate self-improvement. In a knowledge-worker organization, people -- the only repository of knowledge -- are the main
resource. In the current climate of rapid technological change, it is becoming
necessary for knowledge workers to be in a continuous learning mode. Metrics
can be put into place to guide managers in focusing training funds where they
can help the most. In any case, learning and growth constitute the essential
foundation for success of any knowledge-worker organization.
Kaplan and Norton emphasize that 'learning' is more than 'training'; it also
includes things like mentors and tutors within the organization, as well as that
ease of communication among workers that allows them to readily get help on a
problem when it is needed. It also includes technological tools; what the Baldrige
criteria call "high performance work systems."
The Business Process Perspective
This perspective refers to internal business processes. Metrics based on this
perspective allow the managers to know how well their business is running, and
whether its products and services conform to customer requirements (the
mission). These metrics have to be carefully designed by those who know these
processes most intimately; with our unique missions these are not something
that can be developed by outside consultants.
The Customer Perspective
Recent management philosophy has shown an increasing realization of the
importance of customer focus and customer satisfaction in any business. These
are leading indicators: if customers are not satisfied, they will eventually find
other suppliers that will meet their needs. Poor performance from this
perspective is thus a leading indicator of future decline, even though the current
financial picture may look good.
In developing metrics for satisfaction, customers should be analyzed in terms of
kinds of customers and the kinds of processes for which we are providing a
product or service to those customer groups.
The Financial Perspective
Kaplan and Norton do not disregard the traditional need for financial data. Timely
and accurate funding data will always be a priority, and managers will do
whatever necessary to provide it. In fact, often there is more than enough
handling and processing of financial data. With the implementation of a
corporate database, it is hoped that more of the processing can be centralized
and automated. But the point is that the current emphasis on financials leads to
the "unbalanced" situation with regard to other perspectives. There is perhaps a
need to include additional financial-related data, such as risk assessment and
cost-benefit data, in this category.
Strategy Mapping

Strategy maps are communication tools used to tell a story of how value is
created for the organization. They show a logical, step-by-step connection
between strategic objectives (shown as ovals on the map) in the form of a cause-and-effect chain. Generally speaking, improving performance in the objectives
found in the Learning & Growth perspective (the bottom row) enables the
organization to improve its Internal Process perspective Objectives (the next row
up), which in turn enables the organization to create desirable results in the
Customer and Financial perspectives (the top two rows).

Corporate level (grand) strategies
Stability: no change, profit, pause/proceed with caution
Expansion: concentration, integration (horizontal, vertical), diversification (concentric: marketing related, technology related, marketing and technology related; conglomerate), internationalization (international, multidomestic, global, transnational), cooperation (merger, takeover: friendly or hostile, joint venture, strategic alliance: pro-competitive, non-competitive, competitive, pre-competitive)
Retrenchment: turnaround, divestment, liquidation
Combination: simultaneous, sequential, simultaneous and sequential
Fig: Tree of Strategic Alternatives

Types of merger
There are four types of merger, viz., horizontal merger, vertical merger, concentric merger and conglomerate merger. Horizontal mergers normally involve the merger of two or more companies which are producing similar products or rendering similar services, i.e. products or services which compete directly with each other. This type of merger normally results in a reduction in the number of players in that particular industry and may reduce or eliminate competition. Vertical mergers involve the merger of two companies, where one of them is an actual or potential supplier of goods or services to the other. The object of this kind of merger could be to ensure a source of supply or an outlet for products, and the effect may improve efficiency.
In concentric or congeneric mergers, the two companies may be related through the basic technologies, production process or markets. The merged company provides an extension of product line, market participation or technology to the surviving company. Such mergers provide greater opportunities to diversify into a related market having a higher return than it enjoyed earlier. Conglomerate mergers neither constitute the bringing together of competitors nor have a vertical connection; they involve a predominant element of diversification of activities. Thus, in this kind of merger, one company that derives most of its revenue from a particular industry acquires companies operating in other industries, with a view to obtaining greater stability of earnings through diversification or to obtaining benefits of economies of scale, etc.

What is industry lifecycle?
Like other living creatures, an industry also has its circle of life: the industry lifecycle imitates the human lifecycle. The stages of the industry lifecycle include fragmentation, shake-out, maturity and decline (Kotler 2003). These stages are described in the following sections.
What are the main aspects of industry lifecycle?

Fragmentation Stage
Fragmentation is the first stage of the new industry. This is the stage when the
new industry develops the business. At this stage, the new industry normally
arises when an entrepreneur overcomes the twin problems of innovation and
invention, and works out how to bring the new products or services into the
market (Ayres et al., 2003). For example, air travel services of major airlines in
Europe were sold to the target market at a high price. Therefore, the majority of
airlines' customers in Europe were those people with high incomes who could
afford premium prices for faster travel.
In 1985, Ryanair made a huge change in the European airline industry. Ryanair
was the first airline to engage low-cost airlines in Europe. At that time, Ryanair's
services were perceived as the innovation of the European airline industry (Le
Bel, 2005). Ryanair tickets are half the price of British Airways. Some of its sales

promotions were as low as 0.01. This made people think that air travel was not
just made for the rich, but everybody (Haley & Tan 1999).
Ryanair overcame the twin problems of innovation and invention in the airline
industry by inventing air travel services that could serve passengers with tight
budgets and those who just wanted to reach their destination without breaking
their bank savings. Ryanair achieved this goal by eliminating unnecessary
services offered by traditional airlines (Kaynak & Kucukemiroglu, 1993). It does
not offer free meals, uses paper-free air tickets, gets rid of mile collecting
scheme, utilises secondary airports, and offers frequent flights. These techniques
help Ryanair save time and costs spent in airline business operation (Haley &
Tan 1999).
Shake-out
Shake-out is the second stage of the industry lifecycle. It is the stage at which a
new industry emerges. During the shake-out stage, competitors start to realise
business opportunities in the emerging industry. The value of the industry also
quickly rises (Ayres et al., 2003).
For example, many people die and suffer because of cigarettes every year. Thus,
the UK government decided to launch a campaign to encourage people to quit
smoking. Nicorette, one of the leading companies is producing several nicotine
products to help people quit smoking. Some of its well-known products include
Nicorette patches, Nicolette gums and Nicorette lozenges (Nicorette 2007).
Smokers began to see an easy way to quit smoking. The new industry started to
attract brand recognition and brand awareness among its target market during
the shake-out stage (Hendrickson et al., 2006). Nicorette's products began to
gain popularity among those who wanted to quit smoking or those who wanted
to reduce their daily cigarette consumption.
During this period, another company realised the opportunity in this market and
decided to enter it by launching nicotine product ranges, including Nic Lite gum
and patches. It recently went beyond the UK border after the UK government
introduced a non-smoking policy in public places, including pubs and nightclubs.
This business threat created a new business opportunity in the industry for Nic
Lite to launch a new nicotine-related product called Nic Time (ABC News 2006).
Nic Time is a whole new way for smokers to "get a cigarette": an eight-ounce
bottle contains a lemon-flavoured drink laced with nicotine, the same amount of
nicotine as two cigarettes (ABC News 2006). Nic Lite was first available at Los
Angeles airports for smokers who got uneasy on flights, but now the nicotine soft
drinks are available in some convenience stores (ABC News 2006).
Maturity
Maturity is the third stage in the industry lifecycle. Maturity is a stage at which
the efficiencies of the dominant business model give these organisations
competitive advantage over competition (Kotler, 2003). The competition in the
industry is rather aggressive because there are many competitors and product
substitutes. Price, competition, and cooperation take on a complex form
(Gottschalk & Saether, 2006). Some companies may shift some of the production
overseas in order to gain competitive advantage.
For example, Toyota is one of the world's leading multinational companies,
selling automobiles to customers worldwide. The export and import taxes mean

that its cars lose competitiveness to the local competitors, especially in the
European automobile industry. As a result, Toyota decided to open a factory in
the UK in order to produce cars and sell them to customers in the European
market (Toyota, 2007).
The haute couture fashion industry is another good example. There are many
western-branded fashion labels that manufacture their products overseas by
cooperating with overseas partners, or they could seek foreign suppliers who
specialise in particular materials or items. For instance, Nike has factories in
China and Thailand as both countries have cheap labour costs and cheap, quality
materials, particularly rubber and fabric. However, their overseas partners are
not allowed to sell shoes produced for Adidas and Nike (Harrison & Boyle, 2006).
The items have to be shipped back to the US, and then will be exported to
countries worldwide, including China and Thailand.
Decline
Decline is the final stage of the industry lifecycle. Decline is a stage during which
a war of slow destruction between businesses may develop and those with heavy
bureaucracies may fail (Segil, 2005). In addition, the demand in the market may
be fully satisfied or suppliers may be running out (Ayres et al., 2003).
In the stage of decline, some companies may leave the industry if there is no
demand for the products or services they provide, or they may develop new
products or services that meet the demand in the market. In such cases, this will
create a new industry (Francis & Desai, 2005).
For example, at the beginning of the communication industry, pagers were used
as the main communication method among people working in the same
organisation, such as doctors and nurses. Then, the cutting edge of the
communication industry emerged in the form of the mobile phone. The
communication process of pagers could not be accomplished without telephones.
To send a message to another pager, the user had to phone the call-centre staff
who would type and send the message to another pager. On the other hand,
people who use mobile phones can make a phone-call and send messages to
other mobiles without going through call-centre staff (Hui et al., 2002).
In recent years, the features of mobile phones have been developing rapidly and
continually. Now people can use mobiles to send multimedia messages, take
pictures, check email, surf the internet, read news and listen to music (Hui et al.,
2002). As mobile phone feature development has reached saturation, the new innovation of mobile phone technology has incorporated the use of computers.
The launch of personal digital assistants (PDA) is a good example of the decline
stage of the mobile phone industry as the features of most mobiles are similar.
PDAs are hand-held computers that were originally designed as personal organisers but have become much more multi-faceted in recent years. PDAs are known as pocket computers or palmtop computers (Wikipedia, 2007). They have
many uses for both mobile phones and computers such as computer games,
global positioning system, video recording, typewriting and wireless wide-area
network (Wikipedia, 2007).
How do you use industry lifecycle analysis?
It is important for companies to understand the use of the industry lifecycle

because it is a survival tool for businesses to compete in the industry effectively


and successfully (Baum & McGahan, 2004). The main aspects in terms of
strategic issues of the industry lifecycle are described below:
Competing over emerging industries

The game rules in industry competition can be undetermined and the


resources may be constrained. Thus, it is vital for firms to identify market
segments that will allow them to secure and sustain a strong position
within the industry (Ayres et al., 2003).

The product in the industry may not be standardised so it is necessary for


companies to obtain resources needed to support new product
development and rapid company expansion (Ayres et al., 2003).

The entry barriers may be low and the potential competition may be high,
thus companies must adapt to shift the mobility barriers (Ayres et al.,
2003).

Consumers may be uncertain in terms of demand. As a result, determining


the time of entry to the industry can help companies to take business
opportunities before their rivals (Ayres et al., 2003).

Competing during the transition to industry maturity

When competition in the industry increases, firms can have a sustainable


competitive advantage that will provide a basis for competing against
other companies (Baum & McGahan, 2004).

The new products and applications are harder to come by, while buyers
become more sophisticated and difficult to understand in the maturity
stage of the industry lifecycle. Thus, consumer research should be carried
out and this could help companies in building up new product lines (Baum
& McGahan, 2004).

Slower industry growth constrains capacity growth and often leads to


reduced industry profitability and some consolidation. Therefore,
companies can focus greater attention on costs through strategic cost
analysis (Baum & McGahan, 2004).

The change in the industry is rather dynamic, and an understanding of the


industry lifecycle can help companies to monitor and tackle these changes
effectively (Baum & McGahan, 2004). Firms can develop organisational
structures and systems that can facilitate the transition (Baum &
McGahan, 2004).

Some companies may seek business opportunities overseas when the


industries reach the maturity stage because during this stage, the demand
in the market starts to decline (Baum & McGahan, 2004).

Competing in declining industries


The characteristics of declining industries include the following:

Declining demand for products

Pruning of product lines

Shrinking profit margins

Falling research and development and advertising expenditure

Declining number of rivals as many are forced to leave the industry

For companies to survive the dynamic environment, it is necessary for them to:

Measure the intensity of competition (Baum & McGahan, 2004)

Assess the causes of decline (Baum & McGahan, 2004)

Single out a viable strategy for decline such as leadership, liquidation and
harvest (Baum & McGahan, 2004).

Where do you find information on the industry lifecycle?


The information, model and theory for the industry lifecycle can be found in
many business management books. Several variations of lifecycle model have
been developed to address the development and transition of products, market
and industry. The models are similar but the number and names of each stage
can be different (Baum & McGahan, 2004). The following are some of the major
models:

Fox, 1973: pre-commercialisation, introduction, growth, maturity and decline
Wasson, 1974: market development, rapid growth, competitive turbulence, saturation/maturity and decline
Anderson & Zeithaml, 1984: introduction, growth, maturity and decline
Hill & Jones, 1998: fragmentation, growth, shake-out, maturity and decline

Conclusion
The industry lifecycle imitates the human lifecycle. The industry lifecycle comprises four stages: fragmentation, growth, maturity and decline. An
understanding of the industry lifecycle can help competing companies survive
during periods of transition. Information on the industry lifecycle can be found in
most business management books. Several variations of the lifecycle model have
been developed to address the development and transition of products, market
and industry. The models are similar but the number of stages and names of
each may differ. Major models include those developed by Fox (1973), Wasson
(1974), Anderson & Zeithaml (1984), and Hill & Jones (1998).
Strategic dimensions
A careful balance of four interrelated elements: people, space, time and money.

Porter's Generic Strategies


If the primary determinant of a firm's profitability is the attractiveness of the industry
in which it operates, an important secondary determinant is its position within that
industry. Even though an industry may have below-average profitability, a firm that is
optimally positioned can generate superior returns.
A firm positions itself by leveraging its strengths. Michael Porter has argued that a
firm's strengths ultimately fall into one of two headings: cost advantage and
differentiation. By applying these strengths in either a broad or narrow scope, three
generic strategies result: cost leadership, differentiation, and focus. These strategies
are applied at the business unit level. They are called generic strategies because
they are not firm or industry dependent. The following table illustrates Porter's
generic strategies:
Porter's Generic Strategies
Broad target scope (industry wide): a low-cost advantage gives the Cost Leadership Strategy; product uniqueness gives the Differentiation Strategy.
Narrow target scope (market segment): a low-cost advantage gives the Focus Strategy (low cost); product uniqueness gives the Focus Strategy (differentiation).

Cost Leadership Strategy

This generic strategy calls for being the low cost producer in an industry for a given
level of quality. The firm sells its products either at average industry prices to earn a
profit higher than that of rivals, or below the average industry prices to gain market
share. In the event of a price war, the firm can maintain some profitability while the
competition suffers losses. Even without a price war, as the industry matures and
prices decline, the firms that can produce more cheaply will remain profitable for a
longer period of time. The cost leadership strategy usually targets a broad market.
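To make the arithmetic behind this pricing logic concrete, the short sketch below uses purely hypothetical unit costs and prices; the figures are illustrative assumptions and are not taken from any study or company.

```python
# Hypothetical illustration of the cost-leadership pricing logic described above.
# All figures are invented for illustration only.

industry_price = 100.0   # assumed average industry selling price per unit
leader_cost = 70.0       # assumed unit cost of the low-cost producer
rival_cost = 85.0        # assumed unit cost of a typical higher-cost rival

def unit_margin(price: float, cost: float) -> float:
    """Profit earned on each unit sold at the given price."""
    return price - cost

# Selling at the average industry price, the cost leader earns a wider margin.
print(unit_margin(industry_price, leader_cost))  # 30.0 per unit for the leader
print(unit_margin(industry_price, rival_cost))   # 15.0 per unit for the rival

# Pricing below the industry average (say 80) to gain share, the leader stays
# profitable while the higher-cost rival is pushed into a loss.
discount_price = 80.0
print(unit_margin(discount_price, leader_cost))  # 10.0 per unit for the leader
print(unit_margin(discount_price, rival_cost))   # -5.0 per unit for the rival
```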

Some of the ways that firms acquire cost advantages are by improving process
efficiencies, gaining unique access to a large source of lower cost materials, making
optimal outsourcing and vertical integration decisions, or avoiding some costs
altogether. If competing firms are unable to lower their costs by a similar amount, the
firm may be able to sustain a competitive advantage based on cost leadership.
Firms that succeed in cost leadership often have the following internal strengths:

Access to the capital required to make a significant investment in production assets; this investment represents a barrier to entry that many firms may not overcome.

Skill in designing products for efficient manufacturing, for example, having a small component count to shorten the assembly process.

High level of expertise in manufacturing process engineering.

Efficient distribution channels.

Each generic strategy has its risks, including the low-cost strategy. For example,
other firms may be able to lower their costs as well. As technology improves, the
competition may be able to leapfrog the production capabilities, thus eliminating the
competitive advantage. Additionally, several firms following a focus strategy and
targeting various narrow markets may be able to achieve an even lower cost within
their segments and as a group gain significant market share.
Differentiation Strategy
A differentiation strategy calls for the development of a product or service that offers
unique attributes that are valued by customers and that customers perceive to be
better than or different from the products of the competition. The value added by the
uniqueness of the product may allow the firm to charge a premium price for it. The
firm hopes that the higher price will more than cover the extra costs incurred in
offering the unique product. Because of the product's unique attributes, if suppliers
increase their prices the firm may be able to pass along the costs to its customers
who cannot find substitute products easily.
Firms that succeed in a differentiation strategy often have the following internal
strengths:

Access to leading scientific research.

Highly skilled and creative product development team.

Strong sales team with the ability to successfully communicate the perceived
strengths of the product.

Corporate reputation for quality and innovation.

The risks associated with a differentiation strategy include imitation by competitors
and changes in customer tastes. Additionally, various firms pursuing focus strategies
may be able to achieve even greater differentiation in their market segments.
Focus Strategy
The focus strategy concentrates on a narrow segment and within that segment
attempts to achieve either a cost advantage or differentiation. The premise is that the
needs of the group can be better serviced by focusing entirely on it. A firm using a
focus strategy often enjoys a high degree of customer loyalty, and this entrenched
loyalty discourages other firms from competing directly.

Because of their narrow market focus, firms pursuing a focus strategy have lower
volumes and therefore less bargaining power with their suppliers. However, firms
pursuing a differentiation-focused strategy may be able to pass higher costs on to
customers since close substitute products do not exist.
Firms that succeed in a focus strategy are able to tailor a broad range of product
development strengths to a relatively narrow market segment that they know very
well.
Some risks of focus strategies include imitation and changes in the target segments.
Furthermore, it may be fairly easy for a broad-market cost leader to adapt its product
in order to compete directly. Finally, other focusers may be able to carve out subsegments that they can serve even better.
A Combination of Generic Strategies - Stuck in the Middle?
These generic strategies are not necessarily compatible with one another. If a firm
attempts to achieve an advantage on all fronts, in this attempt it may achieve no
advantage at all. For example, if a firm differentiates itself by supplying very high
quality products, it risks undermining that quality if it seeks to become a cost leader.
Even if the quality did not suffer, the firm would risk projecting a confusing image.
For this reason, Michael Porter argued that to be successful over the long-term, a
firm must select only one of these three generic strategies. Otherwise, with more
than one single generic strategy the firm will be "stuck in the middle" and will not
achieve a competitive advantage.
Porter argued that firms that are able to succeed at multiple strategies often do so by
creating separate business units for each strategy. By separating the strategies into
different units having different policies and even different cultures, a corporation is
less likely to become "stuck in the middle."
However, there exists a viewpoint that a single generic strategy is not always best
because within the same product customers often seek multi-dimensional
satisfactions such as a combination of quality, style, convenience, and price. There
have been cases in which high quality producers faithfully followed a single strategy
and then suffered greatly when another firm entered the market with a lower-quality
product that better met the overall needs of the customers.
Generic Strategies and Industry Forces
These generic strategies each have attributes that can serve to defend against
competitive forces. The following table compares some characteristics of the generic
strategies in the context of Porter's five forces.
Generic Strategies and Industry Forces

Entry Barriers
- Cost Leadership: Ability to cut price in retaliation deters potential entrants.
- Differentiation: Customer loyalty can discourage potential entrants.
- Focus: Focusing develops core competencies that can act as an entry barrier.

Buyer Power
- Cost Leadership: Ability to offer lower price to powerful buyers.
- Differentiation: Large buyers have less power to negotiate because of few close alternatives.
- Focus: Large buyers have less power to negotiate because of few alternatives.

Supplier Power
- Cost Leadership: Better insulated from powerful suppliers.
- Differentiation: Better able to pass on supplier price increases to customers.
- Focus: Suppliers have power because of low volumes, but a differentiation-focused firm is better able to pass on supplier price increases.

Threat of Substitutes
- Cost Leadership: Can use low price to defend against substitutes.
- Differentiation: Customers become attached to differentiating attributes, reducing the threat of substitutes.
- Focus: Specialized products and core competency protect against substitutes.

Rivalry
- Cost Leadership: Better able to compete on price.
- Differentiation: Brand loyalty keeps customers from rivals.
- Focus: Rivals cannot meet differentiation-focused customer needs.

Strategic group mapping


If you want to understand your environment and its implications in greater depth, it might be
helpful to look more widely and add your beneficiary needs into the mix. In this way, you can
also consider the important factors affecting other organisations in your specialist sector
(perhaps health or social care), or for your field of operation (perhaps crime prevention or
victim support). This is often called your strategic group.
Take the top five other players in your strategic group. List them. Develop a profile for each:

What services do they provide?

What beneficiary group do they work with?

What's their impact?

What might their plans be for the future? How might you create greater impact by
reconsidering your relationship with them?

It's important not only to think about who these other players are, but also about the
marketplace you each work in and how this could affect your future strategies. To help with
this, think about the two most important factors driving success (or ensuring outcomes) for
your service users or beneficiaries. Examples that people sometimes come up with are:

being able to access the service immediately

having all their needs met in one place

having a tailored service based on their unique needs.

You will have your own factors for your beneficiaries. Once you've picked the top two, draw
up a matrix showing each factor as in the example below.
Example of strategic group map

Description of the diagram


The diagram shows a square divided into quadrants with 'Immediate access' on the y-axis
and 'Tailored service' on the x-axis. Drawn in the appropriate position on the grid (with
regard to these two factors) are the other players. The size of the circle that represents each
corresponds to their size in the marketplace.
Create your own strategic group map
Plot out each of the other players on this matrix. You could draw a circle for each that gives
an idea of relative size. Put your organisation on there too. Where are the gaps? Where are
the overlaps? What are the options for change?
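If you prefer to draw the map programmatically rather than by hand, the following minimal sketch uses matplotlib; the player names, factor scores and relative sizes are hypothetical placeholders, and the two axes simply reuse the example factors above ('tailored service' and 'immediate access').

```python
# Minimal strategic group map: one bubble per player, positioned by the two
# driving factors, with bubble area roughly proportional to relative size.
# All names, scores and sizes below are hypothetical placeholders.
import matplotlib.pyplot as plt

players = {
    # name: (tailored-service score, immediate-access score, relative size)
    "Our organisation": (7, 4, 30),
    "Player A": (2, 8, 60),
    "Player B": (8, 8, 25),
    "Player C": (5, 2, 45),
}

fig, ax = plt.subplots()
for name, (tailored, access, size) in players.items():
    ax.scatter(tailored, access, s=size * 20, alpha=0.4)  # bubble for the player
    ax.annotate(name, (tailored, access), ha="center", va="center", fontsize=8)

ax.set_xlabel("Tailored service")    # first driving factor on the x-axis
ax.set_ylabel("Immediate access")    # second driving factor on the y-axis
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
ax.set_title("Strategic group map (illustrative)")
plt.show()
```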

Financial Globalization and Financial Stability


Globalization is a complex process with many facets, and financial globalization is among its
most powerful dimensions. It has had a great impact on the global economy and has brought a
remarkable change in cross-border financial and capital flows. Financial globalization also has a
great impact on international risk-sharing. Among the many aspects of financial globalization,
its relationship with financial stability is one of the most important.
Financial globalization has brought massive change to market operators and institutions,
expanding cross-border asset holdings and raising the international profile of financial stability
in economic markets. These changes have been called the second wave of financial globalization.
When financial markets cannot perform at their best because of a persistent predicament, the
situation is called financial instability. Financial globalization plays an important role in
counteracting this instability. First, it replaces the traditional government-controlled exchange
rate with a flexible exchange rate system. In addition, the careful application of liberalization
and the building of institutions are crucial factors in all emerging markets.
Reasons for financial globalization
Technological advancement is considered the prime cause of financial globalization. The
transport and communications sectors in particular have experienced enormous growth, which
has changed the financial system. The combined effect of technological advancement and the
expansion of financial liberalization has driven active financial globalization in today's global
economy.
History
Financial globalization was accompanied by major banking crises that affected almost all
countries in the world. The first impact was noticed in the Nordic countries and Japan in the
1980s; the Mexican crisis followed in 1994, the banking crisis in the Asian countries took place
during 1997-98, and the Russian crisis occurred in 1998.
Impact of financial globalization and financial stability
Earlier the system was mainly politically dominated, but financial globalization and financial
stability have reformed the entire system and given birth to a market-directed system. This
system plays an important role in determining the conditions of flexible access to the economy
and exchange rates, and it also helps countries cope with financial crises.
Financial globalization has had a massive impact on countries and regions across the world. A
complete transition has been observed: in place of the bank-centred financial system, a
market-driven financial system has taken charge. As a result, the banking sector declined, and
banks had to search for other options in domestic and global markets to rejuvenate the sector.
Advantages of Financial Globalization and Financial Stability
It can be said that financial globalization and financial stability have produced a boom in the
economic sector. Many options have opened up, and sources of global financing have become
cheaper and more easily accessible. Thanks to financial globalization, numerous countries now
enjoy financial stability. Most importantly, for developing countries financial globalization and
financial stability are a real boon: they have benefited greatly from the securities markets of the
developed countries. Furthermore, financial stability has been very effective in keeping the
inflation rate under control. In short, financial globalization and financial stability are important
steps for boosting economies in countries worldwide.
Technology and Innovation in Financial Services: Scenarios to 2020
Innovation has already transformed the financial services (FS) industry. Fourteen years ago,
who expected the massive growth of e-banking and e-brokerage? Who envisioned the entry
of new players such as retailers and telecommunications providers into the financial services
arena thanks to their ability to harness the power of innovative technology? Who predicted
that technology would enable the outsourcing and offshore contracting of core financial
processes to low-cost countries such as India?
The business environment continues to change today, and the financial services sector needs
to confront many issues to remain competitive. In particular, technology and innovation are
board level issues; they create opportunities and pose threats.
To explore these issues, the World Economic Forum and representatives of the financial
services, information technology and telecommunication communities set out to develop
scenarios for the future of financial services and how they might be affected by innovation.
The objective of these scenarios is to explore how innovation will transform access to, and
delivery of, financial services by the year 2020.
From the many key drivers, project participants, in particular leading industry
representatives, identified two crucial groups of questions.
1. How will the globalization of financial services evolve? Will it be further supported by
governments and regulators? What outcomes will we see in the next decade?
2. Will innovation be incremental or fundamental? Will it be driven by traditional or new
players? What types of innovation will we see, for example, in products and services,
distribution and sales channels, operations, and new business models?
Project participants worked from these questions to develop three very different but plausible
scenarios of the future of financial services over the next 14 years.
Global Ivy League describes a highly concentrated financial services sector dominated by a
small number of large, global players. Governments support globalization but take a very
conservative approach to customer protection and regulation of the sector. At the same time,
declining trust in digital media means customers favour the solidity of traditional financial
service providers. In this environment, a small number of financial services institutions
evolve into global powerhouses.

Next Frontier describes a world in which governments pursue deregulation and, as the title
reflects, technology enables a great variety of new business models to emerge. The result is a
financial services industry as an ecosystem of highly specialized providers, each focusing on
creating a competitive advantage over incumbents. There are many new players, including
telecommunications companies, peer-to-peer financial services providers, processing
providers, retailers and Internet companies.
Innovation Islands describes a world in which globalization stalls due to geopolitical tensions
and global instability. Government policies toward the financial services sector differ widely
among countries. Three trends become apparent:
* "Leapfrogging": in large emerging economies such as China and India, government
regulation and investment in infrastructure fosters the local financial service industry,
expanding access to the poor and leading to new business models that "leapfrog" over
developed markets in areas such as mobile banking and flexible, low-cost operating models.
* "Business as usual": in other mainly developed economies such as the US or European
countries where innovation neither accelerates nor decelerates. There is only limited change
to business models.
* "Back to the past": in the remaining countries and regions, mainly in developing
economies. Governments increase control over the financial services sector but do not foster
local innovation; as a result, there is little progress and sometimes even regression in the
efficiency and quality of FS.

The Globalization Of Financial Services


In this age of globalization, the key to survival and success for many financial
institutions is to cultivate strategic partnerships that allow them to be
competitive and offer diverse services to consumers. In examining the barriers to
- and impact of - mergers, acquisitions and diversification in the financial
services industry, it's important to consider the keys to survival in this industry:
1. Understanding the individual client's needs and expectations
2. Providing customer service tailored to meet customers' needs and
expectations
In 2008, there were very high rates of mergers and acquisition (M&A) in the
financial services sector. Let's take a look at some of the regulatory history that
contributed to changes in the financial services landscape and what this means
for the new landscape investors now need to traverse.
Diversification Encouraged by Deregulation
Because large, international mergers tend to impact the structure of entire
domestic industries, national governments often devise and implement
prevention policies aimed at reducing domestic competition among firms.
Beginning in the early 1980s, the Depository Institutions Deregulation and
Monetary Control Act of 1980 and the Garn-St. Germain Depository Institutions Act of 1982
were passed.
By providing the Federal Reserve with greater control over non-member banks,

these two acts worked to allow banks to merge and thrift institutions (credit unions,
savings and loans, and mutual savings banks) to offer checkable deposits. These
changes also became the catalysts for the dramatic transformation of the U.S.
financial service markets in 2008 and the emergence of reconstituted players as
well as new players and service channels. (For more on this, check out our
Financial Crisis Survival Guide special feature.)
Nearly a decade later, the implementation of the Second Banking Directive in
1993 deregulated the markets of European Union countries. In 1994, European
insurance markets underwent similar changes as a result of the Third Generation
Insurance Directive of 1994. These two directives brought the financial services
industries of the United States and Europe into fierce competitive alignment,
creating a vigorous global scramble to secure customers that had been
previously unreachable or untouchable.
The ability for business entities to use the internet to deliver financial services to
their clientele also impacted the product-oriented and geographic diversification
in the financial services arena.
Going Global
Asian markets joined the expansion movement in 1996 when "Big Bang"
financial reforms brought about deregulation in Japan. Relatively far-reaching
financial systems in that country became competitive in a global environment
that was enlarging and changing swiftly. By 1999, nearly all remaining
restrictions on foreign exchange transactions between Japan and other countries
were lifted. (For background on Japan, see The Lost Decade: Lessons From
Japan's Real Estate Crisis and Crashes: The Asian Crisis.)
Following the changes in the Asian financial market, the United States continued
to implement several additional stages of deregulation, concluding with the
Gramm-Leach-Bliley Act of 1999. This law allowed for the consolidation of major
financial players, which pushed U.S.-domiciled financial service companies
involved in M&A transactions to a total of $221 billion in 2000. According to a
2001 study by Joseph Teplitz, Gary Apanaschik and Elizabeth Harper Briglia in
Bank Accounting & Finance, expansion of such magnitude involving trade
liberalization, the privatization of banks in many emerging countries and
technological advancements has become a rather common trend. (For more
insight, see State-Run Economies: From Public To Private.)
The immediate effects of deregulation were increased competition, market
efficiency and enhanced consumer choice. Deregulation sparked unprecedented
changes that transformed customers from passive consumers to powerful and
sophisticated players. Studies suggest that additional, diverse regulatory efforts
further complicated the running and managing of financial institutions by
increasing the layers of bureaucracy and number of regulations. (For more on
this topic, see Free Markets: What's The Cost?)
Simultaneously, the technological revolution of the internet changed the nature,
scope and competitive landscape of the financial services industry. Following
deregulation, the new reality has each financial institution essentially operating
in its own market and targeting its audience with narrower services, catering to
the demands of a unique mix of customer segments. This deregulation forced
financial institutions to prioritize their goals by shifting their focus from rate-setting and transaction-processing to becoming more customer-focused.
Challenges and Drawbacks of Financial Partnerships
Since 1998, the financial services industry in wealthy nations and the United
States has been experiencing a rapid geographic expansion; customers
previously served by local financial institutions are now targeted at a global
level. Additionally, according to Allen Berger and Robert DeYoung in their article
"Technological Progress and the Geographic Expansion of the Banking Industry"
(Journal of Money, Credit and Banking, September 2006), between 1985 and
1998 the average distance between a main bank and its affiliates within U.S.
multibank holding companies increased by more than 50%, from 123.4 miles
to 188.9 miles. This indicates that the increased ability of banks to make small
business loans at greater distances enabled them to suffer fewer diseconomies
of scale and boost productivity. (To learn more, check out Competitive
Advantage Counts.)
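As a quick arithmetic check of the "more than 50%" increase quoted above, using only the two distances cited:

```python
# Recompute the percentage increase in average main-bank-to-affiliate distance
# between 1985 and 1998, using the two figures cited from Berger and DeYoung.
dist_1985 = 123.4  # miles
dist_1998 = 188.9  # miles

increase = (dist_1998 - dist_1985) / dist_1985
print(f"Increase: {increase:.1%}")  # about 53.1%, i.e. "more than 50%"
```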
Deregulation has also been the major factor behind this geographic
diversification, and beginning in the early 1980s, a sequence of policy changes
implemented a gradual reduction of intrastate and interstate banking
restrictions.
In the European Union, a similar set of policy changes enabled banking
organizations and certain other financial institutions to extend their operations
across the member states. Latin America, the transitional economies of Eastern
Europe and other parts of the world also began to lower or eliminate restrictions
on foreign entry, thus enabling multinational financial institutions headquartered
in other countries to attain considerable market shares.
Transactions without Boundaries, Borders
Recent innovations in communications and information technology have resulted
in a reduction in diseconomies of scale associated with business costs faced by
financial institutions contemplating geographic expansion. ATM networks and
banking websites have enabled efficient long-distance interactions between
institutions and their customers, and consumers have become so dependent on
their newfound ability to conduct boundary-less financial transactions on a
continuous basis that businesses lose all competitiveness if they are not
technologically connected.
An additional driving force for financial service firms' geographic diversification
has been the proliferation of corporate combination strategies such as mergers,
acquisitions, strategic alliances and outsourcing. Such consolidation strategies
may improve efficiency within the industry, resulting in M&As, voluntary exit, or
forced withdrawal of poorly performing firms.
Consolidation strategies further empower firms to capitalize on economies of
scale and focus on lowering their unit production costs. Firms often publicly
declare that their mergers are motivated by a desire for revenue growth, an
increase in product bases, and for increased shareholder value via staff
consolidation, overhead reduction and by offering a wider array of products.
However, the main reason and value of such strategy combinations is often
related to internal cost reduction and increased productivity. (For further
reading, check out What Are Economies of Scale?)
Unfavorable facts about the advantages and disadvantages of the major
strategies used as tools for geographic expansion within the financial services
sector were obscured in 2008 by the very high rates of M&As, such as those
between NationsBank and Bank of America (NYSE:BAC), Travelers Group and
Citicorp (NYSE:C), and JP Morgan Chase (NYSE:JPM) and Bank One. Their dilemma was
to create a balance that maximized overall profit.
Conclusion
The conclusion regarding the impact, advantages and disadvantages of domestic
and international geographic diversification and expansion on the financial
service industry is that, with globalization, the survival and success of
many financial service firms lies in understanding and meeting the needs,

desires and expectations of their customers.


The most important and continually emerging factor for financial firms to operate
successfully in extended global markets is their ability to efficiently serve
discerning, highly sophisticated, better educated, more powerful consumers
addicted to the ease and speed of technology. Financial firms that do not
realize the significance of being customer-oriented are wasting their resources
and eventually will perish. Businesses that fail to recognize the impact of these
consumer-driven transformations will struggle to survive or cease to exist in a
newly forged global financial service community that has been forever changed
by deregulation. (To learn more about this industry, check out The Evolution of
Banking.)
What is meant by globalization? Discuss India's experience with
globalization since 1991.
Concept of Globalization: Globalization refers to the process of increasing
economic integration and growing economic interdependence between nations.
It means the integration of the domestic economy with the world economy.
India's experience with globalization: The industrial policy of 1991 aims to
globalize the Indian economy through reforms in foreign trade policy, such as
the abolition of import licensing, convertibility of the rupee, market determination of
foreign exchange rates, etc. These policy reforms are explained below.
(i) Import liberalization: Quantitative restrictions such as import licences and
quotas have been phased out. Under the new foreign trade policy most imports
have been placed on the Open General Licence (OGL) list, wherein automatic
permission is granted to import goods; an import licence is now needed for only
a small number of items. Quantitative restrictions on a large number of export
items have also been removed, and several items of import have been decanalised.
The Foreign Trade Policy 2004-09 seeks to double India's percentage share of
global merchandise trade within the next five years. The measures proposed in
the policy are to continue the process of liberalization, simplify procedures,
reduce transaction costs for Indian exporters and importers, and focus on areas
of core competency, namely agriculture, textiles, and the cottage and handicraft
sectors.

(ii) Rationalization of the tariff structure: The structure and pattern of customs
duties levied on imports of different commodities (known as the tariff structure)
had become very complex over the years. Since 1991 the peak tariff has been
reduced substantially and the tariff structure has been rationalized.

(iii) Reforms in foreign exchange management: Before 1991, the Government of
India exercised strict control over foreign exchange. Under the Foreign Exchange
Regulation Act (FERA), all Indian exporters had to surrender their foreign
exchange earnings to the Reserve Bank of India.

(iv) Reforms in Foreign Direct Investment (FDI): Now 100 percent foreign equity
is allowed in many industries. In other cases, the upper limit for foreign
investment has been raised from 40% to 74%. Automatic permission is given to
foreign technology agreements in high-priority industries. Imports of capital
goods also receive automatic approval where the foreign exchange required for
such imports is received through foreign equity.

(v) Capital market reforms: The major reforms in the capital market are as
follows:
1) The Capital Issues (Control) Act 1947 has been repealed; Indian companies
had faced bureaucratic delays in issuing securities because of this act.
2) Listing of companies on stock exchanges has been liberalized.
3) The role of Foreign Institutional Investors (FIIs) in Indian stocks has increased
tremendously due to the liberalization of foreign portfolio inflows.
4) Private mutual funds (both Indian and foreign) have been permitted to
operate, thereby ending the monopoly of UTI.
5) The Securities and Exchange Board of India (SEBI) has become the regulator
of the capital market.

What is WTO? State the objective and function of WTO.


The WTO has a General Council for its administration, which includes one permanent
representative of each member nation. It generally meets once a month in Geneva.
The highest policy-making authority is the WTO's Ministerial Conference, which is
held every two years.
The Director-General is the highest official of the organization and looks after its
day-to-day working. The General Council of the WTO elects the Director-General for
a four-year term.
WTO ministerial conferences

Conference    Year                    Place
I             Dec. 9-13, 1996         Singapore
II            May 18-20, 1998         Geneva
III           Nov. 30-Dec. 3, 1999    Seattle
IV            Nov. 9-14, 2001         Doha
V             Sep. 10-14, 2003        Cancun
VI            Dec. 13-18, 2005        Hong Kong

Italy's former trade minister Mr. Renato Ruggiero served as Director-General. Four Deputy
Directors-General are also appointed to assist the Director-General.
Like GATT, the WTO has its headquarters at Geneva. According to the World Trade
Report 2003 (WTR-2003), 146 countries were members of the WTO at the end of April
2003; membership has since risen to 151.
Objectives of WTO
1. To improve the standard of living of people in the member countries.
2. To ensure full employment and a broad increase in effective demand.
3. To enlarge production.
4. To enlarge trade in goods.
5. To enlarge production and trade in services.
6. To ensure the optimum utilization of world resources.
7. To accept the concept of sustainable development.
8. To protect the environment.
Functions of WTO
1. To provide facilities for the implementation, administration and operation of
multilateral and bilateral agreements on world trade.
2. To provide a platform for member countries to decide future strategies
related to trade and tariffs.
3. To administer the rules and processes related to dispute settlement.
4. To implement rules and provisions related to the trade policy review
mechanism.
5. To assist the IMF and IBRD in establishing coherence in universal economic
policy determination.
6. To ensure the optimum use of world resources.
7. To accept the concept of sustainable development.
8. To protect the environment.
9. To ensure the optimum utilization of world resources.
10. To enlarge production.
11. To enlarge trade in services.
The main WTO agreements can be divided into the following categories:
1. Agreement on agriculture: This provides a framework for the long-term reform of agricultural trade and domestic agricultural policies.

Values and ethics in management


What is Management ethics?
Ethics is the study of individual and collective moral awareness, judgment,
character and conduct.
Management ethics can be defined as the descriptive and normative study of
moral awareness, judgment, character and conduct as they relate to all levels of
managerial practice.
What is morality and how is it different from ethics?
Morality is defined as the customary, sociolegal practices and activities that are
considered importantly right and wrong; the rules that govern these activities;
and the values that are embedded, fostered, or pursued by those conventional,
sociolegal activities and practices.
Whereas management morality may allow one to succeed temporarily by
conforming to current social norms or organizational mores, management ethics
calls for the sensitive, reflective, and systematic endorsement of norms and
practices only after they have withstood critical challenge.
What are the major types of Ethical theories?
Teleological ethics theories
Teleological ethics theories maintain that good ends and/or results determine the
ethical values of action.

Ethical egoism - an action is good if it produces or tends to produce
results that maximize a particular person's self-interest as defined
by the individual, even at the expense of others.

Utilitarianism - an action is good if it produces or tends to produce
the greatest amount of satisfaction for the greatest number of people
(stakeholders).

Eudaimonism - an action is good if it promotes or tends to promote
the fulfillment of goals constitutive of human nature and its
happiness.

Deontological ethics theory


Deontological ethical theories maintain that responsibly fulfilling obligations,
following proper procedures, doing the right thing, and adhering to moral
standards determine the ethical value of actions. Deontological ethics maintains
that actions are morally right independent of their consequences; for example,
for the secularist it is right to keep promises, and for the religious it is right to
obey the Ten Commandments, regardless of personal costs or benefits. Among
the major types of deontological ethics are negative and positive rights theories,
social contract theories and social justice theories.

Negative and positive rights:
Negative rights - an action is right if it protects an individual from
unwarranted interference from government and/or other people in the
exercise of that right.
Positive rights - an action is right if it provides an individual with
whatever he or she needs to exist.

Social contract - an action is right if it conforms to the terms, conditions,
or rules for social well-being negotiated and agreed upon by competent parties.

Social justice - an action is right if it promotes the duty of fairness in
the distributive, retributive, and compensatory dimensions of social
benefits and burdens.

Virtue ethics theory

Individual character
Work character
Professional character

System Development Ethics Theory

Personal improvement
Organizational ethics
Extra- organizational ethics

What are the arguments for and against the social responsibility of
corporates?

Arguments against social responsibility are:
1. Contrary to the basic functions of business
2. Domination of business values
3. Inefficiency in the system

Arguments in favor of social responsibility are:
1. Business is a part of society
2. Avoidance of government regulation
3. Long-run self-interest of business
4. Traditional values

Social objective of Business and Enterprise

Social entrepreneurs play the role of change agents in the social sector by:
1. Adopting a mission to create and sustain social values
2. Recognizing and relentlessly pursuing new opportunities to serve that mission
3. Engaging in a process of continuous innovation, adaptation and learning
4. Creating a healthy environment by controlling pollution
5. Acting boldly without being limited by resources currently in hand
6. Helping religious, charitable and cultural institutions which serve the people
for their betterment, and
7. Helping the nation improve its economy by producing and distributing
products which are essential for that nation.

Environmental Ethics and Corporate Responsibility


Environmental ethics is the part of environmental philosophy which considers
extending the traditional boundaries of ethics from solely including humans to
including the non-human world. It exerts influence on a large range of disciplines
including law, sociology, theology, economics, ecology and geography.
There are many ethical decisions that human beings make with respect to the
environment. For example:

Should we continue to clear-cut forests for the sake of human consumption?

Should we continue to propagate?

Should we continue to make gasoline-powered vehicles?

What environmental obligations do we need to keep for future generations?

Is it right for humans to knowingly cause the extinction of a species for the
convenience of humanity?
Environmental ethics is becoming an important issue for many companies and
businesses as there is a greater push for corporate responsibility. Leaders of
organizations of all sizes and in all sectors face a growing number of issues
related to ethical behavior, particularly in terms of environmental responsibility.
As global understanding of the significant ecological and environmental ethics
issues we face expands and moves to the forefront of debates, it is even more
important for leaders to take action to both remedy the causes of the problem
and to act as models for other organizations and individuals. Although there are
many examples of responsible corporate and organizational environmental
governance and behavior, there is yet to emerge a global initiative aimed at
changing the face of environmentally ethical and responsible action that will
promote further corporate responsibility. This lack of understanding of issues of
environmental ethics and corporate responsibility occurs for a number of
reasons, one of which could be a lack of global consensus on the
importance of taking the necessary steps to remedy the problem. As one scholar
notes, "In our pluralistic societies, there is no uncontested common ethical
ground in general and no undisputed conception of environmental responsibility
in particular" (Enderle 2006) and as a result there is little unified action. If this
assessment is valid then it is necessary to first define a clear set of issues and
resolutions that organizations and leaders can agree upon.

One issue related to environmental ethics in the corporate and organizational
sphere, one that most will agree is vital for all leaders, is that there cannot be any
fuzzy distinction between what is good for business
and what is good for the environment. In other words, the consensus must be
that there should be a hierarchy of interests, one which places environmental and
sustainability concerns at the peak. "The claim is that the various benefits and
harms of development are incommensurable and not easily weighed, involving
differences between global and local goods: the benefits of selling wood fiber for
local populations versus the possible global benefits of a potential cure for
cancer or a contribution to the reduction in greenhouse gases... Whose interests
count for more?" (Light 2002). In short, the interests of the global good should
always outweigh those of the short-term monetary or other gains produced by
unethical or unsustainable practices and leadership decisions. Leaders in both
business and civil society have focused too much on the friction between them
and not enough on the points of intersection. The mutual dependence of
corporations and society implies that "both business decisions and social policies
must follow the principle of shared value. That is, choices must benefit both
sides. If either a business or a society pursues policies that benefit its interests at
the expense of the other, it will find itself on a dangerous path" (Porter 2006).
The bulk of recent peer-reviewed literature on the topic of environmental ethics
and corporate responsibility has focused a great deal on matters of
ethical behavior in the organizational context. Although there are still several
debates about possible courses of action that could and should be followed at
the management level, it is generally agreed that "environmental leadership is a
collective dynamic, wherein the difference between leaders and followers is
based more on degrees of social influence (through words and deeds) than on
traditional institutional power differentials" (Egri 2006). In addition, this also
suggests another popular paradigm that has emerged, one which insists that
businesses and organizations are increasingly accountable for their
environmentally ethical behavior, both within the organization and in the view of
the public at large. With growing global consciousness devoted to understanding
and championing issues of environmental sustainability, companies and their
practices on such a level can no longer be viewed as separate matters. "The
evolution of business and societal concern has led to businesses gradually re-embracing a formerly displaced social orientation for both social and
environmental well-being" (Panwar 2006). What this means, essentially, is that it
is even more important for the overall success or failure of a corporation or
organization to engage with public concerns and behave in a responsible way,
particularly as far as environmental issues are concerned. As Panwar goes on to
note, "When pursing [environmentally] ethical investments, individuals and
organizations seek out companies with a positive reputation while avoiding
companies linked to environmentally damaging practices, oppressive regimes,
etc.
The increase in environmental ethical investment has encouraged companies to
give attention to corporate responsibility" (Panwar 2006). How organizations give
this attention, however, is contested. The problem is becoming less a matter of
recognizing that environmental ethics issues are present and pressing and more

an issue of what to do, leadership-wise and on a meta-organizational level to


address these problems.
There are various approaches to solving the organizational (and for that matter,
national and international) problems surrounding effective and environmentally
ethical leadership. The main issue is, however, a lack of coherent ideology
surrounding organizational and corporate responses-even if the desire to be
more aware of environmental ethics matters exists. For example, according to a
survey conducted in December of 2006, "198 medium-sized to large
multinationals found that most said they lacked an active approach to
developing new business opportunities arising from meeting citizenship and
sustainability needs" (Marshall 2007). In order to remedy this crisis, many larger
organizations hired corporate responsibility officers to monitor such things as
environmental ethics. These individuals were charged with the task of reviewing
and analyzing current policy and practices to ensure that the highest ethical
standards were being met in a way that was conducive to the organization's
mission statement, budget, and overall corporate culture. In short, one approach
to solving the ethical demands that increasingly valued by both the public and
investors is to ensure corporate responsibility through the hiring of an outside
consultant. With larger organizations understanding the value of environmental
ethical responsibility, it is natural to assume that smaller entities will take notice
and follow suit.
Being an effective and responsible corporate leader is not simply something that
is an issue in the organizational context, but it extends to the community level as
well. Consider the case of Detroit and its rapidly dwindling reserve of natural
areas and resources. In Detroit, an urban ecosystem analysis undertaken by
American Forests revealed how land cover changes over the past 11 years have
affected environmental quality in a nine-county area of southeast Michigan.
"From 1991 to 2002 that region's open space declined by 10 percent while urban
areas increased dramatically-21 percent. As a result, the region lost $1 billion in
stormwater management services with a corresponding decline in water quality"
(Kollin 2006). "The companies were rated on their ability to provide good jobs for
employees, environmental sustainability, and healthy community relations"
(Mirren 2006).

INFORMAL CONSULTATION ON STRATEGIES FOR GENDER EQUALITY - IS MAINSTREAMING A DEAD END?
Following four days of discussion of strategies for gender equality in international
organisations, the gender focal points of 15 UN organisations and development
banks together with representatives of 5 donor agencies and resource persons
drew the following conclusions and recommendations related to lessons learned
in promoting institutional change and effective strategies for the future:
A. Gender mainstreaming is not a dead end strategy. But it is not always fully
understood and implemented in the right way.
* There is confusion about concepts: gender and women. However, one does
not exclude the other. The use depends on the context. Gender is most
fruitfully used as an adjective, not a noun, in concepts like gender equality and

gender analysis. Women (and girls) are essential actors and target groups in
relation to gender equality. It is important to analyse issues so that gender
differences and disparities appear and women are visible in relation to men.
* There is also confusion about goals and means. The goal is gender equality and
women's empowerment. To achieve the goal, different strategies and actions are
needed according to circumstances. Polarisation of approaches does not work. A
main strategy is gender mainstreaming of all policies, programmes and projects.
But women must not be lost in the mainstream, or 'malestream'! Targeted
women-specific policies, programmes and projects are necessary to strengthen
the status of women and promote mainstreaming. In any case, there must be
specialist support, institutional mechanisms and accountability.

* Agencies have chosen different bases for their action: human rights or
efficiency considerations. In fact, it is not a question of either/or. The human
rights basis is more fundamental, but is not always made explicit and in some
organisations it is not well understood or appreciated. The emphasis will vary
from one organisation to the other, but it is important to realise that the
promotion of gender equality implies a social transformation in society in
addition to more effective economic development and poverty reduction.
B. Global commitment. The international women's conferences from Mexico
(1975) to Beijing (1995) established a global consensus and commitment to
promote gender equality which was reaffirmed by the Millennium Summit (2000).
This is a long term commitment and it is important to keep the goal on the
agenda. Ongoing political and financial support from Member States is essential
to maintain focus on gender issues and ensure implementation of the
recommendations. The mandates and policy statements of UN organisations and
development banks should have conceptual clarity and explicit language so
people understand them. Commitments should be clearly spelled out, given
visibility and cultivated. Without pressure from governing bodies and top
management mandates and policy statements do not get implemented. External
advisory gender boards or panels can be used to answer questions and help
elucidate and depersonalise issues.
C. Organizational change. The challenge is to transform multilateral
organisations to actively pursue the goal of promoting gender equality and
women's empowerment through a process of gender mainstreaming and other
forms of organizational change. As gender equality often touches on power
relations, there can be strong discomfort and even resistance to change. To make
progress the following is needed:
- strong, active leadership
- incentives and accountability
- a critical mass of committed individuals
D. Tools. Useful tools include
- partnerships: internally and externally

- action plans to move from general policies to practice


- advocacy events to keep the issues visible
- simple, understandable language that is suitable for non-specialist audiences
- universal norms, country statistics and local knowledge
- sex-disaggregated data and analyses
- best practice dissemination to excite the imagination
- regular reporting on commitments, monitoring and evaluation
- gender champions in relevant positions with appropriate financial resources
- gender-balanced staffing and supports, including adequate training
- individual recognition for good practice, rewards and incentives
E. Top management. Responsibility for promoting gender equality is system-wide
and rests at the highest levels of management. The active support of top
management is crucial to increase action and impact. There must be more than
lip-service. Leaders need to issue regular instructions and walk the talk. The
responsibility of different levels of management must be clearly defined. The
most important responsibility must be to create an enabling environment for
gender equality. Measures, such as score-cards for the enabling environment, should be
put in place by top management. The gender units/advisers need to be proactive
in advocating with and assisting top management to obtain the necessary
support for gender equality. Also female top leaders need assistance on this.
There are competing concerns, goal congestion and resistance to change and to
addressing gender issues.
F. Enabling environment. An enabling environment for the promotion of gender
equality is important. Indicators of this include among others:
- percentage of core funds dedicated to gender issues
- gender inputs and outputs in corporate programmes and results frameworks
- gender issues integrated in corporate policy
- gender mainstreaming performance in performance appraisal reviews of staff
- gender perspectives in human resources policy: affirmative action in
recruitment, gender balance, work/life measures, harassment policy, value and
visibility of interdisciplinary skills in vacancy announcements and promotions
- regular gender audits including baseline data and monitoring
G. Gender units. To promote gender equality, funds and competent staff are
required. Corporate gender units are necessary. Regarding the level, resources
and institutional placement of the gender units, the key objective is maximum
and timely access to key corporate strategic processes and high-level
management. There must be a critical mass of staff resources/gender specialists

kept together and then ideally additional fulltime specialists in other units and
decentralized offices. There should be allocation of adequate resources and a
match of expectations and resources expressed in clear terms of reference of
catalytic functions of the gender unit
H. Capacity-building. Capacity-building for gender mainstreaming is still needed
in international organisations. A corporate capacity-building plan should be
elaborated and be the responsibility of the staff training and capacity-building
unit. The sustainability of efforts and investments is crucial, particularly in times
of high staff turn-over. It is important that policy informs practice as practice
should influence policy. Capacity-building should be tailor-made and demand-driven for various audiences: orientation for newcomers, gender modules in
other courses (e.g. project cycle), gender sensitivity training, gender analysis
training etc. Examples of successful practice are very useful and more cases
should be presented. But lessons learned cannot only be general, some must be
context-related.
I. Networks. Networks and alliances are important within the organisation and
outside. Internally, ownership should be shared with both women and men, and
between Headquarters and the field. Externally collaboration should be
established with governments, civil society and other UN organisations. Links
should be established and support provided for women's organisations and
groups, keeping in mind the character of the different groups and organisations.
It is also important to collaborate with business and professional organizations,
employers and trade unions, social and cultural associations, youth clubs etc.
J. Involvement of men. The involvement of men is important to promote gender
equality: more male staff in gender specialist posts, more male gender focal
points in other units and more male trainees/facilitators for gender capacity-building courses. Training curricula should be packaged with a results-oriented focus to appeal to managers. It is important to break stereotypes.
HIV/AIDS might be a good entry-point for talking with men about masculinity,
gender-based violence, trafficking etc. Contacts should be established with male
government and NGO representatives and they should be encouraged to
participate in advocacy events and discussions.
K. Accountability. To monitor progress it is important to define different roles and
responsibilities for staff members at different levels of accountability. Existing
accountability mechanisms need to be catalogued or mapped by level:
leadership (executive head), management (ADGs/Directors), gender advisers in
units, corporate gender units, country representatives. The role and
accountability should also be mapped for non-programme/non-technical units
such as evaluation/audit offices, programme budget offices and human
resources offices. Core competencies needed for fulfilling various responsibilities
need to be identified. Special attention should be given to the development of
results frameworks and systematic measurement of results. Even if planned
results are not achieved, efforts undertaken to meet gender commitments
should be acknowledged.
L. Mottos:

Whatever works, do it (don't be hung up on dogmatic approaches or language)


Be persistent (things are never fast and easy), passionate (both competence
and involvement are needed) and keep a sense of humour (there are many
perspectives and ways of thinking)
Don't compromise your dignity (there are limits to what a gender focal point
can or should do)
Damned if you do, damned if you don't (there are rarely simple solutions)
Don't reinvent the wheel, there are so many wheels (learn from the
experiences of others)
The more you advance, the more remains to be done (new opportunities entail
new challenges)

Pune: Women occupy just about 5% of positions on the boards of directors of Indian
firms listed on the Bombay Stock Exchange, a new study reveals.
The study, the first of its kind in India and the second in Asia, notes that only 59 (5.3%)
of the 1,112 directors of companies that form the elite BSE-100 group are
women. These directorships are held by 48 different women, the study said.
The percentage compares unfavorably with Canada, where women hold 15% of
directorships, the United States (14.5%), the United Kingdom (12.2%),
Hong Kong (8.9%) and Australia (8.3%).
The findings also reveal that 12 companies on the BSE-100 have more than one
female director, 7 companies have female executive directors and 2.5% of all
executive director roles are held by women. Less than half of the companies-only
46%- have women on their boards.
Of all the appointments made in 2010 (as of May 2010), 4.9% were women. Two
companies, including Jindal Steel & Power Ltd., have women as chairpersons, and two of
the country's most significant banks, ICICI Bank and Axis Bank, have female CEOs.
The report includes a Women on Boards League Table which ranks companies
listed on the BSE-100 in terms of the gender diversity of their boards, with those
with the highest percentage of women on their boards appearing at the top. At
the top of the list is JSW Steel Limited, which has three women (23.1%) on its
board of 13. Oracle Financial Services Software is second with two
women (22.2%) on its board of nine, and Piramal Healthcare is third with 20%
female board directors.
Both of Piramal Healthcare's two female directors hold executive directorships;
it is the only BSE-100 company with two female executive directors. In fourth place is
Axis Bank Ltd, with two (18.2%) of its 11 board members being female. In joint
fifth place, with two women (16.7%) out of 12 board members, are Lupin and
Titan Industries. The research looks at the representation of female directors on
the BSE-100 and ranks the companies in terms of the gender diversity of their
boards, with those with the highest percentage of women on their boards
appearing at the top.
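The percentages quoted in the study follow directly from the headcounts given above; as a quick recomputation using only those counts:

```python
# Recompute the board-diversity percentages quoted above from the raw counts.
boards = {
    "BSE-100 overall": (59, 1112),        # women directorships out of all directorships
    "JSW Steel": (3, 13),                 # 3 women on a board of 13
    "Oracle Financial Services": (2, 9),
    "Axis Bank": (2, 11),
    "Lupin / Titan Industries": (2, 12),
}

for name, (women, total) in boards.items():
    print(f"{name}: {women / total:.1%}")
# BSE-100 overall: 5.3%, JSW Steel: 23.1%, Oracle Financial Services: 22.2%,
# Axis Bank: 18.2%, Lupin / Titan Industries: 16.7%
```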


"We hope that this research will act as a catalyst for discussion in and amongst
corporate India on the need for greater gender diversity at senior levels," said
Shalini Mahtani, co-author of the report and founder of Community Business.
"Our aspiration is that in time we will have a true meritocracy in corporate India,
allowing each person, regardless of gender or background, to achieve their full
potential."

Shackles down the ages


Jemila Samerin
The Hindu/ Sunday, March 28, 2010

Prime Minister Manmohan Singh has described the historic women's reservation bill as a giant step towards the empowerment of women and a celebration of womanhood. The passing of the bill in the Rajya Sabha is a momentous, heart-warming step for India, and an inspirational trendsetter for women's empowerment in the entire region.

The movement for women's rights has broken many a fetter, but it has also forged new ones. Women today are a striking force and a great contributor to many working sectors, ready to accept challenges. But do we ever recognize what boundaries they are being forced to cross?

Sexual laws and moral standards have always been stricter for women. The female body was regarded down the ages as a mere vessel for the male creative fluids. Women were the soil in which men planted their seeds. This perception was also reflected in religious beliefs. Women were stripped of their creative role and burdened with the responsibility for the Original Sin. The Ten Commandments list wives among a man's possessions. Not surprisingly, therefore, in a traditional Jewish prayer men implored God, "Let not my offspring be a girl, for very wretched is the life of woman." And they gladly repeated every day: "Blessed be Thou, O Lord our God, for not making me a woman."
The sacred texts of every major religion enshrine the subjugation of women through myth (Eve causing the fall of man) or through code (the Shariah that values a woman's testimony as half that of a man and authorizes a man to beat and whip his wife to keep her obedient to him).
Apostle Paul made it clear that the head of the woman is the man: "For the man is not of the woman; but the woman of the man. And if they will learn anything, let them ask their husbands at home: for it is a shame for women to speak in the church."
Christianity excluded women from priesthood and other church offices. At the
same time, they were also expected to remain subservient to men at home. In all
societies, the obvious biological difference between men and women is used as a
justification for forcing them into different social roles which limit and shape their
attitudes and behavior. A woman, in addition to being a female, must be
feminine. Sexual oppression, no matter how harsh or unjustified, has never
lacked rationalization. These may range from simple religious dogmas to
sophisticated pseudo-scientific theories. For over a hundred years, the old form
of marriage, based on the Bible, till death do us part, has been denounced as
an institution that stands for the sovereignty of the man over the woman, of her
complete submission to his whims and commands, and absolute dependence on
his name and support. In addition, women are generally exploited by the media.
They become like goods which are sold and bought. For instance, in
advertisements we usually see women presenting products; but unfortunately,
their bodies are used to attract consumers.
Break barriers
The problem that confronts us today is how to be one's self and yet be in oneness with others, to feel deeply with all human beings and still retain one's own characteristic qualities. The modern woman should be enabled to blossom in the true sense, with full respect for her personality; all artificial barriers should be broken, and the road towards greater freedom cleared of every trace of centuries of submission and slavery.

Ethics: The Framework for Success. While some ethical decisions are simply a matter of right vs. wrong, the tough ethical decisions are right vs. right.
The widespread attention given to the fall of companies such as Tyco, WorldCom,
and Enron has led to an increased focus on ethics in the business world. Because
of the enormous pressure to produce higher and better returns, some individuals
at corporations have adopted the philosophy, "the ends justify the means." They
fall into the trap of setting unrealistic budgets, improbable expectations, and
unlikely goals. Not surprisingly, investor confidence has been low due to the
many corporate scandals. Despite these results, however, firms continue to allow
external sources, such as outside analysts, to define success.
Instead, companies must ask the following question: "Have we replaced our
underlying business theme of 'succeeding at all costs' with 'succeeding only the
right way'?" An ethical culture can ensure success by establishing appropriate
expectations using proper guidelines, thus preventing the need or desire to be involved in any questionable business practices. Ultimately, success is about keeping your word, and companies that live up to their promises are successful.
While it's true that some businesses hold themselves to a higher ethical
standard, not all companies operate in an ethical environment. Financial
decisions often are made without considering the ethical implications. When
companies don't hold themselves to high ethical standards, the impact
reverberates throughout the financial markets. Companies are destroyed, jobs
are lost, and retirement savings are decimated. One of the government's
reactions to corporate wrongdoing was enactment of the Sarbanes-Oxley Act of
2002 (SOX). But as Gary Smith, CEO of CIENA, characterized it in the October 20,
2003, edition of USA Today, SOX was "'chemotherapy' to prevent the cancer
from recurring after cutting out corporate tumors at Enron, WorldCom, and
elsewhere."
Ensuring that an effective ethical culture exists in an organization isn't only a key
factor in preventing the kinds of losses brought about by corporate frauds and
avoiding the need for costly, burdensome legislation, but it can also enhance a
company's reputation, improve morale, and even increase sales. This article
examines top management's role in building an ethically minded culture, steps
for making sound choices, and examples of ethical issues.
FROM THE TOP DOWN
Establishing ethical standards for a business should be the primary goal of
executive management. Companies must design an environment that not only
encourages high ethical standards but also produces ethically minded
management, employees, suppliers, and customers.
To establish an ethical culture, top management must accept responsibility for
the ethical climate within their organizations. In reality, the actions of top
executives define a company's culture because employees emulate their boss's
behavior. Michael Hackworth, author of "Only the Ethical Survive" in the Fall 1999
Issues in Ethics, believes top leadership is ultimately responsible for the culture
of their organization, including the ethical culture.
To establish an ethical environment, top management needs to use five key
elements to build trust: integrity, competence, consistency, loyalty, and
openness with employees, vendors, and stakeholders. Stakeholder is a better
word than stockholder because it represents the significant effect that business
has on the community as a whole. Companies that operate under high ethical
values don't have to spend any negative energy hiding wrongdoings if they make
all decisions while considering the ethical implications. Most financial analysts
agree that no single variable affects the climate of an organization more than the
beliefs, practices, and ideas of its top management.
THE GOOD AND THE BAD
One company that provides a prime example of making good ethical decisions is
Johnson & Johnson. In 1982, James Burke, then CEO, faced an ethical dilemma.
The company experienced a major crisis when some of its Extra-Strength Tylenol
capsules were found laced with cyanide. Faced with a difficult decision, Burke

turned to Johnson & Johnson's credo: "We believe our first responsibility is to
doctors, nurses, and patients, to mothers and fathers and all others who use our
products and services." He ignored the immediate short-term financial
implication and adhered to the attitude of "doing the right thing," ordering the
recall of more than 31 million bottles at a cost of more than $100 million. This
action set a new standard for crisis management. As a result of these events, the
company developed the tamper-proof seal and gained even more market share
and customer loyalty than it had before the incident.
To make choices like Burke requires individuals to take the steps listed in "A
Framework for Thinking Ethically" from the Markkula Center for Applied Ethics at
Santa Clara University (www.scu.edu/ethics):
* Be sensitive to ethical issues,
* Explore ethical aspects of a decision,
* Weigh the considerations that impact their course of action, and
* Have the moral courage to make the right ethical choice.
While companies will inevitably face difficult situations, their ability to make
ethical decisions must not be compromised for any reason. Consider Exxon, for
example. This company refused to accept responsibility for the Valdez accident,
and their attempt to blame state and federal officials for delays in containing the
spill damaged their reputation. Even today the name Exxon is synonymous with
environmental catastrophe. Due to ineffective communication from Exxon, the
public questioned their credibility and truthfulness. According to Jennifer Hogue in "What is Crisis Management?" (http://iml.jou.ufl.edu/projects/Spring01/Hogue/crisismanagement.html), a survey conducted by Porter Novelli several years after the accident found that 54% of respondents were still less likely to buy Exxon products.
THE DANIEL EFFECT
Everyone within an organization should work together to create the "Daniel
Effect." This comes from the Old Testament account of a governing body trying
to discredit Daniel in front of the whole kingdom of Babylon. In the New King
James Version, the Book of Daniel, Chapter 6:3-4, says, "Then this Daniel
distinguished himself above the governors and satraps, because an excellent
spirit was in him; and the king gave thought to setting him over the whole realm.
So the governors and satraps sought to find some charge against Daniel
concerning the kingdom; but they could find no charge or fault, because he was
faithful; nor was there any error or fault found in him."
Employees would benefit individually from this mindset during their careers by
adhering to high ethical standards. Companies must build a strong ethical
framework to withstand attacks from the public through frivolous lawsuits,
competition's claims of wrongdoing, and any fraud attempted by their
employees. Positive public perception is vital to success in the marketplace,

which is protected by ethical behavior just as Daniel protected himself from his
enemies by remaining faithful to his high moral standards.
Some ethical decisions, such as cheating on taxes, lying under oath, or
overstating revenue and understating expenses, are simply a matter of right vs.
wrong. The tough ethical decisions are right vs. right. Four such dilemmas
include truth vs. loyalty, individual vs. community, short-term vs. long-term, and justice vs. mercy. Here are some real-world examples from Rushworth Kidder's
How Good People Make Tough Choices:
* It is right to find out all you can about your competitor's costs and price
structures--and right to obtain information only through proper channels;
* It is right to throw the book at good employees who make dumb decisions that
endanger the firm--and right to have enough compassion to mitigate the
punishment and give them another chance.
* It is right to protect the endangered spotted owl in the old-growth forests of the
American Northwest--and right to provide jobs to loggers.
Unfortunately, no magic formula exists to guide management through these
types of decisions. Companies must be willing to equally weigh the ethical
repercussions of one decision over the other.
DIFFICULT CHOICES
In Moral Courage, Kidder relates the story of Eric Duckworth. A metallurgist by
training, the recently married Duckworth took a position in 1949 with Federal
Mogul, a firm that made bearings for internal combustion engines. His job
description included examining damaged bearings returned by customers. He
would determine the cause of the failure, report to the customers, and
recommend changes to correct the problem. Most were due to misuse, improper
installation, and lack of lubrication. Sometimes he discovered that the faulty
parts were the result of production mistakes. His boss, the chief metallurgist,
regularly tried to cover up such faults by refusing to divulge all the facts and by
attributing the failure to end users mishandling the bearings, making no effort to
compensate customers.
At first, Duckworth rationalized "that he was prepared to commit sins of omission
but not of commission." Eventually, a particularly flagrant case drove him to
write a completely honest report, which his boss rejected. Summoning his moral
courage, he protested that he would resign if they didn't report the true findings
to the customer. His boss, as well as the sales department, protested that such
findings would cost them customers and perhaps more.
Fortunately, Duckworth previously had made several suggestions that increased
the productivity of the manufacturing process and won him the admiration of the
CEO, who backed him against his boss. The report went to the customer, who
responded with a congratulatory letter that said: "We had always suspected
concealment in some of your reports." In the wake of the company's new-found
honesty, the customer increased orders. Duckworth later recalled his moral

courage, "On one occasion when I was young and idealistic, I succeeded--and
have been proud of it ever since."
My own experience illustrates how one benefit of ethical behavior is improved
employee morale. The testing lab at a former employer of mine discovered a
potential electrical hazard related to a specific motor supplier. Under unique
circumstances that required the existence of several conditions, this motor had
the potential to deliver an electric shock to the end user. The possible financial
impact of rework or possible recall could cost the company millions of dollars.
Our management team, aware of the chance for a possible recall, decided to
report this issue to the Consumer Product Safety Commission (CPSC). Taking a
pro-ethical approach had a positive impact on me and other employees because
we all were impressed with the company's commitment to product safety.
SAFEGUARD THE FUTURE
Every day, management decisions affect individuals, families, and even nations.
Before making a final decision, the goal should be to completely consider the
ethical implications, including the immediate financial impact as well as the
lasting consequences. If the organization's climate is to not permit wrongdoing of
any kind, then employees are more likely to work harder for the company's
common good. Ethical decision making safeguards an enterprise's future.
Managing companies in the ever-changing business environment is difficult even
without falling into the trap of earnings-only management. But an organization's
management can't concentrate on the future if it's worried about any past
corrupt business dealings. An ethical culture cultivates realistic expectations with
the focus on following sound and unquestionable business principles. Ethics
improves goodwill, company perception, employee morale, and even sales.
Ethics allows management to be focused on the future, thereby becoming the
framework for long-term success.
Steve Hunter, CMA, is a senior finance manager for equipment at an
international company. He has 16 years of experience in accounting and finance.
You can reach Steve at (731) 645-4526 or shunter7263@bellsouth.net.
Ethics is a topic at IMA's Annual Conference, June 14-18, 2008, in Tampa, Fla. For
information, visit www.imaconference.org.
BY STEVE HUNTER, CMA
****************************************
Thinking Ethically:
A Framework for Moral Decision Making
Developed by Manuel Velasquez, Claire Andre, Thomas Shanks, S.J., and
Michael J. Meyer
Moral issues greet us each morning in the newspaper, confront us in the memos
on our desks, nag us from our children's soccer fields, and bid us good night on
the evening news. We are bombarded daily with questions about the justice of
our foreign policy, the morality of medical technologies that can prolong our

lives, the rights of the homeless, the fairness of our children's teachers to the
diverse students in their classrooms.
Dealing with these moral issues is often perplexing. How, exactly, should we
think through an ethical issue? What questions should we ask? What factors
should we consider?
The first step in analyzing moral issues is obvious but not always easy: Get the
facts. Some moral issues create controversies simply because we do not bother
to check the facts. This first step, although obvious, is also among the most
important and the most frequently overlooked.
But having the facts is not enough. Facts by themselves only tell us what is; they
do not tell us what ought to be. In addition to getting the facts, resolving an
ethical issue also requires an appeal to values. Philosophers have developed five
different approaches to values to deal with moral issues.
The Utilitarian Approach
Utilitarianism was conceived in the 19th century by Jeremy Bentham and John
Stuart Mill to help legislators determine which laws were morally best. Both
Bentham and Mill suggested that ethical actions are those that provide the
greatest balance of good over evil.
To analyze an issue using the utilitarian approach, we first identify the various
courses of action available to us. Second, we ask who will be affected by each
action and what benefits or harms will be derived from each. And third, we
choose the action that will produce the greatest benefits and the least harm. The
ethical action is the one that provides the greatest good for the greatest number.
The Rights Approach
The second important approach to ethics has its roots in the philosophy of the
18th-century thinker Immanuel Kant and others like him, who focused on the
individual's right to choose for herself or himself. According to these
philosophers, what makes human beings different from mere things is that
people have dignity based on their ability to choose freely what they will do with
their lives, and they have a fundamental moral right to have these choices
respected. People are not objects to be manipulated; it is a violation of human
dignity to use people in ways they do not freely choose.
Of course, many different, but related, rights exist besides this basic one. These
other rights (an incomplete list below) can be thought of as different aspects of
the basic right to be treated as we choose.

* The right to the truth: We have a right to be told the truth and to be informed about matters that significantly affect our choices.
* The right of privacy: We have the right to do, believe, and say whatever we choose in our personal lives so long as we do not violate the rights of others.
* The right not to be injured: We have the right not to be harmed or injured unless we freely and knowingly do something to deserve punishment or we freely and knowingly choose to risk such injuries.
* The right to what is agreed: We have a right to what has been promised by those with whom we have freely entered into a contract or agreement.

In deciding whether an action is moral or immoral using this second approach, then, we must ask: Does the action respect the moral rights of everyone? Actions are wrong to the extent that they violate the rights of individuals; the more serious the violation, the more wrongful the action.
The Fairness or Justice Approach
The fairness or justice approach to ethics has its roots in the teachings of the
ancient Greek philosopher Aristotle, who said that "equals should be treated
equally and unequals unequally." The basic moral question in this approach is:
How fair is an action? Does it treat everyone in the same way, or does it show
favoritism and discrimination?
Favoritism gives benefits to some people without a justifiable reason for singling
them out; discrimination imposes burdens on people who are no different from
those on whom burdens are not imposed. Both favoritism and discrimination are
unjust and wrong.
The Common-Good Approach
This approach to ethics assumes a society comprising individuals whose own
good is inextricably linked to the good of the community. Community members
are bound by the pursuit of common values and goals.
The common good is a notion that originated more than 2,000 years ago in the
writings of Plato, Aristotle, and Cicero. More recently, contemporary ethicist John
Rawls defined the common good as "certain general conditions that are...equally
to everyone's advantage."
In this approach, we focus on ensuring that the social policies, social systems,
institutions, and environments on which we depend are beneficial to all.
Examples of goods common to all include affordable health care, effective public
safety, peace among nations, a just legal system, and an unpolluted
environment.
Appeals to the common good urge us to view ourselves as members of the same
community, reflecting on broad questions concerning the kind of society we want
to become and how we are to achieve that society. While respecting and valuing
the freedom of individuals to pursue their own goals, the common-good
approach challenges us also to recognize and further those goals we share in
common.
The Virtue Approach
The virtue approach to ethics assumes that there are certain ideals toward which
we should strive, which provide for the full development of our humanity. These
ideals are discovered through thoughtful reflection on what kind of people we
have the potential to become.
Virtues are attitudes or character traits that enable us to be and to act in ways
that develop our highest potential. They enable us to pursue the ideals we have adopted. Honesty, courage, compassion, generosity, fidelity, integrity, fairness, self-control, and prudence are all examples of virtues.
Virtues are like habits; that is, once acquired, they become characteristic of a
person. Moreover, a person who has developed virtues will be naturally disposed
to act in ways consistent with moral principles. The virtuous person is the ethical
person.
In dealing with an ethical problem using the virtue approach, we might ask, What
kind of person should I be? What will promote the development of character
within myself and my community?
Ethical Problem Solving
These five approaches suggest that once we have ascertained the facts, we
should ask ourselves five questions when trying to resolve a moral issue:

* What benefits and what harms will each course of action produce, and which alternative will lead to the best overall consequences?
* What moral rights do the affected parties have, and which course of action best respects those rights?
* Which course of action treats everyone the same, except where there is a morally justifiable reason not to, and does not show favoritism or discrimination?
* Which course of action advances the common good?
* Which course of action develops moral virtues?

This method, of course, does not provide an automatic solution to moral problems. It is not meant to. The method is merely meant to help identify most of the important ethical considerations. In the end, we must deliberate on moral issues for ourselves, keeping a careful eye on both the facts and on the ethical considerations involved.
This article updates several previous pieces from Issues in Ethics by Manuel
Velasquez - Dirksen Professor of Business Ethics at Santa Clara University and
former Center director - and Claire Andre, associate Center director. "Thinking
Ethically" is based on a framework developed by the authors in collaboration
with Center Director Thomas Shanks, S.J., Presidential Professor of Ethics and the
Common Good Michael J. Meyer, and others. The framework is used as the basis
for many programs and presentations at the Markkula Center for Applied Ethics.
This article appeared originally in Issues in Ethics V7 N1 (Winter 1996)
*********************
Relationships among ethical pressure, professional expectation, stress,
and job quality of accountants in Thailand
ABSTRACT

This study tests the influence of ethical pressure and professional expectation on stress and job quality, via the moderators of time pressure and self-esteem. Accountants in Thailand are the sample. The results show that ethical pressure and professional expectation have a positive and significant association with stress. In addition, stress is positively and significantly related to job quality. These findings provide some initial empirical support for, and suggest the need for, additional investigation of the factors that influence an accountant's stress, and for further investigation into the effect of ethical pressure and professional expectation on job quality. Contributions and suggestions for further research are also provided.
Keywords: Ethical Pressure, Professional Expectation, Stress, Job Quality, Time
Pressure, Self-Esteem
1. INTRODUCTION
With the rapid acceleration of the global economic system, and as world trade and free markets continue to expand, organizations seek new business opportunities to enhance their competitiveness. They focus on improving services, enhancing product quality and improving production efficiency. The influence of these business activities on accounting professionalism is significant. It is commonly accepted that accounting information is used to manage a business, and accountants cannot escape involvement in this undertaking. Accounting is generally perceived by the public as a profession. The characteristics of a profession include the presence of a systematic body of skills required for practice, the sanction of the community in the form of formal credentials and licensing, recognition by the general public of professional authority over the knowledge and skills in the field, a regulative code of ethics, and a professional culture with its own language, symbols and norms.
Accounting professionals have been found to experience various kinds of stress related to their work and workplace. Examples of stressful job characteristics are pressure from the public's expectations, the requirements of professional ethics, and the control exercised by professional institutions, all of which can affect job quality. Sanders et al. (1995) described eight work-related sources of stress: role ambiguity, role conflict, quantitative overload, qualitative overload, career progress, responsibility for people, time pressure, and job scope. Accounting professionals in Thailand work in a business society that expects and pressures them to practise with accountability and professionalism; accountants therefore perceive stress from their practice. The research questions are how ethical pressure and professional expectation influence stress, and how stress affects job quality.
Therefore, the primary purpose of this study is to examine the relationships
between (1) ethical pressure and stress, (2) professional expectation and stress,
(3) ethical pressure and stress when moderated by time pressure, (4)
professional expectation and stress when moderated by time pressure, (5) stress
and job quality, and (6) stress and job quality when moderated by self esteem.
The structure of the paper is as follows. In section 2, the relevant
literature on all constructs is reviewed. Section 3 presents research method of

this paper. Section 4 presents the results of the empirical study and discussion.
Section 5 proposes the theoretical and managerial contributions, and
suggestions for future research and section 6 ends the study with the conclusion.
2. RELATION MODEL AND HYPOTHESES
This study tests a model in which ethical pressure and professional expectation are the independent variables, job quality is the dependent variable, stress is the mediating variable, and time pressure and self-esteem are moderator variables, as shown in Figure 1.
[FIGURE 1 OMITTED]
2.1 Ethical Pressure
Ethical pressure is defined as an objective stimulus construct referring to individual characteristics, or combinations of characteristics and events, that impinge on the perceptual and cognitive processes of individuals (DeZoort and Lord, 1997; Pratt and Barling, 1988; Eden, 1982; Kahn et al., 1964). It refers to conformity pressure, which leads individuals to alter their attitudes or behavior in an effort to be consistent with perceived group norms (DeZoort and Lord, 1997; Brehm and Kassin, 1990). In this study, ethical pressure is defined as accountants' perceptions of the professional values they are responsible for upholding: adherence to a code of conduct and an ethical code that expressly prohibits actions such as the misreporting of financial results.
Shafer (2002) and Aranya and Ferris (1984) found that accountants employed in industry did in fact experience higher levels of organizational-professional conflict than those employed in public accounting. Perceived ethical conflicts can lead to dysfunctional organizational outcomes such as lower organizational commitment and higher turnover intentions (Shafer, 2002; Schwepker, 1999). Thus ethical pressure is an important factor affecting an accountant's stress: higher ethical pressure may be accompanied by greater stress. This leads to the following hypothesis:
H1a: The accountants with higher ethical pressure will have greater stress.
2.2 Professional expectation
Brierley (1999) and Lachman & Aranya (1986b) described that the realization of
professional expectations has been measured in research of accountants by
assessing the discrepancy between responses to questions about "how much
should there be" and "how much is there now", on aspects of professional
values, such as the autonomy to act according to professional judgment and
responsibility to clients. Thus, in this study, professional expectation refers to the public's perceptions of professionalism, independence, self-improvement, commitment to learning, responsibility, and skill in an accountant's practice.
Sanders et al. (1995) described stress created by job requirements which exceed the individual's ability or skill level. Professional expectation places demands on an accountant's ability and skill; thus it has an effect on an accountant's stress, and higher professional expectation may be accompanied by greater stress. This leads to the following hypothesis:
H1b: The accountants with higher professional expectation will have greater stress.
2.3 Stress
In modern times, stress is related to several outcomes, such as job tension, job satisfaction, absenteeism, turnover intention, and job performance. In order to examine the effect of stress on job quality, stress here refers to a response construct dealing with how individuals internalize and represent pressure in their cognitive processes (DeZoort and Lord, 1997; Pratt and Barling, 1988). Stress has been defined as a state which arises from an actual or perceived demand/capability imbalance in an individual's vital adjustment actions (Piccoli and Emig, 1988). In this study, stress is defined as an accountant's perception of a demand/capability imbalance regarding ethics and professionalism.
Tulen and Neidermeyer (2004) and Sullivan and Bhagat (1992) reviewed four
possible scenarios regarding stress and performance: stress may increase
performance, stress may decrease performance, stress may have no effect on
performance, and the relationship between stress and performance may
represent an inverted-U. Their findings supported a negative relationship
between stress and performance. Tulen and Neidermeyer (2004) and Rabinowitz
and Stumpf (1987) described that there is a positive relationship between stress
and job performance. Thus, stress may be closely related to job quality. This leads to the following hypothesis:
H2a: Accountants with the greater stress will have greater job quality.
2.4 Time pressure
Time budget pressure refers to the pervasive constraint on resources that can be
allocated to accomplish a job (DeZoort and Lord, 1997). It is the perception of
unreasonable deadlines and time demands.
Sanders et al. (1995) found that role ambiguity, role conflict, quantitative overload, qualitative overload, career progress, responsibility for people, time pressure, and job scope are related to auditors' stress. Time pressure may therefore be closely related to stress and may condition how pressure translates into stress. This leads to the following hypotheses:
H1c: The accountants with the higher time pressure will potentially have greater
positive relationship between ethical pressure and stress.
H1d: The accountants with the higher time pressure will potentially have greater
positive relationship between professional expectation and stress.
2.5 Self-esteem
Self-esteem refers to the extent to which employees feel valued and taken seriously (LeRouge et al., 2006). It has been found to significantly moderate the relationship between role stress fit and job satisfaction. Self-esteem may therefore be related to job quality, and may moderate the effect of stress on job quality.
H2b: Accountants with the higher self esteem will potentially have greater
positive relationship between stress and job quality.
3. RESEARCH METHODS
3.1 Data collection
The samples were randomly drawn from 818 companies in the Automotive/Auto parts and accessories/Machinery sector of Thailand's exporting industries. The sampling frame was taken from Thailand's exporting-firm database. The questionnaire was constructed to cover each variable as operationalized for empirical study. A pretest was used to verify validity and reliability: misunderstandings that can arise from ambiguities were reduced, and the questionnaire was improved in its contents, item ordering, and wording. Reliability was tested with Cronbach alpha reliability coefficients for all constructs, to make sure that the items of the questionnaire measured each concept consistently.
Later, 600 questionnaires were mailed to the firms' accounting managers to provide data for this study. After two weeks, 152 questionnaires had been received. There were 33 questionnaires that could not be delivered and were returned. However, 2 of the received questionnaires were incomplete and were not included in the data analysis. This resulted in 150 usable responses, or a response rate of about 26%.
3.2 Reliability and Validity
The constructs, measured with multi-item scales, were tested with Cronbach alpha to assess the reliability of the data. Table 1 shows alphas ranging from 0.60 to 0.80, comfortably above the minimum requirement of 0.60 (Chalos and Poon, 2000). That is, the internal consistency of the measures used in this study can be considered good for all constructs.
Factor analysis was employed to test the validity of the data in the questionnaire. The items used to measure each construct were extracted to a single principal component. Table 1 shows that the factor loading of each construct is higher than 0.50, above the guideline that loadings should not be less than 0.40 (Hair et al., 2006). Thus the construct validity of this study is tapped by the items in the measure, as theorized.
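The reliability check described above can be reproduced in outline. The sketch below is illustrative only (it is not the authors' code): it computes Cronbach's alpha for a matrix of Likert-scale responses, using a small hypothetical data set in place of the study's questionnaire scores.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # Cronbach's alpha for an (n_respondents x k_items) matrix of item scores.
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)        # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses: 6 respondents x 4 items for one construct
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
])
print(round(cronbach_alpha(responses), 2))  # values of 0.60 or above meet the study's threshold

An alpha is computed in this way for each construct separately; the 0.60-0.80 range reported in Table 1 refers to those per-construct values.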
3.3 Statistic Technique
OLS regression analysis is employed to estimate the parameters in the hypothesis tests. From the relationship model and the hypotheses, the following equation models are formulated:
Equation 1: S = β01 + β1 EP + ε
Equation 2: S = β02 + β2 EP + β3 PE + ε
Equation 3: S = β03 + β4 EP + β5 PE + β6 TP + β7 (EP*TP) + β8 (PE*TP) + ε
Equation 4: JQ = β04 + β9 S + ε
Equation 5: JQ = β05 + β10 S + β11 SE + β12 (S*SE) + ε
where EP is Ethical Pressure; PE is Professional Expectation; S is Stress; TP is Time Pressure; JQ is Job Quality; SE is Self-Esteem.
These regression equations are used to estimate the parameters, to test whether the hypotheses are substantiated, and to assess overall model fit (F value). The model variables and estimated parameters are presented in the tables that follow.
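As an illustration of how the moderated regressions above could be estimated, the sketch below uses the statsmodels formula interface. It is a sketch under stated assumptions, not the study's code: the DataFrame df and the simulated columns EP, PE, TP, SE, S and JQ are hypothetical stand-ins for the averaged questionnaire scores.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical construct scores (one averaged 5-point Likert score per respondent)
rng = np.random.default_rng(0)
n = 150
df = pd.DataFrame({
    "EP": rng.uniform(1, 5, n),   # ethical pressure
    "PE": rng.uniform(1, 5, n),   # professional expectation
    "TP": rng.uniform(1, 5, n),   # time pressure (moderator)
    "SE": rng.uniform(1, 5, n),   # self-esteem (moderator)
})
df["S"] = 0.1 * df["EP"] + 0.4 * df["PE"] + rng.normal(0, 0.5, n)   # stress
df["JQ"] = 0.3 * df["S"] + rng.normal(0, 0.5, n)                    # job quality

# Equation 3: stress on EP, PE, TP and the two interaction terms
model_stress = smf.ols("S ~ EP + PE + TP + EP:TP + PE:TP", data=df).fit()
# Equation 5: job quality on stress, self-esteem and their interaction
model_jq = smf.ols("JQ ~ S + SE + S:SE", data=df).fit()

print(model_stress.summary())
print(model_jq.summary())

The interaction coefficients in these two fits correspond to β7, β8 and β12 in the equations above; non-significant interaction terms are the analogue of the unsupported moderation hypotheses (H1c, H1d and H2b) reported in section 4.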
3.4 Measure
All variables in Table 1 use a 5-point Likert scale, and the table shows the number of items used to tap each variable. Five-point Likert scales ranging from strongly disagree (scored one) to strongly agree (scored five) were used to measure all variables. Respondents were asked to rate ethical pressure, professional expectation, stress, time pressure, self-esteem, and job quality.
4. RESULTS AND DISCUSSION
This study examines the relationships among ethical pressure, professional expectation and stress, and between stress and job quality, analyzed with OLS regression models; the results are presented in Table 3. Table 2 shows the inter-correlations of all constructs, exploring the relationship between each pair of variables. The results show that stress correlates positively and significantly with ethical pressure, professional expectation, and job quality.
The correlation between the independent variables, ethical pressure and professional expectation, is not especially high; therefore multicollinearity is expected to be low when the multiple regression model with stress as the dependent variable is employed.
4.1 Antecedent of Stress
Table 3 shows the results of the regression analysis used to test H1a, H1b, H1c and H1d, with the relationships moderated by time pressure. The results indicate that in Model 3 of the regression equation, with ethical pressure and professional expectation as independent variables and stress as the dependent variable, there is a positive association between ethical pressure and stress (b = .048); therefore, H1a is supported. Likewise, the relationship between professional expectation and stress is significant and positive (b = .381; p < .01), which is consistent with H1b.
The results of Models 1 and 2 likewise show that the linkage between ethical pressure and stress, and the linkages among ethical pressure, professional expectation and stress, are positive and significant. Model 3 adds the moderator variable. The findings show no significant relationship between the interaction of ethical pressure and time pressure and stress (b = .052; p > .95), and the other interactions are also not significant. Therefore, H1c and H1d are not supported.
4.2 Consequence of Stress
The results are presented in Table 4; regression analysis is employed to estimate the parameters used to test H2a. In Model 4 there is a positive and significant relationship between stress and job quality (b = .326; p < .01). Stress explains about 10 percent of the variance in job quality. The VIF values among the independent variables are less than 10 (maximum VIF = 2.186), so little multicollinearity is present. Thus, H2a is supported.
Table 4 also shows the results of the regression analysis used to test H2b, with the relationship moderated by self-esteem. The results indicate that in Model 5, with stress and self-esteem as independent variables and job quality as the dependent variable, there is a significant and positive association between stress and job quality (b = .418; p < .01); but the findings show no significant relationship between the interaction of stress and self-esteem and job quality (b = .056; p > .44). Therefore, H2b is not supported.
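The VIF screen mentioned above can also be sketched in outline. Again this is illustrative only, with simulated predictor scores standing in for the study's construct means; the statsmodels variance_inflation_factor routine is applied column by column to the design matrix.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical predictor scores; in the study these would be the construct means
rng = np.random.default_rng(1)
X = pd.DataFrame({
    "EP": rng.uniform(1, 5, 150),   # ethical pressure
    "PE": rng.uniform(1, 5, 150),   # professional expectation
    "TP": rng.uniform(1, 5, 150),   # time pressure
})

exog = sm.add_constant(X)           # VIF is computed on the full design matrix
vifs = {col: round(variance_inflation_factor(exog.values, i), 3)
        for i, col in enumerate(exog.columns) if col != "const"}
print(vifs)  # values well below 10 indicate that multicollinearity is not a serious concern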
5. CONTRIBUTIONS AND FUTURE DIRECTIONS FOR RESEARCH
5.1 Theoretical Contributions and Future Direction for Research
This research provides an understanding of ethical pressure and professional expectation, which have a significant direct positive influence on accountants' stress, and of stress, which has a significant direct positive influence on job quality. The study provides important theoretical contributions, expanding on previous knowledge and literature on ethical pressure, professional expectation, stress, and job quality. This research is one of the first known studies to link ethical pressure, professional expectation, stress, and job quality from accountants' perspective. In addition, it examines how these pressures affect accountants' stress and job quality via the moderating effects of time pressure and self-esteem. Based on these results, future research should examine the effects of ethical pressure and professional expectation in other industries.
5.2 Managerial Contributions
This study helps accountants identify and explain key components that may influence accountants' stress. Accountants should train continuously in order to maintain knowledge and increase the skills and ethics that reduce stress. Firms should also provide other factors that support job quality, including good staff, better time management, an appropriate scope of accounting work, and a type and volume of work suited to the accountant's capability. An important point is the accountant's professional ethical behaviour: he or she should take care to act in the interests of the profession and of users. Consequently, managing accountants' stress and supporting their job quality are important for businesses and managers.
6. CONCLUSION

Our expectations regarding perceptions of accountants' stress were confirmed. Both ethical pressure and professional expectation have a direct positive influence on stress, and stress has a direct positive influence on job quality. In the hypothesis testing, professional expectation was a stronger variable than ethical pressure in all relationships, but the moderator interactions showed no association. Our results suggest that it would be prudent for firm managers to focus their stress-reduction strategies on accountants.

Corporate Ethics and Accountability


I was attending a conference on social investing in Boston this spring when a
spirited debate erupted over lunch. It was about what constitutes a socially
responsible business, not a topic expected to get one's blood boiling. But this
day, the discourse was particularly fierce, probably because the sub-text of the
debate was about integrity, and that's always a hot topic.
To everyone at my table, or at least to everyone but me, the corporate world
divided into so-called ethical corporations with "good intentions" and most of the
rest of the world. These "evil" corporations were led by businessmen ascribed
the most selfish of motivations, a desire to grow their companies. As such, they
risked being dismissed as mere "capitalists," a characterization among this group
akin to being labeled a pro-life activist at a NOW convention. No one, it seemed,
wanted to talk about what I wanted to discuss, which was which companies
turned out quality, affordable products or services, treated their trading partners
and employees fairly, and generally kept their noses clean. Instead, the talk
focused on such sexy subjects as world peace, spirituality in business, and the
worldwide battle of good and evil.
Who were the "bad" guys? According to group consensus, corporations that
manufacture weapons (which helped the United States defeat Iraq in the Gulf
War), refine oil (for planes and cars) or, the worst of all offenses, test their
ingredients according to accepted international standards to ensure the safety of
consumers.
The good guys? Many of the companies cited by my lunch companions pay their
workers near-minimum wage, are strongly anti-union, have an unhappy
workforce, and/or make luxury products at pricey premiums. No matter that in
marketing their products these ethical business superstars frequently confuse
their intentions and reputations with their not-so-lustrous corporate actions.
While reflecting on my inability to make any headway with this table of social
visionaries, my thoughts turned to Mark Twain, always a good source for irony in
the midst of hubris. "The secret of success is honesty and fair dealing," he once
said. "If you can fake these, you've got it made."
Are good intentions enough? No one took kindly to my observation that
corporations and individuals can be idealistic in intent while the consequences of
their actions are not particularly ethical. I wanted to talk about organizational
structures, internal auditing, corporate governance, and other snore-inducing
subjects. This just wasn't sexy, and it didn't result in the kind of rhetorical
flourishes and promises of social change that this group embraced as the Holy

Grail of socially responsible business. The lunch ended as unproductively as it


began.
What Does "Ethical" Mean?
The sobering reality is that the socially responsible business movement may
promote corporate behavior that is neither progressive nor particularly ethical.
Business ethics is based on broad principles of integrity and fairness and focuses
on internal stakeholder issues such as product quality, customer satisfaction,
employee wages and benefits, and local community and environmental
responsibilities, issues that a company can actually influence.
The corporate responsibility movement, on the other hand, has come to elevate
a social and political agenda that draws on notions of liberal propriety and
correctness that date to the 1960s. Truisms of social responsibility include the
embrace of environmentalism, antiwar pacificism, human rights, animal rights,
sexual rights, women's rights, and other -isms that few disagree about in
principle. For instance, military production and animal testing are negative
screens while the use of "natural" products or campaigning for Third World rights
demonstrates a higher ethical standard. Academics, the media, and social
investment firms have uncritically promoted these fashionable standards.
Unfortunately, applying these ambiguous litmus screens is more than just
problematic; these categories can promote a not-so-thoughtful social agenda of
questionable ethics. The business ethics community has some soul searching
ahead of it. Is it about outward-focused social vision, as represented by many
vocal leaders? Or is it about ethics: putting out a quality product at reasonable
prices; treating employees, vendors, franchisees, and investors fairly; acting
responsibly toward the local environment and community; and, most of all,
embracing transparency in operations and accountability to critics, internal and
external?
Easier in Theory Than in Practice
Ethics, like democracy, is a lot easier in theory than in practice. As an example,
let's look at the proliferation of codes of conduct and mission statements that
have been drafted in the wake of the Kathie Lee Gifford fiasco over foreign
sweatshops. The Gifford scandalette, as helpful as it has been in shining needed
light on the complicated issue of foreign sourcing, may also leave us with a not-very-progressive legacy if we're not careful.
The most highly touted solution to U.S. manufacturers' sourcing of goods from low-wage countries, corporate codes of conduct on sourcing, frequently ends up doing far more harm than good. As well-meaning as these codes and mission statements purport to be, promises that companies cannot hope to implement, or that cause more harm than good if they are implemented, divert attention
from the need for structural changes in the relationship between consuming
nations and raw material suppliers. The real benefits of many well-publicized
codes have gone to the companies who are embarrassed into drafting them, not
the people they were designed to help.
Starbucks

Take Starbucks, the boutique Seattle-based coffee retailer, as an example. To


earn enough to afford a pound of Starbucks' coffee, a Guatemalan worker would
have to pick 500 pounds of beans, about five days of work. As you choke on your
scone, note that this story has a twist: in a glittering ceremony in New York
recently, Starbucks was awarded the International Human Rights Award by the
Council on Economic Priorities (CEP) at its annual "Corporate Conscience" awards
ceremony.
How does a company under attack for exploiting cheap, foreign labor by activist,
environmental, and church groups become the belle of the socially responsible
ball? During 1994, Starbucks suffered embarrassing grassroots protests because
it sourced beans from export houses that paid Guatemalan workers below a
living daily wage, about $2.50 a day. The company is no worse than the average
wholesaler, but it has a better-than-average reputation as a new-breed, values-driven corporation. So when protesters leafleted Starbucks stores and targeted
its annual meeting, a peace plan was offered. Last year, Starbucks became the
first company in the agricultural commodities sector to announce a "framework"
for a code of conduct.
There are more than 30,000 farms in Guatemala, one of 20 coffee-supplying
countries. Starbucks was targeted not because it could change the labor status
quo--it is a bit player in the coffee business--but because of its high public profile.
The increasingly visible protests left Starbucks with little choice but to pass its
code, and it cost the company little. We were "prodded" into it, notes David
Olsen, Starbucks' senior vice president, diplomatically.
But according to Alice Tepper Marlin, CEP's executive director, the mission
statement alone was enough to earn Starbucks its honor. How has Starbucks
enforced its code? "We've done nothing yet," acknowledges Olsen. "It's a slow,
incremental process." Very incremental. Starbucks' promised review of
plantation conditions is being carried out by the Guatemalan coffee growers
association, the very organization accused of perpetuating the low wages. First
condemned for labor practices it could not hope to change, Starbucks is now
praised for actions it has not yet taken.
What can Starbucks accomplish with its code, putting aside its obvious goal of
quieting protests? "Codes are a start," says Eric Hahn of the US/Guatemala Labor
Education Project. "But only if it's part of a bigger strategy of industry
monitoring, which is one of the few tools available in an international,
deregulated economy. Otherwise it's just a balm to consumers."
This is not to suggest that codes are entirely meaningless. As Kathie Lee Gifford
has learned, promises focus attention. But solutions rest with accountability, and
there doesn't appear to be any here. Starbucks has no practical ability to
oversee conditions and says it cannot risk punishing violators.
The Apparel Industry
There is also the labyrinthine social and political climate in impoverished
countries to consider. In a follow-up article to the Gifford Story, New York Times
reporter Larry Rohter visited Honduras where clothes used to be made for
Gifford.1 The apparel assembly plants in Central America employ "slave labor"

and are "monstrous sweatshops of the New World Order," according to the
National Labor Committee, the New York-based group that publicized the issue.
But Honduran union leaders universally resent the moralizing of U.S. labor
activists who, like the National Labor Committee, are funded by organized labor
committed to preserving American jobs.
According to Honduran labor leaders, maquiladoras are increasingly unionized
and offer wages two-to-three times the minimum wage. These are prime jobs in
an economy in which almost half of the population can find no work at all. Labor
shortages at these jobs have helped bump up wages throughout the economy.
Even the bugaboo of child labor is more complicated than it seems. Honduran
adolescents are legally allowed to work at 14 with parental permission, and most
are desperate to help their families. The frenzy sparked by the Gifford spectacle
has led to the dismissal of hundreds of legally hired adolescents. Rather than
returning to school, which is not an option for most families who cannot afford to
feed and clothe their children, adolescents buy documents to work at even lower
pay or in some cases peddle their bodies. When confronted with the
consequences of their highpowered campaign, the New York labor group offered
little solace: "Obviously, this is not what we wanted to happen."
Although many clothing companies, such as Nike, KMart, JC Penney, and Reebok,
have rushed to pass sourcing codes, few make the effort to examine the
complexity of these issues. Of the high-profile retailers, Levi Strauss and The Gap
have distinguished themselves by devoting considerable resources to identifying
the first link in the supply chain (the shops that supply their suppliers) and
bringing direct pressure to establish minimum wage standards and working
conditions.
The Never-Never-Land of Good Intentions
Celebrating "good intentions" when complex social problems are at issue and not
understood goes to the heart of the corporate ethics conundrum. Rewarding
noble posturing also obscures meaningful progress by "messier" companies.
While many highly praised "New Age" firms have been found lacking in critical
areas of accountability and honesty of marketing, some of yesterday's most
vilified companies have quietly moved to the forefront of corporate
responsibility. Despite their regular appearances on "dishonorable" lists,
controversial multinationals such as Monsanto, DuPont, or Gillette offer fair
wages and benefits, have launched impressive affirmative action practices, are
addressing complicated environmental issues, actively engage their community
responsibilities, give many millions of dollars to charity, and sell quality,
competitively priced products and services.
Reforms can reduce litigation expenses, lighten regulatory pressures, and
improve company morale. Frequently they can result in considerable savings in
their own right. Selling necessary products with an eye to a broader definition of
stakeholder responsibilities is not politically sexy, but it can promote positive
social change.
When comparing these environmental and social reforms with the cosmetic code
at Starbucks or other boutique retailers, one has to wonder how they rack up so

many "good business" honors. A more basic question is why do so many "socially
responsible" awards go to companies that sell commodity goods to affluent
consumers at eyepopping prices?Starbucks, for example, where mark-ups
exceed 1,000 percent?
When asked why Starbucks was honored, CEP's Marlin says, "We want to reward
positive role models." Dare one suggest that CEP should have waited until
Starbucks did more than pass a "framework for a code of conduct," as admirably
symbolic as it may be? According to Starbucks, its code has had no effect on the
way it does business in Guatemala or dozens of other countries.
Awarding "A"s for visionary rhetoric shifts focus away from corporate governance
and behavior to the never-never-land of good intentions. It's a dangerous trend
that companies promote Thoreau-like mission statements without organizational
commitments to implement those ideals. Character demonstrated by actions, not
by intentions, is the only reliable measure of corporate ethics.
Raising the Ethical Parapet
Socially responsible business, by promoting boutique social issues and using
simplistic litmus tests, encourages cynicism. Can we break out of this ideological
box and raise the ethical parapet? This special issue of At Work moves us beyond
the concept of corporate responsibility to its expression. How are companies
manifesting social and environmental responsibility? In what ways can they be
influenced to become better in this arena?
Our first articles describe the steps taken by the chemical industry and one of its
member firms, Velsicol Chemical Corporation, to become accountable to local
communities and to the environment. Then David Mager draws from 20 years of
experience to tell how socially and environmentally responsible behavior benefits
the bottom line.
Richard Adams' description of the new retail chain he has founded, Out of this
World, illustrates how it is possible to incorporate the means for corporate
accountability to multiple stakeholders into the design and operation of a
company.
We conclude with two articles that examine the principal avenues owners can
take to influence corporate behavior in a positive direction: ethical investing and
pension fund activism.
The corporate world cannot be divided easily into "good guys" and "evil
companies." Companies are dysfunctional families writ large. Mistakes,
sometimes whoppers, are built into life, including the life of corporations. Self-scrutiny and accountability are essential. The measure of a company's integrity
is not how loudly it beats its own breast, or whether it blunders, but its respect for
its stakeholders and its responsiveness to problems.
1. "In Honduras, 'Sweatshops' Can Look Like Progress." New York Times, July 18, 1996, p. A1.

A framework for ethical decision making

The framework overview:
Step one: Describe the problem
Step two: Determine whether there is an ethical issue or ethical dilemma
Step three: Identify and rank the key values and principles
Step four: Gather your information
Step five: Review any applicable code of ethics
Step six: Determine the options
Step seven: Select a course of action
Step eight: Put your plan into action
Step nine: Evaluate the results
Step ten: Submit cases to your ethical review team or board regularly for review
Step One: Describe the Problem
Ethical problems are always embedded in a context.
Circumstances impact upon the problem definition (for whom does the problem exist? What is the setting?)
Beware of the tendency to look toward the clinical or purely legal perspective for guidance.

Corporate governance
Corporate governance is a broad term that has to do with the manner in which
the rights and responsibilities are shared among owners, managers and
shareholders of a given company.
In essence, the exact structure of corporate governance will determine what
rights, responsibilities, and privileges are extended to each of the corporate
participants, and to what degree each participant may enjoy those rights.
Generally, the foundation for any system of corporate governance is shaped by
several factors, all of which help to determine the final form of the company's
governance.
Within any corporation, the structure of corporate governance begins with laws
that impact the operation of any company within the area of jurisdiction.
Companies cannot legally operate without a corporate structure that meets the
minimum requirements set by the appropriate government jurisdiction. All
founding documents of the company must comply with these laws in order to be
granted the privilege of incorporation. In many jurisdictions, these documents
are required by law to contain at least the seeds of how the company will be
structured to allow the creation of a balance of power within the corporation.
Much of the basis for corporate governance is found in the documents that must
be prepared and approved before incorporation can take place. These
documents help to form the basis for the final expression of the balance of power
between shareholders, stakeholders, management, and the board of directors.
The bylaws, articles of incorporation, and the company charter will all include
details that determine who has what authority in the decision making process of
the company.
Along with the laws of the land and the founding documents, corporate
governance is further refined by the drafting of formal policies that not only
recognize the assignment of powers in accordance with the bylaws and corporate
charter, but also help to further define how those powers may be employed. This
helps to allow the company some degree of flexibility in maintaining a balance of
power as the company grows, without undermining the rights and privileges
inherent in each type of corporate participation.

Fundamental and Ethics Theories of Corporate Governance
History has revealed a never-ending evolution of theories or models of corporate governance. One reason is that social conscience has often been minimal while profit making has taken center stage. All over the world, companies are trying to instill a sense of governance into their corporate structure. With the surge of capitalism, corporations became stronger while governments all over the world had to succumb to their manipulations and dominance. Hence, this article is a review of the literature on the range of theories in corporate governance. The fundamental theories in corporate governance began with agency theory, expanded into stewardship theory and stakeholder theory, and evolved into resource dependency theory, transaction cost theory, political theory and ethics-related theories such as business ethics theory, virtue ethics theory, feminist ethics theory, discourse ethics theory and postmodern ethics theory. However, these theories address the cause and effect of variables, such as the configuration of board members, the audit committee, independent directors and the role of top management, and their social relationships, rather than regulatory frameworks. Hence, it is suggested that a combination of various theories best describes effective and good governance practice, rather than theorizing corporate governance on the basis of a single theory.

Introduction

Corporations have become powerful and dominant institutions. They have reached every corner of the globe in various sizes, capabilities and influences. Their governance has influenced economies and various aspects of the social landscape. Shareholders are seen to be losing trust, and market value has been tremendously affected. Moreover, with the emergence of globalization, there is greater deterritorialization and less governmental control, which results in a greater need for accountability (Crane and Matten, 2007). Hence, corporate governance has become an important factor in managing organizations in the current global and complex environment. In order to understand corporate governance, it is important to highlight its definition. Although there is no single accepted definition of corporate governance, it can be defined as a set of processes and structures for controlling and directing an organization. It constitutes a set of rules which governs the relationships between management, shareholders and stakeholders (Ching et al, 2006). The term corporate governance has its origin in the Greek word kybernan, meaning to steer, guide or govern. From Greek, it moved over to Latin, where it was known as gubernare, and to the French gouverner. It could also mean the process of decision making and the process by which decisions are implemented. Hence, corporate governance has a different meaning for different organizations (Abu-Tapanjeh, 2008). In recent years, with many corporate failures, the countenance of the corporation has been scarred.
Corporate governance applies to all types of firms, and its definitions could extend to cover all economic and non-economic activities. The literature on corporate governance provides some sense of what governance means but falls short of a precise definition. Such ambiguity emerges in words like control, regulate, manage, govern and governance. Owing to such ambiguity, there are many interpretations. It may be important to consider the influences a firm exerts, or is affected by, in order to grasp a better understanding of governance. Owing to the vast range of influential factors, proposed models of corporate governance can be flawed, as each social scientist frames their own scope and concerns. Hence, this article reviews the fundamental theories underlying corporate governance. These theories range from agency theory through stewardship theory, stakeholder theory, resource dependency theory, transaction cost theory and political theory to ethics-related theories such as business ethics theory, virtue ethics theory, feminist ethics theory, discourse ethics theory and postmodern ethics theory.

Fundamental Corporate Governance Theories


2.1. Agency Theory
Agency theory, having its roots in economic theory, was expounded by Alchian and Demsetz (1972) and further developed by Jensen and Meckling (1976). Agency theory is defined as the relationship between the principals, such as shareholders, and agents, such as the company executives and managers.
In this theory, the shareholders, who are the owners or principals of the company, hire the agents to perform work. The principals delegate the running of the business to the directors or managers, who are the shareholders' agents (Clarke, 2004). Indeed, Daily et al (2003) argued that two factors can influence the prominence of agency theory. First, the theory is conceptually simple, reducing the corporation to two participants: managers and shareholders. Second, agency theory suggests that employees or managers in organizations can be self-interested.
In agency theory, shareholders expect the agents to act and make decisions in the principals' interest. On the contrary, the agent may not necessarily make decisions in the best interests of the principals (Padilla, 2000). Such a problem was first highlighted by Adam Smith in the 18th century and subsequently explored by Ross (1973), and the first detailed description of agency theory was presented by Jensen and Meckling (1976). Indeed, the notion of problems arising from the separation of ownership and control in agency theory has been confirmed by Davis, Schoorman and Donaldson (1997).
In agency theory, the agent may succumb to self-interest and opportunistic behaviour, falling short of congruence between the aspirations of the principal and the agent's pursuits. Even the understanding of risk differs between the two. Despite such setbacks, agency theory was introduced basically as a separation of ownership and control (Bhimani, 2008). Holmstrom and Milgrom (1994) argued that, instead of providing fluctuating incentive payments, agents should focus only on projects that have a high return and be given a fixed wage without any incentive component. Although this provides a fair assessment, it does not eradicate or even minimize corporate misconduct. Here, the positivist approach is used, where the agents are controlled by principal-made rules, with the aim of maximizing shareholder value. Hence, a more individualistic view is applied in this theory (Clarke, 2004). Indeed, agency theory can be employed to explore the relationship between the ownership and management structure. Where there is a separation, the agency model can be applied to align the goals of the management with those of the owners. Because in a family firm the management comprises family members, the agency cost would be minimal, since the interests of owners and managers largely coincide (Eisenhardt, 1989). The model of an employee portrayed in agency theory is that of a self-interested, individualistic actor of bounded rationality, for whom rewards and punishments take priority (Jensen & Meckling, 1976). This theory prescribes that people or employees are held accountable in their tasks and responsibilities. Employees must constitute a good governance structure rather than merely serving the needs of shareholders, which may challenge the governance structure.
Figure 1: The Agency Model. Principals (shareholders) hire and delegate to agents (managers), who perform the work; both parties act in their own self-interest.

2.2. Stewardship Theory
Stewardship theory has its roots in psychology and sociology and is defined by Davis, Schoorman and Donaldson (1997) thus: a steward protects and maximises shareholders' wealth through firm performance, because, by so doing, the steward's utility functions are maximised. In this perspective, stewards are the company executives and managers working for the shareholders, who protect and make profits for the shareholders. Unlike agency theory, stewardship theory stresses not the perspective of individualism (Donaldson & Davis, 1991) but rather the role of top management as stewards, integrating their goals as part of the organization. The stewardship perspective suggests that stewards are satisfied and motivated when organizational success is attained.
Argyris (1973) argues that agency theory looks at an employee or person as an economic being, which suppresses the individual's own aspirations. However, stewardship theory recognizes the importance of structures that empower the steward and offer maximum autonomy built on trust (Donaldson and Davis, 1991). It stresses the position of employees or executives to act more autonomously so that shareholders' returns are maximized. Indeed, this can minimize the costs aimed at monitoring and controlling behaviours (Davis, Schoorman & Donaldson, 1997).
On the other hand, Daily et al. (2003) argued that, in order to protect their reputations as decision makers in organizations, executives and directors are inclined to operate the firm to maximize financial performance as well as shareholders' profits. In this sense, it is believed that the firm's performance can directly impact perceptions of their individual performance. Indeed, Fama (1980) contends that executives and directors are also managing their careers in order to be seen as effective stewards of their organization, whilst Shleifer and Vishny (1997) insist that managers return finance to investors to establish a good reputation so that they can re-enter the market for future finance. The stewardship model finds a parallel in countries like Japan, where the Japanese worker assumes the role of steward, takes ownership of the job and works at it diligently.
Moreover, stewardship theory suggests unifying the roles of the CEO and the chairman so as to reduce agency costs and to give the CEO a greater role as steward in the organization. It was evident that this would better safeguard the interests of the shareholders. It was empirically found that returns improved when these roles were combined rather than separated (Donaldson and Davis, 1991).
Figure 2: The Stewardship Model. Shareholders empower and trust stewards; stewards protect and maximise shareholders' wealth, gaining intrinsic and extrinsic motivation while shareholders receive profits and returns.

2.3. Stakeholder Theory
Stakeholder theory was embedded in the management discipline in 1970 and was gradually developed by Freeman (1984), incorporating corporate accountability to a broad range of stakeholders. Wheeler et al (2002) argued that stakeholder theory derives from a combination of the sociological and organizational disciplines. Indeed, stakeholder theory is less a formal unified theory and more a broad research tradition, incorporating philosophy, ethics, political theory, economics, law and organizational science.
A stakeholder can be defined as any group or individual who can affect or is affected by the achievement of the organization's objectives. Unlike agency theory, in which the managers work for and serve the shareholders, stakeholder theorists suggest that managers in organizations have a network of relationships to serve; this includes the suppliers, employees and business partners. It was argued that this network of relationships is important, beyond the owner-manager-employee relationship of agency theory (Freeman, 1999). On the other hand, Sundaram & Inkpen (2004) contend that stakeholder theory attempts to address which groups of stakeholders deserve and require management's attention, whilst Donaldson & Preston (1995) claimed that all groups participate in a business to obtain benefits. Nevertheless, Clarkson (1995) suggested that the firm is a system with stakeholders, and that the purpose of the organization is to create wealth for its stakeholders.
Freeman (1984) contends that the network of relationships with many groups can affect decision-making processes, as stakeholder theory is concerned with the nature of these relationships in terms of both processes and outcomes for the firm and its stakeholders. Donaldson & Preston (1995) argued that this theory focuses on managerial decision making, that the interests of all stakeholders have intrinsic value, and that no set of interests is assumed to dominate the others.
Figure 3: The Stakeholder Model (Donaldson and Preston, 1995). The firm sits at the centre of a network of stakeholders: government, investors, political groups, suppliers, trade associations, customers, communities and employees.
2.4. Resource Dependency Theory
Whilst stakeholder theory focuses on relationships with many groups for individual benefits, resource dependency theory concentrates on the role of board directors in providing access to resources needed by the firm. Hillman, Canella and Paetzold (2000) contend that resource dependency theory focuses on the role that directors play in providing or securing essential resources for an organization through their linkages to the external environment. Indeed, Johnson et al (1996) concur that resource dependency theorists focus on the appointment of representatives of independent organizations as a means of gaining access to resources critical to firm success. For example, outside directors who are partners in a law firm provide legal advice, either in board meetings or in private communication with the firm's executives, that may otherwise be more costly for the firm to secure.
It has been argued that the provision of resources enhances organizational functioning, the firm's performance and its survival (Daily et al, 2003). According to Hillman, Canella and Paetzold (2000), directors bring resources to the firm such as information, skills, access to key constituents (suppliers, buyers, public policy makers, social groups) and legitimacy. Directors can be classified into four categories: insiders, business experts, support specialists and community influentials. First, the insiders are current and former executives of the firm, and they provide expertise in specific areas such as finance and law, on the firm itself, as well as on general strategy and direction. Second, the business experts are current and former senior executives and directors of other large for-profit firms, and they provide expertise on business strategy, decision making and problem solving. Third, the support specialists are the lawyers, bankers, insurance company representatives and public relations experts, and these specialists provide support in their individual specialized fields. Finally, the community influentials are the political leaders, university faculty, members of the clergy, and leaders of social or community organizations.
2.5. Transaction Cost Theory
Transaction cost theory was first initiated by Cyert and March (1963) and later theoretically described and expounded by Williamson (1996). Transaction cost theory is an interdisciplinary alliance of law, economics and organizations. This theory attempts to view the firm as an organization comprising people with different views and objectives. The underlying assumption of transaction cost theory is that firms have become so large that they, in effect, substitute for the market in determining the allocation of resources. In other words, the organization and structure of a firm can determine price and production. The unit of analysis in transaction cost theory is the transaction. Therefore, the combination of people with transactions suggests that, in transaction cost theory, managers are opportunists who arrange the firm's transactions to serve their own interests (Williamson, 1996).
2.6. Political Theory
Political theory brings in the approach of developing voting support from shareholders, rather than purchasing voting power. Hence, having a political influence in corporate governance may direct corporate governance within the organization. The public interest is largely preserved as the government participates in corporate decision making, taking cultural challenges into consideration (Pound, 1993). The political model highlights that the allocation of corporate power, profits and privileges is determined via the government's favour. The political model of corporate governance can have an immense influence on governance developments. Over the last decades, the government of a country has been seen to have a strong political influence on firms. As a result, politics has entered the governance structure or the firms' mechanisms (Hawley and Williams, 1996).

3.0. Ethics Theories and Corporate Governance

Other than the fundamental corporate governance theories of agency theory, stewardship theory, stakeholder theory, resource dependency theory, transaction cost theory and political theory, there are other ethical theories that can be closely associated with corporate governance. These include business ethics theory, virtue ethics theory, feminist ethics theory, discourse ethics theory and postmodern ethics theory.
Business ethics is the study of business activities, decisions and situations where questions of right and wrong are addressed. The main reason for this is that the power and influence of business in any given society is stronger than ever before. Businesses have become a major provider to society in terms of jobs, products and services. Business collapse has a greater impact on society than ever before, and the demands placed on firms by their stakeholders are more complex and challenging. Only a handful of business giants have had any formal education in business ethics, yet there seem to be more compromises these days. Business ethics helps us to identify the benefits and problems associated with ethical issues within the firm, and it is important because it sheds new light on both present and traditional views of ethics (Crane and Matten, 2007). In understanding right and wrong in business ethics, Crane and Matten (2007) bring in morality, which is concerned with the norms, values and beliefs embedded in social processes that define right and wrong for an individual or a community. Ethics is defined as the study of morality and the application of reason, which sheds light on rules and principles, called ethical theories, that ascertain right and wrong for a situation.
Whilst business ethics theory focuses on right and wrong in business, feminist ethics theory emphasizes empathy, healthy social relationships, loving care for one another and the avoidance of harm. In an organization, caring for one another is a social concern and not merely a profit-centred motive. Ethics also has to be seen in the light of the environment in which it is exercised. This is important, as an organization is a network of actions, hence influencing trans-communal levels and interactions (Casey, 2006). On the other hand, discourse ethics theory is concerned with the peaceful settlement of conflicts. Discourse ethics, also called argumentation ethics, refers to a type of argument that tries to establish ethical truths by investigating the presuppositions of discourse (Habermas, 1996). Meisenbach (2006) contends that this kind of settlement would be beneficial in promoting cultural rationality and cultivating openness.
Virtue ethics theory focuses on moral excellence, goodness, chastity and good character. Virtue is a state that disposes one to act well in a given situation; it is not a habit, as a habit can be mindless (Annas, 2003). Aristotle calls it a disposition with choice or decision. For example, if a board member decides to be honest, that decision strengthens his virtue of honesty. Virtue involves two aspects, the affective and the intellectual. The affective aspect of virtue theory suggests doing the right thing with positive feelings, whilst the intellectual aspect suggests doing the virtuous act for the right reason. Virtues can be instilled through education; Aristotle remarks that acquiring knowledge of ethics is like becoming a builder (Annas, 2003). Through the process of education and exposure to good virtues, the development of ethical values in a child's life becomes evident. Hence, if a person is exposed to good or positive ethical standards, exhibiting honesty, justice and fairness, then he will exercise the same, and it will be embedded in his will to do the right thing in any given situation. Virtue ethics is well suited to bringing the intangibles into an organization. Virtue ethics highlights the virtuous character in developing morally positive behaviour (Crane and Matten, 2007). Virtues are a set of traits that help a person to lead a good life, and they are exhibited in a person's life. Aristotle believed that virtue ethics consists in happiness, not in a hedonistic sense but on a broader level. Postmodern ethics theory, by contrast, goes beyond the face value of morality and addresses the inner feelings and gut feelings about a situation. It provides a more holistic approach: firms may make goal achievement their priority, foregoing or placing minimal focus on values, and hence suffer a long-term detrimental effect. On the other hand, there are firms today that are so value-driven that their values become their ultimate goal (Balasubramaniam, 1999).

4.0. Conclusion

This review has considered corporate governance from various theoretical perspectives. The emergence of agency theory, stewardship theory, stakeholder theory, transaction cost theory and political theory addresses the cause and effect of variables such as the configuration of board members, the audit committee, independent directors and the role of top management. In addition, ethics in business has been closely associated with corporate governance, as can be seen in business ethics theory, feminist ethics theory, discourse ethics theory, virtue ethics theory and postmodern ethics theory. Hence, it can be argued that corporate governance is more a matter of social relationships than a process-oriented structure. In addition, these theories focus on the view that shareholders aim to get a return on their investments. In today's business environment, business processes should also focus on other critical factors such as legislation, culture and institutional contexts.
Corporate governance is constantly changing and evolving, and changes are driven by both internal and external environmental dynamics. The internal environment carries a fixed mindset of the shareholders' relationship with stakeholders and the maximization of profits. Meanwhile, issues in the external environment, such as the break-up of large conglomerates like Enron, mergers and acquisitions of corporations, business collaborations, easier financial funding, human resource diversity, new business start-ups, globalization and business internationalization, and the advance of communication and information technology, have directly and indirectly caused changes in corporate governance. The current corporate governance theories cannot fully explain the complexity and heterogeneity of corporate business. Governance may vary from country to country due to cultural values and political, social and historical circumstances. In this sense, governance for developed countries and developing countries can vary due to the cultural and economic contexts of the individual country.
Moreover, effective and good corporate governance cannot be explained by one theory; it is best to combine a variety of theories, addressing not only the social relationships but also emphasizing the rules and legislation, and stricter enforcement, surrounding good governance practice, and going beyond the norms of a mechanical approach to corporate governance. The literature has shown that even with strict regulations there have been infringements in corporate governance. Hence, it is crucial that a holistic realization be driven across the corporate world that would bring about a different perspective on corporate governance. The days of cane and bridle are becoming a mere shadow, and the need to get to the root of a corporation is essential. Therefore, it is important to revisit corporate governance in the light of the convergence of these theories and with a fresh angle, one which takes a holistic view and incorporates subjectivity from the perspective of the social sciences.

Invest
or

Invest
or

Invest
or

Invest
or

Firm
Invest
or

Invest
or
Invest
or
Invest
or

Invest
or

Invest
or

Invest
or

Invest
or

Invest
or

Invest
or

Invest
or

Invest
or

Invest
or

Invest
or

Invest
or

Invest
or

CEO Duality and Agency Theory


Time Warner president and CEO Jeffrey Bewkes accepted a position of duality
when he took on the role of chairman of the board of the company in January
2009.

Appointing a CEO's successor gets a little more complicated when the chief executive officer
is also a member of the board of directors. Let's examine how muddled up things can get in
this case.
So we know that shareholders elect a board of directors for a company, and that board in turn
elects the CEO. But we've also learned that in some cases, a CEO can be a member of the
board itself. In fact, he or she can simultaneously hold the position of chairman of the board
and CEO. Those who study corporate governance call this situation CEO duality.
As you might expect, duality is controversial. Even theorists who strive to find the best ways
of managing a company are split about the issue. Two schools of thought represent the
different arguments. Advocates of agency theory argue that the positions of CEO and
chairman should be separate. They say that a single officer who holds both positions creates a
conflict of interest that could negatively affect the interests of the shareholders. Why? Well,
in this situation, the CEO/chairman is able to direct board meetings and isn't restrained
from acting in his or her own self-interest when a separate chairman isn't there to look out for
shareholders. This very powerful CEO would therefore generally weaken the oversight power
that boards hold -- in other words, there wouldn't be a solid system of checks and balances.
And it's not just an issue of power for the acting CEO/chairman. CEO duality can also
complicate the already frustrating issue of CEO succession. In some cases, a CEO/chairman
may choose to retire as CEO, but keep his or her role as the chairman. Although this splits up
the roles, which appeases agency theorists somewhat, it nonetheless puts the new CEO in a
difficult position. The chairman is bound to question some of the new changes put in place,
and the board as a whole might take sides with the chairman whom they trust and have a
history with [source: Lavelle]. This conflict of interest would make it difficult for the new
CEO to institute any changes, as the power and influence still remains with the former CEO.
If that's agency theory, what does the opposing side argue?
CEO Duality and Stewardship Theory
Breaking Barriers
In the United States, the positions of CEO and board members have been
dominated historically by white males, but this is slowly changing. Now, about
14.5 percent of Fortune 500 companies have female CEOs [source: Shambaugh].
Women and minorities also make up about 11 percent of board members in
corporations [source: Kidder].

CEO duality is a pretty hot debate. While advocates of agency theory believe that little good
can come from a CEO who serves simultaneously as chairman of the board of directors, there
is another side to the argument. Those who support stewardship theory maintain that when
one person holds both roles, he or she is able to act more efficiently and effectively. Holding
dual roles as CEO/chairman creates unity across the company's managers and board of
directors, which ultimately allows the CEO to serve the shareholders even better.
Unfortunately, studies on the different situations (companies that have duality and those that
do not) haven't been able to come up with a clear answer on which is better for running a
company [source: Crane]. Studies seem to indicate that duality doesn't have a direct
correlation to how well a company performs. One might assume that without a separate
chairman to oversee the CEO, the environment is ripe for corruption. However, many are
surprised to learn that even in the high-profile corporate scandals of Enron and WorldCom,
which centered around CEO corruption, the companies didn't have a duality structure [source:
Knowledge@Wharton].
This last fact is even more intriguing when you consider that most CEOs of big companies in
the United States also act as the chairman. About 80 percent of the big corporations in the

United States have a system of duality [source: Alvarez]. The same isn't true in Europe,
however. There, duality is either not permitted or, as in the U.K., not very common [source:
Huse].
Up until now, we haven't discussed what is actually the most hot-button issue regarding
CEOs: salary. We'll get to that next.
CEO Salaries

We all know that our boss makes more money than we do -- but finding out just how much
more can be shocking and often hard to swallow. Chief executive officers (CEOs) obviously
get paid handsomely (for the most part). But how much is too much? CEO pay is always
controversial -- especially when the CEOs are getting perks at a time when the company isn't
doing well.
Looking at how much modern CEOs get paid, you may think that they get to decide their own
salary. But this isn't allowed in public companies. Boards of directors have that responsibility,
and this is a harder task than you might expect. Pay too much and the board risks not only
marring the public image of the company, but also squandering corporate funds. Pay too little
and the board won't be able to attract or retain talented executives who are sought after in a
competitive market.
It's such a difficult decision that boards often designate a compensation committee made up
typically of two to five board members to determine how much to pay a CEO. Regulations
stipulate that the members of this committee can't be current employees of the company
(inside directors), which would cause a conflict of interest. Although private companies aren't
required to follow such regulations, many do anyway [source: Smith].
Compensation committees often consider the advice of internal executives, but they also
recruit outside consultants to help them determine an appropriate salary for the company's
CEO. The committees strive to design an appropriate philosophy for compensating the CEO
in a way that motivates performance. After the committee makes its recommendations, the
board can decide whether or not to approve them. In the United States, Securities and
Exchange Commission (SEC) regulations require that committees explain the reasons for
their decision to shareholders in a released statement [source: Smith].
There's at least one CEO who makes less than minimum wage -- kind of. Find out who on the
next page.
CEO Perks
Loss of Loyalty
Although it used to be customary for upper-management employees to stick with
a single company for much of their lives, this tradition changed in the 1980s.
Since then, executives have been more willing to switch companies for better
offers. This trend has contributed to higher salaries for executives as companies
make bids for the best candidates on the job market [source:
Knowledge@Wharton].

Steve Jobs, the CEO of Apple whose health we discussed on a previous page, is a pretty
notable exception when it comes to high CEO salaries. Apple pays him $1 a year. You read
that right: a single dollar. But don't feel too badly for him; he actually takes home a whole lot
more than that and is reportedly worth billions [source: Knowledge@Wharton]. That's
because in lieu of a traditional paycheck, Jobs receives stock options that allow him to cash in
on the success of the company.
As Jobs' case clearly illustrates, CEO compensation is more than just salary. Actually, most
top earners receive the bulk of their take-home pay from stock options. Larry Ellison, CEO of

Oracle Corporation and the top-paid CEO of 2007, received a cool $182 million in stock
options and a mere million from his salary [source: DeCarlo]. In addition to stock options,
CEOs often get hefty bonuses, privileges to use company-paid perks (like private jets) and
large contributions to their retirement plans. And although this is great news for CEOs, it
gives researchers quite a headache. Because compensation takes so many forms, those who
want to analyze, compare and determine CEO compensation find it a daunting task.
Overall, it's important to take sensationalized reports of a CEO's high salary with a grain of
salt. It can be difficult to estimate his or her value to a company and to guess the various
factors that go into the board's difficult decision of determining salary.
If you want more on the spoken and unspoken rules that govern a company, browse the links
on the next page.
How CEOs Work
If you're too intimidated to ask him or her personally, this article will tell you
what a CEO does. See more pictures of corporate life.

You've heard about his private jet, fancy mansion and sports car collection -- not to mention
the cutthroat business practices that helped him attain all these things. He's the CEO of your
company, and you're probably lucky if he knows your name.
Well, this is the stereotypical portrait of a CEO, anyway. In reality, yours might be a nice,
down-to-earth guy, or he may be a she. Regardless, CEOs have a reputation for living
luxuriously, having keen business minds and striking fear into the hearts of employees
whenever they happen to drop in.
In corporate culture, a chief executive officer, or CEO, is the big boss. CEOs may not do the
nitty-gritty hirings and firings themselves, but they run the show. They're in charge of setting
strategy and company goals and making the high-end decisions. Because this is a big job, they
delegate many of their powers to other executives. Employees can question a CEO's
judgment, but only at their own risk. That's not to say CEOs are untouchable or have
unchecked power. Although he or she may be top dog in the office, the CEO must answer to
a board of directors.
Nevertheless, the power associated with the position often generates suspicion and
controversy. When a company is suffering through a tough quarter and sends word to its
employees that there won't be any Christmas bonus this year, it certainly looks bad to see a
CEO take an increase in salary and fly off on vacation in the company jet. It's also suspicious
when a company's CEO serves simultaneously as chairman of the board of directors. What's
more, the position draws heightened scrutiny these days after such corporate scandals as
Enron exposed CEOs abusing their power.
Before we delve into these and other controversies that swarm around CEOs, we need to
understand what these officers do. It can be difficult to define a CEO's responsibilities due to
the fact that every company's CEO is different. Because they hold the top internal position in
a corporation, CEOs get to decide which duties they want to take on personally and which
they want to delegate. And because every corporation has its own culture and various
industries operate on different corporate structures, we'll have to look at the role from a
general perspective. Let's start with a brief overview of how corporations work.
Corporate Structure: Board of Directors

Have you ever tried to understand the ranks of executives in a company only to get lost in
acronyms and jargon? You're not alone; the balance of power in the corporate world can be
confusing even to those entrenched in it. But don't dismay: We'll walk you through the basic
corporate structure.

Just like many governments, corporations have a system of checks and balances so that not
too much power is centered in one person or group. In companies, the structure is set up to
separate powers of ownership and management. This wasn't always the case. Before the
Industrial Revolution in the 19th century, companies were typically family-run and very
small by today's standards. But eventually, powered by machines and advanced efficiency,
individual companies grew exponentially. Soon after came the dawn of public ownership of
companies, which helped fund these gargantuan institutions.
When various shareholders have partial ownership of a company, they want to make sure
whoever's running the show is looking out for their best interests. This is what a board of
directors is for. The board represents the shareholders and other stakeholders (those who
have a vested interest in the company). The board of directors doesn't run the company itself,
but it oversees those who do.
In a public company, the shareholders elect the members of a board of directors. The board is
headed by a chairman and contains other directors, the number of which varies from
company to company. Directors can be either inside directors or outside directors. Inside
directors are those who are also managers in the company or happen to be major
shareholders. Outside directors, on the other hand, don't have a role in the company. They
typically have experience in the industry (or might even be chief executive officers of other
companies), which allows them to make informed decisions about the business. Some have
memberships on multiple boards.
While inside directors can share their unique insight from an internal perspective, outside
directors are considered unbiased. Both kinds of directors have the same general
responsibilities on a board. Directors oversee the management of the company collectively by
approving strategies and budgets. They may not meet regularly, and the influence they truly
wield over management can depend on the dynamics and atmosphere of the company.
Corporate Structure: Company Management Ladder

Private Matters
Today, even if a corporation is private and isn't publicly traded, laws and
regulations usually require it to have a board of directors that looks out for the
interests of owners and various stakeholders, such as the local community.
However, the board of a private company has fewer oversight rules and
regulations to follow. Many private companies also have CEOs, though not all of
them do.

Part of a board of directors' responsibilities of overseeing management is electing a chief


executive officer (CEO). From that point, the similarities among most companies end -- each
typically has its own unique management structure. Sometimes, a company will have a
president, which may or may not be the same person as the CEO. If the CEO and the
president aren't the same person, the president's rank is just below the CEO. Another
important figure who may be under the CEO is the chief operating officer (COO). The
person in this position is closer to the detailed operations and goings-on of business.
Although the COO doesn't set the company strategy like the CEO does, he or she does make
sure that strategy is getting carried out by upper management. A similar position is that of the
chief financial officer (CFO), who, as you might expect, is in charge of the company's
financial matters. The CFO's primary responsibility is interpreting financial situations and
reporting them to the CEO and the board, as well as making the information available to
shareholders. If necessary, a CEO can also hire various vice presidents for different
departments in the company.

The different theories on how best to organize and run a large corporation have allowed the
subject to blossom into its own field of study known as corporate governance. Under this
subject, researchers inspect such things as how many inside or outside directors should make
up a board, or the best balance of powers between the board and the CEO.
Next, let's focus in on the CEO.

Duties of a CEO
A CEO must make the important high-end decisions for the company.

Putting aside the vague language, what does a chief executive officer (CEO) do, exactly?
All CEOs are responsible for determining the overall strategy of a company. For example, the
CEO of a car company would have to decide whether to focus on building large SUVs for the
family and adventurer demographic or to jump on the latest green trend and build vehicles
with more efficient gas mileage, instead. The CEO of a company that makes computers might
decide whether to cut prices to be more competitive in the consumer market or to hire more
engineers so that the company can make a better computer.
The CEO's day-to-day duties may depend on the size of the company he or she oversees. In a
big company, setting the strategy in all departments and for all facets of the industry can be a
full-time job. This is why you never see CEOs of large corporations stepping into the
warehouse and helping to get orders through (except, perhaps, in photo ops). In smaller
companies and start-ups, things are usually different. A CEO who was also the founder of the
company and is struggling to make it grow probably has a more hands-on role. He or she is
more likely to step into any role necessary to get the job done. And, of course, the daily
responsibilities of a CEO may also vary across industries.
Even though they can delegate power, CEOs are ultimately responsible for everything related
to management, such as operations and financial matters. This means that the chief operating
officer (COO) and chief financial officer (CFO) report directly to the CEO. As we've
mentioned, since the board of directors chooses the CEO, the CEO must, in turn, report to the
board.
Depending on how involved the board chooses to be, it can take a backseat to the CEO's
vision and decisions. Or, the board could opt to take a more direct role and charge the CEO
with carrying out its plans. The CEO's personality is a major factor in determining his or her
relationship with the board. In general, CEOs tend to have domineering, arresting
personalities that can help them wield power over a board. But because the board has the
power to choose and remove the CEO, there's always that check on power that can rein in a
CEO's behavior.
More CEO Responsibilities

Regardless of whether it's a big or small company that he or she oversees, the CEO is usually
instrumental in setting the tone for an organization. CEOs are able to use their power and
method of leadership in a way that motivates employees. For instance, if employees get the
impression that their CEO is working as hard as they do and that he or she really appreciates
their hard work, this can elicit loyalty from all levels of employees. But the CEO doesn't
always set a positive tone; his or her behavior can discourage employees as easily as it
bolsters their morale. If a CEO comes across as unattached to the company's employees and
flies off frequently on exotic vacations, employees may not feel compelled to work hard for
him or her.
Many people assume that because of their heavy responsibilities, CEOs are especially prone
to stress-related health problems. According to some research, however, those in mid-level
management are more likely to develop health problems than those who work at higher levels

of the corporate ladder [source: Quick]. So it would seem that more responsibility doesn't
necessarily equate to more stress. However, some argue that top-ranking CEOs are able to
avoid job stress by dodging responsibility. When a company's performance takes a dive,
CEOs may try to pass the buck down to lower executives. Although this is just one possible
explanation for why CEOs wouldn't be as stressed as some of those managers to whom they
delegate power, shirking responsibility has been shown to be an unwise business tactic. According
to some studies of Fortune 500 companies, when high-level executives take the blame for
slumps, it's more likely to result in improved performance from the employees [source:
Pfeffer]. Other studies confirm that even in hypothetical situations, employees are more likely
to approve of and respect executives who shoulder the blame for unfavorable events [source:
Pfeffer].
Because CEOs are so vital to the success, identity and tone of a company, controversy always
lurks around the corner when the top dog retires, as we'll see next.
The Problem of Losing a CEO

Car accidents, heart attacks, cancer. As much as we hate to think about it, no one lives
forever. If a CEO is truly successful, he or she won't outlive the corporation itself. And,
CEOs may also choose to leave the company suddenly to go to another organization, to pursue
other exploits or retire. Of course, the board can always fire the CEO as well.
Whatever the cause, when a company loses a CEO, it can be like the frenzy of a chicken
running around with its head cut off. That's because of the problem of CEO succession -- in
other words, deciding who will be a suitable replacement. Just as monarchies have struggled
historically with the death of a king who has no strong or obvious successor, so must
companies struggle with the departure of a CEO. If companies aren't careful, what plays out
is the stuff of Shakespearean drama. In fact, in the 2000 motion picture release of
Shakespeare's "Hamlet," which deals with problems of royal succession, filmmaker Michael
Almereyda modernized the plot to revolve around the death of a CEO in place of a king.
So why is naming a new CEO such a big deal? Why does the media rush to the scene when
Steve Jobs, the CEO of Apple, so much as sneezes? Basically, it's because of the reasons we
laid out on the last page -- the CEO is the lifeblood of a company. He or she sets the direction
of a corporation, and shareholders don't want to hold on to the stock of a directionless
company for long. Jobs himself is a great example of this because many credit him with
saving Apple from the brink of bankruptcy and subsequently raising it to enormous success.
Without him, some fear the company might sink yet again. To see evidence of how much a
company hinges on its CEO, note how Apple's shares dipped at the mere rumor of Jobs'
relapse into ill health [source: Reuters]. In January 2009, news surfaced of Steve Jobs
taking a leave of absence from his position at Apple. The announcement was enough to
institute a temporary halt on the trading of Apple stock. To calm investors, Jobs appointed
COO Tim Cook to take over daily operations for him during his leave.
CEO Succession

So what does happen to a company when a CEO leaves?


As we've learned, it's up to the board of directors to hire and fire CEOs. The decision of CEO
succession is entirely up to the board -- in theory, at least. In reality, it might be a different
story. In the past, boards of directors generally took a passive role in their corporate
oversight. It was the accepted tradition that CEOs should choose and groom their successors
while they're still at the company. Once the CEO died or retired, boards typically followed
suit and elected the former CEO's choice. Microsoft's Bill Gates took a variation of this route,
as he began planning his own succession at age 45 [source: Mader]. He bowed out gradually,
leaving the CEO position and naming a successor in 2000, but retaining his position as
chairman and taking on a new role of chief software architect. By 2006, he decided to leave
the management position but stay on as chairman.

But not every CEO phases himself or herself out of the picture gradually like Gates did. In
the modern dynamic of corporate culture, a board of directors is more likely to take an
aggressive role in appointing a successor. In fact, it's not uncommon for the board to make an
independent choice, perhaps selecting a candidate from outside the company. Hiring CEOs
from outside the organization has become more popular lately. In the 1960s, for instance,
outsiders accounted for 9 percent of new CEOs, but by 2000, this figure had risen to about 33
percent [source: Carey]. Theorists disagree about what factors are behind this shifting
ideology. Some claim that boards increasingly (and unwisely) seek charismatic, superstar
CEOs for the illusion of strong company leadership [source: Monks].
Because of the problems that can ensue from the sudden death or departure of a CEO,
experts recommend that boards always have a plan ready for a stable transition. This would
involve communicating with various managers to appoint the best successor [source: Monks].

Production management
How does the transportation method work?
The transportation method consists of the following three steps:
1. Obtaining an initial solution, that is to say, making an initial assignment in such a way that a basic feasible solution is obtained;
2. Ascertaining whether the solution is optimal or not by determining the opportunity costs associated with the empty cells; and, if the solution is not optimal,
3. Revising the solution until an optimal solution is obtained.
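
The first step is often carried out with a simple allocation rule such as the north-west corner method. The short Python sketch below is a minimal illustration of step 1 only, under assumed data: the supply and demand figures in it are made up for the example and do not come from the text.

# Minimal sketch of step 1 (an initial basic feasible solution) using the
# north-west corner rule. Supply and demand values below are assumed for
# illustration only.

def northwest_corner(supply, demand):
    """Return an initial allocation matrix for a balanced transportation problem."""
    supply, demand = supply[:], demand[:]        # work on copies
    alloc = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        qty = min(supply[i], demand[j])          # ship as much as the cell allows
        alloc[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:                       # row exhausted: move to next source
            i += 1
        else:                                    # column satisfied: move to next destination
            j += 1
    return alloc

supply = [20, 30, 25]      # units available at each source (assumed)
demand = [10, 25, 15, 25]  # units required at each destination (assumed)
for row in northwest_corner(supply, demand):
    print(row)

Steps 2 and 3 (checking the opportunity costs of the empty cells and revising the allocation) would then be applied to this starting solution.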

Layout Planning
The term 'layout planning' can be applied at various levels of planning:
Plant location planning, which is concerned with the location of a factory, a warehouse or some other facility. This is of some importance in the design of multinationally cooperating, global supply-chain systems.
Department location planning, which deals with the location of different departments or sections within a plant/factory. This is the problem we shall study in a little more detail below.
Machine location problems, which deal with the location of separate machine tools, desks, offices and other facilities within each cell or department.
Detailed planning: the final stage of facility planning is the generation, using CAD tools or detailed engineering drawings, of scaled models of the entire floor plans, including details such as the location of power supplies, cabling for computer networks and phone lines, etc.
The Department Location Problem: A department is defined as any single, large resource with a well-defined set of operations and fixed material entry and exit points. Examples range from a single large machine tool to a design department. The aim is to develop a BLOCK PLAN showing the relative locations of the departments.
Criteria: The primary criterion for evaluating any layout is the MINIMIZATION of material handling (MH) costs.
MH cost components: depreciation of MH equipment, variable operating costs, and labor expenses. MH costs are typically directly proportional to (a) the frequency of movement of material, and (b) the length over which material is moved.

Advantages of these criteria (reduced material movements):


1. Reduction of Aisle space required.

2. Lower WIP levels


3. Lower throughput times
4. Less product damage and lower obsolescence
5. Reduced storage space
6. Simplified material control and scheduling
7. Less congestion in system.
Consider the following:

Location of next operation        Material handled per move
adjacent machine                  single part
across the aisle                  unit load
across the plant                  lot size of over 1 hour of production
another plant                     one day's production

At each stage, the WIP is increasing by as much as 10 times.


The most popular layout for complex systems is the SPINE LAYOUT. Examples are shown in
the following figure.

The spine defines a central channel of material flow for the entire facility. Each department
branches out of this central core. Ideally, each department has its own input/output area along the
spine. This departmental point of usage concept reduces material flow.

We shall now look at some details of how to locate departments along a spine to optimize the
flow of materials. Let us first try to see if we can evaluate whether there is a dominant flow
pattern in a manufacturing system or not.

fij = flow volume (trips/time) between departments i and j.
hij = cost per unit distance for the material handling system (unit fixed cost + unit variable cost).
We define the weight of the cost of moving material between departments i and j as:
wij = fij.hij
Given the values of all the wij's, one measure of flow dominance is the coefficient of variation of these weights:
f = (standard deviation of the wij values) / (mean of the wij values)
What do different computed values of f mean?
Clearly, f = 0 implies that there is no significant variation of flow volumes between different pairs of departments. In such cases, almost any layout solution will be close to optimal.
Similarly, if f is large (> 2), it implies that some flows in the system are very low while others are extremely dominant. This is typical of assembly-line types of systems, and it is easy to design the layout for such systems (why?).
However, if the value of f is close to 1, then it is difficult to see dominant flows, and other techniques of layout design need to be employed.
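
As a rough illustration of these definitions, the Python sketch below computes the weights wij and the flow-dominance coefficient f for a small set of made-up inter-departmental flows; the flow volumes and unit handling costs in it are assumptions, not data from the text.

# Sketch: compute wij = fij * hij and the flow-dominance coefficient f
# (coefficient of variation of the wij values). All numbers are illustrative.
import statistics

flows = {("A", "B"): 120, ("A", "C"): 15, ("B", "C"): 90,
         ("B", "D"): 10, ("C", "D"): 75}          # fij: trips per unit time (assumed)
unit_costs = {pair: 1.0 for pair in flows}        # hij: cost per unit distance (assumed uniform)

weights = [flows[p] * unit_costs[p] for p in flows]           # wij
f = statistics.pstdev(weights) / statistics.mean(weights)     # coefficient of variation

print(f"flow dominance f = {f:.2f}")
# f near 0 : flows roughly uniform, almost any layout is close to optimal
# f large  : a few dominant flows, layout design is straightforward
# f near 1 : no clear dominance, use a method such as Systematic Layout Planning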
One such technique is the manual design methodology developed by Muther, called:

Systematic Layout Planning


The figure below shows the steps of this methodology.

The method can be described in terms of the basic steps:


1. Data Collection: A study of the product mix, the quantity of each product to be produced,
the routing for each product, the support services needed, and the schedule (the timing and
transport issues related to the production schedules of the product types).
2. Flow Analysis: here we identify what each department will be, what its inputs and
outputs are likely to be, the physical workstations required to do the tasks (in
the process plan), etc.
At the early stages, this involves considering the quantity of material flow, as well as the
overall flow lines that could best suit the implementation of departments.
Examples include straight-line flow, S-shaped flow, U-shaped flow, or W-shaped flows.
Further, even for a spine-shaped system, the spine geometry can be a straight line or U-shaped (the latter case is useful if a single material receiving/delivery point is preferred).

3. Quantitative analysis: Some factors, such as flow costs, can be quantified. Several others
are not so easy to quantify. For example:
a. MH receiving and delivery stations should be kept together.
b. Delicate testing equipment should be placed far from high-vibration areas, etc.
Such relationships can be captured by using REL diagrams, as shown in the
figure below. The relative importance of each relationship is expressed as a
subjective rating, ranging from A (absolutely necessary) through E, I and O down to
U (unimportant), with X denoting pairs that must be kept apart.
The diagram can also record the reasons for such decisions. An example is shown
below.

4. Relationship Diagram: The quantitative and qualitative analyses are combined into a
relationship diagram. One way to do this is to assign numerical values to the A-X
ratings (typically, a large integer for an A-rating, 0 for a U-rating, and a large negative integer for an X-rating).
These ratings can then be used to determine the closeness rating for each department as
the sum of the rating-values of all links coming into it. Usually, a department with a
large rating value has significant links with many other departments, and should
therefore be at the center of the layout (to be close to all other departments).
We can now use these ratings (or their numerical values) to define the total closeness
rating of different departments. If V(X) is a function which defines the value of
achieving closeness between two departments, the total closeness rating of a department
can be defined as the sum of its closeness rating values over all its sister departments.

To give a numerical example, assume that we allow: V(A) = 81, V(E) = 27, V(I) =9,
V(O) = 3 and V(U) = 1. Then the closeness ratings corresponding to each department in
the example figure above are:

Department     Total Closeness Rating
SR             9 + 3 + 9 + 3 + 81 = 105
PC             9 + 0 + 1 + 1 + 27 = 38
PS             58
IC             39
XT             35
AT             165

In the above, the X-ratings were ignored in order to allow each department a fair
chance of placement in the initial design of the layout. The real value of this rating will be
used later, when we put some effort into modifying the first-guess solution.
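The arithmetic behind the table above can be sketched as a small program. The REL ratings used below are hypothetical example data chosen only to show the mechanics of the calculation; they are not intended to reproduce the exact totals in the table.

V = {"A": 81, "E": 27, "I": 9, "O": 3, "U": 1, "X": 0}   # X ignored at this stage

# rel[(i, j)] = REL-chart rating between departments i and j (hypothetical data)
rel = {("SR", "AT"): "A", ("SR", "PS"): "I", ("SR", "PC"): "I",
       ("SR", "IC"): "O", ("SR", "XT"): "O",
       ("PC", "AT"): "E", ("PC", "PS"): "U", ("PC", "IC"): "U",
       ("PS", "AT"): "E", ("PS", "XT"): "E",
       ("IC", "AT"): "E", ("XT", "AT"): "O"}

totals = {d: 0 for d in ["SR", "PC", "PS", "IC", "XT", "AT"]}
for (i, j), rating in rel.items():
    totals[i] += V[rating]       # each link contributes to both departments it connects
    totals[j] += V[rating]

for dept, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(dept, total)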
Forming the first guess solution (greedy algorithm):
Step 1. Notice that AT has the highest rating, and so is placed in the center of the layout
(why ?)
Step 2. The next highest ranked department is SR, which may be placed adjacent to AT
due to their mutual A-rating. We put it on top of AT.
Step 3. Next up is PS, which should go adjacent to AT (since V(AT, PS) is the highest
rated closeness value for PS).
Step 4. Next comes XT, which should be close to PS.
Step 5. Next is IC, which should be close to AT and is placed below it.
Step 6. Finally, we have PC, which must stay away from PS.
Using these directions, we have a first attempt at the layout as follows:

Notice the odd shape of the final layout. This does not matter, since we still have not
considered the relative sizes of the departments. But before considering that, we must
also attempt to improve upon our greedy solution.
One heuristic to do so is called the 2-Opt method. A k-opt method is said to have
converged when any switching between k variables (in this case, locations of
departments) cannot improve upon the objective (in our case, minimization of the total
MH cost).
The 2-Opt procedure to improve on the greedy solution is straightforward, and is
described in your text (Askin and Standridge, p. 219). In summary, it is a hill-climbing
heuristic in which, starting from the initial solution, at each step we compute
the reduction (if any) in cost associated with switching the positions of each pair of
departments.
The pair which yields the maximum reduction in costs (steepest local benefit) is selected
at this step. The switch is made, and the procedure continues, until at some stage, we are
unable to find any pair-switch which improves on the MH cost.
In the above, the MH cost associated with any pair of departments is often based on the
estimated MH cost factor, wij that we computed earlier, multiplied by an estimate of the
distance between the two cells.
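A minimal sketch of this pairwise-exchange idea is shown below. The departments, the weights wij, the candidate locations and the rectilinear-distance assumption are all illustrative; the point is only to show the structure of a 2-Opt improvement loop.

from itertools import combinations

def layout_cost(assign, w, pos):
    # total MH cost = sum of w_ij times rectilinear distance between assigned locations
    cost = 0.0
    for (i, j), weight in w.items():
        (x1, y1), (x2, y2) = pos[assign[i]], pos[assign[j]]
        cost += weight * (abs(x1 - x2) + abs(y1 - y2))
    return cost

def two_opt(assign, w, pos):
    # keep applying the best department-pair swap until no swap lowers the cost
    improved = True
    while improved:
        improved = False
        base = layout_cost(assign, w, pos)
        best_delta, best_pair = 0.0, None
        for a, b in combinations(list(assign), 2):
            assign[a], assign[b] = assign[b], assign[a]          # trial swap
            delta = layout_cost(assign, w, pos) - base
            assign[a], assign[b] = assign[b], assign[a]          # undo trial swap
            if delta < best_delta:
                best_delta, best_pair = delta, (a, b)
        if best_pair:
            a, b = best_pair
            assign[a], assign[b] = assign[b], assign[a]
            improved = True
    return assign

# hypothetical data: four departments to be placed on four grid locations
w = {("D1", "D2"): 10, ("D1", "D3"): 2, ("D2", "D4"): 8, ("D3", "D4"): 6}
pos = {"L1": (0, 0), "L2": (1, 0), "L3": (0, 1), "L4": (1, 1)}
assign = {"D1": "L1", "D2": "L4", "D3": "L2", "D4": "L3"}
assign = two_opt(assign, w, pos)
print(assign, layout_cost(assign, w, pos))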
5. Space requirements: these are determined based on industrial standards, equipment
required, shelf space required, etc.
6. Space availability: this is determined based on the economic analysis, as well as on other
constraints that may arise (especially if the system is to be housed in an existing facility).
The last two considerations will give an estimate of total space for each department, and
sometimes also the shape of each department (based on flow type within the department).
7. Space relationship diagram: In this part, we substitute in the actual area on each
department, and fit the departments into the available space. Usually, the solution
methods may be computer-assisted heuristics, or just direct visual methods.
8. Putting in the constraints: Finally, other existing constraints are employed to cut down the
number of feasible solutions, to result in a small set of solutions. From among these,
direct comparison can be used to rank, eliminate, or select the optimum design.
Decision theory is a branch of applied probability theory that evaluates the consequences
of decisions. It is often used as an economic instrument. Two other well-known
methods are the simple utility value analysis (NWA) and the more precise Analytic
Hierarchy Process (AHP), in which criteria and alternatives are represented, compared and
evaluated in order to find the optimal solution to a decision problem.
One differentiates between three branches of decision theory:
1. Normative decision theory looks for criteria of rational decision-making and aims to answer
the question of how one should reasonably decide in a given situation. To do so it must adopt
some simplifying model assumptions, for example axioms about the rationality of the decision
maker.
2. Prescriptive decision theory concerns itself with providing procedures for reaching
rational and practicable decisions.
3. Descriptive decision theory, in contrast, examines empirically how decisions
are actually made in reality.
The basic (normative) model of decision theory consists of the decision field and the target
system. The decision field contains the action space (the set of possible action
alternatives), the state space (the set of possible environmental states) and a result
function, which assigns a value to each combination of action and state. A frequent problem
is that the true environmental state is not known. Here one speaks of uncertainty,
in contrast to the situation of certainty, in which the environmental state is known. The
decision situation can be classified into:
* Decision under certainty: the occurring state is known (deterministic decision
model).
* Decision under uncertainty: it is not known with certainty which environmental
state s_j will occur; one further differentiates between:
o Decision under risk: the probability p_j of each possibly occurring environmental
state s_j is known (stochastic decision model).
o Decision under (strict) uncertainty: the possibly occurring environmental states are known,
but not their probabilities of occurrence.
With a decision under risk, expected values can be calculated over all possible consequences of
each individual decision. With a decision under uncertainty this is not possible unless the
principle of insufficient reason (the indifference principle), which assigns the same probability to
each state, is used. On the basis of such probability assessments, an expected value
can then be determined even under uncertainty.
A (single- or multi-level) decision-making process with its different consequences can be plotted as a
decision tree.
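As an illustration of the decision-under-risk case, the expected value of each action can be computed directly from the result function and the state probabilities. The payoff table and probabilities below are assumed figures, not taken from the text.

states = ["strong demand", "weak demand"]
probs = [0.6, 0.4]                       # p_j for each state s_j

# result function: payoff of each action under each state (illustrative numbers)
payoffs = {"build large plant": [200, -60],
           "build small plant": [90, 30],
           "do nothing":        [0, 0]}

expected = {a: sum(p * v for p, v in zip(probs, vals)) for a, vals in payoffs.items()}
print(expected)                           # e.g. build large plant -> 0.6*200 + 0.4*(-60) = 96
print("choose:", max(expected, key=expected.get))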

Decision theory is not applicable if the entrepreneur or manager must also take into account a
rationally acting opponent (for example a competitor) in the decision. Such situations cannot be
described by probability calculus alone, since the behaviour of the opponent is neither
deterministic nor purely random. In such cases game theory is used.
Decision theory has recently also been used in the evaluation of investments. Under the name of
real options, decision tree procedures are applied in order to judge the value of flexibility in
decisions, i.e. the value of the option to decide at a later time.
OPERATIONS SCHEDULING
Scheduling pertains to establishing both the timing and use of resources within an organization.
Under the operations function (both manufacturing and services), scheduling relates to use of
equipment and facilities, the scheduling of human activities, and receipt of materials.
While issues relating to facility location and plant and equipment acquisition are considered long
term and aggregate planning is considered intermediate term, operations scheduling is considered
to be a short-term issue. As such, in the decision-making hierarchy, scheduling is usually the
final step in the transformation process before the actual output (e.g., finished goods) is
produced. Consequently, scheduling decisions are made within the constraints established by
these longer-term decisions. Generally, scheduling objectives deal with tradeoffs among
conflicting goals for efficient utilization of labor and equipment, lead time, inventory levels, and
processing times.
Byron Finch notes that effective scheduling has recently increased in importance. This increase
is due in part to the popularity of lean manufacturing and just-in-time. The resulting drop in
inventory levels and subsequent increased replenishment frequency has greatly increased the
probability of the occurrence of stock-outs. In addition, the Internet has increased pressure to
schedule effectively. "Business to customer" (B2C) and "business to business" (B2B)
relationships have drastically reduced the time needed to compare prices, check product
availability, make the purchase, etc. Such instantaneous transactions have increased the
expectations of customers, thereby, making effective scheduling a key to customer satisfaction. It
is noteworthy that there are over 100 software scheduling packages that can perform schedule
evaluation, schedule generation, and automated scheduling. However, their results can often be
improved through a human scheduler's judgment and experience.
There are two general approaches to scheduling: forward scheduling and backward scheduling.
As long as the concepts are applied properly, the choice of methods is not significant. In fact, if
process lead times (move, queue and setup times) add to the job lead time and process time is
assumed to occur at the end of process time, then forward scheduling and backward scheduling
yield the same result. With forward scheduling, the scheduler selects a planned order release date
and schedules all activities from this point forward in time.
With backward scheduling, the scheduler begins with a planned receipt date or due date and
moves backward in time, according to the required processing times, until he or she reaches the
point where the order will be released.
Of course there are other variables to consider other than due dates or shipping dates. Other
factors which directly impact the scheduling process include: the types of jobs to be processed

and the different resources that can process each, process routings, processing times, setup times,
changeover times, resource availability, number of shifts, downtime, and planned maintenance.
LOADING
Loading involves assigning jobs to work centers and to various machines in the work centers. If
a job can be processed on only one machine, no difficulty is presented. However, if a job can be
loaded on multiple work centers or machines, and there are multiple jobs to process, the
assignment process becomes more complicated. The scheduler needs some way to assign jobs to
the centers in such a way that processing and setups are minimized along with idle time and
throughput time.
Two approaches are used for loading work centers: infinite loading and finite loading. With
infinite loading jobs are assigned to work centers without regard for capacity of the work center.
Priority rules are appropriate for use under the infinite loading approach. Jobs are loaded at work
centers according to the chosen priority rule. This is known as vertical loading.
Finite loading projects the actual start and stop times of each job at each work center. Finite
loading considers the capacity of each work center and compares the processing time so that
process time does not exceed capacity. With finite loading the scheduler loads the job that has
the highest priority on all work centers it will require. Then the job with the next highest priority
is loaded on all required work centers, and so on. This process is referred to as horizontal
loading. The scheduler using finite loading can then project the number of hours each work
center will operate. A drawback of horizontal loading is that jobs may be kept waiting at a work
center, even though the work center is idle. This happens when a higher priority job is expected
to arrive shortly. The work center is kept idle so that it will be ready to process the higher
priority job as soon as it arrives. With vertical loading the work center would be fully loaded. Of
course, this would mean that a higher priority job would then have to wait to be processed since
the work center was already busy. The scheduler will have to weigh the relative costs of keeping
higher priority jobs waiting, the cost of idle work centers, the number of jobs and work centers,
and the potential for disruptions, new jobs and cancellations.
If the firm has limited capacity (e.g., already running three shifts), finite loading would be
appropriate since it reflects an upper limit on capacity. If infinite loading is used, capacity may
have to be increased through overtime, subcontracting, or expansion, or work may have to be
shifted to other periods or machines.
SEQUENCING
Sequencing is concerned with determining the order in which jobs are processed. Not only must
the order be determined for processing jobs at work centers but also for work processed at
individual work stations. When work centers are heavily loaded and lengthy jobs are involved,
the situation can become complicated. The order of processing can be crucial when it comes to
the cost of waiting to be processed and the cost of idle time at work centers.
There are a number of priority rules or heuristics that can be used to select the order of jobs
waiting for processing. Some well known ones are presented in a list adapted from Vollmann,
Berry, Whybark, and Jacobs (2005):

Random (R). Pick any job in the queue with equal probability. This rule is often used as a
benchmark for other rules.

First come/first served (FC/FS). This rule is sometimes deemed to be fair since jobs are
processed in the order in which they arrive.

Shortest processing time (SPT). The job with the shortest processing time requirement
goes first. This rule tends to reduce work-in-process inventory, average throughput time,
and average job lateness.

Earliest due date (EDD). The job with the earliest due date goes first. This seems to work
well if the firm performance is judged by job lateness.

Critical ratio (CR). To use this rule one must calculate a priority index using the formula
(due date - now) / (lead time remaining). This rule is widely used in practice.

Least work remaining (LWR). An extension of SPT, this rule dictates that work be
scheduled according to the processing time remaining before the job is considered to be
complete. The less work remaining in a job, the earlier it is in the production schedule.

Fewest operations remaining (FOR). This rule is another variant of SPT; it sequences
jobs based on the number of successive operations remaining until the job is considered
complete. The fewer operations that remain, the earlier the job is scheduled.

Slack time (ST). This rule is a variant of EDD; it utilizes a variable known as slack. Slack
is computed by subtracting the sum of setup and processing times from the time
remaining until the job's due date. Jobs are run in order of the smallest amount of slack.

Slack time per operation (ST/O). This is a variant of ST. The slack time is divided by the
number of operations remaining until the job is complete with the smallest values being
scheduled first.

Next queue (NQ). NQ is based on machine utilization. The idea is to consider queues
(waiting lines) at each of the succeeding work centers at which the jobs will go. One then
selects the job for processing that is going to the smallest queue, measured either in hours
or jobs.

Least setup (LSU). This rule maximizes utilization. The process calls for scheduling first
the job that minimizes changeover time on a given machine.

These rules assume that setup time and setup cost are independent of the processing sequence.
However, this is not always the case. Jobs that require similar setups can reduce setup times if
sequenced back to back. In addition to this assumption, the priority rules also assume that setup
time and processing times are deterministic and not variable, there will be no interruptions in
processing, the set of jobs is known, no new jobs arrive after processing begins, and no jobs are
canceled. While little of this is true in practice, it does make the scheduling problem manageable.
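To make the mechanics of these rules concrete, the sketch below applies SPT, EDD and CR to a small set of hypothetical jobs; the processing times and due dates are assumptions chosen only for illustration.

# (job, processing time in days, due date in days from now); "now" is day 0
jobs = [("J1", 4, 10), ("J2", 2, 6), ("J3", 6, 8), ("J4", 1, 12)]
now = 0

spt = sorted(jobs, key=lambda j: j[1])                    # shortest processing time first
edd = sorted(jobs, key=lambda j: j[2])                    # earliest due date first
# critical ratio = (due date - now) / (lead time remaining); processing time is used
# here as a stand-in for remaining lead time, and the smallest ratio goes first
cr = sorted(jobs, key=lambda j: (j[2] - now) / j[1])

print("SPT:", [j[0] for j in spt])       # ['J4', 'J2', 'J1', 'J3']
print("EDD:", [j[0] for j in edd])       # ['J2', 'J3', 'J1', 'J4']
print("CR :", [j[0] for j in cr])        # ['J3', 'J1', 'J2', 'J4']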
GANTT CHARTS
Gantt charts are named for Henry Gantt, a management pioneer of the early 1900s. He proposed
the use of a visual aid for loading and scheduling. Appropriately, this visual aid is known as a
Gantt chart. This Gantt chart is used to organize and clarify actual or intended use of resources
within a time framework. Generally, time is represented horizontally with scheduled resources
listed vertically. Managers are able to use the Gantt chart to make trial-and-error schedules to get
some sense of the impact of different arrangements.
There are a number of different types of Gantt charts, but the most common ones, and the ones
most appropriate to our discussion, are the load chart and schedule chart. A load chart displays
the loading and idle times for machines or departments; this shows when certain jobs are
scheduled to start and finish and where idle time can be expected. This can help the scheduler
redo loading assignments for better utilization of the work centers. A schedule chart is used to
monitor job progress. On this type of Gantt chart, the vertical axis shows the orders or jobs in

progress while the horizontal axis represents time. A quick glance at the chart reveals which jobs
are on schedule and which are behind schedule.
Gantt charts are the most widely used scheduling tools. However, they do have some limitations.
The chart must be repeatedly updated to keep it current. Also, the chart does not directly reveal
costs of alternate loadings nor does it consider that processing times may vary among work
centers.
SCHEDULING SERVICE OPERATIONS
The scheduling of services often encounters problems not seen in manufacturing. Much of this is
due to the nature of service, i.e., the intangibility of services and the inability to inventory or
store services and the fact that demand for services is usually random. Random demand makes
the scheduling of labor extremely difficult as seen in restaurants, movie theaters, and amusement
parks. Since customers don't like to wait, labor must be scheduled so that customer wait is
minimized. This sometimes requires the use of queuing theory or waiting line theory. Queuing
theory uses estimated arrival rates and service rates to calculate an optimum staffing plan. In
addition, flexibility can often be built into the service operation through the use of casual labor,
on-call employees, and cross-training.
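As a rough illustration of how queuing theory feeds a staffing decision, the sketch below uses the standard single-server (M/M/1) formulas; the arrival and service rates are assumed figures.

arrival_rate = 20.0    # lambda: average customers arriving per hour (assumed)
service_rate = 25.0    # mu: customers one server can handle per hour (assumed)

utilization = arrival_rate / service_rate                              # rho, must stay below 1
wq = arrival_rate / (service_rate * (service_rate - arrival_rate))     # average wait in queue (hours)
lq = arrival_rate * wq                                                 # average number waiting (Little's law)

print(f"utilization = {utilization:.0%}, wait = {wq * 60:.1f} min, queue = {lq:.2f} customers")
# If the expected wait is too long, the same calculation (or a multi-server Erlang C model)
# can be re-run with additional servers or a faster service rate.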
Scheduling of services can also be complicated when it is necessary to coordinate and schedule
more than one resource. For example, when hospitals schedule surgery, not only is the
scheduling of surgeons involved but also the scheduling of operating room facilities, support
staff, and special equipment. Along with the scheduling of classes, universities must also
schedule faculty, classrooms, labs, audiovisual and computer equipment, and students. To further
complicate matters, cancellations are also common and can add further disruption and confusion
to the scheduling process.
Instead of scheduling labor, service firms frequently try to facilitate their service operations by
scheduling demand. This is done through the use of appointment systems and reservations.
Frank and Lillian Gilbreth
Pioneers of Ergonomics and "Time and Motion"
Efficiency and productivity go together, and working efficiently has many meanings. It's not just
about working in a way that allows you to get the most done in a fixed period of time. It also
involves making sure that you don't hurt productivity.
If you work too fast, you risk making mistakes. You also risk becoming so tired, either mentally
or physically, that you have to stop working too early, which means that your total efficiency
suffers.
Today, we regularly use ergonomic principles to design work and workplace equipment. From
something as simple as placing the photocopier in a central location, to custom designing
workstations to minimize repetitive strain injuries, the principles of work efficiency are all
around us. But where did these ideas originate?
The poorly-designed, inefficient workplaces of the late 19th century led to the scientific
management movement in the early 20th century, which applied the scientific method to the
study of the workplace. Frank Gilbreth and his wife, Lillian, were supporters of this movement.
The Gilbreths pioneered the study of "time and motion" at work. They were interested in
efficiency, so they set up experiments to examine the movements that individual workers made
while doing their daily work.

Before he became a workplace researcher, Frank was a bricklayer. He noted that every worker
had his own way of laying bricks. By observing these individual methods, he determined the
most efficient way to complete the task. Frank believed that by working efficiently, both the
employer and the worker would benefit: employers would gain more productivity, and workers
would have reduced stress and fatigue. His observations eventually led to a new way of laying
bricks that more than doubled daily output.
Another of Frank's studies led to creating the role of the surgical assistant in modern operating
rooms. Instead of the surgeon finding each instrument he needed, a nurse would stand by and
hand the surgeon the appropriate tool.
Interesting Fact: The book "Cheaper by the Dozen" was written by
Frank and Lillian's children Frank Jr. and Ernestine. There were 12
children in the family, and the book (and subsequent movies)
highlighted the efficiencies that were introduced into their household
as a result of their parents' methods.
Experimental Technique
Work simplification strategies can be traced back to the work of the Gilbreths, whose methods
were quite sophisticated. For example, they weren't satisfied with simply saying that a person
"moved the hand," so they broke down this action into 17 separate units of motion. They called
each motion a "therblig," which is "Gilbreth" spelled backward (the "th" is transposed for easier
pronunciation).
They also invented a microchronometer to study work motion. This is a clock capable of
recording time to the 1/2000th of a second. By placing the clock in the field of the picture, they
could break movements down into very small units of time. Henry Gantt, the originator of the
Gantt Chart, was a contemporary of the Gilbreths, who used a Gantt Chart to demonstrate
graphically the various pieces of a larger task.
The Gilbreths' discoveries about workplace efficiency were not limited to the need to increase
output. They were also interested in how workers could reduce fatigue. From this industrial
psychology perspective, they advanced ideas about how best to train and develop workers.
Tactics like job rotation and finding work best suited for a worker's natural skills and abilities
developed from the Gilbreths' extensive experiments.
While the Gilbreths' work is very important, their methods are no longer used directly in the
modern workplace. However, the underlying theory of workplace efficiency remains strong.
Key Points
While you may not have known the names Frank and Lillian Gilbreth before reading this article,
their contribution to the advancement of management science and modern management theory
was significant. Today, we're very familiar with the idea of workplace efficiency; no one argues
with its importance. We can thank pioneers in the management science movement, like the
Gilbreths, for this knowledge.

Financial management

Advantages of Ratio Analysis:


Ratio analysis is an important and age-old technique of financial analysis. The following are
some of the advantages / benefits of ratio analysis:

1. Simplifies financial statements: It simplifies the comprehension of financial
statements. Ratios tell the whole story of changes in the financial condition of the
business.

2. Facilitates inter-firm comparison: It provides data for inter-firm comparison.

Ratios highlight the factors associated with successful and unsuccessful firms.
They also reveal strong firms and weak firms, overvalued and undervalued firms.

3. Helps in planning: It helps in planning and forecasting. Ratios can assist
management in its basic functions of forecasting, planning, co-ordination, control
and communication.

4. Makes intra-firm comparison possible: Ratio analysis also makes possible
comparison of the performance of different divisions of the firm. The ratios are
helpful in deciding about their efficiency or otherwise in the past and likely
performance in the future.

5. Help in investment decisions: It helps in investment decisions in the case of


investors and lending decisions in the case of bankers etc.

Limitations of Ratio Analysis:

Ratio analysis is one of the most powerful tools of financial management. Though ratios
are simple to calculate and easy to understand, they suffer from serious limitations.

1. Limitations of financial statements: Ratios are based only on the information
which has been recorded in the financial statements. Financial statements
themselves are subject to several limitations. Thus, ratios derived therefrom are
also subject to those limitations. For example, non-financial changes, though
important for the business, are not revealed by the financial statements. Financial
statements are affected to a very great extent by accounting conventions and
concepts. Personal judgment plays a great part in determining the figures for
financial statements.

2. Comparative study required: Ratios are useful in judging the efficiency of the
business only when they are compared with past results of the business. However,
such a comparison provides only a glimpse of the past performance, and forecasts for
the future may not prove correct since several other factors like market conditions,
management policies, etc. may affect the future operations.

3. Ratios alone are not adequate: Ratios are only indicators, they cannot be taken as
final regarding good or bad financial position of the business. Other things have also
to be seen.

4. Problems of price level changes: A change in price level can affect the validity of
ratios calculated for different time periods. In such a case the ratio analysis may not
clearly indicate the trend in solvency and profitability of the company. The financial
statements should therefore be adjusted, keeping in view the price level changes, if a
meaningful comparison is to be made through accounting ratios.
5. Lack of adequate standards: No fixed standard can be laid down for ideal ratios. There
are no well-accepted standards or rules of thumb for all ratios which can be accepted
as norms. This renders interpretation of the ratios difficult.

6. Limited use of single ratios: A single ratio, usually, does not convey much of a
sense. To make a better interpretation, a number of ratios have to be calculated,
which is more likely to confuse the analyst than to help him in making any good decision.

7. Personal bias: Ratios are only a means of financial analysis and not an end in themselves.
Ratios have to be interpreted, and different people may interpret the same ratio in
different ways.

8. Incomparable: Not only do industries differ in their nature, but firms in a
similar business also differ widely in their size, accounting procedures, etc. This makes
comparison of ratios difficult and misleading.

Profit Maximization vs Wealth maximization


The traditional approach of financial management was all about profit
maximization. The main objective of companies was to make profits.
The traditional approach of financial management had many limitations:
1. Business may have several objectives other than profit
maximization. Companies may have goals like a larger market share, higher
sales, greater stability and so on. The traditional approach did not take
so many of these other aspects into account.
2. Profit maximization has to be defined after taking into account many things like:
a. Short term, mid term, and long term profits
b. Profits over a period of time
The traditional approach ignored these important points.
3. Social responsibility is one of the most important objectives of many firms. Big
corporates make an effort to give something back to society. The big
companies use a certain amount of their profits for social causes. The
traditional approach did not consider this point.

The modern approach is about the idea of wealth maximization. This involves
increasing the earnings per share of the shareholders and maximizing the net
present worth.
Wealth is equal to the difference between the gross present worth of some
decision or course of action and the investment required to achieve the
expected benefits.
Gross present worth is the capitalised value of the expected benefits. This
value is discounted at some rate; the rate depends on the certainty or uncertainty
of the expected benefits.
The wealth maximization approach is concerned with the amount of cash flow
generated by a course of action rather than the profits.
Any course of action that has a net present worth above zero, or in other
words creates wealth, should be selected.
Financial management has come a long way by shifting its focus from the traditional
approach to the modern approach. The modern approach focuses on wealth
maximization rather than profit maximization. This gives a longer term horizon
for assessment, making way for sustainable performance by businesses.

A myopic person or business is mostly concerned about short term benefits. A
short term horizon can fulfil the objective of earning a profit but may not help in
creating wealth, because wealth creation needs a longer term horizon.
Therefore, financial management emphasizes wealth
maximization rather than profit maximization. For a business, it is not necessary
that profit should be the only objective; it may concentrate on various other
aspects like increasing sales, capturing more market share etc., which will take
care of profitability. So, we can say that profit maximization is a subset of wealth
maximization, and being a subset, it will facilitate wealth creation.

Giving priority to value creation, managers have now shifted from the traditional
approach to the modern approach of financial management that focuses on wealth
maximization. This leads to a better and truer evaluation of a business. For example,
under wealth maximization, more importance is given to cash flows than to
profitability. Profit is a relative term: it can be a figure in some
currency, it can be a percentage, etc. A profit of, say, $10,000 cannot be
judged as good or bad for a business until it is compared with investment, sales,
etc. Similarly, the duration over which the profit is earned is also important, i.e. whether it is
earned in the short term or the long term.

In wealth maximization, the major emphasis is on cash flows rather than profit. So,
to evaluate various alternatives for decision making, cash flows are taken into
consideration. For example, to measure the worth of a project, a criterion such as the present
value of its cash inflows minus the present value of its cash outflows (net present value) is
used. This approach takes cash flows rather than profits into consideration
and also uses discounting techniques to find out the worth of a project. Thus,
the wealth maximization approach recognizes that money has time value.

An obvious question that arises now is how we can measure wealth. A
basic principle is that ultimately wealth maximization should be reflected in
the increased net worth or value of the business. To measure this, the value of the
business is said to be a function of two factors, earnings per share and the
capitalization rate, and it can be measured by the following relation:

Value of business = EPS / Capitalization rate
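As a purely illustrative example (the figures are assumed), if a firm's earnings per share are Rs. 10 and the capitalization rate appropriate to its risk class is 10 per cent, the value per share works out to 10 / 0.10 = Rs. 100. If the capitalization rate fell to 8 per cent, say because perceived risk declined, the value would rise to 10 / 0.08 = Rs. 125 even with unchanged earnings.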

At times, wealth maximization may create a conflict known as the agency problem.
This describes the conflict between the owners and the managers of a firm. As managers
are the agents appointed by the owners, a strategic investor or the owner of the firm
would be mainly concerned about the longer term performance of the business
that can lead to maximization of shareholders' wealth, whereas a manager
might focus on taking decisions that bring quick results, so that he or she
can get credit for good performance. However, in the course of doing so, a
manager might opt for risky decisions which can put the owners'
objectives at stake.

Hence, a manager should align his or her objectives with the broad objective of the
organization and achieve a tradeoff between risk and return while making
decisions, keeping in mind the ultimate goal of financial management, i.e. to
maximize the wealth of the current shareholders.

1. Financial statement analysis of a firm gives you an insight into how
the corporation is conducting its operations. For stockholders who are interested in
finding out whether the management is properly utilizing the corporation's resources
to create shareholder wealth, a financial analysis of a corporation will help
investors come to a proper decision. Financial analysis of
a corporation covers several items, including capital budgeting and capital structure
decisions, when the analysis of financial statements is done for the management of
the firm. The performance of competitors within the industry, and the viability
of the business's future, can also be evaluated through financial statement analysis.
2. The viability of a project can be found out through a financial statement analysis, which
can be performed by financial analysts employed by the firm. Projects that would
bring in the maximum amount of revenue over the course of time, compared with similar
projects, are recommended by financial analysts to the management. Expected returns
from projects are provided by financial analysts to the management. Analysts
employed by the business can also give the management suggestions on whether to
issue new stock or borrow money to fund new projects. Financial analysts will
recommend whether a new project should be undertaken or the money invested
somewhere else, essentially performing capital budgeting decisions.
3. Financial institutions will carry out a financial statement analysis of a business to see
how strong its fundamentals are, and then use their findings to either make good
investments for themselves, or pass on their findings to their clients. Large
investment corporations have their own in-house financial analysts who advise
their employers on what stocks might be a good buy; these recommendations are
usually private and only available within the company. A corporation's stock price
can be affected by a financial analyst's recommendations, as these
recommendations are used by stockholders to determine whether it is a good
investment. If a financial analyst, after evaluating a company's financial statements,
finds that the company isn't performing well, he might suggest that owners sell
the stock if they already own it. If such a suggestion were made public, the price
of that business's shares could drop moderately.

***********************
What is trading on equity? How can it prove to be a double-edged sword?

Trading on equity is sometimes referred to as financial leverage or the leverage factor.


Trading on equity occurs when a corporation uses bonds, other debt, and preferred stock to
increase its earnings on common stock. For example, a corporation might use long term debt
to purchase assets that are expected to earn more than the interest on the debt. The earnings
in excess of the interest expense on the new debt will increase the earnings of the
corporation's common stockholders. The increase in earnings indicates that the corporation
was successful in trading on equity.
If the newly purchased assets earn less than the interest expense on the new debt, the
earnings of the common stockholders will decrease.
trading on the equity
Borrowing funds to increase capital investment with the hope that the business
will be able to generate returns in excess of the interest charges.
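To illustrate with assumed figures: suppose a company borrows Rs. 10,00,000 at 8 per cent interest and invests it in assets expected to earn 12 per cent. The assets generate Rs. 1,20,000 a year, the interest cost is Rs. 80,000, and the balance of Rs. 40,000 accrues to the existing shareholders without any new equity being issued, so earnings per share rise. If the same assets earned only 6 per cent (Rs. 60,000), the Rs. 80,000 of interest would still be payable and earnings available to shareholders would fall by Rs. 20,000, which is why trading on equity is a double-edged sword.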

Course 2: Capital Budgeting Analysis


Risk Analysis in Capital Budgeting Decisions

Conceptually, a capital budgeting decision is simplicity itself. The analyst
determines the upfront cost of a project, as well as the periodic future cash flows
resulting from the project. Those cash flows are then used to calculate either the
net present value (NPV) of the project, using the firm's weighted-average cost of
capital (WACC) as a discount rate, or the internal rate of return (IRR) for the
project. If the NPV is positive, or if the IRR exceeds the WACC, the firm
undertakes the project; otherwise it doesn't.
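A minimal sketch of this decision rule is shown below; the cash flows and the 10 per cent WACC are illustrative assumptions.

def npv(rate, cash_flows):
    # cash_flows[0] is the (negative) upfront cost at time 0
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

wacc = 0.10
project = [-1000.0, 400.0, 400.0, 400.0, 400.0]   # upfront cost plus four annual inflows

value = npv(wacc, project)
print(f"NPV = {value:.2f}")                       # roughly 267.9 at a 10% discount rate
print("accept" if value > 0 else "reject")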

The difficulty in making proper capital budgeting decisions arises as a
consequence of the difficulty in determining the upfront costs, the periodic cash
flows, and even the proper WACC. All of these quantities must be estimated, and all
of the ensuing estimates will contain some degree of uncertainty; the process is
inherently risky.

In their book Fundamentals of Financial Management, 8th edition, South-Western,
1998, authors Eugene F. Brigham and Joel F. Houston include a chapter
entitled Risk Analysis and the Optimal Capital Budget. With examples from
industry, they illustrate the pitfalls of using uncertain single-point estimates for
the cash flows associated with a project. One recommendation in the chapter is
to model the uncertainty in all of the quantities being estimated and to use
Monte Carlo simulation to produce a probability distribution for the NPV (or the
IRR) of the project. Additionally, the analyst can produce sensitivity analyses to
determine the most critical uncertainties in the estimation. The additional
information that these statistical techniques provide can aid the capital budget
decision-makers and can help them avoid costly mistakes.

Risk Analysis in Capital Budgeting

Introduction

In discussing the capital budgeting techniques, we have so far assumed that the
proposed investment projects do not involve any risk. This assumption was
made simply to facilitate the understanding of the capital budgeting techniques.
In a real world situation, however, the firm in general and its investment projects
in particular are exposed to different degrees of risk. What is risk? How can risk be
measured and analyzed in investment decisions?

Nature of risk

Risk exists because of the inability of the decision maker to make perfect
forecasts. Forecasts cannot be made with perfection or certainty since the future
events on which they depend are uncertain. An investment is not risky if we can
specify a unique sequence of cash flows for it. But the whole trouble is that cash
flows cannot be forecast accurately, and alternative sequences of cash flows can
occur depending on future events. Thus, risk arises in investment evaluation
because we cannot anticipate the occurrence of the possible future events with
certainty and, consequently, cannot make a correct prediction about the cash
flow sequence. To illustrate, let us suppose that a firm is considering a proposal
to commit its funds to a machine, which will help to produce a new product. The
demand for this product may be very sensitive to the general economic
conditions. It may be very high under favorable economic conditions and very
low under unfavorable economic conditions. Thus, the investment would be
profitable in the former situation and unprofitable in the latter case. Since it is
quite difficult to predict the future state of economic conditions, uncertainty
about the cash flows associated with the investment arises.
A large number of events influence forecasts. These events can be grouped in
different ways. However, no particular grouping of events will be useful for all
purposes. We may, for example, consider three broad categories of events
influencing investment forecasts.

General economic conditions

This category includes events which influence general level of business activity.
The level of business activity might be affected by such events as internal and
external economic and political situations, monetary and fiscal policies, social
conditions etc.

Industry factors

This category of events may affect all companies in an industry. For example,
companies in an industry would be affected by the industrial relations in the
industry, by innovations, by change in material cost etc.

Company factors

This category of events may affect only a company. The change in management,
strike in the company, a natural disaster such as flood or fire may affect directly
a particular company
Risk Analysis in Capital Budgeting
Capital budgeting is used to ascertain the requirements of the long-term
investments of a company.

Examples of long-term investments are those required for replacement of


equipments and machinery, purchase of new equipments and machinery, new
products, and new business premises or factory buildings, as well as those
required for R&D plans.

The different techniques used for capital budgeting include:

Profitability index

Net present value

Modified Internal Rate of Return

Equivalent annuity

Internal Rate of Return

Besides these methods, other methods that are used include Return on
Investment (ROI), Accounting Rate of Return (ARR), Discounted Payback Period
and Payback Period.

The different types of risks that are faced by entrepreneurs regarding capital
budgeting are the following:

Corporate risk

International risk

Stand-alone risk

Competitive risk

Market risk

Project specific risk

Industry specific risk

The following methods are used for


Risk Analysis in Capital Budgeting:

Sensitivity Analysis:
This is also known as a "what if" analysis. Because of the uncertainty of the
future, if an entrepreneur wants to know about the feasibility of a project when
variable quantities, for example investment or sales, change from their
anticipated values, sensitivity analysis can be a useful method. The results are
usually expressed in terms of NPV, or net present value.

Scenario Analysis:
In the case of scenario analysis, the focus is on the deviation of a number of
interconnected variables. It is different from sensitivity analysis, which usually
concentrates on the change in one particular variable at a specific point of time.
Break Even Analysis:
Break even analysis allows a company to determine the minimum
production and sales volumes for a project to avoid losing money. The lowest
possible quantity at which no loss occurs is called the break-even point. The
break-even point can be expressed in either financial or accounting terms.
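As a simple illustration (all figures assumed): with fixed costs of Rs. 2,00,000, a selling price of Rs. 50 per unit and a variable cost of Rs. 30 per unit, the break-even point is 2,00,000 / (50 - 30) = 10,000 units, or Rs. 5,00,000 of sales revenue.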
Hillier Model:
In particular situations, the expected NPV and the standard deviation of NPV
can be obtained through analytical derivation. This was first demonstrated by
F.S. Hillier. It applies to situations where the correlation between cash flows is either
perfect or nonexistent.

Simulation Analysis: Simulation analysis is used to develop a probability
distribution of a criterion of merit (such as NPV) by randomly combining the values of
variables that bear a relationship to the selected criterion.

Decision Tree Analysis: The principal steps of decision tree analysis are the
definition of the decision tree and the assessment of the alternatives.

Corporate Risk Analysis: Corporate risk analysis focuses on the analysis of risk
that may influence the project in terms of the entire cash flow of the firm. The
corporate risk of a project refers to its share of the total risk of a company.

Risk Management: Risk management focuses on factors such as pricing strategy,


fixed and variable costs, sequential investment, insurance, financial leverage,
long term arrangements, derivatives, strategic alliance and improvement of
information.

Selection of project under risk: This involves procedures such as payback period
requirement, risk adjusted discount rate, judgmental evaluation and certainty
equivalent method.

Practical Risk Analysis: The techniques involved include the Acceptable Overall
Certainty Index, Margin of Safety in Cost Figures, Conservative Revenue
Estimation, Flexible Investment Yardsticks and Judgment on Three Point
Estimates.
Chapter 4: Additional Considerations in Capital Budgeting Analysis
Whenever we analyze a capital project, we must consider unique factors. A
discussion of all of these factors is beyond the scope of this course. However,
three common factors to consider are:

Compensating for different levels of risks between projects.


Recognizing risks that are specific to foreign projects.
Making adjustments to capital budgeting analysis by looking at the actual
results.

Adjusting for Risk


We previously learned that we can manage uncertainty by initiating decision
analysis and building options into our projects. We now want to turn our
attention to managing risks. It is worth noting that uncertainty and risk are not
the same thing. Uncertainty is where you have no basis for a decision. Risk is
where you do have a basis for a decision, but you have the possibility of several
outcomes. The wider the variation of outcomes, the higher the risk.

In our previous example (Example 6), we used the cost of capital for discounting
cash flows. Our example involved the replacement of equipment and carried a
low level of risk since the expected outcome was reasonably certain. Suppose
we have a project involving a new product line. Would we still use our cost of
capital to discount these cash flows? The answer is no since this project could
have a much wider variation in outcomes. We can adjust for higher levels of risk
by increasing the discount rate. A higher discount rate reflects a higher rate of
return that we require whenever we have higher levels of risk.

Another way to adjust for risk is to understand the impact of risk on outcomes.
Sensitivity Analysis and Simulation can be used to measure how changes to a

project affect the outcome. Sensitivity analysis is used to determine the change
in Net Present Value given a change in a specific variable, such as estimated
project revenues. Simulation allows us to simulate the results of a project for a
given distribution of variables. Both sensitivity analysis and simulation require a
definition of all relevant variables associated with the project. It should be noted
that sensitivity analysis is much easier to implement since sophisticated
computer models are usually required for simulation.
International Projects
Capital investments in other countries can involve additional risks. Whenever we
invest in a foreign project, we want to focus on the values that are added (or
subtracted) to the Parent Company. This makes us consider all relevant risks of
the project, such as exchange rate risk, political risk, hyper-inflation, etc. For
example, the discounted cash flows of the project are the discounted cash flows
of the project to the foreign subsidiary converted to the currency of the home
country of the Parent Company at the current exchange rate. This forces us to
take into account exchange rate risks and its impact to the Parent Company.
Post Analysis
One of the most important steps in capital budgeting analysis is to follow-up and
compare your estimates to actual results. This post analysis or review can help
identify bias and errors within the overall process. A formal tracking system of
capital projects also keeps everyone honest. For example, if you were to
announce to everyone that actual results will be tracked during the life of the
project, you may find that people who submit estimates will be more careful. The
purpose of post analysis and tracking is to collect information that will lead to
improvements within the capital budgeting process.
Course Summary
The long-term investments we make today determine the value we will have
tomorrow. Therefore, capital budgeting analysis is critical to creating value
within financial management. And the only certainty within capital budgeting is
uncertainty. Therefore, one of the biggest challenges in capital budgeting is to
manage uncertainty. We deal with uncertainty through a three-stage process:

1. Build knowledge through decision analysis.


2. Recognize and encourage options within projects.

3. Invest based on economic criteria that have realistic economic assumptions.

Once we have completed the three-stage process (as outlined above), we


evaluate capital projects using a mix of economic criteria that adheres to the
principles of financial management. Three good economic criteria are Net
Present Value, Modified Internal Rate of Return, and Discounted Payback.

Additionally, we need to manage project risk differently than we would manage


uncertainty. We have several tools to help us manage risks, such as increasing
the discount rate. Finally, we want to implement post analysis and tracking of
projects after we have made the investment. This helps eliminate bias and errors
in the capital budgeting process.


Sensitivity Analysis

In the evaluation of an investment project, we work with forecasts of cash
flows. Forecasted cash flows depend on the expected revenue and costs.
Further, expected revenue is a function of sales volume and unit selling price.
Similarly, sales volume will depend on the market size and the firm's market
share. Costs include variable costs, which depend on sales volume and unit
variable cost, and fixed costs. The net present value or the internal rate of return
of a project is determined by analyzing the after-tax cash flows arrived at by
combining forecasts of the various variables. It is difficult to arrive at an accurate
and unbiased forecast of each variable. The reliability of the NPV therefore depends on the
reliability of the forecasts of the variables underlying the estimates of net cash flows. To
determine how reliable the project's NPV or IRR is, we can work out how much difference it
makes if any of these forecasts goes wrong. We can change each of the forecasts, one at a time,
to at least three values: pessimistic, expected, and optimistic. The NPV of the
project is recalculated under these different assumptions; this method of recalculating NPV
by changing each forecast is called sensitivity analysis.

Sensitivity analysis is a way of analyzing the change in the project's NPV (or IRR) for
a given change in one of the variables. It indicates how sensitive a project's NPV
(or IRR) is to changes in particular variables. The more sensitive the NPV, the
more critical is the variable. The following three steps are involved in the use of
sensitivity analysis:

Identification of all those variables which have an influence on the project's
NPV (or IRR).
Definition of the underlying (mathematical) relationships between the variables.
Analysis of the impact of the change in each of the variables on the project's
NPV.

The decision-maker, while performing sensitivity analysis, computes the project's NPV
(or IRR) for each forecast under three assumptions: (a) pessimistic, (b) expected,
and (c) optimistic. It allows him to ask "what if" questions. For example, what (is
the NPV) if volume increases or decreases? What (is the NPV) if variable cost or
fixed cost increases or decreases? What (is the NPV) if the selling price increases
or decreases? What (is the NPV) if the project is delayed or outlays escalate? Such
questions can be answered with the help of sensitivity analysis. It examines the
sensitivity of the variables underlying the computation of NPV or IRR, rather than
attempting to quantify risk. It can be applied to any variable which is an input
for the after-tax cash flows.
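The sketch below shows one way to carry out such a one-variable-at-a-time analysis in code. All the figures (volume, price, costs, investment, discount rate and project life) are illustrative assumptions.

def project_npv(volume, price, variable_cost, fixed_cost, investment, rate, years):
    annual_cash_flow = volume * (price - variable_cost) - fixed_cost
    pv_inflows = sum(annual_cash_flow / (1 + rate) ** t for t in range(1, years + 1))
    return pv_inflows - investment

base = dict(volume=10_000, price=50.0, variable_cost=30.0,
            fixed_cost=80_000.0, investment=300_000.0, rate=0.12, years=5)

# pessimistic / expected / optimistic values, changed one variable at a time
scenarios = {"volume": (8_000, 10_000, 12_000), "price": (45.0, 50.0, 55.0)}

for name, levels in scenarios.items():
    npvs = [project_npv(**dict(base, **{name: value})) for value in levels]
    print(name, [f"{v:,.0f}" for v in npvs])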

Simulation Analysis

Sensitivity and scenario analyses are quite useful for understanding the uncertainty
of investment projects. But both approaches suffer from certain weaknesses.
As we have discussed, they do not consider the interactions between variables
and they do not reflect the probability of changes in the variables.

The Monte Carlo simulation, or simply simulation analysis, considers the
interactions among variables and the probability of changes in the variables. It
generates the probability distribution of NPV. The simulation analysis is an
extension of scenario analysis. In simulation analysis a computer generates a
very large number of scenarios according to the probability distributions of the
variables. The simulation analysis involves the following steps:

First, you should identify the variables that influence cash inflows and outflows. For
example, when a firm introduces a new product in the market, these variables
are initial investment, market size, market growth, market share, price, variable
costs, fixed costs, product life cycle, and terminal value.
Second, specify the formulas that relate the variables. For example, revenue
depends on sales volume and price; sales volume is given by market size,
market share and market growth. Similarly, operating expenses depend on
production, sales, and variable and fixed costs.
Third, indicate the probability distribution for each variable. Some variables will
have more uncertainty than others. For example, it is quite difficult to predict
market growth with confidence.
Fourth, develop a computer program that randomly selects one value
from the probability distribution of each variable and uses these values to
calculate the project's NPV. The computer generates a large number of such
scenarios, calculates NPVs and stores them. The stored values are printed as a
probability distribution of the project's NPVs along with the expected NPV and its
standard deviation. The risk-free rate should be used as the discount rate to
discount the project's cash flows, since the discount rate should reflect only the time
value of money.
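A compact version of these four steps is sketched below. The probability distributions, their parameters, and the 6 per cent risk-free rate are all assumed for illustration.

import random
import statistics

def simulate_npv(runs=10_000, risk_free_rate=0.06, years=5, investment=300_000.0):
    npvs = []
    for _ in range(runs):
        volume = random.gauss(10_000, 1_500)        # annual sales volume
        price = random.gauss(50.0, 4.0)             # unit selling price
        variable_cost = random.gauss(30.0, 2.0)     # unit variable cost
        fixed_cost = 80_000.0
        cash_flow = volume * (price - variable_cost) - fixed_cost
        pv = sum(cash_flow / (1 + risk_free_rate) ** t for t in range(1, years + 1))
        npvs.append(pv - investment)
    return npvs

npvs = simulate_npv()
print(f"expected NPV   = {statistics.mean(npvs):,.0f}")
print(f"std dev of NPV = {statistics.stdev(npvs):,.0f}")
print(f"P(NPV < 0)     = {sum(v < 0 for v in npvs) / len(npvs):.1%}")

For simplicity the sketch draws one set of values per run and reuses it for every year; a fuller model would draw the variables afresh, and allow for their interrelationships, period by period.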

Simulation analysis is a very useful technique for risk analysis. Unfortunately, its
practical use is limited because of a number of shortcomings. First, the model
becomes quite complex to use because the variables are interrelated with each
other and each variable depends on its value in the previous periods as well.
Identifying all possible relationships and estimating the probability distributions is
difficult, time-consuming and expensive.
Second, the model helps in generating a probability distribution of the project's
NPVs, but it does not indicate whether or not the project should be accepted.
Third, simulation analysis, like sensitivity or scenario analysis, considers the risk
of any project in isolation from other projects. We know that if we consider the
portfolio of projects, the unsystematic risk can be diversified. A risky project may
have a negative correlation with the firm's other projects, and therefore
accepting the project may reduce the overall risk of the firm.

INTERNATIONAL FINANCE
Different types of transactions in the Foreign Exchange
Market
Spot and Forward Exchanges
Spot Market:
The term spot exchange refers to the class of foreign exchange transaction which requires the
immediate delivery or exchange of currencies on the spot. In practice the settlement takes
place within two days in most markets. The rate of exchange effective for the spot transaction
is known as the spot rate and the market for such transactions is known as the spot market.
Forward Market:
A forward transaction is an agreement between two parties, requiring the delivery at some specified future date of a specified amount of foreign currency by one of the parties, against payment in domestic currency by the other party, at the price agreed upon in the contract. The rate of exchange applicable to the forward contract is called the forward exchange rate and the market for forward transactions is known as the forward market.
The foreign exchange regulations of various countries generally regulate forward exchange transactions with a view to curbing speculation in the foreign exchange market. In India, for example, commercial banks are permitted to offer forward cover only with respect to genuine export and import transactions. Forward exchange facilities, obviously, are of immense help to exporters and importers as they can cover the risks arising out of exchange rate fluctuations by entering into an appropriate forward exchange contract. With reference to its relationship with the spot rate, the forward rate may be at par, at a discount or at a premium. If the forward exchange rate quoted is exactly equivalent to the spot rate at the time of making the contract, the forward exchange rate is said to be at par.
The forward rate for a currency, say the dollar, is said to be at a premium with respect to the spot rate when one dollar buys more units of another currency, say the rupee, in the forward market than in the spot market. The premium is usually expressed as a percentage deviation from the spot rate on a per annum basis.
The forward rate for a currency, say the dollar, is said to be at a discount with respect to the spot rate when one dollar buys fewer rupees in the forward market than in the spot market. The discount is also usually expressed as a percentage deviation from the spot rate on a per annum basis.
The forward exchange rate is determined mostly by the demand for and supply of forward exchange. Naturally, when the demand for forward exchange exceeds its supply, the forward rate will be quoted at a premium and, conversely, when the supply of forward exchange exceeds the demand for it, the rate will be quoted at a discount. When the supply is equivalent to the demand for forward exchange, the forward rate will tend to be at par.
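
As a small illustration of the per annum premium or discount calculation, the following Python sketch annualizes the difference between an assumed spot quote and an assumed three-month forward quote.

# Annualized forward premium (+) or discount (-) as a percentage of the spot rate.
def forward_premium_pct(spot, forward, months):
    """Positive result = forward premium; negative = forward discount (per annum %)."""
    return (forward - spot) / spot * (12 / months) * 100

spot, forward_3m = 82.50, 83.30          # hypothetical Rs-per-dollar quotes
print(f"{forward_premium_pct(spot, forward_3m, 3):.2f}% p.a.")   # about 3.88% premium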
Futures

While a futures contract is similar to a forward contract, there are several differences between them. While a forward contract is tailor-made for the client by his international bank, a futures contract has standardized features: the contract size and maturity dates are standardized. Futures can be traded only on an organized exchange and they are traded competitively. Margins are not required in respect of a forward contract, but margins are required of all participants in the futures market; an initial margin must be deposited into a collateral account to establish a futures position.
Options
While the forward or futures contract protects the purchaser of the contract from adverse exchange rate movements, it eliminates the possibility of gaining a windfall profit from a favorable exchange rate movement. An option is a contract or financial instrument that gives the holder the right, but not the obligation, to sell or buy a given quantity of an asset at a specified price at a specified future date. An option to buy the underlying asset is known as a call option and an option to sell the underlying asset is known as a put option. Buying or selling the underlying asset via the option is known as exercising the option. The stated price paid (or received) is known as the exercise or striking price. The buyer of an option is known as the long and the seller of an option is known as the writer of the option, or the short. The price paid for the option is known as the premium.
Types of options: With reference to their exercise characteristics, there are two types of options, American and European. A European option can be exercised only at the maturity or expiration date of the contract, whereas an American option can be exercised at any time during the contract.
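
A brief Python sketch of call and put payoffs at expiry may help fix the terminology; the strike and spot values are hypothetical and the premium paid for the option is ignored.

# Payoff at expiry for the holder of a call and a put option.
def call_payoff(spot_at_expiry, strike):
    """The holder exercises the call only when the spot is above the strike."""
    return max(spot_at_expiry - strike, 0)

def put_payoff(spot_at_expiry, strike):
    """The holder exercises the put only when the spot is below the strike."""
    return max(strike - spot_at_expiry, 0)

strike = 75.0
for spot in (70.0, 75.0, 80.0):
    print(spot, call_payoff(spot, strike), put_payoff(spot, strike))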
Swap operation
Commercial banks who conduct forward exchange business may resort to a swap operation to adjust their fund position. The term swap means the simultaneous sale of spot currency for the forward purchase of the same currency, or the purchase of spot currency for the forward sale of the same currency; the spot is swapped against the forward. Operations consisting of a simultaneous sale or purchase of spot currency accompanied by a purchase or sale, respectively, of the same currency for forward delivery are technically known as swaps or double deals, as the spot currency is swapped against the forward.
Arbitrage
Arbitrage is the simultaneous buying and selling of foreign currencies with the intention of making profits from the difference between the exchange rates prevailing at the same time in different markets.

[Figure: Classification of foreign exchange exposure. Foreign exchange exposure comprises transaction exposure, translation (accounting) exposure, and operating/economic exposure. Translation exposure may be positive or negative; economic exposure is split into asset exposure and operating exposure. Transaction exposure can be managed by hedging through the invoice currency, while operating exposure can be managed by hedging through the invoice currency, selecting low-cost production sites, a flexible sourcing policy, diversification of the market, R&D efforts and product differentiation, and financial hedging.]
Foreign Exchange Exposure


Foreign exchange risk is related to the variability of the domestic currency values of assets, liabilities or operating income due to unanticipated changes in exchange rates, whereas foreign exchange exposure is what is at risk. Foreign currency exposures and the attendant risk arise whenever a company has an income or expenditure, or an asset or liability, in a currency other than the balance-sheet currency. Indeed, exposures can arise even for companies with no income, expenditure, asset or liability in a currency different from the balance-sheet currency. When exchange rates become extremely volatile, exchange rate movements can destabilize the cash flows of a business significantly. Such destabilization of cash flows, which affects the profitability of the business, is the risk from foreign currency exposures.
Classification of Exposures
Financial economists distinguish between three types of currency exposure: transaction exposure, translation exposure, and economic exposure. All three affect the bottom line of the business.
1. Transaction Exposure
Transaction exposure can be defined as the sensitivity of realized domestic currency values of the firm's contractual cash flows denominated in foreign currencies to unexpected exchange rate changes. Transaction exposure is sometimes regarded as a short-term economic exposure. It arises from fixed-price contracting in a world where exchange rates are changing randomly.
Suppose that a company exporting goods invoiced in deutsche marks had, while costing the transaction, reckoned on getting say Rs 24 per mark. By the time the exchange transaction materializes, i.e. the export is effected and the marks are sold for rupees, the exchange rate has moved to say Rs 20 per mark.
The profitability of the export transaction can be completely wiped out by the movement in
the exchange rate. Such transaction exposures arise whenever a business has foreign currency
denominated receipt and payment. The risk is an adverse movement of the exchange rate
from the time the transaction is budgeted till the time the exposure is extinguished by sale or
purchase of the foreign currency against the domestic currency. Furthermore, in view of the
fact that firms are now more frequently entering into commercial and financial contracts
denominated in foreign currencies, judicious management of transaction exposure has
become an important function of international financial management.
Some strategies to manage transaction exposure

Hedging through invoice currency: While such financial hedging instruments as forward, swap, futures and option contracts are well known, hedging through the choice of invoice currency, an operational technique, has not received much attention. The firm can shift, share or diversify exchange risk by appropriately choosing the currency of invoice. A firm can avoid exchange rate risk by invoicing in its domestic currency, thereby shifting the exchange rate risk to the buyer. As a practical matter, however, the firm may not be able to use risk shifting or sharing as much as it wishes to, for fear of losing sales to competitors. Only an exporter with substantial market power can use this approach. Further, if the currencies of both the exporter and the importer are not suitable for settling international trade, neither party can resort to risk shifting to deal with exchange exposure.

Hedging via lead and lag: Another operational technique the firm
can use to reduce transaction exposure is leading and lagging
foreign currency receipts and payments.
Lead means to pay or collect early, whereas
Lag means to pay or collect late.

The firm would like to lead soft currency receivables and lag hard
currency receivables to avoid the loss from depreciation of the soft
currency and benefit from the appreciation of the hard currency. For the
same reason, the firm will attempt to lead the hard currency payables
and lag soft currency payables. To the extent that the firm can effectively
implement the Lead/Lag strategy, the transaction exposure the firm faces
can be reduced.

2. Translation Exposure (Accounting Exposures)


Translation exposure is defined as the likely increase or decrease in the parent company's net worth caused by a change in exchange rates since the last translation. This arises when an asset or liability is valued at the current rate. No exposure arises in respect of assets/liabilities valued at the historical rate, as they are not affected by exchange rate differences. Translation exposure is measured as the net of the foreign currency denominated assets and liabilities valued at current rates of exchange. If exposed assets exceed exposed liabilities, the concern has a positive (or long, or asset) translation exposure, and the exposure is equivalent to the net value. If the exposed liabilities exceed the exposed assets, the concern has a negative (or short, or liability) translation exposure to the extent of the net difference.
Translation exposure arises from the need to translate foreign currency assets or liabilities
into the home currency for the purpose of finalizing the accounts for any given period.
A typical example of translation exposure is the treatment of foreign currency borrowings. Consider that a company has borrowed dollars to finance the import of capital goods worth $10,000. When the import materialized the exchange rate was, say, Rs 30 per dollar. The imported fixed asset was therefore capitalized in the books of the company at Rs 300,000.
In the ordinary course, and assuming no change in the exchange rate, the company would have provided depreciation on the asset valued at Rs 300,000 while finalizing its accounts for the year in which the asset was purchased. If at the time of finalization of the accounts the exchange rate has moved to, say, Rs 35 per dollar, the dollar loan has to be translated at the new rate, involving a translation loss of Rs 50,000. The book value of the asset thus becomes Rs 350,000 and consequently higher depreciation has to be provided, thus reducing the net profit.
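
The borrowing example above can be restated as a short calculation (the figures are those used in the text):

# Translation loss on a foreign currency borrowing, using the figures from the example.
loan_usd = 10_000
rate_at_import, rate_at_year_end = 30, 35               # Rs per dollar

book_value_at_import = loan_usd * rate_at_import        # Rs 300,000 capitalized
translated_liability = loan_usd * rate_at_year_end      # Rs 350,000 at year end
translation_loss = translated_liability - book_value_at_import
print(translation_loss)                                 # Rs 50,000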
Thus, translation loss or gain is measured by the difference between the value of assets and liabilities at the historical rate and at the current rate. A company which has a positive exposure will have a translation gain if the current rate for the foreign currency is higher than the historic rate. In the same situation, a company with a negative exposure will post a translation loss. The position will be reversed if the current rate for the foreign currency is lower than its historic rate of exchange. The translation gain/loss is shown as a separate component of shareholders' equity in the balance sheet. It does not affect the current earnings of the company.
3. Economic Exposure
Economic exposure can be defined as the extent to which the value of the firm would be affected by unanticipated changes in exchange rates. Economic exposure is more a managerial concept than an accounting concept. A company can have an economic exposure to, say, the yen:rupee rate even if it does not have any transaction or translation exposure in the Japanese currency.
This would be the case, for example, when the company's competitors are using Japanese imports. If the yen weakens, the company loses its competitiveness (the reverse is also possible). The company's competitor uses the cheap imports and can have a competitive edge over the company in terms of cost cutting. Therefore the company is exposed to the Japanese yen in an indirect way.
In simple words, economic exposure to an exchange rate is the risk that a change in the rate affects the company's competitive position in the market and hence, indirectly, the bottom line. Broadly speaking, economic exposure affects profitability over a longer time span than transaction and even translation exposure. Under the Indian exchange control, while translation and transaction exposures can be hedged, economic exposure cannot be hedged.
Economic exposure consists of mainly two types of exposures.

Asset exposure

Operating exposure

Exposure to currency risk can be properly measured by the sensitivities of (1) the future home currency values of the firm's assets (and liabilities), and (2) the firm's operating cash flows, to random changes in exchange rates.
Asset exposure: Let us discuss the case of asset exposure. For convenience, assume that
dollar inflation is non random. Then, from the perspective of the U.S. firm that owns an asset
in Britain, the exposure can be measured by the coefficient b in regressing the dollar value
P of the British asset on the dollar/pound exchange rate S.
P=a+b*S+e
Where a is the regression constant and e is the random error term with mean zero, P =
SP*, where P* is the local currency (pound) price of asset. It is obvious from the above
equation that the regression coefficient b measures the sensitivity of the dollar value of
asset (P) to the exchange rate (S). If the regression coefficient is zero, the dollar value of the
asset is independent of exchange rate movement, implying no exposure. On the basis of
above analysis, one can say that exposure is the regression coefficient. Statistically, the
exposure coefficient, b, is defined as follows:
b = Cov (P,S)/ Var (S)
Where Cov (P,S) is the covariance between the dollar value of the asset and the exchange
rate, and Var (S) is the variance of the exchange rate.
Next, we show how to apply the exposure measurement technique using numerical examples.
Suppose that a U.S. firm has an asset in Britain whose local currency price is random. For

simplicity, let us assume that there are three states of the world, with each state equally likely
to occur. The future local currency price of this British asset as well as the future exchange
rate will be determined, depending on the realized state of the world.
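
The exposure coefficient can be computed directly from the three equally likely states. The pound prices and exchange rates in the sketch below are hypothetical, chosen only to show the mechanics of b = Cov(P, S)/Var(S).

# Exposure coefficient b for a British asset held by a U.S. firm, three equally likely states.
states = [                      # (exchange rate S in $/pound, asset price P* in pounds)
    (1.40, 1_000),
    (1.50, 1_100),
    (1.60, 1_200),
]
p_values = [s * p_star for s, p_star in states]    # dollar value P = S * P*
s_values = [s for s, _ in states]

mean_p = sum(p_values) / 3
mean_s = sum(s_values) / 3
cov_ps = sum((p - mean_p) * (s - mean_s) for p, s in zip(p_values, s_values)) / 3
var_s = sum((s - mean_s) ** 2 for s in s_values) / 3
b = cov_ps / var_s
print(f"Exposure coefficient b = {b:,.0f}")        # about 2,600 in pound-equivalent terms

A zero coefficient would mean the dollar value of the asset is independent of the exchange rate, i.e. no exposure.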
Operating exposure: Operating exposure can be defined as the extent to which the firm's operating cash flows would be affected by random changes in exchange rates. Operating exposure may affect the firm in two different ways, viz., a competitive effect and a conversion effect. An adverse exchange rate change increases the cost of imports, which makes the firm's product costly and thus makes the firm's position less competitive; this is the competitive effect. An adverse exchange rate change may also reduce the value of receivables to the exporting firm, which is called the conversion effect.
Some strategies to manage operating exposure

Selecting low cost production sites: When the domestic currency is strong or expected to become strong, eroding the competitive position of the firm, it can choose to locate production facilities in a foreign country where costs are low due to either an undervalued currency or underpriced factors of production. Recently, Japanese car makers, including Nissan and Toyota, have been increasingly shifting production to U.S. manufacturing facilities in order to mitigate the negative effect of the strong yen on U.S. sales. German car makers such as Daimler Benz and BMW also decided to establish manufacturing facilities in the U.S. for the same reason. Also, the firm can choose to establish and maintain production facilities in multiple countries to deal with the effect of exchange rate changes. Consider Nissan, which has manufacturing facilities in the U.S. and Mexico, as well as in Japan. Multiple manufacturing sites provide Nissan with a great deal of flexibility regarding where to produce, given the prevailing exchange rates. Since the yen appreciated substantially against the dollar, whereas the Mexican peso depreciated against the dollar in recent years, Nissan may choose to increase production in the U.S., and especially in Mexico, in order to serve the U.S. market. This is, in fact, how Nissan has reacted to the rising yen in recent years. Maintaining multiple manufacturing sites, however, may prevent the firm from taking advantage of economies of scale, raising its cost of production. The resultant higher cost can partially offset the advantages of maintaining multiple production sites.

Flexible sourcing policy: Even if the firm has manufacturing facilities only in the domestic country, it can substantially lessen the effect of exchange rate changes by sourcing from where input costs are low. Facing the strong yen in recent years, many Japanese firms have adopted this practice. It is well known that Japanese manufacturers, especially in the car and consumer electronics industries, depend heavily on parts and intermediate products from such low cost countries as Thailand, Malaysia, and China. Flexible sourcing need not be confined just to materials and parts. Firms can also hire low cost guest workers from foreign countries instead of high cost domestic workers in order to be competitive.

Diversification of the market: Another way of dealing with exchange exposure is to diversify the market for the firm's products as much as possible. Suppose that GE is selling power generators in Mexico as well as Germany. Reduced sales in Mexico due to dollar appreciation against the peso can be compensated by increased sales in Germany due to dollar depreciation against the euro. As a result, GE's overall cash flows will be much more stable than would be the case if GE sold only in one foreign market, either Mexico or Germany. As long as exchange rates do not always move in the same direction, the firm can stabilize its operating cash flows by diversifying its export markets.

R&D efforts and product differentiation: Investment in R&D activities can allow the firm to maintain and strengthen its competitive position in the face of adverse exchange rate movements. Successful R&D efforts allow the firm to cut costs and enhance productivity. In addition, R&D efforts can lead to the introduction of new and unique products for which competitors offer no close substitutes. Since the demand for unique products tends to be highly inelastic, the firm would be less exposed to exchange risk. At the same time, the firm can strive to create a perception among consumers that its product is indeed different from those offered by competitors. This helps the firm to pass through any adverse effect of exchange rates on to its customers.

Financial hedging: While not a substitute for the long-term operational approaches described above, financial hedging can be used to stabilize the firm's cash flows. For example, the firm can lend or borrow foreign currencies on a long-term basis. Or, the firm can use currency forward or options contracts and roll them over if necessary.


Capital Asset Pricing Model and Arbitrage Pricing Theory


The three portfolios we looked at in Topic 2 helped lay down the foundation for many of the asset pricing models commonly used in the financial industry today. Two such models are the capital asset pricing model (CAPM) and the arbitrage pricing theory (APT). We will focus first on the capital asset pricing model for two reasons: (i) many of you have seen the CAPM in an introductory finance course, and (ii) the approach the CAPM takes to estimate the risk of a portfolio is very similar to the approach of the third portfolio we analyzed in Topic 2.

1. Assumptions of the Capital Asset Pricing Model

Before we look at how the CAPM can be used to price a portfolio (or an
investment), it is important for you to understand that it is after all a theoretical
model, which means that it is based on an idealistic investment environment
different from the real world. Despite its simplistic assumptions about the
investment environment, the CAPM still serves as a valuable tool in
understanding the relationship between the risk and return.

The following are the assumptions of the CAPM. Briefly explain what each
assumption means.
(a) Investors are price takers
(b) Investors have identical single-period holding horizons
(c) Investors have access to all investments and have access to unlimited
borrowing and lending opportunities at the risk-free rate
(d) The financial markets are frictionless
(e) Investors are rational mean-variance optimizers
(f) Investors have homogeneous expectations

2. Relationship Between the CAPM and the CML

In Topic 2, we know that when you have access to all the different investments
available in the financial market, the best place you can be is on the capital
market line (CML). Portfolios that are located on this line will provide you the
best (or optimal) combination of risk and return. As a result, the CML is a good
measure for the relationship between risk and return. Just in case you forgot, the
CML is represented by the following formula:

E(r_p) = r_rf + [(E(r_m) - r_rf) / σ_m] × σ_p

What is the similarity and what is the difference between the CAPM and the CML in measuring the relationship between risk and return? We need to first re-arrange the formula for the CML (as presented below) before we address the question.

E(r_p) = r_rf + [E(r_m) - r_rf] × (σ_p / σ_m)

If you remember what you learned in your introductory finance course, no doubt you will recognize the equation for the CAPM:

E(r_p) = r_rf + β_p × [E(r_m) - r_rf]

where β_p is the beta of the portfolio.

Do you begin to see the resemblance between the CML and the CAPM? According to the two formulae, the return of the portfolio can be broken down into two components: (i) the guaranteed risk-free rate and (ii) the compensation for taking on risk. In addition, the compensation is determined by two things: (i) a relative measurement of the portfolio's risk and (ii) the market risk premium [i.e. E(r_m) - r_rf].

What about the differences between the CML and the CAPM? Can you tell what the two differences between the CML and the CAPM are?
3. The CAPM, Beta (i.e. β) and the SML
Now that we know more about the similarities and differences between the CML and the CAPM, we need to go back and look at some of the details related to the CAPM.
Even though the formula presented earlier for the CAPM is for a portfolio, the formula can easily be modified to determine the return of a single investment as follows:

E(r_i) = r_rf + β_i × [E(r_m) - r_rf]

Since the risk-free rate and the market return should be the same for every
investment in the financial market, the only thing that is different from
investment to investment is the beta of the investment. As a result, we can
claim that the only driving force behind the determination of an investments
return is its beta.
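
As a quick illustration, the following sketch computes the CAPM expected return for a single investment using assumed inputs (risk-free rate 4%, beta 1.2, expected market return 10%).

# CAPM expected return for a single investment, with assumed inputs.
def capm_expected_return(risk_free, beta, market_return):
    """E(r_i) = r_rf + beta_i * (E(r_m) - r_rf)."""
    return risk_free + beta * (market_return - risk_free)

print(capm_expected_return(risk_free=0.04, beta=1.2, market_return=0.10))  # 0.112 -> 11.2%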

What is the beta? It represents an investment's non-diversifiable risk (and not its total risk) relative to the market risk. In other words, the beta of an investment measures the co-movement of the investment's expected return with the market's expected return. The formula for an investment's beta is as follows:

β_i = (ρ_im × σ_i) / σ_m

where
ρ_im = correlation between investment i's return and the market's return
σ_i = standard deviation of investment i's return (its total risk)
σ_m = standard deviation of the market's return (market risk)
We know the CAPM can be easily modified to determine the expected return of either a portfolio or an individual investment. The only difference between the two is the beta: the beta of a portfolio versus the beta of an individual investment. What is the relationship between the two? The beta of a portfolio is simply the weighted average of the betas of the investments included in the portfolio. The formula for the beta of a portfolio is as follows:

β_p = Σ w_i × β_i
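
For example, with assumed weights and betas the portfolio beta is just the weighted average:

# Portfolio beta as the weighted average of the investments' betas (hypothetical figures).
weights = [0.5, 0.3, 0.2]          # portfolio weights, summing to 1
betas   = [0.8, 1.2, 1.5]          # betas of the individual investments
portfolio_beta = sum(w * b for w, b in zip(weights, betas))
print(portfolio_beta)              # 0.40 + 0.36 + 0.30 = 1.06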

Just as in the case with the capital allocation line, we can also represent the
CAPM in a graphical manner. The straight line that represents the relationship
between risk and return (according to the CAPM) is known as the security market
line (SML).

[Figure: The security market line (SML), plotting expected return E(r_i) against beta. The line starts at r_rf on the vertical axis and passes through E(r_m) at a beta of 1.0.]

The security market line will help you determine if an investment is correctly priced, in other words, whether the investment is offering a return that is appropriate for its level of risk (as measured by the beta). If an investment's return falls on the SML, the investment is considered to be correctly priced because the expected return of the investment matches the one given by the CAPM (based on its beta). However, if the expected return of the investment differs from the one predicted by the CAPM, the investment is considered to be either underpriced or overpriced. The difference between the investment's actual expected return and its fair return (as dictated by the CAPM) is known as the investment's alpha (i.e. α).

Let's analyze the two investments A and B as depicted in the graph above. Based on your analysis, what can you say about the two investments?
(a) Investment A
(b) Investment B
Estimating the Beta of an Investment Using the Index Model
Since the driving force behind the CAPM in determining the return of an
investment is its beta, it is important that you know the process commonly
adopted to estimate the beta of an investment. Before we can proceed with the
discussion on how to estimate beta, you need to first understand that we cannot implement the CAPM in the real world as it is, because of two main issues. First, the CAPM assumes that the market portfolio (which includes all investments in the financial market) is available to all investors. Second, it focuses on the expected return of an investment.

Index model
To apply the CAPM in the real world, we need to use the index model, which addresses the above two issues as follows:
(a) The index model uses a proxy such as a market index (e.g. the S&P 500) to represent a more relevant market portfolio (and the market risk).
(b) The index model uses realized returns (rather than expected returns, which are not easily observable).
If we were to estimate the beta of an investment using the CAPM, we would need to establish the following regression model, which is based on the expected excess returns of the investment in relation to the expected excess returns of the market:

E(r_i) - r_rf = α_i + β_i × [E(r_m) - r_rf]

However, since we are using the index model (i.e. using realized returns), the regression model looks as follows:

r_i - r_rf = α_i + β_i × [r_m - r_rf]

To estimate the beta of an investment, we first need to determine the holding period returns of the investment and of the chosen market index. Once we have identified the proxy for the risk-free rate, we can determine the excess returns of the investment and the excess returns of the market index. If you plot the excess returns of the investment against those of the market index, you will have a very good idea whether the beta will be positive or negative.

Based on the graph above, can you tell if the beta will be positive or negative?
One thing that is crucial to remember is that because of the setup of the
regression model, the excess returns of the investment have to be on the y-axis
and the excess returns of the market index have to be on the x-axis.
Once you have the excess returns of the investment and the market index plotted as above, you want to find a straight line that best fits the data as presented in the graph below:

What does it mean to have a straight line that best fits the data points?
The straight line which best fits the data points is known as the security characteristic line (SCL). Once again, a straight line is determined by its y-intercept and its slope. How do you determine the y-intercept and the slope of the SCL? You can do so by performing a regression analysis using any statistical package or Microsoft Excel. A sketch of such a regression appears below.
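
A statistical package is not strictly necessary; an ordinary least squares fit of the excess returns gives the same result. The monthly returns below are hypothetical; the slope of the fitted line is the beta estimate and the intercept is the alpha.

# Estimating the SCL (beta and alpha) by ordinary least squares on excess returns.
import numpy as np

r_asset  = np.array([0.04, -0.02, 0.03, 0.05, -0.01, 0.02])   # investment returns (assumed)
r_market = np.array([0.03, -0.01, 0.02, 0.04, -0.02, 0.01])   # market index returns (assumed)
r_free   = 0.002                                               # monthly risk-free proxy (assumed)

excess_asset  = r_asset - r_free
excess_market = r_market - r_free

beta, alpha = np.polyfit(excess_market, excess_asset, 1)       # slope, then intercept
print(f"beta = {beta:.2f}, alpha = {alpha:.4f}")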

Problems of the Capital Asset Pricing Model


Although the capital asset pricing model is the most popular tool among many investors and investment analysts, it does have its problems. We know the CAPM uses three variables to determine the expected return of an asset: the risk-free rate, the expected return of the market portfolio, and the beta of the asset. An error in the estimation of any of these variables might lead to a wrong recommendation or investment decision. The following are some of the sources of error in estimating the three variables for the CAPM:
(a) Although a 1-year T-bill is commonly used as a representation for a risk-free asset, it might not be the most appropriate choice in certain situations. Some analysts have suggested that a 30-year T-bond might be a more appropriate choice because its time horizon closely matches the investment horizons of most investors. In this case, the choice of the representation of a risk-free asset might lead to a wrong investment decision because the return of a 1-year T-bill can differ significantly from the return of a 30-year T-bond.
(b) We know there are many representations (or proxies) for the market,
which means that there are many choices to represent the market
portfolio: the Dow Jones Industrial Average, the S&P 500 index, the NYSE
Composite index, etc. Each of these choices will provide a different
estimate for the market return. Just as in the case of the risk-free asset,

the choice of the representation for the market portfolio will affect an
investor's investment decisions.

(c) It has been proven empirically that the beta of an investment is unstable
over time. In other words, the value of the beta of an investment changes
over time. This could be due to changes in the company's management,
its financing policy, etc. In addition, the estimates for the beta of a
particular investment vary among analysts and publications for several
reasons:

(i) The proxy for the market can be different among analysts and
publications. For example, one analyst might be using the Value
Line index (which contains 1700 stocks), while another analyst
might be using the S&P 500 index.
(ii) The time period used in estimating the beta of a stock can be
different among analysts and publications. For example, the beta
of an investment estimated using 5 years of return will differ from
the one estimated using 10 years of return.
(iii)The intervals of the measurement of the returns will also affect the
estimates of the betas. For example, a beta estimated with weekly
returns will differ from the one estimated with monthly returns.

Capital Asset Pricing Model and Arbitrage Theory


The major criticism of the CAPM is that it uses only a single factor in determining
the return of a portfolio, namely the beta of the portfolio. In other words, the
non-diversifiable risk of the portfolio (in relation to the market risk) is the sole
determinant of its return. No other factors will have any effect on the portfolios
return.
To address this criticism of the CAPM, a new model has been developed based
on the arbitrage pricing theory (APT). Similar to the CAPM, the APT assumes that
there is a relationship between the risk and return of a portfolio. However,
compared to the CAPM, the APT has fewer assumptions. The following
assumptions are required for the CAPM but not for the APT:
(a) A single-period investment horizon
(b) Borrowing and lending at the risk-free rate

(c) Investors are mean-variance optimizers

The APT is based on the concept of arbitrage (or the law of one price), which states that two identical investments cannot sell at different prices. In other words, the theory states that market forces will adjust to eliminate any arbitrage opportunities, where a zero-investment portfolio can be created to yield a risk-free profit.

The key thing you need to understand is that, unlike the CAPM, the APT does not
assume that the market risk is the only factor that influences the return of a
portfolio. The APT recognizes that several other factors (or risks) can influence
the return of a portfolio.
The APT preserves the linear relationship between risk and return of the CAPM but abandons the single measure of risk by the beta of the portfolio. The APT model is a multiple factor model, which uses factors such as the inflation rate, the growth rate of the economy, the slope of the yield curve, etc., in addition to the beta of the portfolio, in determining the return of the portfolio. Keep in mind that, just as in the case with the CAPM, the APT can also be modified to determine the return of an individual investment. The formula of the APT can be presented as follows:

E(r_i) = r_rf + β_1 × [E(r_1) - r_rf] + β_2 × [E(r_2) - r_rf] + ... + β_n × [E(r_n) - r_rf]

where the subscripts 1, 2, ..., n represent the different factors that have an impact on an investment's return, and each β is the investment's sensitivity to that factor.
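
As an illustration, a two-factor version of the formula with assumed betas and factor risk premiums might be computed as follows.

# Two-factor APT expected return with assumed inputs.
risk_free = 0.04
factor_premiums = [0.03, 0.02]     # e.g. unexpected inflation, industrial production (assumed)
betas = [1.1, 0.5]                 # the investment's sensitivity to each factor (assumed)

expected_return = risk_free + sum(b * rp for b, rp in zip(betas, factor_premiums))
print(expected_return)             # 0.04 + 1.1*0.03 + 0.5*0.02 = 0.083 -> 8.3%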
The problem with the APT is that the factors are not well specified ex ante. Some research has been conducted to determine the appropriate factors that should be included in the model. However, there is no consensus on what the factors should be. One study suggested that the factors should be changes in expected inflation, unanticipated changes in inflation, unanticipated changes in industrial production, unanticipated changes in the default risk premium, and unanticipated changes in the term structure of interest rates. On the other hand, another study suggested that the factors should be default risk, the term structure of interest rates, inflation or deflation, the long-run expected growth rate of profits for the economy, and residual market risk.

What does all this mean to an investor like you? Should you use the CAPM or the APT? The key thing you need to remember is that neither theory dominates the other. The APT is more general because it does not require as many assumptions as the CAPM. However, the CAPM is more general in the sense that it applies to all individual investments without reservation (whereas the APT works better with well-diversified portfolios).

Arbitrage pricing theory


Arbitrage pricing theory (APT), in Finance, is a general theory of asset pricing, that has
become influential in the pricing of shares.
APT holds that the expected return of a financial asset can be modeled as a linear function of
various macro-economic factors or theoretical market indices, where sensitivity to changes in
each factor is represented by a factor-specific beta coefficient. The model-derived rate of
return will then be used to price the asset correctly - the asset price should equal the expected
end of period price discounted at the rate implied by model. If the price diverges, arbitrage
should bring it back into line.
The theory was initiated by the economist Stephen Ross in 1976.

The APT model


If APT holds, then the return on a risky asset can be described as satisfying the following relation:

r_j = E(r_j) + b_j1 F_1 + b_j2 F_2 + ... + b_jn F_n + ε_j

and the expected return itself is given by

E(r_j) = r_f + b_j1 RP_1 + b_j2 RP_2 + ... + b_jn RP_n

where
E(r_j) is the risky asset's expected return,
RP_k is the risk premium of factor k,
r_f is the risk-free rate,
F_k is the (surprise in the) macroeconomic factor k,
b_jk is the sensitivity of the asset to factor k, also called the factor loading,
and ε_j is the risky asset's idiosyncratic random shock with mean zero.

That is, the uncertain return of an asset j is a linear relationship among n factors. Additionally, every factor is also considered to be a random variable with mean zero.
Note that there are some assumptions and requirements that have to be fulfilled for the latter to be correct: there must be perfect competition in the market, and the total number of factors may never surpass the total number of assets (in order to avoid the problem of matrix singularity).

Arbitrage and the APT

Arbitrage is the practice of taking advantage of a state of imbalance between two (or possibly
more) markets and thereby making a risk free profit; see Rational pricing.

Arbitrage in expectations
The APT describes the mechanism whereby arbitrage by investors will bring an asset which is mispriced, according to the APT model, back into line with its expected price. Note that under true arbitrage, the investor locks in a guaranteed payoff, whereas under APT arbitrage as described below, the investor locks in a positive expected payoff. The APT thus assumes "arbitrage in expectations" - i.e. that arbitrage by investors will bring asset prices back into line with the returns expected by the model.

Arbitrage mechanics
In the APT context, arbitrage consists of trading in two assets with at least one being
mispriced. The arbitrageur sells the asset which is relatively too expensive and uses the
proceeds to buy one which is relatively too cheap.
Under the APT, an asset is mispriced if its current price diverges from the price predicted by
the model. The asset price today should equal the sum of all future cash flows discounted at
the APT rate, where the expected return of the asset is a linear function of various factors,
and sensitivity to changes in each factor is represented by a factor-specific beta coefficient.
A correctly priced asset here may be in fact a synthetic asset - a portfolio consisting of other
correctly priced assets. This portfolio has the same exposure to each of the macroeconomic
factors as the mispriced asset. The arbitrageur creates the portfolio by identifying x correctly
priced assets (one per factor plus one) and then weighting the assets such that portfolio beta
per factor is the same as for the mispriced asset.
When the investor is long the asset and short the portfolio (or vice versa) he has created a
position which has a positive expected return (the difference between asset return and
portfolio return) and which has a net-zero exposure to any macroeconomic factor and is
therefore risk free (other than for firm specific risk). The arbitrageur is thus in a position to
make a risk free profit:
Where today's price is too low:
The implication is that at the end of the period the portfolio would have
appreciated at the rate implied by the APT, whereas the mispriced asset would
have appreciated at more than this rate. The arbitrageur could therefore:
Today:
1 short sell the portfolio
2 buy the mispriced-asset with the proceeds.
At the end of the period:
1 sell the mispriced asset
2 use the proceeds to buy back the portfolio
3 pocket the difference.
Where today's price is too high:
The implication is that at the end of the period the portfolio would have
appreciated at the rate implied by the APT, whereas the mispriced asset would
have appreciated at less than this rate. The arbitrageur could therefore:
Today:
1 short sell the mispriced-asset

2 buy the portfolio with the proceeds.


At the end of the period:
1 sell the portfolio
2 use the proceeds to buy back the mispriced-asset
3 pocket the difference.

Relationship with the capital asset pricing model


The APT along with the capital asset pricing model (CAPM) is one of two influential
theories on asset pricing. The APT differs from the CAPM in that it is less restrictive in its
assumptions. It allows for an explanatory (as opposed to statistical) model of asset returns. It
assumes that each investor will hold a unique portfolio with its own particular array of betas,
as opposed to the identical "market portfolio". In some ways, the CAPM can be considered a
"special case" of the APT in that the securities market line represents a single-factor model of
the asset price, where Beta is exposed to changes in value of the Market.
Additionally, the APT can be seen as a "supply side" model, since its beta coefficients reflect
the sensitivity of the underlying asset to economic factors. Thus, factor shocks would cause
structural changes in the asset's expected return, or in the case of stocks, in the firm's
profitability.
On the other side, the capital asset pricing model is considered a "demand side" model. Its
results, although similar to those in the APT, arise from a maximization problem of each
investor's utility function, and from the resulting market equilibrium (investors are
considered to be the "consumers" of the assets).

Using the APT


Identifying the factors
As with the CAPM, the factor-specific Betas are found via a linear regression of historical
security returns on the factor in question. Unlike the CAPM, the APT, however, does not
itself reveal the identity of its priced factors - the number and nature of these factors is likely
to change over time and between economies. As a result, this issue is essentially empirical in
nature. Several a priori guidelines as to the characteristics required of potential factors are,
however, suggested:
1. their impact on asset prices manifests in their unexpected movements
2. they should represent undiversifiable influences (these are, clearly, more

likely to be macroeconomic rather than firm-specific in nature)


3. timely and accurate information on these variables is required
4. the relationship should be theoretically justifiable on economic grounds
Chen, Roll and Ross identified the following macro-economic factors as significant in
explaining security returns:

surprises in inflation;

surprises in GNP as indicated by an industrial production index;

surprises in investor confidence due to changes in default premium in


corporate bonds;

surprise shifts in the yield curve.

As a practical matter, indices or spot or futures market prices may be used in place of macroeconomic factors, which are reported at low frequency (e.g. monthly) and often with
significant estimation errors. Market indices are sometimes derived by means of factor
analysis. More direct "indices" that might be used are:

short term interest rates;

the difference in long-term and short term interest rates;

a diversified stock index such as the S&P 500 or NYSE Composite Index;

oil prices

gold or other precious metal prices

Currency exchange rates

What is Capital Asset Pricing Model? Discuss its assumptions


The capital asset pricing model is a model that describes the relationship/trade-off between risk and expected/required return. It explains the behavior of security prices and provides a mechanism to assess the impact of a proposed security investment on an investor's overall portfolio risk and return. The CAPM provides a framework for basic risk-return trade-offs in portfolio management. It enables drawing certain implications about risk and the size of the risk premium necessary to compensate for bearing risk.
Assumptions: the basic assumptions of the CAPM relate to
(i) The efficiency of the markets: In an efficient capital market, investors are well informed, transaction costs are low, there are negligible restrictions on investment, no investment is large enough to influence the market price of a share, and investors are in general agreement about the likely performance of individual securities, with expectations based on a common one-year ownership (holding) period.
(ii) Investor preferences: Investors are assumed to prefer to invest in securities with the highest return for a given level of risk, or the lowest risk for a given level of return, return and risk being measured in terms of expected value and standard deviation respectively.

Net operating income approach to Capital Structure


The second approach, as propounded by David Durand, the net operating income approach, examines the effects of changes in capital structure in terms of net operating income. In the net income approach discussed above, the net income available to shareholders is obtained by deducting interest on debentures from net operating income. The overall value of the firm is then calculated by capitalizing this net income at the equity capitalization rate, which is why it is called the net income approach. In the second approach, on the other hand, the overall value of the firm is assessed on the basis of net operating income, not on the basis of net income. Hence this second approach is known as the net operating income approach.

The NOI approach implies that: (i) whatever the change in capital structure, the overall value of the firm is not affected, i.e. the overall value of the firm is independent of the degree of leverage in the capital structure; and (ii) similarly, the overall cost of capital is not affected by any change in the degree of leverage in the capital structure, i.e. the overall cost of capital is independent of leverage.

If the cost of debt is less than that of equity capital, the overall cost of capital should decrease with the increase in debt, whereas it is assumed under this method that the overall cost of capital is unaffected and hence remains constant irrespective of the change in the ratio of debt to equity capital. How can this assumption be justified? The advocates of this method are of the opinion that the degree of risk of the business increases with the increase in the amount of debt. Consequently, the rate of equity capitalization expected on investment in equity shares rises. Thus, on the one hand, the cost of capital decreases with the increase in the volume of debt; on the other hand, the cost of equity capital increases to the same extent. Hence the benefit of leverage is wiped out and the overall cost of capital remains at the same level as before. Let us illustrate this point.

It follows that with the increase in debt the rate of equity capitalization also increases, and consequently the overall cost of capital remains constant; it does not decline.

To put the same in other words, there are two parts to the cost of capital. One is the explicit cost, which is expressed in terms of interest charges on debentures. The other is the implicit cost, which refers to the increase in the rate of equity capitalization resulting from the increase in the risk of the business due to a higher level of debt.

Optimum capital structure

This approach suggests that whatever may be the degree of leverage, the market value of the firm remains constant. In spite of the change in the ratio of debt to equity, the market value of its equity shares remains constant. This means there does not exist an optimum capital structure. Every capital structure is optimum according to the net operating income approach.
Working capital management
Working capital management involves the relationship between a firm's short-term assets and its short-term liabilities. The goal of working capital management is to ensure that a firm is able to continue its operations and that it has sufficient ability to satisfy both maturing short-term debt and upcoming operational expenses. The management of working capital involves managing inventories, accounts receivable and payable, and cash.
Why Firms Hold Cash
The finance profession recognizes the three primary reasons offered by
economist John Maynard Keynes to explain why firms hold cash. The three
reasons are for the purpose of speculation, for the purpose of precaution, and for
the purpose of making transactions. All three of these reasons stem from the
need for companies to possess liquidity.
Speculation
Economist Keynes described this reason for holding cash as creating the ability
for a firm to take advantage of special opportunities that if acted upon quickly
will favor the firm. An example of this would be purchasing extra inventory at a
discount that is greater than the carrying costs of holding the inventory.
Precaution
Holding cash as a precaution serves as an emergency fund for a firm. If expected
cash inflows are not received as expected cash held on a precautionary basis
could be used to satisfy short-term obligations that the cash inflow may have
been bench marked for.
Transaction
Firms are in existence to create products or provide services. The providing of
services and creating of products results in the need for cash inflows and
outflows. Firms hold cash in order to satisfy the cash inflow and cash outflow
needs that they have.
Float

Float is defined as the difference between the book balance and the bank
balance of an account. For example, assume that you go to the bank and
open a checking account with $500. You receive no interest on the $500
and pay no fee to have the account.
Now assume that you receive your water bill in the mail and that it is for
$100. You write a check for $100 and mail it to the water company. At the
time you write the $100 check you also record the payment in your bank
register. Your bank register reflects the book value of the checking account.
The check will literally be "in the mail" for a few days before it is received
by the water company and may go several more days before the water
company cashes it.
The time between the moment you write the check and the time the bank
cashes the check there is a difference in your book balance and the balance
the bank lists for your checking account. That difference is float. This float
can be managed. If you know that the bank will not learn about your check
for five days, you could take the $100 and invest it in a savings account at
the bank for the five days and then place it back into your checking account
"just in time" to cover the $100 check.

Time                            Book Balance   Bank Balance
Time 0 (make deposit)           $500           $500
Time 1 (write $100 check)       $400           $500
Time 2 (bank receives check)    $400           $400

Float is calculated by subtracting the book balance from the bank balance.
Float at Time 0: $500 - $500 = $0
Float at Time 1: $500 - $400 = $100
Float at Time 2: $400 - $400 = $0
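
The float figures in the table can be reproduced with a couple of lines:

# Float = bank balance - book balance at each point in time (figures from the example).
book = {"time 0": 500, "time 1": 400, "time 2": 400}
bank = {"time 0": 500, "time 1": 500, "time 2": 400}

for t in book:
    print(t, "float =", bank[t] - book[t])   # 0, 100, 0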

Ways to Manage Cash

Firms can manage cash in virtually all areas of operations that involve the use of
cash. The goal is to receive cash as soon as possible while at the same time
waiting to pay out cash as long as possible. Below are several examples of how
firms are able to do this.
Policy For Cash Being Held
Here a firm already is holding the cash so the goal is to maximize the benefits
from holding it and wait to pay out the cash being held until the last possible
moment. Previously there was a discussion on Float which includes an example
based on a checking account. That example is expanded here.

Assume that rather than investing $500 in a checking account that does not pay
any interest, you invest that $500 in liquid investments. Further assume that the
bank believes you to be a low credit risk and allows you to maintain a balance of
$0 in your checking account.

This allows you to write a $100 check to the water company and then transfer
funds from your investment to the checking account in a "just in time" (JIT)
fashion. By employing this JIT system you are able to draw interest on the entire
$500 up until you need the $100 to pay the water company. Firms often have
policies similar to this one to allow them to maximize idle cash.

Sales
The goal for cash management here is to shorten the amount of time before the
cash is received. Firms that make sales on credit are able to decrease the
amount of time that their customers wait until they pay the firm by offering
discounts.

For example, credit sales are often made with terms such as 3/10 net 60. The
first part of the sales term "3/10" means that if the customer pays for the sale
within 10 days they will receive a 3% discount on the sale. The remainder of the
sales term, "net 60," means that the bill is due within 60 days. By offering an
inducement, the 3% discount in this case, firms are able to cause their
customers to pay off their bills early. This results in the firm receiving the cash
earlier.
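
For instance, on an assumed 10,000 invoice with "3/10 net 60" terms, the customer's two choices work out as follows.

# The two payment choices under "3/10 net 60" terms, for an assumed invoice amount.
invoice = 10_000
discount_rate, discount_days, net_days = 0.03, 10, 60

pay_early = invoice * (1 - discount_rate)   # 9,700 if paid within 10 days
pay_late  = invoice                         # 10,000 if paid by day 60
print(pay_early, pay_late)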
Inventory
The goal here is to put off the payment of cash for as long as possible and to
manage the cash being held. By using a JIT inventory system, a firm is able to
avoid paying for the inventory until it is needed while also avoiding carrying
costs on the inventory. JIT is a system where raw materials are purchased and
received just in time, as they are needed in the production lines of a firm.

Limitations Of Ratio Analysis


Although ratio analysis is a very important tool to judge a company's performance, it has the following limitations.

1. Ratios are tools of quantitative analysis, which ignore qualitative points of view.

2. Ratios are generally distorted by inflation.

3. Ratios give false results if they are calculated from incorrect accounting data.

4. Ratios are calculated on the basis of past data. Therefore, they do not provide complete information for future forecasting.

5. Ratios may be misleading if they are based on false or window-dressed accounting information.

Advantages of Financial Statement Analysis:


There are various advantages of financial statement analysis. The major benefit is that investors get enough of an idea to decide about investing their funds in a specific company. Secondly, regulatory authorities like the International Accounting Standards Board can check whether the company is following accounting standards or not. Thirdly, financial statement analysis can help government agencies to assess the taxation due from the company. Moreover, the company can analyze its own performance over a period of time through financial statement analysis.

Limitations of Financial Statement Analysis:


Although financial statement analysis is a highly useful tool, it has two limitations. These two limitations involve the comparability of financial data between companies and the need to look beyond ratios.
Comparison of Financial Data:
Comparison of one company with another can provide valuable clues about the financial health of an organization. Unfortunately, differences in accounting methods between companies sometimes make it difficult to compare the companies' financial data. For example, if one firm values its inventories by the LIFO method and another firm by the average cost method, then direct comparison of financial data such as inventory valuations and cost of goods sold between the two firms may be misleading. Sometimes enough data are presented in footnotes to the financial statements to restate the data to a comparable basis. Otherwise, the analyst should keep in mind the lack of comparability of the data before drawing any definite conclusion. Nevertheless, even with this limitation in mind, comparisons of key ratios with other companies and with industry averages often suggest avenues for further investigation.
The Need to Look Beyond Ratios:
An inexperienced analyst may assume that ratios are sufficient in themselves as a basis for judgment about the future. Nothing could be further from the truth. Conclusions based on ratio analysis must be regarded as tentative. Ratios should not be viewed as an end, but rather as a starting point, as indicators of what to pursue in greater depth. They raise many questions, but they rarely answer any question by themselves.
In addition to ratios, other sources of data should be analyzed in order to make judgments about the future of an organization. The analyst should look, for example, at industry trends, technological changes, changes in consumer tastes, changes in broad economic factors, and changes within the firm itself. A recent change in a key management position, for example, might provide a basis for optimism about the future, even though the past performance of the firm (as shown by its ratios) may have been mediocre.
Financial Statement Analysis Limitations

Many things can impact the calculation of ratios and make comparisons
difficult. The limitations include:

The use of estimates in allocating costs to each period. The ratios
will be as accurate as the estimates.

The cost principle is used to prepare financial statements. Financial
data is not adjusted for price changes or inflation/deflation.

Companies have a choice of accounting methods (for example,
inventory LIFO vs FIFO and depreciation methods). These differences
impact ratios and make it difficult to compare companies using
different methods.

Companies may have different fiscal year ends, making comparison
difficult if the industry is cyclical.

Diversified companies are difficult to classify for comparison
purposes.

Financial statement analysis does not provide answers to all the
users' questions. In fact, it usually generates more questions!
Common Size Financial Statements

Common size ratios are used to compare financial statements of different-size
companies, or of the same company over different periods. By expressing the
items in proportion to some size-related measure, standardized financial

statements can be created, revealing trends and providing insight into how the
different companies compare.

The common size ratio for each line on the financial statement is calculated as
follows:

Common Size Ratio = Item of Interest / Reference Item

For example, if the item of interest is inventory and it is referenced to total
assets (as it normally would be), the common size ratio would be:

Common Size Ratio for Inventory = Inventory / Total Assets

The ratios often are expressed as percentages of the reference amount.
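As a minimal sketch, each common-size ratio in the example statements below is simply the line item divided by the reference amount; the figures used here are the ones from the example income statement:

```python
# Line items from the example income statement below, in dollars.
income_statement = {
    "Revenue": 70134,
    "Cost of Goods Sold": 44221,
    "Gross Profit": 25913,
    "Net Income": 5754,
}

reference = income_statement["Revenue"]  # income statement items are referenced to revenue

for item, amount in income_statement.items():
    common_size = amount / reference * 100  # common size ratio as a percentage
    print(f"{item}: {common_size:.1f}%")
```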


Common size statements usually are prepared for the income statement and
balance sheet, expressing information as follows:

Income statement items - expressed as a percentage of total revenue

Balance sheet items - expressed as a percentage of total assets

The following example income statement shows both the dollar amounts and the
common size ratios:
Common Size Income Statement

Item                    Income Statement    Common-Size Income Statement
Revenue                 70,134              100%
Cost of Goods Sold      44,221              63.1%
Gross Profit            25,913              36.9%
SG&A Expense            13,531              19.3%
Operating Income        12,382              17.7%
Interest Expense         2,862               4.1%
Provision for Taxes      3,766               5.4%
Net Income               5,754               8.2%

For the balance sheet, the common size percentages are referenced to the total
assets. The following sample balance sheet shows both the dollar amounts and
the common size ratios:
Common Size Balance Sheet

Item                              Balance Sheet    Common-Size Balance Sheet
ASSETS
Cash & Marketable Securities       6,029           15.1%
Accounts Receivable               14,378           36.0%
Inventory                         17,136           42.9%
Total Current Assets              37,543           93.9%
Property, Plant & Equipment        2,442            6.1%
Total Assets                      39,985           100%

LIABILITIES AND SHAREHOLDERS' EQUITY
Current Liabilities               14,251           35.6%
Long-Term Debt                    12,624           31.6%
Total Liabilities                 26,875           67.2%
Shareholders' Equity              13,110           32.8%
Total Liabilities & Equity        39,985           100%

The above common size statements are prepared in a vertical analysis, referencing
each line on the financial statement to a total value on the
statement in a given period.
The ratios in common size statements tend to have less variation than the
absolute values themselves, and trends in the ratios can reveal important
changes in the business. Historical comparisons can be made in a time-series
analysis to identify such trends.
Common size statements also can be used to compare the firm to other firms.
Comparisons Between Companies (Cross-Sectional Analysis)
Common size financial statements can be used to compare multiple companies
at the same point in time. A common-size analysis is especially useful when
comparing companies of different sizes. It often is insightful to compare a firm to
the best performing firm in its industry (benchmarking). A firm also can be
compared to its industry as a whole. To compare to the industry, the ratios are
calculated for each firm in the industry and an average for the industry is
calculated. Comparative statements then may be constructed with the company
of interest in one column and the industry averages in another. The result is a
quick overview of where the firm stands in the industry with respect to key items
on the financial statements.
Limitations
As with financial statements in general, the interpretation of common size
statements is subject to many of the limitations in the accounting data used to
construct them. For example:

Different accounting policies may be used by different firms or
within the same firm at different points in time. Adjustments should
be made for such differences.

Different firms may use different accounting calendars, so the
accounting periods may not be directly comparable.

What Is the Function of a Comparative Balance Sheet?


A comparative balance sheet is designed to show financial differences between
several accounting periods. A balance sheet is a detailed account of everything a
business owns and owes at a certain time, containing both tangible and
intangible items. A comparative balance sheet is useful because a business can
instantly compare profits and losses between different time periods. Most
businesses use comparative balance sheets to help increase profits and
functionality of a company.
Features
1. A comparative balance sheet will include several different types of
accounting data. First there will be the income received and money
spent. There will also be a list of credits and debits to the company.
A list of assets and liabilities is also included. All of these factors are
necessary to see what the total worth of the company is through
the balance sheet. The comparative balance sheet allows the
company or business to see at a glance how its profits differ from
one year to another. These comparative balance sheets are aligned
so that business people can see at a glance the financial differences
from year to year.
Function
2. A balance sheet is designed to help keep a business or company
aware of every expense and profit that it is receiving. It also allows
the company to see which times of the year are most profitable,
and which years they did the best. This knowledge is important so
that the company can adapt to the information to build the best
business possible. If the business did better three years ago, they
can look at that data and try to decide what it was that made them
do so well that year. Then they can change what they are doing in
the present to help boost current profits.
Benefits
3. The main benefit of a comparative balance sheet is that profits and
losses can be seen at a glance. It is also possible to see the
increase or decrease of assets that the business has. The company
will be able to tell what the biggest money suckers in the business
are, and try to think of ways to cut down losses in that area.
Significance
4. Without a comparative balance sheet, businesses would not know
how to change their strategy from year to year. All they would have
to go on would be their current balance statements. This would be
detrimental to most businesses. It is very important to be able to
look at past profit information to judge how to act for the future.
Expert Insight
5. Most businesses and companies use comparative balance sheets. It
would be a very poor business decision not to use them. A lot of
times these comparative balance sheets are used when proposing
new additions or changes to a business. The company can go back
as many as 10 or 20 years to identify trends, and to judge if a new
project is right for the company. Comparative balance sheets are a
necessity in the business world.

Derivative
Derivatives are financial contracts, or financial instruments, whose values are derived from
the value of something else (known as the underlying).
The underlying on which a derivative is based can be an asset (e.g., commodities, equities
(stocks), residential mortgages, commercial real estate, loans, bonds), an index (e.g., interest
rates, exchange rates, stock market indices, the consumer price index (CPI); see inflation
derivatives), or other items (e.g., weather conditions, or other derivatives). Credit derivatives
are based on loans, bonds or other forms of credit.
The main types of derivatives are forwards, futures, options, and swaps.
Derivatives can be used to mitigate the risk of economic loss arising from changes in the
value of the underlying. This activity is known as hedging. Alternatively, derivatives can be
used by investors to increase the profit arising if the value of the underlying moves in the
direction they expect. This activity is known as speculation.
Because the value of a derivative is contingent on the value of the underlying, the notional
value of derivatives is recorded off the balance sheet of an institution, although the market
value of derivatives is recorded on the balance sheet.

Uses
Hedging
Derivatives allow risk about the value of the underlying asset to be transferred from one
party to another. For example, a wheat farmer and a miller could sign a futures contract to
exchange a specified amount of cash for a specified amount of wheat in the future. Both
parties have reduced a future risk: for the wheat farmer, the uncertainty of the price, and for
the miller, the availability of wheat. However, there is still the risk that no wheat will be
available due to causes unspecified by the contract, like the weather, or that one party will
renege on the contract. Although a third party, called a clearing house, insures a futures
contract, not all derivatives are insured against counterparty risk.
From another perspective, the farmer and the miller both reduce a risk and acquire a risk
when they sign the futures contract: The farmer reduces the risk that the price of wheat will
fall below the price specified in the contract and acquires the risk that the price of wheat will

rise above the price specified in the contract (thereby losing additional income that he could
have earned). The miller, on the other hand, acquires the risk that the price of wheat will fall
below the price specified in the contract (thereby paying more in the future than he otherwise
would) and reduces the risk that the price of wheat will rise above the price specified in the
contract. In this sense, one party is the insurer (risk taker) for one type of risk, and the
counterparty is the insurer (risk taker) for another type of risk.
Hedging also occurs when an individual or institution buys an asset (like a commodity, a
bond that has coupon payments, a stock that pays dividends, and so on) and sells it using a
futures contract. The individual or institution has access to the asset for a specified amount of
time, and then can sell it in the future at a specified price according to the futures contract. Of
course, this allows the individual or institution the benefit of holding the asset while reducing
the risk that the future selling price will deviate unexpectedly from the market's current
assessment of the future value of the asset.
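A minimal numeric sketch of the futures hedge described above, from the seller's (farmer's) side, with hypothetical prices and quantities: the futures contract locks in the sale price regardless of where the spot price ends up.

```python
def farmer_revenue(spot_price, futures_price, quantity, hedged):
    """Revenue at delivery: the agreed futures price if hedged,
    otherwise whatever the spot price turns out to be."""
    price = futures_price if hedged else spot_price
    return price * quantity

futures_price, quantity = 5.00, 1_000  # hypothetical: $5 per bushel, 1,000 bushels
for spot in (4.00, 5.00, 6.00):
    hedged = farmer_revenue(spot, futures_price, quantity, hedged=True)
    unhedged = farmer_revenue(spot, futures_price, quantity, hedged=False)
    print(f"spot {spot:.2f}: hedged {hedged:.0f}, unhedged {unhedged:.0f}")
```

The hedged revenue stays at 5,000 whichever way prices move, which is exactly the exchange of risk described above: the seller gives up the upside above 5.00 in return for protection below it.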

Speculation and arbitrage


Derivatives can be used to acquire risk, rather than to insure or hedge against risk. Thus,
some individuals and institutions will enter into a derivative contract to speculate on the
value of the underlying asset, betting that the party seeking insurance will be wrong about the
future value of the underlying asset. Speculators will want to be able to buy an asset in the
future at a low price according to a derivative contract when the future market price is high,
or to sell an asset in the future at a high price according to a derivative contract when the
future market price is low.
Individuals and institutions may also look for arbitrage opportunities, as when the current
buying price of an asset falls below the price specified in a futures contract to sell the asset.
Speculative trading in derivatives gained a great deal of notoriety in 1995 when Nick Leeson,
a trader at Barings Bank, made poor and unauthorized investments in futures contracts.
Through a combination of poor judgment, lack of oversight by the bank's management and
by regulators, and unfortunate events like the Kobe earthquake, Leeson incurred a $1.3
billion loss that bankrupted the centuries-old institution.[1]

Types of derivatives
OTC and exchange-traded
Broadly speaking there are two distinct groups of derivative contracts, which are
distinguished by the way they are traded in market:

Over-the-counter (OTC) derivatives are contracts that are
traded (and privately negotiated) directly between two parties,
without going through an exchange or other intermediary. Products
such as swaps, forward rate agreements, and exotic options are
almost always traded in this way. The OTC derivative market is the
largest market for derivatives, and is largely unregulated with
respect to disclosure of information between the parties, since the
OTC market is made up of banks and other highly sophisticated
parties, such as hedge funds. Reporting of OTC amounts is
difficult because trades can occur in private, without activity being
visible on any exchange. According to the Bank for International
Settlements, the total outstanding notional amount is $684 trillion
(as of June 2008)[2]. Of this total notional amount, 67% are interest

rate contracts, 8% are credit default swaps (CDS), 9% are foreign
exchange contracts, 2% are commodity contracts, 1% are equity
contracts, and 12% are other. Because OTC derivatives are not
traded on an exchange, there is no central counterparty. Therefore,
they are subject to counterparty risk, like an ordinary contract,
since each counterparty relies on the other to perform.

Exchange-traded derivatives (ETD) are those derivatives
products that are traded via specialized derivatives exchanges or
other exchanges. A derivatives exchange acts as an intermediary to
all related transactions, and takes Initial margin from both sides of
the trade to act as a guarantee. The world's largest[3] derivatives
exchanges (by number of transactions) are the Korea Exchange
(which lists KOSPI Index Futures & Options), Eurex (which lists a
wide range of European products such as interest rate & index
products), and CME Group (made up of the 2007 merger of the
Chicago Mercantile Exchange and the Chicago Board of Trade and
the 2008 acquisition of the New York Mercantile Exchange).
According to BIS, the combined turnover in the world's derivatives
exchanges totalled USD 344 trillion during Q4 2005. Some types of
derivative instruments also may trade on traditional exchanges. For
instance, hybrid instruments such as convertible bonds and/or
convertible preferred may be listed on stock or bond exchanges.
Also, warrants (or "rights") may be listed on equity exchanges.
Performance Rights, Cash xPRTs and various other instruments that
essentially consist of a complex set of options bundled into a simple
package are routinely listed on equity exchanges. Like other
derivatives, these publicly traded derivatives provide investors
access to risk/reward and volatility characteristics that, while
related to an underlying commodity, nonetheless are distinctive.

Common derivative contract types


There are three major classes of derivatives:
1. Futures/Forwards are contracts to buy or sell an asset on or before
a future date at a price specified today. A futures contract differs
from a forward contract in that the futures contract is a
standardized contract written by a clearing house that operates an
exchange where the contract can be bought and sold, while a
forward contract is a non-standardized contract written by the
parties themselves.
2. Options are contracts that give the owner the right, but not the
obligation, to buy (in the case of a call option) or sell (in the case of
a put option) an asset. The price at which the sale takes place is
known as the strike price, and is specified at the time the parties
enter into the option. The option contract also specifies a maturity
date. In the case of a European option, the owner has the right to
require the sale to take place on (but not before) the maturity date;
in the case of an American option, the owner can require the sale to
take place at any time up to the maturity date. If the owner of the
contract exercises this right, the counterparty has the obligation to
carry out the transaction.
3. Swaps are contracts to exchange cash (flows) on or before a
specified future date based on the underlying value of
currencies/exchange rates, bonds/interest rates, commodities,
stocks or other assets.

More complex derivatives can be created by combining the elements of these basic types. For
example, the holder of a swaption has the right, but not the obligation, to enter into a swap on
or before a specified future date.
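A minimal sketch of the option payoff logic described above, with hypothetical strike and spot values; it shows only the value at expiry and ignores the premium paid for the option.

```python
def call_payoff(spot, strike):
    """Value at expiry of a call: the right to buy at the strike."""
    return max(spot - strike, 0.0)

def put_payoff(spot, strike):
    """Value at expiry of a put: the right to sell at the strike."""
    return max(strike - spot, 0.0)

strike = 100.0  # hypothetical strike price
for spot in (90.0, 100.0, 120.0):
    print(f"spot {spot:.0f}: call {call_payoff(spot, strike):.0f}, put {put_payoff(spot, strike):.0f}")
```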

Examples
Some common examples of these derivatives are:
CONTRACT TYPES

Underlying: Equity Index
  Exchange-traded futures: DJIA Index future; NASDAQ Index future
  Exchange-traded options: Option on DJIA Index future; Option on NASDAQ Index future
  OTC swap: Equity swap
  OTC forward: Back-to-back
  OTC option: n/a

Underlying: Money market
  Exchange-traded futures: Eurodollar future; Euribor future
  Exchange-traded options: Option on Eurodollar future; Option on Euribor future
  OTC swap: Interest rate swap
  OTC forward: Forward rate agreement
  OTC option: Interest rate cap and floor; Swaption; Basis swap

Underlying: Bonds
  Exchange-traded futures: Bond future
  Exchange-traded options: Option on Bond future
  OTC swap: Total return swap
  OTC forward: Repurchase agreement
  OTC option: Bond option

Underlying: Single Stocks
  Exchange-traded futures: Single-stock future
  Exchange-traded options: Single-share option
  OTC swap: Equity swap
  OTC forward: Repurchase agreement
  OTC option: Stock option; Warrant; Turbo warrant

Underlying: Credit
  Exchange-traded futures: n/a
  Exchange-traded options: n/a
  OTC swap: Credit default swap
  OTC forward: n/a
  OTC option: Credit default option

Other examples of underlying exchangeables are:

Property (mortgage) derivatives

Economic derivatives that pay off according to economic reports [1]
as measured and reported by national statistical agencies

Energy derivatives that pay off according to a wide variety of
indexed energy prices. Usually classified as either physical or
financial, where physical means the contract includes actual
delivery of the underlying energy commodity (oil, gas, power, etc.)

Commodities

Freight derivatives

Inflation derivatives

Insurance derivatives

Weather derivatives

Credit derivatives

Cash flow
The payments between the parties may be determined by:

the price of some other, independently traded asset in the future
(e.g., a common stock);

the level of an independently determined index (e.g., a stock
market index or heating-degree-days);

the occurrence of some well-specified event (e.g., a company
defaulting);

an interest rate;

an exchange rate;

or some other factor.

Some derivatives are the right to buy or sell the underlying security or commodity at some
point in the future for a predetermined price. If the price of the underlying security or
commodity moves in the right direction, the owner of the derivative makes money;
otherwise, they lose money or the derivative becomes worthless. Depending on the terms of
the contract, the potential gain or loss on a derivative can be much higher than if they had
traded the underlying security or commodity directly.

Valuation

[Figure: Total world derivatives from 1998 to 2007[4] compared to total world wealth in the year 2000]

Market and arbitrage-free prices


Two common measures of value are:

Market price, i.e. the price at which traders are willing to buy or sell
the contract

Arbitrage-free price, meaning that no risk-free profits can be made
by trading in these contracts; see rational pricing

Determining the market price


For exchange-traded derivatives, market price is usually transparent (often published in real
time by the exchange, based on all the current bids and offers placed on that particular
contract at any one time). Complications can arise with OTC or floor-traded contracts
though, as trading is handled manually, making it difficult to automatically broadcast prices.
In particular with OTC contracts, there is no central exchange to collate and disseminate
prices.

Determining the arbitrage-free price


The arbitrage-free price for a derivatives contract is complex, and there are many different
variables to consider. Arbitrage-free pricing is a central topic of financial mathematics. The
stochastic process of the price of the underlying asset is often crucial. A key equation for the
theoretical valuation of options is the Black-Scholes formula, which is based on the
assumption that the cash flows from a European stock option can be replicated by a
continuous buying and selling strategy using only the stock. A simplified version of this
valuation technique is the binomial options model.
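As an illustrative sketch of the valuation approach just mentioned (hypothetical inputs, standard textbook form of the formula), the Black-Scholes price of a European call on a non-dividend-paying stock can be computed as follows:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal cumulative distribution function via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot, strike, rate, vol, t):
    """Black-Scholes price of a European call on a non-dividend-paying stock."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d2)

# Hypothetical inputs: spot 100, strike 100, 5% rate, 20% volatility, 1 year.
print(round(black_scholes_call(100.0, 100.0, 0.05, 0.20, 1.0), 2))  # roughly 10.45
```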

Criticisms

Derivatives are often subject to the following criticisms:

Possible large losses


The use of derivatives can result in large losses due to the use of leverage, or borrowing.
Derivatives allow investors to earn large returns from small movements in the underlying
asset's price. However, investors could lose large amounts if the price of the underlying
moves against them significantly. There have been several instances of massive losses in
derivative markets, such as:

The need to recapitalize insurer American International Group (AIG)
with $85 billion of debt provided by the US federal government[5].
An AIG subsidiary had lost more than $18 billion over the preceding
three quarters on Credit Default Swaps (CDS) it had written.[6] It was
reported that the recapitalization was necessary because further
losses were foreseeable over the next few quarters.

The loss of $7.2 billion by Société Générale in January 2008 through
misuse of futures contracts.

The loss of US$6.4 billion in the failed fund Amaranth Advisors,
which was long natural gas in September 2006 when the price
plummeted.

The loss of US$4.6 billion in the failed fund Long-Term Capital
Management in 1998.

The bankruptcy of Orange County, CA in 1994, the largest
municipal bankruptcy in U.S. history. On December 6, 1994, Orange
County declared Chapter 9 bankruptcy, from which it emerged in
June 1995. The county lost about $1.6 billion through derivatives
trading. Orange County was neither bankrupt nor insolvent at the
time; however, because of the strategy the county employed it was
unable to generate the cash flows needed to maintain services.
Orange County is a good example of what happens when
derivatives are used incorrectly and positions liquidated in an
unplanned manner; had they not liquidated they would not have
lost any money as their positions rebounded. Potentially
problematic use of interest-rate derivatives by US municipalities
has continued in recent years. See, for example:[7]

The Nick Leeson affair in 1994

Counter-party risk
Derivatives (especially swaps) expose investors to counter-party risk.
For example, suppose a business wants a fixed interest rate loan but finds that banks only
offer variable rates; it can swap payments with another business that wants a variable rate,
synthetically creating a fixed rate for the first business. However, if the second
business goes bankrupt, it can't pay its variable rate, and so the first business will lose its
fixed rate and will be paying a variable rate again. If interest rates have increased, it is
possible that the first business may be adversely affected, because it may not be prepared to
pay the higher variable rate.
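A minimal sketch of the fixed-for-floating exchange described above, with a hypothetical notional and rates: the first business pays fixed and receives floating, so the net payment it receives each period depends on where the floating rate settles. If the counterparty defaults, these receipts stop and the business is exposed to the floating rate again.

```python
def net_receipt_for_fixed_payer(notional, fixed_rate, floating_rate):
    """Net amount received in one period by the party that pays fixed
    and receives floating in a plain-vanilla interest rate swap."""
    return notional * (floating_rate - fixed_rate)

notional, fixed_rate = 1_000_000, 0.05  # hypothetical terms
for floating in (0.04, 0.05, 0.07):
    net = net_receipt_for_fixed_payer(notional, fixed_rate, floating)
    print(f"floating {floating:.0%}: net receipt {net:,.0f}")
```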

Different types of derivatives have different levels of risk for this effect. For example,
standardized stock options by law require the party at risk to have a certain amount deposited
with the exchange, showing that they can pay for any losses; banks that help businesses
swap variable for fixed rates on loans may do credit checks on both parties. However, in
private agreements between two companies, for example, there may not be benchmarks for
performing due diligence and risk analysis.

Unsuitably high risk for small/inexperienced investors


Derivatives pose unsuitably high amounts of risk for small or inexperienced investors.
Because derivatives offer the possibility of large rewards, they offer an attraction even to
individual investors. However, speculation in derivatives often assumes a great deal of risk,
requiring commensurate experience and market knowledge, especially for the small investor,
a reason why some financial planners advise against the use of these instruments. Derivatives
are complex instruments devised as a form of insurance, to transfer risk among parties based
on their willingness to assume additional risk, or hedge against it.
Derivatives may also add moral hazard by spreading risk across parties.

Large notional value

Derivatives typically have a large notional value. As such, there
is the danger that their use could result in losses that the investor
would be unable to compensate for. The possibility that this could
lead to a chain reaction ensuing in an economic crisis, has been
pointed out by legendary investor Warren Buffett in Berkshire
Hathaway's annual report. Buffett called them 'financial weapons of
mass destruction.' The problem with derivatives is that they control
an increasingly larger notional amount of assets and this may lead
to distortions in the real capital and equities markets. Investors
begin to look at the derivatives markets to make a decision to buy
or sell securities and so what was originally meant to be a market
to transfer risk now becomes a leading indicator.

Leverage of an economy's debt


Derivatives massively leverage the debt in an economy, making it ever more difficult for
the underlying real economy to service its debt obligations and curtailing real economic
activity, which can cause a recession or even depression.[8] In the view of Marriner S. Eccles,
U.S. Federal Reserve Chairman from November, 1934 to February, 1948, too high a level of
debt was one of the primary causes of the 1920s-30s Great Depression. (See Berkshire
Hathaway Annual Report for 2002)

Benefits
Nevertheless, the use of derivatives also has its benefits:

Derivatives facilitate the buying and selling of risk, and thus
have a positive impact on the economic system. Although
someone loses money while someone else gains money with a
derivative, under normal circumstances, trading in derivatives
should not adversely affect the economic system because it is not
zero sum in utility.

Former Federal Reserve Board chairman Alan Greenspan
commented in 2003 that he believed the use of derivatives had
softened the impact of the economic downturn at the
beginning of the 21st century.

Definitions

Bilateral netting: A legally enforceable arrangement between a
bank and a counter-party that creates a single legal obligation
covering all included individual contracts. This means that a bank's
obligation, in the event of the default or insolvency of one of the
parties, would be the net sum of all positive and negative fair
values of contracts included in the bilateral netting arrangement
(a small numeric sketch follows this list of definitions).

Credit derivative: A contract that transfers credit risk from a
protection buyer to a credit protection seller. Credit derivative
products can take many forms, such as credit default swaps, credit
linked notes and total return swaps.

Derivative: A financial contract whose value is derived from the
performance of assets, interest rates, currency exchange rates, or
indexes. Derivative transactions include a wide assortment of
financial contracts including structured debt obligations and
deposits, swaps, futures, options, caps, floors, collars, forwards and
various combinations thereof.

Exchange-traded derivative contracts: Standardized derivative
contracts (e.g. futures contracts and options) that are transacted on
an organized futures exchange.

Gross negative fair value: The sum of the fair values of contracts
where the bank owes money to its counter-parties, without taking
into account netting. This represents the maximum losses the
bank's counter-parties would incur if the bank defaults and there is
no netting of contracts, and no bank collateral was held by the
counter-parties.

Gross positive fair value: The sum total of the fair values of
contracts where the bank is owed money by its counter-parties,
without taking into account netting. This represents the maximum
losses a bank could incur if all its counter-parties default and there
is no netting of contracts, and the bank holds no counter-party
collateral.

High-risk mortgage securities: Securities where the price or
expected average life is highly sensitive to interest rate changes, as
determined by the FFIEC policy statement on high-risk mortgage
securities.

Notional amount: The nominal or face amount that is used to
calculate payments made on swaps and other risk management
products. This amount generally does not change hands and is thus
referred to as notional.

Over-the-counter (OTC) derivative contracts: Privately negotiated
derivative contracts that are transacted off organized futures
exchanges.

Structured notes: Non-mortgage-backed debt securities, whose
cash flow characteristics depend on one or more indices and/or
have embedded forwards or options.

Total risk-based capital: The sum of tier 1 plus tier 2 capital. Tier 1
capital consists of common shareholders' equity, perpetual
preferred shareholders' equity with non-cumulative dividends,
retained earnings, and minority interests in the equity accounts of
consolidated subsidiaries. Tier 2 capital consists of subordinated
debt, intermediate-term preferred stock, cumulative and long-term
preferred stock, and a portion of a bank's allowance for loan and
lease losses.
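As referenced under bilateral netting above, a minimal numeric sketch (hypothetical fair values) of how gross positive fair value, gross negative fair value and the netted obligation relate:

```python
# Hypothetical fair values of contracts with one counter-party, from the
# bank's point of view (positive = the counter-party owes the bank).
contract_fair_values = [120.0, -45.0, 30.0, -80.0]

gross_positive = sum(v for v in contract_fair_values if v > 0)   # owed to the bank
gross_negative = -sum(v for v in contract_fair_values if v < 0)  # owed by the bank
net_obligation = sum(contract_fair_values)  # single obligation under bilateral netting

print(gross_positive, gross_negative, net_obligation)  # 150.0 125.0 25.0
```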

Risk management
The broad parameters of the risk management function should cover:
(a) Organizational structure
(b) Comprehensive risk management approach
(c) Risk management policies approved by the board, which should
    be consistent with the broader business strategies, capital
    strength, management expertise and overall willingness to
    assume risk
(d) Guidelines and other parameters used to govern risk taking,
    including detailed structure of prudential limits
(e) Strong MIS for reporting, monitoring and controlling risk
(f) Well laid out procedures, effective control and comprehensive
    risk reporting framework
(g) Separate risk management organization/framework independent
    of operational departments and with clear delineation of levels
    of responsibility for management of risk
(h) Periodical review and evaluation

Accurate and timely credit grading is one of the basic components of risk
management.
Credit risk
Credit risk is defined as the possibility of losses associated with diminution in the
credit quality of borrowers or counterparties.
Market risk
Market risk takes the form of:
(a) Liquidity risk
(b) Interest rate risk
(c) Foreign exchange rate(forex) risk
(d) Commodity price risk
(e) Equity price risk
Operational risk
Managing operational risk is becoming an important feature of sound risk
management practices in modern financial markets in the wake of phenomenal
increase in the volume of transactions, high degree of structural changes and
complex support systems. The most important type of operational risk involves
breakdowns in internal controls and corporate governance. Such breakdowns
can lead to financial loss through error, fraud, or failure to perform in a timely
manner or cause the interest of the banks to be compromised.
Generally, operational risk is defined as any risk, which is not categorized as
market or credit risk or the risk of loss arising from various types of human or
technical error. It is also synonymous with settlement or payments risk and
business interruption, administrative and legal risks. Operational risk has some
form of link between credit and market risks. An operational problem with a
business transaction could trigger a credit or market risk.

Corporate Risk Management


Financial risk management (FRM) had its origins in trading floors and the Basel Accords
on banking regulation during the 1980s and 1990s. If a unifying theme emerged, it was a
need to update asset-liability management (ALM) techniques. These tended to define risks
in terms of their effects on a firm's accounting results, such as earnings, net interest
income, and return on assets. The proliferation of off-balance sheet tools,
including derivatives and securitization, was rendering those metrics of performance easy
to manipulate.
The solution of financial risk management was to ignore accounting metrics of value and
focus exclusively on market values. Till Guldimann (1994) captured the new spirit:
Across markets, traded securities have replaced many illiquid instruments, e.g., loans and
mortgages have been securitized to permit disintermediation and trading. Global securities
markets have expanded and both exchange traded and over-the-counter derivatives have
become major components of the markets.
These developments, along with technological breakthroughs in data processing, have gone
hand in hand with changes in management practices: a movement away from management
based on accrual accounting toward risk management based on marking-to-market of
positions.
Financial risks came to be divided into three categories:
Financial risks
Market risk

Credit risk

Operational risk

New techniques for assessing and managing these risks all focused on their impact on market
value.

Market risk, by definition, is risk due to uncertainty in future market values.

New credit risk models assessed potential defaults or credit deteriorations in terms of
their mark-to-market impact.

Operational risk was also assessed in terms of its actual or potential direct costs.

Such techniques proved effective on bank trading floors, where market values were readily
available. Extending them to other parts of the bank, or even to non-financial corporations,

proved problematic. This was the realm of book value accounting. Market values were
difficult or impossible to secure for items such as private equity, pension liabilities, factory
equipment, intellectual property or natural resource reserves.
Corporate risk management emerged as a catch-all phrase for practices that serve to optimize
risk taking in a context of book value accounting. Generally, this includes risks of nonfinancial corporations, but also those of business lines of financial institutions that are not
engaged in trading or investment management. Risks vary from one corporation to the next,
depending on such factors as size, industry, diversity of business lines, sources of capital, etc.
Practices that are appropriate for one corporation are inappropriate for another. For this
reason, corporate risk management is a more elusive notion than is financial risk
management. It encompasses a variety of techniques drawn from both FRM and ALM.
Corporations pick and choose from these, adapting techniques to suit their own needs. This
article is an overview.
Corporate Risk Management
In a corporate setting, the familiar division of risks into market, credit and operational risks
breaks down.
Of these, credit risk poses the least challenges. To the extent that corporations take credit
risk (some take a lot; others take little), new and traditional techniques of credit risk
management are easily adapted.
Operational risk largely doesn't apply to corporations. It includes such factors as model risk
or back office errors. Some aspects do affect corporations, such as fraud or natural
disasters, but corporations have been addressing these with internal
audit, facilities management and legal departments for decades. Also,
corporations face risks that are akin to the operational risk of financial institutions but are
unique to their own business lines. An airline is exposed to risks due to weather, equipment
failure and terrorism. A power generator faces the risk that a generating plant may go down
for unscheduled maintenance. In corporate risk management, these risks (those that
overlap with the operational risks of financial firms and those that are akin
to such operational risks but are unique to non-financial firms) are called
operations risks.
The real challenge of corporate risk management is those risks that are akin to market risk
but aren't market risk. An oil company holds oil reserves. Their "value" fluctuates with the
market price of oil, but what does this mean? The oil reserves don't have a market value. A
chain of restaurants is thriving. Its restaurants are "valuable," but it is impossible to assign
them market values. Something that doesn't have a market value doesn't pose market risk.
This is almost a tautology. Such risks are business risks as opposed to market risks.
In the realm of corporate risk management, we abandon the division of risks into market,
credit and operational risks and replace it with a new categorization:
Corporate risk:
Market risk
Business risk
Credit risk
Operations risk
Corporations do face some market risks, such as commodity price risk or foreign exchange
risk. These are usually dwarfed by business risks. In a nutshell, the challenge of corporate
risk management is the management of business risk.
Addressing Business Risk
Techniques for addressing business risk take two forms:

Those that treat business risks as market risks, so that techniques of FRM can be
directly applied, and

Those that address business risks from a book value standpoint, modifying or
adapting techniques of FRM and ALM as appropriate.

Both approaches are discussed below.


Economic Value
Techniques of the first form focus on a concept called economic value. This is a vague notion
that generalizes the concept of market value. If a market value exists for an asset, then that
market value is the asset's economic value. If a market value doesn't exist, then economic
value is the "intrinsic value" of the assetwhat the market value of the asset
would be, if it had a market value.
Economic values can be assigned in two ways.

One is to start with accounting metrics of value and make suitable adjustments, so
they are more reflective of some intrinsic value. This is the approach employed with
economic value added (EVA) analyses.

The other approach is to construct some model to predict what value the asset might
command, if a liquid market existed for it. In this respect, a derogatory name for
economic value is mark-to-model value.

Once some means has been established for assigning economic values, these are treated like
market values. Standard techniques of financial risk management, such as value-at-risk (VaR) or economic capital allocation, are then applied.
This economic approach to managing business risk is applicable if most of a firm's balance
sheet can be marked to market. Economic values then only need to be assigned to a few
items in order for techniques of FRM to be applied firm wide. An example would be a
commodity wholesaler. Most of its balance sheet comprises physical and forward positions in
commodities, which can be mostly marked to market.
More controversial has been the use of economic valuations in power and natural gas
markets. The actual energies trade and, for the most part, can be marked to market. However,
producers also hold significant investments in plants and equipment, and these cannot
be marked to market. Suppose some energy trades spot and forward out
three years. An asset that produces the energy has an expected life of 50
years, which means that an economic value for the asset must reflect a hypothetical 50-year
forward curve. The forward curve doesn't exist, so a model must construct one.
Consequently, assigned economic values are highly dependent on assumptions. Often, they
are arbitrary.

In this context, it isn't enough to assign economic values. VaR analyses require standard
deviations and correlations as well. Assigning these to 50-year forward prices that are
themselves hypothetical is essentially meaningless; yet, those standard deviations
and correlations determine the reported VaR.
These dubious techniques were widely (but not universally) adopted by US energy merchants
in the late 1990s and early 2000s. The most publicized of these was Enron Corp., which went
beyond using economic values for internal reporting and incorporated them into its financial
reporting to investors. The 2001 bankruptcy of Enron and subsequent revelations of fraud
tainted mark-to-model techniques.
Book Value
The second approach to addressing business risk starts by defining risks that are meaningful
in the context of book value accounting.
Most typical of these are:

Earnings risk, which is risk due to uncertainty in future reported earnings, and

Cash flow risk, which is risk due to uncertainty in future reported cash flows.

Of the two, earnings risk is more akin to market risk. Yet, it avoids the arbitrary assumptions
of economic valuations. A firm's accounting earnings are a well defined notion. A problem
with looking at earnings risk is that earnings are, well, non-economic. Earnings may be
suggestive of economic value, but they can be misleading and are often easy to manipulate.
A firm can report high earnings while its long term franchise is eroded away by lack of
investment or competing technologies. Financial transactions can boost short-term earnings
at the expense of long-term earnings. After all, traditional techniques of ALM focus on
earnings, and their shortcomings remain today.
Cash flow risk is less akin to market risk. It relates more to liquidity than the value of a firm,
but this is only partly true. As anyone who has ever worked with distressed firms can attest,
"cash is king." When a firm gets into difficulty, earnings and market values don't pay the
bills. Cash flow is the life blood of a firm. However, as with earnings risk, cash flow risk
offers only an imperfect picture of a firm's business risk. Cash flows can also be manipulated,
and steady cash flows may hide corporate decline.
Techniques for managing earnings risk and cash flow risk draw heavily on techniques of
ALM, especially scenario analysis and simulation analysis. They also
adapt techniques of FRM. In this context, value-at-risk (VaR) becomes
earnings-at-risk (EaR) or cash-flow-at-risk (CFaR). For example, EaR might be
reported as the 10% quantile of this quarter's earnings (which is the same as the 90% quantile
of reported loss, multiplied by minus one).
The actual calculations of EaR or CFaR differ from those for VaR. These are long-term risk
metrics, with horizons of three months or a year. VaR is routinely calculated over a one-day
horizon. Also, EaR and CFaR are driven by rules of accounting while VaR is driven by
financial engineering principles. Typically, EaR or CFaR are calculated by first performing a
simulation analysis. That generates a probability distribution for the period's earnings or cash
flow, which is then used to compute the desired metric of EaR or CFaR.
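A minimal simulation sketch of the EaR calculation just described, with entirely hypothetical revenue and cost assumptions: the quarter's earnings are simulated many times and EaR is read off as the 10% quantile of the resulting distribution.

```python
import random

def simulate_quarterly_earnings(n_trials=10_000, seed=42):
    """Simulate one quarter's earnings under hypothetical assumptions:
    normally distributed revenue, fixed costs plus a variable fraction of revenue."""
    random.seed(seed)
    earnings = []
    for _ in range(n_trials):
        revenue = random.gauss(100.0, 15.0)   # hypothetical mean and standard deviation
        costs = 60.0 + 0.25 * revenue         # hypothetical cost structure
        earnings.append(revenue - costs)
    return sorted(earnings)

earnings = simulate_quarterly_earnings()
ear_10pct = earnings[int(0.10 * len(earnings))]  # 10% quantile of simulated earnings
print(round(ear_10pct, 2))
```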

One decision that needs to be made with EaR or CFaR is whether to use a constant or
contracting horizon. If management wants an EaR analysis for quarterly earnings, should the
analysis actually assess risk to the current quarter's earnings? If that is the case, the horizon
will start at three months on the first day of the quarter and gradually shrink to zero by the
end of the quarter. The alternative is to use a constant three-month horizon. After the first day
of the quarter, results will no longer apply to that quarter's actual earnings, but to some
hypothetical earnings over a shifting three-month horizon. Both approaches are used. The
advantage of a contracting horizon is that it addresses an actual concern of management:
will we hit our earnings target this quarter? A disadvantage is that the risk metric keeps
changing: if reported EaR declines over a week, does this mean that actual risk has declined,
or does it simply reflect a shortened horizon?
Conclusion
While the two approaches to business risk management (that based on economic value and
that based on book value) are philosophically different, they can complement each other.
Some firms use them side-by-side to assess different aspects of business risk.
This article has focused on the unique challenges of corporate risk management. There is
much else about corporate risk management that overlaps with financial risk management:
the need for a risk management function, the role of corporate culture, technology issues,
independence, etc. See the article Financial Risk Management for a discussion of these and
other topics.
Corporate Risk Management
Listed below are brief tips that may be helpful as an overview and guidelines for risk
management activities.
10. Identify and assess risks
Risk is everywhere. Success in business often comes down to recognizing and managing
possible risks associated with potential opportunities and returns. The types of risks faced in
most businesses are quite varied and far ranging. Risks typically include both financial and
physical categories. Types of risk include sometimes apparent hazards, such as safety and
health risks associated with operations, as well as financial risks from exposures to market
price volatility, counter party credit defaults, and legal liabilities. Some risks are intuitively
obvious; unfortunately, many are not. Risk categories include: Market, Credit, Legal,
Regulatory, Political, Operational, Strategic, Reputational, Event, Country and Model Risks.
So first identify possible risks throughout your business.
9. Know the numbers
Systematic processes such as a RiskRegister to identify and rank risks by order of
magnitude can be a key first step, but effective risk management strategies typically depend
on quantification of risks, often through probabilistic modeling techniques. Said another way:
one must 'measure it to manage it.' Measurement and valuation can be one of the most
difficult efforts in risk management and finance, but these are crucial for cost effective risk

management and informed decision-making. Spend the time and money to get the tools and
expertise to best quantify the company's key risks. A close corollary is to know what is in
any 'black-box' models used for valuation & reporting.
8. Risks are interrelated
Interactions and correlations of risks are a key element of which to be aware in identifying,
quantifying and mitigating risks. For example, exposure to credit risks may also affect
market price risks, whereas operational risks such as fraud may create legal and reputational
risks. Recognition that risks interact between business activities is one of the bases for the
'enterprise-wide risk management' approach now widely practiced by leading companies.
7. Continually reassess risks
Things change, and so do risks. Market conditions and volatility levels change, financial
strength of counter parties change, physical environments change, geopolitical situations
change, and on-and-on. And these changes can be rather sudden, or they can be creeping and
hidden. Exposures to risks that result from business activities may also change. Effective risk
management requires that one reevaluate risks on an ongoing basis, and processes such as a
RiskAudit should be built into the corporate risk management framework to assess both
current and projected risk exposures. Forecasting future exposures is necessary since hedge
decisions are based on projected risk levels.
6. Commit adequate resources
Effective risk management also requires considerable expertise and resources, from basic risk
control, compliance and governance activities, through advanced quantitative risk analysis.
These resources are usually not cheap, but as has been proven repeatedly by
high-profile business failures, the cost of losses due to risk management weaknesses or lapses
can be catastrophically high. Investment in risk management capabilities for most businesses
has a high payoff. Due to the potentially extreme cost of mistakes, risk managers should be
especially well trained.
5. Review the cost of risk mitigation
Transferring risks through hedge transactions or other activities is often an effective and
advisable risk management technique, but risk mitigation strategy may largely depend on the
hedge costs. Risk mitigation strategies also depend on the capacity of the firm to sustain risks
and possible losses. Trading activities that are truly for hedging should not be avoided due to
concern that trading could be misconstrued as 'speculative'; however, various hedge
instruments may not have the same cost effectiveness or appropriateness for every company
and environment.
4. Reduce exposure
Risks arise from exposure. A commonly accepted definition of risk is 'exposure to
uncertainty' (at least for that uncertainty for which one is concerned about the outcome).
Reduce the exposure and you likely reduce the risk. The selected approach and structure of

business activities can have a significant effect on the exposure & risk levels generated.
Commercial agreements and transaction structures may result in transference or acceptance
of risks with a counter party. Risk awareness in business processes and commercial activities
can lead to opportunities to reduce current and future exposures. Billing currency for
international purchases is an example of exposure effect.
3. Assess the Risk/Return Ratio
Risk management does not equate to risk aversion; however, decisions driven by risk/reward
assessments usually have a higher probability of successful outcomes. A consideration in
such risk-based business decision-making should also be the capacity of the firm to sustain
risks. As in the well developed finance field of portfolio theory (which in general terms
focuses on how investors can best balance risks and rewards in constructing investment
portfolios), business decisions based on risk/reward balance should optimize returns.
2. Monitor for quantum shifts in risk levels
A key value of quantitative risk measures is to highlight significant changes in risk levels.
Although opinions may differ on the optimal methodology for some valuation metrics,
significant changes or trends in risk metrics, such as Value-at-Risk measures, can provide a
key signal to management. Best practices designs of management reporting 'dashboards'
provide this risk monitoring capability, also showing segment reporting and consolidation to
reflect correlations such as offsets in price risks between markets.
1. Create a risk aware culture
Educate the organization in practical aspects of risk management, and that especially
includes the most senior business executives and the corporate board of directors. Risk
management responsibilities should be clear. Whether it is intuitive actions based on
experience and expertise in risk management or whether it is a result of institutionalized risk
policies and procedures, effective risk management is typically a key factor in successful
businesses. Training and building awareness can lead to a risk management culture that will
drive business success.

Cash management
In United States banking, cash management, or treasury management, is a marketing term
for certain services offered primarily to larger business customers. It may be used to describe
all bank accounts (such as checking accounts) provided to businesses of a certain size, but it
is more often used to describe specific services such as cash concentration, zero balance
accounting, and automated clearing house facilities. Sometimes, private banking customers
are given cash management services.

Cash management services generally offered


The following is a list of services generally offered by banks and utilised by larger businesses
and corporations:

Account Reconcilement Services: Balancing a checkbook can be
a difficult process for a very large business, since it issues so many
checks it can take a lot of human monitoring to understand which
checks have not cleared and therefore what the company's true
balance is. To address this, banks have developed a system which
allows companies to upload a list of all the checks that they issue
on a daily basis, so that at the end of the month the bank
statement will show not only which checks have cleared, but also
which have not. More recently, banks have used this system to
prevent checks from being fraudulently cashed if they are not on
the list, a process known as positive pay.

Advanced Web Services: Most banks have an Internet-based
system which is more advanced than the one available to
consumers. This enables managers to create and authorize special
internal logon credentials, allowing employees to send wires and
access other cash management features normally not found on the
consumer web site.

Armored Car Services (Cash Collection Services): Large
retailers who collect a great deal of cash may have the bank pick
this cash up via an armored car company, instead of asking its
employees to deposit the cash.

Automated Clearing House services are usually offered by the
cash management division of a bank. The Automated Clearing
House is an electronic system used to transfer funds between
banks. Companies use this to pay others, especially employees (this
is how direct deposit works). Certain companies also use it to
collect funds from customers (this is generally how automatic
payment plans work). This system is criticized by some consumer
advocacy groups, because under this system banks assume that
the company initiating the debit is correct until proven otherwise.

Balance Reporting Services: Corporate clients who actively
manage their cash balances usually subscribe to secure web-based
reporting of their account and transaction information at their lead

bank. These sophisticated compilations of banking activity may
include balances in foreign currencies, as well as those at other
banks. They include information on cash positions as well as 'float'
(e.g., checks in the process of collection). Finally, they offer
transaction-specific details on all forms of payment activity,
including deposits, checks, wire transfers in and out, ACH
(automated clearinghouse debits and credits), investments, etc.

Cash Concentration Services: Large or national chain retailers
often are in areas where their primary bank does not have
branches. Therefore, they open bank accounts at various local
banks in the area. To prevent funds in these accounts from being
idle and not earning sufficient interest, many of these companies
have an agreement set with their primary bank, whereby their
primary bank uses the Automated Clearing House to electronically
"pull" the money from these banks into a single interest-bearing
bank account.

Lockbox - Retail services: Often companies (such as utilities)
which receive a large number of payments via checks in the mail
have the bank set up a post office box for them, open their mail,
and deposit any checks found. This is referred to as a "lockbox"
service.

Lockbox - Wholesale services are for companies with small
numbers of payments, sometimes with detailed requirements for
processing. This might be a company like a dentist's office or small
manufacturing company.

Positive Pay: Positive pay is a service whereby the company
electronically shares its check register of all written checks with the
bank. The bank therefore will only pay checks listed in that register,
with exactly the same specifications as listed in the register
(amount, payee, serial number, etc.). This system dramatically
reduces check fraud (a minimal matching sketch appears after this
list of services).

Reverse Positive Pay: Reverse positive pay is similar to positive
pay, but the process is reversed, with the company, not the bank,
maintaining the list of checks issued. When checks are presented
for payment and clear through the Federal Reserve System, the
Federal Reserve prepares a file of the checks' account numbers,
serial numbers, and dollar amounts and sends the file to the bank.
In reverse positive pay, the bank sends that file to the company,
where the company compares the information to its internal
records. The company lets the bank know which checks match its
internal information, and the bank pays those items. The bank then
researches the checks that do not match, corrects any misreads or
encoding errors, and determines if any items are fraudulent. The
bank pays only "true" exceptions, that is, those that can be
reconciled with the company's files.

Sweep accounts are typically offered by the cash management
division of a bank. Under this system, excess funds from a
company's bank accounts are automatically moved into a money

market mutual fund overnight, and then moved back the next
morning. This allows them to earn interest overnight. This is the
primary use of money market mutual funds.

Zero Balance Accounting: can be thought of as somewhat of a hack. Companies with large numbers of stores or locations can very
often be confused if all those stores are depositing into a single
bank account. Traditionally, it would be impossible to know which
deposits were from which stores without seeking to view images of
those deposits. To help correct this problem, banks developed a
system where each store is given their own bank account, but all
the money deposited into the individual store accounts are
automatically moved or swept into the company's main bank
account. This allows the company to look at individual statements
for each store. U.S. banks are almost all converting their systems so
that companies can tell which store made a particular deposit,
even if these deposits are all deposited into a single account.
Therefore, zero balance accounting is being used less frequently.

Wire Transfer: A wire transfer is an electronic transfer of funds. Wire transfers can be done by a simple bank account transfer, or by
a transfer of cash at a cash office. Bank wire transfers are often the
most expedient method for transferring funds between bank
accounts. A bank wire transfer is a message to the receiving bank
requesting them to effect payment in accordance with the
instructions given. The message also includes settlement
instructions. The actual wire transfer itself is virtually
instantaneous, requiring no longer for transmission than a
telephone call.

Controlled Disbursement: This is another product offered by banks under Cash Management Services. The bank provides a daily
report, typically early in the day, that provides the amount of
disbursements that will be charged to the customer's account. This
early knowledge of daily funds requirement allows the customer to
invest any surplus in intraday investment opportunities, typically
money market investments. This is different from delayed
disbursements, where payments are issued through a remote
branch of a bank and customer is able to delay the payment due to
increased float time.

In the past, other services were offered whose usefulness has diminished with the
rise of the Internet. For example, companies could have daily faxes of their most recent
transactions or be sent CD-ROMs of images of their cashed checks.
Cash management services can be costly but usually the cost to a company is outweighed by
the benefits: cost savings, accuracy, efficiencies, etc.

INVENTORY
Inventory means goods and materials, or those goods and materials themselves, held
available in stock by a business. This word is also used for a list of the contents of a
household and for a list for testamentary purposes of the possessions of someone who has
died. In accounting, inventory is considered an asset.
In business management, inventory consists of a list of goods and materials held available in
stock.

Inventory Management
Inventory refers to the stock of resources, that possess economic value, held by an
organization at any point of time. These resource stocks can be manpower, machines, capital
goods or materials at various stages.
Inventory management is primarily about specifying the size and placement of stocked
goods. Inventory management is required at different locations within a facility or within
multiple locations of a supply network to protect the regular and planned course of
production against the random disturbance of running out of materials or goods. The scope of
inventory management also concerns the fine lines between replenishment lead time,
carrying costs of inventory, asset management, inventory forecasting, inventory valuation,
inventory visibility, future inventory price forecasting, physical inventory, available physical
space for inventory, quality management, replenishment, returns and defective goods and
demand forecasting. Balancing these competing requirements leads to optimal inventory
levels, which is an on-going process as the business needs shift and react to the wider
environment.
Inventory management involves a retailer seeking to acquire and maintain a proper
merchandise assortment while ordering, shipping, handling, and related costs are kept in
check.
Systems and processes that identify inventory requirements, set targets, provide
replenishment techniques and report actual and projected inventory status.
Handles all functions related to the tracking and management of material. This would include
the monitoring of material moved into and out of stockroom locations and the reconciling of
the inventory balances. Also may include ABC analysis, lot tracking, cycle counting support
etc.
Management of the inventories, with the primary objective of determining/controlling stock
levels within the physical distribution function to balance the need for product availability
against the need for minimizing stock holding and handling costs. See inventory
proportionality.

Business inventory
The reasons for keeping stock
There are three basic reasons for keeping an inventory:

1. Time - The time lags present in the supply chain, from supplier to
user at every stage, requires that you maintain certain amounts of
inventory to use in this "lead time."
2. Uncertainty - Inventories are maintained as buffers to meet
uncertainties in demand, supply and movements of goods.
3. Economies of scale - Ideal condition of "one unit at a time at a place
where a user needs it, when he needs it" principle tends to incur
lots of costs in terms of logistics. So bulk buying, movement and
storing brings in economies of scale, thus inventory.

All these stock reasons can apply to any owner or product stage.

Buffer stock is held in individual workstations against the possibility that the upstream workstation may be a little delayed in
long setup or change over time. This stock is then used while that
changeover is happening. This stock can be eliminated by tools like
SMED.

These classifications apply along the whole Supply chain, not just within a facility or plant.
Where these stocks contain the same or similar items, it is often the work practice to hold all
these stocks mixed together before or after the sub-process to which they relate. This
'reduces' costs. Because they are mixed up together there is no visual reminder to operators of
the adjacent sub-processes or line management of the stock, which is due to a particular
cause and should be a particular individual's responsibility with inevitable consequences.
Some plants have centralized stock holding across sub-processes, which makes the situation
even more acute.

Special terms used in dealing with inventory

Stock Keeping Unit (SKU) is a unique combination of all the components that are assembled into the purchasable item.
Therefore, any change in the packaging or product is a new SKU.
This level of detailed specification assists in managing inventory.

Stockout means running out of the inventory of an SKU.[1]

"New old stock" (sometimes abbreviated NOS) is a term used in


business to refer to merchandise being offered for sale that was
manufactured long ago but that has never been used. Such
merchandise may not be produced anymore, and the new old stock
may represent the only market source of a particular item at the
present time.

Typology
1. Buffer/safety stock
2. Cycle stock (Used in batch processes, it is the available inventory,
excluding buffer stock)
3. De-coupling (Buffer stock that is held by both the supplier and the
user)
4. Anticipation stock (Building up extra stock for periods of increased
demand - e.g. ice cream for summer)

5. Pipeline stock (Goods still in transit or in the process of distribution have left the factory but not arrived at the customer yet)

Inventory examples
While accountants often discuss inventory in terms of goods for sale, organizations - manufacturers, service-providers and not-for-profits - also have inventories (fixtures,
furniture, supplies, ...) that they do not intend to sell. Manufacturers', distributors', and
wholesalers' inventory tends to cluster in warehouses. Retailers' inventory may exist in a
warehouse or in a shop or store accessible to customers. Inventories not intended for sale to
customers or to clients may be held in any premises an organization uses. Stock ties up cash
and, if uncontrolled, it will be impossible to know the actual level of stocks and therefore
impossible to control them.
While the reasons for holding stock were covered earlier, most manufacturing organizations
usually divide their "goods for sale" inventory into:

Raw materials - materials and components scheduled for use in making a product.

Work in process, WIP - materials and components that have begun their transformation to finished goods.

Finished goods - goods ready for sale to customers.

Goods for resale - returned goods that are salable.

For example:
Manufacturing

A canned food manufacturer's materials inventory includes the ingredients to form the foods
to be canned, empty cans and their lids (or coils of steel or aluminum for constructing those
components), labels, and anything else (solder, glue, ...) that will form part of a finished can.
The firm's work in process includes those materials from the time of release to the work floor
until they become complete and ready for sale to wholesale or retail customers. This may be
vats of prepared food, filled cans not yet labeled or sub-assemblies of food components. It
may also include finished cans that are not yet packaged into cartons or pallets. Its finished
good inventory consists of all the filled and labeled cans of food in its warehouse that it has
manufactured and wishes to sell to food distributors (wholesalers), to grocery stores
(retailers), and even perhaps to consumers through arrangements like factory stores and
outlet centers.
Examples of case studies are very revealing, and consistently show that the improvement of
inventory management has two parts: the capability of the organisation to manage inventory,
and the way in which it chooses to do so. For example, a company may wish to install a
complex inventory system, but unless there is a good understanding of the role of inventory
and its parameters, and an effective business process to support that, the system cannot bring
the necessary benefits to the organisation in isolation.
Typical Inventory Management techniques include Pareto Curve ABC Classification[2] and
Economic Order Quantity Management. A more sophisticated method takes these two
techniques further, combining certain aspects of each to create The K Curve Methodology[3].
A case study of k-curve[4] benefits to one company shows a successful implementation.
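The Economic Order Quantity idea mentioned above can be illustrated with the classic square-root formula, EOQ = sqrt(2DS/H), where D is annual demand, S is the cost of placing one order and H is the annual holding cost per unit. A minimal sketch, with invented figures:

```python
from math import sqrt

def economic_order_quantity(annual_demand, order_cost, holding_cost_per_unit):
    """Classic EOQ: the order size that minimises combined ordering and carrying cost."""
    return sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

# Illustrative figures only
eoq = economic_order_quantity(annual_demand=12000, order_cost=50, holding_cost_per_unit=2.4)
print(round(eoq))           # about 707 units per order
print(round(12000 / eoq))   # about 17 orders per year
```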

Unnecessary inventory adds enormously to the working capital tied up in the business, as
well as the complexity of the supply chain. Reduction and elimination of these inventory
'wait' states is a key concept in Lean[5]. Too big an inventory reduction too quickly can cause
a business to be anorexic. There are well-proven processes and techniques to assist in
inventory planning and strategy, both at the business overview and part number level. Many
of the big MRP and ERP systems do not offer the necessary inventory planning tools within
their integrated planning applications.

Principle of inventory proportionality


Purpose
Inventory proportionality is the goal of demand-driven inventory management. The primary
optimal outcome is to have the same number of days' (or hours', etc.) worth of inventory on
hand across all products so that the time of runout of all products would be simultaneous. In
such a case, there is no "excess inventory," that is, inventory that would be left over of
another product when the first product runs out. Excess inventory is sub-optimal because the
money spent to obtain it could have been utilized better elsewhere, i.e. to the product that just
ran out.
The secondary goal of inventory proportionality is inventory minimization. By integrating
accurate demand forecasting with inventory management, replenishment inventories can be
scheduled to arrive just in time to replenish the product destined to run out first, while at the
same time balancing out the inventory supply of all products to make their inventories more
proportional, and thereby closer to achieving the primary goal. Accurate demand forecasting
also allows the desired inventory proportions to be dynamic by determining expected sales
out into the future; this allows for inventory to be in proportion to expected short-term sales
or consumption rather than to past averages, a much more accurate and optimal outcome.
Integrating demand forecasting into inventory management in this way also allows for the
prediction of the "can fit" point when inventory storage is limited on a per-product basis.
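A minimal sketch of the underlying days-of-supply comparison, using invented figures, shows how the product expected to run out first is identified:

```python
# Sketch of the days-of-supply comparison behind inventory proportionality
# (invented figures): the product with the fewest days on hand runs out first
# and is therefore the replenishment priority.

on_hand        = {"regular": 9000, "midgrade": 2000, "premium": 3000}   # units in stock
daily_forecast = {"regular": 1500, "midgrade":  400, "premium":  300}   # forecast units/day

days_of_supply = {grade: on_hand[grade] / daily_forecast[grade] for grade in on_hand}
print(days_of_supply)   # {'regular': 6.0, 'midgrade': 5.0, 'premium': 10.0}

runs_out_first = min(days_of_supply, key=days_of_supply.get)
print(runs_out_first)   # 'midgrade' - schedule its replenishment first
```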

Applications
The technique of inventory proportionality is most appropriate for inventories that remain
unseen by the consumer. As opposed to "keep full" systems where a retail consumer would
like to see full shelves of the product they are buying so as not to think they are buying
something old, unwanted or stale; and differentiated from the "trigger point" systems where
product is reordered when it hits a certain level; inventory proportionality is used effectively
by just-in-time manufacturing processes and retail applications where the product is hidden
from view.
One early example of inventory proportionality used in a retail application in the United
States is for motor fuel. Motor fuel (e.g. gasoline) is generally stored in underground storage
tanks. The motorists do not know whether they are buying gasoline off the top or bottom of
the tank, nor need they care. Additionally, these storage tanks have a maximum capacity and
cannot be overfilled. Finally, the product is expensive. Inventory proportionality is used to
balance the inventories of the different grades of motor fuel, each stored in dedicated tanks,
in proportion to the sales of each grade. Excess inventory is not seen or valued by the
consumer, so it is simply cash sunk (literally) into the ground. Inventory proportionality
minimizes the amount of excess inventory carried in underground storage tanks. This

application for motor fuel was first developed and implemented by Petrolsoft Corporation in
1990 for Chevron Products Company. Most major oil companies use such systems today.[6]

Roots
The use of inventory proportionality in the United States is thought to have been inspired by
Japanese just-in-time parts inventory management made famous by Toyota Motors in the
1980s.[3]

High-level inventory management


It seems that around 1880[7] there was a change in manufacturing practice from companies
with relatively homogeneous lines of products to vertically integrated companies with
unprecedented diversity in processes and products. Those companies (especially in
metalworking) attempted to achieve success through economies of scope - the gains of
jointly producing two or more products in one facility. The managers now needed
information on the effect of product-mix decisions on overall profits and therefore needed
accurate product-cost information. A variety of attempts to achieve this were unsuccessful
due to the huge overhead of the information processing of the time. However, the burgeoning
need for financial reporting after 1900 created unavoidable pressure for financial accounting
of stock and the management need to cost manage products became overshadowed. In
particular, it was the need for audited accounts that sealed the fate of managerial cost
accounting. The dominance of financial reporting accounting over management accounting
remains to this day with few exceptions, and the financial reporting definitions of 'cost' have
distorted effective management 'cost' accounting since that time. This is particularly true of
inventory.
Hence, high-level financial inventory has these two basic formulas, which relate to the
accounting period:
1. Cost of Beginning Inventory at the start of the period + inventory
purchases within the period + cost of production within the period
= cost of goods available
2. Cost of goods available - cost of ending inventory at the end of the period = cost of goods sold

The benefit of these formulae is that the first absorbs all overheads of production and raw
material costs into a value of inventory for reporting. The second formula then creates the
new start point for the next period and gives a figure to be subtracted from the sales price to
determine some form of sales-margin figure.
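As a worked illustration of these two formulas (all figures invented for the example):

```python
# Worked illustration of the two period formulas (invented figures).
beginning_inventory = 40000
purchases           = 25000
production_cost     = 60000
ending_inventory    = 35000

cost_of_goods_available = beginning_inventory + purchases + production_cost
cost_of_goods_sold      = cost_of_goods_available - ending_inventory

print(cost_of_goods_available)   # 125000
print(cost_of_goods_sold)        # 90000
```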
Manufacturing management is more interested in inventory turnover ratio or average days to
sell inventory since it tells them something about relative inventory levels.
Inventory turnover ratio (also known as inventory turns) = cost of goods
sold / Average Inventory = Cost of Goods Sold / ((Beginning Inventory +
Ending Inventory) / 2)

and its inverse
Average Days to Sell Inventory = Number of Days a Year / Inventory Turnover Ratio = 365 days a year / Inventory Turnover Ratio

This ratio estimates how many times the inventory turns over a year. This number tells how
much cash/goods are tied up waiting for the process and is a critical measure of process
reliability and effectiveness. So a factory with two inventory turns has six months stock on
hand, which is generally not a good figure (depending upon the industry), whereas a factory
that moves from six turns to twelve turns has probably improved effectiveness by 100%. This
improvement will have some negative results in the financial reporting, since the 'value' now
stored in the factory as inventory is reduced.
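Using the same invented figures as the worked illustration above, the turnover and days-to-sell calculations look like this:

```python
# Inventory turns and average days to sell, using the same invented figures.
cost_of_goods_sold  = 90000
beginning_inventory = 40000
ending_inventory    = 35000

average_inventory = (beginning_inventory + ending_inventory) / 2   # 37500
inventory_turns   = cost_of_goods_sold / average_inventory         # 2.4 turns a year
days_to_sell      = 365 / inventory_turns                          # about 152 days

print(inventory_turns, round(days_to_sell))
```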
While these accounting measures of inventory are very useful because of their simplicity,
they are also fraught with the danger of their own assumptions. There are, in fact, so many
things that can vary hidden under this appearance of simplicity that a variety of 'adjusting'
assumptions may be used. These include:

Specific Identification

Weighted Average Cost

Moving-Average Cost

FIFO and LIFO.

Inventory Turn is a financial accounting tool for evaluating inventory and it is not necessarily
a management tool. Inventory management should be forward looking. The methodology
applied is based on historical cost of goods sold. The ratio may not be able to reflect the
usability of future production demand, as well as customer demand.
Business models, including Just in Time (JIT) Inventory, Vendor Managed Inventory (VMI)
and Customer Managed Inventory (CMI), attempt to minimize on-hand inventory and
increase inventory turns. VMI and CMI have gained considerable attention due to the success
of third-party vendors who offer added expertise and knowledge that organizations may not
possess.

Accounting for inventory


Each country has its own rules about accounting for inventory that fit with their financial-reporting rules.
For example, organizations in the U.S. define inventory to suit their needs within US
Generally Accepted Accounting Practices (GAAP), the rules defined by the Financial
Accounting Standards Board (FASB) (and others) and enforced by the U.S. Securities and
Exchange Commission (SEC) and other federal and state agencies. Other countries often
have similar arrangements but with their own GAAP and national agencies instead.
It is intentional that financial accounting uses standards that allow the public to compare firms' performance, whereas cost accounting functions internally to an organization and potentially with much greater flexibility. A discussion of inventory from standard and Theory of
Constraints-based (throughput) cost accounting perspective follows some examples and a
discussion of inventory from a financial accounting perspective.
The internal costing/valuation of inventory can be complex. Whereas in the past most
enterprises ran simple, one-process factories, such enterprises are quite probably in the
minority in the 21st century. Where 'one process' factories exist, there is a market for the
goods created, which establishes an independent market value for the good. Today, with
multistage-process companies, there is much inventory that would once have been finished
goods which is now held as 'work in process' (WIP). This needs to be valued in the accounts,

but the valuation is a management decision since there is no market for the partially finished
product. This somewhat arbitrary 'valuation' of WIP combined with the allocation of
overheads to it has led to some unintended and undesirable results.
Financial accounting

An organization's inventory can appear a mixed blessing, since it counts as an asset on the
balance sheet, but it also ties up money that could serve for other purposes and requires
additional expense for its protection. Inventory may also cause significant tax expenses,
depending on particular countries' laws regarding depreciation of inventory, as in Thor Power
Tool Company v. Commissioner.
Inventory appears as a current asset on an organization's balance sheet because the
organization can, in principle, turn it into cash by selling it. Some organizations hold larger
inventories than their operations require in order to inflate their apparent asset value and their
perceived profitability.
In addition to the money tied up by acquiring inventory, inventory also brings associated
costs for warehouse space, for utilities, and for insurance to cover staff to handle and protect
it from fire and other disasters, obsolescence, shrinkage (theft and errors), and others. Such
holding costs can mount up: between a third and a half of its acquisition value per year.
Businesses that stock too little inventory cannot take advantage of large orders from
customers if they cannot deliver. The conflicting objectives of cost control and customer
service often pit an organization's financial and operating managers against its sales and
marketing departments. Salespeople, in particular, often receive sales-commission payments,
so unavailable goods may reduce their potential personal income. This conflict can be
minimised by reducing production time to being near or less than customers' expected
delivery time. This effort, known as "Lean production" will significantly reduce working
capital tied up in inventory and reduce manufacturing costs (See the Toyota Production
System).

Role of inventory accounting


By helping the organization to make better decisions, the accountants can help the public
sector to change in a very positive way that delivers increased value for the taxpayers
investment. It can also help to incentivise progress and to ensure that reforms are sustainable
and effective in the long term, by ensuring that success is appropriately recognized in both
the formal and informal reward systems of the organization.
To say that they have a key role to play is an understatement. Finance is connected to most, if
not all, of the key business processes within the organization. It should be steering the
stewardship and accountability systems that ensure that the organization is conducting its
business in an appropriate, ethical manner. It is critical that these foundations are firmly laid.
So often they are the litmus test by which public confidence in the institution is either won or
lost.
Finance should also be providing the information, analysis and advice to enable the organization's service managers to operate effectively. This goes beyond the traditional preoccupation with budgets - how much have we spent so far, how much do we have left to spend? It is about helping the organization to better understand its own performance. That means making the connections and understanding the relationships between given inputs - the resources brought to bear - and the outputs and outcomes that they achieve. It is also about understanding and actively managing risks within the organization and its activities.

FIFO vs. LIFO accounting


When a merchant buys goods from inventory, the value of the inventory account is reduced
by the cost of goods sold (COGS). This is simple where the CoG has not varied across those
held in stock; but where it has, then an agreed method must be derived to evaluate it. For
commodity items that one cannot track individually, accountants must choose a method that
fits the nature of the sale. Two popular methods that normally exist are: FIFO and LIFO
accounting (first in - first out, last in - first out). FIFO regards the first unit that arrived in
inventory as the first one sold. LIFO considers the last unit arriving in inventory as the first
one sold. Which method an accountant selects can have a significant effect on net income
and book value and, in turn, on taxation. Using LIFO accounting for inventory, a company
generally reports lower net income and lower book value, due to the effects of inflation. This
generally results in lower taxation. Due to LIFO's potential to skew inventory value, UK
GAAP and IAS have effectively banned LIFO inventory accounting.
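A small sketch, with an invented two-layer purchase history and rising prices, shows how the choice of method changes reported cost of goods sold:

```python
# Sketch: COGS for selling 120 units out of a two-layer purchase history,
# under FIFO and LIFO (invented figures, rising prices).
purchases  = [(100, 10.00), (100, 12.00)]   # (quantity, unit cost), oldest first
units_sold = 120

def cogs(layers, qty):
    """Consume the cost layers in the order given and return total cost of goods sold."""
    total = 0.0
    for layer_qty, unit_cost in layers:
        take = min(qty, layer_qty)
        total += take * unit_cost
        qty -= take
        if qty == 0:
            break
    return total

fifo_cogs = cogs(purchases, units_sold)                    # oldest layer consumed first
lifo_cogs = cogs(list(reversed(purchases)), units_sold)    # newest layer consumed first

print(fifo_cogs)   # 1240.0 -> lower COGS, higher reported income under inflation
print(lifo_cogs)   # 1400.0 -> higher COGS, lower reported income and lower taxation
```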

Standard cost accounting


Standard cost accounting uses ratios called efficiencies that compare the labour and materials
actually used to produce a good with those that the same goods would have required under
"standard" conditions. As long as similar actual and standard conditions obtain, few problems
arise. Unfortunately, standard cost accounting methods developed about 100 years ago, when
labor comprised the most important cost in manufactured goods. Standard methods continue
to emphasize labor efficiency even though that resource now constitutes a (very) small part
of cost in most cases.
Standard cost accounting can hurt managers, workers, and firms in several ways. For
example, a policy decision to increase inventory can harm a manufacturing manager's
performance evaluation. Increasing inventory requires increased production, which means
that processes must operate at higher rates. When (not if) something goes wrong, the process
takes longer and uses more than the standard labor time. The manager appears responsible
for the excess, even though s/he has no control over the production requirement or the
problem.
In adverse economic times, firms use the same efficiencies to downsize, rightsize, or
otherwise reduce their labor force. Workers laid off under those circumstances have even less
control over excess inventory and cost efficiencies than their managers.
Many financial and cost accountants have agreed for many years on the desirability of
replacing standard cost accounting. They have not, however, found a successor.

Theory of constraints cost accounting


Eliyahu M. Goldratt developed the Theory of Constraints in part to address the cost-accounting problems in what he calls the "cost world." He offers a substitute, called
throughput accounting, that uses throughput (money for goods sold to customers) in place of
output (goods produced that may sell or may boost inventory) and considers labor as a fixed
rather than as a variable cost. He defines inventory simply as everything the organization
owns that it plans to sell, including buildings, machinery, and many other things in addition
to the categories listed here. Throughput accounting recognizes only one class of variable

costs: the truly variable costs, like materials and components, which vary directly with the
quantity produced.
Finished goods inventories remain balance-sheet assets, but labor-efficiency ratios no longer
evaluate managers and workers. Instead of an incentive to reduce labor cost, throughput
accounting focuses attention on the relationships between throughput (revenue or income) on
one hand and controllable operating expenses and changes in inventory on the other. Those
relationships direct attention to the constraints or bottlenecks that prevent the system from
producing more throughput, rather than to people - who have little or no control over their
situations.

National accounts
Inventories also play an important role in national accounts and the analysis of the business
cycle. Some short-term macroeconomic fluctuations are attributed to the inventory cycle.

Distressed inventory
Also known as distressed or expired stock, distressed inventory is inventory whose potential
to be sold at a normal cost has passed or will soon pass. In certain industries it could also
mean that the stock is or will soon be impossible to sell. Examples of distressed inventory
include products that have reached their expiry date, or have reached a date in advance of
expiry at which the planned market will no longer purchase them (e.g. 3 months left to
expiry), clothing that is defective or out of fashion, and old newspapers or magazines. It also
includes computer or consumer-electronic equipment that is obsolete or discontinued and
whose manufacturer is unable to support it. One current example of distressed inventory is
the VHS format.[8]
In 2001, Cisco wrote off inventory worth US $2.25 billion due to duplicate orders [9]. This is
one of the biggest inventory write-offs in business history.

Inventory credit
Inventory credit refers to the use of stock, or inventory, as collateral to raise finance. Where
banks may be reluctant to accept traditional collateral, for example in developing countries
where land title may be lacking, inventory credit is a potentially important way of
overcoming financing constraints. This is not a new concept; archaeological evidence
suggests that it was practiced in Ancient Rome. Obtaining finance against stocks of a wide
range of products held in a bonded warehouse is common in much of the world. It is, for
example, used with Parmesan cheese in Italy.[10] Inventory credit on the basis of stored
agricultural produce is widely used in Latin American countries and in some Asian countries.
[11]
A precondition for such credit is that banks must be confident that the stored product will
be available if they need to call on the collateral; this implies the existence of a reliable
network of certified warehouses. Banks also face problems in valuing the inventory. The
possibility of sudden falls in commodity prices means that they are usually reluctant to lend
more than about 60% of the value of the inventory at the time of the loan.

BASIC ACCOUNTS RECEIVABLE MANAGEMENT


This article will outline some of the basic components for managing accounts receivable,
ranging from policies and measurement to outsourcing options.
Managing accounts receivables involves:
Policies
Tracking
Measurement
Outsourcing options

The foundation behind accounts receivable is your policies and procedures for sales.
For example, do you have a credit policy?
When and how do you evaluate a customer for credit?
If you look at past payment histories, you should be able to ascertain who should get credit
and who shouldn't.
Additionally, you need to establish sales terms.
For example, is it beneficial to offer discounts to speed-up cash collections?
What is the industry standard for sales terms?
There are several questions that have to be answered in building the foundation for managing
accounts receivables.
A system must be in place to track accounts receivables. This will include balance forwards,
listing of all open invoices, and generation of monthly statements to customers.
An aging of receivables will be used to collect overdue accounts. You must act quickly to
collect overdue accounts. Start by making phone calls followed by letters to upper-level
managers for the Customer. Try to negotiate settlement payments, such as installments or
asset donations. If your collection efforts fail, you may want to use a collection agency.
Also remember that the collection process is the art of knowing the customer. A
psychological understanding of the customer gives you insights into what buttons to push in
collecting the account. One of the biggest mistakes made in the collection process is a "sticks
only" approach. For some customers, using a carrot can work wonders in collecting the
overdue account. For example, in one case the company mailed a set of football tickets to a
customer with a friendly note and within weeks, they received full payment of the
outstanding account.
MEASUREMENT
Measurement is another component within accounts receivable management. Traditional ratios, such as turnover, measure how many times you were able to convert receivables into cash.

Example: Monthly sales were $ 50,000, the beginning monthly balance for receivables was $
70,000 and the ending monthly balance was $ 90,000. The turnover ratio is:
.625 ($ 50,000 / (($70,000 + $ 90,000)/2)). Annual turnover is .625 x 360 / 30 or 7.5 times. If
you divide 360 (bankers year) by 7.5, you get 48 days on average to collect your account
receivables. You can also measure your investment in receivables. This calculation is based
on the number of days it takes you to collect receivables and the amount of credit sales.
Example: Annual credit sales are $ 100,000. Your invoice terms are net 30 days. On average,
most accounts are 13 days past due. Your investment in accounts receivable is:
(30 + 13) / 365 x $ 100,000 or $ 11,781.
Example: Average monthly sales are $ 10,000. On average, accounts receivable are paid 60
days after the sales date. The product costs are 50% of sales and inventory-carrying costs are
10% of sales. Your investment in accounts receivable is:
2 months x $ 10,000 = $ 20,000 of sales; $ 20,000 x .60 = $ 12,000.
Measurements may need to be modified to account for wide fluctuations within the sales
cycle. The use of weights can help ensure comparable measurements.
Example: Weighted Average Days to Pay = Sum of ((Date Paid - Due Date) x Amount
Paid) / Total Payments
Example: Best Possible Days Outstanding = (Current A/R x # of Days in Period) / Credit
Sales for Period
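The turnover arithmetic from the first example above can be laid out step by step (same figures):

```python
# The receivables turnover arithmetic from the first example above, step by step.
monthly_sales         = 50000
beginning_receivables = 70000
ending_receivables    = 90000

average_receivables = (beginning_receivables + ending_receivables) / 2   # 80000
monthly_turnover    = monthly_sales / average_receivables                # 0.625
annual_turnover     = monthly_turnover * 12                              # 7.5 times a year
days_to_collect     = 360 / annual_turnover                              # 48 days (banker's year)

print(annual_turnover, days_to_collect)
```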
Receivable management also involves the use of specialists. After all, you need to spend most of your time trying to lower your losses and not trying to collect overdue accounts. A wide range of specialists can help:
- Credit Bureau services to review and approve new customers.
- Deduction and collection agencies
- Complete management of billings and collections

Cash conversion cycle


In management accounting, the Cash Conversion Cycle (CCC) measures how long a firm
will be deprived of cash if it increases its investment in resources in order to expand
customer sales. It is thus a measure of the liquidity risk entailed by growth. However,
shortening the CCC creates its own risks: while a firm could even achieve a negative CCC by
collecting from customers before paying suppliers, a policy of strict collections and lax
payments is not always sustainable.

Definition
CCC = number of days between disbursing cash and collecting cash in connection with undertaking a discrete unit of operations
    = Inventory conversion period + Receivables conversion period - Payables conversion period
where:
Inventory conversion period = Avg. Inventory / (COGS / 365)
Receivables conversion period = Avg. Accounts Receivable / (Credit Sales / 365)
Payables conversion period = Avg. Accounts Payable / (COGS / 365)

Derivation
Cashflows insufficient. The term "cash conversion cycle" refers to the timespan between a
firm's disbursing and collecting cash. However, the CCC cannot be directly observed in
cashflows, because these are also influenced by investment and financing activities; it must
be derived from Statement of Financial Position data associated with the firm's operations.
Equation describes retailer. Although the term "cash conversion cycle" technically applies
to a firm in any industry, the equation is generically formulated to apply specifically to a
retailer. Since a retailer's operations consist in buying and selling inventory, the equation
models the time between
(1) disbursing cash to satisfy the accounts payable created by purchase of a unit of inventory, and
(2) collecting cash to satisfy the accounts receivable generated by that
sale.

Equation describes a firm that buys & sells on account. Also, the equation is written to
accommodate a firm that buys and sells on account. For a cash-only firm, the equation would
only need data from sales operations (e.g. changes in inventory), because disbursing cash
would be directly measurable as purchase of inventory, and collecting cash would be directly
measurable as sale of inventory. However, no such 1:1 correspondence exists for a firm that
buys and sells on account: Increases and decreases in inventory do not occasion cashflows
but accounting vehicles (receivables and payables, respectively); increases and decreases in
cash will remove these accounting vehicles (receivables and payables, respectively) from the
books. Thus, the CCC must be calculated by tracing a change in cash through its effect upon
receivables, inventory, payables, and finally back to cash - thus, the term cash conversion cycle, and the observation that these four accounts "articulate" with one another.
The four transactions, and their effects on operations, accounting and cashflows, can be set out as follows:

A. Suppliers (agree to) deliver inventory. Operations: increasing inventory by $X. Accounting: create accounting vehicle (increasing accounts payable by $X). Result: the firm owes $X cash (debt) to its suppliers.

B. Customers (agree to) acquire that inventory. Operations: decreasing inventory by $X. Accounting: create accounting vehicles (booking "COGS" expense of $X; accruing revenue and increasing accounts receivable by $Y). Result: the firm is owed $Y cash (credit) from its customers.

C. Firm disburses $X cash to suppliers. Cashflows: decreasing cash by $X. Accounting: remove accounting vehicle (decreasing accounts payable by $X). Result: the firm removes its debts to its suppliers.

D. Firm collects $Y cash from customers. Cashflows: increasing cash by $Y. Accounting: remove accounting vehicle (decreasing accounts receivable by $Y). Result: the firm removes its credit from its customers.

(Different accounting vehicles are used if the transactions occur in a different order.)

Taking these four transactions in pairs, analysts draw attention to five important intervals,
referred to as conversion cycles (or conversion periods):

the Cash Conversion Cycle emerges as interval C-D (i.e. disbursing cash to collecting cash).

the payables conversion period (or "Days payables outstanding") emerges as interval A-C (i.e. owing cash to disbursing cash).

the operating cycle emerges as interval A-D (i.e. owing cash to collecting cash).

the inventory conversion period (or "Days inventory outstanding") emerges as interval A-B (i.e. owing cash to being owed cash).

the receivables conversion period (or "Days sales outstanding") emerges as interval B-D (i.e. being owed cash to collecting cash).

Knowledge of any three of these conversion cycles permits derivation of the fourth (leaving
aside the operating cycle, which is just the sum of the inventory conversion period and the
receivables conversion period.)
Hence,
CCC (in days) = interval {C-D} = interval {A-B} + interval {B-D} - interval {A-C}
             = Inventory conversion period + Receivables conversion period - Payables conversion period

In calculating each of these three constituent conversion cycles, we use the equation TIME = LEVEL / RATE (since each interval roughly equals the TIME needed for its LEVEL to be achieved at its corresponding RATE).

We estimate its LEVEL "during the period in question" as the average of its levels in the two balance-sheets that surround the period: (Lt1 + Lt2)/2.

To estimate its RATE, we note that Accounts Receivable grows only when revenue is accrued; and Inventory shrinks and Accounts Payable grows by an amount equal to the COGS expense (in the long run, since COGS actually accrues sometime after the inventory delivery, when the customers acquire it).

Payables conversion period: Rate = [inventory increase + COGS], since these are the items for the period that can increase "trade accounts payables," i.e. the ones that grew its inventory.
NOTICE that we make an exception when calculating this interval: although we use a period average for the LEVEL of inventory, we also consider any increase in inventory as contributing to its RATE of change. This is because the purpose of the CCC is to measure the effects of inventory growth on cash outlays. If inventory grew during the period, we want to know about it.
Inventory conversion period: Rate = COGS, since this is the item that (eventually) shrinks inventory.
Receivables conversion period: Rate = revenue, since this is the item that can grow receivables (sales).
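Putting the three conversion periods together, a short sketch with invented balance-sheet figures shows the whole calculation:

```python
# Sketch of the CCC built from the three conversion periods (invented figures).
cogs            = 730000     # cost of goods sold for the year
credit_sales    = 1095000    # credit sales for the year
avg_inventory   = 80000
avg_receivables = 120000
avg_payables    = 60000

inventory_period   = avg_inventory   / (cogs / 365)           # 40 days
receivables_period = avg_receivables / (credit_sales / 365)   # 40 days
payables_period    = avg_payables    / (cogs / 365)           # 30 days

ccc = inventory_period + receivables_period - payables_period
print(round(ccc))   # 50 days between disbursing cash and collecting cash
```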

(Diagram: the cash conversion cycle, linking cash, inventory, accounts receivable and accounts payable.)
Restrictive trade practices


The term restrictive trade practice is used for any strategy used by producers to
restrict competition within a given market. Collusion resulting in the formation of
a cartel is one such practice. Other practices that fall short of the formation of a
cartel but are nonetheless against the public interest and illegal include: (a) the
setting of minimum prices; (b) agreements to share markets; (c) the refusal to
supply retailers that stock the products of other competitors; (d) setting different
prices for different buyers (discriminatory pricing); (e) exchanging information.
The aim of restrictive practices is to raise prices and restrict output to the
benefit of the companies practicing them.

Monopolies and Restrictive Trade Practices Commission (MRTPC)


An important organ of the Department of Company Affairs is the Monopolies and Restrictive Trade Practices Commission (MRTP Commission), a quasi-judicial body. The MRTP Commission, established under Section 5 of the Monopolies and Restrictive Trade Practices Act, 1969, discharges functions as per the provisions of the Act. The main function of the
MRTP Commission is to enquire into and take appropriate action in respect of unfair trade
practices and restrictive trade practices. In regard to monopolistic trade practices the
Commission is empowered under section 10(b) to inquire into such practices (i) upon a
reference made to it by the Central Government or (ii) upon its own knowledge or
information and submit its findings to Central Government for further action.

Question bank
Management Paper-II
Note: This paper contains fifty (50) multiple-choice questions, each carrying two (2) marks. Attempt all of them.
1. The demand curve of a monopolistically competitive firm is:
a. Highly though not perfectly elastic
b. Perfectly-inelastic
c. Kinky demand curve
d. Demand curve will be a straight line
2. A decision maker has to remember the proverb, "A bird in hand is worth two in the bush," while he examines:
a. Opportunity cost principle
b. Discounting and compounding principle
c. Marginal or incremental principle
d. Equi-marginal principle
3. Market with one buyer and one seller is called:
a. Monopsony
b. Bilateral monopoly
c. Monopoly
d. Duopoly
4. Cardinal measure of utility is required in:
a. Utility theory
b. Indifference curve analysis
c. Revealed preference
d. Inferior goods
5. In case of giffen goods, price effect is:
a. Negative
b. Zero
c. Positive
d. -1
6. Which of the following theories states that employees make comparisons of their efforts and rewards with those of others in similar work situations?
a. Vroom's Expectancy theory
b. Adams' equity theory
c. Alderfer's ERG theory
d. Herzberg's Two Factor Theory

7. Which of the following is a method of indicating the feeling of acceptance or rejection among members of a group?
a. Mutuality
b. Simulation
c. T-group
d. Sociometry
8. Which of the following statements correctly describes extinction? When a previously reinforced behavior:
a. Is reinforced immediately
b. Is reinforced for a short period
c. Is not reinforced for a long time
d. Both (b) and (c) above
9. In which of the following processes a person rejects his own feelings about
the other person?
a. Self-analysis
b. Projection
c. Empathy
d. Denial
10.Which of the following methods avoid a win-lose attitude?
a. Integration
b. Domination
c. Compromise
d. All of the above
11.In Transactional Analysis, the "I am OK, you are not OK" life position is also referred to as:
a. Bossing
b. Avoidant
c. Diffident
d. Bohemian
12.The main objective of 360 degree appraisal is to bring;
a. Subjectivity
b. Objectivity
c. Uniformity of standards
d. None of these
13.At which level of PCMM the concept of competency management is
brought into workforce practices:
a. One
b. Two
c. Three
d. Four
14.The total dedication to continuous improvement so that customers' needs are met is known as:
a. Theory X
b. Theory Y

c. Total Quality Management
d. Change Management
15.The Employee Stock Ownership Plan (ESOP) was developed by:
a. Michael Porter
b. Kaplan
c. P.F.Drucker
d. Louis Kelso
16.When monotony in work is reduced by giving a wider variety of duties to
employees this is known as:
a. Job rotation
b. Job redesign
c. Job enlargement
d. Job enrichment
17.The book HR Champions is authored by;
a. Mintzberg
b. J. Pfeffer
c. M. Porter
d. Dave Ulrich
18.The central trade union CITU is associated with:
a. Congress party of India
b. Communist party of India Marxist
c. Communist party of India
d. Shiv sena
19.One of the goals of financial management is:
a. Wealth maximization
b. Hostile take-over
c. To raise funds from outsiders
d. None of the above
20.If a bond sells at a discount, the price is less than par value and:
a. YTM= coupon rate
b. YTM< coupon rate
c. YTM> coupon rate
d. None of the above
21.The discount rate that equates the present value of the future net cashflows from an investment project with the project's initial cash outflow is known as:
a. Average rate of return
b. Internal rate of return
c. Cost of capital
d. Hurdle rate
22.As per pecking order theory of capital structure:
a. Internal equity is preferred over external debt
b. External debt is preferred over internal equity
c. External equity is preferred over external debt

d. None of the above


23.A formal legal commitment to extend credit up to some maximum amount
over a stated period of time is known as:
a. Line of credit
b. Revolving credit agreement
c. Commitment charge
d. Secured term loan
24.Economies realized in a merger where the performance of the combined firm exceeds that of its previously separate parts is known as:
a. Economies of scale
b. Synergy
c. Horizontal integration
d. Leveraged buy-out
25.Macro marketing environment refers to:
a. Internal to the company
b. Employees and shareholders of a company
c. External to the company
d. Stake holders
26.Environment scanning involves:
a. Weather forecasting
b. Studying depression in a sea
c. Identifying threats and opportunities
d. Identifying strengths and weaknesses
27.Product mix refers to:
a. The ingredients used for making a product
b. Features of a product
c. Marketing mix element
d. A group of products
28.Which of the following is not a stage in the new product development?
a. Generation of ideas
b. Screening
c. Market segmentation
d. Commercialization
29.Industrial marketing involves:
a. Business to business
b. Customer to customer
c. Online marketing
d. Customer to business
30.Which of the following is not included in the 7 Ps of services marketing?
a. People
b. Products
c. Physical evidence
d. Public relations

Economics: Explain the concept of consumer's surplus.


Define oligopoly and explain price rigidity under oligopoly in terms of Kinked
demand curve
Explain production function and what are its managerial uses?
Distinguish between micro and macro environments and explain the relationship
between the two.
What are the salient features of Monopolistic competition?
What do you mean by circular flow of national income?
Describe the incentives and concessions available to the SSI in India.
What are important measures for rehabilitation of sick enterprises in India?
What are the factors guiding the activities of corporate social responsibility?
The short-run cost curves are derived from the production function. Evaluate.
What are the objectives of human resource planning?
What are the weaknesses of Indian trade unions?
What are the methods of segmentation of market?
Distinguish between advertising and publicity
What do you understand by testing of hypothesis? State the application of t-tests in testing of hypotheses.
Describe the properties of correlation coefficient.
Distinguish between Internet and intranet. Describe their application in business.
How is the strategic decision different from other kinds of decisions?
State the reasons for the sickness in small enterprises.
Explain the social responsibilities shown by Indian Business Houses.
What is meant by a transportation problem? Formulate the typical
transportation problem as a linear programming problem.

Describe the steps of Vogel's approximation method for obtaining an initial basic feasible solution to a transportation problem.
Summarize the salient features of the realistic approach to risk.
What are the economies of scale of operation?
What is Business Cycle and what are the different phases of Business Cycle?
Organizational Behavior: Discuss Douglas McGregor's theory of motivation.
Examine the opinion of Peter F. Drucker on scientific management.
What is the essence of scientific management?
How can the divorce of planning and doing improve the productivity and effectiveness of work?
Why was there resistance against acceptance of scientific management by the workers?
What is the engineering part of scientific management? How does it complement the philosophy part of it? Explain.
Explain Maslow's theory of motivation.
Max Weber's Bureaucracy Theory.
Explain the contribution of F.W. Taylor to scientific management.
Bring out the limitations of F.W. Taylor's contribution.
Do you think, in the light of the Hawthorne experiment, that there exists a relationship between working conditions and productivity?
"Attitude is more important than working conditions." Discuss the above statement in the light of the Hawthorne experiment.
Compare and contrast the contribution of F.W. Taylor with that of Elton Mayo.
How do emotional states and prejudices affect perception?
Compare and contrast between trait and type approaches to Personality.
"Selection tests reveal more but that is suggested. They conceal less but that is vital." Critically evaluate the statement.

What is potential appraisal? How does it differ from performance appraisal? Discuss.
Explain Victor Vroom's Expectancy theory and point out its limitations.
What is a balanced score card?
What is skills inventory?
What is Organisational Behaviour? Discuss its significance
What is job analysis? Discuss its methods
What is the current relevance of McClelland's contribution to the understanding of motivation?
Critically examine the contributions of McClelland to the concept of motivation.
Determine the job suitability of people who have a high need for power.
Why do some people avoid the pain of being rejected by a social group?
The tendency to feel rejection as an acute pain may have developed in humans
as a defensive mechanism for the species, she said.
"Because we have such a long time as infants and need to be taken care of, it is
really important that we stay close to the social group. If we don't we're not
going to survive," said Eisenberger.
"The hypothesis is that the social attachment system that makes sure we don't
stray too far from the group piggybacked onto the pain system to help our
species survive."
This suggests that the need to be accepted as part of a social group is as
important to humans as avoiding other types of pain, she said.
Just as an infant may learn to avoid fire by first being burned, humans may learn
to stick together because rejection causes distress in the pain center of the
brain, said Eisenberger.
"If it hurts to be separated from other people, then it will prevent us from
straying too far from the social group," she said.

Social Groups
A social group consists of two or more people who interact with one another and who
recognize themselves as a distinct social unit. The definition is simple enough, but it has

significant implications. Frequent interaction leads people to share values and beliefs. This
similarity and the interaction cause them to identify with one another. Identification and
attachment, in turn, stimulate more frequent and intense interaction. Each group maintains
solidarity with all other groups and other types of social systems.
Groups are among the most stable and enduring of social units. They are important both to
their members and to the society at large. Through encouraging regular and predictable
behavior, groups form the foundation upon which society rests. Thus, a family, a village, a
political party, a trade union are all social groups. These, it should be noted, are different from
social classes, status groups or crowds, which not only lack structure but whose members are
less aware or even unaware of the existence of the group. These have been called quasi-groups or groupings. Nevertheless, the distinction between social groups and quasi-groups is
fluid and variable since quasi-groups very often give rise to social groups, as for example,
social classes give rise to political parties.
Primary Groups
If all groups are important to their members and to society, some groups are more important
than others. Early in the twentieth century, Charles H. Cooley gave the name, primary
groups, to those groups that he said are characterized by intimate face-to-face association and
those are fundamental in the development and continued adjustment of their members. He
identified three basic primary groups, the family, the child's play group, and the
neighborhoods or community among adults. These groups, he said, are almost universal in all
societies; they give to people their earliest and most complete experiences of social unity;
they are instrumental in the development of the social life; and they promote the integration
of their members in the larger society. Since Cooley wrote, over 65 years ago, life in the
United States has become much more urban, complex, and impersonal, and the family, play group and neighborhood have become less dominant features of the social order.
Secondary groups, characterized by anonymous, impersonal, and instrumental relationships,
have become much more numerous. People move frequently, often from one section of the
country to another, and they are cut off from established relationships, promoting widespread
loneliness. Young people, particularly, turn to drugs, seek communal living groups and adopt
deviant lifestyles in attempts to find meaningful primary-group relationships. The social
context has changed so much that primary group relationships today are not as simple as they were in Cooley's time.
Secondary Groups
An understanding of the modern industrial society requires an understanding of the
secondary groups. The social groups other than those of primary groups may be termed as
secondary groups. They are a residual category. They are often called special interest
groups. MacIver and Page refer to them as great associations. They are of the opinion that
secondary groups have become almost inevitable today. Their appearance is mainly due to

the growing cultural complexity. Primary groups are found predominantly in societies where
life is relatively simple. With the expansion in population and territory of a society however
interests become diversified and other types of relationships which can be called secondary
or impersonal become necessary. Interests become differentiated. The services of experts are
required. The new range of the interests demands a complex organization. Especially selected
persons act on behalf of all and hence arises a hierarchy of officials called bureaucracy.
These features characterize the rise of the modern state, the great corporation, the factory, the
labor union, a university or a nationwide political party and so on. These are secondary
groups. Ogburn and Nimkoff define secondary groups as groups which provide experience lacking in intimacy. Frank D. Watson writes that the secondary group is larger and more formal, is specialized and direct in its contacts and relies more for unity and continuance
upon the stability of its social organization than does the primary group.
Characteristics of secondary group:
Dominance of secondary relations: Secondary groups are characterized by indirect,
impersonal, contractual and non-inclusive relations. Relations are indirect because secondary
groups are bigger in size and members may not stay together. Relations are contractual in the
sense they are oriented towards certain interests
Largeness of the size: Secondary groups are relatively larger in size. City, nation, political
parties, trade unions and corporations, international associations are bigger in size. They may
have thousands and lakhs of members. There may not be any limit to the membership in the
case of some secondary groups.
Membership: Membership in the case of secondary groups is mainly voluntary. Individuals
are at liberty to join or to go away from the groups. However there are some secondary
groups like the state whose membership is almost involuntary.
No physical basis: Secondary groups are not characterized by physical proximity. Many
secondary groups are not limited to any definite area. There are some secondary groups like
the Rotary Club and Lions Club which are international in character. The members of such
groups are scattered over a vast area.
Specific ends or interest: Secondary groups are formed for the realization of some specific
interests or ends. They are called special interest groups. Members are interested in the
groups because they have specific ends to aim at.
Indirect communication: Contacts and communications in the case of secondary groups are
mostly indirect. Mass media of communication such as radio, telephone, television,
newspapers, movies, magazines and post and telegraph are resorted to by the members for
communication.
Communication may not even be quick and effective. The impersonal nature of social
relationships in secondary groups is both the cause and the effect of indirect communication.

Nature of group control: Informal means of social control are less effective in regulating
the relations of members. Moral control is only secondary. Formal means of social control
such as law, legislation, police, courts etc. are made use of to control the behavior of members.
The behavior of the people is largely influenced and controlled by public opinion,
propaganda, the rule of law and political ideologies.
Group structure: The secondary group has a
formal structure. A formal authority is set up with designated powers and a clear-cut division
of labor in which the function of each is specified in relation to the function of all. Secondary
groups are mostly organized groups. Different statuses and roles that the members assume
are specified. Distinctions based on caste, colour, religion, class, language etc are less rigid
and there is greater tolerance towards other people or groups.
Limited influence on personality: Secondary groups are specialized in character. People's
involvement in them is of limited significance. Members' attachment to them is also
very much limited. Further, people spend more of their time in primary groups than in
secondary groups. Hence secondary groups have very limited influence on the personality of
the members.
Reference Groups
According to Merton, reference groups are those groups which are the referring points of
individuals, towards which they are oriented and which influence their opinions, tendencies and
behaviour. The individual is surrounded by countless reference groups. Both membership
(inner) groups and non-membership (outer) groups may be reference groups.
What factors are considered while preparing PERT chart?
What is the status of implementation of WTO guidelines in India?
Discuss the measures taken by government for the promotion of small and tiny
enterprises in the wake of globalization?
What is corporate governance?
HRM: Explain some typical on-the-job training techniques
Discuss future of trade unions in India
Who are called rate busters and Christers?
Rate buster: An employee who is highly productive and exceeds the formally
agreed rate of output for the particular task. Whilst this is advantageous for
management, rate-busters are usually disliked by their colleagues because their
action provides managers with the excuse to raise the rate of output for all the
other employees. Typically, there is informal social regulation of work in most
workgroups where rate-busting is deemed antisocial behaviour and potential

rate-busters are brought into line by their work colleagues through a mixture of
persuasion and coercion.
Define Selection.
What is potential assessment?

Finance:Discuss the utility of common size analysis and index analysis


For analyzing the risk involved in capital budgeting decision, simulation is
superior to sensitivity analysis.
Define discounted and non-discounted approaches for appraising capital
budgeting decision.
State bond valuation theorems.
State the meaning and rationale of fundamental analysis with reference to
valuation of securities
What is Adjusted rate of Discount method for incorporating risk in capital
budgeting?
Why is consideration of time important in financial decision making? How
can time be adjusted?
Explain briefly cost of capital
Combined leverage is the product of degree of operating leverage and
degree of financial leverage. Comment.
According to the Modigliani-Miller approach, the value of the firm is affected by
the debt-equity mix. Discuss.

The financial goal of a firm should be to maximize profit and wealth. Do you
agree with the statement? Comment
Explain briefly:
Equity shareholders provide risk capital
Weighted average cost of capital of the firm
How is merger evaluated as a capital budgeting proposal?
State the method of risk analysis with reference to capital budgeting decision
State the reason for merger
Trading on equity is a double-edged weapon. Elucidate.
Financial statements reflect a combination of recorded facts, accounting
conventions and personal judgement. Explain.
Discuss arbitrage pricing theory for valuation of securities. How is it different
from capital asset pricing model?
What is cash flow statement? What purpose does it serve?
What is hedging? Discuss its utility.
What is working capital? How would you assess the working capital
requirement of a firm?
Explain Arbitrage Pricing Theory with reference to capital market
Discuss Modigliani-Miller approach for capital structure
Discuss the Walter model and Gordon model of dividend policy and valuation.
Discuss the Black-Scholes option valuation model.
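As a rough numerical illustration of the Black-Scholes model referred to above, the following Python sketch prices a European call; the spot price, strike, interest rate, volatility and maturity used are assumed figures, not taken from the text.

# Black-Scholes European call price (illustrative sketch; inputs are assumed)
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # cumulative standard normal distribution via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    # C = S*N(d1) - K*e^(-rT)*N(d2)
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# assumed inputs: spot 100, strike 100, 8% risk-free rate, 20% volatility, 1 year
print(black_scholes_call(100, 100, 0.08, 0.20, 1.0))
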
Discuss tools of financial analysis. Explain its role in interpretation and
signaling of corporate health.
Explain the concept and measurement of risk and return of single asset and
a portfolio.
Sensitivity analysis as a tool of risk-analysis is superior to simulation
technique of risk-analysis for capital budgeting decision. Comment.
What is the relationship between an investor's required rate of return and the
value of a security? Explain with example

Discuss the purpose of the statements of changes in financial position when


prepared on working capital basis and cash basis
How is cost of debt similar to cost of preference capital? Describe the uses and
limitations of cost of capital to a financial manager.
What are the basic financial derivatives? Describe the functions of an economic
nature performed by participants in the derivatives market.
What are the different conflicting views on capital structure? Describe the
Modigliani and Miller theory on the relationship between capital structure and
value of the firm.
Describe the important elements of forward contracts, futures and options.
How can they be used as risk management tools?
Describe the strategies to be adopted to expedite the recovery of
receivables.
The final implications of both the Walter model and the Gordon model are the same for
dividend distribution. Discuss and comment.
Explain the factors determining the value of an option.
Corporate exposure to risk has increased over a period of time. Explain in
brief the main guidelines for risk management.
Proper financial analysis can provide early warning signals about the health
of the organization. Elaborate.
What is the relationship between risk and return as per CAPM?
How is the risk-return relationship explained by the Capital Asset Pricing
Model (CAPM)? How does Arbitrage Pricing Theory (APT) overcome the
shortcomings of CAPM?
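For the CAPM questions above, the relationship E(Ri) = Rf + beta * [E(Rm) - Rf] can be illustrated with a very small sketch; the risk-free rate, beta and market return below are assumed figures.

# CAPM required return: Rf + beta * (Rm - Rf)   (assumed figures)
def capm_required_return(risk_free, beta, market_return):
    return risk_free + beta * (market_return - risk_free)

print(capm_required_return(0.06, 1.2, 0.14))   # 6% risk-free, beta 1.2, 14% market return -> 15.6%
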
What are the two important characteristics of current assets? State their
implications for Working Capital Management
Explain transaction exposure, translation exposure and operating exposure
with reference to International Finance.
Financing decision is irrelevant to wealth maximization. Explain in the
context of the M-M hypothesis.
A reducing rate of interest on loans and debts has led to debt restructuring.
Explain this in the context of a rising rate of inflation and the cost of capital of the
firm.

How are the values of perpetual bonds and preference shares determined?
Bring out the similarity of this process with that used to value a zero growth
share
Bring out the difference between a common-size balance sheet and
comparative balance sheet.
Discuss the process for calculating the cost of retained earnings. Also bring
out the theoretical and practical difficulties associated with this calculation.
Explain the relationship between capital structure and value of the firm.
Explain the net operating income approach
Briefly describe the major types of Financial Management Decisions that a
firm takes.
Explain the computation of operating cycle for a manufacturing unit.
What is meant by technical analysis with reference to valuation of securities?
What is trading on equity?

Trading on equity is sometimes referred to as financial leverage or the leverage factor.


Trading on equity occurs when a corporation uses bonds, other debt, and preferred stock
to increase its earnings on common stock. For example, a corporation might use long
term debt to purchase assets that are expected to earn more than the interest on the debt.
The earnings in excess of the interest expense on the new debt will increase the earnings
of the corporation's common stockholders. The increase in earnings indicates that the
corporation was successful in trading on equity.
If the newly purchased assets earn less than the interest expense on the new debt, the
earnings of the common stockholders will decrease.
trading on the equity
Borrowing funds to increase capital investment with the hope that the
business will be able to generate returns in excess of the interest charges.
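The leverage effect described above can be illustrated with a short sketch comparing earnings per share under an all-equity plan and a part-debt plan; the EBIT, interest rate, tax rate and share counts are assumed figures, used only to show the mechanics.

# Trading on equity: EPS with and without long-term debt (assumed figures)
def eps(ebit, interest, tax_rate, shares):
    return (ebit - interest) * (1 - tax_rate) / shares

ebit, tax_rate = 200_000, 0.30

# all-equity plan: Rs.10,00,000 raised as 1,00,000 shares of Rs.10 each
eps_all_equity = eps(ebit, interest=0, tax_rate=tax_rate, shares=100_000)

# levered plan: Rs.5,00,000 equity (50,000 shares) plus Rs.5,00,000 debt at 10%
eps_with_debt = eps(ebit, interest=0.10 * 500_000, tax_rate=tax_rate, shares=50_000)

print(eps_all_equity, eps_with_debt)   # EPS rises because the assets earn more than the interest cost

If EBIT fell below the interest charge, the same arithmetic would show EPS falling, which is why trading on equity is described as a double-edged weapon.
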
***************************************
Do dividends have a bearing on share valuation? Discuss the models which
assume that investment and dividend decisions are related with each other.
Dividend announcement has signaling effect on the price of the equity
shares and hence, on the wealth of the shareholder. Discuss the dividend
policy of Indian companies in this context.

periods. Both the stocks are currently selling for Rs. 50 per share. The rupee
return (dividend plus price) of these stocks for the next year would be as
follows:

Economic condition        High growth   Low growth   Stagnation   Recession
Probability                   0.28          0.32         0.22        0.18
Return of P Ltd. stock          55            50           60          70
Return of Q Ltd. stock          75            65           50          40

Calculate the expected return and standard deviation of:


Rs.1000 in the equity stock of P Ltd;
Rs.1000 in the equity stock of Q Ltd;
Rs.500 in the equity stock of P Ltd and Rs.500 in the equity stock of Q Ltd;
Rs.700 in the equity stock of P Ltd and Rs.300 in the equity stock of Q Ltd
Which of the above four options would you choose? Why?
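A sketch of the working, using the figures in the table above: each state-wise rupee payoff is converted to a rate of return on the current price of Rs. 50, and the portfolio mean and standard deviation are then scaled to the rupee amount invested.

# expected return and standard deviation of the four options (data from the table above)
prob     = [0.28, 0.32, 0.22, 0.18]       # High growth, Low growth, Stagnation, Recession
payoff_p = [55, 50, 60, 70]               # dividend plus price, P Ltd. stock
payoff_q = [75, 65, 50, 40]               # dividend plus price, Q Ltd. stock
price    = 50

def portfolio_stats(weight_p, investment=1000):
    # state-wise portfolio rate of return, then probability-weighted mean and SD
    rates = [weight_p * (p / price - 1) + (1 - weight_p) * (q / price - 1)
             for p, q in zip(payoff_p, payoff_q)]
    mean = sum(pr * r for pr, r in zip(prob, rates))
    var  = sum(pr * (r - mean) ** 2 for pr, r in zip(prob, rates))
    return investment * mean, investment * var ** 0.5    # expected rupee return, rupee SD

for weight in (1.0, 0.0, 0.5, 0.7):       # all in P, all in Q, 50:50, 70:30
    print(weight, portfolio_stats(weight))
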
In what different ways are the investment, financing and dividend decisions
interrelated? Give arguments in support of the position that dividends are
relevant to stock valuation and that dividend policy is an active decision
variable.
Dividend policy is based on the goal of shareholders' wealth maximization.
Critically examine the statement.
Explain briefly the factors which are influencing dividend policy of a company
Financial Management is concerned with solution of three major decisions a
firm must make: the investment decision, the financing decision and the
dividend decision. Explain this statement, highlighting the interrelationship
amongst these decisions.
****************************************
Briefly explain why one prefers the NPV method over the IRR method as a project
evaluation technique.

Define Capital Budgeting and discuss its features


Under what conditions do the NPV and IRR methods conflict? Which of these
two methods should be used to take a capital budgeting decision in such a
conflicting situation?
Compare the NPV method with the IRR method. What are the steps involved
in the calculation of IRR in the case of uneven cash inflows?
Explain the criterion for judging the acceptability of investments when the
benefit-cost ratio is used. What is the B/C ratio of an investment when its NPV
is zero?
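A brief sketch of how NPV and IRR are computed for a conventional stream of cash flows; the cash flows and the 10% cost of capital below are assumed figures, chosen only to illustrate the two criteria.

# NPV by direct discounting; IRR found by bisection on the NPV function (assumed cash flows)
def npv(rate, cash_flows):
    # cash_flows[0] is the initial outlay at t = 0 (negative)
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, low=0.0, high=1.0, tol=1e-6):
    # bisection; assumes NPV is positive at 'low' and negative at 'high'
    while high - low > tol:
        mid = (low + high) / 2
        if npv(mid, cash_flows) > 0:
            low = mid
        else:
            high = mid
    return (low + high) / 2

flows = [-10_000, 3_000, 4_000, 4_000, 3_000]   # assumed project cash flows
print(npv(0.10, flows))    # NPV at a 10% cost of capital
print(irr(flows))          # discount rate at which NPV = 0

When the two criteria rank mutually exclusive projects differently, the NPV ranking is normally preferred because it assumes reinvestment at the cost of capital rather than at the IRR.
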
******************************************
What is the basic purpose of holding inventory? Describe the risk-return trade
offs associated with inventory management.
In what different ways is accounts receivable management different from
cash management and inventory management? How will you evaluate the
risk of extending credit to an applicant? Discuss
What are the costs associated with inventory management? Illustrate the use
of Economic Order Quantity
Briefly explain the concept of factor productivity, factor intensity and returns
to scale under production analysis.
Discuss determinants of working capital requirement
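The Economic Order Quantity mentioned above can be illustrated with a short sketch; the annual demand, ordering cost and carrying cost below are assumed figures.

# Economic Order Quantity: EOQ = sqrt(2 * D * S / H)   (assumed figures)
from math import sqrt

def eoq(annual_demand, ordering_cost_per_order, carrying_cost_per_unit_per_year):
    return sqrt(2 * annual_demand * ordering_cost_per_order / carrying_cost_per_unit_per_year)

# assumed: 12,000 units a year, Rs.150 per order, Rs.4 carrying cost per unit per year
print(eoq(12_000, 150, 4))   # roughly 949 units per order
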

Marketing:State the different techniques used in sales promotion


Explain the role of segmentation in marketing
Discuss the methods adopted for gathering of primary data in marketing
research.
What is a linear programming problem and what are its components? Discuss the
scope and role of linear programming in solving management problems.
Discuss the different types of variations in the manufacturing process. How does
SQC help to identify different types of variations?
What is System Life Cycle? What are the important steps involved in the system
analysis? Illustrate your answer with reference to a real life situation.
Is there any value creation in retailing on net? If yes, Discuss.
When is family branding preferred?
Operations:Point out the benefits of Queuing theory
Trace out the subsequent developments to Henry Gantt's chart.
How can a Gantt chart be used in work scheduling?
Describe the North West Corner Rule for solving a transportation problem.
The business planning differs from project planning. Explain
Briefly describe the principles propounded by Frank Gilbreth for improving
work efficiency.
Bring out the contributions of Lillian Gilbreth.
Identify a management thinker of your interest and compare his contributions with
any of the above-mentioned thinkers.
Describe Porter's approach to industry analysis.
Critically analyse the issues covered in the last three Ministerial Conferences of the
World Trade Organization.

Statistics:The weekly wages of 2000 workers in a factory are normally distributed with a
mean of Rs. 200 and a standard deviation of Rs. 20. Estimate the lowest weekly
wage of the 200 highest paid workers and the highest weekly wage of the 200
lowest paid workers (given that the area under the standard normal curve to the left of z = 1.28 is 0.90).
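A sketch of the working: the 200 highest-paid workers out of 2000 form the top 10%, so their cut-off lies 1.28 standard deviations above the mean, while the 200 lowest-paid form the bottom 10%, 1.28 standard deviations below it.

# wage cut-offs for the top and bottom 10% of a normal distribution (figures from the question)
mean, sd, z = 200, 20, 1.28            # z such that the area to its left is 0.90

lowest_of_top_200     = mean + z * sd  # lowest weekly wage among the 200 highest paid
highest_of_bottom_200 = mean - z * sd  # highest weekly wage among the 200 lowest paid
print(lowest_of_top_200, highest_of_bottom_200)   # Rs. 225.60 and Rs. 174.40
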
Differentiate between correlation and regression analysis and give their
properties
Explain the method of testing the significance of correlation coefficient
What factors determine market structure?
Explain a few difficulties in the estimation of national income
What do you understand by trait theory of leadership?
Selection is a process of rejection. How?
What is the role of competence mapping in performance management?
Define modern concept of marketing
Explain product mix
Explain the graphical method of solving an LPP involving two variables
Explain the terms lead time, re-order point, stock-out cost and set-up cost in
inventory management.
What is the significance of regression analysis? Why do we have two regression
equations? Derive the correlation coefficient from the two regression coefficients.
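The last part of the question rests on the identity r^2 = b_yx * b_xy, with r taking the common sign of the two regression coefficients; a minimal sketch, using assumed coefficient values:

# correlation coefficient from the two regression coefficients (assumed values)
from math import sqrt, copysign

def r_from_regression(b_yx, b_xy):
    # r^2 = b_yx * b_xy; r carries the common sign of the two coefficients
    return copysign(sqrt(b_yx * b_xy), b_yx)

print(r_from_regression(0.8, 0.45))   # assumed coefficients; gives r = 0.6
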
Write a short note on management information system
Describe the strategic management process
What is meant by accounting ratios? Distinguish between liquidity and leverage
ratios.
Discuss the concept of operating profit. How is it different from net profit?

What is the dividend growth model approach to the cost of equity? Discuss its
rationale.
Discuss the basic financial derivatives
What is the funds flow statement based on working capital concept? What
purpose does it serve?
Discuss the methods for ranking investment proposals. What are the methods
commonly used for incorporating risk in capital budgeting decisions?
What is Balanced Score Card?
Write major function of a Trade Union.
State the derivation of cost of debt adopting both book-value and market-value
approaches.
Define functions of financial management
Distinguish between marketing information system and marketing research
List out different distribution channels
Describe briefly the basic steps to be followed in developing PERT/CPM
programme. How does PERT differ from CPM?
What is rank correlation? How is it measured? Why is rank correlation used?
Define a simple random sample. Describe briefly some practical methods of
drawing a random sample from a finite population.
Explain generic strategies. How can these strategies be used to gain competitive
advantage?
Discuss the basic features of small enterprises
Identify the ethical issues involved in gender related problems in organisations.
Distinguish between complete enumeration and sample survey. What are the
advantages of sampling over complete enumeration. Describe in brief different
sampling methods.
Implementing empowerment calls for an organization-wide revolution and, if
pursued religiously, can deliver unparalleled results. Elaborate this statement and
reason out your answer.
What are the pre-requisites for implementing an empowerment programme in an
organization?

Discuss the significance of cultural fit, leadership and total quality in the


process of empowerment at organizational level.
Critically evaluate how the 'management by stress' approach is being equated with
empowerment.
What are the critical factors responsible for failure of empowerment?
Define Production Function. What are its managerial applications?
List out the hygiene factors and motivators of Herzberg's Two-Factor Theory.
How do you segment a market for a Health Beverage?
Explain different positioning strategies adopted by different shampoo brands in
the market.
What factors are considered in Plant location
Define Student's t-statistic and state its uses.
Describe in brief the steps involved in designing the database for an
information system of the Management Department of your University/Institute.
What is the structural analysis of an industry? How can Porter's five-force model be
used in industry analysis?
Identify the steps involved in the business plan preparation
What environmental and ecological issues can crop up as ethical challenge to
business?
What is meant by transportation problem? Formulate the typical transportation
problem as a linear programming problem. Discuss the steps of Vogel's
Approximation Method for obtaining an initial basic feasible solution to a
transportation problem.
What is the current relevance of the managerial roles approach?
Are classical managerial functions inclusive in nature?
What are the roles of managers in addition to their classical functions?
On what grounds has the contribution of Mintzberg been criticized?
Is resource allocation a planning exercise? If yes, how?
What is a kinked demand curve and what is the point of kink?
What is the basis on which advertising budget is determined?

Explain Abraham Maslow's theory of hierarchy of needs.


Distinguish between Job enlargement and job enrichment
What is downsizing and rightsizing?
How does packaging work as a means of marketing communication?
How is customer relationship management useful in aggregated marketing
efforts?
Discuss the importance of production management
What is MRP? Explain its significance.
Define chi-square. Cite some statistical problems where you can apply
chi-square for testing hypotheses.
Distinguish between corporate strategy and Business strategy
Examine the causes for the sickness in small enterprises.
What are the Corporate Social Responsibilities practiced by Indian companies?
What is organizational behavior? Enumerate its elements
What are the challenges before Human Resource Management in India?
Name five social security legislations of our country
Discuss any two important elements of promotional mix
What do you mean by marketing and how does it differ from selling?
Compare and contrast: Product layout vs Process layout.
Discuss briefly the basic steps to be followed in developing PERT programme for
a project.
Two random samples gave the following results:
Sample   Size   Sample mean   Sum of squares of deviations from the mean
   1      10        15                         90
   2      12        14                        108

Test whether the samples come from the same normal population at 5% level of
significance.
{given:}
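A sketch of the pooled two-sample t-test implied by the summary data above; the tabulated t-value for the 5% level is not reproduced in the text, but the conventional two-tailed value for 20 degrees of freedom is about 2.086.

# pooled two-sample t-test from summary data (figures from the table above)
from math import sqrt

n1, mean1, ss1 = 10, 15, 90      # sample 1: size, mean, sum of squared deviations
n2, mean2, ss2 = 12, 14, 108     # sample 2

pooled_var = (ss1 + ss2) / (n1 + n2 - 2)          # pooled estimate of the common variance
std_error  = sqrt(pooled_var * (1 / n1 + 1 / n2)) # standard error of the difference in means
t_stat     = (mean1 - mean2) / std_error
df         = n1 + n2 - 2
print(t_stat, df)   # compare |t| with the tabulated t at the 5% level for 20 d.f.

On these figures the computed t works out well below 2.086, so the difference between the sample means is not significant at the 5% level and the samples may be regarded as coming from the same normal population.
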
What is a Data Flow Diagram (DFD) and a Data Dictionary? Draw a DFD for
payroll processing of an organization
Explain the BCG matrix. Bring out its usefulness in corporate level strategy
formulation.
Elucidate the characteristics of an Entrepreneur
What is Ecological Consciousness? Give illustrations
What are the main features of scientific management?
What is the importance of time and motion study in scientific management?
What are the main benefits of specialization?
What are the main features of the Industrial age?
How will the business world change in the information age?
State the law of variable proportion
Discuss the relevance of the Need Hierarchy Theory of Motivation in developing
economies.
How is 360-degree appraisal an improved technique of performance appraisal?
In what ways have the functions of the Human Resource Manager changed in the
post-globalisation scenario?
What is customer orientation in marketing
Define branding
Define and explain the following terms
Optimum solution, feasible solution, unrestricted variables
Derive the EOQ in the inventory control method
Give the classical and frequency definitions of probability. What are the
objections raised against these definitions?
Write down the Normal Distribution function and the characteristics of the Normal
Probability Curve.

Why has strategic management become so important to today's business
organizations?
What are essentials of a successful entrepreneur?
What is the scope of corporate governance?
Distinguish between marketing information system and marketing research
What are the objectives of Production management? Discuss the 5 Ps of
production management.
Discuss the steps involved in the selection of best strategy
Examine the concept of corporate governance. Give an example of corporate
governance.
Explain the contribution of Elton Mayo in the area of Management
How can productivity be increased according to Mayo's experiments?
Where and under what conditions did Elton Mayo conduct his experiments?
What is the experiment regarding the work incentive pay plan?
Strategy:Discuss the importance of a clear business strategy
Define the PEST analysis and describe how to carry one out
Proper working capital management is the backbone for the success of the
organization. Discuss various aspects of working capital management from the
angle of profitability and liquidity.
Entrepreneurship:Discuss the role played by government in the promotion of small business
Examine the process of business opportunity identification
Economics:What is meant by Elasticity of Demand?
Gross National Product (GNP) measures national welfare. Comment.
Explain the significance of ordering and carrying cost of inventories.

Distinguish between the perfectly competitive market and monopolistic


competition. Prove that a perfectly competitive market is more efficient than
monopolistic competition.

Ethics and corporate governance:Explain ethical issues involved in corporate governance


Corporate governance:

In the context of corporate governance, discuss the role of Board of


Directors under Agency Theory and Stewardship theory perspective.
Discuss the role of independent directors in the context of CEO duality.

Just in time, lean manufacturing

Job enrichment
Job enrichment is an attempt to motivate employees by giving them the opportunity to use
the range of their abilities. It is an idea that was developed by the American psychologist
Frederick Herzberg in the 1950s. It can be contrasted with job enlargement, which simply
increases the number of tasks without changing the challenge. As such job enrichment has
been described as 'vertical loading' of a job, while job enlargement is 'horizontal loading'. An
enriched job should ideally contain:

A range of tasks and challenges of varying difficulties (Physical or


Mental)

A complete unit of work - a meaningful task

Feedback, encouragement and communication

Techniques
Job enrichment, as a managerial activity, includes a three-step technique:
1. Turn employees' effort into performance:

Ensuring that objectives are well-defined and understood by


everyone. The overall corporate mission statement should be
communicated to all. Individual's goals should also be clear. Each
employee should know exactly how he/she fits into the overall

process and be aware of how important their contributions are to


the organization and its customers.

Providing adequate resources for each employee to perform well.


This includes support functions like information technology,
communication technology, and personnel training and
development.

Creating a supportive corporate culture. This includes peer support


networks, supportive management, and removing elements that
foster mistrust and politicking.

Free flow of information. Eliminate secrecy.

Provide enough freedom to facilitate job excellence. Encourage and


reward employee initiative. Flextime or compressed hours could be
offered.

Provide adequate recognition, appreciation, and other motivators.

Provide skill improvement opportunities. This could include paid


education at universities or on the job training.

Provide job variety. This can be done by job sharing or job rotation
programmes.

It may be necessary to re-engineer the job process. This could


involve redesigning the physical facility, redesigning processes,
changing technologies, simplifying procedures, eliminating
repetitiveness, and redesigning authority structures.

2. Link employees' performance directly to reward:

Clear definition of the reward is a must

Explanation of the link between performance and reward is


important

Make sure the employee gets the right reward if he or she performs well

If the reward is not given, an explanation is needed

3. Make sure the employee wants the reward. How to find out?

Ask them

Use surveys (checklists, listings, questions)

What are Management Information Systems?


Management information systems (MIS) are a combination of hardware and
software used to process information automatically. Commonly, MIS are used
within organizations to allow many individuals to access and modify information.
In most situations, the management information system mainly operates behind
the scenes, and the user community is rarely involved or even aware of the
processes that are handled by the system.
A computer system used to process orders for a business could be considered a
management information system because it is assisting users in automating
processes related to orders. Other examples of modern management
information systems are websites that process transactions for an organization
or even those that serve support requests to users. A simple example of a
management information system might be the support website for a product,
because it automatically returns information to the end user after some initial
input is provided.
Online bill pay at a bank also qualifies as a management information system:
when a bill is scheduled to be paid, the user has provided information for the
system to act against. The management information system then processes the
payment when the due date approaches. The automated action taken by the
online system is to pay the bill as requested. Since the bills within an online bill
pay system can be scheduled to be automatically paid month after month, the
user is not required to provide further information. Many times, the bill pay
system will also produce an email for the user to let him know that the action
has occurred and what the outcome of the action was.
Management information systems typically have their own staff whose function
it is to maintain existing systems and implement new technologies within a
company. These positions are often highly specialized, allowing a team of people
to focus on different areas within the computer system. In recent years, colleges
and universities have begun offering entire programs devoted to management
information systems. In these programs, students learn how to manage large
interconnected computer systems and troubleshoot the automation of these
management information systems.
Many people use management information systems every day without thinking
about the actual system they are using. The individual will see a website and
enter information with the expectation that a specific action will happen; these
websites, just like the accounting systems used by large corporations, act as
management information systems to automate the process.

Definition: Management Information Systems (MIS) is the term given to the discipline
focused on the integration of computer systems with the aims and objectives of an
organisation.
The development and management of information technology tools assists executives and the
general workforce in performing any tasks related to the processing of information. MIS and
business systems are especially useful in the collation of business data and the production of
reports to be used as tools for decision making.
Applications of MIS

With computers being as ubiquitous as they are today,
there is hardly any large business that does not rely
extensively on its IT systems.
However, there are several specific fields in which MIS has
become invaluable.
* Strategy Support
While computers cannot create business strategies by
themselves they can assist management in understanding
the effects of their strategies, and help enable effective
decision-making.
MIS systems can be used to transform data into
information useful for decision making. Computers can
provide financial statements and performance reports to
assist in the planning, monitoring and implementation of
strategy.
MIS systems provide a valuable function in that they can
collate into coherent reports unmanageable volumes of
data that would otherwise be broadly useless to decision
makers. By studying these reports decision-makers can
identify patterns and trends that would have remained
unseen if the raw data were consulted manually.
MIS systems can also use these raw data to run
simulations: hypothetical scenarios that answer a range
of 'what if' questions regarding alterations in strategy. For
instance, MIS systems can provide predictions about the
effect on sales that an alteration in price would have on a
product. These Decision Support Systems (DSS) enable
more informed decision making within an enterprise than
would be possible without MIS systems.
* Data Processing
Not only do MIS systems allow for the collation of vast
amounts of business data, but they also provide a valuable
time saving benefit to the workforce. Where in the past
business information had to be manually processed for
filing and analysis it can now be entered quickly and easily
onto a computer by a data processor, allowing for faster
decision making and quicker reflexes for the enterprise as a whole.

Q1.
A 300 meter long train passes a pole in 12 seconds. What is its speed in
kilometer per hour?
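A one-line check for Q1, converting metres per second to kilometres per hour:

# Q1: a 300 m train passes a pole in 12 s; 1 m/s = 3.6 km/h
print((300 / 12) * 3.6)   # 90 km/h
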
Q2.

In an examination 40% students fail in maths, 30% in English and 15% in


both. Find the pass percentage.

Q3.

If the hands of a clock are in perpendicular position what will be the time
when they are in the 8-9 position?

Q4.

In the figure OR and PR are radii of circles. The length of OP is 4. If OR=2,


what is PR?(PR is tangent to circle with centre O)
[Figure for Q4 omitted: two circles with centres O and R; PR is tangent to the circle with centre O]

Q5.

ABCD is a square, EFGH is a rectangle, AB = 3, EF = 4, FG = 6. What is the


area of the region outside of ABCD and inside EFGH?

[Figure for Q5 omitted: square ABCD overlapping rectangle EFGH]

Q6.

A train completes a journey with a few stoppages in between at an


average speed of 40 km per hour. If the train had not stopped anywhere,
it would have completed the journey at an average speed of 60 km per
hour. On average, how many minutes per hour does the train stop during
the journey?

Q7.

In the figure, CD is parallel to EF, AD=DF, CD=4 and DF=3, what is EF?
[Figure for Q7 omitted]
