MANAGEMENT
Contents
Instructions
Organizational behavior
Management Thoughts
Human resource management
Business statistics
Marketing environment and environment scanning
Corporate Strategy
Values and ethics in management
Corporate governance
    Fundamental and Ethics Theories of Corporate Governance
Production management
Financial management
    Different types of transactions in the Foreign Exchange Market
    Risk management
Cash management
    Cash management services generally offered
Inventory
    Inventory Management
    Business inventory
        The reasons for keeping stock
        Special terms used in dealing with inventory
        Typology
        Inventory examples
    Principle of inventory proportionality
        Purpose
        Applications
        Roots
Instructions
SCHEME AND DATE OF TEST:
(i) The Test will consist of three papers. All the three papers will be held on 26th December,
2010 in two separate sessions as under:
Session   Paper   Marks   Duration
First     I       100     -
First     II      100     -
Second    III     200     -
Paper-I shall be of general nature, intended to assess the teaching/research aptitude of the
candidate. It will primarily be designed to test reasoning ability, comprehension, divergent
thinking and general awareness of the candidate. UGC has decided to provide choice to the
candidates from the December 2009 UGC-NET onwards. Sixty (60) multiple choice questions of
two marks each will be given, out of which the candidate would be required to answer any fifty
(50). In the event of the candidate attempting more than fifty questions, the first fifty questions
attempted by the candidate would be evaluated.
Paper-II shall consist of questions based on the subject selected by the candidate. Each of these
papers will consist of a Test Booklet containing 50 compulsory objective type questions of two
marks each.
The candidate will have to mark the responses for questions of Paper-I and Paper-II on the
Optical Mark Reader (OMR) sheet provided along with the Test Booklet. The detailed
instructions for filling up the OMR Sheet will be sent to the candidate along with the Admit
Card.
Paper-III will consist of only descriptive questions from the subject selected by the candidate.
The candidate will be required to attempt questions in the space provided in the Test Booklet.
The structure of Paper-III has been revised from June, 2010 UGC-NET and is available on the
UGC website www.ugc.ac.in.
Paper-III will be evaluated only for those candidates who are able to secure the minimum
qualifying marks in Paper-I and Paper-II, as per the tables given below:

MINIMUM QUALIFYING MARKS

CATEGORY     PAPER-I   PAPER-II   PAPER-I + PAPER-II
GENERAL      40        40         100 (50%)
OBC/PH/VH    35        35         90 (45%)
SC/ST        35        35         80 (40%)

CATEGORY     PAPER-I   PAPER-II   PAPER-I + PAPER-II   PAPER-III
GENERAL      40        40         100 (50%)            100 (50%)
OBC/PH/VH    35        35         90 (45%)             90 (45%)
SC/ST        35        35         80 (40%)             80 (40%)
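The qualifying rule in the first table can be expressed as a small check. A minimal sketch in Python; the function name and sample scores are illustrative, the cut-offs are from the table above:

```python
# Minimum qualifying marks for evaluation of Paper-III, per the table above.
# For each category: (Paper-I cut-off, Paper-II cut-off, combined cut-off).
CUTOFFS = {
    "GENERAL":   (40, 40, 100),
    "OBC/PH/VH": (35, 35, 90),
    "SC/ST":     (35, 35, 80),
}

def qualifies_for_paper3(category, paper1, paper2):
    """True if the candidate clears all three cut-offs for the category."""
    c1, c2, c12 = CUTOFFS[category]
    return paper1 >= c1 and paper2 >= c2 and (paper1 + paper2) >= c12

print(qualifies_for_paper3("GENERAL", 48, 54))  # True: 48 >= 40, 54 >= 40, 102 >= 100
print(qualifies_for_paper3("GENERAL", 45, 50))  # False: combined 95 < 100
```

Note that clearing each paper individually is not enough; a General-category candidate must also clear the combined 50% cut-off.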
However, the final qualifying criteria for Junior Research Fellowship (JRF) and Eligibility for
Lectureship shall be decided by UGC before declaration of result.
(ii) For Visually Handicapped (VH) candidates, thirty minutes extra time shall be provided
separately for Paper-I and Paper-II. For Paper-III, forty-five minutes extra time shall be
provided. They will also be provided the services of a scribe who would be a graduate in a
subject other than that of the candidate. Those Physically Handicapped (PH) candidates who
are not in a position to write in their own hand-writing can also avail these services by making
prior request (at least one week before the date of UGC-NET) in writing to the Co-ordinator of
the test centre. Extra time and facility of scribe would not be provided to other Physically
Handicapped candidates.
(iii) Syllabus of Test: Syllabi for all NET subjects can be downloaded from the UGC Website
www.ugc.ac.in and are also available in the libraries of all Indian universities. UGC will not send
the syllabus to individual candidates.
(iv) In Paper-III, the candidate has the option to answer either in Hindi or in English in all
subjects except the languages where the candidate is required to write in the concerned
language only. In case of Computer Science & Applications, Electronic Science and
Environmental Sciences, the question papers have to be answered in English only.
(v) In case of any discrepancy found in the English and Hindi versions, the
questions in English version shall be taken as final.
Section-2:
Three extended answer based questions to test the analytical ability of the
candidates are to be asked on the major specialization/electives. Questions will be
asked on all major specialization/electives and the candidates may be asked to
choose one specialization/elective and answer the three questions. There is to be
no internal choice. Each question is to be answered in up to 300 words and shall
carry 15 marks (3Q x 15M = 45 Marks). Where there is no
specialization/elective, 3 questions may be set across the syllabus. The questions
in this section should be numbered 3 to 5.
Section-3:
Nine questions may be asked across the syllabus. The questions will be
definitional or seeking particular information and are to be answered in up to 50
words each. For Science subjects as mentioned in Section-1, short
numerical/computational problems may be considered. Each question will carry
10 marks (9Q x 10M = 90 Marks). There should be no internal choice. The
questions in this section should be numbered 6 to 14.
Section-4:
It requires the candidates to answer questions from a given text of around 200-300
words taken from the works of a known thinker/author. Five carefully considered
specific questions are to be asked on the given text, requiring an answer in up to
30 words each. This section carries 5 questions of 5 marks each (5Q x 5M = 25
Marks). In the case of science subjects, a theoretical/numerical problem may be
set. These questions are meant to test critical thinking and the ability to
comprehend and formulate the concept.
Section  Type of questions               Test of                           No. of      Words per   Total    Marks per   Total
                                                                           questions   answer      words    question    marks
1        Essay                           Ability to dwell on a theme       2           500         1000     20          40
                                         at an optimum level
2        Three analytical/evaluative     Ability to reason and hold an     3           300         900      15          45
         questions                       argument on the given topic
3        Nine definitional/short         Ability to understand and         9           50          450      10          90
         answer questions                express the same
4        Text based questions            Critical thinking, ability to     5           30          150      5           25
                                         comprehend and formulate the
                                         concept
Total                                                                      19                      2500                 200
Here are some of the tips and techniques to score well in UGC NET examination.
Follow them at the best. Good Luck.
1. Writing skills matter a lot in the NET Examination. Most of the candidates appearing for the
NET examination have a lot of knowledge, but lack writing skills. You should be able to present
all the information/knowledge in a coherent and logical manner, as expected by the examiner.
For example: Quoting with facts and substantiating your answer with related concepts and
emphasizing your point of view.
2. Preparations for NET examination should be done intensively.
3. Prepare a standard answer to the question papers of the previous years. This will also make
your task easy at the UGC examination.
4. Do not miss the concepts. Questions asked are of the Master's level. Sometimes
the questions are conceptual in nature, aimed at testing the comprehension of the basic
concepts.
5. Get a list of standard textbooks from the successful candidates or other sources, and also
selective good notes. The right choice of reading material is important and crucial. You should
not read all types of books as told by others.
6. While studying for the subjects, keep in mind that there is no scope for selective studies in
UGC. The whole syllabus must be covered thoroughly. Equal stress and weight should be given
to all the sections of the syllabus.
7. Note that in the ultimate analysis both subjects carry exactly the same amount of maximum
marks.
8. Go through the previous years' unsolved papers and solve them to simulate the
atmosphere of the examination.
9. Stick to the time frame. Speed is the very essence of this examination. Hence, time
management assumes crucial importance.
10. For developing the writing skills, keep writing model answers while preparing for the NET
examination. This helps get into the habit of writing under time pressure in the Mains
examination.
11. Try not to exceed the word limit, as far as possible. Sticking to the word limit will save
time. Besides, the number of marks you achieve is not going to increase even if you exceed the
word limit. It's the quality that matters, not the quantity.
Wage determination
Industrial relations and trade unions
Dispute resolution and grievance management
Labour welfare and social security measures
*****************
Unit 4
*****************
financial management-nature and scope
valuation concepts and valuation of securities
Capital budgeting decision-risk analysis
Capital structure and cost of capital
Dividend policies-determinants
Long term and short term financing instruments
mergers and acquisitions
*****************
Unit 5
*****************
Marketing environment and environment scanning, marketing information systems and
marketing research, understanding consumer and industrial markets, demand measurement and
forecasting, market segmentation-targeting and positioning, product decisions,product mix,
product life cycle, new product development, branding and packaging, pricing methods and
strategies.
Promotion decisions-promotion mix, advertising, personal selling, channel management, vertical
marketing system, evaluation and control of marketing effort, marketing of service, customer
relation management,
Uses of internet as a marketing medium- other related issues like branding, market development,
advertising and retailing on the net.
New issues in marketing.
*****************
Unit 6
*****************
Role and scope of production management, facility location, layout planning and analysis,
production planning and control- production process analysis, demand forecasting for operations,
determinant of product mix, production scheduling, work measurement, time and motion study,
statistical quality control.
role and scope of operations research, linear programming, sensitivity analysis, transportation
model, inventory control, queuing theory, decision theory, markov analysis, PERT/CPM
*****************
Unit 7
*****************
Probability theory, probability distribution-binomial, poission, normal and exponential,
correlation and regression analysis, sampling theory, sampling distribution, tests of hypothesis,
large and small samples, t, z, f, chi-square tests.
use of computers in managerial applications, technological issues and data processing in
organisations.
Information systems, MIS and decision making, system analysis and design, trends in
information technology,Internet and internet-based applications.
*****************
Unit 8
*****************
Concept of corporate strategy, component of strategy formulation, Ansoff's growth vector, BCG
model, Porter's generic strategies, competitor analysis, strategic dimensions and group mapping,
industry analysis, strategies in industry evolution, fragmentation, maturity and
decline,competitive strategy and corporate strategy, transnationalisation of world
economy,managing cultural diversity, global entry strategies, globalisation of financial system
and services, managing international business, competitive advantage of nations, RTP and WTO
*****************
Unit 9
*****************
Concepts- types, characteristics, motivation, competencies and its development, innovation and
entrepreneurship, small business-concepts
Government policy for promotion of small and tiny enterprises, process of business opportunity
identification, detailed business plan preparation, managing small enterprises, planning for
growth, sickness in small enterprises, rehabilitation of sick enterprises,
Intrapreneurship(organisational entrepreneurship).
*****************
Unit 10
*****************
Ethics and management systems, ethical issues and analysis in management, value based
organisations, personal framework for ethical choices, ethical pressure on individuals in
organisations, gender issues, ecological consciousness, environmental ethics, social
responsibilities of business, corporate governance and ethics.
************************
Elective-I
************************
Human Resource Management (HRM) - Significance; Objectives; functions; a diagnostic model;
External and Internal environment
Forces and Influences; organizing HRM function
Recruitment and selection-sources of recruits; recruiting methods; selection procedure; selection
tests; Placement and follow-up.
Performance appraisal system-importance and objectives; techniques of appraisal system; new
trends in appraisal system.
Development of personnel- objectives; determining needs; methods of training and development
programs; evaluation.
Career planning and development-concept of career; career planning and development methods.
Compensation and benefits- job evaluation techniques; wage and salary administration; fringe
benefits; human resource records and audit.
Employee discipline- importance; causes and forms; disciplinary action; domestic enquiry.
Grievance management- importance; process and practices; employee welfare and social security
measures.
Industrial relations- importance; industrial conflicts; causes; dispute settlement machinery
Trade union- importance of unionism; union leadership; national trade union movement
Collective bargaining- concept; process; pre-requisite; new trends in collective bargaining
Industrial democracy and employee participation- need for industrial democracy; pre-requisite
for industrial democracy; employee participation objectives; forms of employee participation.
Future of Human Resource Management.
************************
Elective-II
************************
Marketing-Concepts; Nature and scope; Marketing myopia; Marketing mix; Different
environments and their influences on marketing; understanding the customer and competition.
Role and relevance of Segmentation and positioning; Static and dynamic understanding of BCG
matrix and Product Life Cycle; Brands-Meaning and role; Brand building strategies; Share
increase strategies.
Pricing objectives; pricing concepts; Pricing methods
Product- Basic and augmented stages in new product development
Test marketing concepts
Promotion mix- Role and relevance of advertising
Sales promotion- media planning and management
Advertising- Planning, execution and evaluation
Different tools used in sales promotion and their specific advantages and limitations
Public relations- concept and relevance
Distribution channel hierarchy; role of each member in the channel; Analysis of business
potential and evaluation of performance of the channel members
Wholesaling and retailing- Different formats and the strength of each one; Emerging issues in
different formats of retailing in India
Marketing research- Sources of information; Data collection; Basic tools used in data analysis;
structuring a research report
Marketing to organisations- Segmentation models; Buyer behavior models; Organisational buying
process
Consumer behavior theories and models and their specific relevance to marketing managers
Sales function- Role of technology in sales function automation
Customer relationship management including the concept of Relationship marketing
Use of internet as a medium of marketing; Managerial issues in researching consumer/
organization through internet
Structuring and managing marketing organizations, Export
Marketing- Indian and global context
************************
Elective-III
************************
Nature and scope of financial management
valuation concepts-risk and return, valuation of securities, pricing theories- capital asset pricing
model and arbitrage pricing theory
Understanding financial statements and analysis thereof
Capital budgeting decision, risk analysis in capital budgeting and long-term sources of finance
Capital structure- theories and factors, cost of capital
Dividend policies -theories and determinants
Working capital management-determinants and financing, cash management,inventory
management, receivables management
Elements of derivatives
Corporate risk management
mergers and acquisitions
International financial management
************************
Elective-IV
************************
India's Foreign Trade and Policy; Export promotion policies; Trade agreements with other
countries; Policy and performance of Export zones and Export-oriented units; Export incentives.
International marketing logistics; International logistic structures; Export Documentation
framework; Organisation of shipping services; Chartering practices; Marine cargo insurance.
Managerial economics
Definition of managerial economics
Nature and characteristics of managerial economics
Scope of managerial economics
Difference between managerial economics and economics
Economic tools used in managerial economics
Decision criteria
Managerial economics is the study of economic theories, logic and methodology
which are generally applied to seek solutions to the practical problems of
business.
Nature and characteristics of managerial economics
Scope of managerial economics
Demand analysis
Demand curve
Demand schedule
Elasticity of demand
Demand forecasting
Relationship between Average and Marginal cost:
In the figure above, the average and marginal cost curves have been drawn. It
will be seen that so long as the average cost is falling, marginal cost is less than
average cost. In the same way, when average cost is rising, marginal cost is
above the average cost.
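The same relationship can be verified numerically from any total-cost schedule. A minimal Python sketch; the cost figures are illustrative:

```python
# Illustrative total-cost schedule: TC[q] = total cost of producing q units.
TC = [3, 5, 8, 12, 17, 23, 30, 38, 47]

for q in range(2, len(TC)):
    ac = TC[q] / q                 # average cost at output q
    prev_ac = TC[q - 1] / (q - 1)  # average cost one unit earlier
    mc = TC[q] - TC[q - 1]         # marginal cost of the q-th unit
    # While AC is falling, MC lies below AC; once AC rises, MC lies above it.
    if ac < prev_ac:
        assert mc < ac
    elif ac > prev_ac:
        assert mc > ac
    print(f"q={q}: AC={ac:.2f}, MC={mc}")
```

Intuitively, the marginal unit drags the average down only while it costs less than the current average, just as a below-average exam score pulls down a running average.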
Market structure            Seller entry barrier   Number of sellers   Buyer entry barrier   Number of buyers
Perfect competition         No                     Many                No                    Many
Monopolistic competition    No                     Many                No                    Many
Oligopoly                   Yes                    Few                 No                    Many
Monopoly                    Yes                    One                 No                    Many
Monopsony                   No                     Many                Yes                   One
Market structure                              Perfect           Monopolistic      Monopoly
                                              competition       competition
Goal of firms                                 Maximize profit   Maximize profit   Maximize profit
Rule for maximizing                           MR = MC           MR = MC           MR = MC
Can earn economic profit in the short run?    Yes               Yes               Yes
Price takers?                                 Yes               No                No
Price                                         P = MC            P > MC            P > MC
Produces welfare-maximizing output?           Yes               No                No
Number of firms                               Many              Many              One
Entry in the long run?                        Yes               Yes               No
Can earn economic profit in the long run?     No                No                Yes
Competitive firm (price fixed at 6):

Quantity   Total   Average   Marginal   Average   Total     Marginal   Profit   Change in
           cost    cost      cost       revenue   revenue   revenue             profit
0          3       -         -          -         0         -          -3       -
1          5       5         2          6         6         6          1        4
2          8       4         3          6         12        6          4        3
3          12      4         4          6         18        6          6        2
4          17      4.25      5          6         24        6          7        1
5          23      4.6       6          6         30        6          7        0
6          30      5         7          6         36        6          6        -1
7          38      5.43      8          6         42        6          4        -2
8          47      5.875     9          6         48        6          1        -3

Monopoly:

Quantity   Total   Average   Marginal   Average   Total     Marginal   Profit   Change in
           cost    cost      cost       revenue   revenue   revenue             profit
0          3       -         -          -         0         -          -3       -
1          5       5         2          10        10        10         5        8
2          8       4         3          9         18        8          10       5
3          12      4         4          8         24        6          12       2
4          17      4.25      5          7         28        4          11       -1
5          23      4.6       6          6         30        2          7        -4
6          30      5         7          5         30        0          0        -7
7          38      5.43      8          4         28        -2         -10      -10
8          47      5.875     9          3         24        -4         -23      -13
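The profit-maximizing output in the monopoly figures can be checked programmatically. A minimal sketch, assuming the linear demand P = 11 - q (inferred from the average-revenue column) and the total-cost schedule above:

```python
# Total-cost schedule from the tables: TC[q] for q = 0..8.
TC = [3, 5, 8, 12, 17, 23, 30, 38, 47]

def monopoly_profit(q):
    price = 11 - q             # demand inferred from the AR column: 10, 9, 8, ...
    return price * q - TC[q]   # profit = total revenue - total cost

profits = [monopoly_profit(q) for q in range(len(TC))]
best_q = max(range(len(profits)), key=profits.__getitem__)
print(best_q, profits[best_q])  # q = 3 maximizes profit at 12, as in the table
```

At q = 3 the marginal revenue of the last unit (6) still exceeds its marginal cost (4); a fourth unit would add revenue of only 4 against a cost of 5, so profit would fall.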
Macro-Economics
Macro-economics is also known as the theory of income and employment or
simply income analysis. It is concerned with the problems of unemployment,
economic fluctuations, inflation or deflation, international trade and economic
growth.
Macro-economics is the study of aggregates or averages covering the entire
economy, such as total employment, national income, national output, national
output, total investments, total consumption, and total saving; aggregate supply
and aggregate demand, general price level, wage level and cost structure.
In other words, it is aggregate economics which examines the inter-relations
among the various aggregates, their determination and causes of fluctuations in
them. Thus, in the words of Professor Ackley, Macro-economics deals with
economic affairs in the large; it concerns the overall dimensions of economic
life. It looks at the total size and shape and functioning of the elephant of
economic experience, rather than working of articulation or dimensions of the
individual parts. It studies the character of the forest independently of the trees
which compose it.
Scope and Importance of Macro-economics:
As a method of economic analysis, macro-economics is of much theoretical and
practical importance.
Limitations of Macro-economics:
1. Fallacy of composition
2. To regard the aggregate as homogeneous
3. Aggregate variables may not be necessarily important
4. Indiscriminate use of macro-economics is misleading
5. Statistical and conceptual difficulties
Capital Budgeting
Meaning
Capital budgeting decisions pertain to fixed/long-term assets, which by definition
are in operation and yield a return over a period of time, usually exceeding one
year. They, therefore, involve a current outlay or a series of outlays of cash
resources in return for an anticipated flow of future benefits. In other words, the
system of capital budgeting is employed to evaluate expenditure decisions which
involve current outlays but are likely to produce benefits over a period of time
longer than one year. These benefits may be either in the form of increased
revenues or reduced costs. Capital expenditure management, therefore, includes
addition, disposition, modification and replacement of fixed assets.
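Weighing a current outlay against an anticipated flow of future benefits is commonly done by discounting the benefits to present value. A minimal net-present-value sketch; the 10% rate and the cash-flow figures are illustrative assumptions, not from the text:

```python
# Net Present Value: discount each future benefit to today, subtract the outlay.
# A project is acceptable when NPV > 0.
def npv(rate, initial_outlay, cash_flows):
    present_value = sum(cf / (1 + rate) ** t
                        for t, cf in enumerate(cash_flows, start=1))
    return present_value - initial_outlay

# Hypothetical project: outlay of 1000 now, benefits of 450 a year for 3 years.
print(round(npv(0.10, 1000, [450, 450, 450]), 2))  # 119.08 -> accept
print(round(npv(0.10, 1000, [400, 400, 400]), 2))  # -5.26  -> reject
```

Note how the second project, whose undiscounted benefits (1200) exceed the outlay, is still rejected once the benefits are discounted; this is the point of evaluating long-lived assets by discounting rather than by simple totals.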
BUSINESS CYCLE
Business Cycle refers to fluctuations in economic activity. It is also known as the
economic cycle. The Business Cycle has four distinct phases that revolve around
its long-term growth trend.
- Contraction : That features slow down in economic activity.
- Trough : Turning point of business cycle where contraction shifts to expansion.
- Expansion: Growth in economic activity.
- Peak : Upper turning point of business cycle.
There are four phases of the business cycle:
1. Peak/boom
2. Recession
3. Trough
4. Recovery
Let's briefly discuss each phase now:
1. Peak/Boom: This is the stage when the business activity is at its
maximum, although this level of activity is temporary.
2. Recession: After operating at maximum activity, the business goes into
the recession phase. This phase witnesses a decrease in total output,
employment and trade. Recession may last for about 6 months or more.
3. Trough: At this stage, output and employment are at their lowest. This is
also referred to as the stage of depression. This stage may be short term
or may be long term depending on circumstances and market conditions.
4. Recovery: The recovery stage, as the name suggests, is the rise in output,
employment and trade after the depression stage. The employment levels
increase till maximum employment is reached.
These stages are often depicted as a graph. The graph would look like a wave.
The peak of the wave is the boom phase, the decreasing slope is recession, the
rock bottom of the wave is the trough/depression, and recovery phase is shown
as the increasing slope after the trough. The vertical axis measures real output
and the horizontal axis measures time.
Organizational behavior
Organizational theory is a set of interrelated constructs (concepts), definitions and
propositions that present a systematic view of behavior of individuals, groups, and subgroups
interacting in some relatively patterned sequence of activity, the intent of which is goal
directed.
Some important organizational theories are:
(a) Classical theory
a. Scientific management theory
b. Administrative theory
(b) Neo-classical theory
(c) Modern theory
Concept of organizational behavior
Role of organizational behavior
Organizational theories
Appraisal of Classical theory
Systems approach
Contingency or situational approach
*************
Organisational structure
Mechanism of designing structure
Departmentation
Choosing a basis of departmentation
Span of management
Delegation of authority
Centralisation and decentralization
*************
Personality
Determinants of personality
Personality and behavior
Organisational applications of personality
Personality:
Personality is the dynamic organization within the individual of those
psychological systems that determine his unique adjustments to his
environment.
Personality theories:
Psychoanalytic theory:
The Id, The Ego and The Super Ego are the parts of the personality.
The Id
The Id proceeds unchecked to satisfy life instincts and death instincts.
The Ego
The Ego keeps the Id in check through the realities of the external environment
through intellect and reason.
Socio-psychological theory
Social variables and not the biological instincts, are the important determinant in
shaping personality. There is an interaction between the society and the
individual.
Trait theory
The trait approach to personality is one of the major theoretical areas in the
study of personality. The trait theory suggests that individual personalities are
composed of broad dispositions. Consider how you would describe the personality
of a close friend. Chances are that you would list a number of traits, such as
outgoing, kind and even-tempered. A trait can be thought of as a relatively
stable characteristic that causes individuals to behave in certain ways.
Unlike many other theories of personality, such as psychoanalytic or humanistic
theories, the trait approach to personality is focused on differences between
individuals. The combination and interaction of various traits forms a personality
that is unique to each individual. Trait theory is focused on identifying and
measuring these individual personality characteristics.
Gordon Allport's Trait Theory
In 1936, psychologist Gordon Allport found that one English-language dictionary
alone contained more than 4,000 words describing different personality traits.
He categorized these traits into three levels:
* Cardinal Traits: Traits that dominate an individual's whole life, often to the
point that the person becomes known specifically for these traits. People with
such personalities often become so known for these traits that their names are
often synonymous with these qualities. Consider the origin and meaning of the
following descriptive terms: Freudian, Machiavellian, narcissism, Don Juan,
Christ-like, etc. Allport suggested that cardinal traits are rare and tend to develop
later in life.
* Central Traits: These are the general characteristics that form the basic
foundations of personality. These central traits, while not as dominating as
cardinal traits, are the major characteristics you might use to describe another
person. Terms such as intelligent, honest, shy and anxious are considered
central traits.
* Secondary Traits: These are the traits that are sometimes related to
attitudes or preferences and often appear only in certain situations or under
specific circumstances. Some examples would be getting anxious when speaking
to a group or impatient while waiting in line.
Raymond Cattell's Sixteen Personality Factor Questionnaire
Trait theorist Raymond Cattell reduced the number of main personality traits
from Allport's initial list of over 4,000 down to 171, mostly by eliminating
uncommon traits and combining common characteristics.
While most agree that people can be described based upon their personality
traits, theorists continue to debate the number of basic traits that make up
human personality. While trait theory has objectivity that some personality
theories lack (such as Freud's psychoanalytic theory), it also has weaknesses.
Some of the most common criticisms of trait theory center on the fact that traits
are often poor predictors of behavior. While an individual may score high on
assessments of a specific trait, he or she may not always behave that way in
every situation. Another problem is that trait theories do not address how or why
individual differences in personality develop or emerge.
Self theory
Self-concept: The composite of ideas, feelings, and attitudes that a person has
about his or her own identity, worth, capabilities, and limitations. A person's
self-concept gives him a sense of meaningfulness and consistency.
There are four factors in self-concept.
Self-image: A person's self image is the mental picture, generally of a kind that
is quite resistant to change, that depicts not only details that are potentially
available to objective investigation by others (height, weight, hair color, gender,
I.Q. score, etc.), but also items that have been learned by that person about
himself or herself, either from personal experiences or by internalizing the
judgments of others.
In short, self-image is the way one sees oneself.
Ideal-self: The ideal self denotes the way one would like to be.
Looking Glass-self: The perception of a person about how others are
perceiving his qualities and characteristics.
Real-Self: The real-self is what one really is.
Determinants of personality: the personality of the individual is shaped by
- Biological factors
- Family and social group factors
- Cultural factors
- Situational factors
*************
Perception
Perception is defined as a process by which individuals organize and interpret their sensory
impressions in order to give meaning to their environment.
perceptual process
Perceptual selectivity
Perceptual organization
Interpersonal perception
Managerial application of perception
*************
Attitudes
Attitude is the persistent tendency to feel and behave in a favourable or unfavourable way
towards some object, person or idea.
Concepts of attitudes
Attitudes and values
Theories of attitude formation
Factors in attitude formation
Attitude measurement
Attitude change
Methods of attitude change
*************
Values
Values are global beliefs that guide actions and judgment across a variety of situations.
Values and attitudes
Values and behavior
Factors in value formation
Types of values
*************
Learning
*************
Organizational behavior modification
*************
Motivation
Motivation refers to the way in which urges, drives, desires, aspirations, strivings
or needs direct, control or explain the behavior of human beings.
Motivation and behavior
Motivation
The term motivation comes from the Latin movere, which means "to move."
Motivation as the base-building block of human action has been studied
extensively. Studies on motivation broadly refer to two areas:
(a) Motivating self, and
(b) Motivating others
The concept of motive refers to the purpose underlying all goal-directed actions.
All motives, however, may not be equally important in the context of a goal. Some
actions arise from biological or physiological needs, over which people do not
have much control. Such motives are common to the entire animal kingdom. But
there are certain crucial and other higher-order needs which are common to
human beings. The distinctly human motives are largely unrelated to biological
and survival needs. They are related to feelings of self-esteem, competency,
social acceptance, etc.
Psychologists have defined the term motivation as:
Motivation Process
Motivation is essentially a process. It may be illustrated with the help of a
generalized model.
The model links, in a cycle:
Needs or expectations
Behavior or action
Incentive or goal, and
Some form of feedback to the inner state of motivation
Achievement motivation
Achievement motivation is also termed n Ach, the need to achieve, or, in
common parlance, the urge to improve. If a man spends his time thinking about
doing his job better, accomplishing something unusual and important, or
advancing his career, the psychologist says he has a high need for achievement.
He thinks not only about the achievement goal but also about how it can be
attained, what obstacles or blocks might be encountered, and how he would
overcome those obstacles in achieving his goal.
Attitude Formation
In Social Psychology attitudes are defined as positive or negative evaluations of
objects of thought. Attitudes typically have three components.
Attitudes involve social judgments. They are either for or against, pro
or con, positive or negative; however, it is possible to be ambivalent
about the attitudinal object and have a mix of positive and negative
feelings and thoughts about it.
Attitudes involve a readiness (or predisposition) to respond; however, for a
variety of reasons we don't always act on our attitudes.
Attitudes vary along dimensions of strength and accessibility. Strong
attitudes are very important to the individual and tend to be durable and
have a powerful impact on behavior, whereas weak attitudes are not very
important and have little impact. Accessible attitudes come to mind
quickly, whereas other attitudes may rarely be noticed.
Attitudes tend to be stable over time, but a number of factors can cause
attitudes to change.
Stereotypes are widely held beliefs that people have certain
characteristics because of their membership in a particular group.
A prejudice is an arbitrary belief, or feeling, directed toward a group of
people or its individual members. Prejudices can be either positive or
negative; however, the term is usually used to refer to a negative attitude
held toward members of a group. Prejudice may lead to discrimination,
which involves behaving differently, usually unfairly, toward the members
of a group.
Below is an article that I worked on with a group for a college class I took
recently.
Attitude Formation / Change: Importance to Marketers
There are also multiattribute attitude models. The first is the attitude-toward-object
model: one's attitude toward a product or brand is a function of the presence,
or absence, and evaluation of certain product-specific beliefs and/or attitudes.
The second is the attitude-toward-behavior model: the individual's attitude
toward behaving or acting with respect to an object, rather than the attitude
toward the object itself, which tends to correspond more closely to actual
behavior than does the attitude-toward-object model. The theory-of-reasoned-action
model is a comprehensive integration of attitude components designed to lead
to both better explanation and better prediction of behavior. It incorporates
subjective norms that influence intention, assessing normative beliefs
attributed to others and the motivation to comply with others.
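The attitude-toward-object model described above is usually formalized as a weighted sum: the attitude score is the sum, over salient attributes, of belief strength times attribute evaluation (the Fishbein formulation). The sketch below illustrates that computation; the attribute names and rating scales are invented for illustration.

```python
# Sketch of the attitude-toward-object (Fishbein-style) multiattribute model:
# attitude = sum of belief strength (b_i) times evaluation (e_i) over the
# salient attributes. Attribute names and ratings below are invented examples.

def attitude_score(beliefs, evaluations):
    """Compute sum(b_i * e_i) over attributes present in both dicts."""
    return sum(beliefs[a] * evaluations[a] for a in beliefs if a in evaluations)

# Belief strength: how strongly the consumer believes the brand has the
# attribute (0..10 here, by assumption).
beliefs = {"durable": 8, "affordable": 3, "stylish": 9}
# Evaluation: how good or bad the attribute is to this consumer (-3..+3).
evaluations = {"durable": 3, "affordable": 2, "stylish": 1}

print(attitude_score(beliefs, evaluations))  # 8*3 + 3*2 + 9*1 = 39
```

Changing any belief (e.g. convincing the consumer the brand is affordable) or any evaluation shifts the overall score, which is why the later discussion of "altering components of the multiattribute model" works through exactly these two levers.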
Attitude Formation / Change: Theory Of Trying to Consume and Attitude towards
ad
The last two models were formed to look at consumers' attitudes from a different
perspective. There is the theory of trying-to-consume model, which reflects
instances in which the action or outcome is not certain but instead reflects the
consumer's attempts to consume. And our final model is the attitude-toward-the-ad
model, in which the consumer sees an ad and forms certain feelings and
judgments as a result of the ad. These feelings and judgments in turn affect the
consumer's attitude toward the ad and beliefs about the brand. These two
things combined then influence his or her attitude toward the brand.
Attitude Formation / Change: Attitudes are Learned
Attitudes are learned. This learning process is the shift from having no attitude
about a product to having an attitude. For example, new technology is always
coming out, and until something is invented we have no attitudes toward it. An
attitude can follow the purchase or consumption of a product or it can come
before the purchase, perhaps from something as simple as viewing an
advertisement for that product. Things that may influence one's attitude are
personal experience, the influence of family and friends, direct marketing, mass
media, and the Internet. Attitudes that have been formed from direct
experience are more confidently held, and therefore stronger, than attitudes
formed from indirect experience. As we discussed in class, a consumer's
personality will have an effect on how he or she perceives an advertisement. People
with a high need for cognition enjoy lots of product information, whereas those
low in need for cognition respond better to celebrities or attractive models.
Attitude Formation / Change: Methods used to change attitudes
Consumers' attitudes can be changed, however. There are five methods for
attempting to alter the attitudes of consumers. They are: (1) changing the
consumer's basic motivational function, (2) associating the product with an
admired group or event, (3) resolving two conflicting attitudes, (4) altering
components of the multiattribute model, and (5) changing consumer beliefs
about competitors' brands.
The functional approach to changing attitudes says that there are four
classifications of attitudes. They are the utilitarian function, the ego defensive
function, the value-expressive function, and the knowledge function. The
utilitarian function is when an attitude is held due to the brand's utility. A way to
change this attitude is to show a utility or purpose of the brand that consumers
might not have considered. Next is the ego-defensive function, which expresses
people's desire to protect their self-image. Showing how a product can boost
people's self-esteem and ease feelings of self-doubt is one way of changing their
attitude in this situation.
Attitude Formation / Change: Value Expressive
There is also the idea of combining several of the above functions to appeal to
different groups of people who may use the same product but for different
reasons.
Another way to change attitudes is to associate a product with an admired group
or event, such as a charitable cause. One example of this is Gap's Red campaign.
Half the profit made from the Red clothing goes to the Global Fund, which helps
women and children in Africa who are affected by HIV/AIDS.
Attitude Formation / Change: Using negative attitudes
Showing consumers that their negative attitude toward a product, brand, etc. is
not in conflict with another attitude may make them inclined to change their
negative opinion of the brand. This is just one more way of changing consumers'
attitudes.
Another solution is altering components of the multiattribute model. One way of
altering this model is changing the relative evaluation of attributes. It is easier to
persuade customers to cross over to another product when the two are similar.
They can be encouraged to shift their favorable attitude toward another version
of the product. Another way of altering the model is by changing brand beliefs,
which is changing perceptions or beliefs about the brand itself. Information
suggested about your brand, however, must be compelling and repeated.
Now that we have discussed how attitudes are formed and how they can be
altered, we will go into how attitudes affect the actions that consumers take, or
vice versa. Consumers' behavior can either precede or follow their attitude
formation.
Two explanations as to why behavior may precede attitude formation are the
cognitive dissonance theory and the attribution theory. The cognitive dissonance
theory is the discomfort or dissonance that occurs when a consumer holds
conflicting thoughts about a belief or an attitude object. An example would be
post-purchase dissonance, where the consumer thinks about the unique, positive
qualities of the brands they did not select. An ad may help to assure the
consumer that they made the right decision and ease this dissonance. The
attribution theory explains how people assign blame or credit to events on the
basis of either their behavior or the behavior of others. They may ask themselves
why they made a decision. The process of making inferences is a major part of
attitude formation and change.
There are different perspectives on the attribution theory, which include
self-perception theory, attributions toward others, attributions toward things, and
how we test our attributions.
Attitude Formation / Change: Self Perception Theory
which is attributing positive results to factors beyond one's control, and defensive
attribution, which says consumers will often accept personal credit for success
and assign the blame for failure to others or to outside causes.
Attitude Formation / Change: Attributions are Opinions
Attributions towards others and attributions towards things are the opinions
people have of things which they come into contact with. For example, when
talking to a salesperson at a store, a consumer will try to determine if the
salesperson is knowledgeable, trustworthy, and reliable. The same can be said of
attributions towards things. Consumers will judge a product's performance and
form attributions in an attempt to find out why the product meets or fails to meet
their expectations.
Testing attributions is an important step for consumers. They want to test
firsthand whether the attributions they have made towards a certain product,
service, or person are correct. People want conviction about a particular
observation and will collect additional information in order to gain it.
They may use the following criteria: Distinctiveness, consistency over time,
consistency over modality, and consensus.
Attitude Formation / Change: Distinctiveness
Hierarchical Systems
Western organizations have been heavily influenced by the command and
control structure of ancient military organizations, and by the turn-of-the-century
introduction of Scientific Management. Most organizations today are designed as
a bureaucracy in which authority and responsibility are arranged in a hierarchy.
Within the hierarchy rules, policies, and procedures are uniformly and
impersonally applied to exert control over member behaviors. Activity is
organized within sub-units (bureaus, or departments) in which people perform
specialized functions such as manufacturing, sales, or accounting. People who
perform similar tasks are clustered together.
The same basic organizational form is assumed to be appropriate for any
organization, be it a government, school, business, church, or fraternity. It is
familiar, predictable, and rational. It is what comes immediately to mind when
we discover that ...we really have to get organized!
As familiar and rational as the functional hierarchy may be, there are distinct
disadvantages to blindly applying the same form of organization to all purposeful
groups. To understand the problem, begin by observing that different groups
wish to achieve different outcomes. Second, observe that different groups have
different members, and that each group possesses a different culture. These
differences in desired outcomes, and in people, should alert us to the danger of
assuming there is any single best way of organizing. To be complete, however,
also observe that different groups will likely choose different methods through
which they will achieve their purpose. Service groups will choose different
methods than manufacturing groups, and both will choose different methods
than groups whose purpose is primarily social. One structure cannot possibly fit
all.
Organizing on Purpose
The purpose for which a group exists should be the foundation for everything its
members do, including the choice of an appropriate way to organize. The idea
is to create a way of organizing that best suits the purpose to be accomplished,
regardless of the way in which other, dissimilar groups are organized.
Only when there are close similarities in desired outcomes, culture, and methods
should the basic form of one organization be applied to another. And even then,
only with careful fine tuning. The danger is that the patterns of activity that help
one group to be successful may be dysfunctional for another group, and actually
inhibit group effectiveness. To optimize effectiveness, the form of organization
must be matched to the purpose it seeks to achieve.
The Design Process
Organization design begins with the creation of a strategy: a set of decision
guidelines by which members will choose appropriate actions. The strategy is
derived from clear, concise statements of purpose and vision, and from the
organization's basic philosophy. Strategy unifies the intent of the organization
and focuses members toward actions designed to accomplish desired outcomes.
The strategy encourages actions that support the purpose and discourages those
that do not.
Creating a strategy is planning, not organizing. To organize we must connect
people with each other in meaningful and purposeful ways. Further, we must
connect people with the information and technology necessary for them to be
successful. Organization structure defines the formal relationships among people
and specifies both their roles and their responsibilities. Administrative systems
govern the organization through guidelines, procedures and policies. Information
and technology define the process(es) through which members achieve
outcomes. Each element must support each of the others and together they
must support the organization's purpose.
Exercising Choice
Organizations are an invention of man. They are contrived social systems
through which groups seek to exert influence or achieve a stated purpose.
People choose to organize when they recognize that by acting alone they are
limited in their ability to achieve. We sense that by acting in concert we may
overcome our individual limitations.
When we organize we seek to direct, or pattern, the activities of a group of
people toward a common outcome. How this pattern is designed and
implemented greatly influences effectiveness. Patterns of activity that are
complementary and interdependent are more likely to result in the achievement
of intended outcomes. In contrast, activity patterns that are unrelated and
independent are more likely to produce unpredictable, and often unintended
results.
The process of organization design matches people, information, and technology
to the purpose, vision, and strategy of the organization. Structure is designed to
enhance communication and information flow among people. Systems are
designed to encourage individual responsibility and decision making. Technology
is used to enhance human capabilities to accomplish meaningful work. The end
product is an integrated system of people and resources, tailored to the specific
direction of the organization.
Conflict
Conflict is any situation in which two or more parties feel themselves in
opposition. It is an interpersonal process that arises from disagreements over the
goals or the methods to accomplish those goals.
The following are the important features of conflict:
There are four basic issues which may be involved in a conflict. These are:
(1) Facts- Conflicts may occur because of disagreement that the persons have
over the definition of a problem, relevant facts related to the problem, or
their authority and power.
(2) Goals- Sometimes, there may be disagreement over the goals which two
parties want to achieve. The relationship between goals of the parties may
be viewed as incompatible with the result that one goal may be achieved
at the cost of the other.
(3) Methods- Even if goals are perceived to be the same, there may be
difference over the methods, procedures, strategies, tactics, etc. through
which goals may be achieved.
(4) Values- There may be differences over the value-ethical standards,
considerations for fairness, justice etc. These differences are of more
intrinsic nature in persons and may affect the choice of goals or methods
of achieving those.
Types of conflict:
members in developing each member's perception of, and insight into, the
problems faced by various members on the job.
Third phase is inter group development. This phase aims at developing the
relationships between different departments
Fourth phase is concerned with the creation of a strategic model for the
organization, where Chief Executives and their immediate subordinates
participate in this activity.
Fifth phase is concerned with implementation of the strategic model. Planning
teams are formed for each department to identify the available and required
resources, procure them if necessary, and implement the model.
Sixth phase is concerned with the critical evaluation of the model and making
necessary adjustments for successful implementation.
Starting from the top of a company, the six stages of Management by
Objectives (MBO) are:
1. Define corporate objectives at board level
2. Analyze management tasks and devise formal job specifications, which
allocate responsibilities and decisions to individual managers
3. Set performance standards
4. Agree and set specific objectives
5. Align individual targets with corporate objectives
6. Establish a management information system to monitor achievements
against objectives
Organizational development
Burke (1982) defined organizational development (OD) as "a planned process of change in an
organization's culture through the utilization of behavioural science, technology, research
and theory." It refers to the management of change and the development of human resources.
It is a response to change (Bennis, 1969). OD is a complex educational strategy intended to
change the beliefs, attitudes, values and structure of the organization so that the organization
can better adapt to new technologies, markets and challenges.
A variety of forces cause changes in the modern organization (Hellriegel, Slocum and
Woodman, 1983). Some of these are:
technological change;
the knowledge explosion;
product and service obsolescence; and
social change.
Environment, resources and technology perform a decisive role in determining organizational
policies. If any one of these determinants changes, the policies need to be re-examined to
determine if a different organizational design would be better suited.
Approaches to OD
The major schools of thought in OD are considered in the following paragraphs.
Group Dynamics
This is a historical and traditional method of OD based on the assumption that OD activities
are process consultation (Albrecht, 1983). In this approach, an expert works at a small-group
level, using group methods, sensitivity training and other related approaches.
The Behaviour Modification School
The 'be-mod' school of OD (based on the various works of Skinner) attempts to rearrange the
reward system in the organization so as to strengthen selected 'target' behaviour on the part of
employees.
The Systems Approach
This approach aims at enhancing the overall effectiveness of the organization. The system
can be defined as having:
The OD process
The OD process entails various activities at different levels in the organization. Through
these activities, interventions are made in the ongoing organization to change the structure,
processes, behaviour or values of individuals and groups. Golembiewski, Prochl and Sink
(1981) categorized these interventions under eight headings:
Process Analysis Activities, referring to applications of behavioural science perspectives to
fathom complex and dynamic situations;
OD techniques
Techniques used for OD are considered below.
Sensitivity training
This has many applications and is still used widely, even though new techniques have
emerged (Lewin, 1981). Sensitivity training (Benny, Bradford and Lippitt, 1964) basically
aims at:
growth in effective membership;
developing the ability to learn;
stimulating the giving of help; and
developing insights to be sensitive to group processes.
These process variables - in a systems sense - interact and are interdependent.
Grid Training
Blake and Mouton's Managerial Grid
The treatment of task orientation and people orientation as two independent dimensions was a
major step in leadership studies. Many of the leadership studies conducted in the 1950s at the
University of Michigan and the Ohio State University focused on these two dimensions.
Building on the work of the researchers at these Universities, Robert Blake and Jane Mouton
(1960s) proposed a graphic portrayal of leadership styles through a managerial grid
(sometimes called the leadership grid). The grid depicts two dimensions of leader behavior:
concern for people (accommodating people's needs and giving them priority) on the y-axis,
and concern for production (keeping tight schedules) on the x-axis, with each dimension
ranging from low (1) to high (9), thus creating 81 different positions into which a leader's
style may fall.
1. Impoverished Management (1, 1): Managers with this approach are low on both the
dimensions and exercise minimum effort to get the work done from subordinates. The leader
has low concern for employee satisfaction and work deadlines and as a result disharmony and
disorganization prevail within the organization. The leaders are termed ineffective wherein
their action is merely aimed at preserving job and seniority.
2. Task Management (9, 1): Also called the dictatorial or produce-or-perish style. Here leaders are more
concerned about production and have less concern for people. The style is based on theory X
of McGregor. The employees' needs are not taken care of and they are simply a means to an
end. The leader believes that efficiency can result only through proper organization of work
systems and through elimination of people wherever possible. Such a style can definitely
increase the output of organization in short run but due to the strict policies and procedures,
high labour turnover is inevitable.
3. Middle-of-the-Road (5, 5): This is basically a compromising style wherein the leader
tries to maintain a balance between goals of company and the needs of people. The leader
does not push the boundaries of achievement resulting in average performance for
organization. Here neither employee nor production needs are fully met.
4. Country Club (1, 9): This is a collegial style characterized by low task and high people
orientation where the leader gives thoughtful attention to the needs of people thus providing
them with a friendly and comfortable environment. The leader feels that such treatment of
employees will lead to self-motivation and to people working hard on their own.
However, a low focus on tasks can hamper production and lead to questionable results.
5. Team Management (9, 9): Characterized by high people and task focus, the style is based
on the theory Y of McGregor and has been termed as most effective style according to Blake
and Mouton. The leader feels that empowerment, commitment, trust, and respect are the key
elements in creating a team atmosphere which will automatically result in high employee
satisfaction and production.
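The five named styles above occupy the corners and centre of the 9 x 9 grid. As an illustration only (Blake and Mouton name just these five anchor positions; the nearest-anchor rule and the function below are assumptions, not part of the original model), a short Python sketch maps a (production, people) score pair to the closest named style:

```python
# Illustrative classifier for Blake and Mouton's managerial grid. Each axis
# runs 1..9, giving 81 positions; the five named styles sit at the corners and
# centre. Bucketing intermediate scores by nearest anchor is an assumption.

def grid_style(production, people):
    """Map (concern for production, concern for people), each 1..9,
    to the nearest of the five named grid styles."""
    if not (1 <= production <= 9 and 1 <= people <= 9):
        raise ValueError("each concern score must lie between 1 and 9")
    anchors = {
        (1, 1): "Impoverished Management",
        (9, 1): "Task Management",
        (5, 5): "Middle-of-the-Road",
        (1, 9): "Country Club",
        (9, 9): "Team Management",
    }
    # Squared Euclidean distance to each named anchor position.
    nearest = min(anchors, key=lambda a: (a[0] - production) ** 2 + (a[1] - people) ** 2)
    return anchors[nearest]

print(grid_style(9, 9))  # Team Management
print(grid_style(8, 2))  # closest anchor is (9, 1): Task Management
```

A grid-training questionnaire effectively places a manager at one of the 81 positions; the classifier above just shows which named style that position sits closest to.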
Advantages of Blake and Mouton's Managerial Grid
The Managerial or Leadership Grid is used to help managers analyze their own leadership
styles through a technique known as grid training. This is done by administering a
questionnaire that helps managers identify how they stand with respect to their concern for
production and people. The training is basically aimed at helping leaders reach the ideal
(9, 9) state.
Limitations of Blake and Mouton's Managerial Grid
The model ignores internal and external constraints, circumstances and context. There are
also further aspects of leadership that it does not cover.
Grid training is an outgrowth of the managerial grid approach to leadership (Blake and
Mouton, 1978). It is an instrumental approach to laboratory training. Sensitivity training is
supplemented with self-administered instruments (Benny, Bradford and Lippitt, 1964). The
analysis of these instruments helps in group development and in the learning of group
members. This technique is widely used and has proved effective.
Grid training for OD is completed in six phases. They are:
laboratory-seminar training, which aims at acquainting participants with concepts and
material used in grid training;
a team development phase, involving the coming together of members from the same
department to chart out how they will attain a (9, 9) position on the grid;
inter-group development aims at overall OD. During this phase, conflict situations between
groups are identified and analysed;
organization goal setting is based on participative management, where participants
contribute to and agree upon important goals for the organization;
goal attainment aims at achieving goals which were set during the phase of organizational
goal setting; and
stabilization involves the evaluation of the overall programme and making suggestions for
changes if appropriate.
Survey Feedback
Survey feedback is based on the study (survey) of the unit of analysis (such as work group, a
department or a whole organization) by using questionnaires (Taylor and Bowers, 1972). The
resulting data are then used to identify and analyse problems and propose a suitable action
plan to overcome them. A typical survey questionnaire would generate information on
leadership, organizational climate and satisfaction (Table 1).
Table 1. Typical factors covered in a survey research questionnaire:
satisfaction with pay
satisfaction with the work group
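The survey-feedback step described above can be sketched as a simple aggregation: each respondent's questionnaire answers are averaged per factor so the unit can review its own profile and identify problem areas. The factor names, the 1-to-5 scale, and the responses below are invented for illustration:

```python
# Sketch of survey feedback: aggregate Likert-scale questionnaire responses
# (1 = very dissatisfied .. 5 = very satisfied) into per-factor means that a
# work group can discuss. Factors, scale, and answers are invented examples.
from statistics import mean

responses = [
    {"satisfaction with pay": 2, "satisfaction with the work group": 4, "leadership": 3},
    {"satisfaction with pay": 3, "satisfaction with the work group": 5, "leadership": 4},
    {"satisfaction with pay": 1, "satisfaction with the work group": 4, "leadership": 3},
]

def factor_means(responses):
    """Average each questionnaire factor across all respondents."""
    return {f: round(mean(r[f] for r in responses), 2) for f in responses[0]}

for factor, score in sorted(factor_means(responses).items()):
    print(factor, score)
```

In practice the low-scoring factors (here, satisfaction with pay) become the starting point for the problem analysis and action planning the text describes.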
Reducing involvement and terminating: this is the mutual agreement to cease the
consultation.
Third Party
The third-party peace-making technique attempts to settle inter-personal and inter-group
conflicts using modern concepts and methods of conflict management. This technique
analyses the processes involved, discerns the problem on the basis of the analysis, and
suitably manages the conflict situation.
Team building
Team building has been considered the most popular OD technique in recent years, so much
so that it has replaced sensitivity training. It aims at improving overall performance, tends to
be more task-oriented, and can be used with family groups (members from the same unit) as
well as special groups (such as task forces, committees and inter-departmental groups).
There are five major elements involved in team building (French and Bell, 1978):
problem solving, decision making, role clarification and goal setting for accomplishing the
assigned tasks;
building and maintaining effective inter-personal relationships;
understanding and managing group processes and culture;
role analysis techniques for role clarification and definition; and
role negotiation techniques.
Transactional Analysis
Transactional analysis is widely used by management practitioners to analyse group
dynamics and inter-personal communications. It deals with aspects of identity, maturation,
insight and awareness (Berne, 1964). As a tool for OD, it attempts to help people understand
their egos - both their own and those of others - to allow them to interact in a more
meaningful manner with one another (Huse, 1975). It attempts to identify people's dominant
ego states and to help people understand and analyse their transactions with others. It is quite
effective if applied in the early stage of the diagnostic phase.
Team Building
Teams typically develop through five stages: Forming, Storming, Norming,
Performing and Adjourning.
Managers have a position of authority vested in them by the company, and their
subordinates work for them and largely do as they are told. Management style is
transactional, in that the manager tells the subordinate what to do, and the
subordinate does this not because they are a blind robot, but because they have
been promised a reward (at minimum their salary) for doing so.
Work focus
Managers are paid to get things done (they are subordinates too), often within
tight constraints of time and money. They thus naturally pass on this work focus
to their subordinates.
Seek comfort
An interesting research finding about managers is that they tend to come from
stable home backgrounds and to have led relatively normal and comfortable lives. This
leads them to be relatively risk-averse and they will seek to avoid conflict where
possible. In terms of people, they generally like to run a 'happy ship'.
Leaders have followers
Leaders do not have subordinates - at least not when they are leading. Many
organizational leaders do have subordinates, but only because they are also
managers. But when they want to lead, they have to give up formal authoritarian
control, because to lead is to have followers, and following is always a voluntary
activity.
Charismatic, transformational style
Telling people what to do does not inspire them to follow you. You have to appeal
to them, showing how following you will lead to their hearts' desire. They must
want to follow you enough to stop what they are doing and perhaps walk into
danger and situations that they would not normally consider risking.
Leaders with a stronger charisma find it easier to attract people to their cause.
As a part of their persuasion they typically promise transformational benefits,
such that their followers will not just receive extrinsic rewards but will somehow
become better people.
People focus
Although many leaders have a charismatic style to some extent, this does not
require a loud personality. They are always good with people, and quiet styles
that give credit to others (and take blame on themselves) are very effective at
creating the loyalty that great leaders engender.
Although leaders are good with people, this does not mean they are friendly with
them. In order to keep the mystique of leadership, they often retain a degree of
separation and aloofness.
This does not mean that leaders do not pay attention to tasks - in fact they are
often very achievement-focused. What they do realize, however, is the
importance of enthusing others to work towards their vision.
Seek risk
In the same study that showed managers as risk-averse, leaders appeared as
risk-seeking, although they are not blind thrill-seekers. When pursuing their
vision, they consider it natural to encounter problems and hurdles that must be
overcome along the way. They are thus comfortable with risk and will see routes
that others avoid as potential opportunities for advantage and will happily break
rules in order to get things done.
A surprising number of these leaders had some form of handicap in their lives
which they had to overcome. Some had traumatic childhoods, some had
problems such as dyslexia, others were shorter than average. This perhaps
taught them the independence of mind that is needed to go out on a limb and
not worry about what others are thinking about you.
In summary
This table summarizes the above (and more) and gives a sense of the differences
between being a leader and being a manager. This is, of course, an illustrative
characterization: there is a whole spectrum between the ends of these scales
along which each role can range, and many people lead and manage at the same
time, so they may display a combination of behaviors.
Subject        Leader               Manager
Essence        Change               Stability
Focus          Leading people       Managing work
Have           Followers            Subordinates
Horizon        Long-term            Short-term
Seeks          Vision               Objectives
Approach       Sets direction       Plans detail
Decision       Facilitates          Makes
Power          Personal charisma    Formal authority
Appeal to      Heart                Head
Energy         Passion              Control
Culture        Shapes               Enacts
Dynamic        Proactive            Reactive
Persuasion     Sell                 Tell
Style          Transformational     Transactional
Exchange       Excitement for work  Money for work
Likes          Striving             Action
Wants          Achievement          Results
Risk           Takes                Minimizes
Rules          Breaks               Makes
Conflict       Uses                 Avoids
Direction      New roads            Existing roads
Truth          Seeks                Establishes
Concern        What is right        Being right
Credit         Gives                Takes
Blame          Takes                Blames
Rensis Likert
Management Systems and Styles
Dr. Rensis Likert has conducted much research on human behavior within
organizations, particularly in the industrial situation.
He has examined different types of organizations and leadership styles, and he
asserts that to achieve maximum profitability, good labor relations and high
productivity, every organization must make optimum use of its human assets.
The form of organization which will make the greatest use of human capacity,
Likert contends, is one in which:
Employees must be seen as people who have their own needs, desires and
values and their self-worth must be maintained or enhanced.
Supportive relationships must exist within each work group. These are
characterized not by actual support, but by mutual respect.
The work groups which form the nuclei of the participative group system, are
characterized by the group dynamics:
The group has existed long enough to have developed a well established
relaxed working relationship.
The members of the group are loyal to it and to each other since they
have a high degree of mutual trust.
The norms, values and goals of the group are an expression of the values
and needs of its members.
The members perform a "linking-pin" function and try to keep the goals of
the different groups to which they belong in harmony with each other.
Characteristics of a bureaucratic organization:
Division of labour. Tasks are divided among specialists, each with clearly
defined duties and line of authority. All employees thus have clearly
defined roles in a system of authority and subordination.
Formal hierarchical structure. An organization is organized into a hierarchy
of authority and follows a clear chain of command. The hierarchical
structure effectively delineates the lines of authority and the subordination
of the lower levels to the upper levels of the hierarchical structure.
Personnel hired on grounds of technical competence. Appointment to a
position within the organization is made on the grounds of technical
competence. Work is assigned based on the experience and competence
of the individual.
Managers are salaried officials. A manager is a salaried official and does not
own the administered unit. All elements of a bureaucracy are defined with
clearly defined roles and responsibilities and are managed by trained and
experienced specialists.
Written documents. All decisions, rules and actions taken by the
organization are formulated and recorded in writing. Written documents
ensure that there is continuity of the organization's policies and
procedures.
Management Thoughts
There are a few people in every age who produce new, paradigm-shifting ideas.
Sometimes these ideas don't catch on right away, but as time passes, their worth
becomes more evident. The art of management is an old one, but it was a fairly
static one until about 150 years ago, when changes in technology, e.g. railroads
and telegraph, changed our economy quite dramatically, and at the same time
changed the discipline of management. We don't really have much perspective
yet. Without it, it's hard to say what ideas will endure, and who the real pioneers
will turn out to be. But, guessing, here are some of the people in our
management heroes gallery.
* George Box
* Philip Crosby
* W. Edwards Deming
* John Dewey
* Fredrick Herzberg
Kaoru Ishikawa
Kaoru Ishikawa wanted to change the way people think about work. He urged managers to
resist becoming content with merely improving a product's quality, insisting that quality
improvement can always go one step further. His notion of company-wide quality control
called for continued customer service. This meant that a customer would continue receiving
service even after receiving the product. This service would extend across the company itself
in all levels of management, and even beyond the company to the everyday lives of those
involved. According to Ishikawa, quality improvement is a continuous process, and it can
always be taken one step further.
With his cause and effect diagram (also called the "Ishikawa" or "fishbone" diagram) this
management leader made significant and specific advancements in quality improvement.
With the use of this new diagram, the user can see all possible causes of a result, and
hopefully find the root of process imperfections. By pinpointing root problems, this diagram
provides quality improvement from the "bottom up." Dr. W. Edwards Deming -- one of
Ishikawa's colleagues -- adopted this diagram and used it to teach Total Quality Control in
Japan as early as World War II. Both Ishikawa and Deming used this diagram as one of the
first tools in the quality management process.
Ishikawa also showed the importance of the seven quality tools: the cause-and-effect
diagram, control chart, run chart, histogram, scatter diagram, Pareto chart, and flowchart.
Additionally, Ishikawa explored the concept of quality circles, a Japanese philosophy which
he drew from obscurity into worldwide acceptance. Ishikawa believed in the importance of
support and leadership from top
level management. He continually urged top level executives to take quality control courses,
knowing that without the support of the management, these programs would ultimately fail.
He stressed that it would take firm commitment from the entire hierarchy of employees to
reach the company's potential for success. Another area of quality improvement that Ishikawa
emphasized is quality throughout a product's life cycle -- not just during production.
Although he believed strongly in creating standards, he felt that standards were like
continuous quality improvement programs -- they too should be constantly evaluated and
changed. Standards are not the ultimate source of decision making; customer satisfaction is.
He wanted managers to consistently meet consumer needs; from these needs, all other
decisions should stem. Besides his own developments, Ishikawa drew and expounded on
principles from other quality gurus, including those of one man in particular: W. Edwards
Deming, creator of the Plan-Do-Check-Act model. Ishikawa expanded Deming's four steps
into the following six:
1. Determine goals and targets.
2. Determine methods of reaching goals.
3. Engage in education and training.
4. Implement work.
5. Check the effects of implementation.
6. Take appropriate action.
* Joseph M. Juran
* Kurt Lewin
* Lawrence D. Miles
* Alex Osborne
* Walter Shewhart
* Genichi Taguchi
* Frederick Winslow Taylor
* J. Edgar Thomson
Wages
Union rivalry
Political interference
Unfair labor practices
Multiplicity of labour laws
Others: Industrial relations managers stoke the fire and then try to
extinguish it, all to justify their own existence in the organization.
Collective bargaining
Code of discipline
Grievance procedure
Arbitration
Conciliation
Adjudication
Consultative machinery
(c) Facilitative Services: canteen, rest room, lunch room, housing facility
(rent-free or loan), medical facility, washing facility, education facility,
leave travel concession
Social security:
According to the ILO, social security is the security that society furnishes
through appropriate organisation against certain risks to which its members
are exposed.
The term social security originated in the U.S.A. In 1935, the Social Security
Act was passed there and a Social Security Board was established to govern and
administer the scheme of unemployment, sickness and old-age insurance.
Business statistics
Define the term probability
Probability measures provide the decision-maker with the means for quantifying
the uncertainties which affect the choice of appropriate actions. Understanding
probability and taking decisions in light of it minimizes risk.
What is the a priori approach to probability?
The a priori approach assumes that all the possible outcomes of an experiment
are mutually exclusive and equally likely.
The words equally likely convey the notion of equally probable, and mutually
exclusive means that if one event occurs, the other cannot occur.
What are the steps involved in fitting a Binomial Distribution?
When a binomial distribution is to be fitted to observed data, the following
procedure is adopted:
1. Find the mean of the observed distribution and equate it to np to estimate p.
2. Compute q = 1 - p.
3. Expand (q + p)^n and multiply each term by the total observed frequency N to
obtain the expected frequencies.
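The fitting procedure above can be sketched in a few lines of Python. This is a minimal illustration, and the observed frequencies (4 trials per experiment, 160 experiments) are hypothetical data, not values from the text:

```python
from math import comb

def fit_binomial(observed, n):
    """Fit a binomial distribution to an observed frequency table.

    observed: dict mapping x (number of successes) -> observed frequency
    n: number of trials per experiment
    Returns (p, expected) where expected maps x -> expected frequency.
    """
    N = sum(observed.values())                          # total frequency
    mean = sum(x * f for x, f in observed.items()) / N  # observed mean
    p = mean / n                                        # step 1: mean = np
    q = 1 - p                                           # step 2
    # step 3: expected frequency N * C(n, x) * p^x * q^(n-x) for each x
    expected = {x: N * comb(n, x) * p**x * q**(n - x) for x in range(n + 1)}
    return p, expected

# hypothetical data: 4 coins tossed 160 times, counting heads each time
obs = {0: 17, 1: 52, 2: 54, 3: 31, 4: 6}
p, expected = fit_binomial(obs, n=4)
```

The expected frequencies sum to the total observed frequency by construction, which is a quick sanity check on the fit.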
The exponential pdf has no shape parameter, as it has only one shape.
The exponential pdf is always convex and is stretched to the right as λ
decreases in value.
The value of the pdf function is always equal to the value of λ at T = 0.
As T increases, f(T) decreases toward zero.
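These properties are easy to verify numerically. A minimal sketch, using an arbitrary illustrative rate λ = 0.5:

```python
from math import exp

def exponential_pdf(t, lam):
    """Exponential density f(T) = lam * e^(-lam * T), defined for T >= 0."""
    return lam * exp(-lam * t)

f0 = exponential_pdf(0.0, lam=0.5)                       # equals lam at T = 0
decreasing = exponential_pdf(2.0, 0.5) < exponential_pdf(1.0, 0.5)
```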
The probability that a success will occur is proportional to the size of the
region.
Note that the specified region could take many forms. For instance, it could be a length, an
area, a volume, a period of time, etc.
Notation
The following notation is helpful when we talk about the Poisson distribution.
Poisson Distribution
A Poisson random variable is the number of successes that result from a Poisson
experiment. The probability distribution of a Poisson random variable is called a Poisson
distribution.
Given the mean number of successes (μ) that occur in a specified region, we can compute the
Poisson probability based on the following formula:
Poisson Formula. Suppose we conduct a Poisson experiment, in which the
average number of successes within a given region is μ. Then, the Poisson
probability is:
P(x; μ) = (e^-μ)(μ^x) / x!
where x is the actual number of successes and e ≈ 2.71828.
Example 1
The average number of homes sold by the Acme Realty company is 2 homes per day. What is
the probability that exactly 3 homes will be sold tomorrow?
Solution: This is a Poisson experiment in which we know the following: μ = 2 and x = 3.
Plugging these values into the Poisson formula:
P(3; 2) = (e^-2)(2^3) / 3! = (0.13534)(8) / 6 = 0.180
Thus, the probability that exactly 3 homes will be sold tomorrow is 0.180.
A cumulative Poisson probability refers to the probability that the Poisson random variable
is greater than or equal to some specified lower limit and less than or equal to some
specified upper limit.
Example 1
Suppose the average number of lions seen on a 1-day safari is 5. What is the probability that
tourists will see fewer than four lions on the next 1-day safari?
Solution: This is a Poisson experiment in which we know the following: μ = 5 and x = 0, 1, 2, or 3.
To solve this problem, we need to find the probability that tourists will see 0, 1, 2, or 3 lions.
Thus, we need to calculate the sum of four probabilities: P(0; 5) + P(1; 5) + P(2; 5) + P(3; 5).
To compute this sum, we use the Poisson formula:
P(x ≤ 3; 5) = P(0; 5) + P(1; 5) + P(2; 5) + P(3; 5)
P(x ≤ 3; 5) = [ (e^-5)(5^0) / 0! ] + [ (e^-5)(5^1) / 1! ] + [ (e^-5)(5^2) / 2! ] + [ (e^-5)(5^3) / 3! ]
P(x ≤ 3; 5) = [ (0.006738)(1) / 1 ] + [ (0.006738)(5) / 1 ] + [ (0.006738)(25) / 2 ] +
[ (0.006738)(125) / 6 ]
P(x ≤ 3; 5) = [ 0.006738 ] + [ 0.033690 ] + [ 0.084224 ] + [ 0.140375 ]
P(x ≤ 3; 5) = 0.2650
Thus, the probability of seeing no more than 3 lions is 0.2650.
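Both worked examples can be reproduced with a few lines of Python; a minimal sketch:

```python
from math import exp, factorial

def poisson_pmf(x, mu):
    """Poisson probability P(x; mu) = e^-mu * mu^x / x!"""
    return exp(-mu) * mu ** x / factorial(x)

def poisson_cdf(x, mu):
    """Cumulative probability P(X <= x; mu)."""
    return sum(poisson_pmf(k, mu) for k in range(x + 1))

p_three_homes = poisson_pmf(3, 2)  # Acme Realty: mu = 2 homes/day, exactly 3 sold
p_few_lions = poisson_cdf(3, 5)    # safari: mu = 5 lions/day, fewer than 4 seen
```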
The weekly wages of 2000 workers in a factory are normally distributed with a mean of
Rs. 200 and a standard deviation of Rs. 20.
Estimate the lowest weekly wage of the 200 highest paid workers and the highest weekly
wage of the 200 lowest paid workers.
[Given Φ(1.28) = 0.90]
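Since 200 of the 2000 workers make up 10% of each tail, the answers are the 90th and 10th percentiles of N(200, 20), i.e. 200 ± 1.28 × 20. A quick numerical check with Python's statistics module (which uses the exact z value of about 1.2816 rather than the rounded 1.28):

```python
from statistics import NormalDist

wages = NormalDist(mu=200, sigma=20)  # weekly wages in Rs.

lowest_of_top_200 = wages.inv_cdf(0.90)      # about Rs. 225.6
highest_of_bottom_200 = wages.inv_cdf(0.10)  # about Rs. 174.4
```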
As regards the environment that is specific to the given business, the firm
studies:
1
2
3
4
5
6
In marketing planning
In marketing implementation
In marketing control
Periodic reports
Triggered reports
Demand reports
Plan reports
Specialized databases
Customer database
Marketing intelligence
Data mining and data warehousing
Stages in the family life cycle:
1. Bachelor stage
2. Newly married couples
3. Full nest I: youngest child under six; home purchasing at peak; liquid assets
low; interested in new, advertised products. Buy: washers, dryers, TV, baby
food, chest rub and cough medicines, vitamins, dolls, wagons, sleds, skates.
4. Full nest II
5. Full nest III
6. Empty nest I
7. Empty nest II
8. Solitary survivor I
9. Solitary survivor II
Marketing research
Marketing research is a systematic, objective and exhaustive search for and
study of the facts relating to any problem in the field of marketing (Richard Crisp).
Classification of marketing research jobs (based on the subject of the research):
Routine problem analysis and research on non-routine problems
Research on consumer
Research on market/
Intangibility: Refers to the aspect of a service that is not associated with any
physical form or characteristics. It is most pronounced in pure service
elements, like a lecture given by a professor.
Inseparability: It means that the production and consumption of the
service are inextricably intertwined. Hence, the consumer's presence is in
most cases necessary at the time of production. Goods are usually
produced, sold and consumed; whereas services are usually sold first and
then produced and consumed.
Heterogeneity: The services offered are not identical at all times to all
customers. This feature of service is called heterogeneity. The quality of
a service depends on the person who provides the service and the time
when it is provided. Even though standard systems may be used to handle a
flight reservation or book a car for service, each unit of service differs from
the others.
Perishability: This means that the service units cannot be stocked. If a
seat is unfilled when the plane leaves or the play starts, it cannot be
stored and sold next day or next week; that revenue is lost forever.
Difference between physical goods and services
  Physical goods                                    Services
1 Tangible                                          Intangible
2 Homogeneous                                       Heterogeneous
4 A thing                                           An activity or process
5 Core value produced in factory                    Core value produced in buyer-seller interactions
6 Customers do not participate in the               Customers participate in the
  production process                                production process
8 Transfer of ownership                             No transfer of ownership
This process is effective for developing all types of business, and delivers business growth
via:
company in the chain continues to perform a separate task. In an administered VMS, one
member of the channel is large and powerful enough to coordinate the activities of the other
members without an ownership stake. Finally, a contractual VMS consists of independent
firms joined together by contract for their mutual benefit. One type of contractual VMS is a
retailer cooperative, in which a group of retailers buy from a jointly owned wholesaler.
Another type of contractual VMS is a franchise organization, in which a producer licenses a
wholesaler to distribute its products.
The concept behind vertical marketing systems is similar to vertical integration. In vertical
integration, a company expands its operations by assuming the activities of the next link in
the chain of distribution. For example, an auto parts supplier might practice forward
integration by purchasing a retail outlet to sell its products. Similarly, the auto parts supplier
might practice backward integration by purchasing a steel plant to obtain the raw materials
needed to manufacture its products. Vertical marketing should not be confused with
horizontal marketing, in which members at the same level in a channel of distribution band
together in strategic alliances or joint ventures to exploit a new marketing opportunity.
As Tom Egelhoff wrote in an online article entitled "How to Use Vertical Marketing
Systems," VMS holds both advantages and disadvantages for small businesses. The main
advantage of VMS is that your company can control all of the elements of producing and
selling a product. In this way, you are able to see the whole picture, anticipate problems,
make changes as they become necessary, and thus increase your efficiency. However, being
involved in all stages of distribution can make it difficult for a small business owner to keep
track of what is happening. In addition, the arrangement can fail if the personalities of the
different areas do not fit together well.
For small business owners interested in forming a VMS, Egelhoff recommended starting out
by developing close relationships with suppliers and distributors. "What suppliers or
distributors would you buy if you had the money? These are the ones to work with and form a
strong relationship," he stated. "Vertical marketing can give many companies a major
advantage over their competitors."
Corporate Strategy
Key words: Strategic management,
[Figure: The strategic management process: spelling out the organization's
mission and objectives; environmental scanning (opportunities and threats);
organizational analysis (strengths and weaknesses); developing strategic
alternatives; evaluation of strategic alternatives; choice of strategy.]
You need to run faster and faster to remain at the same place
H. Igor Ansoff first published the now well-known growth vector matrix, or
product-market matrix, in the Harvard Business Review (Sep-Oct 1957). The
matrix also appeared in Ansoff's later book, Corporate Strategy, published in
1965. Although the matrix was published a long time ago, it remains one of the
most popular matrices and is used to identify the basic alternative strategies
open to a firm wanting to grow.
Ansoff developed the matrix out of his realization that a firm needs a
well-defined scope and growth direction. For most companies growth is often the
prerequisite for survival.
Ansoff felt that many theorists had too broad a concept of business and
that the traditional identification of a firm with a particular industry had
become too narrow. This was because many firms had acquired a diverse range of
products through policies of vertical and horizontal integration to protect
their existing markets, and also through new product development, undertaken to
exploit technological innovations and to develop new markets with opportunities
for growth.
The vector matrix is based on joint consideration of the implications of change
in the product (technology) and/or the market, and is perhaps the simplest and
most basic statement of the strategic alternatives open to a firm that desires
growth.
The Ansoff matrix (products vs. markets):
Existing products, existing markets (Market penetration strategy):
1. More purchase and usage from existing customers
2. Gain customers from competitors
3. Convert non-users into users
New products, existing markets (Product development strategy):
1. Product modification via new features
2. Different quality levels
3. New products
Existing products, new markets (Market development strategy):
1. New market segments
2. New distribution channels
3. New geographic areas
New products, new markets (Diversification strategy):
1. Organic growth
2. Joint ventures
3. Mergers
4. Acquisitions/take-overs
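The four cells of the Ansoff matrix reduce to a simple lookup from (product newness, market newness) to a growth strategy; a minimal sketch:

```python
# Map Ansoff's (product, market) combinations onto the four growth strategies.
ANSOFF_MATRIX = {
    ("existing", "existing"): "market penetration",
    ("new", "existing"): "product development",
    ("existing", "new"): "market development",
    ("new", "new"): "diversification",
}

def ansoff_strategy(product, market):
    """Return the Ansoff growth strategy for the given product/market newness."""
    return ANSOFF_MATRIX[(product, market)]
```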
Product strategies
Marketing strategies which are based on the product element are called product
strategies. Product strategies are of two types:
(a) Strategies based on product mix
(b) Strategies based on product life cycle
Product modification
Product elimination
Diversification
[Table: Product life cycle stages (Introduction, Growth, Maturity, Decline)
against sales characteristics: sales (low; fast growth; slow growth; decline),
profit, strategic thrust, customer targets, competition, and differential
advantage.]
[Table: Product life cycle stages (Introduction, Growth, Maturity, Decline)
against the marketing mix: product, price, promotion, advertising focus, and
distribution.]
[Table: product-market combinations (existing/new products against existing/new
markets) for innovative and non-innovative, mobile and immobile firms.]
BCG Model
[Figure: BCG growth-share matrix, plotting market growth rate (high/low)
against relative market share (high/low): Stars (high growth, high share),
Question marks (high growth, low share), Cash cows (low growth, high share),
Dogs (low growth, low share).]
Porter's Generic Strategies
Strategic dimensions and group mapping
Competitor analysis
Industry analysis
Fragmentation, maturity and decline
Competitive strategy
Grand strategies:
Stability strategies
Expansion strategies
Retrenchment strategies
Combination strategies
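The BCG classification is a two-way split on market growth and relative market share; a minimal sketch, where the 10% growth and 1.0 relative-share cut-offs are common illustrative thresholds, not values from the text:

```python
def bcg_quadrant(market_growth, relative_share,
                 growth_cutoff=0.10, share_cutoff=1.0):
    """Classify a business unit into a BCG growth-share quadrant."""
    high_growth = market_growth >= growth_cutoff
    high_share = relative_share >= share_cutoff
    if high_growth and high_share:
        return "star"
    if high_growth:
        return "question mark"
    if high_share:
        return "cash cow"
    return "dog"
```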
Corporate restructuring
Transnationalisation of world economy
World trade organization
The Uruguay round of trade negotiations, after more than seven years of
deliberation, was wrapped up on 14th Dec, 1993 and was formalized by more than
120 countries on April 15, 1994. The WTO came into existence on Jan 1, 1995.
Functions:
1. To facilitate the implementation, administration and operation of Uruguay
round agreements.
2. To review national trade policies
3. To provide a forum for negotiations among member countries on their
multilateral trade relations
4. To cooperate with other international institutions, especially the IMF and
the World Bank in order to ensure more meaningful compatibility in global
economic policies
5. To administer the trade dispute settlement procedures
Agreement on agriculture:
The tariffs resulting from the transformation of non-tariff barriers, as well as
other tariffs on agricultural products, are to be reduced on an average by 36% in the case of
Balanced Scorecard
The balanced scorecard is a strategic planning and management system that is
used extensively in business and industry, government, and nonprofit
organizations worldwide to align business activities to the vision and strategy of
the organization, improve internal and external communications, and monitor
organization performance against strategic goals. It was originated by Drs.
Robert Kaplan (Harvard Business School) and David Norton as a performance
measurement framework that added strategic non-financial performance
measures to traditional financial metrics to give managers and executives a
more 'balanced' view of organizational performance. While the phrase balanced
scorecard was coined in the early 1990s, the roots of this type of approach
are deep, and include the pioneering work of General Electric on performance
measurement reporting in the 1950s and the work of French process engineers
(who created the Tableau de Bord, literally a "dashboard" of performance
measures) in the early part of the 20th century.
The balanced scorecard has evolved from its early use as a simple performance
measurement framework to a full strategic planning and management system.
The new balanced scorecard transforms an organization's strategic plan from
an attractive but passive document into the "marching orders" for the
organization on a daily basis. It provides a framework that not only provides
performance measurements, but also helps planners identify what should be done
and measured.
Adapted from Robert S. Kaplan and David P. Norton, Using the Balanced
Scorecard as a Strategic Management System, Harvard Business Review
(January-February 1996): 76.
Perspectives
The balanced scorecard suggests that we view the organization from four
perspectives, and to develop metrics, collect data and analyze it relative to each
of these perspectives:
The Learning & Growth Perspective
This perspective includes employee training and corporate cultural attitudes
related to both individual and corporate self-improvement. In a knowledge-worker
organization, people -- the only repository of knowledge -- are the main
resource. In the current climate of rapid technological change, it is becoming
necessary for knowledge workers to be in a continuous learning mode. Metrics
can be put into place to guide managers in focusing training funds where they
can help the most. In any case, learning and growth constitute the essential
foundation for success of any knowledge-worker organization.
Kaplan and Norton emphasize that 'learning' is more than 'training'; it also
includes things like mentors and tutors within the organization, as well as that
ease of communication among workers that allows them to readily get help on a
problem when it is needed. It also includes technological tools; what the Baldrige
criteria call "high performance work systems."
The Business Process Perspective
This perspective refers to internal business processes. Metrics based on this
perspective allow the managers to know how well their business is running, and
whether its products and services conform to customer requirements (the
mission). These metrics have to be carefully designed by those who know these
processes most intimately; with our unique missions these are not something
that can be developed by outside consultants.
The Customer Perspective
Recent management philosophy has shown an increasing realization of the
importance of customer focus and customer satisfaction in any business. These
are leading indicators: if customers are not satisfied, they will eventually find
other suppliers that will meet their needs. Poor performance from this
perspective is thus a leading indicator of future decline, even though the current
financial picture may look good.
In developing metrics for satisfaction, customers should be analyzed in terms of
kinds of customers and the kinds of processes for which we are providing a
product or service to those customer groups.
The Financial Perspective
Kaplan and Norton do not disregard the traditional need for financial data. Timely
and accurate funding data will always be a priority, and managers will do
whatever necessary to provide it. In fact, often there is more than enough
handling and processing of financial data. With the implementation of a
corporate database, it is hoped that more of the processing can be centralized
and automated. But the point is that the current emphasis on financials leads to
the "unbalanced" situation with regard to other perspectives. There is perhaps a
need to include additional financial-related data, such as risk assessment and
cost-benefit data, in this category.
Strategy Mapping
Strategy maps are communication tools used to tell a story of how value is
created for the organization. They show a logical, step-by-step connection
between strategic objectives (shown as ovals on the map) in the form of a
cause-and-effect chain. Generally speaking, improving performance in the objectives
found in the Learning & Growth perspective (the bottom row) enables the
organization to improve its Internal Process perspective Objectives (the next row
up), which in turn enables the organization to create desirable results in the
Customer and Financial perspectives (the top two rows).
Corporate level
strategies
Stability
Expansion
Retrenchment
Combination
Types of merger
There are four types of merger, viz., horizontal merger, vertical merger,
concentric merger and conglomerate merger. Horizontal mergers normally involve
the merger of two or more companies which are producing similar products or
rendering similar services, i.e. products or services which compete directly
with each other. This type of merger normally results in a reduction in the
number of players in that particular industry and may reduce or eliminate
competition. Vertical mergers involve the merger of two companies, where one
of them is an actual or potential supplier of goods or services to the other.
The object of this kind of merger could be to ensure a source of supply or an
outlet for products, and the effect may be improved efficiency.
In concentric or congeneric mergers, the two companies may be related through
the basic technologies, production processes or markets. The merged company
provides an extension of product line, market participation or technology to
the surviving company. Such mergers provide greater opportunities to diversify
into a related market having a higher return than the company enjoyed earlier.
Conglomerate mergers neither constitute the bringing together of competitors
nor have a vertical connection. They involve a predominant element of
diversification of activities. Thus, in this kind of merger, one company
deriving most of its revenue from a particular industry acquires companies
operating in other industries, with a view to obtaining greater stability of
earnings through diversification or benefits of economies of scale, etc.
[Figure: The family tree of strategic alternatives: Stability (pause/proceed
with caution, no change, do nothing); Expansion (concentration, integration
(horizontal, vertical), diversification (concentric, conglomerate),
internationalization (multidomestic, global, transnational), cooperation
(merger, takeover (friendly, hostile), joint venture, strategic alliance));
Retrenchment (turnaround, divestment, liquidation); Combination (simultaneous,
sequential, simultaneous and sequential). Diversification may be marketing
related, technology related, or marketing and technology related.]
What is industry lifecycle?
Like other living creatures, an industry also has its circle of life. The
industry lifecycle imitates the human lifecycle. The stages of the industry
lifecycle include fragmentation, shake-out, maturity and decline (Kotler 2003).
These stages are described in the following sections.
What are the main aspects of industry lifecycle?
Fragmentation Stage
Fragmentation is the first stage of the new industry. This is the stage when the
new industry develops the business. At this stage, the new industry normally
arises when an entrepreneur overcomes the twin problems of innovation and
invention, and works out how to bring the new products or services into the
market (Ayres et al., 2003). For example, air travel services of major airlines in
Europe were sold to the target market at a high price. Therefore, the majority of
airlines' customers in Europe were those people with high incomes who could
afford premium prices for faster travel.
In 1985, Ryanair made a huge change in the European airline industry. Ryanair
was the first airline to introduce low-cost air travel in Europe. At that time,
Ryanair's services were perceived as an innovation in the European airline
industry (Le Bel, 2005). Ryanair's tickets were half the price of British
Airways', and some of its sales promotions were as low as 0.01. This made
people think that air travel was not just made for the rich, but for everybody
(Haley & Tan 1999).
Ryanair overcame the twin problems of innovation and invention in the airline
industry by inventing air travel services that could serve passengers with tight
budgets and those who just wanted to reach their destination without breaking
their bank savings. Ryanair achieved this goal by eliminating unnecessary
services offered by traditional airlines (Kaynak & Kucukemiroglu, 1993). It does
not offer free meals, uses paper-free air tickets, does away with the
mile-collecting scheme, utilises secondary airports, and offers frequent
flights. These techniques
help Ryanair save time and costs spent in airline business operation (Haley &
Tan 1999).
Shake-out
Shake-out is the second stage of the industry lifecycle. It is the stage at which a
new industry emerges. During the shake-out stage, competitors start to realise
business opportunities in the emerging industry. The value of the industry also
quickly rises (Ayres et al., 2003).
For example, many people die and suffer because of cigarettes every year. Thus,
the UK government decided to launch a campaign to encourage people to quit
smoking. Nicorette, one of the leading companies, produces several nicotine
products to help people quit smoking. Some of its well-known products include
Nicorette patches, Nicorette gums and Nicorette lozenges (Nicorette 2007).
Smokers began to see an easy way to quit smoking. The new industry started to
attract brand recognition and brand awareness among its target market during
the shake-out stage (Hendrickson et al., 2006). Nicorette's products began to
gain popularity among those who wanted to quit smoking or those who wanted
to reduce their daily cigarette consumption.
During this period, another company realised the opportunity in this market and
decided to enter it by launching nicotine product ranges, including Nic Lite gum
and patches. It recently went beyond the UK border after the UK government
introduced a non-smoking policy in public places, including pubs and nightclubs.
This business threat created a new business opportunity in the industry for Nic
Lite to launch a new nicotine-related product called Nic Time (ABC News 2006).
Nic Time is a whole new way for smokers to "get a cigarette": an eight-ounce
bottle contains a lemon-flavoured drink laced with nicotine, the same amount of
nicotine as two cigarettes (ABC News 2006). Nic Lite was first available at Los
Angeles airports for smokers who got uneasy on flights, but now the nicotine soft
drinks are available in some convenience stores (ABC News 2006).
Maturity
Maturity is the third stage in the industry lifecycle. It is the stage at which
the efficiencies of the dominant business model give these organisations a
competitive advantage over the competition (Kotler, 2003). The competition in the
industry is rather aggressive because there are many competitors and product
substitutes. Price, competition, and cooperation take on a complex form
(Gottschalk & Saether, 2006). Some companies may shift some of the production
overseas in order to gain competitive advantage.
For example, Toyota is one of the world's leading multinational companies,
selling automobiles to customers worldwide. The export and import taxes mean
that its cars lose competitiveness to the local competitors, especially in the
European automobile industry. As a result, Toyota decided to open a factory in
the UK in order to produce cars and sell them to customers in the European
market (Toyota, 2007).
The haute couture fashion industry is another good example. There are many
western-branded fashion labels that manufacture their products overseas by
cooperating with overseas partners, or they could seek foreign suppliers who
specialise in particular materials or items. For instance, Nike has factories in
China and Thailand, as both countries have cheap labour costs and cheap,
quality materials, particularly rubber and fabric. However, these overseas
partners are not allowed to sell the shoes they produce for brands such as Nike
and Adidas (Harrison & Boyle, 2006).
The items have to be shipped back to the US, and then will be exported to
countries worldwide, including China and Thailand.
Decline
Decline is the final stage of the industry lifecycle. Decline is a stage during which
a war of slow destruction between businesses may develop and those with heavy
bureaucracies may fail (Segil, 2005). In addition, the demand in the market may
be fully satisfied or suppliers may be running out (Ayres et al., 2003).
In the stage of decline, some companies may leave the industry if there is no
demand for the products or services they provide, or they may develop new
products or services that meet the demand in the market. In such cases, this will
create a new industry (Francis & Desai, 2005).
For example, at the beginning of the communication industry, pagers were used
as the main communication method among people working in the same
organisation, such as doctors and nurses. Then, the cutting edge of the
communication industry emerged in the form of the mobile phone. The
communication process of pagers could not be accomplished without telephones.
To send a message to another pager, the user had to phone the call-centre staff
who would type and send the message to another pager. On the other hand,
people who use mobile phones can make a phone-call and send messages to
other mobiles without going through call-centre staff (Hui et al., 2002).
In recent years, the features of mobile phones have been developing rapidly and
continually. Now people can use mobiles to send multimedia messages, take
pictures, check email, surf the internet, read news and listen to music (Hui et al.,
2002). As mobile phone feature development has reached saturation, the latest
innovation in mobile phone technology has incorporated the use of computers.
The launch of the personal digital assistant (PDA) is a good example of the
decline stage of the mobile phone industry, as the features of most mobiles are
similar. PDAs are hand-held computers that were originally designed as personal
organisers but have become much more multi-faceted in recent years. PDAs are
also known as pocket computers or palmtop computers (Wikipedia, 2007). They
combine many features of both mobile phones and computers, such as computer
games, global positioning systems, video recording, typewriting and wireless
wide-area networking (Wikipedia, 2007).
How do you use industry lifecycle analysis?
It is important for companies to understand the use of industry lifecycle
analysis at each stage:
* The entry barriers may be low and the potential competition may be high,
thus companies must adapt to shift the mobility barriers (Ayres et al., 2003).
* New products and applications are harder to come by, while buyers become
more sophisticated and difficult to understand in the maturity stage of the
industry lifecycle. Thus, consumer research should be carried out, and this
could help companies in building up new product lines (Baum & McGahan, 2004).
For companies to survive the dynamic environment, it is necessary for them to:
* Single out a viable strategy for decline, such as leadership, liquidation or
harvest (Baum & McGahan, 2004).
Hill & Jones (1998) identify five stages: fragmentation, growth, shake-out,
maturity and decline.
Conclusion
The industry lifecycle mirrors the human life cycle. The industry lifecycle
comprises four stages: fragmentation, shake-out, maturity and decline. An
understanding of the industry lifecycle can help competing companies survive
during periods of transition. Information on the industry lifecycle can be found in
most business management books. Several variations of the lifecycle model have
been developed to address the development and transition of products, market
and industry. The models are similar but the number of stages and names of
each may differ. Major models include those developed by Fox (1973), Wasson
(1974), Anderson & Zeithaml (1984), and Hill & Jones (1998).
Strategic dimensions
A careful balance of four interrelated elements: people, space, time and money.
Porter's generic strategies can be shown as a matrix of competitive advantage
against target scope:

Target Scope            | Advantage: Low Cost       | Advantage: Product Uniqueness
Broad (Industry Wide)   | Cost Leadership Strategy  | Differentiation Strategy
Narrow (Market Segment) | Focus Strategy (low cost) | Focus Strategy (differentiation)

Cost Leadership Strategy
This generic strategy calls for being the low cost producer in an industry for a given
level of quality. The firm sells its products either at average industry prices to earn a
profit higher than that of rivals, or below the average industry prices to gain market
share. In the event of a price war, the firm can maintain some profitability while the
competition suffers losses. Even without a price war, as the industry matures and
prices decline, the firms that can produce more cheaply will remain profitable for a
longer period of time. The cost leadership strategy usually targets a broad market.
Some of the ways that firms acquire cost advantages are by improving process
efficiencies, gaining unique access to a large source of lower cost materials, making
optimal outsourcing and vertical integration decisions, or avoiding some costs
altogether. If competing firms are unable to lower their costs by a similar amount, the
firm may be able to sustain a competitive advantage based on cost leadership.
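The pricing logic above can be illustrated with a small sketch; all figures below are hypothetical, chosen only to show why a lower unit cost protects profitability in a price war:

```python
# Hypothetical numbers illustrating the cost-leadership arithmetic.
industry_price = 100.0   # average industry price (assumed)
rival_unit_cost = 85.0   # a typical rival's unit cost (assumed)
leader_unit_cost = 70.0  # the cost leader's unit cost (assumed)

# Selling at the industry price, the cost leader earns a wider margin.
rival_margin = industry_price - rival_unit_cost    # 15.0 per unit
leader_margin = industry_price - leader_unit_cost  # 30.0 per unit

# Alternatively, the leader can undercut rivals to gain market share and
# remain profitable even at a price below the rival's unit cost.
undercut_price = 80.0
undercut_margin = undercut_price - leader_unit_cost  # 10.0 per unit

print(leader_margin, rival_margin, undercut_margin)
```

At the undercut price the rival would sell at a loss while the cost leader still earns a positive margin, which is the essence of the strategy's defensive value.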
Firms that succeed in cost leadership often have internal strengths such as
access to the capital needed to invest in production assets, skill in designing
products for efficient manufacturing, expertise in manufacturing process
engineering, and efficient distribution channels.
Each generic strategy has its risks, including the low-cost strategy. For example,
other firms may be able to lower their costs as well. As technology improves, the
competition may be able to leapfrog the production capabilities, thus eliminating the
competitive advantage. Additionally, several firms following a focus strategy and
targeting various narrow markets may be able to achieve an even lower cost within
their segments and as a group gain significant market share.
Differentiation Strategy
A differentiation strategy calls for the development of a product or service that offers
unique attributes that are valued by customers and that customers perceive to be
better than or different from the products of the competition. The value added by the
uniqueness of the product may allow the firm to charge a premium price for it. The
firm hopes that the higher price will more than cover the extra costs incurred in
offering the unique product. Because of the product's unique attributes, if suppliers
increase their prices the firm may be able to pass along the costs to its customers
who cannot find substitute products easily.
Firms that succeed in a differentiation strategy often have internal strengths
such as access to leading scientific research, a highly skilled and creative
product development team, a strong sales team able to successfully communicate
the perceived strengths of the product, and a corporate reputation for quality
and innovation.
Focus Strategy
Because of their narrow market focus, firms pursuing a focus strategy have lower
volumes and therefore less bargaining power with their suppliers. However, firms
pursuing a differentiation-focused strategy may be able to pass higher costs on
to customers, since close substitute products do not exist.
Firms that succeed in a focus strategy are able to tailor a broad range of product
development strengths to a relatively narrow market segment that they know very
well.
Some risks of focus strategies include imitation and changes in the target segments.
Furthermore, it may be fairly easy for a broad-market cost leader to adapt its
product in order to compete directly. Finally, other focusers may be able to
carve out sub-segments that they can serve even better.
A Combination of Generic Strategies
- Stuck in the Middle?
These generic strategies are not necessarily compatible with one another. If a firm
attempts to achieve an advantage on all fronts, in this attempt it may achieve no
advantage at all. For example, if a firm differentiates itself by supplying very high
quality products, it risks undermining that quality if it seeks to become a cost leader.
Even if the quality did not suffer, the firm would risk projecting a confusing image.
For this reason, Michael Porter argued that to be successful over the long-term, a
firm must select only one of these three generic strategies. Otherwise, with more
than one single generic strategy the firm will be "stuck in the middle" and will not
achieve a competitive advantage.
Porter argued that firms that are able to succeed at multiple strategies often do so by
creating separate business units for each strategy. By separating the strategies into
different units having different policies and even different cultures, a corporation is
less likely to become "stuck in the middle."
However, there exists a viewpoint that a single generic strategy is not always best
because within the same product customers often seek multi-dimensional
satisfactions such as a combination of quality, style, convenience, and price. There
have been cases in which high quality producers faithfully followed a single strategy
and then suffered greatly when another firm entered the market with a lower-quality
product that better met the overall needs of the customers.
Generic Strategies and Industry Forces
These generic strategies each have attributes that can serve to defend against
competitive forces. The following table compares some characteristics of the generic
strategies in the context of Porter's five forces.
Generic Strategies and Industry Forces

Industry Force        | Cost Leadership                                                | Differentiation                                                              | Focus
Entry Barriers        | Ability to cut price in retaliation deters potential entrants. | Customer loyalty can discourage potential entrants.                          | Focusing develops core competencies that can act as an entry barrier.
Buyer Power           | Ability to offer lower price to powerful buyers.               | Large buyers have less power to negotiate because of few close alternatives. | Large buyers have less power to negotiate because of few alternatives.
Supplier Power        | Better insulated from powerful suppliers.                      | Better able to pass on supplier price increases to customers.                | Suppliers have power because of low volumes, but a differentiation-focused firm is better able to pass on supplier price increases.
Threat of Substitutes | Can use low price to defend against substitutes.               | Customers become attached to differentiating attributes, reducing the threat of substitutes. | Specialised products and core competencies protect against substitutes.
Rivalry               | Better able to compete on price.                               | Brand loyalty keeps customers from rivals.                                   | Rivals cannot meet differentiation-focused customer needs.
What might their plans be for the future? How might you create greater impact by
reconsidering your relationship with them?
It's important not only to think about who these other players are, but also about the
marketplace you each work in and how this could affect your future strategies. To help with
this, think about the two most important factors driving success (or ensuring outcomes) for
your service users or beneficiaries. You will have your own factors for your
beneficiaries. Once you've picked the top two, draw up a matrix showing each
factor as in the example below.
Example of strategic group map
The map shows where the other players sit with regard to these two factors. The
size of the circle that represents each corresponds to their size in the
marketplace.
Create your own strategic group map
Plot out each of the other players on this matrix. You could draw a circle for each that gives
an idea of relative size. Put your organisation on there too. Where are the gaps? Where are
the overlaps? What are the options for change?
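The mapping steps above can be sketched programmatically; in this minimal illustration the two factors, the player names and the market shares are all invented purely for the example:

```python
import math

# Hypothetical players scored 0-10 on two success factors, plus market share %.
players = [
    # (name, factor-1 score, factor-2 score, market share %)
    ("Our organisation", 7, 4, 10),
    ("Player A", 2, 8, 35),
    ("Player B", 8, 8, 20),
    ("Player C", 3, 2, 5),
]

def circle_radius(share, scale=1.0):
    """Radius for a circle whose AREA is proportional to market share,
    so a player with twice the share does not look four times bigger."""
    return scale * math.sqrt(share / math.pi)

for name, x, y, share in players:
    print(f"{name}: position ({x}, {y}), circle radius {circle_radius(share):.2f}")
```

The printed coordinates and radii could then be fed into any charting tool (for example matplotlib's scatter with an area-scaled marker size) to draw the map and spot the gaps and overlaps.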
banking sector, and they need to search for other options in domestic and global markets to
rejuvenate the sector.
Advantages of Financial Globalization and Financial Stability
Financial globalization and financial stability have contributed to a boom in
the economic sector. Plenty of financing options have opened up, and sources of
global financing have become cheaper and more accessible. Thanks to financial
globalization, numerous countries now enjoy widely acknowledged financial
stability. Most importantly, for developing countries, financial globalization
and financial stability are a boon: they have benefited greatly from the
securities markets of the developed countries. Furthermore, financial stability
has been very effective in keeping inflation under control. In short, financial
globalization and financial stability are a sound step for boosting economies
worldwide.
Technology and Innovation in Financial Services: Scenarios to 2020
Innovation has already transformed the financial services (FS) industry.
Fourteen years ago, who expected the massive growth of e-banking and
e-brokerage? Who envisioned the entry of new players such as retailers and
telecommunications providers into the financial services arena, thanks to their
ability to harness the power of innovative technology? Who predicted that
technology would enable outsourcing and offshore contracting of core financial
processes to low-cost countries such as India?
The business environment continues to change today, and the financial services sector needs
to confront many issues to remain competitive. In particular, technology and innovation are
board level issues; they create opportunities and pose threats.
To explore these issues, the World Economic Forum and representatives of the financial
services, information technology and telecommunication communities set out to develop
scenarios for the future of financial services and how they might be affected by innovation.
The objective of these scenarios is to explore how innovation will transform access to, and
delivery of, financial services by the year 2020.
From the many key drivers, project participants (in particular, leading
industry representatives) identified two crucial groups of questions.
1. How will the globalization of financial services evolve? Will it be further supported by
governments and regulators? What outcomes will we see in the next decade?
2. Will innovation be incremental or fundamental? Will it be driven by
traditional or new players? What types of innovation will we see, for example,
in products and services, distribution and sales channels, operations, and new
business models?
Project participants worked from these questions to develop three very different but plausible
scenarios of the future of financial services over the next 14 years.
Global Ivy League describes a highly concentrated financial services sector dominated by a
small number of large, global players. Governments support globalization but take a very
conservative approach to customer protection and regulation of the sector. At the same time,
declining trust in digital media means customers favour the solidity of traditional financial
service providers. In this environment, a small number of financial services institutions
evolve into global powerhouses.
Next Frontier describes a world in which governments pursue deregulation and, as the title
reflects, technology enables a great variety of new business models to emerge. The result is a
financial services industry as an ecosystem of highly specialized providers, each focusing on
creating a competitive advantage over incumbents. There are many new players, including
telecommunications companies, peer-to-peer financial services providers, processing
providers, retailers and Internet companies.
Innovation Islands describes a world in which globalization stalls due to geopolitical tensions
and global instability. Government policies toward the financial services sector differ widely
among countries. Three trends become apparent:
* "Leapfrogging": in large emerging economies such as China and India,
government regulation and investment in infrastructure foster the local
financial services industry, expanding access to the poor and leading to new
business models that "leapfrog" over developed markets in areas such as mobile
banking and flexible, low-cost operating models.
* "Business as usual": in other, mainly developed economies such as the US or
European countries, innovation neither accelerates nor decelerates. There is
only limited change to business models.
* "Back to the past": in the remaining countries and regions, mainly developing
economies, governments increase control over the financial services sector but
do not foster local innovation; as a result, there is little progress and
sometimes even regression in the efficiency and quality of FS.
these two acts work to allow banks to merge and thrift institutions (credit unions,
savings and loans and mutual savings banks) to offer checkable deposits. These
changes also became the catalysts for the dramatic transformation of the U.S.
financial service markets in 2008 and the emergence of reconstituted players as
well as new players and service channels. (For more on this, check out our
Financial Crisis Survival Guide special feature.)
Nearly a decade later, the implementation of the Second Banking Directive in
1993 deregulated the markets of European Union countries. In 1994, European
insurance markets underwent similar changes as a result of the Third Generation
Insurance Directive of 1994. These two directives brought the financial services
industries of the United States and Europe into fierce competitive alignment,
creating a vigorous global scramble to secure customers that had been
previously unreachable or untouchable.
The ability of business entities to use the internet to deliver financial
services to their clientele also impacted product-oriented and geographic
diversification in the financial services arena.
Going Global
Asian markets joined the expansion movement in 1996 when "Big Bang"
financial reforms brought about deregulation in Japan. Relatively far-reaching
financial systems in that country became competitive in a global environment
that was enlarging and changing swiftly. By 1999, nearly all remaining
restrictions on foreign exchange transactions between Japan and other countries
were lifted. (For background on Japan, see The Lost Decade: Lessons From
Japan's Real Estate Crisis and Crashes: The Asian Crisis.)
Following the changes in the Asian financial market, the United States continued
to implement several additional stages of deregulation, concluding with the
Gramm-Leach-Bliley Act of 1999. This law allowed for the consolidation of major
financial players, which pushed U.S.-domiciled financial service companies
involved in M&A transactions to a total of $221 billion in 2000. According to a
2001 study by Joseph Teplitz, Gary Apanaschik and Elizabeth Harper Briglia in
Bank Accounting & Finance, expansion of such magnitude involving trade
liberalization, the privatization of banks in many emerging countries and
technological advancements has become a rather common trend. (For more
insight, see State-Run Economies: From Public To Private.)
The immediate effects of deregulation were increased competition, market
efficiency and enhanced consumer choice. Deregulation sparked unprecedented
changes that transformed customers from passive consumers to powerful and
sophisticated players. Studies suggest that additional, diverse regulatory efforts
further complicated the running and managing of financial institutions by
increasing the layers of bureaucracy and number of regulations. (For more on
this topic, see Free Markets: What's The Cost?)
Simultaneously, the technological revolution of the internet changed the nature,
scope and competitive landscape of the financial services industry. Following
deregulation, the new reality has each financial institution essentially operating
in its own market and targeting its audience with narrower services, catering to
the demands of a unique mix of customer segments. This deregulation forced
financial institutions to prioritize their goals by shifting their focus from
rate-setting and transaction-processing to becoming more customer-focused.
Challenges and Drawbacks of Financial Partnerships
Since 1998, the financial services industry in wealthy nations and the United
States has been experiencing a rapid geographic expansion; customers
previously served by local financial institutions are now targeted at a global
level. Additionally, according to Alen Berger and Robert DeYoung in their article
foreign equity.
Capital Market Reforms: The major reforms in the capital market are as follows:
1) The Capital Issues (Control) Act, 1947 has been repealed; Indian companies
had faced bureaucratic delays in issuing securities due to this act.
2) Listing of companies on stock exchanges has been liberalized.
3) The role of Foreign Institutional Investors (FIIs) in Indian stocks has
increased tremendously due to the liberalization of foreign portfolio inflows.
4) Private mutual funds (both Indian and foreign) have been permitted to
operate, thereby ending the monopoly of UTI.
5) The Securities and Exchange Board of India (SEBI) has become the regulator
of the capital market.
WTO Ministerial Conferences

Conference | Year                 | Place
I          | Dec. 9-13, 1996      | Singapore
II         | May 18-20, 1998      | Geneva
III        | Nov. 30-Dec. 3, 1999 | Seattle
IV         | Nov. 9-14, 2001      | Doha
V          | Sept. 10-14, 2003    | Cancun
VI         | Dec. 13-18, 2005     | Hong Kong
Mr. Renato Ruggiero, a former trade minister of Italy, is its present Director
General. Four Deputy Directors General are also appointed to assist the Director
General.
Like GATT, the WTO's headquarters is at Geneva. According to the latest WTO
report (WTR-2003), at the end of April 2003, 146 countries were members of the
WTO. Subsequent accessions have increased the membership to 151.
Objectives of WTO
1. To improve the standard of living of people in the member countries.
2. To ensure full employment and a broad increase in effective demand.
3. To enlarge production.
4. To enlarge trade of goods.
5. To enlarge production and trade of services.
6. To ensure optimum utilization of world resources.
Negative and positive rights:
- Negative rights: an action is right if it protects an individual from
unwarranted interference from government and/or other people in the exercise of
that right.
- Positive rights: an action is right if it provides an individual with
whatever he or she needs to exist.
Social contract: an action is right if it conforms to the terms, conditions, or
rules for social well-being agreed upon by competent parties.
Social justice: an action is right if it promotes the duty of fairness in the
distributive, retributive, and compensatory dimensions of social benefits and
burdens.
Individual character
Work character
Professional character
Personal improvement
Organizational ethics
Extra- organizational ethics
6. What environmental obligations do we need to keep for future generations?
7. Is it right for humans to knowingly cause the extinction of a species for
the convenience of humanity?
Environmental ethics is becoming an important issue for many companies and
businesses as there is a greater push for corporate responsibility. Leaders of
organizations of all sizes and in all sectors face a growing number of issues
related to ethical behavior, particularly in terms of environmental responsibility.
As global understanding of the significant ecological and environmental ethics
issues we face expands and moves to the forefront of debates, it is even more
important for leaders to take action to both remedy the causes of the problem
and to act as models for other organizations and individuals. Although there are
many examples of responsible corporate and organizational environmental
governance and behavior, there is yet to emerge a global initiative aimed at
changing the face of environmentally ethical and responsible action that will
promote further corporate responsibility. This lack of understanding of issues
of environmental ethics and corporate responsibility occurs for a number of
reasons, one of which could be a lack of global consensus on the importance of
taking the necessary steps to remedy the problem. As one scholar
notes, "In our pluralistic societies, there is no uncontested common ethical
ground in general and no undisputed conception of environmental responsibility
in particular" (Enderle 2006) and as a result there is little unified action. If this
assessment is valid then it is necessary to first define a clear set of issues and
resolutions that organizations and leaders can agree upon.
INFORMAL CONSULTATION ON STRATEGIES FOR GENDER EQUALITY: IS MAINSTREAMING A
DEAD END?
Following four days of discussion of strategies for gender equality in international
organisations, the gender focal points of 15 UN organisations and development
banks together with representatives of 5 donor agencies and resource persons
drew the following conclusions and recommendations related to lessons learned
in promoting institutional change and effective strategies for the future:
A. Gender mainstreaming is not a dead end strategy. But it is not always fully
understood and implemented in the right way.
* There is confusion about concepts: gender and women. However, one does
not exclude the other. The use depends on the context. Gender is most
fruitfully used as an adjective, not a noun, in concepts like gender equality and
gender analysis. Women (and girls) are essential actors and target groups in
relation to gender equality. It is important to analyse issues so that gender
differences and disparities appear and women are visible in relation to men.
* There is also confusion about goals and means. The goal is gender equality
and women's empowerment. To achieve the goal, different strategies and actions
are needed according to circumstances. Polarisation of approaches does not
work. A main strategy is gender mainstreaming of all policies, programmes and
projects. But women must not be lost in the mainstream (or "malestream"!).
Targeted women-specific policies, programmes and projects are necessary to
strengthen the status of women and promote mainstreaming. In any case, there
must be specialist support, institutional mechanisms and accountability.
* Agencies have chosen different bases for their action: human rights or
efficiency considerations. In fact, it is not a question of either/or. The human
rights basis is more fundamental, but is not always made explicit and in some
organisations it is not well understood or appreciated. The emphasis will vary
from one organisation to the other, but it is important to realise that the
promotion of gender equality implies a social transformation in society in
addition to more effective economic development and poverty reduction.
B. Global commitment. The international women's conferences from Mexico
(1975) to Beijing (1995) established a global consensus and commitment to
promote gender equality which was reaffirmed by the Millennium Summit (2000).
This is a long term commitment and it is important to keep the goal on the
agenda. Ongoing political and financial support from Member States is essential
to maintain focus on gender issues and ensure implementation of the
recommendations. The mandates and policy statements of UN organisations and
development banks should have conceptual clarity and explicit language so
people understand them. Commitments should be clearly spelled out, given
visibility and cultivated. Without pressure from governing bodies and top
management mandates and policy statements do not get implemented. External
advisory gender boards or panels can be used to answer questions and help
elucidate and depersonalise issues.
C. Organizational change. The challenge is to transform multilateral
organisations to actively pursue the goal of promoting gender equality and
women's empowerment through a process of gender mainstreaming and other forms
of organizational change. As gender equality often touches on power relations,
there can be strong discomfort and even resistance to change. To make
progress the following is needed:
- strong, active leadership
- incentives and accountability
- a critical mass of committed individuals
D. Tools. Useful tools include
- partnerships: internally and externally
kept together, and then ideally additional full-time specialists in other units
and decentralized offices. Adequate resources should be allocated, and
expectations matched to resources, expressed in clear terms of reference for
the catalytic functions of the gender unit.
H. Capacity-building. Capacity-building for gender mainstreaming is still needed
in international organisations. A corporate capacity-building plan should be
elaborated and be the responsibility of the staff training and capacity-building
unit. The sustainability of efforts and investments is crucial, particularly in times
of high staff turn-over. It is important that policy informs practice, just as
practice should influence policy. Capacity-building should be tailor-made and
demand-driven for various audiences: orientation for newcomers, gender modules
in other courses (e.g. the project cycle), gender sensitivity training, gender
analysis
training etc. Examples of successful practice are very useful and more cases
should be presented. But lessons learned cannot only be general, some must be
context-related.
I. Networks. Networks and alliances are important within the organisation and
outside. Internally, ownership should be shared with both women and men, and
between Headquarters and the field. Externally collaboration should be
established with governments, civil society and other UN organisations. Links
should be established and support provided for women's organisations and
groups, keeping in mind the character of the different groups and organisations.
It is also important to collaborate with business and professional organizations,
employers and trade unions, social and cultural associations, youth clubs etc.
J. Involvement of men. The involvement of men is important to promote gender
equality: more male staff in gender specialist posts, more male gender focal
points in other units and more male trainers/facilitators for gender
capacity-building courses. Training curricula should be packaged with a
results-oriented focus to appeal to managers. It is important to break
stereotypes.
HIV/AIDS might be a good entry-point for talking with men about masculinity,
gender-based violence, trafficking etc. Contacts should be established with male
government and NGO representatives and they should be encouraged to
participate in advocacy events and discussions.
K. Accountability. To monitor progress it is important to define different roles and
responsibilities for staff members at different levels of accountability. Existing
accountability mechanisms need to be catalogued or mapped by level:
leadership (executive head), management (ADGs/Directors), gender advisers in
units, corporate gender units, country representatives. The role and
accountability should also be mapped for non-programme/non-technical units
such as evaluation/audit offices, programme budget offices and human
resources offices. Core competencies needed for fulfilling various responsibilities
need to be identified. Special attention should be given to the development of
results frameworks and systematic measurement of results. Even if planned
results are not achieved, efforts undertaken to meet gender commitments
should be acknowledged.
L. Mottos:
Pune: Women occupy just about 5% of positions on the boards of directors of Indian
firms listed on the Bombay Stock Exchange. The study, a first of its kind in India
and the second in Asia, notes that only 59 (5.3%) of the 1,112 directors of
companies that form the elite BSE-100 group are women. These directorships are
held by 48 different women, the study said.
The percentage compares unfavorably with Canada, where women hold 15% of
directorships, the United States (14.5%), the United Kingdom (12.2%),
Hong Kong (8.9%) and Australia (8.3%).
The findings also reveal that 12 companies on the BSE-100 have more than one
female director, 7 companies have female executive directors, and 2.5% of all
executive director roles are held by women. Fewer than half of the companies,
only 46%, have women on their boards.
Of all the appointments made in 2010 (as of May 2010), 4.9% were women. Two
companies, among them Jindal Steel & Power Ltd., have women as chairpersons, and two of the
country's most significant banks, ICICI Bank and Axis Bank, have female CEOs.
The report includes a Women on Boards League Table, which ranks companies
listed on the BSE-100 in terms of the gender diversity of their boards, with those
with the highest percentage of women on their boards appearing at the top. At
the top of the list is JSW Steel Limited, which has three women (23.1%) on its
board of 13. Oracle Financial Services Software is second with two
women (22.2%) on its board of nine, and Piramal Healthcare is third with 20%
female board directors.
Piramal Healthcare's two female directors both hold executive directorships,
making it the only BSE-100 company with two executive female directors. In fourth place is
Axis Bank Ltd, with two (18.2%) of its 11 board members being female. In joint
fifth place, with two women (16.7%) out of 12 board members, are Lupin and
Titan Industries.
Prime Minister Manmohan Singh has described the historic women's reservation
bill as a giant step towards the empowerment of women and a celebration of
womanhood. The passing of the bill in the Rajya Sabha is a momentous,
heartwarming step for India, and an inspirational trendsetter for women's
empowerment in the entire region.
The movement for women's rights has broken many a fetter, but it has also
forged new ones. Women today are a striking power, a great contributor to
many working sectors, ready to accept challenges. But do we ever recognize
what boundaries they are being forced to cross?
The sexual laws and moral standards have always been stricter for women. The
female body was regarded down the ages as a mere vessel for the male creative
fluids. Women were the soil in which men planted their seeds. This perception
was also reflected in religious beliefs. Women were stripped of their creative role
and burdened with the responsibility for the Original Sin. The Ten
Commandments list wives among a man's possessions. Not surprisingly,
therefore, in a traditional Jewish prayer men implored God, "Let not my offspring
be a girl, for very wretched is the life of woman." And they gladly repeated every
day: "Blessed be Thou, O Lord our God, for not making me a woman."
The sacred texts of every major religion enshrine the subjugation of women
through myth (Eve causing the fall of man) or through code (the Shariah, which
values a woman's testimony as half that of a man and authorizes a man to beat
and whip his wife to keep her obedient to him).
The Apostle Paul made it clear that the head of the woman is the man: "For the man is
not of the woman; but the woman of the man. And if they will learn anything, let
them ask their husbands at home: for it is a shame for women to speak in the
church."
Christianity excluded women from priesthood and other church offices. At the
same time, they were also expected to remain subservient to men at home. In all
societies, the obvious biological difference between men and women is used as a
justification for forcing them into different social roles which limit and shape their
attitudes and behavior. A woman, in addition to being a female, must be
feminine. Sexual oppression, no matter how harsh or unjustified, has never
lacked rationalization; the rationalizations range from simple religious dogmas to
sophisticated pseudo-scientific theories. For over a hundred years, the old form
of marriage, based on the Bible's "till death do us part," has been denounced as
an institution that stands for the sovereignty of the man over the woman, of her
complete submission to his whims and commands, and absolute dependence on
his name and support. In addition, women are generally exploited by the media.
They become like goods which are sold and bought. For instance, in
advertisements we usually see women presenting products; but unfortunately,
their bodies are used to attract consumers.
Break barriers.
The problem that confronts us today is how to be one's self and yet be in
oneness with others, to feel deeply with all human beings and still retain one's
own characteristic qualities. The modern woman should be enabled to blossom in
the true sense, with full respect for her personality; all artificial barriers should be
broken, and the road towards greater freedom cleared of every trace of centuries
of submission and slavery.
Ethics: The Framework for Success. While some ethical decisions are
simply a matter of right vs. wrong, the tough ethical decisions are right
vs. right.
The widespread attention given to the fall of companies such as Tyco, WorldCom,
and Enron has led to an increased focus on ethics in the business world. Because
of the enormous pressure to produce higher and better returns, some individuals
at corporations have adopted the philosophy, "the ends justify the means." They
fall into the trap of setting unrealistic budgets, improbable expectations, and
unlikely goals. Not surprisingly, investor confidence has been low due to the
many corporate scandals. Despite these results, however, firms continue to allow
external sources, such as outside analysts, to define success.
Instead, companies must ask the following question: "Have we replaced our
underlying business theme of 'succeeding at all costs' with 'succeeding only the
right way'?" An ethical culture can ensure success by establishing appropriate
expectations using proper guidelines, thus preventing the need or desire to cut
corners. Facing the Tylenol tampering crisis, Johnson & Johnson CEO James Burke
turned to Johnson & Johnson's credo: "We believe our first responsibility is to
doctors, nurses, and patients, to mothers and fathers and all others who use our
products and services." He ignored the immediate short-term financial
implication and adhered to the attitude of "doing the right thing," ordering the
recall of more than 31 million bottles at a cost of more than $100 million. This
action set a new standard for crisis management. As a result of these events, the
company developed the tamper-proof seal and gained even more market share
and customer loyalty than it had before the incident.
To make choices like Burke requires individuals to take the steps listed in "A
Framework for Thinking Ethically" from the Markkula Center for Applied Ethics at
Santa Clara University (www.scu.edu/ethics):
* Be sensitive to ethical issues,
* Explore ethical aspects of a decision,
* Weigh the considerations that impact their course of action, and
* Have the moral courage to make the right ethical choice.
While companies will inevitably face difficult situations, their ability to make
ethical decisions must not be compromised for any reason. Consider Exxon, for
example. This company refused to accept responsibility for the Valdez accident,
and their attempt to blame state and federal officials for delays in containing the
spill damaged their reputation. Even today the name Exxon is synonymous with
environmental catastrophe. Due to ineffective communication from Exxon, the
public questioned their credibility and truthfulness. According to Jennifer Hogue
in "What is Crisis Management?"
(http://iml.jou.ufl.edu/projects/Spring01/Hogue/crisismanagement.html), a survey
conducted by Porter
Novelli several years after the accident found that 54% of respondents were still
less likely to buy Exxon products.
THE DANIEL EFFECT
Everyone within an organization should work together to create the "Daniel
Effect." This comes from the Old Testament account of a governing body trying
to discredit Daniel in front of the whole kingdom of Babylon. In the New King
James Version, the Book of Daniel, Chapter 6:3-4, says, "Then this Daniel
distinguished himself above the governors and satraps, because an excellent
spirit was in him; and the king gave thought to setting him over the whole realm.
So the governors and satraps sought to find some charge against Daniel
concerning the kingdom; but they could find no charge or fault, because he was
faithful; nor was there any error or fault found in him."
Employees would benefit individually from this mindset during their careers by
adhering to high ethical standards. Companies must build a strong ethical
framework to withstand attacks from the public through frivolous lawsuits,
competition's claims of wrongdoing, and any fraud attempted by their
employees. Positive public perception is vital to success in the marketplace,
which is protected by ethical behavior just as Daniel protected himself from his
enemies by remaining faithful to his high moral standards.
Some ethical decisions, such as cheating on taxes, lying under oath, or
overstating revenue and understating expenses, are simply a matter of right vs.
wrong. The tough ethical decisions are right vs. right. Four such dilemmas
include truth vs. loyalty, individual vs. community, short-term vs. long-term,
and justice vs. mercy. Here are some real-world examples from Rushworth Kidder's
How Good People Make Tough Choices:
* It is right to find out all you can about your competitor's costs and price
structures--and right to obtain information only through proper channels.
* It is right to throw the book at good employees who make dumb decisions that
endanger the firm--and right to have enough compassion to mitigate the
punishment and give them another chance.
* It is right to protect the endangered spotted owl in the old-growth forests of the
American Northwest--and right to provide jobs to loggers.
Unfortunately, no magic formula exists to guide management through these
types of decisions. Companies must be willing to weigh the ethical
repercussions of one decision against the other.
DIFFICULT CHOICES
In Moral Courage, Kidder relates the story of Eric Duckworth. A metallurgist by
training, the recently married Duckworth took a position in 1949 with Federal
Mogul, a firm that made bearings for internal combustion engines. His job
description included examining damaged bearings returned by customers. He
would determine the cause of the failure, report to the customers, and
recommend changes to correct the problem. Most failures were due to misuse, improper
installation, and lack of lubrication. Sometimes he discovered that the faulty
parts were the result of production mistakes. His boss, the chief metallurgist,
regularly tried to cover up such faults by refusing to divulge all the facts and by
attributing the failure to end users mishandling the bearings, making no effort to
compensate customers.
At first, Duckworth rationalized "that he was prepared to commit sins of omission
but not of commission." Eventually, a particularly flagrant case drove him to
write a completely honest report, which his boss rejected. Summoning his moral
courage, he protested that he would resign if they didn't report the true findings
to the customer. His boss, as well as the sales department, protested that such
findings would cost them customers and perhaps more.
Fortunately, Duckworth previously had made several suggestions that increased
the productivity of the manufacturing process and won him the admiration of the
CEO, who backed him against his boss. The report went to the customer, who
responded with a congratulatory letter that said: "We had always suspected
concealment in some of your reports." In the wake of the company's new-found
honesty, the customer increased orders. Duckworth later recalled his moral
courage, "On one occasion when I was young and idealistic, I succeeded--and
have been proud of it ever since."
My own experience illustrates how one benefit of ethical behavior is improved
employee morale. The testing lab at a former employer of mine discovered a
potential electrical hazard related to a specific motor supplier. Under unique
circumstances that required the existence of several conditions, this motor had
the potential to deliver an electric shock to the end user. The possible financial
impact of rework or possible recall could cost the company millions of dollars.
Our management team, aware of the chance for a possible recall, decided to
report this issue to the Consumer Product Safety Commission (CPSC). Taking a
pro-ethical approach had a positive impact on me and other employees because
we all were impressed with the company's commitment to product safety.
SAFEGUARD THE FUTURE
Every day, management decisions affect individuals, families, and even nations.
Before making a final decision, the goal should be to completely consider the
ethical implications, including the immediate financial impact as well as the
lasting consequences. If the organization's climate is to not permit wrongdoing of
any kind, then employees are more likely to work harder for the company's
common good. Ethical decision making safeguards an enterprise's future.
Managing companies in the ever-changing business environment is difficult even
without falling into the trap of earnings-only management. But an organization's
management can't concentrate on the future if it's worried about any past
corrupt business dealings. An ethical culture cultivates realistic expectations with
the focus on following sound and unquestionable business principles. Ethics
improves goodwill, company perception, employee morale, and even sales.
Ethics allows management to be focused on the future, thereby becoming the
framework for long-term success.
Steve Hunter, CMA, is a senior finance manager for equipment at an
international company. He has 16 years of experience in accounting and finance.
You can reach Steve at (731) 645-4526 or shunter7263@bellsouth.net.
Ethics is a topic at IMA's Annual Conference, June 14-18, 2008, in Tampa, Fla. For
information, visit www.imaconference.org.
BY STEVE HUNTER, CMA
****************************************
Thinking Ethically:
A Framework for Moral Decision Making
Developed by Manuel Velasquez, Claire Andre, Thomas Shanks, S.J., and
Michael J. Meyer
Moral issues greet us each morning in the newspaper, confront us in the memos
on our desks, nag us from our children's soccer fields, and bid us good night on
the evening news. We are bombarded daily with questions about the justice of
our foreign policy, the morality of medical technologies that can prolong our
lives, the rights of the homeless, the fairness of our children's teachers to the
diverse students in their classrooms.
Dealing with these moral issues is often perplexing. How, exactly, should we
think through an ethical issue? What questions should we ask? What factors
should we consider?
The first step in analyzing moral issues is obvious but not always easy: Get the
facts. Some moral issues create controversies simply because we do not bother
to check the facts. This first step, although obvious, is also among the most
important and the most frequently overlooked.
But having the facts is not enough. Facts by themselves only tell us what is; they
do not tell us what ought to be. In addition to getting the facts, resolving an
ethical issue also requires an appeal to values. Philosophers have developed five
different approaches to values to deal with moral issues.
The Utilitarian Approach
Utilitarianism was conceived in the 19th century by Jeremy Bentham and John
Stuart Mill to help legislators determine which laws were morally best. Both
Bentham and Mill suggested that ethical actions are those that provide the
greatest balance of good over evil.
To analyze an issue using the utilitarian approach, we first identify the various
courses of action available to us. Second, we ask who will be affected by each
action and what benefits or harms will be derived from each. And third, we
choose the action that will produce the greatest benefits and the least harm. The
ethical action is the one that provides the greatest good for the greatest number.
The Rights Approach
The second important approach to ethics has its roots in the philosophy of the
18th-century thinker Immanuel Kant and others like him, who focused on the
individual's right to choose for herself or himself. According to these
philosophers, what makes human beings different from mere things is that
people have dignity based on their ability to choose freely what they will do with
their lives, and they have a fundamental moral right to have these choices
respected. People are not objects to be manipulated; it is a violation of human
dignity to use people in ways they do not freely choose.
Of course, many different, but related, rights exist besides this basic one. These
other rights (an incomplete list below) can be thought of as different aspects of
the basic right to be treated as we choose.
The right to the truth: We have a right to be told the truth and to be
informed about matters that significantly affect our choices.
The right of privacy: We have the right to do, believe, and say whatever
we choose in our personal lives so long as we do not violate the rights of
others.
The right not to be injured: We have the right not to be harmed or injured
unless we freely and knowingly do something to deserve punishment or
we freely and knowingly choose to risk such injuries.
The right to what is agreed: We have a right to what has been promised by
those with whom we have freely entered into a contract or agreement.
What benefits and what harms will each course of action produce, and
which alternative will lead to the best overall consequences?
What moral rights do the affected parties have, and which course of action
best respects those rights?
Which course of action treats everyone the same, except where there is a
morally justifiable reason not to, and does not show favoritism or
discrimination?
this paper. Section 4 presents the results of the empirical study and discussion.
Section 5 proposes the theoretical and managerial contributions and
suggestions for future research, and Section 6 concludes the study.
2. RELATION MODEL AND HYPOTHESES
This study tests a model in which ethical pressure and professional expectation
are independent variables, job quality is the dependent variable, stress is a
mediating variable, and time pressure and self-esteem are moderator variables,
as shown in Figure 1.
[FIGURE 1 OMITTED]
2.1 Ethical Pressure
Ethical pressure is defined as an objective stimulus construct referring to
individual characteristics, or combinations of characteristics and events, that
impinge on the perceptual and cognitive processes of individuals (DeZoort and
Lord, 1997; Pratt and Barling, 1988; Eden, 1982; Kahn et al., 1964). It refers to
conformity pressure that leads individuals to alter their attitudes or
behavior in an effort to be consistent with a perceived group norm (DeZoort and
Lord, 1997; Brehm and Kassin, 1990). In this study, ethical pressure is defined as
accountants' perceptions of the professional values they have a professional
responsibility to adhere to, including a code of conduct and an ethical code that
expressly prohibits actions such as misstating reported financial results.
Shafer (2002) and Aranya and Ferris (1984) found that accountants employed in
industry did in fact experience higher levels of organizational-professional
conflict than those employed in public accounting. Perceived ethical conflicts can
lead to dysfunctional organizational outcomes such as lower organizational
commitment and higher turnover intentions (Shafer, 2002; Schwepker, 1999).
Thus ethical pressure is an important factor affecting an accountant's stress:
higher ethical pressure implies greater stress. This leads to the following
hypotheses:
H1a: The accountants with higher ethical pressure will have greater stress.
2.2 Professional expectation
Brierley (1999) and Lachman & Aranya (1986b) described that the realization of
professional expectations has been measured in research of accountants by
assessing the discrepancy between responses to questions about "how much
should there be" and "how much is there now", on aspects of professional
values, such as the autonomy to act according to professional judgment and
responsibility to clients. Thus, in this study, professional expectation refers to the
public's perceptions of professionalism, independence, self-improvement,
commitment to learning, responsibility, and skill in an accountant's practice.
Sanders et al. (1995) described stress as created by job requirements that
exceed the individual's ability or skill level. Professional expectation demands an
accountant's ability and skill; thus, professional expectation affects the
relationship between role stress and job satisfaction. To examine the effects of
self-esteem on job quality, self-esteem is modeled as being related to job quality.
H2b: Accountants with higher self-esteem will potentially have a greater
positive relationship between stress and job quality.
3. RESEARCH METHODS
3.1 Data collection
The samples were randomly drawn from 818 companies in the automotive/auto
parts and accessories/machinery sectors of Thailand's exporting industries. The sampling
frame was listed from Thailand's exporting firm database. The questionnaire
was constructed to cover contents according to each variable operationalized
for the empirical study. A pretest was used to verify validity
and reliability; misunderstandings that can arise from ambiguities were reduced,
and the questionnaire was improved in its contents, item ordering, and wording.
Reliability was tested by Cronbach alpha reliability coefficients of all constructs
to make sure that the items of the questionnaire were designed to measure
consistency for each concept.
Later, 600 questionnaires were sent by mail to the firms' accounting managers to
provide data for this study. After two weeks, 152 questionnaires had been received.
There were 33 questionnaires that could not be delivered to receivers and were
returned. Two of the received questionnaires were incomplete and were not
included in the data analysis. This resulted in 150 usable responses, or a
response rate of 26%.
3.2 Reliability and Validity
The multi-item scales for the constructs were tested with Cronbach's alpha to measure
the reliability of the data. Table 1 shows alphas ranging from 0.60 to 0.80, comfortably
above the minimum requirement of 0.60 (Chalos and Poon, 2000). That is, the internal
consistency of the measures used in this study can be considered good for
all constructs.
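As an illustrative aside (not drawn from the study's data), Cronbach's alpha can be computed directly from its definition, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The score matrix below is invented:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Example: 5 respondents answering a 4-item Likert scale (invented data)
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 3],
])
alpha = cronbach_alpha(scores)  # high, since the items move together
```

A value above the 0.60 threshold cited in the text would indicate acceptable internal consistency for the scale.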
Factor analysis was employed to test the validity of the data in the questionnaire.
The items used to measure each construct were extracted to a single
principal component. Table 1 shows that the factor loading of each construct
is higher than 0.50. Thus, the construct validity of this study is tapped
by the items in the measure, as theorized; the factor loading of each construct
should not be less than 0.40 (Hair et al., 2006).
3.3 Statistic Technique
OLS regression analysis is employed to estimate parameters in hypothesis test.
From the relationship model and the hypotheses the following seven equation
models are formulated:
Equation 1: S = β01 + β1(EP) + ε
Equation 2: S = β02 + β2(EP) + β3(PE) + ε
Equation 7: JQ = β10 + β11(S) + β12(SE) + β13(S × SE) + ε
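The OLS estimation of models like Equations 1 and 2, together with a moderation model, can be sketched as follows. This is a minimal illustration with simulated data; the variable names (EP, PE, S, SE, JQ) are stand-ins for the study's constructs, not the actual survey data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Simulated predictors (stand-ins for the survey constructs)
EP = rng.normal(0, 1, n)                        # ethical pressure
PE = rng.normal(0, 1, n)                        # professional expectation
S = 0.5 * EP + 0.3 * PE + rng.normal(0, 1, n)   # stress, Equation 2 form

def ols(y, *predictors):
    """OLS via least squares; returns [intercept, b1, b2, ...]."""
    X = np.column_stack([np.ones_like(y)] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_eq1 = ols(S, EP)        # Equation 1: S = b01 + b1*EP + e
b_eq2 = ols(S, EP, PE)    # Equation 2: S = b02 + b2*EP + b3*PE + e

# Moderation test (H2b-style): regress job quality on stress,
# self-esteem, and their product; the product term tests moderation
SE = rng.normal(0, 1, n)
JQ = 0.4 * S + rng.normal(0, 1, n)
b_mod = ols(JQ, S, SE, S * SE)
```

In the study's terms, a significant coefficient on the product term (S × SE) would support the moderating hypothesis; a non-significant one, as reported for H2b, would not.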
interaction of ethical pressure and time pressure with stress (b = .052; p > .95).
The other interactions are not significant. Therefore, H1c and H1d are not supported.
4.2 Consequence of Stress
Results are presented in table 4; regression analysis is employed to estimate
parameters to test H2a. For Model 4, there is a positive and significant
relationship between stress and job quality (b = .326; p<.01).
Stress explains 10 percent of the variance in job quality. The VIF values among the
independent variables are less than 10 (maximum VIF value = 2.186), so
multicollinearity is acceptably low. Thus, H2a is supported.
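The VIF diagnostic used above can be reproduced with a short sketch: each predictor is regressed on the remaining predictors, and VIF_j = 1/(1 - R²_j). The data below are simulated (not the study's) to show one inflated value and one near 1:

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance inflation factor for each column of predictor matrix X.

    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing
    column j on the remaining columns (plus an intercept).
    """
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = np.empty(k)
    for j in range(k):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid.var() / y.var()
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(1)
a = rng.normal(size=200)
b = a + 0.5 * rng.normal(size=200)   # correlated with a -> inflated VIF
c = rng.normal(size=200)             # independent -> VIF near 1
vifs = vif(np.column_stack([a, b, c]))
```

A common rule of thumb, as in the text, is that VIF values below 10 indicate that multicollinearity is not a serious concern.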
Table 4 shows the results of the regression analysis used to test H2b, which is
measured via user information satisfaction and monitoring items, moderated by
self-esteem. The results for Model 5, a regression equation with stress and
self-esteem as independent variables and job quality as the dependent variable,
indicate a significant and positive association between stress and job quality
(b = .418; p < .01); but the findings show no significant relationship between
the interaction of stress and self-esteem and job quality (b = .056; p > .44).
Therefore, H2b is not supported.
5. CONTRIBUTIONS AND FUTURE DIRECTIONS FOR RESEARCH
5.1 Theoretical Contributions and Future Direction for Research
This research aims to provide an understanding of how ethical pressure and
professional expectation have a significant direct positive influence on
accountants' stress, and how stress in turn has a significant direct positive
influence on job quality. The study provides important theoretical contributions
expanding on previous knowledge and literature on ethical pressure, professional
expectation, stress, and job quality. This research is one of the first known
studies to link ethical pressure, professional expectation, stress, and job quality
from the accountants' perspective. In addition, this study examines how pressure
affects accountants' stress and job quality via the moderating effects of time
pressure and self-esteem. Building on these results, future research should
examine the effects of ethical pressure and professional expectation in other
industries.
5.2 Managerial Contributions
This study helps identify and explain key components that may influence
accountants' stress. Accountants should receive continuous training in order to
maintain knowledge and increase the skills and ethics that reduce accountants'
stress. Firms should also provide other supports for job quality, including good
staff, better time management, an appropriate scope of accounting work, and a
type and volume of work suited to the accountant's capability. An important
point is the accountant's professional ethical behavior: he or she should act with
care towards the profession and the users of its work. Consequently, reducing
accountants' stress and improving job quality are needed by businesses and
managers.
6. CONCLUSION
and are "monstrous sweatshops of the New World Order," according to the
National Labor Committee, the New York-based group that publicized the issue.
But Honduran union leaders universally resent the moralizing of U.S. labor
activists who, like the National Labor Committee, are funded by organized labor
committed to preserving American jobs.
According to Honduran labor leaders, maquiladoras are increasingly unionized
and offer wages two-to-three times the minimum wage. These are prime jobs in
an economy in which almost half of the population can find no work at all. Labor
shortages at these jobs have helped bump up wages throughout the economy.
Even the bugaboo of child labor is more complicated than it seems. Honduran
adolescents are legally allowed to work at 14 with parental permission, and most
are desperate to help their families. The frenzy sparked by the Gifford spectacle
has led to the dismissal of hundreds of legally hired adolescents. Rather than
returning to school, which is not an option for most families who cannot afford to
feed and clothe their children, adolescents buy documents to work at even lower
pay or in some cases peddle their bodies. When confronted with the
consequences of their high-powered campaign, the New York labor group offered
little solace: "Obviously, this is not what we wanted to happen."
Although many clothing companies, such as Nike, KMart, JC Penney, and Reebok,
have rushed to pass sourcing codes, few make the effort to examine the
complexity of these issues. Of the high-profile retailers, Levi Strauss and The Gap
have distinguished themselves by devoting considerable resources to identifying
the first link in the supply chain (the shops that supply their suppliers) and
bringing direct pressure to establish minimum wage standards and working
conditions.
The Never-Never-Land of Good Intentions
Celebrating "good intentions" when complex social problems are at issue and not
understood goes to the heart of the corporate ethics conundrum. Rewarding
noble posturing also obscures meaningful progress by "messier" companies.
While many highly praised "New Age" firms have been found lacking in critical
areas of accountability and honesty of marketing, some of yesterday's most
vilified companies have quietly moved to the forefront of corporate
responsibility. Despite their regular appearances on "dishonorable" lists,
controversial multinationals such as Monsanto, DuPont, or Gillette offer fair
wages and benefits, have launched impressive affirmative action practices, are
addressing complicated environmental issues, actively engage their community
responsibilities, give many millions of dollars to charity, and sell quality,
competitively priced products and services.
Reforms can reduce litigation expenses, lighten regulatory pressures, and
improve company morale. Frequently they can result in considerable savings in
their own right. Selling necessary products with an eye to a broader definition of
stakeholder responsibilities is not politically sexy, but it can promote positive
social change.
When comparing these environmental and social reforms with the cosmetic code
at Starbucks or other boutique retailers, one has to wonder how they rack up so
many "good business" honors. A more basic question is why so many "socially
responsible" awards go to companies that sell commodity goods to affluent
consumers at eye-popping prices, such as Starbucks, where markups exceed
1,000 percent.
When asked why Starbucks was honored, CEP's Marlin says, "We want to reward
positive role models." Dare one suggest that CEP should have waited until
Starbucks did more than pass a "framework for a code of conduct," as admirably
symbolic as it may be? According to Starbucks, its code has had no effect on the
way it does business in Guatemala or dozens of other countries.
Awarding "A"s for visionary rhetoric shifts focus away from corporate governance
and behavior to the never-never-land of good intentions. It is a dangerous trend
when companies promote Thoreau-like mission statements without organizational
commitments to implement those ideals. Character demonstrated by actions, not
by intentions, is the only reliable measure of corporate ethics.
Raising the Ethical Parapet
Socially responsible business, by promoting boutique social issues and using
simplistic litmus tests, encourages cynicism. Can we break out of this ideological
box and raise the ethical parapet? This special issue of At Work moves us beyond
the concept of corporate responsibility to its expression. How are companies
manifesting social and environmental responsibility? In what ways can they be
influenced to become better in this arena?
Our first articles describe the steps taken by the chemical industry and one of its
member firms, Velsicol Chemical Corporation, to become accountable to local
communities and to the environment. Then David Mager draws from 20 years of
experience to tell how socially and environmentally responsible behavior benefits
the bottom line.
Richard Adams' description of the new retail chain he has founded, Out of this
World, illustrates how it is possible to incorporate the means for corporate
accountability to multiple stakeholders into the design and operation of a
company.
We conclude with two articles that examine the principal avenues owners can
take to influence corporate behavior in a positive direction: ethical investing and
pension fund activism.
The corporate world cannot be divided easily into "good guys" and "evil
companies." Companies are dysfunctional families writ large. Mistakes,
sometimes whoppers, are built into life, including the life of corporations. Self-scrutiny and accountability are essential. The measure of a company's integrity
is not how loudly it beats its own breast, or whether it blunders, but its respect for
its stakeholders and its responsiveness to problems.
1. "In Honduras, 'Sweatshops' Can Look Like Progress." New York Times, July 18, 1996, p. A1.
Corporate governance
Corporate governance is a broad term that has to do with the manner in which
the rights and responsibilities are shared among owners, managers and
shareholders of a given company.
In essence, the exact structure of the corporate governance will determine what
rights, responsibilities, and privileges are extended to each of the corporate
participants, and to what degree each participant may enjoy those rights.
Generally, the foundation for any system of corporate governance will be
determined by several factors, all of which help to form the final form of
governing the company.
Within any corporation, the structure of corporate governance begins with laws
that impact the operation of any company within the area of jurisdiction.
Companies cannot legally operate without a corporate structure that meets the
minimum requirements set by the appropriate government jurisdiction. All
founding documents of the company must comply with these laws in order to be
granted the privilege of incorporation. In many jurisdictions, these documents
are required by law to contain at least the seeds of how the company will be
structured to allow the creation of a balance of power within the corporation.
Much of the basis for corporate governance is found in the documents that must
be prepared and approved before incorporation can take place. These
documents help to form the basis for the final expression of the balance of power
between shareholders, stakeholders, management, and the board of directors.
The bylaws, articles of incorporation, and the company charter will all include
details that determine who has what authority in the decision making process of
the company.
Along with the laws of the land and the founding documents, corporate
governance is further refined by the drafting of formal policies that not only
recognize the assignment of powers in accordance to the bylaws and corporate
charter, but also help to further define how those powers may be employed. This
helps to allow the company some degree of flexibility in maintaining a balance of
power as the company grows, without undermining the rights and privileges
inherent in each type of corporate participation.
Fundamental and Ethics Theories of Corporate Governance
Corporate governance began with agency theory, expanded into stewardship theory and
stakeholder theory, and evolved into resource dependency theory, transaction cost theory,
political theory and ethics-related theories such as business ethics theory, virtue ethics
theory, feminist ethics theory, discourse ethics theory and postmodern ethics theory.
However, these theories address the cause and effect of variables, such as the configuration
of board members, the audit committee, independent directors and the role of top management
and their social relationships, rather than regulatory frameworks. Hence, it is suggested
that a combination of various theories best describes effective and good governance
practice, rather than theorizing corporate governance on the basis of a single theory.
Introduction
Corporations have become powerful and dominant institutions. They have reached every
corner of the globe in various sizes, capabilities and influences, and their governance has
influenced economies and many aspects of the social landscape. Shareholders are seen to be
losing trust, and market value has been tremendously affected. Moreover, with the emergence
of globalization, there is greater deterritorialization and less governmental control, which
results in a greater need for accountability (Crane and Matten, 2007). Hence, corporate
governance has become an important factor in managing organizations in the current global
and complex environment. In order to understand corporate governance, it is important to
highlight its definition. Although there is no single accepted definition of corporate
governance, it can be defined as a set of processes and structures for controlling and
directing an organization. It constitutes a set of rules which govern the relationships
between management, shareholders and stakeholders (Ching et al., 2006). The term corporate
governance has its origin in the Greek word kybernan, meaning to steer, guide or
govern.
From Greek it moved into Latin, where it was known as gubernare, and thence into the
French governer. It can also mean the process of decision-making and the process by which
decisions are implemented. Hence, corporate governance means different things to different
organizations (Abu-Tapanjeh, 2008). In recent years, with many corporate failures, the
countenance of the corporation has been scarred.
Corporate governance includes all types of firms, and its definition can extend to cover all
economic and non-economic activities. The literature on corporate governance provides some
meaning of governance, but falls short of a precise definition. Such ambiguity emerges in
words like control, regulate, manage, govern and governance, and owing to it there are many
interpretations. It may be important to consider the influences a firm exerts, or is affected
by, in order to grasp a better understanding of governance. Owing to the vast number of
influential factors, proposed models of corporate governance can be flawed, as each social
scientist frames his or her own scope and concerns.
Hence, this article reviews various fundamental theories underlying corporate governance.
These theories range from agency theory, expanded into stewardship theory, stakeholder
theory, resource dependency theory, transaction cost theory and political theory, to
ethics-related theories such as business ethics theory, virtue ethics theory, feminist
ethics theory, discourse ethics theory and postmodern ethics theory.
[Figure: Stewardship theory -- shareholders empower and trust stewards, who protect and maximize shareholders' wealth in the firm.]
2.4. Resource Dependency Theory
Whilst stakeholder theory focuses on relationships with many groups for individual
benefits, resource dependency theory concentrates on the role of board directors in
providing access to resources needed by the firm. Hillman, Canella and Paetzold (2000)
contend that resource dependency theory focuses on the role that directors play in
providing or securing essential resources to an organization through their linkages to the
external environment. Indeed, Johnson et al. (1996) concur that resource dependency
theorists focus on the appointment of representatives of independent organizations as a
means of gaining access to resources critical to firm success. For example, outside
directors who are partners in a law firm provide legal advice, either in board meetings or
in private communication with the firm's executives, that might otherwise be more costly
for the firm to secure.
It has been argued that the provision of resources enhances organizational functioning,
firm performance and survival (Daily et al., 2003). According to Hillman, Canella and
Paetzold (2000), directors bring resources to the firm, such as information, skills, access
to key constituents (suppliers, buyers, public policy makers, social groups) and
legitimacy. Directors can be classified into four categories: insiders, business experts,
support specialists and community influentials. First, the insiders are current and former
executives of the firm; they provide expertise in specific areas such as finance and law,
on the firm itself as well as on general strategy and direction. Second, the business
experts are current and former senior executives and directors of other large for-profit
firms; they provide expertise on business strategy, decision making and problem solving.
Third, the support specialists are lawyers, bankers, insurance company representatives and
public relations experts; they provide support in their individual specialized fields.
Finally, the community influentials are political leaders, university faculty, members of
the clergy, and leaders of social or community organizations.
2.5. Transaction Cost Theory
Transaction cost theory was first initiated by Cyert and March (1963) and later
theoretically described and expanded by Williamson (1996). Transaction cost theory is an
interdisciplinary alliance of law, economics and organizations. It attempts to view the
firm as an organization comprising people with different views and objectives. The
underlying assumption of transaction cost theory is that firms have become so large that
they in effect substitute for the market in determining the allocation of resources. In
other words, the organization and structure of a firm can determine price and production.
The unit of analysis in transaction cost theory is the transaction. Therefore, the
combination of people with transactions suggests that, in transaction cost theory, managers
are opportunists who arrange a firm's transactions to serve their own interests
(Williamson, 1996).
Middle Eastern Finance and Economics - Issue 4 (2009) 93
2.6. Political Theory
Political theory brings the approach of developing voting support from shareholders, rather
than purchasing voting power. Having political influence in corporate governance may thus
direct corporate governance within the organization. The public interest is largely
preserved as the government participates in corporate decision making, taking cultural
challenges into consideration (Pound, 1993). The political model highlights that the
allocation of corporate power, profits and privileges is determined via the government's
favor. The political model of corporate governance can have an immense influence on
governance developments. Over recent decades, governments have been seen to exert strong
political influence on firms. As a result, politics has entered the governance structure or
the firms' mechanisms (Hawley and Williams, 1996).
Other than the fundamental corporate governance theories of agency theory, stewardship
theory, stakeholder theory, resource dependency theory, transaction cost theory and
political theory, there are other ethical theories that can be closely associated with
corporate governance. These include business ethics theory, virtue ethics theory, feminist
ethics theory, discourse ethics theory and postmodern ethics theory.
Business ethics is the study of business activities, decisions and situations in which
questions of right and wrong are addressed. The main reasons for its importance are that
the power and influence of business in any given society is stronger than ever before; that
businesses have become major providers to society in terms of jobs, products and services;
that business collapse has a greater impact on society than ever before; and that the
demands placed on firms by their stakeholders are more complex and challenging. Only a
handful of business leaders have had any formal education in business ethics, yet there
seem to be more compromises these days. Business ethics helps us to identify the benefits
and problems associated with ethical issues within the firm, and it is important because it
casts new light on present and traditional views of ethics (Crane and Matten, 2007). In
understanding right and wrong in business ethics, Crane and Matten (2007) introduce
morality, which is concerned with the norms, values and beliefs embedded in social
processes that define right and wrong for an individual or a community. Ethics is defined
as the study of morality and the application of reason, which sheds light on the rules and
principles -- called ethical theories -- that ascertain the right and wrong of a situation.
Whilst business ethics theory focuses on right and wrong in business, feminist ethics
theory emphasizes empathy, healthy social relationships, loving care for one another and
the avoidance of harm. In an organization, caring for one another is a social concern and
not merely a profit-centered motive. Ethics also has to be seen in the light of the
environment in which it is exercised. This is important, as an organization is a network of
actions, hence influencing trans-communal levels and interactions (Casey, 2006). At the
other end, discourse ethics theory is concerned with the peaceful settlement of conflicts.
Discourse ethics, also called argumentation ethics, refers to a type of argument that tries
to establish ethical truths by investigating the presuppositions of discourse (Habermas,
1996). Meisenbach (2006) contends that such settlement would be beneficial in promoting
cultural rationality and cultivating openness.
Virtue ethics theory focuses on moral excellence, goodness, chastity and good character.
Virtue is a state of acting in a given situation; it is not a habit, as a habit can be
mindless (Annas, 2003). Aristotle calls it a disposition with choice or decision. For
example, if a board member decides to be honest, that is a decision he makes, and it
strengthens his virtue of honesty. Virtue involves two aspects, the affective and the
intellectual. The affective aspect of virtue theory suggests doing the right thing with
positive feelings, whilst the intellectual aspect suggests doing the virtuous act for the
right reason. Virtues can be instilled through education; Aristotle remarks that acquiring
knowledge of ethics is like becoming a builder (Annas, 2003). Through the process of
education and exposure to good virtues, the development of ethical values in a child's life
is evident. Hence, if a person is exposed to good or positive ethical standards exhibiting
honesty, justice and fairness, then he will exercise the same, and it will be embedded in
his will to do the right thing in any given situation. Virtue ethics is well placed to
bring intangibles into an organization; it highlights the virtuous character that develops
morally positive behavior (Crane and Matten, 2007). Virtues are a set of traits that help a
person lead a good life, and they are exhibited in a person's life. Aristotle believed that
virtue ethics consists in happiness, not in a hedonistic sense but at a broader level.
Postmodern ethics theory, by contrast, goes beyond the face value of morality and addresses
the inner, gut feelings about a situation. It provides a more holistic approach: firms may
make goal achievement their priority, foregoing or placing minimal focus on values, with a
detrimental long-term effect; on the other hand, there are firms today that are so
value-driven that their values become their ultimate goal (Balasubramaniam, 1999).
4.0. Conclusion
This review has examined corporate governance from various theoretical perspectives. The
emergence of agency theory, stewardship theory, stakeholder theory, transaction cost theory
and political theory addresses the cause and effect of variables such as the configuration
of board members, the audit committee, independent directors and the role of top
management. In addition, ethics in business has been closely associated with corporate
governance, as can be seen from the association of business ethics theory, feminist ethics
theory, discourse ethics theory, virtue ethics theory and postmodern ethics theory. Hence,
it can be argued that corporate governance is more a matter of social relationships than of
process-oriented structure. In addition, these theories have focused on the view that
shareholders aim to get a return on their investments. In today's business environment,
business processes should also focus on other critical factors such as legislation, culture
and institutional contexts.
Corporate governance is constantly changing and evolving, and the changes are driven by
both internal and external environmental dynamics. The internal environment has a fixed
mindset of shareholder relationships with stakeholders and profit maximization, whilst
issues in the external environment -- the collapse of large conglomerates like Enron,
mergers and acquisitions, business collaborations, easier financial funding, human resource
diversity, new business start-ups, globalization and business internationalization, and
advances in communication and information technology -- have directly and indirectly caused
changes in corporate governance. The current corporate governance theories cannot fully
explain the complexity and heterogeneity of corporate business. Governance may vary from
country to country owing to cultural values and political, social and historical
circumstances; in this sense, governance in developed and developing countries can differ
because of the cultural and economic contexts of individual countries.
Moreover, effective and good corporate governance cannot be explained by a single theory;
it is best to combine a variety of theories, addressing not only the social relationships
but also emphasizing the rules, legislation and stricter enforcement surrounding good
governance practice, and going beyond the norms of a mechanical approach to corporate
governance. The literature has shown that even with strict regulations there have been
infringements of corporate governance. Hence it is crucial that a holistic realization be
driven across the corporate world to bring about a different perspective on corporate
governance. The days of the cane and bridle are becoming a mere shadow, and the need to get
to the root of a corporation is essential. It is therefore important to revisit corporate
governance in the light of the convergence of these theories, with a fresh angle that takes
a holistic view and incorporates subjectivity from the perspective of the social sciences.
[Figure: A firm surrounded by its many individual investors.]
Appointing a CEO's successor gets a little more complicated when the chief executive officer
is also a member of the board of directors. Let's examine how muddled up things can get in
this case.
So we know that shareholders elect a board of directors for a company, and that board in turn
elects the CEO. But we've also learned that in some cases, a CEO can be a member of the
board itself. In fact, he or she can simultaneously hold the position of chairman of the board
and CEO. Those who study corporate governance call this situation CEO duality.
As you might expect, duality is controversial. Even theorists who strive to find the best ways
of managing a company are split about the issue. Two schools of thought represent the
different arguments. Advocates of agency theory argue that the positions of CEO and
chairman should be separate. They say that a single officer who holds both positions creates a
conflict of interest that could negatively affect the interests of the shareholders. Why? Well,
in this situation, the CEO/chairman is able to direct board meetings and isn't restrained
from acting in his or her own self-interest when a separate chairman isn't there to look out for
shareholders. This very powerful CEO would therefore generally weaken the oversight power
that boards hold -- in other words, there wouldn't be a solid system of checks and balances.
And it's not just an issue of power for the acting CEO/chairman. CEO duality can also
complicate the already frustrating issue of CEO succession. In some cases, a CEO/chairman
may choose to retire as CEO, but keep his or her role as the chairman. Although this splits up
the roles, which appeases agency theorists somewhat, it nonetheless puts the new CEO in a
difficult position. The chairman is bound to question some of the new changes put in place,
and the board as a whole might take sides with the chairman whom they trust and have a
history with [source: Lavelle]. This conflict of interest would make it difficult for the new
CEO to institute any changes, as the power and influence still remains with the former CEO.
If that's agency theory, what does the opposing side argue?
CEO Duality and Stewardship Theory
Breaking Barriers
In the United States, the positions of CEO and board members have been
dominated historically by white males, but this is slowly changing. Now, about
14.5 percent of Fortune 500 companies have female CEOs [source: Shambaugh].
Women and minorities also make up about 11 percent of board members in
corporations [source: Kidder].
CEO duality is a pretty hot debate. While advocates of agency theory believe that little good
can come from a CEO who serves simultaneously as chairman of the board of directors, there
is another side to the argument. Those who support stewardship theory maintain that when
one person holds both roles, he or she is able to act more efficiently and effectively. Holding
dual roles as CEO/chairman creates unity across the company's managers and board of
directors, which ultimately allows the CEO to serve the shareholders even better.
Unfortunately, studies of the two situations (companies that have duality and those that do
not) haven't been able to come up with a clear answer on which is better for running a
company [source: Crane]. Studies seem to indicate that duality doesn't have a direct
correlation to how well a company performs. One might assume that without a separate
chairman to oversee the CEO, the environment is ripe for corruption. However, many are
surprised to learn that even in the high-profile corporate scandals of Enron and WorldCom,
which centered around CEO corruption, the companies didn't have a duality structure [source:
Knowledge@Wharton].
This last fact is even more intriguing when you consider that most CEOs of big companies in
the United States also act as the chairman. About 80 percent of the big corporations in the
United States have a system of duality [source: Alvarez]. The same isn't true in Europe,
however. There, duality is either not permitted or, as in the U.K., not very common [source:
Huse].
Up until now, we haven't discussed what is actually the most hot-button issue regarding
CEOs: salary. We'll get to that next.
CEO Salaries
We all know that our boss makes more money than we do -- but finding out just how much
more can be shocking and often hard to swallow. Chief executive officers (CEOs) obviously
get paid handsomely (for the most part). But how much is too much? CEO pay is always
controversial -- especially when the CEOs are getting perks at a time when the company isn't
doing well.
Looking at how much modern CEOs get paid, you may think that they get to decide their own
salary. But this isn't allowed in public companies. Boards of directors have that responsibility,
and this is a harder task than you might expect. Pay too much and the board risks not only
marring the public image of the company, but also squandering corporate funds. Pay too little
and the board won't be able to attract or retain talented executives who are sought after in a
competitive market.
It's such a difficult decision that boards often designate a compensation committee made up
typically of two to five board members to determine how much to pay a CEO. Regulations
stipulate that the members of this committee can't be current employees of the company
(inside directors), which would cause a conflict of interest. Although private companies aren't
required to follow such regulations, many do anyway [source: Smith].
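As a rough sketch only, the committee constraints described above -- two to five members, none of whom may be a current employee -- can be expressed as a simple check. The function and the member names are hypothetical, invented for illustration:

```python
# Hypothetical sketch of the compensation-committee rules described above:
# the committee has two to five board members, and none of them may be a
# current employee of the company (an inside director).

def valid_committee(members, inside_directors):
    """Return True if the committee meets the size and independence rules."""
    if not 2 <= len(members) <= 5:
        return False  # must have between two and five members
    # no member may be a current employee (inside director)
    return not any(m in inside_directors for m in members)

inside = {"Alice (CFO)", "Bob (COO)"}
print(valid_committee({"Carol", "Dan", "Erin"}, inside))  # True
print(valid_committee({"Carol", "Bob (COO)"}, inside))    # False
```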
Compensation committees often consider the advice of internal executives, but they also
recruit outside consultants to help them determine an appropriate salary for the company's
CEO. The committees strive to design an appropriate philosophy for compensating the CEO
in a way that motivates performance. After the committee makes its recommendations, the
board can decide whether or not to approve them. In the United States, Securities and
Exchange Commission (SEC) regulations require that committees explain the reasons for
their decision to shareholders in a released statement [source: Smith].
There's at least one CEO who makes less than minimum wage -- kind of. Find out who on the
next page.
CEO Perks
Loss of Loyalty
Although it used to be customary for upper-management employees to stick with
a single company for much of their lives, this tradition changed in the 1980s.
Since then, executives have been more willing to switch companies for better
offers. This trend has contributed to higher salaries for executives as companies
make bids for the best candidates on the job market [source:
Knowledge@Wharton].
Steve Jobs, the CEO of Apple whose health we discussed on a previous page, is a pretty
notable exception when it comes to high CEO salaries. Apple pays him $1 a year. You read
that right: a single dollar. But don't feel too badly for him; he actually takes home a whole lot
more than that and is reportedly worth billions [source: Knowledge@Wharton]. That's
because in lieu of a traditional paycheck, Jobs receives stock options that allow him to cash in
on the success of the company.
As Jobs' case clearly illustrates, CEO compensation is more than just salary. Actually, most
top earners receive the bulk of their take-home pay from stock options. Larry Ellison, CEO of
Oracle Corporation and the top-paid CEO of 2007, received a cool $182 million in stock
options and a mere million from his salary [source: DeCarlo]. In addition to stock options,
CEOs often get hefty bonuses, privileges to use company-paid perks (like private jets) and
large contributions to their retirement plans. And although this is great news for CEOs, it
gives researchers quite a headache. Because compensation takes so many forms, those who
want to analyze, compare and determine CEO compensation find it a daunting task.
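To make the arithmetic concrete, here is a small worked example using the rounded 2007 figures quoted above; the breakdown into a dictionary is our own illustration, not a standard reporting format:

```python
# Total take-home pay is the sum of its components; salary is a tiny slice.
# Figures are the rounded 2007 amounts quoted above, in US dollars.
compensation = {
    "salary": 1_000_000,
    "stock_options": 182_000_000,
}
total = sum(compensation.values())
salary_share = compensation["salary"] / total
print(f"Total: ${total:,}")                 # Total: $183,000,000
print(f"Salary share: {salary_share:.1%}")  # Salary share: 0.5%
```

Less than one percent of the take-home pay comes from salary, which is why rankings based on salary alone can be so misleading.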
Overall, it's important to take sensationalized reports of a CEO's high salary with a grain of
salt. It can be difficult to estimate his or her value to a company and to guess the various
factors that go into the board's difficult decision of determining salary.
If you want more on the spoken and unspoken rules that govern a company, browse the links
on the next page.
How CEOs Work
If you're too intimidated to ask him or her personally, this article will tell you
what a CEO does.
You've heard about his private jet, fancy mansion and sports car collection -- not to mention
the cutthroat business practices that helped him attain all these things. He's the CEO of your
company, and you're probably lucky if he knows your name.
Well, this is the stereotypical portrait of a CEO, anyway. In reality, yours might be a nice,
down-to-earth guy, or he may be a she. Regardless, CEOs have a reputation for living
luxuriously, having keen business minds and striking fear into the hearts of employees
whenever they happen to drop in.
In corporate culture, a chief executive officer, or CEO, is the big boss. CEOs may not do the
nitty-gritty hirings and firings themselves, but they run the show. They're in charge of setting
strategy, company goals and making the high-end decisions. Because this is a big job, they
delegate many of their powers to other executives. Employees can question a CEO's
judgment, but only at their own risk. That's not to say CEOs are untouchable or have
unchecked power. Although he or she may be top dog in the office, the CEO must answer to
a board of directors.
Nevertheless, the power associated with the position often generates suspicion and
controversy. When a company is suffering through a tough quarter and sends word to its
employees that there won't be any Christmas bonus this year, it certainly looks bad to see a
CEO take an increase in salary and fly off on vacation in the company jet. It's also suspicious
when a company's CEO serves simultaneously as chairman of the board of directors. What's
more, the position draws heightened scrutiny these days after such corporate scandals as
Enron exposed CEOs abusing their power.
Before we delve into these and other controversies that swarm around CEOs, we need to
understand what these officers do. It can be difficult to define a CEO's responsibilities due to
the fact that every company's CEO is different. Because they hold the top internal position in
a corporation, CEOs get to decide which duties they want to take on personally and which
they want to delegate. And because every corporation has its own culture and various
industries operate on different corporate structures, we'll have to look at the role from a
general perspective. Let's start with a brief overview of how corporations work.
Corporate Structure: Board of Directors
Have you ever tried to understand the ranks of executives in a company only to get lost in
acronyms and jargon? You're not alone; the balance of power in the corporate world can be
confusing even to those entrenched in it. But don't dismay: We'll walk you through the basic
corporate structure.
Just like many governments, corporations have a system of checks and balances so that not
too much power is centered in one person or group. In companies, the structure is set up to
separate powers of ownership and management. This wasn't always the case. Before the
Industrial Revolution in the 19th century, companies were typically family-run and very
small by today's standards. But eventually, powered by machines and advanced efficiency,
individual companies grew exponentially. Soon after came the dawn of public ownership of
companies, which helped fund these gargantuan institutions.
When various shareholders have partial ownership of a company, they want to make sure
whoever's running the show is looking out for their best interests. This is what a board of
directors is for. The board represents the shareholders and other stakeholders (those who
have a vested interest in the company). The board of directors doesn't run the company itself,
but it oversees those who do.
In a public company, the shareholders elect the members of a board of directors. The board is
headed by a chairman and contains other directors, the number of which varies from
company to company. Directors can be either inside directors or outside directors. Inside
directors are those who are also managers in the company or happen to be major
shareholders. Outside directors, on the other hand, don't have a role in the company. They
typically have experience in the industry (or might even be chief executive officers of other
companies), which allows them to make informed decisions about the business. Some have
memberships on multiple boards.
While inside directors can share their unique insight from an internal perspective, outside
directors are considered unbiased. Both kinds of directors have the same general
responsibilities on a board. Directors oversee the management of the company collectively by
approving strategies and budgets. They may not meet regularly, and the influence they truly
wield over management can depend on the dynamics and atmosphere of the company.
Corporate Structure: Company Management Ladder
Private Matters
Today, even if a corporation is private and isn't publicly traded, laws and
regulations usually require it to have a board of directors that looks out for the
interests of owners and various stakeholders, such as the local community.
However, the board of a private company has fewer oversight rules and
regulations to follow. Many private companies also have CEOs, though not all of
them do.
The different theories on how best to organize and run a large corporation have allowed the
subject to blossom into its own field of study known as corporate governance. Under this
subject, researchers inspect such things as how many inside or outside directors should make
up a board, or the best balance of powers between the board and the CEO.
Next, let's focus in on the CEO.
Duties of a CEO
A CEO must make the important high-end decisions for the company.
Putting aside the vague language, what does a chief executive officer (CEO) do, exactly?
All CEOs are responsible for determining the overall strategy of a company. For example, the
CEO of a car company would have to decide whether to focus on building large SUVs for the
family and adventurer demographic or to jump on the latest green trend and build vehicles
with more efficient gas mileage, instead. The CEO of a company that makes computers might
decide whether to cut prices to be more competitive in the consumer market or to hire more
engineers so that the company can make a better computer.
The CEO's day-to-day duties may depend on the size of the company he or she oversees. In a
big company, setting the strategy in all departments and for all facets of the industry can be a
full-time job. This is why you never see CEOs of large corporations stepping into the
warehouse and helping to get orders through (except, perhaps, in photo ops). In smaller
companies and start-ups, things are usually different. A CEO who was also the founder of the
company and is struggling to make it grow probably has a more hands-on role. He or she is
more likely to step into any role necessary to get the job done. And, of course, the daily
responsibilities of a CEO may also vary across industries.
Even though they can delegate power, CEOs are ultimately responsible for everything related
to management, such as operations and financial matters. This means that the chief operating
officer (COO) and chief financial officer (CFO) report directly to the CEO. As we've
mentioned, since the board of directors chooses the CEO, the CEO must, in turn, report to the
board.
Depending on how involved the board chooses to be, it can take a backseat to the CEO's
vision and decisions. Or, the board could opt to take a more direct role and charge the CEO
with carrying out its plans. The CEO's personality is a major factor in determining his or her
relationship with the board. In general, CEOs tend to have domineering, arresting
personalities that can help them wield power over a board. But because the board has the
power to choose and remove the CEO, there's always that check on power that can rein in a
CEO's behavior.
More CEO Responsibilities
Regardless of whether it's a big or small company that he or she oversees, the CEO is usually
instrumental in setting the tone for an organization. CEOs are able to use their power and
method of leadership in a way that motivates employees. For instance, if employees get the
impression that their CEO is working as hard as they do and that he or she really appreciates
their hard work, this can elicit loyalty from all levels of employees. But the CEO doesn't
always set a positive tone; his or her behavior can discourage employees as easily as it
bolsters their morale. If a CEO comes across as unattached to the company's employees and
flies off frequently on exotic vacations, employees may not feel compelled to work hard for
him or her.
Many people assume that because of their heavy responsibilities, CEOs are especially prone
to stress-related health problems. According to some research, however, those in mid-level
management are more likely to develop health problems than those who work at higher levels
of the corporate ladder [source: Quick]. So it would seem that more responsibility doesn't
necessarily equate to more stress. However, some argue that top-ranking CEOs are able to
avoid job stress by dodging responsibility. When a company's performance takes a dive,
CEOs may try to pass the buck down to lower executives. Although this is just one possible
explanation for why CEOs wouldn't be as stressed as some of those managers to whom they
delegate power, shirking responsibility has been shown to be an unwise business tactic. According
to some studies of Fortune 500 companies, when high-level executives take the blame for
slumps, it's more likely to result in improved performance from the employees [source:
Pfeffer]. Other studies confirm that even in hypothetical situations, employees are more likely
to approve of and respect executives who shoulder the blame for unfavorable events [source:
Pfeffer].
Because CEOs are so vital to the success, identity and tone of a company, controversy always
lurks around the corner when the top dog retires, as we'll see next.
The Problem of Losing a CEO
Car accidents, heart attacks, cancer. As much as we hate to think about it, no one lives
forever. If a CEO is truly successful, he or she won't outlive the corporation itself. And,
CEOs may also choose to leave the company suddenly to go to another organization, to pursue
other exploits, or to retire. Of course, the board can always fire the CEO as well.
Whatever the cause, when a company loses a CEO, it can be like the frenzy of a chicken
running around with its head cut off. That's because of the problem of CEO succession -- in
other words, deciding who will be a suitable replacement. Just as monarchies have struggled
historically with the death of a king who has no strong or obvious successor, so must
companies struggle with the departure of a CEO. If companies aren't careful, what plays out
is the stuff of Shakespearean drama. In fact, in the 2000 motion picture release of
Shakespeare's "Hamlet," which deals with problems of royal succession, filmmaker Michael
Almereyda modernized the plot to revolve around the death of a CEO in place of a king.
So why is naming a new CEO such a big deal? Why does the media rush to the scene when
Steve Jobs, the CEO of Apple, so much as sneezes? Basically, it's because of the reasons we
laid out on the last page -- the CEO is the lifeblood of a company. He or she sets the direction
of a corporation, and shareholders don't want to hold on to the stock of a directionless
company for long. Jobs himself is a great example of this because many credit him with
saving Apple from the brink of bankruptcy and subsequently raising it to enormous success.
Without him, some fear the company might sink yet again. To see evidence of how much a
company hinges on its CEO, note how Apple's shares dipped at the mere rumor of Jobs'
relapse into ill health [source: Reuters]. In January 2009, news surfaced of Steve Jobs
taking a leave of absence from his position at Apple. The announcement was enough to
institute a temporary halt on the trading of Apple stock. To calm investors, Jobs appointed
COO Tim Cook to take over daily operations for him during his leave.
CEO Succession
But not every CEO phases himself or herself out of the picture gradually, as Bill Gates did at Microsoft. In
the modern dynamic of corporate culture, a board of directors is more likely to take an
aggressive role in appointing a successor. In fact, it's not uncommon for the board to make an
independent choice, perhaps selecting a candidate from outside the company. Hiring CEOs
from outside the organization has become more popular lately. In the 1960s, for instance,
outsiders accounted for 9 percent of new CEOs, but by 2000, this figure had risen to about 33
percent [source: Carey]. Theorists disagree about what factors are behind this shifting
ideology. Some claim that boards increasingly (and unwisely) seek charismatic, superstar
CEOs for the illusion of strong company leadership [source: Monks].
Because of the problems that can ensue from the sudden death or departure of a CEO,
experts recommend that boards always have a plan ready for a stable transition. This would
involve communicating with various managers to appoint the best successor [source: Monks].
Production management
How the Transportation Method Works
The transportation method consists of the following three steps:
1. Construct an initial feasible allocation of supply to demand (for example, with the northwest-corner rule or Vogel's approximation method).
2. Test the current allocation for optimality (for example, with the stepping-stone or MODI method).
3. If the allocation is not optimal, improve it by reallocating along the most favorable path, and repeat step 2.
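A minimal sketch of the method's first step, the northwest-corner rule for an initial feasible allocation, together with a cost evaluation. All supply, demand, and cost figures below are invented for illustration, and the optimality-test and improvement steps are omitted for brevity.

```python
# Step 1 of the transportation method: the northwest-corner rule.
# Supply, demand, and per-unit costs are illustrative numbers only.

def northwest_corner(supply, demand):
    """Return a dict {(i, j): qty} allocating supply rows to demand columns."""
    supply, demand = supply[:], demand[:]   # work on copies
    alloc, i, j = {}, 0, 0
    while i < len(supply) and j < len(demand):
        qty = min(supply[i], demand[j])     # ship as much as both sides allow
        alloc[(i, j)] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:                  # source exhausted: move down
            i += 1
        else:                               # destination satisfied: move right
            j += 1
    return alloc

def total_cost(alloc, cost):
    return sum(qty * cost[i][j] for (i, j), qty in alloc.items())

supply = [20, 30, 25]          # units available at each source
demand = [10, 25, 15, 25]      # units required at each destination
cost = [[4, 6, 8, 8],          # per-unit shipping costs, source x destination
        [6, 8, 6, 7],
        [5, 7, 6, 8]]

alloc = northwest_corner(supply, demand)
print(total_cost(alloc, cost))
```

The initial solution is rarely optimal; steps 2 and 3 would then repeatedly test it and reallocate until no cheaper shipping pattern exists.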
Layout Planning
The term 'layout planning' can be applied at various levels of planning:
Plant location planning: concerned with the location of a factory, warehouse, or other facility. This is of some importance in the design of multinational, global supply-chain systems.
Department location planning: deals with the location of different departments or sections within a plant/factory. This is the problem we shall study in a little more detail below.
Machine location problems: deal with the location of individual machine tools, desks, offices, and other facilities within each cell or department.
Detailed planning: the final stage of facility planning is the generation, using CAD tools or detailed engineering drawings, of scaled models of the entire floor plan, including details such as the location of power supplies, cabling for computer networks and phone lines, etc.
The Department Location Problem: A department is defined as any single, large resource with a
well-defined set of operations and fixed material entry and exit points. Examples range from a
large machine tool to an entire design department. The aim is to develop a BLOCK PLAN showing the
relative locations of the departments.
Criteria: The primary criteria for evaluating any layout will be the:
MINIMIZATION of material handling costs.
MH cost components: depreciation of MH equipment, variable operating costs, labor expenses.
Also, MH costs are typically directly proportional to (a) the frequency of material movement,
and (b) the distance over which material is moved.
[Figure: the scale of material handling ranges from moving a single part to an adjacent machine up to moving unit loads to another plant.]
The spine defines a central channel of material flow for the entire facility. Each department
branches out of this central core. Ideally, each department has its own input/output area along the
spine. This departmental point of usage concept reduces material flow.
We shall now look at some details of how to locate departments along a spine to optimize the
flow of materials. Let us first try to see if we can evaluate whether there is a dominant flow
pattern in a manufacturing system or not.
wij = fij x hij
where fij is the frequency of material movement between departments i and j, and hij is the handling cost per move. Given the values of all the wij's, one measure of flow dominance is the coefficient of variation, defined as:
CV = (standard deviation of the wij values) / (mean of the wij values)
A high CV indicates a few dominant flows; a low CV indicates that flows are spread evenly.
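The coefficient of variation is the standard deviation of the wij values divided by their mean. A quick numerical sketch, with invented move frequencies and handling costs:

```python
# Flow-dominance sketch: form w_ij = f_ij * h_ij (move frequency times
# handling cost per move) for each department pair, then compute the
# coefficient of variation of the w values. All figures are invented.
from statistics import mean, pstdev

freq = {("A", "B"): 30, ("A", "C"): 5, ("B", "C"): 25, ("B", "D"): 4}
cost = {("A", "B"): 2.0, ("A", "C"): 1.5, ("B", "C"): 2.0, ("B", "D"): 1.0}

w = [freq[p] * cost[p] for p in freq]    # the w_ij values
cv = pstdev(w) / mean(w)                 # coefficient of variation

# A large CV means a few dominant flows (a flow-line layout pays off);
# a small CV means evenly spread flows (favoring a job-shop layout).
print(round(cv, 3))
```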
3. Quantitative analysis: Some factors, such as flow costs, can be quantified. Several others
are not so easy to quantify. For example
a. MH receiving and delivery stations to be kept together.
b. Delicate testing equipment should be placed far from high vibration areas, etc.
Such relationships can be quantified by using REL diagrams, as shown in the
figure below. The relative importance of each relationship is expressed as a
subjective rating, ranging from A (absolutely necessary) through E (especially
important), I (important), and O (ordinary closeness) to U (unimportant), with
X marking pairs that must be kept apart.
The diagram can also give reasons for such decisions. An example is shown
below.
To give a numerical example, assume that we allow: V(A) = 81, V(E) = 27, V(I) = 9,
V(O) = 3 and V(U) = 1. Then the closeness ratings corresponding to each department in
the example figure above are:

Department    Total Closeness Rating
SR            9+3+9+3+81 = 105
PC            9+0+1+1+27 = 38
PS            58
IC            39
XT            35
AT            165
In the above, the X-ratings were ignored in order to allow each department to have a fair
chance in placement in the initial design of the layout. The real value of this rating will be
used later, when we put some effort into modification on the first-guess solution.
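The ratings can be computed mechanically from the REL letters. A small sketch, with the letter values from the text but an invented set of pairwise relationships (not the figure's):

```python
# Turn REL-chart letters into total closeness ratings per department.
# Letter values follow the text (A=81, E=27, I=9, O=3, U=1); X is given
# 0 here because X-ratings are ignored in the initial ranking pass.
# The pairwise letters below are illustrative only.
V = {"A": 81, "E": 27, "I": 9, "O": 3, "U": 1, "X": 0}

rel = {("SR", "AT"): "A", ("SR", "PS"): "I", ("PS", "AT"): "E",
       ("PC", "AT"): "O", ("IC", "AT"): "I", ("XT", "PS"): "I"}

depts = {d for pair in rel for d in pair}
rating = {d: sum(V[letter] for pair, letter in rel.items() if d in pair)
          for d in sorted(depts)}

# The highest-rated department is placed first (in the centre of the
# block plan), then the remaining departments in rank order.
order = sorted(rating, key=rating.get, reverse=True)
print(order[0])
```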
Forming the first guess solution (greedy algorithm):
Step 1. Notice that AT has the highest rating, and so is placed in the center of the layout
(why ?)
Step 2. The next highest ranked department is SR, which may be placed adjacent to AT
due to their mutual A-rating. We put it on top of AT.
Step 3. Next up is PS, which should go adjacent to AT (since V(AT,PS) is the highest
rated closeness value for PS).
Step 4. Next comes XT, which should be close to PS.
Step 5. Next is IC, which should be close to AT and is placed below it.
Step 6. Finally, we have PC, which must stay away from PS.
Using these directions, we have a first attempt at the layout as follows:
Notice the odd shape of the final layout. This does not matter, since we still have not
considered the relative sizes of the departments. But before considering that, we must
also attempt to improve upon our greedy solution.
One heuristic to do so is called the 2-Opt method. A k-opt method is said to have
converged when any switching between k variables (in this case, locations of
departments) cannot improve upon the objective (in our case, minimization of the total
MH cost).
The 2-Opt procedure to improve on the greedy solution is pretty straightforward, and
described rather well in your text (Askin and Standridge, p. 219). In summary, it is a
hill-climbing heuristic in which, starting from the initial solution, at each step we compute
the reduction (if any) in cost associated with switching the positions of each pair of
departments.
The pair which yields the maximum reduction in costs (steepest local benefit) is selected
at this step. The switch is made, and the procedure continues, until at some stage, we are
unable to find any pair-switch which improves on the MH cost.
In the above, the MH cost associated with any pair of departments is often based on the
estimated MH cost factor, wij that we computed earlier, multiplied by an estimate of the
distance between the two cells.
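The pair-switching pass described above can be sketched directly. The flow weights and grid coordinates below are invented; the loop makes the steepest cost-reducing swap until no swap helps.

```python
# Minimal 2-Opt improvement pass: repeatedly make the pair-swap of
# department positions that most reduces total MH cost (sum of w_ij
# times rectilinear distance), stopping when no swap improves it.
# Flow weights and grid coordinates are illustrative only.
from itertools import combinations

w = {("A", "B"): 10, ("A", "C"): 1, ("B", "C"): 8, ("C", "D"): 6}
pos = {"A": (0, 0), "B": (2, 0), "C": (0, 1), "D": (2, 1)}  # initial layout

def dist(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])   # rectilinear distance

def cost(pos):
    return sum(f * dist(pos[i], pos[j]) for (i, j), f in w.items())

improved = True
while improved:
    improved = False
    best, best_pair = cost(pos), None
    for a, b in combinations(pos, 2):            # try every pair swap
        pos[a], pos[b] = pos[b], pos[a]
        if cost(pos) < best:
            best, best_pair = cost(pos), (a, b)
        pos[a], pos[b] = pos[b], pos[a]          # undo the trial swap
    if best_pair:                                # keep the steepest improvement
        a, b = best_pair
        pos[a], pos[b] = pos[b], pos[a]
        improved = True

print(cost(pos))
```

Here the initial layout costs 57; two swaps bring it down to 35, after which no pair-switch helps, so the heuristic has converged (a local, not necessarily global, optimum).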
5. Space requirements: these are determined based on industrial standards, equipment
required, shelf space required, etc.
6. Space availability: this is determined based on the economic analysis, as well as on other
constraints that may arise (especially if the system is to be housed in an existing facility).
The last two considerations will give an estimate of total space for each department, and
sometimes also the shape of each department (based on flow type within the department).
7. Space relationship diagram: In this part, we substitute in the actual area on each
department, and fit the departments into the available space. Usually, the solution
methods may be computer-assisted heuristics, or just direct visual methods.
8. Putting in the constraints: Finally, other existing constraints are employed to cut down the
number of feasible solutions, resulting in a small set of solutions. From among these,
direct comparison can be used to rank, eliminate, or select the optimum design.
Decision theory is a branch of applied probability theory that evaluates the consequences
of decisions. It is often used as an economic instrument. Two related, well-known methods
are the simple utility analysis (Nutzwertanalyse, NWA) and the more precise Analytic
Hierarchy Process (AHP), in which criteria and alternatives are represented, compared, and
evaluated in order to find the optimal solution to a decision problem.
One differentiates between three subsections of decision theory:
1. Normative decision theory looks for criteria of rational decision-making and seeks to
answer how one ought, reasonably, to decide in a given situation. To do so it must adopt
some simplifying model assumptions, for example the axiom of rationality of the
decision-maker.
2. Prescriptive decision theory concerns itself with supplying procedures for reaching
rational and practicable decisions.
3. Descriptive decision theory, by contrast, examines empirically how decisions are
actually made in reality.
The basic (normative) model of decision theory consists of the decision field and the target
system. The decision field contains the action space (the set of possible action
alternatives), the state space (the set of possible environmental states), and a result
function, which assigns a value to each combination of action and state. A frequent problem
is that the true environmental state is not known. Here one speaks of uncertainty, in
contrast to the situation of certainty, in which the environmental state is known. The
situations can be classified as:
* Decision under certainty: The occurring state is known. (Deterministic decision
model)
* Decision under uncertainty: It is not known with certainty which environmental
state s_j occurs; one further differentiates between:
o Decision under risk: The probability p_j of each possibly occurring environmental
state s_j is known. (Stochastic decision model)
o Decision under uncertainty (in the strict sense): The possibly occurring environmental
states are known, but not their probabilities.
With a decision under risk, expected values can be calculated over all possible consequences
of each individual decision. With a decision under strict uncertainty this is not directly
possible; instead, the principle of insufficient reason (the indifference principle) is often
used, which assigns the same probability to each state. On the basis of such probability
assessments, an expected value can then be determined under uncertainty as well.
The (single- or multi-stage) decision-making process with its different consequences can be
plotted as a decision tree.
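The two cases can be sketched side by side. The payoff table, state probabilities, and action names below are invented for illustration; under risk we weight outcomes by the known probabilities, while the insufficient-reason case simply assigns equal probabilities to all states.

```python
# Expected values under risk vs. under the principle of insufficient
# reason. The result function (payoffs) and probabilities are invented.

payoff = {                       # action -> payoff in each environmental state
    "expand": [120, 40, -30],
    "hold":   [60, 50, 25],
    "divest": [30, 30, 30],
}
p = [0.5, 0.3, 0.2]              # known state probabilities (risk case)

def expected_value(probs):
    return {a: sum(q * v for q, v in zip(probs, vals))
            for a, vals in payoff.items()}

ev_risk = expected_value(p)
n = len(p)
ev_laplace = expected_value([1 / n] * n)   # insufficient-reason case

best_risk = max(ev_risk, key=ev_risk.get)
print(best_risk, round(ev_risk[best_risk], 1))
```

Note that the two criteria can recommend different actions from the same payoffs, which is exactly why the probability assumption matters.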
Decision theory is not applicable if the entrepreneur or manager must take into account a
rationally acting opponent (such as a competitor) whose behavior flows into the decision.
Such situations cannot be modeled by probability calculus alone, since the opponent's
behavior is neither deterministic nor purely random. In such cases, game theory is used.
Decision theory has recently also been applied to the evaluation of investments. Under the
name 'real options,' the decision-tree procedure is used to judge the value of flexibility
in decisions, i.e., the value of the option to decide at a later time.
OPERATIONS SCHEDULING
Scheduling pertains to establishing both the timing and use of resources within an organization.
Under the operations function (both manufacturing and services), scheduling relates to use of
equipment and facilities, the scheduling of human activities, and receipt of materials.
While issues relating to facility location and plant and equipment acquisition are considered long
term and aggregate planning is considered intermediate term, operations scheduling is considered
to be a short-term issue. As such, in the decision-making hierarchy, scheduling is usually the
final step in the transformation process before the actual output (e.g., finished goods) is
produced. Consequently, scheduling decisions are made within the constraints established by
these longer-term decisions. Generally, scheduling objectives deal with tradeoffs among
conflicting goals: efficient utilization of labor and equipment, lead time, inventory levels,
and processing times.
Byron Finch notes that effective scheduling has recently increased in importance. This increase
is due in part to the popularity of lean manufacturing and just-in-time. The resulting drop in
inventory levels and subsequent increased replenishment frequency has greatly increased the
probability of the occurrence of stock-outs. In addition, the Internet has increased pressure to
schedule effectively. "Business to customer" (B2C) and "business to business" (B2B)
relationships have drastically reduced the time needed to compare prices, check product
availability, make the purchase, etc. Such instantaneous transactions have increased the
expectations of customers, thereby making effective scheduling a key to customer satisfaction. It
is noteworthy that there are over 100 software scheduling packages that can perform schedule
evaluation, schedule generation, and automated scheduling. However, their results can often be
improved through a human scheduler's judgment and experience.
There are two general approaches to scheduling: forward scheduling and backward scheduling.
As long as the concepts are applied properly, the choice of method is not significant. In fact,
if process lead times (move, queue, and setup times) are added to the job lead time and
processing is assumed to occur at the end of that lead time, then forward scheduling and
backward scheduling yield the same result. With forward scheduling, the scheduler selects a
planned order release date and schedules all activities from this point forward in time.
With backward scheduling, the scheduler begins with a planned receipt date or due date and
moves backward in time, according to the required processing times, until he or she reaches the
point where the order will be released.
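The equivalence of the two approaches can be seen in a small sketch for a single job. The four operation lead times (in days, including move, queue, and setup allowances) are invented:

```python
# Forward scheduling walks from a chosen release day toward completion;
# backward scheduling walks from the due day back to the required
# release day. Operation lead times (days) are illustrative only.
ops = [4, 2, 3, 1]                 # lead time of each operation, in sequence

def forward_schedule(release_day):
    t, spans = release_day, []
    for d in ops:
        spans.append((t, t + d))   # (start, finish) of each operation
        t += d
    return spans                    # last finish = earliest completion

def backward_schedule(due_day):
    t, spans = due_day, []
    for d in reversed(ops):
        spans.append((t - d, t))
        t -= d
    return list(reversed(spans))    # first start = required release day

fwd = forward_schedule(0)
bwd = backward_schedule(10)
print(fwd[-1][1], bwd[0][0])        # completion day and release day
```

Scheduling forward from day 0 and backward from a due date of day 10 produce the same operation spans, illustrating the claim above.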
Of course there are other variables to consider other than due dates or shipping dates. Other
factors which directly impact the scheduling process include: the types of jobs to be processed
and the different resources that can process each, process routings, processing times, setup times,
changeover times, resource availability, number of shifts, downtime, and planned maintenance.
LOADING
Loading involves assigning jobs to work centers and to various machines in the work centers. If
a job can be processed on only one machine, no difficulty is presented. However, if a job can be
loaded on multiple work centers or machines, and there are multiple jobs to process, the
assignment process becomes more complicated. The scheduler needs some way to assign jobs to
the centers in such a way that processing and setups are minimized along with idle time and
throughput time.
Two approaches are used for loading work centers: infinite loading and finite loading. With
infinite loading jobs are assigned to work centers without regard for capacity of the work center.
Priority rules are appropriate for use under the infinite loading approach. Jobs are loaded at work
centers according to the chosen priority rule. This is known as vertical loading.
Finite loading projects the actual start and stop times of each job at each work center. Finite
loading considers the capacity of each work center and compares the processing time so that
process time does not exceed capacity. With finite loading the scheduler loads the job that has
the highest priority on all work centers it will require. Then the job with the next highest priority
is loaded on all required work centers, and so on. This process is referred to as horizontal
loading. The scheduler using finite loading can then project the number of hours each work
center will operate. A drawback of horizontal loading is that jobs may be kept waiting at a work
center, even though the work center is idle. This happens when a higher priority job is expected
to arrive shortly. The work center is kept idle so that it will be ready to process the higher
priority job as soon as it arrives. With vertical loading the work center would be fully loaded. Of
course, this would mean that a higher priority job would then have to wait to be processed since
the work center was already busy. The scheduler will have to weigh the relative costs of keeping
higher priority jobs waiting, the cost of idle work centers, the number of jobs and work centers,
and the potential for disruptions, new jobs and cancellations.
If the firm has limited capacity (e.g., already running three shifts), finite loading would be
appropriate since it reflects an upper limit on capacity. If infinite loading is used, capacity may
have to be increased through overtime, subcontracting, or expansion, or work may have to be
shifted to other periods or machines.
SEQUENCING
Sequencing is concerned with determining the order in which jobs are processed. Not only must
the order be determined for processing jobs at work centers but also for work processed at
individual work stations. When work centers are heavily loaded and lengthy jobs are involved,
the situation can become complicated. The order of processing can be crucial when it comes to
the cost of waiting to be processed and the cost of idle time at work centers.
There are a number of priority rules or heuristics that can be used to select the order of jobs
waiting for processing. Some well known ones are presented in a list adapted from Vollmann,
Berry, Whybark, and Jacobs (2005):
Random (R). Pick any job in the queue with equal probability. This rule is often used as a
benchmark for other rules.
First come/first served (FC/FS). This rule is sometimes deemed to be fair since jobs are
processed in the order in which they arrive.
Shortest processing time (SPT). The job with the shortest processing time requirement
goes first. This rule tends to reduce work-in-process inventory, average throughput time,
and average job lateness.
Earliest due date (EDD). The job with the earliest due date goes first. This seems to work
well if the firm performance is judged by job lateness.
Critical ratio (CR). To use this rule one must calculate a priority index using the formula
(due date - now)/(lead time remaining). This rule is widely used in practice.
Least work remaining (LWR). An extension of SPT, this rule dictates that work be
scheduled according to the processing time remaining before the job is considered to be
complete. The less work remaining in a job, the earlier it is in the production schedule.
Fewest operations remaining (FOR). This rule is another variant of SPT; it sequences
jobs based on the number of successive operations remaining until the job is considered
complete. The fewer operations that remain, the earlier the job is scheduled.
Slack time (ST). This rule is a variant of EDD; it utilizes a variable known as slack. Slack
is computed by subtracting the sum of setup and processing times from the time
remaining until the job's due date. Jobs are run in order of the smallest amount of slack.
Slack time per operation (ST/O). This is a variant of ST. The slack time is divided by the
number of operations remaining until the job is complete with the smallest values being
scheduled first.
Next queue (NQ). NQ is based on machine utilization. The idea is to consider queues
(waiting lines) at each of the succeeding work centers at which the jobs will go. One then
selects the job for processing that is going to the smallest queue, measured either in hours
or jobs.
Least setup (LSU). This rule maximizes utilization. The process calls for scheduling first
the job that minimizes changeover time on a given machine.
These rules assume that setup time and setup cost are independent of the processing sequence.
However, this is not always the case. Jobs that require similar setups can reduce setup times if
sequenced back to back. In addition to this assumption, the priority rules also assume that setup
time and processing times are deterministic and not variable, there will be no interruptions in
processing, the set of jobs is known, no new jobs arrive after processing begins, and no jobs are
canceled. While little of this is true in practice, it does make the scheduling problem manageable.
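Several of the rules above can be applied to the same queue to see how the resulting sequences differ. The jobs, times, and due dates below are invented, and for this single-operation example the "lead time remaining" in the CR formula is taken to be the processing time:

```python
# Four priority rules applied to one job queue. Each job carries
# (processing time, due date) in days; "now" is day 0. Data is invented.
now = 0
jobs = {"J1": (6, 14), "J2": (2, 7), "J3": (4, 20), "J4": (3, 9)}

spt = sorted(jobs, key=lambda j: jobs[j][0])             # shortest processing time
edd = sorted(jobs, key=lambda j: jobs[j][1])             # earliest due date
cr  = sorted(jobs, key=lambda j: (jobs[j][1] - now) / jobs[j][0])   # critical ratio
st  = sorted(jobs, key=lambda j: (jobs[j][1] - now) - jobs[j][0])   # slack time

print(spt, edd, cr, st)
```

Note how CR promotes J1, the long job that is closest to being late, while SPT pushes it to the back; this is the tradeoff among conflicting goals the rules embody.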
GANTT CHARTS
Gantt charts are named for Henry Gantt, a management pioneer of the early 1900s. He proposed
the use of a visual aid for loading and scheduling. Appropriately, this visual aid is known as a
Gantt chart. This Gantt chart is used to organize and clarify actual or intended use of resources
within a time framework. Generally, time is represented horizontally with scheduled resources
listed vertically. Managers are able to use the Gantt chart to make trial-and-error schedules to get
some sense of the impact of different arrangements.
There are a number of different types of Gantt charts, but the most common ones, and the ones
most appropriate to our discussion, are the load chart and schedule chart. A load chart displays
the loading and idle times for machines or departments; this shows when certain jobs are
scheduled to start and finish and where idle time can be expected. This can help the scheduler
redo loading assignments for better utilization of the work centers. A schedule chart is used to
monitor job progress. On this type of Gantt chart, the vertical axis shows the orders or jobs in
progress while the horizontal axis represents time. A quick glance at the chart reveals which jobs
are on schedule and which are behind or ahead of schedule.
Gantt charts are the most widely used scheduling tools. However, they do have some limitations.
The chart must be repeatedly updated to keep it current. Also, the chart does not directly reveal
costs of alternate loadings nor does it consider that processing times may vary among work
centers.
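A load chart of the kind described is simple enough to render as text. The work centers, jobs, and time spans below are invented for illustration:

```python
# Toy text rendering of a Gantt load chart: each row is a work center,
# each column a day; a letter marks the job occupying that slot and "."
# marks idle time. The loading data here is illustrative only.
schedule = {                       # work center -> list of (job, start, end)
    "WC1": [("A", 0, 3), ("B", 4, 7)],
    "WC2": [("C", 1, 5)],
}
horizon = 8                        # number of days shown on the chart

rows = {}
for wc, jobs in schedule.items():
    slots = ["."] * horizon
    for job, start, end in jobs:
        for day in range(start, end):
            slots[day] = job
    rows[wc] = "".join(slots)
    print(f"{wc} |{rows[wc]}|")
```

The idle slots jump out immediately, which is exactly what a scheduler scans a load chart for before redoing loading assignments.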
SCHEDULING SERVICE OPERATIONS
The scheduling of services often encounters problems not seen in manufacturing. Much of this is
due to the nature of service, i.e., the intangibility of services and the inability to inventory or
store services and the fact that demand for services is usually random. Random demand makes
the scheduling of labor extremely difficult as seen in restaurants, movie theaters, and amusement
parks. Since customers don't like to wait, labor must be scheduled so that customer wait is
minimized. This sometimes requires the use of queuing theory or waiting-line theory. Queuing
theory uses estimated arrival rates and service rates to calculate an optimum staffing plan. In
addition, flexibility can often be built into the service operation through the use of casual labor,
on-call employees, and cross-training.
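As a small illustration of the queuing idea, the classic single-server M/M/1 model gives the average wait in line from an estimated arrival rate and service rate. The rates below are invented:

```python
# M/M/1 sketch: with arrival rate lam and service rate mu (customers
# per hour), the average wait in queue is Wq = lam / (mu * (mu - lam)).
# The staffing question is then whether that wait is acceptable.

def mm1_wait(lam, mu):
    if lam >= mu:
        raise ValueError("queue is unstable: arrivals outpace service")
    return lam / (mu * (mu - lam))   # average wait in queue, in hours

lam, mu = 20.0, 25.0                 # 20 arrivals/hr vs. 25 served/hr
wq = mm1_wait(lam, mu)
print(round(wq * 60, 1))             # wait expressed in minutes
```

Real service systems usually need multi-server (M/M/c) or simulation models, but even this sketch shows why a small change in staffing capacity can swing customer wait dramatically.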
Scheduling of services can also be complicated when it is necessary to coordinate and schedule
more than one resource. For example, when hospitals schedule surgery, not only is the
scheduling of surgeons involved but also the scheduling of operating room facilities, support
staff, and special equipment. Along with the scheduling of classes, universities must also
schedule faculty, classrooms, labs, audiovisual and computer equipment, and students. To further
complicate matters, cancellations are also common and can add further disruption and confusion
to the scheduling process.
Instead of scheduling labor, service firms frequently try to facilitate their service operations by
scheduling demand. This is done through the use of appointment systems and reservations.
Frank and Lillian Gilbreth
Pioneers of Ergonomics and "Time and Motion"
Efficiency and productivity go together, and working efficiently has many meanings. It's not just
about working in a way that allows you to get the most done in a fixed period of time. It also
involves making sure that you don't hurt productivity.
If you work too fast, you risk making mistakes. You also risk becoming so tired, either mentally
or physically, that you have to stop working too early, which means that your total efficiency
suffers.
Today, we regularly use ergonomic principles to design work and workplace equipment. From
something as simple as placing the photocopier in a central location, to custom designing
workstations to minimize repetitive strain injuries, the principles of work efficiency are all
around us. But where did these ideas originate?
The poorly-designed, inefficient workplaces of the late 19th century led to the scientific
management movement in the early 20th century, which applied the scientific method to the
study of the workplace. Frank Gilbreth and his wife, Lillian, were supporters of this movement.
The Gilbreths pioneered the study of "time and motion" at work. They were interested in
efficiency, so they set up experiments to examine the movements that individual workers made
while doing their daily work.
Before he became a workplace researcher, Frank was a bricklayer. He noted that every worker
had his own way of laying bricks. By observing these individual methods, he determined the
most efficient way to complete the task. Frank believed that by working efficiently, both the
employer and the worker would benefit: employers would gain more productivity, and workers
would have reduced stress and fatigue. His observations eventually led to a new way of laying
bricks that more than doubled daily output.
Another of Frank's studies led to creating the role of the surgical assistant in modern operating
rooms. Instead of the surgeon finding each instrument he needed, a nurse would stand by and
hand the surgeon the appropriate tool.
Interesting Fact: The book "Cheaper by the Dozen" was written by
Frank and Lillian's children Frank Jr. and Ernestine. There were 12
children in the family, and the book (and subsequent movies)
highlighted the efficiencies that were introduced into their household
as a result of their parents' methods.
Experimental Technique
Work simplification strategies can be traced back to the work of the Gilbreths, whose methods
were quite sophisticated. For example, they weren't satisfied with simply saying that a person
"moved the hand," so they broke down this action into 17 separate units of motion. They called
each motion a "therblig," which is "Gilbreth" spelled backward (the "th" is transposed for easier
pronunciation).
They also invented the microchronometer, a clock capable of recording time to 1/2000th of a
second, to study work motion. By placing the clock in the camera's field of view while filming
workers, they could break movements down into very small units of time. Henry Gantt, a
contemporary of the Gilbreths, originated the Gantt chart, which demonstrates graphically the
various pieces of a larger task.
The Gilbreths' discoveries about workplace efficiency were not limited to the need to increase
output. They were also interested in how workers could reduce fatigue. From this industrial
psychology perspective, they advanced ideas about how best to train and develop workers.
Tactics like job rotation and finding work best suited for a worker's natural skills and abilities
developed from the Gilbreths' extensive experiments.
While the Gilbreths' methods are no longer used directly in the
modern workplace, their work remains very important: the underlying
theory of workplace efficiency is still strong, and it lives on in
modern team-effectiveness tools and in approaches such as Kaizen.
Key Points
While you may not have known the names Frank and Lillian Gilbreth before reading this article,
their contribution to the advancement of management science and modern management theory
was significant. Today, we're very familiar with the idea of workplace efficiency; no one argues
with its importance. We can thank pioneers in the management science movement, like the
Gilbreths, for this knowledge.
Financial management
statements. Ratios tell the whole story of changes in the financial condition of the
business
Ratios highlight the factors associated with successful and unsuccessful firms.
They also reveal strong firms and weak firms, overvalued and undervalued firms.
2. Comparative study required: Ratios are useful in judging the efficiency of the
business only when they are compared with past results of the business. However,
such a comparison only provides a glimpse of past performance, and forecasts for the
future may not prove correct since several other factors like market conditions,
management policies, etc. may affect the future operations.
3. Ratios alone are not adequate: Ratios are only indicators; they cannot be taken as a
final verdict on the good or bad financial position of the business. Other factors also
have to be considered.
4. Problems of price level changes: A change in price level can affect the validity of
ratios calculated for different time periods. In such a case the ratio analysis may not
clearly indicate the trend in solvency and profitability of the company. The financial
statements should therefore be adjusted, keeping in view the price level changes, if a
meaningful comparison is to be made through accounting ratios.
5. Lack of adequate standard: No fixed standard can be laid down for ideal ratios. There
are no well-accepted standards or rules of thumb for all ratios which can be accepted
as norms. This renders interpretation of the ratios difficult.
6. Limited use of single ratios: A single ratio, usually, does not convey much of a
sense. To make a better interpretation, a number of ratios have to be calculated
which is more likely to confuse the analyst than to help him in making any good decision.
7. Personal bias: Ratios are only a means of financial analysis and not an end in
themselves. Ratios have to be interpreted, and different people may interpret the
same ratio in different ways.
8. Incomparable: Not only do industries differ in their nature, but firms in the same
business also differ widely in their size, accounting procedures, etc. This makes
comparison of ratios difficult and misleading.
Giving priority to value creation, managers have now shifted from the traditional
approach to the modern approach of financial management, which focuses on wealth
maximization. This leads to a better and truer evaluation of the business. For
example, under wealth maximization, more importance is given to cash flows than
to profitability. Profit is a relative term: it can be a figure in some
currency, or a percentage, and so on. A profit of, say, $10,000 cannot be
judged as good or bad for a business until it is compared with investment, sales,
etc. Similarly, the duration of earning the profit is also important, i.e., whether it is
earned in the short term or the long term.
In wealth maximization, the major emphasis is on cash flows rather than profit. So,
to evaluate various alternatives for decision making, cash flows are taken into
consideration. For example, to measure the worth of a project, a criterion like the
present value of its cash inflows minus the present value of its cash outflows (net
present value) is used. This approach takes cash flows rather than profits into
consideration and also uses discounting techniques to find the worth of a project.
Thus, the wealth maximization approach recognizes that money has time value.
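A minimal sketch of the net present value criterion described above; the cash flows and the 10% discount rate are hypothetical figures, not from the text:

```python
# Net present value: present value of cash inflows minus present value of
# cash outflows, so money's time value is respected. Figures are hypothetical.

def npv(rate, cash_flows):
    """Discount a series of cash flows (year 0, 1, 2, ...) to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# A 10,000 outflow today followed by 4,000 inflows for four years, at 10%:
project = [-10_000, 4_000, 4_000, 4_000, 4_000]
print(round(npv(0.10, project), 2))   # positive, so the project adds wealth
```

Note that the undiscounted inflows total 16,000, but discounting shrinks the later ones; that is exactly the time-value point made above.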
An obvious question that arises now is how we can measure wealth. A basic
principle is that wealth maximization should ultimately be reflected in the
increased net worth or value of the business. To measure this, the value of the
business is said to be a function of two factors, earnings per share and the
capitalization rate, and it can be measured by adopting the following relation:
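The relation referred to is conventionally stated as value = earnings per share ÷ capitalization rate; a minimal sketch with hypothetical figures:

```python
# Conventional statement of the relation: value as a function of earnings per
# share (EPS) and the capitalization rate. Figures are hypothetical.

def value_per_share(eps, capitalization_rate):
    return eps / capitalization_rate

# EPS of 12 capitalized at 10%:
print(value_per_share(12.0, 0.10))   # a value of 120 per share
```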
decisions when the analysis of financial statements is done for the management of
the firm. The performance of competitors within the industry, and the viability
of the business's future, can be evaluated through financial statement analysis.
2. Viability of a project can be found out through a financial statement analysis which
can be performed by financial analysts employed by the firm. Projects that would
bring in the maximum amount of revenues over the course of time over similar
projects are recommended by financial analysts to the management. Expected returns
from projects are provided by financial analysts to the management. Analysts
employed by the business can also give the management suggestions on whether to
issue new stocks or borrow money to fund new projects. Financial analysts will
recommend whether a new project should be undertaken or invest the money
somewhere else, essentially performing capital budgeting decisions.
3. Financial Institutions will carry out a financial statement analysis of a business to see
how strong its fundamentals are, and then use their findings to either make good
investments for themselves, or pass on ther findings to their clients. Large
investment corporations have their own in house financial analysts who advice to
their employers on what stocks might be a good buy, these recommendations are
usually private and only available within the company. A corporations stock price
can be affected based on a financial analyst recommendations as these
recommendations are used by stockholders to determine whether it is a good
investment. If a financial analyst after evaluating a companys financial statements
finds that the company isnt performing well, he might suggest owners to sell
the stock if they already own it. If such a suggestion were to be made public, the price
of that businesss share could see its value drop moderately.
***********************
What is trading on equity? How can it prove to be a double-edged sword?
In their book Fundamentals of Financial Management, 8th edition, South-Western,
1998, authors Eugene F. Brigham and Joel F. Houston include a chapter
entitled Risk Analysis and the Optimal Capital Budget. With examples from
industry, they illustrate the pitfalls of using uncertain single-point estimates for
the cash flows associated with a project. One recommendation in the chapter is
to model the uncertainty in all of the quantities being estimated and to use
Monte Carlo simulation to produce a probability distribution for the NPV (or the
IRR) of the project. Additionally, the analyst can produce sensitivity analyses to
determine the most critical uncertainties in the estimation. The additional
information that these statistical techniques provide can aid the capital budget
decision-makers and can help them avoid costly mistakes.
Risk Analysis in Capital Budgeting
Introduction
In discussing the capital budgeting techniques, we have so far assumed that the
proposed investment projects do not involve any risk. This assumption was
made simply to facilitate the understanding of the capital budgeting techniques.
In a real world situation, however, the firm in general and its investment projects
in particular are exposed to different types of risk. What is risk? How can risk be
measured and analyzed in the investment decisions?
Nature of risk
Risk exists because of the inability of the decision maker to make perfect
forecasts. Forecasts cannot be made with perfection or certainty since the future
events on which they depend are uncertain. An investment is not risky if we can
specify a unique sequence of cash flows for it. But the whole trouble is that cash
flows cannot be forecast accurately, and alternative sequences of cash flows can
occur depending on future events. Thus, risk arises in investment evaluation
because we cannot anticipate the occurrence of possible future events with
certainty and, consequently, cannot make a correct prediction about the cash
flow sequence. To illustrate, let us suppose that a firm is considering a proposal
to commit its funds in a machine, which will help to produce a new product. The
demand for this product may be very sensitive to the general economic
conditions. It may be very high under favorable economic conditions and very
low under unfavorable economic conditions. Thus, the investment would be
profitable in the former situation and unprofitable in the latter. But since it is
quite difficult to predict the future state of economic conditions, uncertainty
arises about the cash flows associated with the investment.
A large number of events influence forecasts. These events can be grouped in
different ways. However, no particular grouping of events will be useful for all
purposes. We may, for example, consider three broad categories of the events
influencing the investment forecasts.
General factors
This category includes events which influence the general level of business activity.
The level of business activity might be affected by such events as internal and
external economic and political situations, monetary and fiscal policies, social
conditions etc.
Industry factors
This category of events may affect all companies in an industry. For example,
companies in an industry would be affected by the industrial relations in the
industry, by innovations, by changes in material costs, etc.
Company factors
This category of events may affect only one company. A change in management,
a strike in the company, or a natural disaster such as a flood or fire may directly
affect a particular company.
Risk Analysis in Capital Budgeting
Capital budgeting is used to ascertain the requirements of the long-term
investments of a company.
Profitability index
Equivalent annuity
Besides these methods, other methods that are used include Return on
Investment (ROI), Accounting Rate of Return (ARR), Discounted Payback Period
and Payback Period.
The different types of risks that are faced by entrepreneurs regarding capital
budgeting are the following:
Corporate risk
International risk
Stand-alone risk
Competitive risk
Market risk
Sensitivity Analysis:
This is also known as "what if" analysis. Because of the uncertainty of the
future, if an entrepreneur wants to know the feasibility of a project when a
variable quantity, for example investment or sales, changes from its
anticipated value, sensitivity analysis is a useful method. The result is calculated
in terms of NPV, or net present value.
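The "what if" idea can be sketched by recomputing NPV while one variable, here annual sales, deviates from its forecast; all figures (investment, margin, rate) are hypothetical:

```python
# Sensitivity ("what if") analysis: recompute NPV while one variable, here
# annual sales, deviates from its anticipated value. All figures are hypothetical.

def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def project_npv(annual_sales, rate=0.10, life=5, investment=50_000, margin=0.30):
    inflow = annual_sales * margin        # contribution from each year's sales
    return npv(rate, [-investment] + [inflow] * life)

for change in (-0.10, 0.0, +0.10):        # sales 10% below / at / above forecast
    print(f"sales {change:+.0%}: NPV = {project_npv(50_000 * (1 + change)):,.0f}")
```

The wider the swing in NPV for a given swing in the variable, the more critical that variable is to the project.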
Scenario Analysis:
In the case of scenario analysis, the focus is on the deviation of a number of
interconnected variables. It is different from sensitivity analysis, which usually
concentrates on the change in one particular variable at a specific point of time.
Break Even Analysis:
The Break Even Analysis allows a company to determine the minimum
production and sales amounts for a project to avoid losing money. The lowest
possible quantity at which no loss occurs is called the break-even point. The
break-even point can be expressed in either financial or accounting terms.
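A minimal sketch of the break-even computation, with hypothetical price and cost figures (the break-even point is fixed costs divided by the contribution per unit):

```python
# Break-even point: the lowest quantity at which no loss occurs, expressed
# in units and in sales value. Price and cost figures are hypothetical.

def break_even_units(fixed_costs, price, variable_cost):
    return fixed_costs / (price - variable_cost)   # fixed costs / contribution per unit

fixed_costs, price, variable_cost = 60_000, 25.0, 15.0
units = break_even_units(fixed_costs, price, variable_cost)
print(units, units * price)   # 6000.0 units, a sales value of 150000.0
```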
Hillier Model:
In particular situations, the anticipated NPV and the standard deviation of NPV
can be obtained through analytical derivation. This was first shown by F.S.
Hillier, for situations where the correlation between cash flows is either
complete or nonexistent.
Decision Tree Analysis: The principal steps of decision tree analysis are the
definition of the decision tree and the assessment of the alternatives.
Corporate Risk Analysis: Corporate risk analysis focuses on the analysis of risk
that may influence the project in terms of the entire cash flow of the firm. The
corporate risk of a project refers to its share of the total risk of a company.
Selection of project under risk: This involves procedures such as payback period
requirement, risk adjusted discount rate, judgmental evaluation and certainty
equivalent method.
Practical Risk Analysis: The techniques involved include the Acceptable Overall
Certainty Index, Margin of Safety in Cost Figures, Conservative Revenue
Estimation, Flexible Investment Yardsticks and Judgment on Three Point
Estimates.
Chapter 4: Additional Considerations in Capital Budgeting Analysis
Whenever we analyze a capital project, we must consider unique factors. A
discussion of all of these factors is beyond the scope of this course. However,
three common factors to consider are:
Adjusting for Risk
In our previous example (Example 6), we used the cost of capital for discounting
cash flows. Our example involved the replacement of equipment and carried a
low level of risk since the expected outcome was reasonably certain. Suppose
we have a project involving a new product line. Would we still use our cost of
capital to discount these cash flows? The answer is no since this project could
have a much wider variation in outcomes. We can adjust for higher levels of risk
by increasing the discount rate. A higher discount rate reflects a higher rate of
return that we require whenever we have higher levels of risk.
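The effect of raising the discount rate for a riskier project can be sketched as follows; the rates and cash flows are hypothetical:

```python
# The same hypothetical cash flows discounted at the cost of capital versus a
# higher, risk-adjusted rate: raising the rate lowers the NPV, reflecting the
# higher return required of a riskier project. All figures are hypothetical.

def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

new_product_line = [-100_000, 35_000, 35_000, 35_000, 35_000]

npv_at_cost_of_capital = npv(0.10, new_product_line)   # low-risk benchmark rate
npv_risk_adjusted      = npv(0.16, new_product_line)   # premium added for risk
print(round(npv_at_cost_of_capital), round(npv_risk_adjusted))
```

Here a project that looks acceptable at the cost of capital fails once the discount rate carries a risk premium, which is the point of the adjustment.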
Another way to adjust for risk is to understand the impact of risk on outcomes.
Sensitivity Analysis and Simulation can be used to measure how changes to a
project affect the outcome. Sensitivity analysis is used to determine the change
in Net Present Value given a change in a specific variable, such as estimated
project revenues. Simulation allows us to simulate the results of a project for a
given distribution of variables. Both sensitivity analysis and simulation require a
definition of all relevant variables associated with the project. It should be noted
that sensitivity analysis is much easier to implement since sophisticated
computer models are usually required for simulation.
International Projects
Capital investments in other countries can involve additional risks. Whenever we
invest in a foreign project, we want to focus on the values that are added (or
subtracted) to the Parent Company. This makes us consider all relevant risks of
the project, such as exchange rate risk, political risk, hyper-inflation, etc. For
example, the discounted cash flows of the project are the discounted cash flows
of the project to the foreign subsidiary converted to the currency of the home
country of the Parent Company at the current exchange rate. This forces us to
take into account exchange rate risks and their impact on the Parent Company.
Post Analysis
One of the most important steps in capital budgeting analysis is to follow-up and
compare your estimates to actual results. This post analysis or review can help
identify bias and errors within the overall process. A formal tracking system of
capital projects also keeps everyone honest. For example, if you were to
announce to everyone that actual results will be tracked during the life of the
project, you may find that people who submit estimates will be more careful. The
purpose of post analysis and tracking is to collect information that will lead to
improvements within the capital budgeting process.
Course Summary
The long-term investments we make today determine the value we will have
tomorrow. Therefore, capital budgeting analysis is critical to creating value
within financial management. And the only certainty within capital budgeting is
uncertainty. Therefore, one of the biggest challenges in capital budgeting is to
manage uncertainty. We deal with uncertainty through a three-stage process:
Sensitivity Analysis
Sensitivity analysis is a way of analyzing the change in a project's NPV (or IRR) for
a given change in one of the variables. It indicates how sensitive a project's NPV
(or IRR) is to changes in particular variables. The more sensitive the NPV, the
more critical the variable. The following three steps are involved in the use of
sensitivity analysis:
Simulation Analysis
Sensitivity and scenario analyses are quite useful for understanding the uncertainty
of investment projects. But both approaches suffer from certain weaknesses.
As we have discussed, they do not consider the interactions between variables,
and they do not reflect the probability of changes in variables.
The Monte Carlo simulation, or simply simulation analysis, considers the
interactions among variables and the probability of changes in variables. It
gives the probability distribution of NPV. Simulation analysis is an
extension of scenario analysis: a computer generates a very large number of
scenarios according to the probability distributions of the variables. Simulation
analysis involves the following steps:
First, you should identify variables that influence cash inflows and outflows. For
example, when a firm introduces a new product in the market these variables
are initial investment, market size, market growth, market share, price, variable
costs, fixed costs, product life cycle, and terminal value.
Second, specify the formulas that relate the variables. For example, revenue
depends on sales volume and price; sales volume is given by market size,
market share, and market growth. Similarly, operating expenses depend on
production, sales, and variable and fixed costs.
Third, indicate the probability distribution for each variable. Some variables will
have more uncertainty than others; for example, it is quite difficult to predict
market growth with confidence.
Fourth, develop a computer program that randomly selects one value from the
probability distribution of each variable and uses these values to calculate the
project's NPV. The computer generates a large number of such scenarios,
calculates NPVs, and stores them. The stored values are printed as a probability
distribution of the project's NPVs, along with the expected NPV and its standard
deviation. The risk-free rate should be used as the discount rate to compute the
project's NPV, since the discount rate should reflect only the time value of
money.
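The four steps above can be sketched as follows; the variables, distributions, and figures are hypothetical illustrations, not the text's example:

```python
# Sketch of the four simulation steps: identify the variables, relate them
# with formulas, assign each a probability distribution, then sample many
# scenarios and collect the distribution of NPV. All figures are hypothetical.
import random
import statistics

def npv(rate, cash_flows):
    """Discount cash flows (year 0, 1, 2, ...) at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

random.seed(42)
RISK_FREE = 0.06                 # discount rate reflects only time value of money
INVESTMENT, FIXED_COSTS, LIFE = 80_000, 15_000, 5

npvs = []
for _ in range(10_000):
    market_size  = random.gauss(100_000, 15_000)   # units per year
    market_share = random.uniform(0.08, 0.12)
    price        = random.gauss(10.0, 1.0)
    variable_c   = random.gauss(6.0, 0.5)
    volume = market_size * market_share            # formulas relating variables
    annual = volume * (price - variable_c) - FIXED_COSTS
    npvs.append(npv(RISK_FREE, [-INVESTMENT] + [annual] * LIFE))

print(f"expected NPV = {statistics.mean(npvs):,.0f}, "
      f"std dev = {statistics.stdev(npvs):,.0f}")
```

The output is a distribution, not a verdict: as noted below, the analyst must still judge whether the project's risk-return profile is acceptable.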
Simulation analysis is a very useful technique for risk analysis. Unfortunately, its
practical use is limited because of a number of shortcomings. First, the model
becomes quite complex to use because the variables are interrelated with each
other, and each variable depends on its value in previous periods as well.
Identifying all possible relationships and estimating them is time-consuming as
well as expensive.
Second, the model helps in generating a probability distribution of the project's
NPVs, but it does not indicate whether or not the project should be accepted.
Third, simulation analysis, like sensitivity or scenario analysis, considers the risk
of any project in isolation from other projects. We know that if we consider the
portfolio of projects, the unsystematic risk can be diversified. A risky project may
have a negative correlation with the firm's other projects, and therefore
accepting the project may reduce the overall risk of the firm.
INTERNATIONAL FINANCE
Different types of transactions in the Foreign Exchange
Market
Spot and Forward Exchanges
Spot Market:
The term spot exchange refers to the class of foreign exchange transactions which require the
immediate delivery or exchange of currencies on the spot. In practice, the settlement takes
place within two days in most markets. The rate of exchange effective for the spot transaction
is known as the spot rate and the market for such transactions is known as the spot market.
Forward Market:
A forward transaction is an agreement between two parties, requiring the delivery at some
specified future date of a specified amount of foreign currency by one of the parties, against
payment in domestic currency by the other party, at the price agreed upon in the contract. The
rate of exchange applicable to the forward contract is called the forward exchange rate and
the market for forward transactions is known as the forward market.
The foreign exchange regulations of various countries generally regulate the forward
exchange transactions with a view to curbing speculation in the foreign exchanges market. In
India, for example, commercial banks are permitted to offer forward cover only with respect
to genuine export and import transactions. Forward exchange facilities, obviously, are of
immense help to exporters and importers, as they can cover the risks arising out of exchange
rate fluctuations by entering into an appropriate forward exchange contract. With reference to
its relationship with the spot rate, the forward rate may be at par, at a discount, or at a
premium. If the forward exchange rate quoted is exactly equivalent to the spot rate at the time
of making the contract, the forward exchange rate is said to be at par.
The forward rate for a currency, say the dollar, is said to be at a premium with respect to the
spot rate when one dollar buys more units of another currency, say the rupee, in the forward
market than in the spot market. The premium is usually expressed as a percentage deviation
from the spot rate on a per annum basis.
The forward rate for a currency, say the dollar, is said to be at discount with respect to the
spot rate when one dollar buys fewer rupees in the forward than in the spot market. The
discount is also usually expressed as a percentage deviation from the spot rate on a per
annum basis.
The forward exchange rate is determined mostly by the demand for and supply of forward
exchange. Naturally, when the demand for forward exchange exceeds its supply, the forward
rate will be quoted at a premium and conversely, when the supply of forward exchange
exceeds the demand for it, the rate will be quoted at discount. When the supply is equivalent
to the demand for forward exchange, the forward rate will tend to be at par.
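The premium or discount described above, expressed as a per annum percentage deviation from the spot rate, can be sketched as follows (the quotes are hypothetical):

```python
# Forward premium (positive) or discount (negative) as a percentage deviation
# from the spot rate on a per annum basis. The quotes are hypothetical.

def forward_premium_pa(spot, forward, months):
    """Zero means the forward rate is at par with the spot rate."""
    return (forward - spot) / spot * (12 / months) * 100

# Dollar at Rs 75.00 spot and Rs 76.50 for three-month delivery:
print(round(forward_premium_pa(75.00, 76.50, 3), 2))   # 8.0 (% per annum premium)
```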
Futures
While a futures contract is similar to a forward contract, there are several differences between
them. While a forward contract is tailor-made for the client by his international bank, a futures
contract has standardized features: the contract size and maturity dates are standardized.
Futures can be traded only on an organized exchange, and they are traded competitively.
Margins are not required in respect of a forward contract, but margins are required of all
participants in the futures market; an initial margin must be deposited into a collateral account
to establish a futures position.
Options
While a forward or futures contract protects the purchaser of the contract from adverse
exchange rate movements, it eliminates the possibility of gaining a windfall profit from
favorable exchange rate movements. An option is a contract or financial instrument that gives
the holder the right, but not the obligation, to sell or buy a given quantity of an asset at a
specified price at a specified future date. An option to buy the underlying asset is known as a
call option, and an option to sell the underlying asset is known as a put option. Buying or
selling the underlying asset via the option is known as exercising the option. The stated price
at which the asset may be bought or sold is known as the exercise or strike price. The buyer of
an option is known as the long, and the seller of an option is known as the writer of the
option, or the short. The price paid for the option is known as the premium.
Types of options: With reference to their exercise characteristics, there are two types of
options, American and European. A European option can be exercised only at the maturity or
expiration date of the contract, whereas an American option can be exercised at any time
during the life of the contract.
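The holder's payoff at expiry for calls and puts can be sketched as follows; the strike, premium, and prices are hypothetical:

```python
# Payoff at expiry for the option holder (the long), net of the premium paid.
# A call pays when the asset ends above the strike, a put when it ends below.
# Strike, premium, and prices are hypothetical.

def call_payoff(spot_at_expiry, strike, premium):
    return max(spot_at_expiry - strike, 0.0) - premium

def put_payoff(spot_at_expiry, strike, premium):
    return max(strike - spot_at_expiry, 0.0) - premium

# A call with strike 100 bought for a premium of 3:
print(call_payoff(110.0, 100.0, 3.0))   # 7.0 (exercised)
print(call_payoff(95.0, 100.0, 3.0))    # -3.0 (lapses; loss capped at premium)
```

The asymmetry is the point made above: the holder keeps the upside while the downside is limited to the premium paid.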
Swap operation
Commercial banks that conduct forward exchange business may resort to a swap operation
to adjust their fund position. The term swap means the simultaneous sale of spot currency for
the forward purchase of the same currency, or the purchase of spot for the forward sale of the
same currency; the spot is swapped against the forward. Operations consisting of a
simultaneous sale or purchase of spot currency accompanied by a purchase or sale,
respectively, of the same currency for forward delivery are technically known as swaps or
double deals.
Arbitrage
Arbitrage is the simultaneous buying and selling of foreign currencies with the intention of
making profits from the difference between the exchange rates prevailing at the same time in
different markets.
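A minimal sketch of the idea, assuming two markets momentarily quote different rates for the same currency (the quotes are hypothetical):

```python
# Arbitrage sketch: simultaneously buying where a currency is cheap and
# selling where it is dear. The two quotes are hypothetical.

RATE_MARKET_A = 75.00   # rupees per dollar in market A
RATE_MARKET_B = 75.60   # rupees per dollar in market B

rupees_in = 1_000_000
dollars = rupees_in / RATE_MARKET_A      # buy dollars cheap in market A
rupees_out = dollars * RATE_MARKET_B     # sell them dear in market B
profit = rupees_out - rupees_in
print(round(profit, 2))                  # riskless profit of about Rs 8,000
```

In practice such gaps are closed quickly, since the arbitrage buying and selling itself pushes the two quotes back together.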
[Figure: Foreign exchange exposure comprises transaction exposure, translation (accounting)
exposure, and operating/economic exposure; economic exposure (which may be positive or
negative) in turn comprises asset exposure and operating exposure. Techniques for managing
operating exposure include hedging through the invoice currency, selecting low-cost
production sites, a flexible sourcing policy, and diversification of the market.]
Hedging via lead and lag: Another operational technique the firm
can use to reduce transaction exposure is leading and lagging
foreign currency receipts and payments.
Lead means to pay or collect early, whereas
lag means to pay or collect late.
The firm would like to lead soft currency receivables and lag hard
currency receivables to avoid the loss from depreciation of the soft
currency and benefit from the appreciation of the hard currency. For the
same reason, the firm will attempt to lead the hard currency payables
and lag soft currency payables. To the extent that the firm can effectively
implement the Lead/Lag strategy, the transaction exposure the firm faces
can be reduced.
loss. The position will be reversed if the currency rate for the foreign currency is lower than its
historic rate of exchange. The translation gain/loss is shown as a separate component of the
shareholders' equity in the balance sheet. It does not affect the current earnings of the
company.
3. Economic Exposure
Economic exposure can be defined as the extent to which the value of the firm would be
affected by unanticipated changes in exchange rates. An economic exposure is more a
managerial concept than an accounting concept. A company can have an economic exposure
to say Yen: Rupee rates even if it does not have any transaction or translation exposure in the
Japanese currency.
This would be the case, for example, when the company's competitors are using Japanese
imports. If the Yen weakens, the company loses its competitiveness (the reverse is also
possible). The company's competitor uses the cheap imports and can have a competitive edge
over the company in terms of cost cutting. Therefore the company is exposed to the Japanese
Yen in an indirect way.
In simple words, economic exposure to an exchange rate is the risk that a change in the rate
affects the company's competitive position in the market and hence, indirectly, the bottom
line. Broadly speaking, economic exposure affects profitability over a longer time span
than transaction and even translation exposure. Under the Indian exchange control, while
translation and transaction exposures can be hedged, economic exposure cannot be hedged.
Economic exposure consists of mainly two types of exposures.
Asset exposure
Operating exposure
Exposure to currency risk can be properly measured by the sensitivities of (1) the future
home currency values of the firm's assets (and liabilities) and (2) the firm's operating cash
flows to random changes in exchange rates.
Asset exposure: Let us discuss the case of asset exposure. For convenience, assume that
dollar inflation is non random. Then, from the perspective of the U.S. firm that owns an asset
in Britain, the exposure can be measured by the coefficient b in regressing the dollar value
P of the British asset on the dollar/pound exchange rate S.
P = a + b*S + e

where a is the regression constant and e is the random error term with mean zero, and
P = S*P*, where P* is the local currency (pound) price of the asset. It is obvious from the
above equation that the regression coefficient b measures the sensitivity of the dollar value of
the asset (P) to the exchange rate (S). If the regression coefficient is zero, the dollar value of
the asset is independent of exchange rate movements, implying no exposure. On the basis of
the above analysis, one can say that exposure is the regression coefficient. Statistically, the
exposure coefficient, b, is defined as follows:
b = Cov (P,S)/ Var (S)
Where Cov (P,S) is the covariance between the dollar value of the asset and the exchange
rate, and Var (S) is the variance of the exchange rate.
Next, we show how to apply the exposure measurement technique using numerical examples.
Suppose that a U.S. firm has an asset in Britain whose local currency price is random. For
simplicity, let us assume that there are three states of the world, with each state equally likely
to occur. The future local currency price of this British asset as well as the future exchange
rate will be determined, depending on the realized state of the world.
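For such a three-state setting, the exposure coefficient can be computed directly from the definition b = Cov(P, S)/Var(S); the exchange rates and pound prices below are hypothetical, not the text's figures:

```python
# Exposure coefficient b = Cov(P, S) / Var(S) for three equally likely states
# of the world. The exchange rates and pound prices are hypothetical.

exchange_rate = [1.40, 1.50, 1.60]        # S: dollar/pound rate in each state
pound_price   = [1000.0, 1100.0, 1200.0]  # P*: local currency price of the asset
dollar_value  = [s * p for s, p in zip(exchange_rate, pound_price)]  # P = S * P*

n = len(exchange_rate)                    # states are equally likely
mean_s = sum(exchange_rate) / n
mean_p = sum(dollar_value) / n
cov_ps = sum((p - mean_p) * (s - mean_s)
             for p, s in zip(dollar_value, exchange_rate)) / n
var_s  = sum((s - mean_s) ** 2 for s in exchange_rate) / n

b = cov_ps / var_s                        # exposure, measured in pounds
print(round(b, 2))
```

A b of zero would mean the dollar value of the asset is independent of the exchange rate, i.e., no exposure, matching the regression interpretation above.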
Operating exposure: Operating exposure can be defined as the extent to which the firm's
operating cash flows would be affected by random changes in exchange rates. Operating
exposure may affect the firm in two different ways, viz., the competitive effect and the
conversion effect. An adverse exchange rate change increases the cost of imports, which
makes the firm's product costly and thus the firm's position less competitive; this is the
competitive effect. An adverse exchange rate change may also reduce the value of receivables
to the exporting firm, which is called the conversion effect.
Some strategies to manage operating exposure
Before we look at how the CAPM can be used to price a portfolio (or an
investment), it is important for you to understand that it is after all a theoretical
model, which means that it is based on an idealistic investment environment
different from the real world. Despite its simplistic assumptions about the
investment environment, the CAPM still serves as a valuable tool in
understanding the relationship between the risk and return.
The following are the assumptions of the CAPM. Briefly explain what each
assumption means.
(a) Investors are price takers
(b) Investors have identical single-period holding horizons
(c) Investors have access to all investments and have access to unlimited
borrowing and lending opportunities at the risk-free rate
(d) The financial markets are frictionless
(e) Investors are rational mean-variance optimizers
(f) Investors have homogenous expectations
In Topic 2, we know that when you have access to all the different investments
available in the financial market, the best place you can be is on the capital
market line (CML). Portfolios that are located on this line will provide you the
best (or optimal) combination of risk and return. As a result, the CML is a good
measure for the relationship between risk and return. Just in case you forgot, the
CML is represented by the following formula:
E(rp) = rrf + {[E(rm) - rrf] / σm} × σp
What is the similarity and difference between the CAPM and the CML in
measuring the relationship between risk and return? We need to first re-arrange
the formula (which is presented below) for the CML before we will address the
question.
E(rp) = rrf + [E(rm) - rrf] × (σp / σm)
Do you begin to see the resemblance between the CML and the CAPM?
According to the two formulae, the return of the portfolio can be broken down
into two components: (i) the guaranteed risk-free rate and (ii) the compensation
for taking on risk. In addition, the compensation is determined by two things: (i)
a relative measurement of the portfolio's risk and (ii) the market risk premium
[i.e. E(rm) - rrf].
What about the differences between the CML and the CAPM? Can you tell what
are the two differences between the CML and CAPM?
2. The CAPM, Beta (i.e. β), and the SML
Now that we know more about the similarities and differences between the CML
and the CAPM, we need to go back and look at some of the details related to the
CAPM.
Even though the formula presented earlier for the CAPM is for a portfolio, the
formula can easily be modified to determine the return of a single investment as
follows:
E(ri) = rrf + βi[E(rm) - rrf]
Since the risk-free rate and the market return should be the same for every
investment in the financial market, the only thing that is different from
investment to investment is the beta of the investment. As a result, we can
claim that the only driving force behind the determination of an investment's
return is its beta.
What is the beta? It represents an investment's non-diversifiable risk (and not its
total risk) relative to the market risk. In other words, the beta of an investment
measures the co-movement of the investment's expected return with the
market's expected return. The formula of an investment's beta is as follows:
βi = ρim × σi / σm
where
ρim = correlation between investment i's return and the market's return
σi = standard deviation of investment i's return
σm = standard deviation of the market's return
The beta of a portfolio is simply the weighted average of the betas of its component investments: βp = Σ wi βi
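As a small sketch, the two formulas above can be chained together: beta from the correlation and the two standard deviations, then the CAPM expected return. All input numbers below are assumptions for illustration, not data from the text.

```python
# Sketch: beta from correlation and volatilities, then the CAPM return.
# All input numbers are assumed for illustration.
def beta_from_stats(corr_im, sigma_i, sigma_m):
    """beta_i = rho_im * sigma_i / sigma_m."""
    return corr_im * sigma_i / sigma_m

def capm_return(r_rf, beta, r_m):
    """E(r_i) = r_rf + beta_i * (E(r_m) - r_rf)."""
    return r_rf + beta * (r_m - r_rf)

# Assumed: correlation 0.8, stock volatility 25%, market volatility 20%,
# risk-free rate 3%, expected market return 10%.
beta = beta_from_stats(0.8, 0.25, 0.20)   # = 1.0
expected = capm_return(0.03, beta, 0.10)  # approx 0.10
print(beta, expected)
```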
Just as in the case with the capital allocation line, we can also represent the
CAPM in a graphical manner. The straight line that represents the relationship
between risk and return (according to the CAPM) is known as the security market
line (SML).
[Figure: the security market line (SML), plotting expected return E(ri) against beta; the line starts at the risk-free rate rrf and passes through the market portfolio at beta = 1.0, where E(ri) = E(rm).]
The security market line will help you determine if an investment is correctly
priced. In other words, it helps you determine if the investment is offering a
return that is appropriate for its level of risk (as measured by the beta). If an
investment's return falls on the SML, the investment is considered to be
correctly priced because the expected return of the investment matches the one
according to the CAPM (based on its beta). However, if the expected return of
the investment differs from the one predicted by the CAPM, the investment
is considered to be either underpriced or overpriced. The difference between the
investment's actual expected return and its fair return (as dictated by the CAPM)
is known as the investment's alpha (i.e. α).
Let's analyze the two investments A and B as depicted in the graph above.
Based on your analysis, what can you say about the two investments?
(a) Investment A
(b) Investment B
Estimating the Beta of an Investment Using the Index Model
Since the driving force behind the CAPM in determining the return of an
investment is its beta, it is important that you know the process commonly
adopted to estimate the beta of an investment. Before we can proceed with the
discussion on how to estimate beta, you need to first understand that we cannot
implement the CAPM in the real world as it is because of two main issues. First,
the CAPM assumes that the market portfolio (which includes all investments in
the financial market) is available to all investors. Second, it focuses on the
expected return of an investment.
Index model
To apply the CAPM in the real world, we need to use the index model, which
addresses the above two issues as follows:
(a) The index model uses a proxy such as a market index (e.g. S&P 500) to
represent a more relevant market portfolio (and the market risk).
(b) The index model uses realized returns (rather than expected returns,
which are not easily observable).
If we are to estimate the beta of an investment using CAPM, we will need to
establish the following regression model, which is based on the realized excess
returns of the investment in relation to the realized excess returns of the
market:
E(ri) - rrf = αi + βi[E(rm) - rrf]
However, since we are using the index model (i.e. using realized returns), the
regression model will look as follows:
ri - rrf = αi + βi[rm - rrf]
Based on the graph above, can you tell if the beta will be positive or negative?
One thing that is crucial to remember is that because of the setup of the
regression model, the excess returns of the investment have to be on the y-axis
and the excess returns of the market index have to be on the x-axis.
Once you have the excess returns of the investment and the market index
plotted as above, you want to find a straight line that best fit the data as
presented in the graph below:
What does it mean to have a straight line that best fits the data points?
The straight line that best fits the data points is known as the security
characteristic line (SCL). Once again, a straight line is determined by its
y-intercept and its slope. How do you determine the y-intercept and the slope of
the SCL? You can do so by performing a regression analysis using any statistical
package or Microsoft Excel.
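A minimal sketch of that regression, computing the slope (beta) and intercept (alpha) of the SCL by hand with ordinary least squares. The monthly excess returns below are made-up illustrative data, not figures from the text.

```python
# Sketch: estimating beta with the index model via ordinary least squares.
# The monthly excess returns below are assumed illustrative data.
stock_excess  = [0.02, -0.01, 0.04, 0.03, -0.02, 0.05]
market_excess = [0.01, -0.02, 0.03, 0.02, -0.01, 0.04]

n = len(market_excess)
mean_x = sum(market_excess) / n
mean_y = sum(stock_excess) / n

# Slope (beta) = Cov(x, y) / Var(x); intercept (alpha) = mean_y - beta * mean_x
cov_xy = sum((x - mean_x) * (y - mean_y)
             for x, y in zip(market_excess, stock_excess)) / n
var_x = sum((x - mean_x) ** 2 for x in market_excess) / n

beta = cov_xy / var_x
alpha = mean_y - beta * mean_x
print(f"beta = {beta:.3f}, alpha = {alpha:.4f}")
```

A statistical package or Excel's SLOPE and INTERCEPT functions would produce the same two numbers from the same data.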
the choice of the representation for the market portfolio will affect an
investor's investment decisions.
(c) It has been proven empirically that the beta of an investment is unstable
over time. In other words, the value of the beta of an investment changes
over time. This could be due to changes in the company's management,
its financing policy, etc. In addition, the estimates for the beta of a
particular investment vary among analysts and publications for several
reasons:
(i) The proxy for the market can be different among analysts and
publications. For example, one analyst might be using the Value
Line index (which contains 1700 stocks), while another analyst
might be using the S&P 500 index.
(ii) The time period used in estimating the beta of a stock can be
different among analysts and publications. For example, the beta
of an investment estimated using 5 years of return will differ from
the one estimated using 10 years of return.
(iii)The intervals of the measurement of the returns will also affect the
estimates of the betas. For example, a beta estimated with weekly
returns will differ from the one estimated with monthly returns.
The APT is based on the concept of arbitrage (or law of one price), which states
that any two identical investments cannot be sold at a different price. In other
words, the theory states that market forces will adjust to eliminate any arbitrage
opportunities, where a zero-investment portfolio can be created to yield a risk-free profit.
The key thing you need to understand is that, unlike the CAPM, the APT does not
assume that the market risk is the only factor that influences the return of a
portfolio. The APT recognizes that several other factors (or risks) can influence
the return of a portfolio.
The APT preserves the linear relationship between risk and return of the CAPM
but abandons the single measure of risk by the beta of the portfolio. The APT
model is a multiple factor model, which uses factors such as the inflation rate,
the growth rate of the economy, the slope of the yield curve, etc. in addition to
the beta of the portfolio in determining the return of the portfolio. Keep in mind
that just as in the case with the CAPM, the APT can also be modified to
determine the return of an individual investment. The formula of the APT can be
presented as follows:
E(rj) = rrf + bj1[RP1] + bj2[RP2] + ... + bjn[RPn]
where RPk denotes the risk premium associated with factor k.
What does all this mean to an investor like you? Should you use the CAPM or
the APT? The key thing you need to remember is that neither of the theories
dominates the other one. The APT is more general because it does not require as
many assumptions as the CAPM. However, the CAPM is more general in the sense that it
applies to all individual investments without reservation (whereas the APT works
better with well-diversified portfolios).
The underlying return-generating process is the factor model:
rj = aj + bj1F1 + bj2F2 + ... + bjnFn + εj
where
bjk is the sensitivity of the asset to factor k, also called the factor loading,
and εj is the risky asset's idiosyncratic random shock with mean zero.
That is, the uncertain return of an asset j is a linear relationship among the n factors.
Additionally, every factor is also considered to be a random variable with mean zero.
Note that there are some assumptions and requirements that have to be fulfilled for the latter
to be correct:
There must be perfect competition in the market, and the total number of factors may never
surpass the total number of assets (in order to avoid the problem of matrix singularity),
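As a sketch, the APT pricing relation is just a sum of factor exposures times factor risk premia added to the risk-free rate. The two factors, loadings, and premia below are assumptions for illustration only.

```python
# Sketch: APT expected return as a linear function of factor risk premia.
# Factor choices, loadings and premia are assumptions for illustration.
def apt_expected_return(r_rf, loadings, premia):
    """E(r_j) = r_rf + sum over k of b_jk * RP_k (APT pricing relation)."""
    return r_rf + sum(b * rp for b, rp in zip(loadings, premia))

# Assumed two-factor model: inflation surprise and growth surprise.
loadings = [0.5, 1.2]    # b_j1, b_j2 for asset j
premia   = [0.02, 0.03]  # risk premium per unit of each factor exposure
r = apt_expected_return(0.04, loadings, premia)
print(f"{r:.3f}")  # 0.04 + 0.5*0.02 + 1.2*0.03 = 0.086
```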
Arbitrage is the practice of taking advantage of a state of imbalance between two (or possibly
more) markets and thereby making a risk free profit; see Rational pricing.
Arbitrage in expectations
The APT describes the mechanism whereby arbitrage by investors will bring an asset which
is mispriced, according to the APT model, back into line with its expected price. Note that
under true arbitrage, the investor locks in a guaranteed payoff, whereas under APT arbitrage
as described below, the investor locks in a positive expected payoff. The APT thus assumes
"arbitrage in expectations" - i.e. that arbitrage by investors will bring asset prices back into
line with the returns expected by the model.
Arbitrage mechanics
In the APT context, arbitrage consists of trading in two assets with at least one being
mispriced. The arbitrageur sells the asset which is relatively too expensive and uses the
proceeds to buy one which is relatively too cheap.
Under the APT, an asset is mispriced if its current price diverges from the price predicted by
the model. The asset price today should equal the sum of all future cash flows discounted at
the APT rate, where the expected return of the asset is a linear function of various factors,
and sensitivity to changes in each factor is represented by a factor-specific beta coefficient.
A correctly priced asset here may be in fact a synthetic asset - a portfolio consisting of other
correctly priced assets. This portfolio has the same exposure to each of the macroeconomic
factors as the mispriced asset. The arbitrageur creates the portfolio by identifying x correctly
priced assets (one per factor plus one) and then weighting the assets such that portfolio beta
per factor is the same as for the mispriced asset.
When the investor is long the asset and short the portfolio (or vice versa) he has created a
position which has a positive expected return (the difference between asset return and
portfolio return) and which has a net-zero exposure to any macroeconomic factor and is
therefore risk free (other than for firm specific risk). The arbitrageur is thus in a position to
make a risk free profit:
Where today's price is too low:
The implication is that at the end of the period the portfolio would have
appreciated at the rate implied by the APT, whereas the mispriced asset would
have appreciated at more than this rate. The arbitrageur could therefore:
Today:
1 short sell the portfolio
2 buy the mispriced-asset with the proceeds.
At the end of the period:
1 sell the mispriced asset
2 use the proceeds to buy back the portfolio
3 pocket the difference.
Where today's price is too high:
The implication is that at the end of the period the portfolio would have
appreciated at the rate implied by the APT, whereas the mispriced asset would
have appreciated at less than this rate. The arbitrageur could therefore:
Today:
1 short sell the mispriced asset
2 buy the portfolio with the proceeds.
At the end of the period:
1 sell the portfolio
2 use the proceeds to buy back the mispriced asset
3 pocket the difference.
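The replicating-portfolio step described above (matching the mispriced asset's exposure to each factor) can be sketched numerically. The factor betas below are assumptions; the two correctly priced assets are deliberately chosen to each load on a single factor so the weights can be read off directly, with the remainder placed in the risk-free asset.

```python
# Sketch: building a replicating portfolio whose factor betas match a
# mispriced asset, using two correctly priced assets plus the risk-free
# asset (which has zero factor exposure). All numbers are assumed.
target = (0.8, 0.4)  # factor betas of the mispriced asset
a = (1.0, 0.0)       # asset A loads only on factor 1
b = (0.0, 1.0)       # asset B loads only on factor 2

# Solve wA*a + wB*b = target for the weights (trivial here because each
# chosen asset loads on a single factor); the remainder goes risk-free.
w_a = target[0] / a[0]
w_b = target[1] / b[1]
w_rf = 1.0 - w_a - w_b
print(f"{w_a} {w_b} {w_rf:.1f}")  # 0.8 0.4 -0.2 (borrow 20% risk-free)
```

With more general factor loadings one would solve a small linear system instead, but the idea is identical: equalize the portfolio's beta per factor with that of the mispriced asset.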
Examples of macroeconomic factors used in APT models include surprises in inflation.
As a practical matter, indices or spot or futures market prices may be used in place of macroeconomic factors, which are reported at low frequency (e.g. monthly) and often with
significant estimation errors. Market indices are sometimes derived by means of factor
analysis. More direct "indices" that might be used are:
a diversified stock index such as the S&P 500 or NYSE Composite Index;
oil prices
The NOI approach implies that (i) whatever may be the change in
capital structure the overall value of the firm is not affected. Thus the
overall value of the firm is independent of the degree of leverage in
capital structure. (ii) Similarly the overall cost of capital is not affected
by any change in the degree of leverage in capital structure. The
overall cost of capital is independent of leverage.
If the cost of debt is less than that of equity capital the overall cost of
capital must decrease with the increase in debts whereas it is assumed
under this method that overall cost of capital is unaffected and hence
it remains constant irrespective of the change in the ratio of debts to
equity capital. How can this assumption be justified? The advocates of
this method are of the opinion that the degree of risk of the business
increases with the increase in the amount of debt. Consequently, the
rate of equity capitalization (the return required on investment in
equity shares) rises. Thus, on the one hand, the overall cost of capital
tends to decrease with the increase in the volume of debt; on the
other hand, the cost of equity capital increases to the same extent.
Hence the benefit of leverage is wiped out and the overall cost of capital
remains at the same level as before. Let us illustrate this point.
To put the same in other words there are two parts of the cost of
capital. One is the explicit cost which is expressed in terms of interest
charges on debentures. The other is implicit cost which refers to the
increase in the rate of equity capitalization resulting from the increase
in risk of business due to higher level of debts.
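Since the text promises an illustration, here is one possible numerical sketch under assumed figures (net operating income of 10,000, a constant overall capitalization rate of 10%, debt at 6%). Under the NOI approach the implied cost of equity rises with leverage in exactly the proportion needed to hold the overall cost of capital constant.

```python
# Sketch of the NOI approach with assumed numbers: the overall
# capitalization rate k0 stays constant as leverage changes, because
# the equity capitalization rate rises to offset cheaper debt.
NOI = 10_000.0  # net operating income (assumed)
k0 = 0.10       # overall capitalization rate (assumed constant)
kd = 0.06       # cost of debt (assumed)
V = NOI / k0    # total value of the firm = 100,000 under the NOI approach

for debt in (30_000.0, 50_000.0):
    equity = V - debt
    interest = kd * debt
    ke = (NOI - interest) / equity          # implied cost of equity
    wacc = kd * debt / V + ke * equity / V  # overall cost of capital
    print(f"debt={debt:,.0f}  ke={ke:.4f}  overall cost={wacc:.4f}")
```

With debt of 30,000 the implied cost of equity is about 11.7%; at 50,000 it rises to 14%; in both cases the overall cost of capital stays at 10%.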
Float is defined as the difference between the book balance and the bank
balance of an account. For example, assume that you go to the bank and
open a checking account with $500. You receive no interest on the $500
and pay no fee to have the account.
Now assume that you receive your water bill in the mail and that it is for
$100. You write a check for $100 and mail it to the water company. At the
time you write the $100 check you also record the payment in your bank
register. Your bank register reflects the book value of the checking account.
The check will literally be "in the mail" for a few days before it is received
by the water company and may go several more days before the water
company cashes it.
Between the moment you write the check and the time the bank cashes it,
there is a difference between your book balance and the balance
the bank lists for your checking account. That difference is float. This float
can be managed. If you know that the bank will not learn about your check
for five days, you could take the $100 and invest it in a savings account at
the bank for the five days and then place it back into your checking account
"just in time" to cover the $100 check.
Time                 Book Balance   Bank Balance
0 (account opened)       $500           $500
1 (check written)        $400           $500
2 (check cashed)         $400           $400
Float is calculated by subtracting the book balance from the bank balance.
Float at Time 0: $500 - $500 = $0
Float at Time 1: $500 - $400 = $100
Float at Time 2: $400 - $400 = $0
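The timeline can be sketched in a few lines of code, using the balances from the checking-account example above.

```python
# Sketch: computing float (bank balance minus book balance) over the
# check-writing timeline from the example above.
def float_amount(book, bank):
    return bank - book

timeline = [
    # (event, book balance, bank balance) -- numbers from the example
    ("account opened", 500, 500),
    ("check written", 400, 500),  # book drops; bank has not seen it yet
    ("check cashed", 400, 400),   # bank catches up
]

for event, book, bank in timeline:
    print(f"{event}: float = ${float_amount(book, bank)}")
```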
Firms can manage cash in virtually all areas of operations that involve the use of
cash. The goal is to receive cash as soon as possible while at the same time
waiting to pay out cash as long as possible. Below are several examples of how
firms are able to do this.
Policy For Cash Being Held
Here a firm already is holding the cash so the goal is to maximize the benefits
from holding it and wait to pay out the cash being held until the last possible
moment. Previously there was a discussion on Float which includes an example
based on a checking account. That example is expanded here.
Assume that rather than investing $500 in a checking account that does not pay
any interest, you invest that $500 in liquid investments. Further assume that the
bank believes you to be a low credit risk and allows you to maintain a balance of
$0 in your checking account.
This allows you to write a $100 check to the water company and then transfer
funds from your investment to the checking account in a "just in time" (JIT)
fashion. By employing this JIT system you are able to draw interest on the entire
$500 up until you need the $100 to pay the water company. Firms often have
policies similar to this one to allow them to maximize idle cash.
Sales
The goal for cash management here is to shorten the amount of time before the
cash is received. Firms that make sales on credit are able to decrease the
amount of time that their customers wait until they pay the firm by offering
discounts.
For example, credit sales are often made with terms such as 3/10 net 60. The
first part of the sales term "3/10" means that if the customer pays for the sale
within 10 days they will receive a 3% discount on the sale. The remainder of the
sales term, "net 60," means that the bill is due within 60 days. By offering an
inducement, the 3% discount in this case, firms are able to cause their
customers to pay off their bills early. This results in the firm receiving the cash
earlier.
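One way to see why the 3% discount is such a strong inducement is to compute what forgoing it costs the customer on an annualized basis. The simple-interest approximation below is a standard textbook formula, not one stated in the text.

```python
# Sketch: implied annualized cost to a customer of forgoing the discount
# under terms like "3/10 net 60" (simple-interest approximation).
def cost_of_forgoing_discount(discount, discount_days, net_days):
    """(d / (1 - d)) * (365 / (net_days - discount_days))."""
    return (discount / (1 - discount)) * (365 / (net_days - discount_days))

cost = cost_of_forgoing_discount(0.03, 10, 60)
print(f"{cost:.1%}")  # roughly 22.6% per year
```

Paying on day 60 instead of day 10 means borrowing from the supplier for 50 days at about a 22.6% annual rate, so most customers who can raise cheaper funds will pay early.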
Inventory
The goal here is to put off the payment of cash for as long as possible and to
manage the cash being held. By using a JIT inventory system, a firm is able to
avoid paying for the inventory until it is needed while also avoiding carrying
costs on the inventory. JIT is a system where raw materials are purchased and
received just in time, as they are needed in the production lines of a firm.
3. Ratios give false results if they are calculated from incorrect accounting data.
4. Ratios are calculated on the basis of past data. Therefore, they do not provide
complete information for future forecasting: a firm may perform well in the
future, even though the past performance of the firm (as shown by its
ratios) may have been mediocre.
Financial Statement Analysis Limitations
Many things can impact the calculation of ratios and make comparisons
difficult. The limitations include:
By converting each line item to a common size ratio, comparable
statements can be created, revealing trends and providing insight into how the
different companies compare.
The common size ratio for each line on the financial statement is calculated as
follows:
Common Size Ratio = Item of Interest / Reference Item
For the balance sheet, for example:
Common Size Ratio = Item / Total Assets
The following example income statement shows both the dollar amounts and the
common size ratios:
Common Size Income Statement
                       Income Statement   Common-Size Income Statement
Revenue                      70,134             100%
Cost of Goods Sold           44,221             63.1%
Gross Profit                 25,913             36.9%
SG&A Expense                 13,531             19.3%
Operating Income             12,382             17.7%
Interest Expense              2,862              4.1%
Taxes                         3,766              5.4%
Net Income                    5,754              8.2%
For the balance sheet, the common size percentages are referenced to the total
assets. The following sample balance sheet shows both the dollar amounts and
the common size ratios:
Common Size Balance Sheet
                                  Balance Sheet   Common-Size Balance Sheet
ASSETS
Cash & Marketable Securities           6,029            15.1%
Accounts Receivable                   14,378            36.0%
Inventory                             17,136            42.9%
Total Current Assets                  37,543            93.9%
Property, Plant & Equipment            2,442             6.1%
Total Assets                          39,985             100%
LIABILITIES & EQUITY
Current Liabilities                   14,251            35.6%
Long-Term Debt                        12,624            31.6%
Total Liabilities                     26,875            67.2%
Shareholders' Equity                  13,110            32.8%
Total Liabilities & Equity            39,985             100%
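The common size calculation is mechanical enough to sketch in code. The figures below are the income statement amounts from the example above (the label on the 3,766 tax line is inferred from the arithmetic, since operating income less interest and taxes equals the stated net income); revenue is the reference item.

```python
# Sketch: computing common size ratios for an income statement.
# Amounts are from the example above; revenue is the reference item.
income_statement = {
    "Revenue": 70_134,
    "Cost of Goods Sold": 44_221,
    "Gross Profit": 25_913,
    "SG&A Expense": 13_531,
    "Operating Income": 12_382,
    "Interest Expense": 2_862,
    "Taxes": 3_766,
    "Net Income": 5_754,
}

reference = income_statement["Revenue"]
common_size = {item: amount / reference
               for item, amount in income_statement.items()}

for item, ratio in common_size.items():
    print(f"{item:20s} {ratio:6.1%}")
```

The same dictionary comprehension works for a balance sheet by switching the reference item to total assets.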
Derivative
Derivatives are financial contracts, or financial instruments, whose values are derived from
the value of something else (known as the underlying).
The underlying on which a derivative is based can be an asset (e.g., commodities, equities
(stocks), residential mortgages, commercial real estate, loans, bonds), an index (e.g., interest
rates, exchange rates, stock market indices, consumer price index (CPI) see inflation
derivatives), or other items (e.g., weather conditions, or other derivatives). Credit derivatives
are based on loans, bonds or other forms of credit.
The main types of derivatives are forwards, futures, options, and swaps.
Derivatives can be used to mitigate the risk of economic loss arising from changes in the
value of the underlying. This activity is known as hedging. Alternatively, derivatives can be
used by investors to increase the profit arising if the value of the underlying moves in the
direction they expect. This activity is known as speculation.
Because the value of a derivative is contingent on the value of the underlying, the notional
value of derivatives is recorded off the balance sheet of an institution, although the market
value of derivatives is recorded on the balance sheet.
Uses
Hedging
Derivatives allow risk about the value of the underlying asset to be transferred from one
party to another. For example, a wheat farmer and a miller could sign a futures contract to
exchange a specified amount of cash for a specified amount of wheat in the future. Both
parties have reduced a future risk: for the wheat farmer, the uncertainty of the price, and for
the miller, the availability of wheat. However, there is still the risk that no wheat will be
available due to causes unspecified by the contract, like the weather, or that one party will
renege on the contract. Although a third party, called a clearing house, insures a futures
contract, not all derivatives are insured against counterparty risk.
From another perspective, the farmer and the miller both reduce a risk and acquire a risk
when they sign the futures contract: The farmer reduces the risk that the price of wheat will
fall below the price specified in the contract and acquires the risk that the price of wheat will
rise above the price specified in the contract (thereby losing additional income that he could
have earned). The miller, on the other hand, acquires the risk that the price of wheat will fall
below the price specified in the contract (thereby paying more in the future than he otherwise
would) and reduces the risk that the price of wheat will rise above the price specified in the
contract. In this sense, one party is the insurer (risk taker) for one type of risk, and the
counterparty is the insurer (risk taker) for another type of risk.
Hedging also occurs when an individual or institution buys an asset (like a commodity, a
bond that has coupon payments, a stock that pays dividends, and so on) and sells it using a
futures contract. The individual or institution has access to the asset for a specified amount of
time, and then can sell it in the future at a specified price according to the futures contract. Of
course, this allows the individual or institution the benefit of holding the asset while reducing
the risk that the future selling price will deviate unexpectedly from the market's current
assessment of the future value of the asset.
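The wheat example can be sketched as a payoff table. The contract price, quantity, and spot scenarios below are assumptions for illustration; the point is only that the farmer and the miller take opposite sides of the same price risk.

```python
# Sketch: gains relative to the spot market for the farmer (short side)
# and the miller (long side) of a wheat futures contract.
# The contract price, quantity and spot scenarios are assumed.
contract_price = 5.00  # agreed futures price per bushel
quantity = 1_000       # bushels

for spot in (4.00, 5.00, 6.00):
    # Farmer sells at the contract price instead of the spot price.
    farmer_gain_vs_spot = (contract_price - spot) * quantity
    # Miller buys at the contract price instead of the spot price.
    miller_gain_vs_spot = (spot - contract_price) * quantity
    print(f"spot={spot:.2f}: farmer {farmer_gain_vs_spot:+,.0f}, "
          f"miller {miller_gain_vs_spot:+,.0f}")
```

The two gains sum to zero in every scenario, which is exactly the insurer/insured relationship described above: each party is compensated in the states it feared.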
Types of derivatives
OTC and exchange-traded
Broadly speaking, there are two distinct groups of derivative contracts, which are
distinguished by the way they are traded in the market: over-the-counter (OTC)
derivatives, which are negotiated directly between two parties, and
exchange-traded derivatives, which are traded via specialized exchanges.
More complex derivatives can be created by combining the elements of these basic types. For
example, the holder of a swaption has the right, but not the obligation, to enter into a swap on
or before a specified future date.
Examples
Some common examples of these derivatives are:
CONTRACT TYPES (by underlying and trading venue):

Equity index
  Exchange-traded futures: DJIA Index future; NASDAQ Index future
  Exchange-traded options: Option on DJIA Index future; Option on NASDAQ Index future
  OTC swap: Equity swap
  OTC forward: Back-to-back
  OTC option: n/a

Money market
  Exchange-traded futures: Eurodollar future; Euribor future
  Exchange-traded options: Option on Eurodollar future; Option on Euribor future
  OTC swap: Interest rate swap; Basis swap
  OTC forward: Forward rate agreement
  OTC option: Interest rate cap and floor; Swaption

Bonds
  Exchange-traded futures: Bond future
  Exchange-traded options: Option on Bond future
  OTC swap: Total return swap
  OTC forward: Repurchase agreement
  OTC option: Bond option

Single stocks
  Exchange-traded futures: Single-stock future
  Exchange-traded options: Single-share option
  OTC swap: Equity swap
  OTC forward: Repurchase agreement
  OTC option: Stock option; Warrant; Turbo warrant

Credit
  Exchange-traded futures: n/a
  Exchange-traded options: n/a
  OTC swap: Credit default swap
  OTC forward: n/a
  OTC option: Credit default option
Commodities
Freight derivatives
Inflation derivatives
Weather derivatives
Credit derivatives
Cash flow
The payments between the parties may be determined by:
an interest rate;
an exchange rate;
Some derivatives are the right to buy or sell the underlying security or commodity at some
point in the future for a predetermined price. If the price of the underlying security or
commodity moves in the right direction, the owner of the derivative makes money;
otherwise, they lose money or the derivative becomes worthless. Depending on the terms of
the contract, the potential gain or loss on a derivative can be much higher than if they had
traded the underlying security or commodity directly.
Valuation
[Figure: total world derivatives from 1998-2007 compared to total world wealth in the year 2000.]
Market price, i.e. the price at which traders are willing to buy or sell
the contract
Criticisms
Counter-party risk
Derivatives (especially swaps) expose investors to counter-party risk.
For example, suppose a person wants a fixed interest rate loan for his business, but finds
that banks only offer variable rates. He swaps payments with another business that wants a
variable rate, synthetically creating a fixed rate for himself. However, if the second
business goes bankrupt, it can't pay its variable rate, and so the first business will lose its
fixed rate and will be paying a variable rate again. If interest rates have increased, it is
possible that the first business may be adversely affected, because it may not be prepared to
pay the higher variable rate.
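The synthetic fixed rate described above can be sketched as cash flows. The notional, rates, and floating-rate path below are assumptions for illustration; the point is that the swap receipts exactly offset movements in the variable loan cost, as long as the counterparty keeps paying.

```python
# Sketch: net cash flows of a plain fixed-for-floating interest rate swap,
# illustrating the synthetic fixed rate described above. The notional,
# rates, and floating-rate path are assumed.
notional = 1_000_000.0
fixed_rate = 0.05
floating_path = [0.04, 0.05, 0.06, 0.07]  # assumed yearly floating rates

# The fixed-rate payer pays 5% and receives floating each period;
# combined with a floating-rate loan, this locks in a 5% net cost.
for year, floating in enumerate(floating_path, start=1):
    swap_net = (floating - fixed_rate) * notional  # received by fixed payer
    loan_interest = floating * notional            # paid on the variable loan
    net_cost = loan_interest - swap_net            # always the fixed amount
    print(f"year {year}: net borrowing cost = {net_cost:,.0f}")
```

If the counterparty defaults in year 3, the swap receipts stop and the borrower is back to paying the (now higher) floating rate, which is precisely the counter-party risk at issue.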
Different types of derivatives have different levels of risk for this effect. For example,
standardized stock options by law require the party at risk to have a certain amount deposited
with the exchange, showing that they can pay for any losses; Banks who help businesses
swap variable for fixed rates on loans may do credit checks on both parties. However in
private agreements between two companies, for example, there may not be benchmarks for
performing due diligence and risk analysis.
Benefits
Nevertheless, the use of derivatives also has its benefits:
Definitions
Gross negative fair value: The sum of the fair values of contracts
where the bank owes money to its counter-parties, without taking
into account netting. This represents the maximum losses the
bank's counter-parties would incur if the bank defaults and there is
no netting of contracts, and no bank collateral was held by the
counter-parties.
Gross positive fair value: The sum total of the fair values of
contracts where the bank is owed money by its counter-parties,
without taking into account netting. This represents the maximum
losses a bank could incur if all its counter-parties default and there
is no netting of contracts, and the bank holds no counter-party
collateral.
Total risk-based capital: The sum of tier 1 plus tier 2 capital. Tier 1
capital consists of common shareholders' equity, perpetual
preferred shareholders' equity with non-cumulative dividends,
retained earnings, and minority interests in the equity accounts of
consolidated subsidiaries. Tier 2 capital consists of subordinated
debt, intermediate-term preferred stock, cumulative and long-term
preferred stock, and a portion of a bank's allowance for loan and
lease losses.
Risk management
The broad parameters of risk management function should cover:
(a) Organizational structure
(b) Comprehensive risk management approach
(c) Risk management policies approved by the board, which should
be consistent with the broader business strategies, capital
strength, management expertise and overall willingness to
assume risk
(d) Guidelines and other parameters used to govern risk taking,
including detailed structure of prudential limits
(e) Strong MIS for reporting, monitoring and controlling risk
(f) Well laid out procedures, effective control and comprehensive
risk reporting framework
(g) Separate risk management organization/framework independent
of operational departments and with clear delineation of levels
of responsibility for management of risk
(h) Accurate and timely credit grading, which is one of the basic
components of risk management.
Credit risk
Credit risk is defined as the possibility of losses associated with diminution in the
credit quality of borrowers or counterparties.
Market risk
Market risk takes the form of:
(a) Liquidity risk
(b) Interest rate risk
(c) Foreign exchange rate(forex) risk
(d) Commodity price risk
(e) Equity price risk
Operational risk
Managing operational risk is becoming an important feature of sound risk
management practices in modern financial markets in the wake of the phenomenal
increase in the volume of transactions, high degree of structural changes and
complex support systems. The most important type of operational risk involves
breakdowns in internal controls and corporate governance. Such breakdowns
can lead to financial loss through error, fraud, or failure to perform in a timely
manner or cause the interest of the banks to be compromised.
Generally, operational risk is defined as any risk, which is not categorized as
market or credit risk or the risk of loss arising from various types of human or
technical error. It is also synonymous with settlement or payments risk and
business interruption, administrative and legal risks. Operational risk has some
form of link with credit and market risks: an operational problem with a
business transaction could trigger a credit or market risk.
Credit risk
Operational risk
New techniques for assessing and managing these risks all focused on their impact on market
value.
New credit risk models assessed potential defaults or credit deteriorations in terms of
their mark-to-market impact.
Operational risk was also assessed in terms of its actual or potential direct costs.
Such techniques proved effective on bank trading floors, where market values were readily
available. Extending them to other parts of the bank, or even to non-financial corporations,
proved problematic. This was the realm of book value accounting. Market values were
difficult or impossible to secure for items such as private equity, pension liabilities, factory
equipment, intellectual property or natural resource reserves.
Corporate risk management emerged as a catch-all phrase for practices that serve to optimize
risk taking in a context of book value accounting. Generally, this includes risks of non-financial corporations, but also those of business lines of financial institutions that are not
engaged in trading or investment management. Risks vary from one corporation to the next,
depending on such factors as size, industry, diversity of business lines, sources of capital, etc.
Practices that are appropriate for one corporation are inappropriate for another. For this
reason, corporate risk management is a more elusive notion than is financial risk
management. It encompasses a variety of techniques drawn from both FRM and ALM.
Corporations pick and choose from these, adapting techniques to suit their own needs. This
article is an overview.
Corporate Risk Management
In a corporate setting, the familiar division of risks into market, credit and operational risks
breaks down.
Of these, credit risk poses the least challenges. To the extent that corporations take credit
risk (some take a lot; others take little), new and traditional techniques of credit risk
management are easily adapted.
Operational risk largely doesn't apply to corporations. It includes such factors as model risk
or back office errors. Some aspects do affect corporations, such as fraud or natural
disasters, but corporations have been addressing these with internal
audit, facilities management and legal departments for decades. Also,
corporations face risks that are akin to the operational risk of financial institutions but are
unique to their own business lines. An airline is exposed to risks due to weather, equipment
failure and terrorism. A power generator faces the risk that a generating plant may go down
for unscheduled maintenance. In corporate risk management, these risks - those that
overlap with the operational risks of financial firms and those that are akin
to such operational risks but are unique to non-financial firms - are called
operations risks.
The real challenge of corporate risk management is those risks that are akin to market risk
but aren't market risk. An oil company holds oil reserves. Their "value" fluctuates with the
market price of oil, but what does this mean? The oil reserves don't have a market value. A
chain of restaurants is thriving. Its restaurants are "valuable," but it is impossible to assign
them market values. Something that doesn't have a market value doesn't pose market risk.
This is almost a tautology. Such risks are business risks as opposed to market risks.
In the realm of corporate risk management, we abandon the division of risks into market,
credit and operational risks and replace it with a new categorization:
Corporate risk comprises:
1. Market risk
2. Credit risk
3. Business risk
4. Operations risk
Corporations do face some market risks, such as commodity price risk or foreign exchange
risk. These are usually dwarfed by business risks. In a nutshell, the challenge of corporate
risk management is the management of business risk.
Addressing Business Risk
Techniques for addressing business risk take two forms:
Those that treat business risks as market risks, so that techniques of FRM can be
directly applied, and
Those that address business risks from a book value standpoint, modifying or
adapting techniques of FRM and ALM as appropriate.
The first approach requires assigning economic values to items that lack market
values; there are two ways to do so. One is to start with accounting metrics of value
and make suitable adjustments, so they are more reflective of some intrinsic value.
This is the approach employed with economic value added (EVA) analyses.
The other approach is to construct some model to predict what value the asset might
command, if a liquid market existed for it. In this respect, a derogatory name for
economic value is mark-to-model value.
Once some means has been established for assigning economic values, these are treated like
market values. Standard techniques of financial risk management, such as value-at-risk (VaR) or economic capital allocation, are then applied.
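As a hedged sketch of what "applying standard FRM techniques" might look like once economic values are assigned, the following computes a simple parametric VaR. The portfolio value, volatility, and confidence level are hypothetical illustrations, not figures from the text.

```python
# Illustrative sketch: once an economic value is assigned, a parametric
# value-at-risk (VaR) can be computed just as for a market value.
# All figures below are hypothetical.

from math import sqrt

def parametric_var(value, annual_vol, horizon_days, confidence_z=1.65):
    """One-sided parametric VaR assuming normally distributed returns.

    value        -- economic value assigned to the position
    annual_vol   -- annualized volatility of that value (0.30 = 30%)
    horizon_days -- risk horizon in trading days (252 trading days/year assumed)
    confidence_z -- z-score for the confidence level (1.65 ~ 95%)
    """
    horizon_vol = annual_vol * sqrt(horizon_days / 252)
    return value * horizon_vol * confidence_z

# A hypothetical $100m position, 30% annualized volatility, 10-day horizon:
var_10d = parametric_var(100e6, 0.30, 10)
print(round(var_10d))  # roughly a $9.9m 95% 10-day VaR
```

The caveat in the text applies directly: the output is only as meaningful as the assigned value and volatility feeding it.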
This economic approach to managing business risk is applicable if most of a firm's balance
sheet can be marked to market. Economic values then only need to be assigned to a few
items in order for techniques of FRM to be applied firm wide. An example would be a
commodity wholesaler. Most of its balance sheet comprises physical and forward positions in
commodities, which can be mostly marked to market.
More controversial has been the use of economic valuations in power and natural gas
markets. The actual energies trade and, for the most part, can be marked to market. However,
producers also hold significant investments in plants and equipment, and these cannot
be marked to market. Suppose some energy trades spot and forward out
three years. An asset that produces the energy has an expected life of 50
years, which means that an economic value for the asset must reflect a hypothetical 50-year
forward curve. The forward curve doesn't exist, so a model must construct one.
Consequently, assigned economic values are highly dependent on assumptions. Often, they
are arbitrary.
In this context, it isn't enough to assign economic values. VaR analyses require standard
deviations and correlations as well. Assigning these to 50-year forward prices that are
themselves hypothetical is essentially meaningless; yet those standard deviations
and correlations determine the reported VaR.
These dubious techniques were widely (but not universally) adopted by US energy merchants
in the late 1990s and early 2000s. The most publicized of these was Enron Corp., which went
beyond using economic values for internal reporting and incorporated them into its financial
reporting to investors. The 2001 bankruptcy of Enron and subsequent revelations of fraud
tainted mark-to-model techniques.
Book Value
The second approach to addressing business risk starts by defining risks that are meaningful
in the context of book value accounting.
Most typical of these are:
Earnings risk, which is risk due to uncertainty in future reported earnings, and
Cash flow risk, which is risk due to uncertainty in future reported cash flows.
Of the two, earnings risk is more akin to market risk. Yet, it avoids the arbitrary assumptions
of economic valuations. A firm's accounting earnings are a well defined notion. A problem
with looking at earnings risk is that earnings are, well, non-economic. Earnings may be
suggestive of economic value, but they can be misleading and are often easy to manipulate.
A firm can report high earnings while its long term franchise is eroded away by lack of
investment or competing technologies. Financial transactions can boost short-term earnings
at the expense of long-term earnings. After all, traditional techniques of ALM focus on
earnings, and their shortcomings remain today.
Cash flow risk is less akin to market risk. It relates more to liquidity than the value of a firm,
but this is only partly true. As anyone who has ever worked with distressed firms can attest,
"cash is king." When a firm gets into difficulty, earnings and market values don't pay the
bills. Cash flow is the life blood of a firm. However, as with earnings risk, cash flow risk
offers only an imperfect picture of a firm's business risk. Cash flows can also be manipulated,
and steady cash flows may hide corporate decline.
Techniques for managing earnings risk and cash flow risk draw heavily on techniques of
ALM, especially scenario analysis and simulation analysis. They also
adapt techniques of FRM. In this context, value-at-risk (VaR) becomes
earnings-at-risk (EaR) or cash-flow-at-risk (CFaR). For example, EaR might be
reported as the 10% quantile of this quarter's earnings (which is the same as the 90% quantile
of reported loss, multiplied by minus one).
The actual calculations of EaR or CFaR differ from those for VaR. These are long-term risk
metrics, with horizons of three months or a year. VaR is routinely calculated over a one-day
horizon. Also, EaR and CFaR are driven by rules of accounting while VaR is driven by
financial engineering principles. Typically, EaR or CFaR are calculated by first performing a
simulation analysis. That generates a probability distribution for the period's earnings or cash
flow, which is then used to compute the desired metric of EaR or CFaR.
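The simulation-then-quantile procedure just described can be sketched as follows. The earnings model here (expected earnings plus normally distributed noise) and all the figures are stand-in assumptions for illustration only.

```python
# Hedged sketch of earnings-at-risk (EaR): simulate the period's earnings,
# then read off a quantile, as the text describes. The normal-noise earnings
# model and the figures are hypothetical.

import random

random.seed(42)

expected_earnings = 50.0   # hypothetical expected quarterly earnings ($m)
earnings_vol = 12.0        # hypothetical std. dev. of quarterly earnings ($m)

# Monte Carlo: simulate many possible outcomes for the quarter's earnings.
simulated = sorted(random.gauss(expected_earnings, earnings_vol)
                   for _ in range(100_000))

# EaR reported as the 10% quantile of the earnings distribution
# (equivalently, minus one times the 90% quantile of loss).
ear_10pct = simulated[int(0.10 * len(simulated))]
print(f"10% quantile of simulated earnings: {ear_10pct:.1f}m")
```

In practice the simulation would be driven by accounting rules and business drivers rather than a single normal distribution, but the final step of reading a quantile off the simulated distribution is the same.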
One decision that needs to be made with EaR or CFaR is whether to use a constant or
contracting horizon. If management wants an EaR analysis for quarterly earnings, should the
analysis actually assess risk to the current quarter's earnings? If that is the case, the horizon
will start at three months on the first day of the quarter and gradually shrink to zero by the
end of the quarter. The alternative is to use a constant three-month horizon. After the first day
of the quarter, results will no longer apply to that quarter's actual earnings, but to some
hypothetical earnings over a shifting three-month horizon. Both approaches are used. The
advantage of a contracting horizon is that it addresses an actual concern of management:
will we hit our earnings target this quarter? A disadvantage is that the risk metric keeps
changing: if reported EaR declines over a week, does this mean that actual risk has declined,
or does it simply reflect a shortened horizon?
Conclusion
While the two approaches to business risk management - that based on economic value and
that based on book value - are philosophically different, they can complement each other.
Some firms use them side-by-side to assess different aspects of business risk.
This article has focused on the unique challenges of corporate risk management. There is
much else about corporate risk management that overlaps with financial risk management:
the need for a risk management function, the role of corporate culture, technology issues,
independence, etc. See the article Financial Risk Management for a discussion of these and
other topics.
Corporate Risk Management
Listed below are brief tips that may be helpful as an overview and guidelines for risk
management activities. If there are questions on these risk management highlights or if more
detailed information is needed, please contact Risk Limited directly.
10. Identify and assess risks
Risk is everywhere. Success in business often comes down to recognizing and managing
possible risks associated with potential opportunities and returns. The types of risks faced in
most businesses are quite varied and far ranging. Risks typically include both financial and
physical categories. Types of risk include sometimes apparent hazards, such as safety and
health risks associated with operations, as well as financial risks from exposures to market
price volatility, counter party credit defaults, and legal liabilities. Some risks are intuitively
obvious; unfortunately, many are not. Risk categories include: Market, Credit, Legal,
Regulatory, Political, Operational, Strategic, Reputational, Event, Country and Model Risks.
So first identify possible risks throughout your business.
9. Know the numbers
Systematic processes such as a RiskRegister to identify and rank risks by order of
magnitude can be a key first step, but effective risk management strategies typically depend
on quantification of risks, often through probabilistic modeling techniques. Said another way:
one must 'measure it to manage it.' Measurement and valuation can be one of the most
difficult efforts in risk management and finance, but these are crucial for cost effective risk
management and informed decision-making. Spend the time and money to get the tools and
expertise to best quantify the company's key risks. A close corollary is to know what is in
any 'black-box' models used for valuation & reporting.
8. Risks are interrelated
Interactions and correlations of risks are a key element of which to be aware in identifying,
quantifying and mitigating risks. For example, exposure to credit risks may also affect
market price risks, while operational risks such as fraud may create legal and reputational
risks. Recognition that risks interact between business activities is one of the bases for the
'enterprise-wide risk management' approach now widely practiced by leading companies.
7. Continually reassess risks
Things change, and so do risks. Market conditions and volatility levels change, financial
strength of counter parties change, physical environments change, geopolitical situations
change, and on-and-on. And these changes can be rather sudden, or they can be creeping and
hidden. Exposures to risks that result from business activities may also change. Effective risk
management requires that one reevaluate risks on an ongoing basis, and processes such as a
RiskAudit should be built into the corporate risk management framework to assess both
current and projected risk exposures. Forecasting future exposures is necessary since hedge
decisions are based on projected risk levels.
6. Commit adequate resources
Effective risk management also requires considerable expertise and resources, from basic risk
control, compliance and governance activities, through advanced quantitative risk analysis.
These resources are usually not cheap, but as has been proven repeatedly by
high-profile business failures, the cost of losses due to risk management weaknesses or lapses
can be catastrophically high. Investment in risk management capabilities for most businesses
has a high payoff. Due to the potentially extreme cost of mistakes, risk managers should be
especially well trained.
5. Review the cost of risk mitigation
Transferring risks through hedge transactions or other activities is often an effective and
advisable risk management technique, but risk mitigation strategy may largely depend on the
hedge costs. Risk mitigation strategies also depend on the capacity of the firm to sustain risks
and possible losses. Trading activities that are truly for hedging should not be avoided due to
concern that trading could be misconstrued as 'speculative'; however, various hedge
instruments may not have the same cost effectiveness or appropriateness for every company
and environment.
4. Reduce exposure
Risks arise from exposure. A commonly accepted definition of risk is 'exposure to
uncertainty' (at least for that uncertainty for which one is concerned about the outcome).
Reduce the exposure and you likely reduce the risk. The selected approach and structure of
business activities can have a significant effect on the exposure & risk levels generated.
Commercial agreements and transaction structures may result in transference or acceptance
of risks with a counter party. Risk awareness in business processes and commercial activities
can lead to opportunities to reduce current and future exposures. The choice of billing
currency for international purchases is one example of how transaction structure affects exposure.
3. Assess the Risk/Return Ratio
Risk management does not equate to risk aversion; however, decisions driven by risk/reward
assessments usually have a higher probability of successful outcomes. A consideration in
such risk-based business decision-making should also be the capacity of the firm to sustain
risks. As in the well developed finance field of portfolio theory (which in general terms
focuses on how investors can best balance risks and rewards in constructing investment
portfolios), business decisions based on risk/reward balance should optimize returns.
2. Monitor for quantum shifts in risk levels
A key value of quantitative risk measures is to highlight significant changes in risk levels.
Although opinions may differ on the optimal methodology for some valuation metrics,
significant changes or trends in risk metrics, such as Value-at-Risk measures, can provide a
key signal to management. Best-practice designs of management reporting 'dashboards'
provide this risk monitoring capability, also showing segment reporting and consolidation to
reflect correlations such as offsets in price risks between markets.
1. Create a risk aware culture
Educate the organization in practical aspects of risk management, and that especially
includes the most senior business executives and the corporate board of directors. Risk
management responsibilities should be clear. Whether it is intuitive actions based on
experience and expertise in risk management or whether it is a result of institutionalized risk
policies and procedures, effective risk management is typically a key factor in successful
businesses. Training and building awareness can lead to a risk management culture that will
drive business success.
Cash management
In United States banking, cash management, or treasury management, is a marketing term
for certain services offered primarily to larger business customers. It may be used to describe
all bank accounts (such as checking accounts) provided to businesses of a certain size, but it
is more often used to describe specific services such as cash concentration, zero balance
accounting, and automated clearing house facilities. Sometimes, private banking customers
are given cash management services.
For example, in a sweep account, idle funds can be moved into a money
market mutual fund overnight, and then moved back the next
morning. This allows the company to earn interest overnight, and it is the
primary use of money market mutual funds.
In the past, other services were offered whose usefulness has diminished with the
rise of the Internet. For example, companies could have daily faxes of their most recent
transactions or be sent CD-ROMs of images of their cashed checks.
Cash management services can be costly but usually the cost to a company is outweighed by
the benefits: cost savings, accuracy, efficiencies, etc.
INVENTORY
Inventory means the goods and materials that a business holds available in stock. The
word is also used for a list of the contents of a household and for a list, compiled for
testamentary purposes, of the possessions of someone who has died. In accounting,
inventory is considered an asset.
In business management, inventory consists of a list of goods and materials held available in
stock.
Inventory Management
Inventory refers to the stock of resources that possess economic value, held by an
organization at any point of time. These resource stocks can be manpower, machines, capital
goods or materials at various stages.
Inventory management is primarily about specifying the size and placement of stocked
goods. Inventory management is required at different locations within a facility or within
multiple locations of a supply network to protect the regular and planned course of
production against the random disturbance of running out of materials or goods. The scope of
inventory management also concerns the fine lines between replenishment lead time,
carrying costs of inventory, asset management, inventory forecasting, inventory valuation,
inventory visibility, future inventory price forecasting, physical inventory, available physical
space for inventory, quality management, replenishment, returns and defective goods and
demand forecasting. Balancing these competing requirements leads to optimal inventory
levels, which is an on-going process as the business needs shift and react to the wider
environment.
Inventory management involves a retailer seeking to acquire and maintain a proper
merchandise assortment while ordering, shipping, handling, and related costs are kept in
check.
Inventory management has also been defined as:
1. Systems and processes that identify inventory requirements, set targets, provide
replenishment techniques and report actual and projected inventory status.
2. Handling all functions related to the tracking and management of material, including
the monitoring of material moved into and out of stockroom locations and the reconciling
of inventory balances; this may also include ABC analysis, lot tracking, cycle counting
support, etc.
3. Management of the inventories, with the primary objective of determining/controlling
stock levels within the physical distribution function, to balance the need for product
availability against the need for minimizing stock holding and handling costs. See
inventory proportionality.
Business inventory
The reasons for keeping stock
There are three basic reasons for keeping an inventory:
1. Time - The time lags present in the supply chain, from supplier to
user at every stage, requires that you maintain certain amounts of
inventory to use in this "lead time."
2. Uncertainty - Inventories are maintained as buffers to meet
uncertainties in demand, supply and movements of goods.
3. Economies of scale - The ideal of "one unit at a time, at the place
where the user needs it, when he needs it" tends to incur high
logistics costs. Bulk buying, movement and storage bring in
economies of scale, and thus inventory.
All these stock reasons can apply to any owner or product stage.
These classifications apply along the whole Supply chain, not just within a facility or plant.
Where these stocks contain the same or similar items, it is common practice to hold all
these stocks mixed together before or after the sub-process to which they relate. This
'reduces' costs, but because the stocks are mixed there is no visual reminder to operators
of the adjacent sub-processes, or to line management, of stock that is due to a particular
cause and should be a particular individual's responsibility, with inevitable consequences.
Some plants have centralized stock holding across sub-processes, which makes the situation
even more acute.
Typology
1. Buffer/safety stock
2. Cycle stock (Used in batch processes, it is the available inventory,
excluding buffer stock)
3. De-coupling (Buffer stock that is held by both the supplier and the
user)
4. Anticipation stock (Building up extra stock for periods of increased
demand - e.g. ice cream for summer)
5. Pipeline stock (Goods still in transit or in the process of distribution - they have left the factory but have not yet arrived at the customer)
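A common sizing rule for the buffer/safety stock in item 1 is to cover demand variability over the replenishment lead time at a chosen service level. This rule and all figures below are my own illustration, not something stated in the text.

```python
# Assumed (not from the text) safety-stock sizing rule:
# safety stock = z * sigma_daily_demand * sqrt(lead time in days),
# valid when daily demands are independent and roughly normal.

from math import sqrt

def safety_stock(demand_std_per_day, lead_time_days, service_z=1.65):
    """Units of buffer stock for a given service level (z=1.65 ~ 95%)."""
    return service_z * demand_std_per_day * sqrt(lead_time_days)

# Daily demand std. dev. of 40 units, 9-day lead time, ~95% service level:
print(round(safety_stock(40, 9)))  # 1.65 * 40 * 3 = 198 units
```

Raising the service level (a larger z) or facing longer, more variable lead times both push the buffer up, which is exactly the uncertainty rationale in item 2 above.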
Inventory examples
While accountants often discuss inventory in terms of goods for sale, organizations -
manufacturers, service-providers and not-for-profits - also have inventories (fixtures,
furniture, supplies, ...) that they do not intend to sell. Manufacturers', distributors', and
wholesalers' inventory tends to cluster in warehouses. Retailers' inventory may exist in a
warehouse or in a shop or store accessible to customers. Inventories not intended for sale to
customers or to clients may be held in any premises an organization uses. Stock ties up cash
and, if uncontrolled, it will be impossible to know the actual level of stocks and therefore
impossible to control them.
While the reasons for holding stock were covered earlier, most manufacturing organizations
usually divide their "goods for sale" inventory into:
1. Raw materials - materials and components scheduled for use in making a product
2. Work in process (WIP) - materials and components that have begun their transformation to finished goods
3. Finished goods - goods ready for sale to customers
For example:
Manufacturing
A canned food manufacturer's materials inventory includes the ingredients to form the foods
to be canned, empty cans and their lids (or coils of steel or aluminum for constructing those
components), labels, and anything else (solder, glue, ...) that will form part of a finished can.
The firm's work in process includes those materials from the time of release to the work floor
until they become complete and ready for sale to wholesale or retail customers. This may be
vats of prepared food, filled cans not yet labeled or sub-assemblies of food components. It
may also include finished cans that are not yet packaged into cartons or pallets. Its finished
good inventory consists of all the filled and labeled cans of food in its warehouse that it has
manufactured and wishes to sell to food distributors (wholesalers), to grocery stores
(retailers), and even perhaps to consumers through arrangements like factory stores and
outlet centers.
Examples of case studies are very revealing, and consistently show that the improvement of
inventory management has two parts: the capability of the organisation to manage inventory,
and the way in which it chooses to do so. For example, a company may wish to install a
complex inventory system, but unless there is a good understanding of the role of inventory
and its parameters, and an effective business process to support that, the system cannot bring
the necessary benefits to the organisation in isolation.
Typical Inventory Management techniques include Pareto Curve ABC Classification[2] and
Economic Order Quantity Management. A more sophisticated method takes these two
techniques further, combining certain aspects of each to create The K Curve Methodology[3].
A case study of k-curve[4] benefits to one company shows a successful implementation.
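The Economic Order Quantity technique mentioned above can be sketched briefly. The classic EOQ formula is Q* = sqrt(2DS/H); the demand, ordering-cost and holding-cost figures below are hypothetical.

```python
# Sketch of the classic Economic Order Quantity (EOQ) formula:
# Q* = sqrt(2 * D * S / H), which minimizes the sum of annual ordering
# cost (D/Q * S) and annual holding cost (Q/2 * H). Figures are hypothetical.

from math import sqrt

def eoq(annual_demand, cost_per_order, holding_cost_per_unit_year):
    """Order quantity minimizing total ordering-plus-holding cost."""
    return sqrt(2 * annual_demand * cost_per_order / holding_cost_per_unit_year)

# 10,000 units/year demand, $50 per order, $4 per unit-year holding cost:
print(round(eoq(10_000, 50, 4)))  # sqrt(2*10000*50/4) = sqrt(250000) = 500
```

In an ABC scheme, a quantitative rule like this would typically be reserved for the high-value A items, with simpler reorder rules for B and C items.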
Unnecessary inventory adds enormously to the working capital tied up in the business, as
well as the complexity of the supply chain. Reduction and elimination of these inventory
'wait' states is a key concept in Lean[5]. Too big an inventory reduction too quickly can cause
a business to be anorexic. There are well-proven processes and techniques to assist in
inventory planning and strategy, both at the business overview and part number level. Many
of the big MRP and ERP systems do not offer the necessary inventory planning tools within
their integrated planning applications.
Applications
The technique of inventory proportionality is most appropriate for inventories that remain
unseen by the consumer. As opposed to "keep full" systems where a retail consumer would
like to see full shelves of the product they are buying so as not to think they are buying
something old, unwanted or stale; and differentiated from the "trigger point" systems where
product is reordered when it hits a certain level; inventory proportionality is used effectively
by just-in-time manufacturing processes and retail applications where the product is hidden
from view.
One early example of inventory proportionality used in a retail application in the United
States is for motor fuel. Motor fuel (e.g. gasoline) is generally stored in underground storage
tanks. The motorists do not know whether they are buying gasoline off the top or bottom of
the tank, nor need they care. Additionally, these storage tanks have a maximum capacity and
cannot be overfilled. Finally, the product is expensive. Inventory proportionality is used to
balance the inventories of the different grades of motor fuel, each stored in dedicated tanks,
in proportion to the sales of each grade. Excess inventory is not seen or valued by the
consumer, so it is simply cash sunk (literally) into the ground. Inventory proportionality
minimizes the amount of excess inventory carried in underground storage tanks. This
application for motor fuel was first developed and implemented by Petrolsoft Corporation in
1990 for Chevron Products Company. Most major oil companies use such systems today.[6]
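The proportionality idea in the motor-fuel example can be sketched as a simple allocation rule. The grades, sales figures, and tank capacity below are my own hypothetical illustration, not data from the Chevron implementation.

```python
# Minimal sketch of inventory proportionality: hold each fuel grade's stock
# in proportion to that grade's share of sales, subject to total tank
# capacity. Grades and figures are hypothetical.

def proportional_targets(sales_by_grade, total_capacity):
    """Target inventory per grade, proportional to each grade's sales share."""
    total_sales = sum(sales_by_grade.values())
    return {grade: total_capacity * sales / total_sales
            for grade, sales in sales_by_grade.items()}

weekly_sales = {"regular": 30_000, "midgrade": 6_000, "premium": 12_000}
targets = proportional_targets(weekly_sales, total_capacity=40_000)
print(targets)  # regular 25000, midgrade 5000, premium 10000 gallons
```

Because excess stock in any one tank is invisible to the customer, skewing the split toward the faster-selling grades minimizes the cash sunk into slow-moving inventory, which is the point made above.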
Roots
The use of inventory proportionality in the United States is thought to have been inspired by
Japanese just-in-time parts inventory management made famous by Toyota Motors in the
1980s.[3]
The benefit of the standard inventory costing formulae is that the first absorbs all overheads of production and raw
material costs into a value of inventory for reporting. The second formula then creates the
new start point for the next period and gives a figure to be subtracted from the sales price to
determine some form of sales-margin figure.
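The two formulae referred to above are, presumably, the standard inventory accounting identities: costs absorbed into goods available for sale, and ending inventory carried forward while cost of goods sold is subtracted from sales. A worked sketch with hypothetical figures:

```python
# Worked sketch of the standard inventory accounting identities.
# All figures are hypothetical.

beginning_inventory = 120_000        # value carried in from the prior period
purchases_and_production = 480_000   # raw materials plus absorbed production overheads
ending_inventory = 150_000           # value counted at the period end

# First identity: absorb costs into the value of goods available for sale.
goods_available = beginning_inventory + purchases_and_production

# Second identity: ending inventory becomes next period's start point, and
# cost of goods sold (COGS) is what was consumed out of goods available.
cost_of_goods_sold = goods_available - ending_inventory

# COGS subtracted from sales gives a sales-margin figure.
sales_revenue = 700_000
gross_margin = sales_revenue - cost_of_goods_sold
print(cost_of_goods_sold, gross_margin)  # 450000 250000
```

The ending-inventory figure, of course, depends on the valuation assumptions (FIFO, LIFO, average cost, etc.) discussed later in this section.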
Manufacturing management is more interested in inventory turnover ratio or average days to
sell inventory since it tells them something about relative inventory levels.
Inventory turnover ratio (also known as inventory turns) = Cost of Goods Sold /
Average Inventory = Cost of Goods Sold / ((Beginning Inventory +
Ending Inventory) / 2)
This ratio estimates how many times the inventory turns over a year. This number tells how
much cash/goods are tied up waiting for the process and is a critical measure of process
reliability and effectiveness. So a factory with two inventory turns has six months stock on
hand, which is generally not a good figure (depending upon the industry), whereas a factory
that moves from six turns to twelve turns has probably improved effectiveness by 100%. This
improvement will have some negative results in the financial reporting, since the 'value' now
stored in the factory as inventory is reduced.
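The turnover arithmetic above can be sketched directly; the cost and inventory figures are hypothetical.

```python
# Sketch of the inventory turnover calculation from the text,
# with hypothetical figures.

def inventory_turns(cogs, beginning_inventory, ending_inventory):
    """Inventory turnover = cost of goods sold / average inventory."""
    average_inventory = (beginning_inventory + ending_inventory) / 2
    return cogs / average_inventory

def months_on_hand(turns):
    """A factory with N turns per year holds roughly 12/N months of stock."""
    return 12 / turns

turns = inventory_turns(cogs=1_200_000,
                        beginning_inventory=550_000,
                        ending_inventory=650_000)
print(turns, months_on_hand(turns))  # 2.0 turns -> 6.0 months of stock
```

This reproduces the example in the text: two turns a year means about six months of stock on hand, while doubling to four turns halves it.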
While these accounting measures of inventory are very useful because of their simplicity,
they are also fraught with the danger of their own assumptions. There are, in fact, so many
things that can vary hidden under this appearance of simplicity that a variety of 'adjusting'
assumptions may be used. These include:
Specific Identification
Weighted Average Cost
Moving-Average Cost
FIFO and LIFO accounting
Inventory Turn is a financial accounting tool for evaluating inventory and it is not necessarily
a management tool. Inventory management should be forward looking. The methodology
applied is based on historical cost of goods sold. The ratio may not be able to reflect the
usability of future production demand, as well as customer demand.
Business models, including Just in Time (JIT) Inventory, Vendor Managed Inventory (VMI)
and Customer Managed Inventory (CMI), attempt to minimize on-hand inventory and
increase inventory turns. VMI and CMI have gained considerable attention due to the success
of third-party vendors who offer added expertise and knowledge that organizations may not
possess.
Work in process also carries a value on the balance sheet, but the valuation is a
management decision since there is no market for the partially finished product. This
somewhat arbitrary 'valuation' of WIP, combined with the allocation of overheads to it,
has led to some unintended and undesirable results.
Financial accounting
An organization's inventory can appear a mixed blessing, since it counts as an asset on the
balance sheet, but it also ties up money that could serve for other purposes and requires
additional expense for its protection. Inventory may also cause significant tax expenses,
depending on particular countries' laws regarding depreciation of inventory, as in Thor Power
Tool Company v. Commissioner.
Inventory appears as a current asset on an organization's balance sheet because the
organization can, in principle, turn it into cash by selling it. Some organizations hold larger
inventories than their operations require in order to inflate their apparent asset value and their
perceived profitability.
In addition to the money tied up by acquiring inventory, inventory also brings associated
costs: for warehouse space, for utilities, for insurance, and for staff to handle and protect
it, as well as losses from fire and other disasters, obsolescence, shrinkage (theft and
errors), and others. Such holding costs can mount up: between a third and a half of the
inventory's acquisition value per year.
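To make the rule of thumb above concrete, here is the arithmetic applied to a hypothetical stock position.

```python
# The "third to a half of acquisition value per year" holding-cost rule of
# thumb from the text, applied to a hypothetical inventory position.

inventory_value = 500_000            # acquisition value of stock on hand

low_estimate = inventory_value / 3   # one third per year
high_estimate = inventory_value / 2  # one half per year

print(f"Annual holding cost: {low_estimate:,.0f} to {high_estimate:,.0f}")
```

Even at the low end, carrying half a million of stock could consume well over $150,000 a year, which is why the working-capital argument for lean inventories is so strong.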
Businesses that stock too little inventory cannot take advantage of large orders from
customers if they cannot deliver. The conflicting objectives of cost control and customer
service often pit an organization's financial and operating managers against its sales and
marketing departments. Salespeople, in particular, often receive sales-commission payments,
so unavailable goods may reduce their potential personal income. This conflict can be
minimised by reducing production time to being near or less than customers' expected
delivery time. This effort, known as "Lean production", will significantly reduce working
capital tied up in inventory and reduce manufacturing costs (See the Toyota Production
System).
Throughput accounting, by contrast, recognizes only one class of variable
costs: the truly variable costs, like materials and components, which vary directly with the
quantity produced.
Finished goods inventories remain balance-sheet assets, but labor-efficiency ratios no longer
evaluate managers and workers. Instead of an incentive to reduce labor cost, throughput
accounting focuses attention on the relationships between throughput (revenue or income) on
one hand and controllable operating expenses and changes in inventory on the other. Those
relationships direct attention to the constraints or bottlenecks that prevent the system from
producing more throughput, rather than to people - who have little or no control over their
situations.
National accounts
Inventories also play an important role in national accounts and the analysis of the business
cycle. Some short-term macroeconomic fluctuations are attributed to the inventory cycle.
Distressed inventory
Also known as distressed or expired stock, distressed inventory is inventory whose potential
to be sold at a normal cost has passed or will soon pass. In certain industries it could also
mean that the stock is or will soon be impossible to sell. Examples of distressed inventory
include products that have reached their expiry date, or have reached a date in advance of
expiry at which the planned market will no longer purchase them (e.g. 3 months left to
expiry), clothing that is defective or out of fashion, and old newspapers or magazines. It also
includes computer or consumer-electronic equipment that is obsolete or discontinued and
whose manufacturer is unable to support it. One current example of distressed inventory is
the VHS format.[8]
In 2001, Cisco wrote off inventory worth US$2.25 billion due to duplicate orders, one of the biggest inventory write-offs in business history.[9]
Inventory credit
Inventory credit refers to the use of stock, or inventory, as collateral to raise finance. Where
banks may be reluctant to accept traditional collateral, for example in developing countries
where land title may be lacking, inventory credit is a potentially important way of
overcoming financing constraints. This is not a new concept; archaeological evidence
suggests that it was practiced in Ancient Rome. Obtaining finance against stocks of a wide
range of products held in a bonded warehouse is common in much of the world. It is, for
example, used with Parmesan cheese in Italy.[10] Inventory credit on the basis of stored
agricultural produce is widely used in Latin American countries and in some Asian countries.
[11]
A precondition for such credit is that banks must be confident that the stored product will
be available if they need to call on the collateral; this implies the existence of a reliable
network of certified warehouses. Banks also face problems in valuing the inventory. The
possibility of sudden falls in commodity prices means that they are usually reluctant to lend
more than about 60% of the value of the inventory at the time of the loan.
Managing accounts receivable involves several components: policies, options, tracking, measurement, and outsourcing.
The foundation behind accounts receivable management is your policies and procedures for sales.
For example, do you have a credit policy?
When and how do you evaluate a customer for credit?
If you look at past payment histories, you should be able to ascertain who should get credit
and who shouldn't.
Additionally, you need to establish sales terms.
For example, is it beneficial to offer discounts to speed up cash collections?
What is the industry standard for sales terms?
Several questions like these have to be answered in building the foundation for managing
accounts receivable.
A system must be in place to track accounts receivables. This will include balance forwards,
listing of all open invoices, and generation of monthly statements to customers.
An aging of receivables will be used to collect overdue accounts. You must act quickly: start by making phone calls, followed by letters to upper-level managers at the customer. Try to negotiate settlement payments, such as installments or asset donations. If your collection efforts fail, you may want to use a collection agency.
Also remember that the collection process is the art of knowing the customer. A
psychological understanding of the customer gives you insights into what buttons to push in
collecting the account. One of the biggest mistakes made in the collection process is a "sticks
only" approach. For some customers, using a carrot can work wonders in collecting the
overdue account. For example, in one case the company mailed a set of football tickets to a
customer with a friendly note and within weeks, they received full payment of the
outstanding account.
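The tracking step described above, listing open invoices and aging them into overdue buckets, can be sketched as follows. The bucket boundaries and the invoice records are illustrative assumptions, not data from the text:

```python
from datetime import date

# Minimal receivables-aging sketch: each open invoice is assumed to be a
# (customer, due_date, amount) tuple; bucket labels are illustrative.
def age_receivables(invoices, as_of):
    buckets = {"current": 0.0, "1-30": 0.0, "31-60": 0.0, "61-90": 0.0, "90+": 0.0}
    for _customer, due_date, amount in invoices:
        overdue = (as_of - due_date).days
        if overdue <= 0:
            buckets["current"] += amount
        elif overdue <= 30:
            buckets["1-30"] += amount
        elif overdue <= 60:
            buckets["31-60"] += amount
        elif overdue <= 90:
            buckets["61-90"] += amount
        else:
            buckets["90+"] += amount
    return buckets

open_invoices = [
    ("Acme", date(2024, 3, 1), 5000.0),   # 45 days overdue on 15 Apr
    ("Bolt", date(2024, 4, 10), 2000.0),  # 5 days overdue
    ("Cone", date(2024, 5, 1), 1500.0),   # not yet due
]
report = age_receivables(open_invoices, as_of=date(2024, 4, 15))
```

The report totals by bucket feed directly into the escalation sequence above (calls first, then letters, then an agency for the oldest buckets).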
MEASUREMENT
Measurement is another component of accounts receivable management. Traditional
ratios, such as turnover, measure how many times you were able to convert receivables
into cash.
Example: Monthly sales were $50,000, the beginning monthly balance for receivables was $70,000, and the ending monthly balance was $90,000. The turnover ratio is
0.625 ($50,000 / (($70,000 + $90,000)/2)). Annual turnover is 0.625 x 360/30, or 7.5 times. If
you divide 360 (the banker's year) by 7.5, you get 48 days on average to collect your accounts
receivable. You can also measure your investment in receivables. This calculation is based
on the number of days it takes you to collect receivables and the amount of credit sales.
Example: Annual credit sales are $ 100,000. Your invoice terms are net 30 days. On average,
most accounts are 13 days past due. Your investment in accounts receivable is:
(30 + 13) / 365 x $ 100,000 or $ 11,781.
Example: Average monthly sales are $ 10,000. On average, accounts receivable are paid 60
days after the sales date. The product costs are 50% of sales and inventory-carrying costs are
10% of sales. Your investment in accounts receivable is:
2 months x $10,000 = $20,000 of sales; $20,000 x .60 (product costs of 50% plus carrying costs of 10%) = $12,000.
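The worked examples above can be recomputed in a few lines; this is an illustrative sketch using the figures from the text (a 360-day banker's year is assumed for the turnover example, as in the text):

```python
# Turnover example: monthly sales and average receivables balance.
monthly_sales = 50_000
avg_receivables = (70_000 + 90_000) / 2            # average of beginning and ending balances
monthly_turnover = monthly_sales / avg_receivables  # 0.625
annual_turnover = monthly_turnover * 360 / 30       # 7.5 times per year
days_to_collect = 360 / annual_turnover             # 48 days on average

# Investment in receivables: terms of net 30, paid 13 days late on average.
investment = (30 + 13) / 365 * 100_000              # about $11,781

# Third example: 60 days of sales outstanding at $10,000/month, with
# product costs of 50% and carrying costs of 10% tied up in each dollar sold.
investment_at_cost = 2 * 10_000 * (0.50 + 0.10)     # $12,000
```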
Measurements may need to be modified to account for wide fluctuations within the sales
cycle. The use of weights can help ensure comparable measurements.
Example: Weighted Average Days to Pay = Sum of ((Date Paid - Due Date) x Amount
Paid) / Total Payments
Example: Best Possible Days Outstanding = (Current A/R x # of Days in Period) / Credit
Sales for Period
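Both weighted measures defined above can be computed directly from payment records; a sketch in which the payment data are illustrative assumptions:

```python
from datetime import date

def weighted_avg_days_to_pay(payments):
    """Sum((date paid - due date) x amount paid) / total payments,
    where payments are (paid_date, due_date, amount) tuples."""
    total = sum(amount for _, _, amount in payments)
    weighted = sum((paid - due).days * amount for paid, due, amount in payments)
    return weighted / total

def best_possible_days_outstanding(current_ar, days_in_period, credit_sales):
    """(Current A/R x number of days in period) / credit sales for period."""
    return current_ar * days_in_period / credit_sales

payments = [
    (date(2024, 4, 20), date(2024, 4, 10), 1000.0),  # paid 10 days late
    (date(2024, 4, 12), date(2024, 4, 10), 3000.0),  # paid 2 days late
]
wadtp = weighted_avg_days_to_pay(payments)           # (10*1000 + 2*3000)/4000 = 4 days
bpdo = best_possible_days_outstanding(20_000, 30, 100_000)  # 6 days
```

Weighting by amount paid keeps one large slow payer from being hidden by many small prompt ones.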
Receivables management also involves the use of specialists. After all, you need to spend most
of your time trying to lower your losses, not trying to collect overdue accounts. A wide
range of specialists can help:
- Credit Bureau services to review and approve new customers.
- Deduction and collection agencies
- Complete management of billings and collections
Definition
CCC = Inventory conversion period + Receivables conversion period - Payables conversion period
where:
Inventory conversion period = Avg. Inventory / (COGS / 365)
Receivables conversion period = Avg. Accounts Receivable / (Credit Sales / 365)
Payables conversion period = Avg. Accounts Payable / (COGS / 365)
Derivation
Cashflows insufficient. The term "cash conversion cycle" refers to the timespan between a
firm's disbursing and collecting cash. However, the CCC cannot be directly observed in
cashflows, because these are also influenced by investment and financing activities; it must
be derived from Statement of Financial Position data associated with the firm's operations.
Equation describes retailer. Although the term "cash conversion cycle" technically applies
to a firm in any industry, the equation is generically formulated to apply specifically to a
retailer. Since a retailer's operations consist in buying and selling inventory, the equation
models the time between
(1) disbursing cash to satisfy the accounts payable created by purchase of a
unit of inventory, and
(2) collecting cash to satisfy the accounts receivable generated by its
sale.
Equation describes a firm that buys & sells on account. Also, the equation is written to
accommodate a firm that buys and sells on account. For a cash-only firm, the equation would
only need data from sales operations (e.g. changes in inventory), because disbursing cash
would be directly measurable as purchase of inventory, and collecting cash would be directly
measurable as sale of inventory. However, no such 1:1 correspondence exists for a firm that
buys and sells on account: increases and decreases in inventory do not occasion cashflows
but accounting vehicles (payables and receivables, respectively); increases and decreases in
cash will remove these accounting vehicles (receivables and payables, respectively) from the
books. Thus, the CCC must be calculated by tracing a change in cash through its effect upon
receivables, inventory, payables, and finally back to cash; hence the term cash conversion
cycle, and the observation that these four accounts "articulate" with one another.
Label   Transaction
A       Firm owes $X cash (debt) to suppliers (inventory purchased on account).
B       Firm is owed $Y cash (credit) from customers (inventory sold on account).
C       Firm removes its debts to its suppliers (pays the $X owed).
D       Firm removes its credit from its customers (collects the $Y owed).
Taking these four transactions in pairs, analysts draw attention to five important intervals,
referred to as conversion cycles (or conversion periods):
Knowledge of any three of these conversion cycles permits derivation of the fourth (leaving
aside the operating cycle, which is just the sum of the inventory conversion period and the
receivables conversion period.)
Hence,
CCC (in days) = interval {C, D}
Inventory conversion period = interval {A, B}
Receivables conversion period = interval {B, D}
Payables conversion period = interval {A, C}
In calculating each of these three constituent Conversion Cycles, we use the equation TIME
=LEVEL/RATE (since each interval roughly equals the TIME needed for its LEVEL to be
achieved at its corresponding RATE).
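Applying TIME = LEVEL / RATE to each constituent period gives a direct computation of the CCC. The balance-sheet figures below are illustrative assumptions, not data from the text:

```python
# Cash conversion cycle from average balance-sheet levels and annual rates,
# using TIME = LEVEL / RATE for each constituent conversion period.
def cash_conversion_cycle(avg_inventory, avg_receivables, avg_payables,
                          cogs, credit_sales, days=365):
    inventory_period = avg_inventory / (cogs / days)        # days of inventory
    receivables_period = avg_receivables / (credit_sales / days)  # days to collect
    payables_period = avg_payables / (cogs / days)          # days of supplier credit
    return inventory_period + receivables_period - payables_period

# Hypothetical retailer: 60 days of inventory + 20 days of receivables
# - 30 days of payables = a 50-day cash conversion cycle.
ccc = cash_conversion_cycle(
    avg_inventory=60_000, avg_receivables=40_000, avg_payables=30_000,
    cogs=365_000, credit_sales=730_000)
```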
[Diagram: cash flows into inventory, then through accounts receivable and accounts payable, and back to cash over the cash conversion cycle.]
Question bank
Management paper-II
Note: This paper contains fifty (50) multiple-choice questions, each carrying two
(2) marks. Attempt all of them.
1. The demand curve of a monopolistically competitive firm is;
a. Highly though not perfectly elastic
b. Perfectly-inelastic
c. Kinky demand curve
d. Demand curve will be a straight line
2. A decision maker has to remember the proverb, "A bird in the hand is worth
two in the bush", while he examines:
a. Opportunity cost principle
b. Discounting and compounding principle
c. Marginal or incremental principle
d. Equi-marginal principle
3. Market with one buyer and one seller is called:
a. Monopsony
b. Bilateral monopoly
c. Monopoly
d. Duopoly
4. Cardinal measure of utility is required in:
a. Utility theory
b. Indifference curve analysis
c. Revealed preference
d. Inferior goods
5. In case of giffen goods, price effect is:
a. Negative
b. Zero
c. Positive
d. -1
6. Which of the following theories states that employees make comparisons
of their efforts and rewards with those of others in similar work
situations?
a. Vroom's Expectancy theory
b. Adams' Equity theory
c. Alderfer's ERG theory
d. Herzberg's Two-Factor Theory
Social Groups
A social group consists of two or more people who interact with one another and who
recognize themselves as a distinct social unit. The definition is simple enough, but it has
significant implications. Frequent interaction leads people to share values and beliefs. This
similarity and the interaction cause them to identify with one another. Identification and
attachment, in turn, stimulate more frequent and intense interaction. Each group maintains
solidarity with all other groups and other types of social systems.
Groups are among the most stable and enduring of social units. They are important both to
their members and to the society at large. Through encouraging regular and predictable
behavior, groups form the foundation upon which society rests. Thus, a family, a village, a
political party, and a trade union are all social groups. These, it should be noted, are different from
social classes, status groups, or crowds, which not only lack structure but whose members are
less aware or even unaware of the existence of the group. These have been called quasi-groups or groupings. Nevertheless, the distinction between social groups and quasi-groups is
fluid and variable, since quasi-groups very often give rise to social groups, as, for example,
social classes give rise to political parties.
Primary Groups
If all groups are important to their members and to society, some groups are more important
than others. Early in the twentieth century, Charles H. Cooley gave the name primary
groups to those groups that, he said, are characterized by intimate face-to-face association and
are fundamental in the development and continued adjustment of their members. He
identified three basic primary groups: the family, the child's play group, and the
neighborhood or community among adults. These groups, he said, are almost universal in all
societies; they give people their earliest and most complete experiences of social unity;
they are instrumental in the development of social life; and they promote the integration
of their members into the larger society. Since Cooley wrote, over 65 years ago, life in the
United States has become much more urban, complex, and impersonal, and the family, play
group, and neighborhood have become less dominant features of the social order.
Secondary groups, characterized by anonymous, impersonal, and instrumental relationships,
have become much more numerous. People move frequently, often from one section of the
country to another, breaking away from established relationships, which promotes widespread
loneliness. Young people, particularly, turn to drugs, seek communal living groups, and adopt
deviant lifestyles in attempts to find meaningful primary-group relationships. The social
context has changed so much that primary-group relationships today are not as simple as
they were in Cooley's time.
Secondary Groups
An understanding of the modern industrial society requires an understanding of the
secondary groups. The social groups other than those of primary groups may be termed as
secondary groups. They are a residual category. They are often called special interest
groups. MacIver and Page refer to them as great associations. They are of the opinion that
secondary groups have become almost inevitable today. Their appearance is mainly due to
the growing cultural complexity. Primary groups are found predominantly in societies where
life is relatively simple. With the expansion in population and territory of a society, however,
interests become diversified and other types of relationships, which can be called secondary
or impersonal, become necessary. Interests become differentiated. The services of experts are
required. The new range of interests demands a complex organization. Especially selected
persons act on behalf of all, and hence arises a hierarchy of officials called a bureaucracy.
These features characterize the rise of the modern state, the great corporation, the factory, the
labor union, a university or a nationwide political party, and so on. These are secondary
groups. Ogburn and Nimkoff define secondary groups as groups which provide experience
lacking in intimacy. Frank D. Watson writes that the secondary group is larger and more
formal, is specialized and direct in its contacts, and relies more for unity and continuance
upon the stability of its social organization than does the primary group.
Characteristics of secondary group:
Dominance of secondary relations: Secondary groups are characterized by indirect,
impersonal, contractual and non-inclusive relations. Relations are indirect because secondary
groups are bigger in size and members may not stay together. Relations are contractual in the
sense that they are oriented towards certain interests.
Largeness of the size: Secondary groups are relatively larger in size. City, nation, political
parties, trade unions and corporations, international associations are bigger in size. They may
have thousands and lakhs of members. There may not be any limit to the membership in the
case of some secondary groups.
Membership: Membership in the case of secondary groups is mainly voluntary. Individuals
are at liberty to join or to go away from the groups. However there are some secondary
groups like the state whose membership is almost involuntary.
No physical basis: Secondary groups are not characterized by physical proximity. Many
secondary groups are not limited to any definite area. There are some secondary groups like
the Rotary Club and Lions Club which are international in character. The members of such
groups are scattered over a vast area.
Specific ends or interests: Secondary groups are formed for the realization of some specific
interests or ends. They are called special interest groups. Members are interested in the
groups because they have specific ends to aim at.
Indirect communication: Contacts and communications in the case of secondary groups are
mostly indirect. Mass media of communication such as radio, telephone, television,
newspapers, movies, magazines and post and telegraph are resorted to by the members to
have communication.
Communication may not be quick and effective even. Impersonal nature of social
relationships in secondary groups is both the cause and the effect of indirect communication.
Nature of group control: Informal means of social control are less effective in regulating
the relations of members. Moral control is only secondary. Formal means of social control
such as law, legislation, police, courts etc. are made use of to control the behavior of members.
The behavior of the people is largely influenced and controlled by public opinion,
propaganda, rule of law and political ideologies.
Group structure: The secondary group has a formal structure. A formal authority is set up
with designated powers and a clear-cut division of labor in which the function of each is
specified in relation to the function of all. Secondary groups are mostly organized groups.
Different statuses and roles that the members assume are specified. Distinctions based on
caste, colour, religion, class, language etc. are less rigid and there is greater tolerance towards
other people or groups.
Limited influence on personality: Secondary groups are specialized in character. People's
involvement in them is of limited significance. Members' attachment to them is also
very much limited. Further, people spend more of their time in primary groups than in
secondary groups. Hence secondary groups have a very limited influence on the personality of
their members.
Reference Groups
According to Merton, reference groups are those groups which are the referring points of
individuals, towards which they are oriented and which influence their opinions, tendencies and
behaviour. The individual is surrounded by countless reference groups. Both membership
(inner) groups and non-membership (outer) groups may be reference groups.
What factors are considered while preparing PERT chart?
What is the status of implementation of WTO guidelines in India?
Discuss the measures taken by government for the promotion of small and tiny
enterprises in the wake of globalization?
What is corporate governance?
HRM: Explain some typical on-the-job training techniques
Discuss future of trade unions in India
Who are called rate busters and Christers?
Rate buster: An employee who is highly productive and exceeds the formally
agreed rate of output for the particular task. Whilst this is advantageous for
management, rate-busters are usually disliked by their colleagues because their
action provides managers with the excuse to raise the rate of output for all the
other employees. Typically, there is informal social regulation of work in most
workgroups where rate-busting is deemed antisocial behaviour and potential
rate-busters are brought into line by their work colleagues through a mixture of
persuasion and coercion.
Define Selection.
What is potential assessment?
The financial goal of a firm should be to maximize profit and wealth. Do you
agree with the statement? Comment
Explain briefly:
Equity shareholders provide risk capital
Weighted average cost of capital of the firm
How is merger evaluated as a capital budgeting proposal?
State the method of risk analysis with reference to capital budgeting decision
State the reason for merger
"Trading on equity is a double-edged weapon." Elucidate.
"Financial statements reflect a combination of recorded facts, accounting
conventions and personal judgement." Explain.
Discuss arbitrage pricing theory for valuation of securities. How is it different
from capital asset pricing model?
What is cash flow statement? What purpose does it serve?
What is hedging? Discuss its utility.
What is working capital? How would you assess the working capital
requirement of a firm?
Explain Arbitrage Pricing Theory with reference to capital market
Discuss Modigliani-Miller approach for capital structure
Discuss the Walter model and Gordon model of dividend policy and valuation.
Discuss the Black-Scholes option valuation model
Discuss tools of financial analysis. Explain its role in interpretation and
signaling of corporate health.
Explain the concept and measurement of risk and return of single asset and
a portfolio.
Sensitivity analysis as a tool of risk-analysis is superior to simulation
technique of risk-analysis for capital budgeting decision. Comment.
What is the relationship between an investor's required rate of return and the
value of a security? Explain with an example
How are the values of perpetual bonds and preference shares determined?
Bring out the similarity of this process with that used to value a zero growth
share
Bring out the difference between a common-size balance sheet and
comparative balance sheet.
Discuss the process for calculating the cost of retained earnings. Also bring
out the theoretical and practical difficulties associated with this calculation.
Explain the relationship between capital structure and value of the firm.
Explain the net operating income approach
Briefly describe the major types of Financial Management Decisions that a
firm takes.
Explain the computation of operating cycle for a manufacturing unit.
What is meant by technical analysis with reference to valuation of securities?
What is trading on equity?
periods. Both the stocks are currently selling for Rs. 50 per share. The rupee
return (dividend plus price) of these stocks for the next year would be as
follows:
Economic condition   Probability   Return of P Ltd. stock   Return of Q Ltd. stock
High growth          0.28          55                       75
Low growth           0.32          50                       65
Stagnation           0.22          60                       50
Recession            0.18          70                       40
Statistics: The weekly wages of 2000 workers in a factory are normally distributed with a
mean of Rs. 200 and a standard deviation of Rs. 20. Estimate the lowest weekly
wage of the 200 highest-paid workers and the highest weekly wage of the 200
lowest-paid workers (given Φ(1.28) = 0.90)
Differentiate between correlation and regression analysis and give their
properties
Explain the method of testing the significance of correlation coefficient
What factors determine market structure?
Explain a few difficulties in the estimation of national income
What do you understand by trait theory of leadership?
"Selection is a process of rejection." How?
What is the role of competence mapping in performance management?
Define modern concept of marketing
Explain product mix
Explain the graphical method of solving an LPP involving two variables
Explain the terms lead time, re-order point, stock-out cost and set-up cost in
inventory management
What is the significance of regression analysis? Why do we have two regression
equations? Derive the correlation coefficient from the two regression coefficients.
Write a short note on management information system
Describe the strategic management process
What is meant by accounting ratios? Distinguish between liquidity and leverage
ratios.
Discuss the concept of operating profit. How is it different from net profit?
What is the dividend growth model approach to the cost of equity? Discuss its
rationale.
Discuss the basic financial derivatives
What is the funds flow statement based on working capital concept? What
purpose does it serve?
Discuss the methods for ranking investment proposals. What are the methods
commonly used for incorporating risk in capital budgeting decisions?
What is Balanced Score Card?
Write the major functions of a Trade Union.
State the derivation of the cost of debt, adopting both the book-value and
market-value approaches
Define functions of financial management
Distinguish between marketing information system and marketing research
List out different distribution channels
Describe briefly the basic steps to be followed in developing a PERT/CPM
programme. How does PERT differ from CPM?
What is rank correlation? How is it measured? Why is rank correlation used?
Define a simple random sample. Describe briefly some practical methods of
drawing a random sample from a finite population.
Explain generic strategies. How can these strategies be used to gain competitive
advantage?
Discuss the basic features of small enterprises
Identify the ethical issues involved in gender-related problems in organisations.
Distinguish between complete enumeration and sample survey. What are the
advantages of sampling over complete enumeration? Describe in brief different
sampling methods.
Implementing empowerment calls for an organization-wide revolution and, if
pursued religiously, can deliver unparalleled results. Elaborate on this statement and
reason out your answer.
What are the prerequisites for implementing an empowerment programme in an
organization?
Sample   Size   Sample mean   Sum of squares of deviations from the mean
I        10     15            90
II       12     14            108
Test whether the samples come from the same normal population at the 5% level of
significance.
{given:}
What is a Data Flow Diagram (DFD) and a Data Dictionary? Draw a DFD for
payroll processing of an organization
Explain the BCG matrix. Bring out its usefulness in corporate level strategy
formulation.
Elucidate the characteristics of an Entrepreneur
What is Ecological Consciousness? Give illustrations
What are the main features of scientific management?
What is the importance of time and motion study in scientific management?
What are the main benefits of specialization?
What are the main features of the Industrial Age?
How will the business world change in the Information Age?
State the law of variable proportions
Discuss the relevance of the Need Hierarchy Theory of Motivation in developing
economies
How is 360-degree appraisal an improved technique of performance appraisal?
In what ways have the functions of the Human Resource Manager changed in the
post-globalisation scenario?
What is customer orientation in marketing
Define branding
Define and explain the following terms
Optimum solution, feasible solution, unrestricted variables
Derive the EOQ in the inventory control method
Give the classical and frequency definitions of probability. What are the
objections raised against these definitions?
Write down the Normal Distribution function and the characteristics of the Normal
Probability Curve
Job enrichment
Job enrichment is an attempt to motivate employees by giving them the opportunity to use
the full range of their abilities. It is an idea developed by the American psychologist
Frederick Herzberg in the 1950s. It can be contrasted with job enlargement, which simply
increases the number of tasks without changing the challenge. As such, job enrichment has
been described as 'vertical loading' of a job, while job enlargement is 'horizontal loading'. An
enriched job should ideally contain:
Techniques
Job enrichment, as a managerial activity, includes a three-step technique:
1. Turn employees' effort into performance: provide job variety. This can be
done by job sharing or job rotation programmes.
2. Make sure the employee gets the right reward if they perform well.
3. Make sure the employee wants the reward. How to find out?
Ask them
Definition: Management Information Systems (MIS) is the term given to the discipline
focused on the integration of computer systems with the aims and objectives of an
organisation.
The development and management of information technology tools assists executives and the
general workforce in performing any tasks related to the processing of information. MIS and
business systems are especially useful in the collation of business data and the production of
reports to be used as tools for decision making.
Applications of MIS
Q1.
A 300-meter-long train passes a pole in 12 seconds. What is its speed in
kilometers per hour?
Q2.
Q3.
If the hands of a clock are in a perpendicular position, what will the time be
when they are in that position between 8 and 9 o'clock?
Q4.
Q5.
Q6.
Q7.
In the figure, CD is parallel to EF, AD=DF, CD=4 and DF=3, what is EF?