Personality traits, and the scales used to measure them, are numerous, and commonality among the traits and scales is often difficult to establish. To curb the confusion, many personality psychologists have attempted to develop a common taxonomy. A notable attempt at developing a common taxonomy is Cattell's Sixteen Personality Factor Model, based upon personality adjectives taken from the natural language. Although Cattell contributed much to the use of factor analysis in his pursuit of a common trait language, his theory has not been successfully replicated.
Science has always strived to develop a methodology through which questions are
answered using a common set of principles; psychology is no different. In an effort to
understand differing personalities in humans, Raymond Bernard Cattell maintained the
belief that a common taxonomy could be developed to explain such differences.
Cattell's scholarly training began at an early age when he was awarded admission to King's College at Cambridge University, where he graduated with a Bachelor of Science in Chemistry in 1926 (Lamb, 1997). According to personal accounts, Cattell's socialist attitudes, paired with interests developed after attending a Cyril Burt lecture in the same year, turned his attention to the study of psychology, which was then still regarded as a branch of philosophy (Horn, 2001). Following the completion of his doctoral studies in psychology in 1929, Cattell lectured at the University of Exeter where, in 1930, he made his first contribution to the science of psychology with the Cattell Intelligence Tests (Scales 1, 2, and 3). During fellowship studies in 1932, he turned his attention to the measurement of personality, focusing on the understanding of economic, social, and moral problems and how objective psychological research on moral decisions could aid such problems (Lamb, 1997).
Cattell's most renowned contribution to the science of psychology also pertains to the
study of personality. Cattell's 16 Personality Factor Model aims to construct a common
taxonomy of traits using a lexical approach to narrow natural language to standard
applicable personality adjectives. Though his theory has never been replicated, his
contributions to factor analysis have been exceedingly valuable to the study of
psychology.
Allport and Odbert took a major step toward such a taxonomy in 1936, systematizing thousands of personality attributes and building on roughly sixty years of work by their predecessors. In developing the taxonomy, they recognized four categories of adjectives: personality traits, temporary states, highly evaluative judgments of personal conduct and reputation, and physical characteristics. Personality traits are defined as "generalized and personalized determining tendencies--consistent and stable modes of an individual's adjustment to their environment" (John, 1999), as stated by Allport and Odbert in their research. Each personality-relevant adjective falls within one of these categories, aiding the identification of major personality categories and creating a primitive taxonomy that many psychologists and researchers would later elaborate and build upon. Norman (1967) divided the same limited set of adjectives into seven categories which, like Allport and Odbert's categories, were all mutually exclusive (John, 1999). Despite this, the work of both parties has been criticized for its ambiguous category boundaries, resulting in the general conviction that such boundaries should be abolished and that the work holds less significance than earlier judged.
Factor Analysis
Introduced by Pearson in 1901 and established by Spearman three years thereafter, factor analysis is a process by which large clusters and groupings of data are represented by a smaller number of factors. As variables are reduced to factors, relationships between the factors begin to define the relationships among the variables they represent (Goldberg & Digman, 1994). In the early stages of the method's development it saw little widespread use, due in large part to the immense amount of hand calculation required to determine accurate results, often spanning periods of several months. A mathematical foundation was later developed that aided the process and contributed to the later popularity of the methodology. Today, computing power makes factor analysis a simple process compared to the early 1900s, when only the most devoted researchers could use it to attain accurate results (Goldberg & Digman, 1994).
In performing a factor analysis, the single most important consideration is the selection of variables: domain matters, as a single domain yields the highest accuracy, and representative variables related to a single domain provide a more accurate outcome (Goldberg & Digman, 1994). Exploratory factor analysis governs a single domain, while confirmatory factor analysis, often less accurate and more difficult to calculate, governs several domains. In terms of variables, it is unlikely to see a factor analysis with fewer than 50 variables; in those situations, another statistical method may be a better, easier way to process the information. A standard sample size for such an analysis ranges between 500 and 1,000 participants (Goldberg & Digman, 1994).
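The core mechanics described above, reducing many correlated variables to a few underlying factors, can be sketched in a few lines. The example below is a minimal illustration using simulated questionnaire data and the common eigenvalue-greater-than-one (Kaiser) retention rule; the data, item structure, and sample size are invented for demonstration and are not Cattell's actual procedure or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate responses from 500 participants on 6 questionnaire items.
# Items 0-2 load on one hypothetical latent trait, items 3-5 on another.
n = 500
trait_a = rng.normal(size=n)
trait_b = rng.normal(size=n)
items = np.column_stack([
    trait_a + rng.normal(scale=0.5, size=n),
    trait_a + rng.normal(scale=0.5, size=n),
    trait_a + rng.normal(scale=0.5, size=n),
    trait_b + rng.normal(scale=0.5, size=n),
    trait_b + rng.normal(scale=0.5, size=n),
    trait_b + rng.normal(scale=0.5, size=n),
])

# Correlation matrix of the observed variables.
corr = np.corrcoef(items, rowvar=False)

# Eigendecomposition of the correlation matrix; retain factors whose
# eigenvalue exceeds 1 (the Kaiser criterion).
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
n_factors = int(np.sum(eigenvalues > 1.0))
print(n_factors)  # the six items reduce to two underlying factors
```

With well-separated latent traits, the two large eigenvalues stand far above the rest, which is why the simple retention rule recovers the intended two-factor structure here; real questionnaire data is rarely this clean.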
Cattell, another champion of the factor analysis methodology, believed that there are
three major sources of data when it comes to research concerning personality traits (Hall
& Lindzey, 1978). L-Data, also referred to as the life record, could include actual records
of a person's behavior in society such as court records. Cattell, however, gathered the
majority of L-Data from ratings given by peers. Self-rating questionnaires, also known as Q-Data, gathered data by allowing participants to assess their own behaviors. The third source of Cattell's data, the objective test, also known as T-Data, created a unique situation in which the subject is unaware of the personality trait being measured (Pervin & John, 2001).
With the intent of generality, Cattell's sample population was representative of several
age groups including adolescents, adults and children as well as representing several
countries including the U.S., Britain, Australia, New Zealand, France, Italy, Germany,
Mexico, Brazil, Argentina, India, and Japan (Hall & Lindzey, 1978).
Through factor analysis, Cattell identified what he referred to as surface and source traits.
Surface traits represent clusters of correlated variables and source traits represent the
underlying structure of the personality. Cattell considered source traits much more
important in understanding personality than surface traits (Hall & Lindzey, 1978). The
identified source traits became the primary basis for the 16 PF Model.
The 16 Personality Factor Model aims to measure personality based upon sixteen source
traits. Table 1 summarizes the surface traits as descriptors in relation to source traits
within a high and low range.
Critical Review
Although Cattell contributed much to personality research through the use of factor analysis, his theory is greatly criticized. The most apparent criticism of Cattell's 16 Personality Factor Model is that, despite many attempts, his theory has never been entirely replicated. In 1971, Howarth and Browne's factor analysis of the 16 Personality Factor Model found 10 factors that failed to relate to items present in the model. Howarth and Browne concluded "that the 16 PF does not measure the factors which it purports to measure at a primary level" (Eysenck & Eysenck, 1987). Studies conducted by Sells et al. (1970) and by Eysenck and Eysenck (1969) also failed to verify the 16 Personality Factor Model's primary level (Noller, Law, & Comrey, 1987). The reliability of Cattell's self-report data has also been questioned by researchers (Schuerger, Zarrella, & Hotz, 1989).
Cattell and his colleagues responded to the critics by maintaining that the studies failed to replicate the primary structure of the 16 Personality Factor Model because they were not conducted according to Cattell's methodology. However, using Cattell's exact methodology, Kline and Barrett (1983) were able to verify only four of the sixteen primary factors (Noller, Law, & Comrey, 1987). In response to Eysenck's criticism, Cattell himself published the results of his own factor analysis of the 16 Personality Factor Model, which also failed to verify the hypothesized primary factors (Eysenck, 1987).
Despite all the criticism of Cattell's hypothesis, his empirical findings led the way for the investigation and later discovery of the 'Big Five' dimensions of personality. Fiske (1949) and Tupes and Christal (1961) simplified Cattell's variables to five recurrent factors known as extraversion (or surgency), agreeableness, conscientiousness, emotional stability, and intellect (or openness) (Pervin & John, 1999).
Cattell's Sixteen Personality Factor Model has been greatly criticized by many researchers, mainly because of the inability to replicate it. More than likely, errors in computation occurred during Cattell's factor analyses, resulting in skewed data and thus the inability to replicate. Since computer programs for factor analysis did not exist in Cattell's time and calculations were done by hand, it is not surprising that some errors occurred. However, through investigation into the validity of Cattell's model, researchers did discover the Big Five factors, which have been monumental in understanding personality as we know it today.
Table 1. Primary Factors and Descriptors in Cattell's 16 Personality Factor Model (adapted from Conn & Rieke, 1994).

Warmth (A). Low range: impersonal, distant, cool, reserved, detached, formal, aloof (Schizothymia). High range: warm, outgoing, attentive to others, kindly, easy-going, participating, likes people (Affectothymia).

Reasoning (B). Low range: concrete thinking, lower general mental capacity, less intelligent, unable to handle abstract problems (Lower Scholastic Mental Capacity). High range: abstract-thinking, more intelligent, bright, higher general mental capacity, fast learner (Higher Scholastic Mental Capacity).

Emotional Stability (C). Low range: reactive emotionally, changeable, affected by feelings, emotionally less stable, easily upset (Lower Ego Strength). High range: emotionally stable, adaptive, mature, faces reality calmly (Higher Ego Strength).

Dominance (E). Low range: deferential, cooperative, avoids conflict, submissive, humble, obedient, easily led, docile, accommodating (Submissiveness). High range: dominant, forceful, assertive, aggressive, competitive, stubborn, bossy (Dominance).

Liveliness (F). Low range: serious, restrained, prudent, taciturn, introspective, silent (Desurgency). High range: lively, animated, spontaneous, enthusiastic, happy-go-lucky, cheerful, expressive, impulsive (Surgency).

Rule-Consciousness (G). Low range: expedient, nonconforming, disregards rules, self-indulgent (Low Super Ego Strength). High range: rule-conscious, dutiful, conscientious, conforming, moralistic, staid, rule-bound (High Super Ego Strength).

Social Boldness (H). Low range: shy, threat-sensitive, timid, hesitant, intimidated (Threctia). High range: socially bold, venturesome, thick-skinned, uninhibited (Parmia).

Sensitivity (I). Low range: utilitarian, objective, unsentimental, tough-minded, self-reliant, no-nonsense, rough (Harria). High range: sensitive, aesthetic, sentimental, tender-minded, intuitive, refined (Premsia).

Vigilance (L). Low range: trusting, unsuspecting, accepting, unconditional, easy (Alaxia). High range: vigilant, suspicious, skeptical, distrustful, oppositional (Protension).

Abstractedness (M). Low range: grounded, practical, prosaic, solution-oriented, steady, conventional (Praxernia). High range: abstract, imaginative, absent-minded, impractical, absorbed in ideas (Autia).

Privateness (N). Low range: forthright, genuine, artless, open, guileless, naive, unpretentious, involved (Artlessness). High range: private, discreet, nondisclosing, shrewd, polished, worldly, astute, diplomatic (Shrewdness).

Apprehension (O). Low range: self-assured, unworried, complacent, secure, free of guilt, confident, self-satisfied (Untroubled). High range: apprehensive, self-doubting, worried, guilt-prone, insecure, worrying, self-blaming (Guilt Proneness).

Openness to Change (Q1). Low range: traditional, attached to the familiar, conservative, respecting traditional ideas (Conservatism). High range: open to change, experimental, liberal, analytical, critical, free-thinking, flexible (Radicalism).

Self-Reliance (Q2). Low range: group-oriented, affiliative, a joiner and follower, dependent (Group Adherence). High range: self-reliant, solitary, resourceful, individualistic, self-sufficient (Self-Sufficiency).

Perfectionism (Q3). Low range: tolerates disorder, unexacting, flexible, undisciplined, lax, self-conflict, impulsive, careless of social rules, uncontrolled (Low Integration). High range: perfectionistic, organized, compulsive, self-disciplined, socially precise, exacting willpower, control, self-sentimental (High Self-Concept Control).

Tension (Q4). Low range: relaxed, placid, tranquil, torpid, patient, composed, low drive (Low Ergic Tension). High range: tense, high energy, impatient, driven, frustrated, overwrought, time-driven (High Ergic Tension).
ATTITUDE COMPETENCE
If you want to have success, you should try to absorb as much knowledge as possible, right? Well, not quite. At least not only! I believe success, whether at the professional or the personal level, derives from three factors: knowledge, competencies, and attitudes. Most people, however, pay excessive attention to the knowledge component while neglecting the development of the other two.
Before discussing the argument further, we need to define what we mean by each of these
factors. Knowledge is practical information gained through learning, experience or
association.
Examples of knowledge:
• second degree equations
• human anatomy
• the rules of monopoly
• how to change a wheel
• the capital of Zimbabwe (Harare, if nothing else you learned this reading this
article…)
Competencies, on the other hand, refer to the ability to perform specific tasks.
Examples of competencies:
• ability to communicate effectively
• ability to write clearly
• ability to play an instrument
• ability to solve problems
• ability to dance
The last one, attitude, involves how people react to certain situations and how they
behave in general.
Examples of attitudes:
• being proactive
• being able to get along with other people
• being optimistic
• being critical of other people
• being arrogant
Now, if you picture these three factors as a pyramid, attitudes form the base. One should, therefore, focus on developing the right attitudes before passing to the competencies and the knowledge. If you look at the five attitudes listed as examples, it is clear that one would want to develop the first three. Distinguishing between a desirable and a problematic attitude is actually an easy task.
Why, then, do we fail to dedicate enough energy to the development of valuable attitudes? First, because we might think that attitude is determined by genetics, meaning that some people are born optimistic while others are naturally pessimistic, and there is nothing one can do to change it. This is far from the truth. While most people are naturally inclined to behave in certain ways, we can still radically change or develop specific attitudes at will. Developing or changing an attitude requires much more work than developing a competence or gaining some knowledge, but that is exactly why it is also more valuable. The second reason people fail to focus on attitudes is that they are not aware of the benefits they would derive from them. Common sense states that the more knowledgeable someone is, the more successful he will be. While this might be true, it is only so if that person also has the right attitudes.
After developing the attitudes (which is a life long process, by the way) one should focus
on competencies. Competencies come before knowledge because they are flexible and
can be applied to many different situations.
Consider two different men, John and Mark, working for a financial services company.
Both of them are eager to succeed, so they spend lots of time trying to grow professionally. John uses his time gaining as much knowledge as possible: he studies balance sheets, financial reports, accounting practices, and the like. Mark, on the other
hand, gets the knowledge that is necessary to carry out his job. Other than that he uses his
time to improve his writing skills, his ability to solve problems, to come up with
innovative ideas and so on. Should the financial services sector enter a downturn some
day who do you think will have a harder time? Yeah, I am sure you have guessed it.
The last part of the pyramid is formed by knowledge. Now, when I argue that one should develop attitudes and competencies before acquiring knowledge, I am not saying that knowledge is unimportant. Far from it: knowledge is essential. But if you consider the revolution in information and communication technologies, you can see that virtually anyone in the world has access to all the information ever produced. I know that information and knowledge are two different things, but the process of transforming one into the other is not that complex. What I am saying, therefore, is that knowledge alone will not be sufficient. It does not represent a competitive advantage per se.
Summing up, success at the personal or professional level will inevitably derive from three factors: attitudes, competencies, and knowledge. Most people pay excessive attention to the knowledge component while neglecting the development of competencies and attitudes. Make sure you are focusing on all three components; it is the best strategy in the long run.
The Capability Maturity Model (CMM) is a service mark owned by Carnegie Mellon
University (CMU) and refers to a development model elicited from actual data. The data
was collected from organizations that contracted with the U.S. Department of Defense,
who funded the research, and became the foundation from which CMU created
the Software Engineering Institute (SEI). Like any model, it is an abstraction of an
existing system.
When it is applied to an existing organization's software development processes, it allows
an effective approach toward improving them. Eventually it became clear that the model
could be applied to other processes. This gave rise to a more general concept that is
applied to business processes and to developing people.
The Capability Maturity Model (CMM) was originally developed as a tool for objectively
assessing the ability of government contractors' processes to perform a contracted
software project. The CMM is based on the process maturity framework first described in
the 1989 book Managing the Software Process by Watts Humphrey. It was later
published in a report in 1993 (Technical Report CMU/SEI-93-TR-024 ESC-TR-93-177
February 1993, Capability Maturity Model SM for Software, Version 1.1) and as a book
by the same authors in 1995.
Though the CMM comes from the field of software development, it is used as a general
model to aid in improving organizational business processes in diverse areas: for example, in software engineering, system engineering, project management, software
maintenance, risk management, system acquisition, information technology (IT),
services, business processes generally, and human capital management. The CMM has
been used extensively worldwide in government offices, commerce, industry and
software development organizations.
HISTORY
Precursor
The Quality Management Maturity Grid was developed by Philip B. Crosby in his book
"Quality is Free".[1]
The first application of a staged maturity model to IT was not by CMM/SEI, but rather
by Richard L. Nolan, who, in 1973, published the stages of growth model for IT
organizations.[2]
Watts Humphrey began developing his process maturity concepts during the later stages of his 27-year career at IBM.
Development at SEI
Active development of the model by the US Department of Defense Software
Engineering Institute (SEI) began in 1986 when Humphrey joined the Software
Engineering Institute located at Carnegie Mellon University in Pittsburgh,
Pennsylvania, after retiring from IBM. At the request of the U.S. Air Force, he began
formalizing his Process Maturity Framework to aid the U.S. Department of Defense in
evaluating the capability of software contractors as part of awarding contracts.
The result of the Air Force study was a model for the military to use as an objective
evaluation of software subcontractors' process capability maturity. Humphrey based this
framework on the earlier Quality Management Maturity Grid developed by Philip B.
Crosby in his book "Quality is Free".[1] However, Humphrey's approach differed because
of his unique insight that organizations mature their processes in stages based on solving
process problems in a specific order. Humphrey based his approach on the staged
evolution of a system of software development practices within an organization, rather
than measuring the maturity of each separate development process independently. The
CMM has thus been used by different organizations as a general and powerful tool for
understanding and then improving general business process performance.
Watts Humphrey's Capability Maturity Model (CMM) was published in 1988[3] and as a
book in 1989, in Managing the Software Process.[4]
Organizations were originally assessed using a process maturity questionnaire and a
Software Capability Evaluation method devised by Humphrey and his colleagues at the
Software Engineering Institute (SEI).
The full representation of the Capability Maturity Model as a set of defined process areas
and practices at each of the five maturity levels was initiated in 1991, with Version 1.1
being completed in January 1993.[5] The CMM was published as a book[6] in 1995 by its
primary authors, Mark C. Paulk, Charles V. Weber, Bill Curtis, and Mary Beth Chrissis.
Superseded by CMMI
The CMM model proved useful to many organizations, but its application in software
development has sometimes been problematic. Applying multiple models that are not
integrated within and across an organization could be costly in training, appraisals, and
improvement activities. The Capability Maturity Model Integration (CMMI) project was
formed to sort out the problem of using multiple CMMs.
For software development processes, the CMM has been superseded by Capability
Maturity Model Integration (CMMI), though the CMM continues to be a general
theoretical process capability model used in the public domain.
Maturity model
A maturity model can be viewed as a set of structured levels that describe how well the
behaviours, practices and processes of an organisation can reliably and sustainably
produce required outcomes. A maturity model may provide, for example:
• a place to start
• the benefit of a community's prior experiences
• a common language and a shared vision
• a framework for prioritizing actions
• a way to define what improvement means for your organization
A maturity model can be used as a benchmark for comparison and as an aid to
understanding - for example, for comparative assessment of different organizations where
there is something in common that can be used as a basis for comparison. In the case of
the CMM, for example, the basis for comparison would be the organizations' software
development processes.
Structure
The Capability Maturity Model involves the following aspects:
Maturity Levels: a 5-level process maturity continuum - where the uppermost
(5th) level is a notional ideal state where processes would be systematically
managed by a combination of process optimization and continuous process
improvement.
Key Process Areas: a Key Process Area (KPA) identifies a cluster of related
activities that, when performed together, achieve a set of goals considered
important.
Goals: the goals of a key process area summarize the states that must exist for
that key process area to have been implemented in an effective and lasting way.
The extent to which the goals have been accomplished is an indicator of how
much capability the organization has established at that maturity level. The goals
signify the scope, boundaries, and intent of each key process area.
Common Features: common features include practices that implement and institutionalize a key process area. There are five types of common features: Commitment to Perform, Ability to Perform, Activities Performed, Measurement and Analysis, and Verifying Implementation.
Key Practices: The key practices describe the elements of infrastructure and
practice that contribute most effectively to the implementation and
institutionalization of the KPAs.
Levels
There are five levels defined along the continuum of the CMM[7] and, according to the
SEI: "Predictability, effectiveness, and control of an organization's software processes are
believed to improve as the organization moves up these five levels. While not rigorous,
the empirical evidence to date supports this belief."
1. Initial (chaotic, ad hoc, individual heroics) - the starting point for use of a new
process.
2. Managed - the process is managed in accordance with agreed metrics.
3. Defined - the process is defined/confirmed as a standard business process, and
decomposed to levels 0, 1 and 2 (the latter being Work Instructions).
4. Quantitatively managed - the process is quantitatively measured and controlled.
5. Optimizing - process management includes deliberate process
optimization/improvement.
Within each of these maturity levels are Key Process Areas (KPAs) which characterise
that level, and for each KPA there are five definitions identified:
1. Goals
2. Commitment
3. Ability
4. Measurement
5. Verification
The KPAs are not necessarily unique to CMM, representing — as they do — the stages
that organizations must go through on the way to becoming mature.
The CMM provides a theoretical continuum along which process maturity can be
developed incrementally from one level to the next. Skipping levels is not
allowed/feasible.
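The level ordering and the no-skipping rule can be expressed as a small data structure. This is purely an illustrative sketch, not an official SEI artifact; the level names follow the Level 1 through Level 5 descriptions in this document.

```python
# Illustrative only: the five CMM maturity levels in order, with the
# rule that an organization advances exactly one level at a time.
CMM_LEVELS = {
    1: "Initial",
    2: "Repeatable",
    3: "Defined",
    4: "Managed",
    5: "Optimizing",
}

def advance(current: int) -> int:
    """Move up one maturity level; skipping levels is not allowed."""
    if current not in CMM_LEVELS:
        raise ValueError(f"unknown maturity level: {current}")
    return min(current + 1, 5)  # Level 5 is the ceiling

print(CMM_LEVELS[advance(1)])  # a Level 1 organization can only reach "Repeatable" next
```

Encoding the no-skipping rule in `advance` mirrors the model's claim that maturity is built incrementally: each level's practices presuppose the discipline established at the level below.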
N.B.: The CMM was originally intended as a tool to evaluate the ability of government
contractors to perform a contracted software project. It has been used for and may be
suited to that purpose, but critics pointed out that process maturity according to the CMM
was not necessarily mandatory for successful software development. There were/are real-
life examples where the CMM was arguably irrelevant to successful software
development, and these examples include many shrinkwrap companies (also
called commercial-off-the-shelf or "COTS" firms or software package firms). Such firms
would have included, for example, Claris, Apple, Symantec, Microsoft, and Lotus.
Though these companies may have successfully developed their software, they would not
necessarily have considered or defined or managed their processes as the CMM described
as level 3 or above, and so would have fitted level 1 or 2 of the model. This did not - on
the face of it - frustrate the successful development of their software.
Level 1 - Initial (Chaotic)
It is characteristic of processes at this level that they are (typically) undocumented and in
a state of dynamic change, tending to be driven in an ad hoc, uncontrolled and reactive
manner by users or events. This provides a chaotic or unstable environment for the
processes.
Level 2 - Repeatable
It is characteristic of processes at this level that some processes are repeatable, possibly
with consistent results. Process discipline is unlikely to be rigorous, but where it exists it
may help to ensure that existing processes are maintained during times of stress.
Level 3 - Defined
It is characteristic of processes at this level that there are sets of defined and documented
standard processes established and subject to some degree of improvement over time.
These standard processes are in place (i.e., they are the AS-IS processes) and used to
establish consistency of process performance across the organization.
Level 4 - Managed
It is characteristic of processes at this level that, using process metrics, management can
effectively control the AS-IS process (e.g., for software development). In particular,
management can identify ways to adjust and adapt the process to particular projects
without measurable losses of quality or deviations from specifications. Process Capability
is established from this level.
Level 5 - Optimizing
It is a characteristic of processes at this level that the focus is on continually improving
process performance through both incremental and innovative technological
changes/improvements.
At maturity level 5, processes are concerned with addressing statistical common causes of
process variation and changing the process (for example, to shift the mean of the process
performance) to improve process performance. This would be done at the same time as
maintaining the likelihood of achieving the established quantitative process-improvement
objectives.
CMMI is a process improvement approach that provides organizations with the essential elements of
effective processes that ultimately improve their performance. CMMI can be used to guide process
improvement across a project, a division, or an entire organization. It helps integrate traditionally separate
organizational functions, set process improvement goals and priorities, provide guidance for quality
processes, and provide a point of reference for appraising current processes.
The benefits you can expect from using CMMI include the following:
• Your organization's activities are explicitly linked to your business objectives.
• Your visibility into the organization's activities is increased to help you ensure
that your product or service meets the customer's expectations.
• You learn from new areas of best practice (e.g., measurement, risk).
CMMI is being adopted worldwide, including North America, Europe, Asia, Australia,
South America, and Africa. This kind of response has substantiated the SEI's
commitment to CMMI.
You can use CMMI in three different areas of interest:
• Product and service acquisition (CMMI for Acquisition model)
• Product and service development (CMMI for Development model)
• Service establishment, management, and delivery (CMMI for Services model)
CMMI models are collections of best practices that you can compare to your
organization's best practices and guide improvement to your processes. A formal
comparison of a CMMI model to your processes is called an appraisal. The Standard
CMMI Appraisal Method for Process Improvement (SCAMPI) incorporates the best
ideas of several process improvement appraisal methods.
Business excellence is the systematic use of quality management principles and tools
in business management, with the goal of improving performance based on the principles
of customer focus, stakeholder value, and process management. Key practices in business
excellence applied across functional areas in an enterprise include continuous and
breakthrough improvement, preventative management and management by facts. Some
of the tools used are the balanced scorecard, Lean, the Six Sigma statistical tools, process
management, and project management.
Business excellence, as described by the European Foundation for Quality Management
(EFQM), refers to "outstanding practices in managing the organization and achieving
results, all based on a set of eight fundamental concepts." These concepts are "results
orientation, customer focus, leadership and constancy of purpose, management by
processes and facts, people development and involvement, continuous learning,
innovation and improvement, partnership development, and public responsibility."
In general, business excellence models have been developed by national bodies as a basis
for award programs. For most of these bodies, the awards themselves are secondary in
importance to the widespread adoption of the concepts of business excellence, which
ultimately leads to improved national economic performance. By far the majority of
organizations that use these models do so for self-assessment, through which they may
identify improvement opportunities, areas of strength, and ideas for future organizational
development. Users of the EFQM Excellence Model, for instance, do so for the following
purposes: self-assessment, strategy formulation, visioning, project management, supplier
management, and mergers. The most popular and influential model in the western world
is the Malcolm Baldrige National Quality Award Model (also known as the Baldrige
model, the Baldrige Criteria, or the Criteria for Performance Excellence), launched by the
US government. More than 60 national and state/regional awards base their frameworks
upon the Baldrige criteria.
When used as a basis for an organization's improvement culture, the business excellence
criteria within the models broadly channel and encourage the use of best practices into
areas where their effect will be most beneficial to performance. When used simply for
self-assessment, the criteria can clearly identify strong and weak areas of management
practice so that tools such as benchmarking can be used to identify best-practice to enable
the gaps to be closed. These critical links between business excellence models, best
practice, and benchmarking are fundamental to the success of the models as tools of
continuous improvement.
The essence of the methodology is to concentrate on a balanced blend of focus across
processes, technologies, and resources (human, financial, etc.).
The main idea is that none of these elements can be improved in isolation; each needs to
be balanced and improved in concert with the other two.
Process Phases - Because it blends different methodologies, each with specific phases
within its processes, Business Excellence drives results through four well-defined
phases: Discover/Define, Measure/Analyze, Create/Optimize, Monitor/Control.
Those phases evolve continuously within the ever-growing organization, driving constant
monitoring, optimization and re-evaluation.
The Model, which recognises there are many approaches to achieving sustainable
excellence in all aspects of performance, is based on the premise that:
Excellent results with respect to Performance, Customers, People and Society are
achieved through Leadership driving Policy and Strategy, that is delivered through
People, Partnerships and Resources, and Processes.
The arrows emphasise the dynamic nature of the model. They show innovation and
learning helping to improve enablers that in turn lead to improved results.
Model structure
The Model's nine boxes, shown above, represent the criteria against which to assess an
organisation's progress towards excellence. Each of the nine criteria has a definition,
which explains the high level meaning of that criterion.
To develop the high level meaning further, each criterion is supported by a number of
sub-criteria. Sub-criteria pose a number of questions that should be considered in the
course of an assessment.
Below each sub-criterion are lists of possible areas to address. The areas to address are
neither mandatory nor exhaustive, but are intended to further exemplify the
meaning of the sub-criterion.
Enablers
Leadership
Policy & Strategy
People
Partnerships & Resources
Processes
Results
Customer Results
People Results
Society Results
Key Performance Results
There is no significance intended in the order of the concepts. The list is not meant to be
exhaustive, and the concepts will change as excellent organisations develop and improve.
Results Orientation
Excellence is achieving results that delight all the organisation's stakeholders.
Customer Focus
Excellence is creating sustainable customer value.
Partnership Development
Excellence is developing and maintaining value-adding partnerships.
RADAR
At the heart of the self-assessment process lies the logic known as RADAR, which has the
following elements: Results, Approach, Deployment, Assessment and Review.
The logic of RADAR® states that an organisation should:
• Determine the Results it is aiming for.
• Implement an integrated set of sound Approaches to deliver the required
results.
• Deploy the approaches systematically.
• Assess and Review the effectiveness of the approaches.
A business excellence model is the means of achieving all of the above. Organisations can
develop a customized business excellence model and/or adopt one or more of the
concepts mentioned below.
The criteria provide a framework for performance excellence and help the organization
to assess and measure performance on a wide range of key business indicators:
customer, product and service, operational, and financial.
This allows the organization to carry out a self-assessment of its business performance,
to identify strengths, and to target “opportunities for improvement” in processes and
results affecting all key stakeholders, including customers, employees, owners,
suppliers, and the public.
The criteria also help the organization to align its resources, improve productivity and
effectiveness, and achieve its goals. In short, a business excellence model:
• Provides comprehensive coverage of strategy-driven performance
• Focuses on the needs, expectations and satisfaction of all stakeholders
• Examines all processes that are essential to achieving business excellence
• Is a framework to assess and enhance business excellence
• Drives continuous improvement of overall organizational performance and capabilities
• Delivers ever-improving value to customers, resulting in marketplace success
• Supports understanding and analysis of the business in areas such as leadership,
strategy, customer, market, information and data, knowledge sharing, HR, production
processes, and results
• Provides a framework for excellence through values, processes, and outcomes; in the
model these are called “approach”, “deployment” and “results”
• Is not a prescriptive model
• Asks questions, but does not provide solutions
Organizations also use the balanced scorecard as a performance measurement system.
This is again a framework, which enables the organization to translate its vision and
strategy into a coherent set of performance measures.
Although there are many excellence models available, the “Malcolm Baldrige National
Quality Award” model is a useful reference; the Tata Business Excellence Model is based
on it.
RISK MANAGEMENT
Risk management is the identification, assessment, and prioritization of risks (defined
in ISO 31000 as the effect of uncertainty on objectives, whether positive or negative)
followed by coordinated and economical application of resources to minimize, monitor,
and control the probability and/or impact of unfortunate events[1] or to maximize the
realization of opportunities. Risks can come from uncertainty in financial markets,
project failures, legal liabilities, credit risk, accidents, natural causes and disasters as well
as deliberate attacks from an adversary. Several risk management standards have been
developed, including those of the Project Management Institute, the National Institute of
Standards and Technology, actuarial societies, and ISO.[2][3] Methods, definitions and
goals vary widely according to whether the risk management method is in the context of
project management, security, engineering, industrial processes, financial portfolios,
actuarial assessments, or public health and safety.
The strategies to manage risk include transferring the risk to another party, avoiding the
risk, reducing the negative effect of the risk, and accepting some or all of the
consequences of a particular risk.
Certain aspects of many of the risk management standards have come under criticism for
producing no measurable improvement in risk, even though confidence in estimates and
decisions increases.
Identification
After establishing the context, the next step in the process of managing risk is to identify
potential risks. Risks are about events that, when triggered, cause problems. Hence, risk
identification can start with the source of problems, or with the problem itself.
Source analysis[citation needed] Risk sources may be internal or external to the system
that is the target of risk management.
Examples of risk sources are: stakeholders of a project, employees of a company or the
weather over an airport.
Problem analysis[citation needed] Risks are related to identified threats. For example:
the threat of losing money, the threat of abuse of privacy information or the threat
of accidents and casualties. The threats may exist with various entities, most
important with shareholders, customers and legislative bodies such as the
government.
When either source or problem is known, the events that a source may trigger or the
events that can lead to a problem can be investigated. For example: stakeholders
withdrawing during a project may endanger funding of the project; privacy information
may be stolen by employees even within a closed network; lightning striking an aircraft
during takeoff may make all people onboard immediate casualties.
The chosen method of identifying risks may depend on culture, industry practice and
compliance. The identification methods are formed by templates or the development of
templates for identifying source, problem or event. Common risk identification methods
are:
Objectives-based risk identification[citation needed] Organizations and project teams
have objectives. Any event that may endanger achieving an objective partly or
completely is identified as risk.
Scenario-based risk identification In scenario analysis different scenarios are
created. The scenarios may be the alternative ways to achieve an objective, or an
analysis of the interaction of forces in, for example, a market or battle. Any event
that triggers an undesired scenario alternative is identified as risk - see Futures
Studies for the methodology used by Futurists.
Taxonomy-based risk identification The taxonomy in taxonomy-based risk
identification is a breakdown of possible risk sources. Based on the taxonomy and
knowledge of best practices, a questionnaire is compiled. The answers to the
questions reveal risks.[5]
Common-risk checking In several industries, lists with known risks are
available. Each risk in the list can be checked for application to a particular
situation.[6]
Risk charting[7] This method combines the above approaches by listing Resources at
risk, Threats to those resources, Modifying Factors which may increase or decrease the
risk, and Consequences it is wished to avoid. Creating a matrix under
these headings enables a variety of approaches. One can begin with resources and
consider the threats they are exposed to and the consequences of each.
Alternatively one can start with the threats and examine which resources they
would affect, or one can begin with the consequences and determine which
combination of threats and resources would be involved to bring them about.
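As a rough illustration of the risk-charting idea, the matrix of resources and threats, with modifying factors scaling the likelihood, can be sketched as a simple data structure. All names and likelihood values below are hypothetical, invented purely for illustration:

```python
# Hypothetical risk chart: resources at risk crossed with threats,
# with modifying factors scaling the base likelihood up or down.
resources = ["data center", "payroll system"]
threats = {"fire": 0.02, "insider theft": 0.05}  # assumed base annual likelihoods

# e.g. sprinklers are assumed to halve the fire likelihood for the data center
modifying_factors = {("data center", "fire"): 0.5}

chart = {
    (resource, threat): likelihood * modifying_factors.get((resource, threat), 1.0)
    for resource in resources
    for threat, likelihood in threats.items()
}

# The matrix can then be read from any starting point, as the text
# describes: by resource, by threat, or sorted by adjusted likelihood.
worst_first = sorted(chart.items(), key=lambda item: item[1], reverse=True)
```

Starting from a resource, one reads off the threats it is exposed to; starting from a threat, one reads off the resources it would affect.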
Assessment
Once risks have been identified, they must then be assessed as to their potential severity
of loss and to the probability of occurrence. These quantities can be either simple to
measure, in the case of the value of a lost building, or impossible to know for sure in the
case of the probability of an unlikely event occurring. Therefore, in the assessment
process it is critical to make the best educated guesses possible in order to properly
prioritize the implementation of the risk management plan.
The fundamental difficulty in risk assessment is determining the rate of occurrence since
statistical information is not available on all kinds of past incidents. Furthermore,
evaluating the severity of the consequences (impact) is often quite difficult for immaterial
assets. Asset valuation is another question that needs to be addressed. Thus, best educated
opinions and available statistics are the primary sources of information. Nevertheless,
risk assessment should produce such information for the management of the organization
that the primary risks are easy to understand and that the risk management decisions may
be prioritized. Thus, there have been several theories and attempts to quantify risks.
Numerous different risk formulae exist, but perhaps the most widely accepted formula for
risk quantification is:
Rate of occurrence multiplied by the impact of the event equals risk
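To make the formula concrete, here is a minimal sketch (with invented figures) that scores risks as rate of occurrence multiplied by impact and uses the scores to prioritize a small risk register:

```python
def risk_score(rate_per_year, impact):
    """Risk = rate of occurrence x impact of the event."""
    return rate_per_year * impact

# Hypothetical risk register: (name, expected occurrences/year, loss per event)
register = [
    ("server outage", 0.5, 40_000),
    ("minor data-entry error", 50, 100),
    ("warehouse fire", 0.02, 2_000_000),
]

# Prioritize: highest expected annual loss first
prioritized = sorted(register, key=lambda r: risk_score(r[1], r[2]), reverse=True)
```

Note how the rare but severe event outranks the frequent but trivial one, which is exactly why the prioritization step matters.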
Risk mitigation measures are usually formulated according to one or more of the
following major risk options, which are:
1. Design a new business process with adequate built-in risk control and
containment measures from the start.
2. Periodically re-assess risks that are accepted in ongoing processes as a normal
feature of business operations and modify mitigation measures.
3. Transfer risks to an external agency (e.g. an insurance company)
4. Avoid risks altogether (e.g. by closing down a particular high-risk business area)
Later research[citation needed] has shown that the financial benefits of risk management
depend less on the formula used than on the frequency with which, and the way in which,
risk assessment is performed.
In business it is imperative to be able to present the findings of risk assessments in
financial terms. Robert Courtney Jr. (IBM, 1970) proposed a formula for presenting risks
in financial terms.[8] The Courtney formula was accepted as the official risk analysis
method for US governmental agencies. The formula proposes calculation of ALE
(annualised loss expectancy) and compares the expected loss value to the security control
implementation costs (cost-benefit analysis).
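The ALE-based cost-benefit comparison described above can be sketched as follows (the figures are invented for illustration):

```python
def annualized_loss_expectancy(single_loss_expectancy, annual_rate_of_occurrence):
    """ALE = expected loss per event x expected number of events per year."""
    return single_loss_expectancy * annual_rate_of_occurrence

def control_is_worthwhile(ale_before, ale_after, annual_control_cost):
    """Cost-benefit test: a control pays off when the ALE reduction
    it achieves exceeds its own annual implementation cost."""
    return (ale_before - ale_after) > annual_control_cost

# Example: a breach costing $100,000 expected once every 10 years...
ale_before = annualized_loss_expectancy(100_000, 0.1)   # ~$10,000/year
# ...reduced to once every 50 years by a control costing $5,000/year
ale_after = annualized_loss_expectancy(100_000, 0.02)   # ~$2,000/year
```

Here the control saves roughly $8,000/year in expected loss against a $5,000/year cost, so the cost-benefit test passes.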
Risk avoidance
This includes not performing an activity that could carry risk. An example would be not
buying a property or business in order to not take on the legal liability that comes with it.
Another would be not flying in order to avoid the risk of the airplane being
hijacked. Avoidance may seem the answer to all risks, but avoiding risks also means
losing out on the potential gain that accepting (retaining) the risk may have allowed. Not
entering a business to avoid the risk of loss also avoids the possibility of earning profits.
Hazard Prevention
Main article: Hazard prevention
Hazard prevention refers to the prevention of risks in an emergency. The first and most
effective stage of hazard prevention is the elimination of hazards. If this takes too long, is
too costly, or is otherwise impractical, the second stage is mitigation.
Risk reduction
Risk reduction or "optimisation" involves reducing the severity of the loss or the
likelihood of the loss from occurring. For example, sprinklers are designed to put out
a fire to reduce the risk of loss by fire. This method may cause a greater loss by water
damage and therefore may not be suitable. Halon fire suppression systems may mitigate
that risk, but the cost may be prohibitive as a strategy.
Acknowledging that risks can be positive or negative, optimising risks means finding a
balance between negative risk and the benefit of the operation or activity; and between
risk reduction and effort applied. For example, an offshore drilling contractor that
effectively applies HSE management in its organisation can optimise risk to achieve
levels of residual risk that are tolerable.[10]
Modern software development methodologies reduce risk by developing and delivering
software incrementally. Early methodologies suffered from the fact that they only
delivered software in the final phase of development; any problems encountered in earlier
phases meant costly rework and often jeopardized the whole project. By developing in
iterations, software projects can limit effort wasted to a single iteration.
Outsourcing could be an example of risk reduction if the outsourcer can demonstrate
higher capability at managing or reducing risks.[11] For example, a company may
outsource only its software development, the manufacturing of hard goods, or customer
support needs to another company, while handling the business management itself. This
way, the company can concentrate more on business development without having to
worry as much about the manufacturing process, managing the development team, or
finding a physical location for a call center.
Risk sharing
Briefly defined as "sharing with another party the burden of loss or the benefit of gain,
from a risk, and the measures to reduce a risk."
The term 'risk transfer' is often used in place of risk sharing, in the mistaken belief that
you can transfer a risk to a third party through insurance or outsourcing. In practice, if the
insurance company or contractor goes bankrupt or ends up in court, the original risk is
likely to revert to the first party. As such, in the terminology of practitioners and scholars
alike, the purchase of an insurance contract is often described as a "transfer of risk."
However, technically speaking, the buyer of the contract generally retains legal
responsibility for the losses "transferred", meaning that insurance may be described more
accurately as a post-event compensatory mechanism. For example, a personal injuries
insurance policy does not transfer the risk of a car accident to the insurance company.
The risk still lies with the policy holder, namely the person who has been in the accident.
The insurance policy simply provides that if an accident (the event) occurs involving the
policy holder then some compensation may be payable to the policy holder that is
commensurate to the suffering/damage.
Some ways of managing risk fall into multiple categories. Risk retention pools are
technically retaining the risk for the group, but spreading it over the whole group
involves transfer among individual members of the group. This is different from
traditional insurance, in that no premium is exchanged between members of the group up
front, but instead losses are assessed to all members of the group.
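The assessment mechanism of a retention pool can be sketched as follows; an equal-share split is assumed purely for illustration, since real pools may weight assessments by exposure:

```python
def assess_pool_losses(losses, members):
    """No premium is exchanged up front; actual losses are instead
    assessed equally across all members of the pool after they occur."""
    total = sum(losses)
    share = total / len(members)
    return {member: share for member in members}

# Example: two losses in the period, spread over a three-member pool
assessments = assess_pool_losses([12_000, 3_000], ["A", "B", "C"])
```

Each member thus retains the group's risk collectively, while the after-the-fact assessment transfers the burden of any one member's loss across the whole group.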
Risk retention
Risk retention involves accepting the loss, or benefit of gain, from a risk when it occurs.
True self-insurance falls into this category. Risk retention is a viable strategy for small risks where
the cost of insuring against the risk would be greater over time than the total losses
sustained. All risks that are not avoided or transferred are retained by default. This
includes risks that are so large or catastrophic that they either cannot be insured against or
the premiums would be infeasible. War is an example, since most property and risks are
not insured against war, so the loss attributable to war is retained by the insured. Any
amount of potential loss (risk) over the amount insured is retained risk. This may also be
acceptable if the chance of a very large loss is small or if the cost to insure for greater
coverage amounts is so great it would hinder the goals of the organization too much.
Implementation
Implementation follows all of the planned methods for mitigating the effect of the risks:
purchase insurance policies for the risks that it has been decided to transfer to an
insurer, avoid all risks that can be avoided without sacrificing the entity's goals, reduce
others, and retain the rest.
Limitations
If risks are improperly assessed and prioritized, time can be wasted in dealing with risk of
losses that are not likely to occur. Spending too much time assessing and managing
unlikely risks can divert resources that could be used more profitably. Unlikely events do
occur, but if the risk is unlikely enough it may be better to simply retain the risk
and deal with the result if the loss does in fact occur. Qualitative risk assessment is
subjective and lacks consistency. The primary justification for a formal risk assessment
process is legal and bureaucratic.
Prioritizing the risk management processes too highly could keep an organization from
ever completing a project or even getting started. This is especially true if other work is
suspended until the risk management process is considered complete.
It is also important to keep in mind the distinction between risk and uncertainty. Risk can
be measured as impact × probability, whereas uncertainty cannot be quantified in this way.
Risk communication
Risk communication is a complex cross-disciplinary academic field. Problems for risk
communicators involve how to reach the intended audience, to make the risk
comprehensible and relatable to other risks, how to pay appropriate respect to the
audience's values related to the risk, how to predict the audience's response to the
communication, etc. A main goal of risk communication is to improve collective and
individual decision making. Risk communication is somewhat related to crisis
communication.
Greiner’s model suggests how organisations grow, but the basic reasons behind the
growth process and its mechanics remain unclear. As mentioned previously, growth in a
living organism is a result of the interplay between the ontogenic factor and the
environment. Here, positive feedback plays a vital role in explaining changes in a living
system. Although both positive and negative feedback work in concert in any living
system, in order to grow (or to effect other changes in a system), the net type of feedback
must be positive (Skyttner, 2001). In organisms, starting at birth, the importation of
materials and energy from the environment not only sustains life but also contributes to
growth. As they keep growing, so does their ability to acquire resources. This means that
the more they grow, the greater their capacity for resource acquisition and the more
resources they can access. This growth and the increase in resource acquisition
capabilities provide a positive feedback loop, which continues until the organism
matures. The positive feedback loop will be active again when the organism starts to
decline, which will be mentioned later.
An analogy can be made between the process of growth in a business organisation and
that in an organism (provided that the business organisation pursues a growth strategy). If
the resources in a niche or a domain are abundant, a business organisation in that niche is
likely to run at a profit (provided that the relevant costs are under control). An increase in
profit results in an improvement in return on investment (ROI), which tends to attract
more funds from the investors. The firm can use these funds to reinvest for expansion, to
gain more market control, and make even more profit. This positive feedback will
continue until limiting factors (e.g. an increase in competition or the depletion of
resources within a particular niche) take effect.
A living system cannot perpetually maintain growth, nor can it ensure its survival and
viability forever. After its growth, the system matures, declines, and eventually ends.
This can be explained by using the concept of ‘homeokinesis’ (Cardon, et al., 1972; Van
Gigch, 1978, 1991; Skyttner, 2001). It has already been argued that one of the most
important characteristics of any living system is that it has to be in a homeostatic, or
dynamic, equilibrium condition to remain viable. Nonetheless, the fact that a living
system deteriorates over time and eventually expires indicates that there is a limit to this.
Rather than maintaining its dynamic equilibrium, it is argued that a living system is really
in a state of disequilibrium, a state of evolution termed ‘homeokinesis’. Rather than being
a living system’s normal state, homeostasis is the ideal or climax state that the system is
trying to achieve, but that is never actually achievable. Homeostasis can be described in
homeokinetic terms as a ‘homeokinetic plateau’ – the region within which negative
feedback dominates in the living system. In human physiology, after age 25 (the
physiological climax state), the body starts to deteriorate but can still function. After
achieving maturity, it seems that a living system has more factors and contingencies to
deal with, which require more energy and effort to keep under control. Beyond the
‘upper threshold’, it is apparent that the system is again operating in a positive feedback
region, and is deteriorating. Even though the living system is trying its best to maintain
its viability, this effort, nonetheless, cannot counterbalance or defeat the entropically
increasing trend. The system gradually and continuously loses its integration and proper
functioning, which eventually results in the system’s expiry.
Although we argue that the concept of homeokinesis and net positive feedback can also
be applied to the explanation of deterioration and demise in organisations, as noted earlier
it is very difficult to make a direct homology between changes in organisms and changes
in organisations. Rather than being biological machines, which can be described and
explained, to a large extent if not (arguably) completely, in terms of physics and
chemistry, organisations are much more complex socio-technical systems comprising
ensembles of people, artefacts, and technology working together in an organised manner.
Figure 11.3. Control requires that the system be maintained within the bounds of
the homeokinetic plateau. Adapted from Van Gigch (1991).
As mentioned earlier, after its maturity, the organism gradually and continuously loses its
ability to keep its integration and organisation under control (to counterbalance the
entropically increasing trend) and this finally leads to its demise. While this phenomenon
is normal in biological systems, even though organisations in general may experience
decline and death (as many empires and civilisations did in history), it appears that the
entropic process in organisations is less definite and more complicated than that in
organisms. Kiel (1991) suggests that this dissimilarity can be explained in terms of
systems’ differences in their abilities to extract and utilise energy, and the capacity to
reorganise as a result of unexpected and chaotic contextual factors. This suggests that
biological systems are less resilient and capable than social systems with respect to
natural decline. This may be reflected in the difference in timing and duration of each of
their developmental phases. For example, while the duration of each phase in the life
cycle, and the life expectancy, are relatively definite for a particular type of organism,
such duration is very difficult, if not impossible, to specify for organisations. A small
business may, on average, last from several months to a number of years whereas, in
contrast, the Roman Catholic Church has lasted for centuries (Scott, 1998). It may be that
the size and form of the organisation are influential factors in this respect, a proposition
that still requires further empirical investigation.
To be in the region of the homeokinetic plateau, the proper amount of control for a well-
functioning and sustainable living system must be present, and similarly for
organisations. Too little control will lead to poor integration and a chaotic situation
whereas too much control results in poor adaptation and inflexibility.
BUSINESS PLANNING
A business plan is a formal statement of a set of business goals, the reasons why they are
believed attainable, and the plan for reaching those goals. It may also contain background
information about the organization or team attempting to reach those goals.
Business plans may also target changes in perception and branding by the customer,
client, tax-payer, or larger community. When an existing business is to assume a major
change, or when planning a new venture, a 3-to-5-year business plan is essential.
Business plans may be internally or externally focused. Externally focused plans target
goals that are important to external stakeholders, particularly financial stakeholders. They
typically have detailed information about the organization or team attempting to reach the
goals. With for-profit entities, external stakeholders include investors and customers.[1]
External stake-holders of non-profits include donors and the clients of the non-profit's
services.[2] For government agencies, external stakeholders include tax-payers, higher-
level government agencies, and international lending bodies such as the IMF, the World
Bank, various economic agencies of the UN, and development banks.
Internally focused business plans target intermediate goals required to reach the external
goals. They may cover the development of a new product, a new service, a new IT
system, a restructuring of finance, the refurbishing of a factory or a restructuring of the
organization. An internal business plan is often developed in conjunction with a balanced
scorecard or a list of critical success factors. This allows success of the plan to be
measured using non-financial measures. Business plans that identify and target internal
goals, but provide only general guidance on how they will be met are called strategic
plans.
Operational plans describe the goals of an internal organization, working group or
department.[3] Project plans, sometimes known as project frameworks, describe the goals
of a particular project. They may also address the project's place within the organization's
larger strategic goals
Business plans are decision-making tools. There is no fixed content for a business plan.
Rather the content and format of the business plan is determined by the goals and
audience. A business plan represents all aspects of the business planning process,
declaring vision and strategy alongside sub-plans covering marketing, finance, operations,
and human resources, as well as a legal plan when required. A business plan is a concise
summary of those disciplinary plans.
For example, a business plan for a non-profit might discuss the fit between the business
plan and the organization’s mission. Banks are quite concerned about defaults, so a
business plan for a bank loan will build a convincing case for the organization’s ability to
repay the loan. Venture capitalists are primarily concerned about initial investment,
feasibility, and exit valuation. A business plan for a project requiring equity
financing will need to explain why current resources, upcoming growth opportunities,
and sustainable competitive advantage will lead to a high exit valuation.
Preparing a business plan draws on a wide range of knowledge from many different
business disciplines: finance, human resource management, intellectual property
management, supply chain management, operations management, and marketing, among
others.[5] It can be helpful to view the business plan as a collection of sub-plans, one for
each of the main business disciplines.[6]
"... A good business plan can help to make a good business credible, understandable, and
attractive to someone who is unfamiliar with the business. Writing a good business plan
can’t guarantee success, but it can go a long way toward reducing the odds of failure."
The format of a business plan depends on its presentation context. It is not uncommon for
businesses, especially start-ups, to have three or four formats for the same business plan:
an "elevator pitch" - a three minute summary of the business plan's executive
summary. This is often used as a teaser to awaken the interest of potential funders,
customers, or strategic partners.
an oral presentation - a hopefully entertaining slide show and oral narrative that is
meant to trigger discussion and interest potential investors in reading the written
presentation. The content of the presentation is usually limited to the executive
summary and a few key graphs showing financial trends and key decision making
benchmarks. If a new product is being proposed and time permits, a
demonstration of the product may also be included.
a written presentation for external stakeholders - a detailed, well written, and
pleasingly formatted plan targeted at external stakeholders.
an internal operational plan - a detailed plan describing planning details that are
needed by management but may not be of interest to external stakeholders. Such
plans have a somewhat higher degree of candor and informality than the version
targeted at external stakeholders.
Typical structure for a business plan for a start-up venture[7]
cover page and table of contents
executive summary
business description
business environment analysis
industry background
competitor analysis
market analysis
marketing plan
operations plan
management summary
financial plan
attachments and milestones
A business plan is often prepared when:
• Starting a new organization, business venture, or product (or service), or
• Expanding, acquiring, or improving any of the above.
There are numerous benefits of doing a business plan, including:
• To identify any problems in your plans before you implement them.
• To get the commitment and participation of those who will implement the plans,
which leads to better results.
• To establish a roadmap to compare results as the venture proceeds from paper to
reality.
• To achieve greater profitability in your organization, products and services -- all
with less work.
• To obtain financing from investors and funders.
• To minimize your risk of failure.
• To update your plans and operations in a changing world.
• To clarify and synchronize your goals and strategies.
For these reasons, the planning process often is as useful as the business plan document
itself.
Advice to managers
• Monitor for success - not control for its own sake:
• Only intervene where deviation is substantial
• Feed back results to allow subordinates to correct minor deviations
• Keep a focus on strategic goals
• If you micro-manage, you will not be able to see the wood for the trees
• Monitor selectively
• Focus on variables that are of great significance and those that provide early
warning of major problems
• And always avoid paralysis by analysis
Evaluation and modification
• The evaluation of performance should lead to an ongoing review process
• Where necessary, modify plans and take corrective action to put the
organisation back on course to achieve its objectives
• Planning is not a one-off event but a continuing process:
• The implementation has to be fine-tuned during the period of the plan
• Results from the plan will be fed into next year's plan
A final thought
"Planning without action is futile; action without planning is fatal."
The Capability Maturity Model (CMM) is a service mark owned by Carnegie Mellon
University (CMU) and refers to a development model elicited from actual data. The data
was collected from organizations that contracted with the U.S. Department of Defense,
who funded the research, and became the foundation from which CMU created
the Software Engineering Institute (SEI). Like any model, it is an abstraction of an
existing system.
When it is applied to an existing organization's software development processes, it allows
an effective approach toward improving them. Eventually it became clear that the model
could be applied to other processes. This gave rise to a more general concept that is
applied to business processes and to developing people.
Overview
The Capability Maturity Model (CMM) was originally developed as a tool for objectively
assessing the ability of government contractors' processes to perform a contracted
software project. The CMM is based on the process maturity framework first described in
the 1989 book Managing the Software Process by Watts Humphrey. It was later
published in a report in 1993 (Technical Report CMU/SEI-93-TR-024 ESC-TR-93-177
February 1993, Capability Maturity Model SM for Software, Version 1.1) and as a book
by the same authors in 1995.
Though the CMM comes from the field of software development, it is used as a general
model to aid in improving organizational business processes in diverse areas; for example
in software engineering, system engineering, project management, software
maintenance, risk management, system acquisition, information technology (IT),
services, business processes generally, and human capital management. The CMM has
been used extensively worldwide in government offices, commerce, industry and
software development organizations.
History
Prior need for software processes
In the 1970s, computers became more widespread, more flexible, and less costly.
Organizations began to adopt computerized information systems, and the demand for
software development grew significantly. The processes for software development were
in their infancy, with few standard or "best practice" approaches defined.
As a result, the growth was accompanied by growing pains: project failure was
common, the field of computer science was still in its infancy, and ambitions for
project scale and complexity exceeded the market's capability to deliver. Individuals such
as Edward Yourdon, Larry Constantine, Gerald Weinberg, Tom DeMarco, and David
Parnas began to publish articles and books with research results in an attempt to
professionalize the software development process.
In the 1980s, several US military projects involving software subcontractors ran over-
budget and were completed far later than planned, if at all. In an effort to determine why
this was occurring, the United States Air Force funded a study at the SEI.
Precursor
The Quality Management Maturity Grid was developed by Philip B. Crosby in his book
"Quality is Free".[1]
The first application of a staged maturity model to IT was not by CMM/SEI, but rather
by Richard L. Nolan, who, in 1973 published the stages of growth model for IT
organizations.[2]
Watts Humphrey began developing his process maturity concepts during the later stages
of his 27-year career at IBM.
Development at SEI
Active development of the model by the US Department of Defense Software
Engineering Institute (SEI) began in 1986 when Humphrey joined the Software
Engineering Institute located at Carnegie Mellon University in Pittsburgh,
Pennsylvania after retiring from IBM. At the request of the U.S. Air Force he began
formalizing his Process Maturity Framework to aid the U.S. Department of Defense in
evaluating the capability of software contractors as part of awarding contracts.
The result of the Air Force study was a model for the military to use as an objective
evaluation of software subcontractors' process capability maturity. Humphrey based this
framework on the earlier Quality Management Maturity Grid developed by Philip B.
Crosby in his book "Quality is Free".[1] However, Humphrey's approach differed because
of his unique insight that organizations mature their processes in stages based on solving
process problems in a specific order. Humphrey based his approach on the staged
evolution of a system of software development practices within an organization, rather
than measuring the maturity of each separate development process independently. The
CMM has thus been used by different organizations as a general and powerful tool for
understanding and then improving general business process performance.
Watts Humphrey's Capability Maturity Model (CMM) was published in 1988[3] and as a
book in 1989, in Managing the Software Process.[4]
Organizations were originally assessed using a process maturity questionnaire and a
Software Capability Evaluation method devised by Humphrey and his colleagues at the
Software Engineering Institute (SEI).
The full representation of the Capability Maturity Model as a set of defined process areas
and practices at each of the five maturity levels was initiated in 1991, with Version 1.1
being completed in January 1993.[5] The CMM was published as a book[6] in 1995 by its
primary authors, Mark C. Paulk, Charles V. Weber, Bill Curtis, and Mary Beth Chrissis.
Superseded by CMMI
The CMM model proved useful to many organizations, but its application in software
development has sometimes been problematic. Applying multiple models that are not
integrated within and across an organization could be costly in training, appraisals, and
improvement activities. The Capability Maturity Model Integration (CMMI) project was
formed to sort out the problem of using multiple CMMs.
For software development processes, the CMM has been superseded by Capability
Maturity Model Integration (CMMI), though the CMM continues to be a general
theoretical process capability model used in the public domain.
Adapted to other processes
The CMM was originally intended as a tool to evaluate the ability of government
contractors to perform a contracted software project. Though it comes from the area of
software development, it can be, has been, and continues to be widely applied as a
general model of the maturity of processes (e.g., IT service management processes) in
IS/IT (and other) organizations.
Model topics
Maturity model
A maturity model can be viewed as a set of structured levels that describe how well the
behaviours, practices and processes of an organisation can reliably and sustainably
produce required outcomes. A maturity model may provide, for example:
• a place to start
• the benefit of a community's prior experiences
• a common language and a shared vision
• a framework for prioritizing actions
• a way to define what improvement means for your organization
A maturity model can be used as a benchmark for comparison and as an aid to
understanding - for example, for comparative assessment of different organizations where
there is something in common that can be used as a basis for comparison. In the case of
the CMM, for example, the basis for comparison would be the organizations' software
development processes.
Structure
The Capability Maturity Model involves the following aspects:
• Maturity Levels: a 5-level process maturity continuum - where the uppermost
(5th) level is a notional ideal state where processes would be systematically
managed by a combination of process optimization and continuous process
improvement.
• Key Process Areas: a Key Process Area (KPA) identifies a cluster of related
activities that, when performed together, achieve a set of goals considered
important.
• Goals: the goals of a key process area summarize the states that must exist for
that key process area to have been implemented in an effective and lasting way.
The extent to which the goals have been accomplished is an indicator of how
much capability the organization has established at that maturity level. The goals
signify the scope, boundaries, and intent of each key process area.
• Common Features: common features include practices that implement and
institutionalize a key process area. There are five types of common features:
Commitment to Perform, Ability to Perform, Activities Performed, Measurement
and Analysis, and Verifying Implementation.
• Key Practices: The key practices describe the elements of infrastructure and
practice that contribute most effectively to the implementation and
institutionalization of the KPAs.
Levels
There are five levels defined along the continuum of the CMM[7] and, according to the
SEI: "Predictability, effectiveness, and control of an organization's software processes are
believed to improve as the organization moves up these five levels. While not rigorous,
the empirical evidence to date supports this belief."
1. Initial (chaotic, ad hoc, individual heroics) - the starting point for use of a new
process.
2. Managed - the process is managed in accordance with agreed metrics.
3. Defined - the process is defined/confirmed as a standard business process, and
decomposed to levels 0, 1 and 2 (the latter being Work Instructions).
4. Quantitatively managed - the process is quantitatively measured and controlled.
5. Optimizing - process management includes deliberate process
optimization/improvement.
Within each of these maturity levels are Key Process Areas (KPAs) which characterise
that level, and for each KPA there are five definitions identified:
1. Goals
2. Commitment
3. Ability
4. Measurement
5. Verification
The KPAs are not necessarily unique to CMM, representing — as they do — the stages
that organizations must go through on the way to becoming mature.
The CMM provides a theoretical continuum along which process maturity can be
developed incrementally from one level to the next. Skipping levels is not
allowed/feasible.
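The no-skipping rule can be sketched as a small check. This is a hypothetical illustration, not part of the CMM itself: the level names are the CMM's, but the idea of passing in a set of "satisfied" level numbers is an assumption made for the example.

```python
MATURITY_LEVELS = {1: "Initial", 2: "Repeatable", 3: "Defined",
                   4: "Managed", 5: "Optimizing"}

def assessed_level(satisfied_kpa_levels):
    """Highest maturity level reached, given the set of levels (2-5) whose
    Key Process Area goals are fully satisfied. Levels cannot be skipped:
    level n counts only if levels 2..n are all satisfied."""
    level = 1  # every organization is at least at the Initial level
    for n in range(2, 6):
        if n not in satisfied_kpa_levels:
            break
        level = n
    return level, MATURITY_LEVELS[level]

print(assessed_level({2, 3}))  # satisfying levels 2 and 3 yields Defined
print(assessed_level({2, 4}))  # level 4 alone does not count: still Repeatable
```

The break in the loop is what encodes "skipping levels is not allowed": satisfying level 4's KPAs has no effect until level 3's are also satisfied.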
N.B.: The CMM was originally intended as a tool to evaluate the ability of government
contractors to perform a contracted software project. It has been used for and may be
suited to that purpose, but critics pointed out that process maturity according to the CMM
was not necessarily mandatory for successful software development. There were/are real-
life examples where the CMM was arguably irrelevant to successful software
development, and these examples include many shrinkwrap companies (also
called commercial-off-the-shelf or "COTS" firms or software package firms). Such firms
would have included, for example, Claris, Apple, Symantec, Microsoft, and Lotus.
Though these companies may have successfully developed their software, they would not
necessarily have considered or defined or managed their processes as the CMM described
as level 3 or above, and so would have fitted level 1 or 2 of the model. This did not - on
the face of it - frustrate the successful development of their software.
Level 1 - Initial (Chaotic)
It is characteristic of processes at this level that they are (typically) undocumented and in
a state of dynamic change, tending to be driven in an ad hoc, uncontrolled and reactive
manner by users or events. This provides a chaotic or unstable environment for the
processes.
Level 2 - Repeatable
It is characteristic of processes at this level that some processes are repeatable, possibly
with consistent results. Process discipline is unlikely to be rigorous, but where it exists it
may help to ensure that existing processes are maintained during times of stress.
Level 3 - Defined
It is characteristic of processes at this level that there are sets of defined and documented
standard processes established and subject to some degree of improvement over time.
These standard processes are in place (i.e., they are the AS-IS processes) and used to
establish consistency of process performance across the organization.
Level 4 - Managed
It is characteristic of processes at this level that, using process metrics, management can
effectively control the AS-IS process (e.g., for software development). In particular,
management can identify ways to adjust and adapt the process to particular projects
without measurable losses of quality or deviations from specifications. Process Capability
is established from this level.
Level 5 - Optimizing
It is a characteristic of processes at this level that the focus is on continually improving
process performance through both incremental and innovative technological
changes/improvements.
At maturity level 5, processes are concerned with addressing statistical common causes of
process variation and changing the process (for example, to shift the mean of the process
performance) to improve process performance. This would be done at the same time as
maintaining the likelihood of achieving the established quantitative process-improvement
objectives.
Software process framework
The software process framework documented is intended to guide those wishing to assess
an organization's or project's consistency with the CMM. For each maturity level there are
five checklist types:
Policy - Describes the policy contents and KPA goals recommended by the CMM.
Standard - Describes the recommended content of select work products described in the
CMM.
Process - Describes the process information content recommended by the CMM. The
process checklists are further refined into checklists for:
• roles
• entry criteria
• inputs
• activities
• outputs
• exit criteria
• reviews and audits
• work products managed and controlled
• measurements
• documented procedures
• training
• tools
Procedure - Describes the recommended content of documented procedures described in
the CMM.
Level overview - Provides an overview of an entire maturity level. The level overview
checklists are further refined into checklists for:
• KPA purposes (Key Process Areas)
• KPA goals
• policies
• standards
• process descriptions
• procedures
• training
• tools
• reviews and audits
• work products managed and controlled
• measurements
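The checklist items lend themselves to a simple gap check. The sketch below is illustrative only: the list reproduces the Process checklist items from the framework, but the helper function and its name are assumptions, not part of the CMM.

```python
PROCESS_CHECKLIST = [
    "roles", "entry criteria", "inputs", "activities", "outputs",
    "exit criteria", "reviews and audits",
    "work products managed and controlled", "measurements",
    "documented procedures", "training", "tools",
]

def missing_process_items(documented):
    """Items from the Process checklist not yet covered by a project."""
    covered = set(documented)
    return [item for item in PROCESS_CHECKLIST if item not in covered]

# Flag what a partially documented project still lacks
print(missing_process_items(["roles", "inputs", "outputs", "training"]))
```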
Tip:
It's easy just to think that people resist change out of sheer awkwardness and lack of
vision. However you need to recognize that for some, change may affect them negatively
in a very real way that you may not have foreseen. For example, people who've
developed expertise in (or have earned a position of respect from) the old way of doing
things can see their positions severely undermined by change.
At stage 3 of the Change Curve, people stop focusing on what they have lost. They start
to let go, and accept the changes. They begin testing and exploring what the changes
mean, and so learn the reality of what's good and not so good, and how they must adapt.
By stage 4, they not only accept the changes but also start to embrace them: They rebuild
their ways of working. Only when people get to this stage can the organization really
start to reap the benefits of change.
Using the Change Curve
With knowledge of the Change Curve, you can plan how you'll minimize the negative
impact of the change and help people adapt more quickly to it. Your aim is to make the
curve shallower and narrower, as you can see in figure 2.
As someone introducing change, you can use your knowledge of the Change Curve to
give individuals the information and help they need, depending on where they are on the
curve. This will help you accelerate change, and increase its likelihood of success.
Actions at each stage are:
Stage 1:
At this stage, people may be in shock or in denial. Even if the change has been well
planned and you understand what is happening, this is when the reality of the change
hits, and people need time to adjust. Here, people need information, need to understand
what is happening, and need to know how to get help.
This is a critical stage for communication. Make sure you communicate often, but also
ensure that you don't overwhelm people: They'll only be able to take in a limited amount
of information at a time. But make sure that people know where to go for more
information if they need it, and ensure that you take the time to answer any questions that
come up.
Stage 2:
As people start to react to the change, they may start to feel concern, anger, resentment or
fear. They may resist the change actively or passively. They may feel the need to express
their feelings and concerns, and vent their anger.
For the organization, this stage is the "danger zone". If this stage is badly managed, the
organization may descend into crisis or chaos.
So this stage needs careful planning and preparation. As someone responsible for change,
you should prepare for this stage by carefully considering the impacts and objections that
people may have.
Make sure that you address these early with clear communication and support, and by
taking action to minimize and mitigate the problems that people will experience. As the
reaction to change is very personal and can be emotional, it is often impossible to
preempt everything, so make sure that you listen and watch carefully during this stage (or
have mechanisms to help you do this) so you can respond to the unexpected.
Stage 3:
This is the turning point for individuals and for the organization. Once you turn the
corner to stage 3, the organization starts to come out of the danger zone, and is on the
way to making a success of the changes.
Individually, as people's acceptance grows, they'll need to test and explore what the
change means. They will do this more easily if they are helped and supported to do so,
even if this is a simple matter of allowing enough time for them to do so.
As the person managing the changes, you can lay good foundations for this stage by
making sure that people are well trained, and are given early opportunities to experience
what the changes will bring. Be aware that this stage is vital for learning and acceptance,
and that it takes time: Don't expect people to be 100% productive during this time, and
build in the contingency time so that people can learn and explore without too much
pressure.
Stage 4:
This stage is the one you have been waiting for! This is where the changes start to
become second nature, and people embrace the improvements to the way they work.
As someone managing the change, you'll finally start to see the benefits you worked so
hard for. Your team or organization starts to become productive and efficient, and the
positive effects of change become apparent.
Whilst you are busy counting the benefits, don't forget to celebrate success! The journey
may have been rocky, and it will have certainly been at least a little uncomfortable for
some people involved: Everyone deserves to share the success. What's more, by
celebrating the achievement, you establish a track record of success, which will make
things easier the next time change is needed.
The change curve is a behavioral model of group and individual reactions to the process
of change. The communicator’s task in any change process is to try to “concertina” the
curve, by helping people to adjust and enthusiastically support change as quickly as
possible. This requires a communication strategy for each angle of the curve.
• Satisfaction: Listen to employees; roll out your strategy.
• Denial: Maximise face-to-face communication; address the "me" issues.
• Resistance: Involve yourself in informal channels; use multiple communication
forms.
• Exploration: Communicate timelines for the project; encourage involvement.
• Hope: Repeat and reinforce your objectives and strategy; build buy-in.
• Commitment: Reward behaviour change.
The change curve can also track commitment to change. Try putting up posters of the
change curve in each department and ask employees to fix a spot on it to show their
current mood. The feedback, which is anonymous, can be used to assess the overall
feeling of the company.
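The poster exercise amounts to a simple tally. A minimal sketch follows: the stage names are those of the curve listed above, while the sample responses are invented for illustration.

```python
from collections import Counter

STAGES = ["Satisfaction", "Denial", "Resistance",
          "Exploration", "Hope", "Commitment"]

# Invented sample of anonymous responses from the poster exercise
responses = ["Denial", "Resistance", "Resistance", "Exploration",
             "Hope", "Resistance", "Commitment", "Exploration"]

tally = Counter(responses)
for stage in STAGES:
    share = tally[stage] / len(responses)
    print(f"{stage:<12} {share:.0%}")
```

Plotting these shares per department over successive weeks would show whether the overall mood is moving along the curve toward commitment.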
CHANGE EQUATION
The Formula for Change was created by Richard Beckhard and David Gleicher, refined
by Kathie Dannemiller and is sometimes called Gleicher's Formula.
This formula provides a model to assess the relative strengths affecting the likely success
or otherwise of organisational change programs.
D x V x F > R
Three factors must be present for meaningful organizational change to take place. These
factors are:
D = Dissatisfaction with how things are now;
V = Vision of what is possible;
F = First, concrete steps that can be taken towards the vision.
If the product of these three factors is greater than
R = Resistance,
then change is possible. Because D, V, and F are multiplied, if any one is absent or low,
then the product will be low and therefore not capable of overcoming the resistance.
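The multiplicative effect can be illustrated directly. This is a hypothetical sketch: the formula does not prescribe units, so the numeric scores used below are an assumption made for the example.

```python
def change_possible(d, v, f, r):
    """Gleicher's Formula: change is possible when D x V x F > R.
    d, v, f: dissatisfaction, vision, first steps (illustrative 0-10 scores)
    r: resistance, on a scale chosen so the product comparison is meaningful."""
    return d * v * f > r

print(change_possible(8, 7, 6, 100))  # True: 8 * 7 * 6 = 336 > 100
print(change_possible(8, 7, 0, 100))  # False: no first steps, so the product is 0
```

The second call shows the key property of the formula: strong dissatisfaction and a clear vision count for nothing if there are no concrete first steps, because a single zero factor zeroes the whole product.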
To ensure a successful change it is necessary to use influence and strategic thinking in
order to create vision and identify those crucial, early steps towards it. In addition, the
organization must recognize and accept the dissatisfaction that exists by communicating
industry trends, leadership ideas, best practice and competitive analysis to identify the
necessity for change.
History
The original formula, as created by Gleicher and authored by Beckhard and Harris, is:
C = (ABD) > X
Where C is change, A is the status quo dissatisfaction, B is a desired clear state, D is
practical steps to the desired state, and X is the cost of the change.
It was Kathleen Dannemiller who dusted off the formula and simplified it, making it
more accessible for consultants and managers. Dannemiller and Jacobs first published the
more common version of the formula in 1992 (and Jacobs, 1994). Paula Griffin stated it
accurately (in Wheatley et al., 2003) when she wrote that Gleicher started it, Beckhard
and Harris promoted it, but it really took off when Dannemiller changed the language to
make it easier to remember and use.
The Change Equation shows how to tap into the power of the diversity already in your
organization and turn it into a "pluralistic" workplace where change is not something to
resist but something to embrace. Organizational change agents, business leaders, human
resource managers, and anyone who wants to make his or her organization stronger and
more competitive will find in this readable volume a wealth of practical solutions that
will help any forward-looking organization thrive in the new economy.
Richard Beckhard and Rubin Harris first published their change equation in 1977 in
"Organizational Transitions: Managing Complex Change", and it's still useful today. It
states that for change to happen successfully, dissatisfaction, vision, and first steps
must together outweigh the resistance to change.
Historically, the Beckhard change equation can be seen as a major milestone in the field
of Organisational Development in that it acknowledged the role and importance of
employee involvement in change.
It represented a significant shift in management thinking from the "command and
control" of the industrial age to a people centric approach.
Richard Beckhard has long been considered one of the founders of organisation
development and the creator of the core framework for a large system change. He
articulated a generic change framework, which comprises four main themes:
(1) Determining the need for change - We must be clear why things need to change.
We need to articulate why it is unacceptable and undesirable to conduct business in the
same way. If we are not dissatisfied with the present situation, then there is no motivation
to change.
(2) Articulating a desired future - Ensuring that your employees fully understand and
can picture their future as part of a changed organisation and can see their place in the
new organization.
(3) Assessing the present and what needs to be changed in order to move to the
desired future - Making sure that each employee understands what they need to know
and do to prepare themselves for the change and what steps they need to take in order
for this change to be successful.
(4) Getting to the desired future by managing the transition - using external specialist
help and appropriate processes - see the diagram below.
1. Introduction
As we approach the twenty-first century there can be little doubt that successful
organisations of the future must be prepared to embrace the concept of change
management. Change management has been an integral part of organisational theory and
practice for a long time, however, many theorists and practitioners now believe that the
rate of change that organisations are subjected to is set to increase significantly in the
future. Indeed, some even go so far as to suggest that the future survival of all
organisations will depend on their ability to successfully manage change (Burnes 1996;
Peters 1989; Toffler 1983). It could be argued that the study of organisational change
management should be the preserve of the social scientist or the business manager. After
all, much of the theory has evolved from social and business studies and not from the
field of computer science. However, information systems do not exist in a vacuum, and it
is widely accepted that technology, particularly Information Technology (IT),
is one of the major enablers of organisational change (Markus and Benjamin 1997; Scott-
Morton 1991). The successful development of any information system must address
sociological issues including the effects of the system itself on the organisation into
which it is introduced. Paul (1994) maintains that information systems must be developed
specifically for change as they must constantly undergo change to meet changing
requirements. Clearly, organisational change is an important issue. This paper will focus
on the topic of organisational change management from an information systems
perspective. The paper will examine the issues raised during a review of the change
management literature as background to the issues raised in other papers in this theme of
the book. As in the Management In The 90s (MIT90s) study (Scott-Morton 1991), a very
broad definition of the term IT is used to include: computers of all types, hardware,
software, communications networks and the integration of computing and
communications technologies.
2. Overview of the Field
Many of the theories and models relating to the management of organisational change
have evolved from the social sciences (Burnes 1996; Bate 1994; Dawson 1994).
Information Systems (IS) research is of course a much newer discipline. However, the
socio-technical nature of information systems is now recognised and many of the IS
theories and models have been adopted and adapted from the social sciences (Yetton et
al. 1994; Benjamin and Levinson 1993). This paper presents a discussion on the change
management literature drawn from a social science perspective which is then related to an
IS perspective of IT-enabled change. We will begin by giving a broad overview of
change management and examining the nature of change and its applicability to the IS
field. We will then briefly examine the foundations of change management theory.
Specifically, the three main theories that underpin the different approaches to change
management are examined which concentrate on individual, group and organisation-wide
change respectively. The paper will then examine the major approaches to change
management, namely, the planned, emergent and contingency approaches. The planned
approach to change, based on the work of Lewin (1958), has dominated change
management theory and practice since the early 1950s. The planned approach views the
change process as moving from one fixed state to another. In contrast, the emergent
approach, which appeared in the 1980s (Burnes 1996), views change as a process of
continually adapting an organisation to align with its environment. The contingency
approach is a hybrid approach which advocates that there is not ‘one best way’ to manage
change. The paper will then examine change management within the context of
Information Systems (IS) theory and practice. In particular, the paper will investigate the
fundamental characteristics of IT-enabled change and will discuss how this is different to
the management of change in pure social systems.
Finally, the Improvisational Change Model proposed by Orlikowski and Hofman (1997)
will be examined in detail. This model is based on the same principles as the emergent
approach to change management and, similarly, Orlikowski and Hofman (1997) maintain
that their model is more suitable than the traditional Lewinian models for modern,
networked organisations using adaptive technologies.
3 Change Management
Although it has become a cliché, it is nevertheless true to say that the volatile
environment in which modern organisations find themselves today means that the ability
to manage change successfully has become a competitive necessity (Burnes 1996; Kanter
1989; Peters and Waterman 1982). The aim of this section is to provide a broad overview
of the substance of change and of change management.
Organisational change is usually required when changes occur to the environment in
which an organisation operates. There is no accepted definition of what constitutes this
environment, however, a popular and practical working definition is that the
environmental variables which influence organisations are political, economic,
sociological and technological (Jury 1997).
Change has been classified in many different ways. Most theorists classify change
according to the type or the rate of change required and this is often referred to as the
substance of change (Dawson 1994). Bate (1994) proposes a broad definition for the
amount of change which he argues may be either incremental or transformational. Bate
maintains that incremental change occurs when an organisation makes a relatively minor
change to its technology, processes or structure whereas transformational change occurs
when radical change programmes are implemented. Bate also argues that modern
organisations are subject to continual environmental change and consequently they must
constantly change to realign themselves. Although there is a general recognition of the
need to successfully manage change in modern organisations, questions regarding the
substance of change and how the process can be managed in today’s context remain
largely unanswered. There are numerous academic frameworks available in the
management literature that seek to explain the issues related to organisational change and
many of these frameworks remain firmly rooted in the work of Lewin (1958). Dawson
(1994) points out that, almost without exception, contemporary management texts
uncritically adopt Lewin’s 3-stage model of planned change and that this approach is now
taught on most modern management courses. This planned (Lewinian) approach to
organisational change is examined in detail later in the paper. Information systems are
inherently socio-technical systems and, therefore, many of the theories and frameworks
espoused by the social sciences for the management of change have been adopted by the
IS community. Consequently, even the most modern models for managing IT-enabled
change are also based on the Lewinian model (Benjamin and Levinson 1993). Figure 1
depicts the most popular and prominent models for understanding organisational change
which are examined in detail in later sections of this paper. These models will
subsequently be compared with the main change management models adopted by the IS
community.
Figure 1: Approaches to change management. The main approaches are the planned,
emergent and contingency approaches; the contextualist and processual perspectives fall
under the emergent approach.
Group dynamics theorists believe that the focus of change should be at the group or team
level and that it is ineffectual to concentrate on individuals to bring about change as they
will be pressured by the group to conform. The group dynamics school has been
influential in developing the theory and practice of change management and of all the
schools they have the longest history (Schein 1969). Lewin (1958) maintains that the
emphasis on effecting organisational change should be through targeting group behaviour
rather than individual behaviour since people in organisations work in groups and,
therefore, individual behaviour must be seen, modified or changed to align with the
prevailing values, attitudes and norms (culture) of the group. The group dynamics
perspective manifests itself as the modern management trend for organisations to view
themselves as teams rather than merely as a collection of individuals.
Lewin (1958) first developed the Action Research (AR) model as a planned and
collective approach to solving social and organisational problems. The theoretical
foundations of AR lie in Gestalt-Field and Group Dynamics theory. Burnes (1996)
maintains that this model was based on the basic premise that an effective approach to
solving organisational problems must involve rational, systematic analysis of the issues in
question. AR overcomes “paralysis through analysis” (Peters and Waterman 1982: 221)
as it emphasises that successful action is based on identifying alternative solutions,
evaluating the alternatives, choosing the optimum solution and, finally, that change is
achieved by taking collective action and implementing the solution. The AR approach
advocates the use of a change agent and focuses on the organisation, often represented by
senior management. The AR approach also focuses on the individuals affected by the
proposed change. Data related to the proposed change is collected by all the groups
involved and is iteratively analysed to solve any problems. Although the AR approach
emphasizes group collaboration, Burnes (1996) argues that cooperation alone is not
always enough and that there must also be a 'felt-need' among all the participants.
Within the social sciences, an approach described by Burnes (1996) as the emergent
approach is a popular contemporary alternative to the planned approach to the
management of change. The emergent approach was popularised in the 1980s and
includes what other theorists have described as processual or contextualist perspectives
(Dawson 1994). However, these perspectives share the common rationale that change
cannot and should not be ‘frozen’ nor should it be viewed as a linear sequence of events
within a given time period as it is with a planned approach. In contrast, with an emergent
approach, change is viewed as a continuous process.
The modern business environment is widely acknowledged to be dynamic and uncertain
and consequently, theorists such as Wilson (1992) and Dawson (1994) have challenged
the appropriateness of a planned approach to change management. They advocate that the
unpredictable nature of change is best viewed as a process which is affected by the
interaction of certain variables (depending on the particular theorist’s perspective) and
the organisation. Dawson (1994) proposed an emergent approach based on a processual
perspective which he argues is not prescriptive but is analytical and is thus better able to
achieve a broad understanding of change management within a complex environment.
Put simply, advocates of the processual perspective maintain that there cannot be a
prescription for managing change due to the unique temporal and contextual factors
affecting individual organisations. Dawson succinctly summarises this perspective, saying
that "change needs to be managed as an ongoing and dynamic process and not a single
reaction to adverse contingent circumstance" (Dawson 1994: 182).
For advocates of the emergent approach it is the uncertainty of the external environment
which makes the planned approach inappropriate. They argue that rapid and constant
changes in the external environment require appropriate responses from organisations
which in turn force them to develop an understanding of their strategy, structure, systems,
people, style and culture and how these can affect the change process (Dawson 1994;
Pettigrew and Whipp 1993; Wilson 1992). This has in turn led to a requirement for a
‘bottom-up’ approach to planning and implementing change within an organisation.
The rapid rate and amount of environmental change has prevented senior managers from
effectively monitoring the business environment to decide upon appropriate
organisational responses. Pettigrew and Whipp (1993) maintain that emergent change
involves linking action by people at all levels of a business. Therefore, with an emergent
approach to change, the responsibility for organisational change is devolved and
managers must take a more enabling rather than controlling approach to managing.
Although the proponents of emergent change may have different perspectives there are,
nevertheless, some common themes that relate them all. Change is a continuous process
aimed at aligning an organisation with its environment and it is best achieved through
many small-scale incremental changes which, over time, can amount to a major
organisational transformation. Furthermore, this approach requires the consent of those
affected by change, since it is only through their behaviour that organisational structures,
technologies and processes move from abstract concepts to concrete realities (Burnes
1996).
Figure 2: Approaches to change and the environment. A planned approach suits a stable
environment, while an emergent approach suits a turbulent environment.
Previous sections of this paper have dealt with the different approaches to managing
organisational change taken from a social science perspective. Regardless of which
model is adopted, the requirement for an organisation to change is generally caused by
changes in its environmental variables which many academics and practitioners agree are
political, economic, sociological and technological (Jury 1997; Scott-Morton 1991). This
section will focus on one of these environmental variables, namely technology, in the
specific form of IT, and will examine the major issues that are particular to IT-enabled
change. Woodward’s (1965) study demonstrated the need to take into account
technological variables when designing organisations and this gave credibility to the
argument for technological determinism which implies that organisational structure is
‘determined’ by the form of the technology. However, despite the general acceptance that
the application of change management techniques can considerably increase the
probability of a project’s success, many IT-enabled change projects have failed for
non-technical reasons. Some projects, such as the London Ambulance Service Computer
Aided Dispatch System, have failed with fatal consequences (Beynon-Davies 1995).
Markus and Benjamin (1997) attribute this to what they describe as the magic bullet
theory of IT whereby IT specialists erroneously believe in the magic power of IT to
create organisational transformation. Some academics argue that although IT is an
enabling technology it cannot by itself create organisational change (Markus and
Benjamin 1997; McKersie and Walton 1991). McKersie and Walton (1991) maintain that
to create IT-enabled organisational change it is necessary to actively manage the changes.
They also argue that the effective implementation of IT is, at its core, a task of managing
change. The Management In The 1990s (MIT90) program (Scott Morton 1991) proposed
a framework for understanding the interactions between the forces involved in IT-enabled
organisational change. A simplified adaptation of this framework Structure
Processes
Skills & Roles
Strategy IT
External
Technological
Environment
External
Socioeconomic
Environment
A key example of this type of model is presented by Orlikowski and Hofman (1997). We
will review this model here to provide insight into the types of models which are likely to
provide a focus for research in the area in the near future. The model also provides a
strong and interesting framework against which to view some of the papers that follow in
this theme of the book. Theirs is an improvisational model for managing technological
change which is an alternative to the predominant Lewinian models. They maintain that
IT-enabled change managers should take as a model the Trukese navigator who begins
with an objective rather than a plan and responds to conditions as they arise in an ad-hoc
fashion. They also argue that traditional Lewinian change models are based on the
fallacious assumption that change occurs only during a specified period whereas they
maintain that change is now a constant. This is similar to the arguments of the proponents
of the emergent change management approach which were examined earlier in this paper.
The origins of Orlikowski and Hofman’s (1997) Improvisational Change Model can be
found in a study by Orlikowski (1996) which examined the use of new IT within one
organisation over a two year period. The study concluded by demonstrating the critical
role of situated change enacted by organisational members using groupware technology
over time. Mintzberg (1987) first made the distinction between deliberate and emergent
strategies and Orlikowski (1996) argues that the perspectives which have influenced
studies of IT-enabled organisational change have similarly neglected emergent change.
Orlikowski challenges the arguments that organisational change must be planned, that
technology is the primary cause of technology-based organisational transformation and
that radical changes always occur rapidly and discontinuously. In contrast, she maintains
that organisational transformation is an ongoing improvisation enacted by organisational
actors trying to make sense of and act coherently in the world.
Orlikowski and Hofman’s (1997) Improvisational Change Model is based on two major
assumptions. First, that changes associated with technology implementations constitute
an ongoing process rather than an event with an end point after which an organisation can
return to a state of equilibrium. Second, that every technological and organisational
change associated with the ongoing process cannot be anticipated in advance. Based on
these assumptions, Orlikowski and Hofman (1997) have identified three different types of
change:
· Anticipated Change. Anticipated changes are planned ahead of time and occur as
intended. For example, the implementation of e-mail that accomplishes its intended aim of
facilitating improved communications.
· Opportunity-Based Change. Opportunity-Based changes are not originally anticipated
but are intentionally introduced during the ongoing change process in response to an
unexpected opportunity. For example, as companies gain experience with the World
Wide Web they may deliberately respond to unexpected opportunities to leverage its
capabilities.
· Emergent Change. Emergent changes arise spontaneously from local innovation and are
not originally anticipated or intended. For example, the use of e-mail as an informal
grapevine for disseminating rumours throughout an organisation.
Orlikowski and Hofman (1997) maintain that both anticipated and opportunity-based
changes involve deliberate action in contrast to emergent changes which arise
spontaneously and usually tacitly from organisational members’ actions over time.
Furthermore, they contend that the three types of change usually build iteratively on each
other in an undefined order over time. They also argue that practical change management
using the Improvisational Change Model requires a set of processes and mechanisms to
recognise the different types of change as they occur and to respond effectively to them.
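As an illustrative sketch (not part of Orlikowski and Hofman's own work), the three change types and their deliberate versus spontaneous character can be modelled as a simple enumeration:

```python
from enum import Enum

class ChangeType(Enum):
    """Orlikowski and Hofman's (1997) three types of change."""
    ANTICIPATED = "planned ahead of time and occurs as intended"
    OPPORTUNITY_BASED = "introduced deliberately in response to an unexpected opportunity"
    EMERGENT = "arises spontaneously from local innovation"

def is_deliberate(change: ChangeType) -> bool:
    """Anticipated and opportunity-based changes involve deliberate action;
    emergent changes arise spontaneously and usually tacitly."""
    return change in (ChangeType.ANTICIPATED, ChangeType.OPPORTUNITY_BASED)

print(is_deliberate(ChangeType.OPPORTUNITY_BASED))  # True
print(is_deliberate(ChangeType.EMERGENT))           # False
```

The point of the sketch is simply that two of the three types share the property of deliberate action, while all three may build iteratively on each other in an undefined order over time.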
Figure: Aligning the Key Change Dimensions (from Orlikowski and Hofman 1997: 18)
Orlikowski and Hofman’s (1997) research suggested that the interaction between these
key change dimensions must ideally be aligned or at least not in opposition. Their
research also suggested that an Improvisational Change Model may only be appropriate for
introducing open-ended technology into organisations with adaptive cultures. Open-
ended technology is defined by them as technology which is locally adaptable by end
users with customizable features and the ability to create new applications.
They maintain that open-ended technology is typically used in different ways across an
organisation. Orlikowski and Hofman appear to share similar views to the contingency
theorists discussed earlier as they do not subscribe to the view that there is ‘one best way’
for managing IT-enabled change.
Summary
The dominant theories and models relating to the management of change have evolved
from the social sciences. IS research is comparatively new, and the socio-technical
nature of information systems has caused most IS theories and models to be adapted from
the social sciences. The main theories that provide the foundation for general change
management approaches are the individual, group dynamics and the open systems
perspectives. The planned approach to change management tends to concentrate on
changing the behaviour of individuals and groups through participation. In contrast, the
newer emergent approach to change management focuses on the organisation as an open
system with its objective being to continually realign the organisation with its changing
external environment. Lewin’s (1958) model is a highly influential planned approach
model that underpins many of the change management models and techniques today and
most contemporary management texts adopt this 3-phase unfreeze, change and re-freeze
model. The rationale of the newer emergent approach is that change should not be
‘frozen’ or viewed as a linear sequence of events but that it should be viewed as an
ongoing process. Contingency theory is a rejection of the ‘one best way’ approach taken
by planned and emergent protagonists. The contingency approach adopts the perspective
that an organisation is ‘contingent’ on the situational variables it faces and, therefore, it
must adopt the most appropriate change management approach.
Many IT-enabled change projects fail despite the general acceptance that change
management can considerably increase the probability of a project’s success. This is
often attributable to the misconception that IT is not only an enabling technology but that
it can also create organisational change. The highly influential MIT90s framework is
useful for understanding the interactions between the forces involved in IT-enabled
organisational change which must be aligned to create successful organisations. IT-
enabled change is different from changes driven by other environmental concerns and the
process must be understood to be managed. Consequently, many IS change models have
adopted and adapted the Lewinian unfreeze, change and re-freeze approach to change
management. However, in a situation reminiscent of the developments within the social
sciences, a number of new IT-enabled change management models are now emerging
which are based on the emergent or contingent approaches to change management.
Orlikowski and Hofman (1997) have proposed an improvisational model for managing
technological change as one alternative to the predominant Lewinian models. This
improvisational model is based on the assumptions that technological changes constitute
an ongoing process and that every change associated with the ongoing process cannot be
anticipated beforehand. Based on these assumptions Orlikowski and Hofman (1997) have
identified three different types of change, namely, anticipated, opportunity-based and
emergent changes. Both anticipated and opportunity-based changes involve deliberate
action in contrast to emergent changes which arise spontaneously and usually tacitly from
organisational actors’ actions over time. These three types of change build iteratively on
each other in an undefined order over time. Orlikowski and Hofman (1997) suggest that
the critical enabling conditions which must be fulfilled to allow their Improvisational
Change Model to be successfully adopted for implementing technology are aligning the
key dimensions of change and allocating dedicated resources to provide ongoing support
for the change process. The review of models of change presented in this paper provides
background for the following papers in this theme, and provides a developing research
perspective against which to view the issues discussed by the other authors.
If you assign a task to someone and the job does not quite get done well enough, one of
the most likely reasons is that:
· You have delegated the task to someone who is unwilling – or unable – to complete the
job, and have then remained relatively uninvolved or 'hands-off', or
· You may have been too directive or 'hands-on' with a capable person who was quite able
to complete the assignment with little assistance from you; you just ended up
demotivating him/her.
Figure: Coaching styles (Direct, Guide, Excite and Delegate), applied after identifying
the individual's motivations.
8. The Use of Giving Alms: There is little use in giving alms to those who need
assistance. Why be overly compassionate towards people who are only looking for a few
"coaching" words to find their own way or solution?
Coach and train your people to greatness. Empowerment alone is not enough. You must
train and coach your people to enhance their learning ability and performance. Coaching
is the key to unlocking the potential of your people, your organization, and yourself. It
increases your effectiveness as a leader. As a coach, you must help your people grow and
achieve more by inspiring them, asking effective questions and providing feedback. Find
the right combination of instructor-led training and coaching follow-ups to achieve
success.
The High Low Matrix can help managers overcome one of the more challenging aspects
of their role which is understanding what motivates their employees. It's easy to assume
that because you are motivated by knowing you did a good job or by making an impact
on your environment, that others feel this enthusiasm as well. In the real world, people
have many different motivations.
In turn, people also have different levels of skill sets for particular tasks. Their level of
skill can often depend on their experience, the level of training they have received, or the
type of task itself.
Since most coaching techniques rely on the employee's skills and their will to accomplish
a goal, it is important to understand how these two aspects work together. This
knowledge will help you to better craft your approach with your employees and teams to
get the best results possible from each individual.
Assessing their skill is typically a much simpler task as it is likely why you are here. You
have no doubt seen results from your employee or team that do not meet your
expectations and therefore have determined a change must be made. Again, to determine
the exact skill that the associate is struggling with that has caused the poor performance,
the IGROW Model is recommended.
Now that you have determined both the employee's skill level and their will level, it is
time to discuss the coaching techniques that you should apply based on where the
employee falls in the High Low Matrix coaching model.
As we have discussed, each scenario you are faced with requires a different approach or
coaching technique to achieve the desired results. To ensure you are successful in your
approach, be sure to spend time prior to your coaching session thinking about where the
associate falls, what motivates them, and what options you may want to offer. Being
prepared for the discussion will make a great difference.
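As a hypothetical sketch, the skill/will pairing behind the High Low Matrix can be expressed as a simple lookup table. The assignment of the four coaching styles to the quadrants below is an illustrative assumption, not something the text prescribes:

```python
# Illustrative sketch of a skill/will coaching matrix.
# The quadrant-to-style mapping is an assumption for illustration only.
COACHING_MATRIX = {
    ("low", "low"):   "direct",    # low skill, low will: give clear direction
    ("low", "high"):  "guide",     # low skill, high will: guide and train
    ("high", "low"):  "excite",    # high skill, low will: identify motivations
    ("high", "high"): "delegate",  # high skill, high will: delegate and step back
}

def coaching_style(skill: str, will: str) -> str:
    """Return the suggested coaching approach for an employee's skill/will levels."""
    return COACHING_MATRIX[(skill, will)]

print(coaching_style("high", "high"))  # delegate
```

The sketch captures the core idea of the section: assess skill and will separately, then choose the coaching technique from where the employee falls in the matrix.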
This definition extends to organizational values, described as "beliefs and ideas about
what kinds of goals members of an organization should pursue and ideas about the
appropriate kinds or standards of behavior organizational members should use to achieve
these goals". From organizational values develop organizational norms, guidelines, or
expectations that prescribe appropriate kinds of behavior by employees in particular
situations and control the behavior of organizational members towards one another.
Strong culture is said to exist where staff respond to stimulus because of their alignment
to organizational values. In such environments, strong cultures help firms operate like
well-oiled machines, cruising along with outstanding execution and perhaps minor
tweaking of existing procedures here and there.
Conversely, there is weak culture where there is little alignment with organizational
values and control must be exercised through extensive procedures and bureaucracy.
Where culture is strong—people do things because they believe it is the right thing to do
—there is a risk of another phenomenon, Groupthink. "Groupthink" was described by
Irving L. Janis. He defined it as "...a quick and easy way to refer to a mode of thinking
that people engage in when they are deeply involved in a cohesive in-group, when the
members' strivings for unanimity override their motivation to realistically appraise
alternative courses of action." This is a state where people, even if they have different ideas, do not challenge
organizational thinking, and therefore there is a reduced capacity for innovative thoughts.
This could occur, for example, where there is heavy reliance on a central charismatic
figure in the organization, or where there is an evangelical belief in the organization’s
values, or also in groups where a friendly climate is at the base of their identity
(avoidance of conflict). In fact, groupthink is very common; it happens all the time, in
almost every group. Members who are defiant are often turned down or seen as a negative
influence by the rest of the group, because they bring conflict.
Innovative organizations need individuals who are prepared to challenge the status quo—
be it group-think or bureaucracy, and also need procedures to implement new ideas
effectively.
Several methods have been used to classify organizational culture. Some are described
below:
Hofstede (1980) demonstrated that there are national and regional cultural groupings
that affect the behavior of organizations.
Hofstede looked for national differences across more than 100,000 IBM employees in
different parts of the world, in an attempt to find aspects of culture that might influence
business behavior.
Power distance - The degree to which a society expects there to be differences in the
levels of power. A high score suggests that there is an expectation that some individuals
wield larger amounts of power than others. A low score reflects the view that all people
should have equal rights.
Uncertainty avoidance - The extent to which a society accepts uncertainty and risk.
Masculinity vs. femininity - The value placed on traditionally male or female values.
Male values, for example, include competitiveness, assertiveness, ambition, and the
accumulation of wealth and material possessions.
Deal and Kennedy defined organizational culture as the way things get done around here.
They measured organizations along two dimensions: feedback, where a quick response
indicates rapid feedback, and risk, which is high in fields such as oil prospecting or
military aviation.
Charles Handy
Charles Handy (1985) popularized the 1972 work of Roger Harrison on organizational
culture, which some scholars have used to link organizational structure to organizational
culture. He describes Harrison's four types thus:
A Power Culture concentrates power among a few. Control radiates from the center like a
web, and power and influence spread out from a central figure or group. Power derives
from the top person, and personal relationships with that individual matter more than any
formal title or position. Power Cultures have few rules and little bureaucracy; swift
decisions can ensue.
In a Role Culture, people have clearly delegated authorities within a highly defined
structure. Typically, these organizations form hierarchical bureaucracies. Power derives
from a person's position and little scope exists for expert power. Such organizations are
controlled by procedures, role descriptions and authority definitions, and predictable,
consistent systems and procedures are highly valued.
By contrast, in a Task Culture, teams are formed to solve particular problems. Power
derives from expertise as long as a team requires that expertise. These cultures often
feature the multiple reporting lines of a matrix structure. It is a small-team approach, with
teams that are highly skilled and specialised in their own areas of experience.
A Person Culture exists where all individuals believe themselves superior to the
organization. Survival can become difficult for such organizations, since the concept of
an organization suggests that a group of like-minded individuals pursue the
organizational goals. Some professional partnerships can operate as person cultures,
because each partner brings a particular expertise and clientele to the firm.
Edgar Schein
Schein defines organizational culture as "a pattern of shared basic assumptions that was
learned by a group as it solved its problems of external adaptation and internal
integration, that has worked well enough to be considered valid and, therefore, to be
taught to new members as the correct way you perceive, think, and feel in relation to
those problems" (Schein, 2004, p. 17).
At the first and most cursory level of Schein's model are organizational attributes that can
be seen, felt and heard by the uninitiated observer - collectively known as artifacts.
Included are the facilities, offices, furnishings, visible awards and recognition, the way
that its members dress, how each person visibly interacts with each other and with
organizational outsiders, and even company slogans, mission statements and other
operational creeds.
The next level deals with the professed culture of an organization's members - the values.
At this level, local and personal values are widely expressed within the organization.
Organizational behavior at this level usually can be studied by interviewing the
organization's membership and using questionnaires to gather attitudes about
organizational membership.
At the third and deepest level, the organization's tacit assumptions are found. These are
the elements of culture that are unseen and not cognitively identified in everyday
interactions between organizational members. Additionally, these are the elements of
culture which are often taboo to discuss inside the organization. Many of these 'unspoken
rules' exist without the conscious knowledge of the membership. Those with sufficient
experience to understand this deepest level of organizational culture usually become
acclimatized to its attributes over time, thus reinforcing the invisibility of their existence.
Surveys and casual interviews with organizational members cannot draw out these
attributes; rather, much more in-depth means are required to first identify and then understand
organizational culture at this level. Notably, culture at this level is the underlying and
driving element often missed by organizational behaviorists.
Robert A. Cooke
Robert A. Cooke, PhD, defines culture as the behaviors that members believe are
required to fit in and meet expectations within their organization. The Organizational
Culture Inventory measures twelve behavioral norms that are grouped into three
general types of cultures:
•Constructive Cultures, in which members are encouraged to interact with people and
approach tasks in ways that help them meet their higher-order satisfaction needs.
•Passive/Defensive Cultures, in which members believe they must interact with people in
ways that will not threaten their own security.
•Aggressive/Defensive Cultures, in which members are expected to approach tasks in
ways that protect their status and security.
The Constructive Cluster includes cultural norms that reflect expectations for members to
interact with others and approach tasks in ways that will help them meet their higher
order satisfaction needs for affiliation, esteem, and self-actualization.
•Achievement
•Self-Actualizing
•Humanistic-Encouraging
•Affiliative
Organizations with Constructive cultures encourage members to work to their full
potential, resulting in high levels of motivation, satisfaction, teamwork, service quality,
and sales growth. Constructive norms are evident in environments where quality is
valued over quantity, creativity is valued over conformity, cooperation is believed to lead
to better results than competition, and effectiveness is judged at the system level rather
than the component level. These types of cultural norms are consistent with (and
supportive of) the objectives behind empowerment, total quality management,
transformational leadership, continuous improvement, re-engineering, and learning
organizations.
Norms that reflect expectations for members to interact with people in ways that will not
threaten their own security are in the Passive/Defensive Cluster.
•Approval
•Conventional
•Dependent
•Avoidance
The Aggressive/Defensive Cluster includes cultural norms that reflect expectations for
members to approach tasks in ways that protect their status and security.
•Oppositional
•Power
•Competitive
•Perfectionistic
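The grouping of the twelve norms into Cooke's three clusters, as listed above, can be sketched as a simple data structure. This is an illustrative arrangement of the lists in the text, not part of the Organizational Culture Inventory itself:

```python
# Illustrative sketch: Cooke's three OCI clusters and their twelve behavioral norms.
OCI_CLUSTERS = {
    "Constructive": [
        "Achievement", "Self-Actualizing", "Humanistic-Encouraging", "Affiliative",
    ],
    "Passive/Defensive": [
        "Approval", "Conventional", "Dependent", "Avoidance",
    ],
    "Aggressive/Defensive": [
        "Oppositional", "Power", "Competitive", "Perfectionistic",
    ],
}

# The inventory measures twelve norms in total, four per cluster.
total_norms = sum(len(norms) for norms in OCI_CLUSTERS.values())
print(total_norms)  # 12
```

Arranging the norms this way simply makes explicit that each of the three general culture types is characterized by four of the twelve measured norms.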
G. Johnson described a cultural web, identifying a number of elements that can be used
to describe or influence Organizational Culture:
The Paradigm: What the organization is about; what it does; its mission; its
values.
Control Systems: The processes in place to monitor what is going on. Role
cultures would have vast rulebooks. There would be more reliance on
individualism in a power culture.
Organizational Structures: Reporting lines, hierarchies, and the way that work
flows through the business.
Power Structures: Who makes the decisions, how widely spread is power, and
on what is power based?
Symbols: These include organizational logos and designs, but also extend to
symbols of power such as parking spaces and executive washrooms.
Rituals and Routines: Management meetings, board reports and so on may
become more habitual than necessary.
Stories and Myths: build up about people and events, and convey a message
about what is valued within the organization.
These elements may overlap. Power structures may depend on control systems, which
may exploit the very rituals that generate stories which may not be true.
Edgar Henry Schein (born 1928), a professor at the MIT Sloan School of Management,
has made a notable mark on the field of organizational development in many areas,
including career development, group process consultation, and organizational culture. He
is generally credited with coining the term “corporate culture”.
Schein's model of organizational culture originated in the 1980s. Schein (2004) identifies
three distinct levels in organizational cultures:
Artifacts
Espoused values
Assumptions
Values are the organization's stated or desired cultural elements. This is most
often a written or stated tone that the CEO or President hopes to exude
throughout the office environment. Examples would be employee professionalism,
or a "family first" mantra.
Assumptions are the actual values that the culture represents, not necessarily correlated to
the values. These assumptions are typically so well integrated in the office dynamic that
they are hard to recognize from within.
The model has undergone various modifications, such as the Raz update of Schein's
organizational culture model (2006), and others.
Coercive persuasion
Schein has written on the issues surrounding coercive persuasion, comparing and
contrasting the use of brainwashing for "goals that we deplore and goals that we accept".
The next level includes the culture of the business organization that can be depicted by
company slogans, mission statements and different operational creeds. The behavior of
members in the organization can be reviewed by interviewing the members as well as by
using questionnaires to gauge the attitudes of organization members. The third level
comprises the tacit assumptions of the organization: elements that remain unseen yet
influence the daily interactions between the members of the organization. These
unspoken rules reveal their existence by shaping the culture of the organization. Surveys
and casual interviews are of little help in determining culture at this level; only in-depth
knowledge of the organization makes it possible to identify and understand the
organizational culture here, and thereby to safeguard the business.
This model makes it easier to understand otherwise paradoxical organizational
behaviors. Because organizational norms differ at the deepest level, it is difficult for a
newcomer to adjust to a new organization, and change agents often fail because they
act before the cultural norms of the organization are completely understood. Along with
in-depth knowledge of the culture, the interpersonal relationships between members
also play an important role in understanding the dynamics of Schein's organizational
culture. The basic underlying assumptions act as a cognitive defense mechanism for
both individuals and groups; because the culture is deep-seated and complex,
surfacing these assumptions is difficult, anxiety-provoking, and time-consuming.
Leaders should stand somewhat at the margins of the organizational culture so as to
learn new cultural patterns and member psychology. The model helps reveal the
culture of the organization at its different levels. A good organizational culture, in
Schein's terms, should be developed so as to give employees a safe and ethical
environment. Such a culture supports effective communication among the members of
the company at every level, which can be fostered through group activities that align
the thoughts and ideas of individual members toward achieving a common goal.
Through development and innovation, an organization can build a culture that helps
the group make its presence felt both nationally and internationally.
Oct 1997
Each org has its own way and an outsider brings his/her baggage as observer.
Understand new environment and culture before change or observation can be made.
Embedded skills:
Metaphors or symbols:
A pattern of shared basic assumptions that the group learned as it solved its problems of
external adaptation and internal integration that has worked well enough to be considered
valid and, therefore, to be taught to new members as the correct way to perceive, think,
and feel in relation to those problems.
Summary
Levels of Culture
Language technology
Products
Creations
Easy to observe
Difficult to decipher
Problems in classification
Espoused Values
Basic Assumptions
React emotionally
Defense mechanisms
McGregor: if people are treated consistently in terms of certain basic assumptions, they
come eventually to behave according to those assumptions in order to make their world
stable and predictable.
Different cultures make different assumptions about others based on own values etc: see
them with our eyes not theirs.
Culture of 2 orgs
Key is he looked at these two with his eyes, his culture, not theirs.
Dimensions of Culture
External Environments
Essential elements:
Goals
Means of developing consensus, reaching goals
Measurement
Correction
Goals: to achieve consensus on goals, the group needs a shared language and shared
assumptions.
Means:
Design of tasks
Division of labor
Org structure
Control system
Info systems
Skills, technology, and knowledge acquired to cope become part of culture of org
Crowding is a problem
Measuring results
From top
Trust self
Use outsider
Correction, Repair
Correction can have a great effect on culture, because it may call the culture's mission into question.
Summary
Common Language
Common Language
Conflict arises when two parties make assumptions about each other without communicating.
Groups Boundaries
Developing Rules
Ideology too
Summary
Nature of time
Nature of space
Levels of Reality
External reality refers to that which is determined empirically, by objective tests, in the
Western tradition (e.g., the SAT).
Individual reality is self learned knowledge, experience, but this truth may not be shared
by others.
High context = mutual-causality culture; events understood only in context; meanings can
vary; categories can change
Moralism-Pragmatism
Revealed dogma: wisdom based on trust in the authority of wise men, formal leaders,
prophets, or kings.
What is Information?
Nature of time
*Time is a fundamental symbolic category that we use for talking about the orderliness of
social life.
*Time: not enough, late, on time, too much at one time; lost time is never found again.
*Polychronic: several things done simultaneously. Kill two birds with one stone.
*Length of time depends on the work to be done; e.g., research people would not get closure.
Nature of space
*intrusion distance
Symbols of Space
Body Language
*controlled by manipulation
*nature is powerful
*humanity is subservient
*kind of fatalism
Being-in-Becoming Orientation
*nature of work and the relationships between work, family, and personal concepts
Organization/Environment Relations
Etzioni's theories:
Coercive systems
Exit if can
Unions develop
Utilitarian systems
Will participate
Incentive system
Morally involved
Autocratic
Paternalistic
Consultative/democratic
Abdicative
Parsons:
Founder brings in one or more people and creates core group. They share vision and
believe in the risk.
Founding group acts in concert, raises money, work space...
Culture does not survive if the main culture carriers depart and if the bulk of members leave.
Socialization
Charisma
Culture-Embedding Mechanisms
What leaders pay attention to, measure, and control on a regular basis.
Observed criteria by which leaders recruit, select, promote, retire, and excommunicate
organizational members.
What is noticed?
Comments made
Attention is focused in part by the kinds of questions that leaders ask and how they set
the agendas for meetings
Emotional reactions.
How much of what is decided is all inclusive? bottom up? top down?
Own visible behavior has great value for communicating assumptions and values to
others.
Members learn from their own experience with promotions, performance appraisals, and
discussions with the boss.
In a young org, design, structure, architecture, rituals, stories, and formal statements are
cultural reinforcers, not culture creators.
Once an org stabilizes these become primary and constrain future leaders.
These are cultural artifacts that are highly visible but hard to interpret.
When org is in developmental stage, the leader is driving force. After a while these will
become the driving forces for next generation.
Design
Some articulate why this way
Routines most visible parts of life in org: daily, weekly, monthly, quarterly, annually.
Rites and rituals may be central in deciphering as well as communicating the cultural
assumptions.
Visible features
Symbolic purposes
Osborne and Gaebler suggest that governments should: 1) steer, not row (or as Mario
Cuomo put it, "it is not government's obligation to provide services, but to see that they're
provided"); 2) empower communities to solve their own problems rather than simply
deliver services; 3) encourage competition rather than monopolies; 4) be driven by
missions, rather than rules; 5) be results-oriented by funding outcomes rather than inputs;
6) meet the needs of the customer, not the bureaucracy; 7) concentrate on earning money
rather than spending it; 8) invest in preventing problems rather than curing crises; 9)
decentralize authority; and 10) solve problems by influencing market forces rather than
creating public programs.
The authors insist that this book does not offer original ideas. Rather, it is a
comprehensive compilation of the ideas and experiences of innovative practitioners and
activists across the country. The authors build on the work of a handful of political
scientists who have studied bureaucratic reform efforts, especially that of James Q.
Wilson, whose 1989 book Bureaucracy laid out key elements of what they call "a new
paradigm." They also count Robert Reich, Alvin Toffler, and Harry Boyte among their
chief influences. As they point out in the acknowledgements, however, the biggest
influence on their thinking comes not from government but from management
consultants like Thomas Peters, Edward Deming, and Peter Drucker. These writers all
recognize that corporations suffer from bureaucratic rigidities just like governments do,
and that the structures of both are rooted in bygone eras. Too many corporations are still
bound to the strict work rules and centralized command that marked the Industrial Age,
they insist. Similarly, most government agencies are bound by civil service rules and
other Progressive era reforms designed to control costs, eliminate patronage, and
guarantee uniform service to the public. "Hierarchical, centralized bureaucracies designed
in the 1930s or 1940s simply do not function well in the rapidly changing, information-
rich, knowledge-intensive society and economy of the 1990s," they write. Suffering from
the same rigidities, governments and businesses must transform themselves in essentially
the same way: by flattening hierarchies, decentralizing decision-making, pursuing
productivity-enhancing technologies, and stressing quality and customer satisfaction.
Osborne and Gaebler are careful to point out that while much of what is discussed in the
book could be summed up under the category of market-oriented government, markets
are only half the answer. Markets are impersonal, unforgiving, and, even under the most
structured circumstances, inequitable, they point out. As such, they must be coupled with
"the warmth and caring of families and neighborhoods and communities." They conclude
that entrepreneurial governments must embrace both markets and community as they
begin to shift away from administrative bureaucracies.
The search for a “third way” somewhere between socialism and capitalism is to the
modern era what the search for the philosopher’s stone was to the Dark Ages. Although
the implosion of the Soviet Union has put a damper on calls for more bureaucracy, most
people harbor various misconceptions and fallacies that make them equally distrustful of
fully free markets. So, yet another pair of would-be alchemists has set out in search of
that elusive goal: big-government order without the big government.
In Reinventing Government, David Osborne and Ted Gaebler attempt to chart a course
between big government and laissez faire. They want nothing to do with “ideology.”
Rather, Osborne and Gaebler are technocrats in search of pragmatic
answers. “Reinventing Government,” they write, “addresses how governments work,
not what governments do.” Thus, from the standpoint of what governments do, the book is
a proverbial grab bag of policy prescriptions: some good, some bad.
In the course of the book’s eleven chapters, Osborne and Gaebler lay the foundations for
what they call “entrepreneurial government.” That is, government that is active, but
bereft of bureaucracy and its attendant red tape and inefficiency. In Osborne and
Gaebler’s paradigm, the problems that America presently faces are all a result of the
reforms of the Progressive Era. The large bureaucracies set up to discourage corruption
and abuses of power actually waste more resources through regulations and procedures
than they save. To stop a small number of crooks, bureaucracy must tie the hands of
honest employees.
Osborne and Gaebler outline a series of sweeping reforms, all aimed at changing the
entire focus of government. The old way of thinking envisions a government that
identifies problems and then introduces an agency or program to solve the problem. The
result has been a plethora of large bureaucracies and programs targeted at specific
problems but achieving no real success. Entrepreneurial government identifies broad
social goals. Instead of being burdened with hierarchies and rules, agencies are allowed
great leeway within which to meet the goals. The civil service system is replaced with a
system that rewards innovation and holds employees responsible for failure to meet
goals.
Old methods of budgeting are scrapped. The line-item budget does not permit innovation
nor the flexibility needed to deal with the unforeseen. Furthermore, the practice of
throwing more money into programs that do not work must end. Such a policy
encourages failure. Instead, programs that succeed will be rewarded with the funds to
expand. Programs that fail can be weeded out over time.
The purpose of government, Osborne and Gaebler contend, is not actually to deliver
services, but to set policy. They call it steering rather than rowing. In order to deliver
services, governments can contract out to private providers, utilize government providers
in competition with private firms, or utilize different government firms in competition
with each other. Only in the case of so-called natural monopolies, such as utilities, should
government be the sole provider.
The problem with Osborne and Gaebler’s analysis is its short-sighted empirical
framework. In their preoccupation with finding whatever solution will “work,” they
ignore basic principles of how human beings behave and how markets operate. When
policy planners structure the market, they change the incentive system. Resources flow
into areas in which they otherwise would not. The planners only have two choices: They
can send resources to where they think they are needed, or they can send resources to
where already overestimated customer demand is greatest. In either case, inefficiency
will result. Even with the political reforms Osborne and Gaebler mention (campaign
finance reform, term limits, etc.), decisions concerning resource allocation will still be
politically motivated, not economically motivated.
Peter M. Senge (1947- ) was named a ‘Strategist of the Century’ by the Journal of
Business Strategy, one of 24 men and women who have ‘had the greatest impact on the
way we conduct business today’ (September/October 1999). While he has studied how
firms and organizations develop adaptive capabilities for many years at MIT
(Massachusetts Institute of Technology), it was Peter Senge’s 1990 book The Fifth
Discipline that brought him firmly into the limelight and popularized the concept of the
‘learning organization'. Since its publication, more than a million copies have been sold
and in 1997, Harvard Business Review identified it as one of the seminal management
books of the past 75 years.
On this page we explore Peter Senge’s vision of the learning organization. We will focus
on the arguments in his (1990) book The Fifth Discipline as it is here we find the most
complete exposition of his thinking.
Peter Senge
Born in 1947, Peter Senge graduated in engineering from Stanford and then went on to
undertake a masters on social systems modeling at MIT (Massachusetts Institute of
Technology) before completing his PhD on Management. Said to be a rather unassuming
man, he is a senior lecturer at the Massachusetts Institute of Technology. He is also
founding chair of the Society for Organizational Learning (SoL). His current areas of
special interest focus on decentralizing the role of leadership in organizations so as to
enhance the capacity of all people to work productively toward common goals.
Peter Senge describes himself as an 'idealistic pragmatist'. This orientation has allowed
him to explore and advocate some quite ‘utopian’ and abstract ideas (especially around
systems theory and the necessity of bringing human values to the workplace). At the
same time he has been able to mediate these so that they can be worked on and applied by
people in very different forms of organization. His areas of special interest are said to
focus on decentralizing the role of leadership in organizations so as to enhance the
capacity of all people to work productively toward common goals. One aspect of this is
Senge’s involvement in the Society for Organizational Learning (SoL), a Cambridge-
based, non-profit membership organization. Peter Senge is its chair and co-founder. SoL
is part of a ‘global community of corporations, researchers, and consultants’ dedicated to
discovering, integrating, and implementing ‘theories and practices for the interdependent
development of people and their institutions’. One of the interesting aspects of the Center
(and linked to the theme of idealistic pragmatism) has been its ability to attract corporate
sponsorship to fund pilot programmes that carry within them relatively idealistic
concerns.
Aside from writing The Fifth Discipline: The Art and Practice of The Learning
Organization (1990), Peter Senge has also co-authored a number of other books linked to
the themes first developed in The Fifth Discipline. These include The Fifth Discipline
Fieldbook: Strategies and Tools for Building a Learning Organization (1994); The
Dance of Change: The Challenges to Sustaining Momentum in Learning
Organizations (1999) and Schools That Learn (2000).
…organizations where people continually expand their capacity to create the results they
truly desire, where new and expansive patterns of thinking are nurtured, where collective
aspiration is set free, and where people are continually learning to see the whole together.
The basic rationale for such organizations is that in situations of rapid change only those
that are flexible, adaptive and productive will excel. For this to happen, it is argued,
organizations need to ‘discover how to tap people’s commitment and capacity to learn
at all levels’ (ibid.: 4).
While all people have the capacity to learn, the structures in which they have to function
are often not conducive to reflection and engagement. Furthermore, people may lack the
tools and guiding ideas to make sense of the situations they face. Organizations that are
continually expanding their capacity to create their future require a fundamental shift of
mind among their members.
When you ask people about what it is like being part of a great team, what is most
striking is the meaningfulness of the experience. People talk about being part of
something larger than themselves, of being connected, of being generative. It becomes
quite clear that, for many, their experiences as part of truly great teams stand out as
singular periods of life lived to the fullest. Some spend the rest of their lives looking for
ways to recapture that spirit. (Senge 1990: 13)
For Peter Senge, real learning gets to the heart of what it is to be human. We become able
to re-create ourselves. This applies to both individuals and organizations. Thus, for a
‘learning organization’ it is not enough to survive. ‘”Survival learning” or what is more
often termed “adaptive learning” is important – indeed it is necessary. But for a learning
organization, “adaptive learning” must be joined by “generative learning”, learning that
enhances our capacity to create’ (Senge 1990:14).
The dimension that distinguishes learning from more traditional organizations is the
mastery of certain basic disciplines or ‘component technologies’. The five that Peter
Senge identifies are said to be converging to innovate learning organizations. They are:
Systems thinking
Personal mastery
Mental models
Building shared vision
Team learning
He adds to these the recognition that people are agents, able to act upon the structures and
systems of which they are a part. All the disciplines are, in this way, ‘concerned with a
shift of mind from seeing parts to seeing wholes, from seeing people as helpless reactors
to seeing them as active participants in shaping their reality, from reacting to the present
to creating the future’ (Senge 1990: 69). It is to the disciplines that we will now turn.
A great virtue of Peter Senge’s work is the way in which he puts systems theory to
work. The Fifth Discipline provides a good introduction to the basics and uses of such
theory – and the way in which it can be brought together with other theoretical devices in
order to make sense of organizational questions and issues. Systemic thinking is the
conceptual cornerstone (‘The Fifth Discipline’) of his approach. It is the discipline that
integrates the others, fusing them into a coherent body of theory and practice (ibid.: 12).
Systems theory’s ability to comprehend and address the whole, and to examine the
interrelationship between the parts provides, for Peter Senge, both the incentive and the
means to integrate the disciplines.
Here is not the place to go into a detailed exploration of Senge’s presentation of systems
theory (I have included some links to primers below). However, it is necessary to
highlight one or two elements of his argument. First, while the basic tools of systems
theory are fairly straightforward they can build into sophisticated models. Peter Senge
argues that one of the key problems with much that is written about, and done in the
name of management, is that rather simplistic frameworks are applied to what are
complex systems. We tend to focus on the parts rather than seeing the whole, and to fail
to see organization as a dynamic process. Thus, the argument runs, a better appreciation
of systems will lead to more appropriate action.
‘We learn best from our experience, but we never directly experience the consequences
of many of our most important decisions’, Peter Senge (1990: 23) argues with regard to
organizations. We tend to think that cause and effect will be relatively near to one
another. Thus when faced with a problem, it is the ‘solutions’ that are close by that we
focus upon. Classically we look to actions that produce improvements in a relatively
short time span. However, when viewed in systems terms short-term improvements often
involve very significant long-term costs. For example, cutting back on research and
design can bring very quick cost savings, but can severely damage the long-term viability
of an organization. Part of the problem is the nature of the feedback we receive. Some of
the feedback will be reinforcing (or amplifying) – with small changes building on
themselves. ‘Whatever movement occurs is amplified, producing more movement in the
same direction. A small action snowballs, with more and more and still more of the same,
resembling compound interest’ (Senge 1990: 81). Thus, we may cut our advertising
budgets, see the benefits in terms of cost savings, and in turn further trim spending in this
area. In the short run there may be little impact on people’s demands for our goods and
services, but longer term the decline in visibility may have severe penalties. An
appreciation of systems will lead to recognition of the use of, and problems with, such
reinforcing feedback, and also an understanding of the place of balancing (or stabilizing)
feedback. (See, also Kurt Lewin on feedback). A further key aspect of systems is the
extent to which they inevitably involve delays – ‘interruptions in the flow of influence
which make the consequences of an action occur gradually’ (ibid.: 90). Peter Senge
(1990: 92) concludes:
The systems viewpoint is generally oriented toward the long-term view. That’s why
delays and feedback loops are so important. In the short term, you can often ignore them;
they’re inconsequential. They only come back to haunt you in the long term.
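Senge's reinforcing and balancing feedback structures, and the effect of delays, can be sketched in a minimal simulation. This is an illustrative model only; the growth rate, adjustment rate, and delay length are assumed for demonstration and do not come from The Fifth Discipline:

```python
# Sketch of two basic system-dynamics feedback structures (assumed parameters).

def reinforcing(stock: float, rate: float, steps: int) -> float:
    """Reinforcing (amplifying) loop: each change builds on itself,
    'resembling compound interest'."""
    for _ in range(steps):
        stock += stock * rate
    return stock

def balancing(stock: float, goal: float, adjust: float,
              steps: int, delay: int = 0) -> list:
    """Balancing (stabilizing) loop: each step closes part of the gap
    to a goal. A delay means corrections act on stale observations,
    which can produce overshoot and oscillation."""
    history = [stock]
    for t in range(steps):
        observed = history[max(0, t - delay)]  # what the actor sees now
        stock += adjust * (goal - observed)
        history.append(stock)
    return history

# Reinforcing growth compounds: 100 units at 5% per step for 10 steps.
print(round(reinforcing(100.0, 0.05, 10), 2))  # → 162.89

# Without a delay the balancing loop converges smoothly toward the goal;
# with a delay it overshoots the goal and oscillates.
no_delay = balancing(0.0, 100.0, 0.5, 20, delay=0)
delayed = balancing(0.0, 100.0, 0.5, 20, delay=3)
print(max(no_delay) <= 100.0, max(delayed) > 100.0)  # → True True
```

The delayed run makes Senge's point concrete: the correction rule is unchanged, yet the lag between action and observed consequence is enough to turn smooth convergence into overshoot.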
Peter Senge advocates the use of ‘systems maps’ – diagrams that show the key elements
of systems and how they connect. However, people often have a problem ‘seeing’
systems, and it takes work to acquire the basic building blocks of systems theory, and to
apply them to your organization. On the other hand, failure to understand system
dynamics can lead us into ‘cycles of blaming and self-defense: the enemy is always out
there, and problems are always caused by someone else’ (Bolman and Deal 1997: 27; see,
also, Senge 1990: 231).
Essences: the state of being of those with high levels of mastery in the discipline (Senge
1990: 373).
Personal mastery. ‘Organizations learn only through individuals who learn. Individual
learning does not guarantee organizational learning. But without it no organizational
learning occurs’ (Senge 1990: 139). Personal mastery is ‘the discipline of continually
clarifying and deepening our personal vision, of focusing our energies, of developing
patience, and of seeing reality objectively’ (ibid.: 7). It goes beyond competence and
skills, although it involves them. It goes beyond spiritual opening, although it involves
spiritual growth (ibid.: 141). Mastery is seen as a special kind of proficiency. It is not
about dominance, but rather about calling. Vision is vocation rather than simply a good
idea.
People with a high level of personal mastery live in a continual learning mode. They
never ‘arrive’. Sometimes, language, such as the term ‘personal mastery’ creates a
misleading sense of definiteness, of black and white. But personal mastery is not
something you possess. It is a process. It is a lifelong discipline. People with a high level
of personal mastery are acutely aware of their ignorance, their incompetence, their
growth areas. And they are deeply self-confident. Paradoxical? Only for those who do not
see the ‘journey is the reward’. (Senge 1990: 142)
In writing such as this we can see the appeal of Peter Senge’s vision. It has deep echoes
in the concerns of writers such as M. Scott Peck (1990) and Erich Fromm (1979). The
discipline entails developing personal vision; holding creative tension (managing the gap
between our vision and reality); recognizing structural tensions and constraints, and our
own power (or lack of it) with regard to them; a commitment to truth; and using the sub-
conscious (ibid.: 147-167).
The discipline of mental models starts with turning the mirror inward; learning to unearth
our internal pictures of the world, to bring them to the surface and hold them rigorously
to scrutiny. It also includes the ability to carry on ‘learningful’ conversations that balance
inquiry and advocacy, where people expose their own thinking effectively and make that
thinking open to the influence of others. (Senge 1990: 9)
If organizations are to develop a capacity to work with mental models then it will be
necessary for people to learn new skills and develop new orientations, and for there to be
institutional changes that foster such change. ‘Entrenched mental models… thwart
changes that could come from systems thinking’ (ibid.: 203). Moving the organization in
the right direction entails working to transcend the sorts of internal politics and game
playing that dominate traditional organizations. In other words it means fostering
openness (Senge 1990: 273-286). It also involves seeking to distribute responsibility
far more widely while retaining coordination and control. Learning
organizations are localized organizations (ibid.: 287-301).
Building shared vision. Peter Senge starts from the position that if any one idea about
leadership has inspired organizations for thousands of years, ‘it’s the capacity to hold a
shared picture of the future we seek to create’ (1990: 9). Such a vision has the power to be
uplifting – and to encourage experimentation and innovation. Crucially, it is argued, it
can also foster a sense of the long-term, something that is fundamental to the ‘fifth
discipline’.
When there is a genuine vision (as opposed to the all-too-familiar ‘vision statement’),
people excel and learn, not because they are told to, but because they want to. But many
leaders have personal visions that never get translated into shared visions that galvanize
an organization… What has been lacking is a discipline for translating vision into shared
vision - not a ‘cookbook’ but a set of principles and guiding practices.
The practice of shared vision involves the skills of unearthing shared ‘pictures of the
future’ that foster genuine commitment and enrolment rather than compliance. In
mastering this discipline, leaders learn the counter-productiveness of trying to dictate a
vision, no matter how heartfelt. (Senge 1990: 9)
Team learning. Such learning is viewed as ‘the process of aligning and developing the
capacities of a team to create the results its members truly desire’ (Senge 1990: 236). It
builds on personal mastery and shared vision – but these are not enough. People need to
be able to act together. When teams learn together, Peter Senge suggests, not only can
there be good results for the organization, members will grow more rapidly than could
have occurred otherwise.
The discipline of team learning starts with ‘dialogue’, the capacity of members of a team
to suspend assumptions and enter into a genuine ‘thinking together’. To the Greeks dia-
logos meant a free-flowing of meaning through a group, allowing the group to discover
insights not attainable individually…. [It] also involves learning how to recognize the
patterns of interaction in teams that undermine learning. (Senge 1990: 10)
The notion of dialogue that flows through The Fifth Discipline is very heavily dependent
on the work of the physicist, David Bohm (where a group ‘becomes open to the flow of a
larger intelligence’, and thought is approached largely as collective phenomenon). When
dialogue is joined with systems thinking, Senge argues, there is the possibility of creating
a language more suited for dealing with complexity, and of focusing on deep-seated
structural issues and forces rather than being diverted by questions of personality and
leadership style. Indeed, such is the emphasis on dialogue in his work that it could almost
be put alongside systems thinking as a central feature of his approach.
Leading the learning organization
Peter Senge argues that learning organizations require a new view of leadership. He sees
the traditional view of leaders (as special people who set the direction, make key
decisions and energize the troops) as deriving from a deeply individualistic and non-
systemic worldview (1990: 340). At its centre the traditional view of leadership ‘is based
on assumptions of people’s powerlessness, their lack of personal vision and inability to
master the forces of change, deficits which can be remedied only by a few great leaders’
(op. cit.). Against this traditional view he sets a ‘new’ view of leadership that centres on
‘subtler and more important tasks’.
In a learning organization, leaders are designers, stewards and teachers. They are
responsible for building organizations where people continually expand their capabilities
to understand complexity, clarify vision, and improve shared mental models – that is they
are responsible for learning…. Learning organizations will remain a ‘good idea’… until
people take a stand for building such organizations. Taking this stand is the first
leadership act, the start of inspiring (literally ‘to breathe life into’) the vision of the
learning organization. (Senge 1990: 340)
Many of the qualities that Peter Senge discusses with regard to leading the learning
organization can be found in the shared leadership model (discussed elsewhere on these
pages). For example, what Senge approaches as inspiration can be approached as
animation. Here we will look at the three aspects of leadership that he identifies – and
link his discussion with some other writers on leadership.
Leader as designer. The functions of design are rarely visible, Peter Senge argues, yet
no one has a more sweeping influence than the designer (1990: 341). The organization’s
policies, strategies and ‘systems’ are key areas of design, but leadership goes beyond this.
Integrating the five component technologies is fundamental. However, the first task
entails designing the governing ideas – the purpose, vision and core values by which
people should live. Building a shared vision is crucial early on as it ‘fosters a long-term
orientation and an imperative for learning’ (ibid.: 344). Other disciplines also need to be
attended to, but just how they are to be approached is dependent upon the situation faced.
In essence, ‘the leaders’ task is designing the learning processes whereby people
throughout the organization can deal productively with the critical issues they face, and
develop their mastery in the learning disciplines’ (ibid.: 345).
Leader as steward. While the notion of leader as steward is, perhaps, most commonly
associated with writers such as Peter Block (1993), Peter Senge has some interesting
insights on this strand. His starting point was the ‘purpose stories’ that the managers he
interviewed told about their organization. He came to realize that the managers were
doing more than telling stories, they were relating the story: ‘the overarching explanation
of why they do what they do, how their organization needs to evolve, and how that
evolution is part of something larger’ (Senge 1990: 346). Such purpose stories provide a
single set of integrating ideas that give meaning to all aspects of the leader’s work – and
not unexpectedly ‘the leader develops a unique relationship to his or her own personal
vision. He or she becomes a steward of the vision’ (op. cit.). One of the important things
to grasp here is that stewardship involves a commitment to, and responsibility for the
vision, but it does not mean that the leader owns it. It is not their possession. Leaders are
stewards of the vision, their task is to manage it for the benefit of others (hence the
subtitle of Block’s book – ‘Choosing service over self-interest’). Leaders learn to see
their vision as part of something larger. Purpose stories evolve as they are being told, ‘in
fact, they are as a result of being told’ (Senge 1990: 351). Leaders have to learn to listen
to other people’s vision and to change their own where necessary. Telling the story in this
way allows others to be involved and to help develop a vision that is both individual and
shared.
Leader as teacher. Peter Senge starts here with Max de Pree’s (1990) injunction that the
first responsibility of a leader is to define reality. While leaders may draw inspiration and
spiritual reserves from their sense of stewardship, ‘much of the leverage leaders can
actually exert lies in helping people achieve more accurate, more insightful and
more empowering views of reality’ (Senge 1990: 353). Building on an existing ‘hierarchy
of explanation’, leaders, Peter Senge argues, can influence people’s view of reality at four
levels: events, patterns of behaviour, systemic structures and the ‘purpose story’. By and
large most managers and leaders tend to focus on the first two of these levels (and under
their influence organizations do likewise). Leaders in learning organizations attend to all
four, ‘but focus predominantly on purpose and systemic structure. Moreover they “teach”
people throughout the organization to do likewise’ (Senge 1990: 353). This allows them
to see ‘the big picture’ and to appreciate the structural forces that condition behaviour. By
attending to purpose, leaders can cultivate an understanding of what the organization (and
its members) are seeking to become. One of the issues here is that leaders often have
strengths in one or two of the areas but are unable, for example, to develop systemic
understanding. A key to success is being able to conceptualize insights so that they
become public knowledge, ‘open to challenge and further improvement’ (ibid.: 356).
“Leader as teacher” is not about “teaching” people how to achieve their vision. It is about
fostering learning, for everyone. Such leaders help people throughout the organization
develop systemic understandings. Accepting this responsibility is the antidote to one of
the most common downfalls of otherwise gifted teachers – losing their commitment to
the truth. (Senge 1990: 356)
Leaders have to create and manage creative tension – especially around the gap between
vision and reality. Mastery of such tension allows for a fundamental shift. It enables the
leader to see the truth in changing situations.
When making judgements about Peter Senge’s work, and the ideas he promotes, we need
to place his contribution in context. His is not meant to be a definitive addition to the
‘academic’ literature of organizational learning. Peter Senge writes for practicing and
aspiring managers and leaders. The concern is to identify how interventions can be made
to turn organizations into ‘learning organizations’. Much of his, and similar theorists’
efforts, have been ‘devoted to identifying templates, which real organizations could
attempt to emulate’ (Easterby-Smith and Araujo 1999: 2). In this field some of the
significant contributions have been based around studies of organizational practice,
others have ‘relied more on theoretical principles, such as systems dynamics or
psychological learning theory, from which implications for design and implementation
have been derived’ (op. cit.). Peter Senge, while making use of individual case studies,
tends to the latter orientation.
The most appropriate question in respect of this contribution would seem to be whether it
fosters praxis – informed, committed action on the part of those it is aimed at. This is an
especially pertinent question as Peter Senge looks to promote a more holistic vision of
organizations and the lives of people within them. Here we focus on three aspects. We
start with the organization.
Organizational imperatives. Here the case against Peter Senge is fairly simple. We can
find very few organizations that come close to the combination of characteristics that he
identifies with the learning organization. Within a capitalist system his vision of
companies and organizations turning wholeheartedly to the cultivation of the learning of
their members can only come to fruition in a limited number of instances. While those
in charge of organizations will usually look in some way to the long-term growth and
sustainability of their enterprise, they may not focus on developing the human resources
that the organization houses. The focus may well be on enhancing brand recognition and
status (Klein 2001); developing intellectual capital and knowledge (Leadbeater 2000);
delivering product innovation; and ensuring that production and distribution costs are
kept down. As Will Hutton (1995: 8) has argued, British companies’ priorities are
overwhelmingly financial. What is more, ‘the targets for profit are too high and time
horizons too short’ (1995: xi). Such conditions are hardly conducive to building the sort
of organization that Peter Senge proposes. Here the case against Senge is that within
capitalist organizations, where the bottom line is profit, a fundamental concern with the
learning and development of employees and associates is simply too idealistic.
Yet there are some currents running in Peter Senge’s favour. The need to focus on
knowledge generation within an increasingly globalized economy does bring us back in
some important respects to the people who have to create intellectual capital.
A failure to attend to the learning of groups and individuals in the organization spells
disaster in this context. As Leadbeater (2000: 70) has argued, companies need to invest
not just in new machinery to make production more efficient, but in the flow of know-
how that will sustain their business. Organizations need to be good at knowledge
generation, appropriation and exploitation. This process is not that easy:
Knowledge that is visible tends to be explicit, teachable, independent, detachable; it is
also easy for competitors to imitate. Knowledge that is intangible, tacit, less teachable, less
observable, is more complex but more difficult to detach from the person who created it
or the context in which it is embedded. Knowledge carried by an individual only realizes
its commercial potential when it is replicated by an organization and becomes
organizational knowledge. (ibid.: 71)
Here we have a very significant pressure for the fostering of ‘learning organizations’. The
sort of know-how that Leadbeater is talking about here cannot be simply transmitted. It
has to be engaged with, talked about and embedded in organizational structures and
strategies. It has to become people’s own.
A question of sophistication and disposition. One of the biggest problems with Peter
Senge’s approach has nothing to do with the theory, its rightness, or the way it is
presented. The issue here is that the people to whom it is addressed do not have the
disposition or theoretical tools to follow it through. One clue lies in his choice of
‘disciplines’ to describe the core of his approach. As we saw a discipline is a series of
principles and practices that we study, master and integrate into our lives. In other words,
the approach entails significant effort on the part of the practitioner. It also entails
developing quite complicated mental models, and being able to apply and adapt these to
different situations – often on the hoof. Classically, the approach involves a shift from
product to process (and back again). The question then becomes whether many people in
organizations can handle this. All this has a direct parallel within formal education. One
of the reasons that product approaches to curriculum (as exemplified in the concern for
SATs tests, examination performance and school attendance) have assumed such a
dominance is that alternative process approaches are much more difficult to do well.
They may be superior – but many teachers lack the sophistication to carry them forward.
There are also psychological and social barriers. As Lawrence Stenhouse put it some
years ago: ‘The close examination of one’s professional performance is personally
threatening; and the social climate in which teachers work generally offers little support
to those who might be disposed to face that threat’ (1975: 159). We can make the same
case for people in most organizations.
The process of exploring one’s performance, personality and fundamental aims in life
(and this is what Peter Senge is proposing) is a daunting task for most people. To do it we
need considerable support, and the motivation to carry the task through some very
uncomfortable periods. It calls for the integration of different aspects of our lives and
experiences. There is, here, a straightforward question concerning the vision – will
people want to sign up to it? To make sense of the sorts of experiences generated and
explored in a fully functioning ‘learning organization’ there needs to be ‘spiritual growth’
and the ability to locate these within some sort of framework of commitment. Thus, as
employees, we are not simply asked to do our jobs and to get paid. We are also requested
to join in something bigger. Many of us may just want to earn a living!
Politics and vision. Here we need to note two key problem areas. First, there is a
question of how Peter Senge applies systems theory. While he introduces all sorts of
broader appreciations and attends to values – his theory is not fully set in a political or
moral framework. There is not a consideration of questions of social justice, democracy
and exclusion. His approach largely operates at the level of organizational interests. This
would not be such a significant problem if there were a more explicit vision of the sort
of society that he would like to see attained, and attention to this with regard to
management and leadership. As a contrast we might turn to Peter Drucker’s (1977: 36)
elegant discussion of the dimensions of management. He argued that there are three tasks
– ‘equally important but essentially different’ – that face the management of every
organization. These are:
To think through and define the specific purpose and mission of the institution, whether
business enterprise, hospital, or university; to make work productive and the worker
achieving; and to manage social impacts and social responsibilities.
He continues:
None of our institutions exists by itself and as an end in itself. Every one is an organ of
society and exists for the sake of society. Business is no exception. ‘Free enterprise’
cannot be justified as being good for business. It can only be justified as being good for
society. (Drucker 1977: 40)
If Peter Senge had attempted greater connection between the notion of the ‘learning
organization’ and the ‘learning society’, and paid attention to the political and social
impact of organizational activity then this area of criticism would be limited to the
question of the particular vision of society and human flourishing involved.
Second, there is some question with regard to political processes concerning his emphasis
on dialogue and shared vision. While Peter Senge clearly recognizes the political
dimensions of organizational life, there is a sneaking suspicion that he may want to
transcend it. In some ways there is a link here with the concerns and interests of
communitarian thinkers like Amitai Etzioni (1995, 1997). As Richard Sennett (1998:
143) argues with regard to political communitarianism, it ‘falsely emphasizes unity as the
source of strength in a community and mistakenly fears that when conflicts arise in a
community, social bonds are threatened’. Within it (and arguably aspects of Peter
Senge’s vision of the learning organization) there seems, at times, to be a dislike of
politics and a tendency to see danger in plurality and difference. Here there is a tension
between the concern for dialogue and the interest in building a shared vision. An
alternative reading is that difference is good for democratic life (and organizational life)
provided that we cultivate a sense of reciprocity, and ways of working that encourage
deliberation. The search is not for the sort of common good that many communitarians
seek (Guttman and Thompson 1996: 92) but rather for ways in which people may share
in a common life. Moral disagreement will persist – the key is whether we can learn to
respect and engage with each other’s ideas, behaviours and beliefs.
Conclusion
John van Maurik (2001: 201) has suggested that Peter Senge has been ahead of his time
and that his arguments are insightful and revolutionary. He goes on to say that it is a
matter of regret ‘that more organizations have not taken his advice and have remained
geared to the quick fix’. As we have seen there are very deep-seated reasons why this
may have been the case. Beyond this, though, there is the question of whether Senge’s
vision of the learning organization and the disciplines it requires has contributed to more
informed and committed action with regard to organizational life. Here we have little
concrete evidence to go on. However, we can make some judgements about the
possibilities of his theories and proposed practices. We could say that while there are
some issues and problems with his conceptualization, at least it does carry within it some
questions around what might make for human flourishing. The emphases on building a
shared vision, team working, personal mastery and the development of more
sophisticated mental models and the way he runs the notion of dialogue through these
does have the potential of allowing workplaces to be more convivial and creative. The
drawing together of the elements via the Fifth Discipline of systemic thinking, while not
being to everyone’s taste, also allows us to approach a more holistic understanding of
organizational life (although Peter Senge does himself stop short of asking some
important questions in this respect). These are still substantial achievements – and when
linked to his popularizing of the notion of the ‘learning organization’ – it is
understandable why Peter Senge has been recognized as a key thinker.
The Fifth Discipline: The Art and Practice of the Learning Organization (Senge 1990)
is a book by Peter Senge (a senior lecturer at MIT) focusing on group problem solving
using the systems thinking method in order to convert companies into learning
organizations. The five disciplines represent approaches (theories and methods) for
developing three core learning capabilities: fostering aspiration, developing reflective
conversation, and understanding complexity.
Among the 'learning disabilities' of organizations that the book discusses are:
1) "I am my position."
People fail to recognize their purpose as a part of the enterprise. Instead, they see
themselves as an inconsequential part of a system over which they have little influence,
leading them to limit themselves to the jobs they must perform at their own positions.
This makes it hard to pinpoint the reason an enterprise is failing, with so many hidden
'loose screws' around.
The tendency to see things as results of short-term events undermines our ability to see
things on a grander scale. Cave men needed to react to events quickly for survival.
However, the biggest threats we face nowadays are rarely sudden events, but slow,
gradual processes, such as environmental changes.
Deming returned to the US and spent some years in obscurity before the publication of
his book "Out of the crisis" in 1982. In this book, Deming set out 14 points which, if
applied to US manufacturing industry, would, he believed, save the US from industrial
doom at the hands of the Japanese.
Although Deming does not use the term Total Quality Management in his book, it is
credited with launching the movement. Most of the central ideas of TQM are contained in
"Out of the crisis".
The 14 points seem at first sight to be a rag-bag of radical ideas, but the key to
understanding a number of them lies in Deming's thoughts about variation. Variation was
seen by Deming as the disease that threatened US manufacturing. The more variation - in
the length of parts supposed to be uniform, in delivery times, in prices, in work practices
- the more waste, he reasoned.
From this premise, he set out his 14 points for management, which we have paraphrased
here:
2."Adopt the new philosophy". The implication is that management should actually adopt
his philosophy, rather than merely expect the workforce to do so.
4."Move towards a single supplier for any one item." Multiple suppliers mean variation
between feedstocks.
6."Institute training on the job". If people are inadequately trained, they will not all work
the same way, and this will introduce variation.
8."Drive out fear". Deming sees management by fear as counter- productive in the long
term, because it prevents workers from acting in the organisation's best interests.
9."Break down barriers between departments". Another idea central to TQM is the
concept of the 'internal customer', that each department serves not the management, but
the other departments that use its outputs.
10."Eliminate slogans". Another central TQM idea is that it's not people who make most
mistakes - it's the process they are working within. Harassing the workforce without
improving the processes they use is counter-productive.
William Edwards Deming (October 14, 1900 – December 20, 1993) was
an American statistician, professor, author, lecturer, and consultant. He is perhaps best
known for his work in Japan. There, from 1950 onward, he taught top management how
to improve design (and thus service), product quality, testing and sales (the last through
global markets)[1] through various methods, including the application of statistical
methods.
Deming made a significant contribution to Japan's later reputation for innovative high-
quality products and its economic power. He is regarded as having had more impact upon
Japanese manufacturing and business than any other individual not of Japanese heritage.
Despite being considered something of a hero in Japan, he was only just beginning to win
widespread recognition in the U.S. at the time of his death.
Dr. Deming's teachings and philosophy can be seen through the results they produced
when they were adopted by Japanese industry, as the following example shows: Ford
Motor Company was simultaneously manufacturing a car model with transmissions made
in Japan and the United States. Soon after the car model was on the market, Ford
customers were requesting the model with Japanese transmission over the USA-made
transmission, and they were willing to wait for the Japanese model. As both
transmissions were made to the same specifications, Ford engineers could not understand
the customer preference for the model with Japanese transmission. Finally, Ford
engineers decided to take apart the two different transmissions. The American-made car
parts were all within specified tolerance levels. On the other hand, the Japanese car parts
were virtually identical to each other, and much closer to the nominal values for the parts
- for example, if a part was supposed to be one foot long, plus or minus 1/8 of an inch, the
Japanese parts were within 1/16 of an inch. This made the Japanese cars run more
smoothly and customers experienced fewer problems. Engineers at Ford could not
understand how this was done, until they met Deming.[3]
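The transmission story is Deming's central point about variation in miniature: two processes can both meet the same specification while one shows far more spread around the nominal value. A minimal simulation sketches this; the distributions and sample sizes are invented purely for illustration, and only the spec of one foot plus or minus 1/8 inch comes from the example above:

```python
import random

random.seed(0)
NOMINAL = 12.0      # nominal part length in inches (one foot)
TOL = 1.0 / 8.0     # specification tolerance: +/- 1/8 inch

# Hypothetical supplier A: parts spread across the whole tolerance band.
spread_parts = [random.uniform(NOMINAL - TOL, NOMINAL + TOL) for _ in range(10_000)]

# Hypothetical supplier B: parts clustered tightly around nominal
# (Gaussian with a small sigma, clipped to stay inside the spec).
tight_parts = [min(max(random.gauss(NOMINAL, TOL / 6), NOMINAL - TOL), NOMINAL + TOL)
               for _ in range(10_000)]

def share_within(parts, band):
    """Fraction of parts within +/- band of the nominal length."""
    return sum(abs(p - NOMINAL) <= band for p in parts) / len(parts)

# Both lots pass inspection against the +/- 1/8 inch specification...
print(share_within(spread_parts, TOL), share_within(tight_parts, TOL))

# ...but only the low-variation lot sits almost entirely within 1/16 inch.
print(share_within(spread_parts, TOL / 2))  # roughly half
print(share_within(tight_parts, TOL / 2))   # nearly all
```

The point is that inspection against the spec cannot distinguish the two suppliers; only attention to variation can.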
Deming was the author of Out of the Crisis (1982–1986) and The New Economics for
Industry, Government, Education (1993), which includes his System of Profound
Knowledge and the 14 Points for Management (described below). Deming played flute &
drums and composed music throughout his life, including sacred choral compositions and
an arrangement of The Star Spangled Banner.[4]
In 1993, Deming founded the W. Edwards Deming Institute in Washington, D.C., where
the Deming Collection at the U.S. Library of Congress includes an extensive audiotape
and videotape archive. The aim of the W. Edwards Deming Institute is to foster
understanding of The Deming System of Profound Knowledge to advance commerce,
prosperity, and peace.
The 14 points are a basis for transformation of [American] industry. Adoption and action
on the 14 points are a signal that management intend to stay in business and aim to
protect investors and jobs. Such a system formed the basis for lessons for top
management in Japan in 1950 and in subsequent years.
The 14 points apply anywhere, to small organisations as well as to large ones, to the
service industry as well as to manufacturing. They apply to a division within a company.
1. Create constancy of purpose toward improvement of product and service, with the
aim to become competitive and to stay in business, and to provide jobs.
2. Adopt the new philosophy. We are in a new economic age. Western management
must awaken to the challenge, must learn their responsibilities, and take on
leadership for change.
3. Cease dependence on inspection to achieve quality. Eliminate the need for
inspection on a mass basis by building quality into the product in the first place.
4. End the practice of awarding business on the basis of price tag. Instead, minimise
total cost. Move towards a single supplier for any one item, on a long-term
relationship of loyalty and trust.
5. Improve constantly and forever the system of production and service, to improve
quality and productivity, and thus constantly decrease costs.
6. Institute training on the job.
7. Institute leadership. The aim of supervision should be to help people and
machines and gadgets to do a better job. Supervision of management is in need of
an overhaul, as well as supervision of production workers.
8. Drive out fear, so that everyone may work effectively for the company.
9. Break down barriers between departments. People in research, design, sales, and
production must work as a team, to foresee problems of production and in use that
may be encountered with the product or service.
10. Eliminate slogans, exhortations, and targets for the workforce asking for zero
defects and new levels of productivity. Such exhortations only create adversarial
relationships, as the bulk of the causes of low quality and low productivity belong
to the system and thus lie beyond the power of the work force.
11. a. Eliminate work standards (quotas) on the factory floor. Substitute leadership.
b. Eliminate management by objective. Eliminate management by numbers,
numerical goals. Substitute leadership.
12. a. Remove barriers that rob the hourly paid worker of his right to pride in
workmanship. The responsibility of supervisors must be changed from sheer
numbers to quality.
b. Remove barriers that rob people in management and engineering of their right
to pride in workmanship. This means, inter alia, abolishment of the annual or
merit rating and management by objective.
13. Institute a vigorous program of education and self-improvement.
14. Put everybody in the company to work to accomplish the transformation. The
transformation is everybody's job.
Point 1: Create constancy of purpose toward improvement of the product and service so
as to become competitive, stay in business and provide jobs.
Point 2: Adopt the new philosophy. We are in a new economic age. We no longer need
live with commonly accepted levels of delay, mistake, defective material and defective
workmanship.
Point 3: Cease dependence on mass inspection; require, instead, statistical evidence that
quality is built in.
Point 4: Improve the quality of incoming materials. End the practice of awarding
business on the basis of a price alone. Instead, depend on meaningful measures of quality,
along with price.
Point 5: Find the problems; constantly improve the system of production and service.
There should be continual reduction of waste and continual improvement of quality in
every activity so as to yield a continual rise in productivity and a decrease in costs.
Point 6: Institute modern methods of training and education for all. Modern methods of
on-the-job training use control charts to determine whether a worker has been properly
trained and is able to perform the job correctly. Statistical methods must be used to
discover when training is complete.
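The control-chart idea behind Point 6 can be sketched in a few lines: establish limits of the mean plus or minus three standard deviations from a baseline of measurements, then flag any later result that falls outside them. The figures below are invented for illustration, and this is only the simplest Shewhart-style chart; practice uses more refined variants (e.g. limits derived from moving ranges):

```python
import statistics

# Hypothetical task-completion times (minutes) for a trainee; the
# baseline is assumed to reflect the worker's stable, trained state.
baseline = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7, 12.1]
new_runs = [12.2, 15.0, 11.9]

mean = statistics.mean(baseline)
sd = statistics.pstdev(baseline)

# Shewhart-style control limits: mean +/- 3 standard deviations.
ucl, lcl = mean + 3 * sd, mean - 3 * sd

# A run outside the limits signals a special cause worth investigating,
# e.g. that training has not yet taken hold.
flagged = [t for t in new_runs if not lcl <= t <= ucl]
print(flagged)  # only the 15.0 run falls outside the limits
```

Results inside the limits are treated as ordinary process variation; only out-of-limit runs call for intervention, which is how the chart distinguishes "still learning" from normal fluctuation.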
Point 8: Fear is a barrier to improvement so drive out fear by encouraging effective two-
way communication and other mechanisms that will enable everybody to be part
of change, and to belong to it.
Fear can often be found at all levels in an organization: fear of change, fear that it may
be necessary to learn a better way of working, and fear that positions might be usurped
frequently affect middle and higher management, whilst on the shop-floor, workers can
also fear the effects of change on their jobs.
Point 9: Break down barriers between departments and staff areas. People in different
areas such as research, design, sales, administration and production must work in
teams to tackle problems that may be encountered with products or service.
Point 10: Eliminate the use of slogans, posters and exhortations for the workforce,
demanding zero defects and new levels of productivity without providing methods. Such
exhortations only create adversarial relationships.
Point 11: Eliminate work standards that prescribe numerical quotas for the workforce
and numerical goals for people in management. Substitute aids and helpful leadership.
Point 12: Remove the barriers that rob hourly workers, and people in management, of
their right to pride of workmanship. This implies, abolition of the annual merit rating
(appraisal of performance) and of management by objectives.
Point 13: Institute a vigorous program of education, and encourage self-improvement for
everyone. What an organization needs is not just good people; it needs people that are
improving with education.
The concept of Gestalt was first introduced in contemporary philosophy and psychology
by Christian von Ehrenfels (a member of the School of Brentano). The idea of Gestalt has
its roots in theories by Johann Wolfgang von Goethe, Immanuel Kant, and Ernst
Mach. Max Wertheimer's unique contribution was to insist that the "Gestalt" is
perceptually primary, defining the parts of which it was composed, rather than being a
secondary quality that emerges from those parts, as von Ehrenfels's earlier Gestalt-
Qualität had been.
Both von Ehrenfels and Edmund Husserl seem to have been inspired by Mach's
work Beiträge zur Analyse der Empfindungen (Contributions to the Analysis of the
Sensations, 1886), in formulating their very similar concepts of Gestalt and Figural
Moment, respectively.
Early 20th century theorists, such as Kurt Koffka, Max Wertheimer, and Wolfgang
Köhler (students of Carl Stumpf) saw objects as perceived within an environment
according to all of their elements taken together as a global construct. This 'gestalt' or
'whole form' approach sought to define principles of perception -- seemingly innate
mental laws which determined the way in which objects were perceived. It is based on
the here and now, and in the way you view things. It can be broken into two components,
figure and ground: at first glance, do you see the figure in front of you or the background?
These laws took several forms, such as the grouping of similar, or proximate, objects
together, within this global process. Although Gestalt has been criticized for being merely
descriptive, it has formed the basis of much further research into the perception of
patterns and objects (Carlson et al. 2000), and of research into behavior, thinking,
problem solving and psychopathology.
The investigations developed at the beginning of the 20th century, based on traditional
scientific methodology, divided the object of study into a set of elements that could be
analyzed separately with the objective of reducing the complexity of this object. Contrary
to this methodology, the school of Gestalt practiced a series of theoretical and
methodological principles that attempted to redefine the approach to psychological
research.
In the 1930s and 1940s Gestalt psychology was applied to visual perception, most notably by
Max Wertheimer, Wolfgang Köhler, and Kurt Koffka who founded the so-called gestalt
approaches to form perception. Their aim was to investigate the global and holistic
processes involved in perceiving structure in the environment (e.g. Sternberg 1996).
More specifically, they tried to explain human perception of groups of objects and how
we perceive parts of objects and form whole objects on the basis of these. The
investigations in this subject crystallised into "the gestalt laws of perceptual
organization." Some of these laws, which are often cited in the HCI or interaction design
community, are as follows.
Diffusion of innovations
Diffusion of Innovations is a theory of how, why, and at what rate
new ideas and technology spread through cultures. The concept was first studied by the
French sociologist Gabriel Tarde (1890) and by German and
Austrian anthropologists such as Friedrich Ratzel and Leo Frobenius.[1] Its basic
epidemiological or internal-influence form was formulated by H. Earl Pemberton[2], who
provided examples of institutional diffusion such as postage stamps and compulsory
school laws.
Element definitions:
Innovation: Rogers defines an innovation as "an idea, practice, or object that is perceived as new by an individual or other unit of adoption" [5].
Communication channel: A communication channel is "the means by which messages get from one individual to another" [6].
Time: "The innovation-decision period is the length of time required to pass through the innovation-decision process" [7]. "Rate of adoption is the relative speed with which an innovation is adopted by members of a social system" [8].
Social system: "A social system is defined as a set of interrelated units that are engaged in joint problem solving to accomplish a common goal" [9].
Decisions
Two factors determine what type a particular decision is:
Based on these considerations, three types of innovation-decisions have been identified within
diffusion of innovations.
Type definitions:
Optional innovation-decision: This decision is made by an individual who is in some way distinguished from others in a social system.
Collective innovation-decision: This decision is made collectively by all individuals of a social system.
Authority innovation-decision: This decision is made for the entire social system by a few individuals in positions of influence or power.
Diffusion of an innovation occurs through a five–step process. This process is a type of
decision-making. It occurs through a series of communication channels over a period of
time among the members of a similar social system. Ryan and Gross first identified
adoption as a process in 1943 (Rogers 1962, p. 79). Rogers categorizes the five stages
(steps) as: awareness, interest, evaluation, trial, and adoption. An individual might reject
an innovation at any time during or after the adoption process. In later editions of
Diffusion of Innovations, Rogers changed the terminology of the five stages to:
knowledge, persuasion, decision, implementation, and confirmation. However, the
descriptions of the categories have remained similar throughout the editions.
The rate of adoption is defined as the relative speed with which members of a social
system adopt an innovation. It is usually measured by the length of time required for a
certain percentage of the members of a social system to adopt an innovation (Rogers
1962, p. 134). The rates of adoption for innovations are determined by an individual's
adopter category. In general, individuals who first adopt an innovation require a shorter
adoption period (adoption process) than late adopters.
Within the rate of adoption there is a point at which an innovation reaches critical mass.
This is a point in time within the adoption curve that enough individuals have adopted an
innovation in order that the continued adoption of the innovation is self-sustaining. In
describing how an innovation reaches critical mass, Rogers outlines several strategies to
help it reach this stage: have the innovation adopted by a highly respected individual
within a social network, creating an instinctive desire for it; inject the innovation into a
group of individuals who would readily use it; and provide positive reactions and
benefits for early adopters.
One major reason for this lack of utilization is that educational technologists have
concentrated their efforts on developing instructionally sound and technically superior
products while giving less consideration to other issues. Technical superiority, while
important, is not the only factor that determines whether or not an innovation is widely
adopted--it might not even be the most important factor (Pool, 1997). A complex web of
social, economic, technical, organizational, and individual factors interact to influence
which technologies are adopted and to alter the effect of a technology after it has been
adopted (Segal, 1994). In order to fully understand the field, practitioners have to
understand more than just hardware, software, design models, and learning
theory. Understanding why people use educational technology and, perhaps more
importantly, why they don’t is at the core of the process. That’s where adoption,
diffusion, implementation, and institutionalization come in.
There has been a long and impressive history of research related to the adoption
and diffusion of innovations (Surry & Brennan, 1998). Many of the most important and
earliest studies in this area were conducted by researchers working in the field of rural
sociology (Rogers, 1995). In fact, a study that investigated the diffusion of hybrid-seed
corn (Ryan & Gross, 1943) is considered to be the first major, influential diffusion study
of the modern era (Rogers, 1995). Other researchers have investigated the diffusion of
innovations in such diverse fields as solar power (Keeler, 1976), farm innovations in
India (Sekon, 1968), and weather forecasting (Surry, 1993).
The most widely cited and most influential researcher in the area of adoption and
diffusion is Everett Rogers. Rogers’ Diffusion of Innovations is perhaps the single most
important book related to this topic and provides a comprehensive overview of adoption
and diffusion theory. It was first published in 1962 and is now in its 4th edition (Rogers,
1995).
The concept of perceived attributes (Rogers, 1995) has served as the basis for a number
of diffusion studies (e.g., Fliegel & Kivlin, 1966; Wyner, 1974). Perceived attributes
refers to the opinions of potential adopters, who base their feelings about an innovation
on how they perceive that innovation in regard to five key attributes: relative advantage,
compatibility, complexity, trialability, and observability. In short, this construct states
that people are more likely to adopt an innovation if the innovation offers them a better
way to do something, is compatible with their values, beliefs and needs, is not too
complex, can be tried out before adoption, and has observable benefits. Perceived
attributes are important because they show that potential adopters base their opinions of
an innovation on a variety of attributes, not just relative advantage. Educational
technologists, therefore, should try to think about how potential adopters will perceive
their innovations in terms of all of the five attributes, and not focus exclusively on
technical superiority.
The S-shaped adoption curve is another important idea that Rogers (1995) has
described. This curve shows that a successful innovation will go through a period of
slow adoption before experiencing a sudden period of rapid adoption and then a gradual
leveling off. When depicted on a graph, this slow growth, rapid expansion, and leveling
off form an S-shaped curve (see Figure 3). The period of rapid expansion, for most
successful innovations, occurs when social and technical factors combine to permit the
innovation to experience dramatic growth. For example, one can think of the many
factors that combined to lead to the widespread acceptance of the World Wide Web
between the years 1993 and 1995.
Figure 3. Example of an S-curve showing initial slow growth, a period of rapid adoption,
and a gradual leveling off.
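The S-shaped growth just described can be sketched with a simple logistic model. This is an illustrative sketch, not a formula from Rogers; the `steepness` and `midpoint` parameters are hypothetical choices that shape the curve.

```python
import math

def cumulative_adoption(t, steepness=1.0, midpoint=0.0):
    """Logistic S-curve: cumulative fraction of adopters at time t.
    'midpoint' is the inflection point where half the population has adopted;
    'steepness' controls how quickly the period of rapid adoption unfolds."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

# Slow start, rapid middle, gradual leveling off:
for t in (-6, -3, 0, 3, 6):
    print(f"t={t:+d}: {cumulative_adoption(t):.3f}")
```

Early values stay near zero, the curve passes through one half at the midpoint, and it flattens toward one, which is the slow growth, rapid expansion, and leveling off of Figure 3.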
There appears to be a growing trend in innovation research away from adoption and
diffusion towards implementation and institutionalization. As the adoption and diffusion
process moves along, the actual use or implementation of an innovation in a specific
setting becomes more and more important. Of course, implementation should be an
integral part of a comprehensive and systematic change plan from the
beginning. Michael Fullan, a prominent researcher in this area, defines implementation as
"...the actual use of an innovation in practice." Further, he calls the implementation
perspective, "...both the content and process of dealing with ideas, programs, activities,
structures, and policies that are new to the people involved" (Fullan, 1996). Until Fullan
and Pomfret (1977) spelled out the process and issues in their review of implementation
research, not much was said about the steps after diffusion and adoption.
Once professional educators realized that they could modify programs, products and
practices, it was a short step to an approach that was less “lock step” and more analogous
to constructivism. Local participation in the modifications created a greater sense of
ownership.
Other Models
One of the tools often used to guide implementation efforts in schools is Hall's
Concerns Based Adoption Model (CBAM) (Hall & Hord, 1987). In the implementation
phase of this model, the Levels of Use (LoU) scale is introduced (Hall & Loucks,
1975). The basic levels are: Nonuse; Orientation (initial information); Preparation (to
use); Mechanical use; Routine; Refinement; Integration; and Renewal. The last four
levels actually move into the area of institutionalization discussed later in this chapter. A
modification of the LoU, Levels of Technological Implementation (LoTi), based on
measurement of classroom use of computers, has been proposed by Moersch
(1995). Moersch modifies Hall's levels to provide guidance for determining the extent of
implementation using seven levels: Nonuse; Awareness; Exploration; Infusion;
Integration; Expansion; and Refinement.
Over the years there have been studies and explorations of the resistance factors that
thwart diffusion and implementation efforts. Prominent among those who have journeyed
into this puzzling morass are Zaltman and Duncan (1977). These authors define
resistance as "...any conduct that serves to maintain the status quo in the face of pressure
to alter the status quo." The basic argument has been that if we knew what types of
resistance exist, perhaps we could design strategies to combat them. There are many
different types of resistance. They can be classified as cultural, social, organizational and
psychological. This approach to implementation has been successful only when
strategies for overcoming specific points of resistance have been developed.
1. Dissatisfaction with the status quo. Things could be better. Others seem to be moving
ahead while we are standing still. Dissatisfaction is based on an innate feeling or is
induced by a "marketing" campaign.
2. Knowledge and skills exist. Knowledge and skills are those required by the ultimate
user of the innovation. Without them, people become frustrated and
immobilized. Training is usually a vital part of most successful innovations.
3. Availability of resources. Resources are the things that are required to make
implementation work--the hardware, software, audiovisual media and the like. Without
them, implementation is reduced.
7. Commitment. This condition demonstrates firm and visible evidence that there is
endorsement and continuing support for the innovation. This factor is seen most
frequently in those who advocate the innovation and their supervisors.
8. Leadership. This factor includes (1) leadership of the executive officer of the
organization and, sometimes, by a board and (2) leadership within the institution or
project related to the day-to-day activities of the innovation being implemented.
It is clear that the eight conditions are present in varying degrees whenever
examples of successful implementation are studied. What is not so clear is the role of the
setting in which the innovation is implemented. The setting and the nature of the
innovation are major factors influencing the degree to which each condition is
present. Some of the variables in the setting include organizational climate, political
complexity and certain demographic factors. Some of the most important variables
regarding the innovation are the attributes of the innovation discussed earlier--its relative
advantage (when compared with the current status), compatibility with the values of the
organization or institution, its complexity (or simplicity), trialability before wholesale
adoption and observability by other professionals or the public. But...is implementation
the final stage?
Indicators of Institutionalization
According to the Regional Laboratory for Educational Improvement of the Northeast and
Islands (Eiseman, Fleming & Roody, 1990), there are six commonly accepted indicators
of institutionalization:
4. Firm expectation that use of the practice and/or product will continue within the
institution or organization;
5. Continuation does not depend upon the actions of specific individuals but upon the
organizational culture, structure or procedures; and
Once implementation has been achieved, one more decision must be made: "Is this
innovation something we want to continue for the immediate future?" If it is, the above
criteria could be used to assess the extent to which the innovation is
institutionalized. Several other indicators of routine use, called "passages and cycles,"
are listed by Yin and Quick (1978): support by local funds; new personnel classification;
changes in governance; internalization of training; and turnover of key personnel.
Beal, Rogers and Bohlen together developed a technology diffusion model[5] and
later Everett Rogers generalized the use of it in his widely acclaimed book, Diffusion of
Innovations[6](now in its fifth edition), describing how new ideas and technologies spread
in different cultures. Others have since used the model to describe how innovations
spread between states in the U.S.
Rogers' bell curve
The technology adoption lifecycle model describes the adoption or acceptance of a new
product or innovation, according to the demographic and psychological characteristics of
defined adopter groups. The process of adoption over time is typically illustrated as a
classical normal distribution or "bell curve." The model indicates that the first group of
people to use a new product is called "innovators," followed by "early adopters." Next
come the early and late majority, and the last group to eventually adopt a product are
called "laggards."
The demographic and psychological (or "psychographic") profiles of each adoption group
were originally specified by the North Central Rural Sociology Committee,
Subcommittee for the Study of the Diffusion of Farm Practices (as cited by Beal and
Bohlen in their study above).
innovators - had larger farms, were more educated, more prosperous and more
risk-oriented
early adopters - younger, more educated, tended to be community leaders
early majority - more conservative but open to new ideas, active in the community
and influential among neighbours
late majority - older, less educated, fairly conservative and less socially active
laggards - very conservative, had small farms and capital, oldest and least
educated
Rogers had no plans to attend university until a school teacher drove him and some
classmates to Ames to visit Iowa State University. Rogers decided to pursue a degree
in agriculture there. He then served in the Korean War for two years. He returned to
Iowa State University to earn a Ph.D. in sociology and statistics in 1957.
When the first edition (1962) of Diffusion of Innovations was published, Rogers was
an assistant professor of rural sociology at Ohio State University. He was only 30
years old but was becoming a world-renowned academic figure. In the mid-2000s,
Diffusion of Innovations became the second-most-cited book in the social
sciences. (Arvind Singhal: Introducing Professor Everett M. Rogers, 47th Annual
Research Lecturer, University of New Mexico)[1]. The fifth edition (2003, with
Nancy Singer Olaguera) addresses the spread of the Internet, and how it has
transformed the way human beings communicate and adopt new ideas.
Rogers proposes that adopters of any new innovation or idea can be categorized as
innovators (2.5%), early adopters(13.5%), early majority (34%), late majority (34%)
and laggards (16%), based on the bell curve. These categories,
based on standard deviations from the mean of the normal curve, provide a common
language for innovation researchers. Each adopter's willingness and ability to adopt
an innovation depends on their awareness, interest, evaluation, trial, and adoption.
People can fall into different categories for different innovations—a farmer might be
an early adopter of mechanical innovations, but a late majority adopter of biological
innovations or VCRs.
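Because the categories are defined by standard deviations from the mean adoption time, their approximate shares can be recovered directly from the standard normal distribution. The sketch below is illustrative; note that the exact normal-curve areas (2.3%, 13.6%, 34.1%, 34.1%, 15.9%) are conventionally rounded to Rogers' published 2.5/13.5/34/34/16 figures.

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

# Category boundaries in standard deviations from the mean adoption time.
# Innovators adopt earliest (more than 2 sd before the mean); laggards adopt last.
boundaries = {
    "innovators":     (-math.inf, -2.0),
    "early adopters": (-2.0, -1.0),
    "early majority": (-1.0, 0.0),
    "late majority":  (0.0, 1.0),
    "laggards":       (1.0, math.inf),
}

def category_share(lo, hi):
    """Fraction of adopters whose adoption time falls between two z-scores."""
    lo_p = 0.0 if lo == -math.inf else normal_cdf(lo)
    hi_p = 1.0 if hi == math.inf else normal_cdf(hi)
    return hi_p - lo_p

for name, (lo, hi) in boundaries.items():
    print(f"{name}: {category_share(lo, hi) * 100:.1f}%")
```

The five shares necessarily sum to 100%, since the boundaries partition the whole curve.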
When graphed, the rate of adoption forms what came to typify the Diffusion of
Innovations model: an S-shaped curve. The graph essentially shows the
cumulative percentage of adopters over time: slow at the start, more rapid as
adoption increases, then leveling off until only a small percentage of laggards have
not adopted (Rogers, Diffusion of Innovations, 1983).
His research and work became widely accepted in communications and technology
adoption studies, and also found its way into a variety of other social
science studies. Geoffrey Moore's Crossing the Chasm drew from Rogers in
explaining how and why technology companies succeed. Rogers was also able to
relate his communications research to practical health problems,
including hygiene, family planning, cancer prevention, and drunk driving.
Belief systems relate to the fundamental values of the organization. Examples in this
category include mission statements and vision statements. Boundary systems
describe constraints in terms of employee behavior, i.e., forbidden actions. Internal
control systems are related to protecting assets, while diagnostic systems theoretically
provide information indicating when a system is in control or out of control.
Interactive systems focus on communicating and implementing the organization's
strategy. The purpose of an interactive system is to promote debate related to the
assumptions underlying the organization's strategy and ultimately to promote learning
and growth.
The confusion is related to how and where the balanced scorecard fits into the levers
of control. According to Kaplan & Norton, successful balanced scorecard adopters use
the scorecard as an interactive system (p. 350). Some balanced scorecard
implementations have failed because companies used the scorecard as only a
diagnostic system.
Simon's term "interactive system" seems to be essentially the same as Kaplan &
Norton's term "strategic system". The message is that, to obtain the potential benefits of
the balanced scorecard, an organization has to use it as a strategic system.
Much of Simons' narrative is devoted to explaining what the new "levers of control" are,
how they interrelate, and how these constructs support or contradict other management
theories. Simons' familiarity with strategic management theory is evident, and that theory
integrates well with his discussions of the levers of control.
Simons focuses primarily on the informational aspects of management control systems, the
levers managers use to process and transmit information. According to Simons, if four constructs
are understood and analyzed -- core values, risks to be avoided, strategic uncertainties, and
critical performance variables -- then each construct can be controlled with one of four different
levers. The first lever consists of an organization's beliefs systems, which are often embodied in
the mission statement and are used to communicate core values. Lever two is made up of
boundary systems, which basically form the organization's own "Ten Commandments" and are
used to define acceptable risks and standards of business conduct. Diagnostic control systems,
which include the traditional methods used to measure critical performance variables,
make up lever three. Finally, the fourth lever of control, interactive
control systems, consists of the formal information systems that managers use to involve
themselves regularly and personally in the decision-making activities of subordinates; the focus is
on process. It is in discussion of this fourth lever that Simons integrates other management
theories most heavily.
Traditional management controls such as diagnostic control systems are given a bad rap by
Simons. The focus of diagnostic systems is on outcomes, Simons maintains, and he argues that
managers pay either too little or too much attention to them. Simons suggests that heavy reliance
on staff groups such as internal auditors as the gatekeepers of diagnostic control systems yields
a number of organizational benefits.
Simons proposes that the levers of control work more -- or less -- effectively depending upon the
current phase of the firm's life cycle. He presents results from his ten-year examination of control
in several companies from ten different industries, including banking; computer, food, and
machinery manufacturing; and health aids. His evidence shows how the ten managers and their
organizations utilized the control levers to drive changes during the first 18 months of the
managers' tenures. Depending upon the reader's experience, the field study may enhance
understanding of the control lever and life cycle interrelationships.
Simons provides an excellent narrative on balancing empowerment and control and provides a
handy summary of what managers and staff groups must do to effectively implement the four
control systems. And, although well articulated, Simons' examination certainly does not
oversimplify the challenges experienced by all members of an organization as they collaborate to
achieve the enterprise's worthwhile goals.
Introduction
One of the most difficult problems managers face today is maintaining control,
efficiency, and productivity while still giving employees the freedom to be creative,
innovative and flexible.
Giving employees too much autonomy has led to disaster for many companies, including
such well-known names as Sears and Standard Chartered Bank. In these companies and
many others, employees had enough independence that they were able to engage in and
mask underhanded, and sometimes illegal, activities. When these deviant behaviors
finally came to light, the companies incurred substantial losses not only financially, but
also in internal company morale and external public relations.
One method of preventing these kinds of incidents is for companies to revert to the
“machinelike bureaucracies” of the 1950s and 60s. In these work environments,
employees were given very specific instructions on how to do their jobs and then were
watched constantly by superiors to ensure the instructions were carried out properly.
In the modern corporate world, this method of managing employees has all but been
abandoned except in those industries that lend themselves to standardization and
repetition of work activities (e.g., in casinos and on assembly lines). In most industries,
managers simply do not have time to watch everyone all the time. They must find ways
to encourage employees to think for themselves, to create new processes and methods,
while still retaining enough control to ensure that employee creativity will ultimately
benefit and improve the company.
There are four control levers or “systems” that can aid managers in achieving the
balance between employee empowerment and effective control:
1. diagnostic control systems,
2. beliefs systems,
3. boundary systems, and
4. interactive control systems.
Diagnostic Control Systems
This control lever relies on quantitative data, statistical analyses and variance
analyses. Managers use these and other numerical comparisons (e.g., actual to budget,
increases/decreases in overhead from month to month, etc.) to periodically scan for
anything unusual that might indicate a potential problem.
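As a toy illustration of this kind of numerical scanning (all figures and the 10% threshold below are hypothetical, not drawn from the text), a diagnostic check might flag any line item whose actual-to-budget variance falls outside a preset band:

```python
# Hypothetical monthly figures: (actual, budget) pairs per line item.
figures = {
    "sales":     (108_000, 100_000),
    "overhead":  (61_500, 50_000),
    "marketing": (47_000, 48_000),
}

THRESHOLD = 0.10  # flag variances beyond +/-10% of budget

def variance_pct(actual, budget):
    """Variance as a signed fraction of budget."""
    return (actual - budget) / budget

# Periodic scan: collect only the items whose variance is unusual.
flagged = {name: variance_pct(a, b)
           for name, (a, b) in figures.items()
           if abs(variance_pct(a, b)) > THRESHOLD}
print(flagged)
```

With these sample numbers only the overhead item (23% over budget) would be flagged, letting a manager scan by exception rather than inspect every figure.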
Diagnostic systems can be very useful for detecting some kinds of problems, but they can
also induce employees and even managers to behave unethically in order to meet some
kind of preset goal. Meeting the goal, no matter how it’s done, ensures the numbers
won’t fluctuate in a manner that would draw negative attention to a particular department
or person.
Employee bonuses (and sometimes even employment, itself) are often based on how well
performance goals have been met or exceeded, measured in quantitative terms. If the
goals are reasonable and attainable, the diagnostic system works quite well. It enables
managers to assign tasks and go on to other things, releasing them from the leash of
perpetual surveillance. Empowered employees are free to complete their work, under
some but not undue pressure to meet a deadline, productivity level, or other goal, and to
do it in a way that may be new or innovative.
However, when goals become unrealistic, empowered employees may sometimes use
their capacity for creativity to manipulate the factors under their control in order not to
fall short of their manager’s expectations. Such manipulations can only have very short-
term positive effects and can very possibly, depending on their magnitude, lead to long-
run disaster for the company.
Beliefs Systems
This control lever is used to communicate the tenets of corporate culture to every
employee of the company. Beliefs systems are generally broad and designed to appeal to
many different types of people working in many different departments.
In order for beliefs systems to be an effective lever of control, employees must be able to
see key values and ethics being upheld by those in supervisory and other top executive
positions. Senior management must be careful not to adopt a particular belief or mission
simply because it is in vogue to do so at the time, but because it reflects the true nature
and value system of the company as a whole.
It is easier for employees to understand on an informal, innate level the mission and credo
of a company that operates in only one industry, as did many companies in the past. As
companies grow more complex, however, it is becoming more and more necessary to
establish formal, written mission statements and codes of ethics so that there can be no
mistaking where the company is going and how it is going to get there.
Boundary Systems
This control lever is based on the idea that in an age of empowered employees, it has
become easier and more effective to set the rules regarding what is inappropriate rather
than what is appropriate. The effect of this kind of thinking is to allow employees to
create and define new solutions and methods within defined constraints. The constraints
are set by top management and are meant to steer employees clear of certain industries,
types of clients, etc. They are also intended to focus employee efforts on areas that have
been determined to be best for the company, in terms of profitability,
productivity, efficiency, etc.
Boundary systems can be thought of in terms of “minimum standards,” and can help to
guard the good name of a company, an asset that can be very difficult to rebuild once
damaged. Examples of these kinds of standards include forbidding employees to discuss
client matters outside the office or with anyone not employed by the company
(sometimes including even spouses) and refusing to work on projects or with clients
deemed to be “undesirable.”
Many times a company will implement a boundary system only after it has suffered a
major crisis due to the lack of one. It is important that companies begin to be proactive in
establishing boundaries before they are needed.
Boundary systems are the flipside of belief systems. They are the “dark, cold
constraints” to the “warm, positive, inspirational” tenets of belief systems.
Interactive Control Systems
The key to this control lever is the word "interactive." In order for this kind of control
system to work, it is critical that subordinates and supervisors maintain regular, face-to-
face contact. Management must be able to glean what is most critical from all aspects of
an organization’s operations so that they can establish and maintain on a daily basis their
overall strategic plan for the company.
Though this may seem somewhat like the diagnostic control system discussed earlier,
there are four important characteristics which set the interactive control system
apart: 1) the interactive system focuses on constantly changing data of an overall
strategic nature, 2) the strategic nature of the data warrants attention from all levels of
management on a regular basis, 3) the data is best analyzed in a face-to-face setting in
groups that include all levels of employees, and 4) the system itself stimulates these
regular discussions.
Conclusion
Empowering employees is necessary for the continuing health and improvement of most
companies. Using the four levers of control discussed above in conjunction with one
another, managers can unleash the creative potential of their subordinates without losing
overall control of their team and its objectives.
ORGANIC ORGANIZATION
A term created by Tom Burns and G.M. Stalker in the late 1950s, organic organizations,
unlike mechanistic organizations (also coined by Burns and Stalker), are flexible and
value external knowledge.
Also called organismic organization, this form of organizational structure was widely
sought and proposed, but never proved to really exist since, in contrast to
the mechanistic organization, it has the least hierarchy and specialization of functions. For
an organization to be organic, people in it should be equally leveled, with no job
descriptions or classifications, and communication should have a hub-network-like form. It
thrives on the power of personalities and the lack of rigid procedures, and it
can react quickly and easily to changes in the environment; thus it is said to be the most
adaptive form of organization.
An organic organization is a fluid and flexible network of multi-talented individuals who
perform a variety of tasks, as per the definition of D. A. Morand.
Burns and Stalker offer an alternative: the organismic or organic form of
management (also called systemic). Gone are the formal roles and specialisms based on
assigned, precisely defined tasks. Gone is the idea that overall knowledge and
co-ordination are found only at the top of the hierarchy.
In organismic management a continual adjustment and flexibility in individual tasks is
emphasised. Knowledge is collaborative rather than restricted into specialisms.
Communication is horizontal, vertical and diagonal as required by the types of work
involved. An organisational chart would depend on which job is being done and what
process it involves, and it may not last long. Everyone should consult and consider the
overall aims of the company as the situation keeps changing.
Technical additions and fast change mean that experts are needed. Experts may know
more than many managers. Expert career structures go beyond the organisation (just as
do top executives) and may be based on individual reputations. The politicking then is
more diffuse. The power system leaks out at many levels.
There are a number of sociological analyses here. One is the Weberian ideal types of
mechanistic and organismic organisations: they are not actual, expected organisations
but tendencies for analysis. The mechanistic type also relates to Weber on bureaucracy and its
rational-legal authority. This seemed to be the depressing summit of capitalist
organisation. However, they have shown it needs stability. Without any reference to
human fulfilment, they have argued for a need for a more human and responsive type of
institution. So there is more than a hint of the Parsonian sociology of functional systems
with adaptation, goal attainment, integration and pattern maintenance (Haralambos,
Holborn, 1995, 873), with manifest and latent functions - actually, motivations - in terms
of people using the manifest language of the overt system of formal control while
operating latently with other motivations (Merton, 1949, in Coser, Rosenberg, 1976,
528). One organisation then adapts successfully to a stable system, and one to a changing
system.
There is also history. A firm has to know its past and reveal to itself the three systems of
motivation. The mechanistic organisation defends itself through its people in positions of
power, career climbing and purposive decision-taking; it takes a huge change to become
organismic, if it can be done at all.
Because of the use of sociological categories, the mechanistic and the organismic can be
applied elsewhere. I applied it to historical and broad Christian Churches. Heterodox
liberal Christians inside these organisations were organismic (or systemic) in authority,
because they took it upon themselves to be the experts and specialists of theology in their
very diverse writings. Those heterodox who left to join specialist liberal denominations
pursued instead human relations authority, because they were essentially re-creators of
open gatherings that discussed truth. Orthodox liberal Christians are bureaucrats and
compromisers, unable to hold together a Church that is spiralling away into its new
denominational constituents, each with their own types of authority, namely the
charismatic, traditional and systemic.
PATH-GOAL THEORY
The path-goal theory, also known as the path-goal theory of leader effectiveness or
the path-goal model, is a leadership theory in the field of organizational
studies developed by Robert House, an Ohio State University graduate, in 1971 and
revised in 1996. The theory states that a leader's behavior is contingent on the satisfaction,
motivation and performance of his or her subordinates. The revised version also argues
that the leader engages in behaviors that complement subordinates' abilities and
compensate for their deficiencies. The path-goal model can be classified both as a
contingency theory and as a transactional leadership theory.
The theory was inspired by the work of Martin G. Evans (1970),[1] which examined the
relationship between leadership behaviors and followers' perceptions of the degree to
which following a particular behavior (path) will lead to a particular outcome (goal).[2]
The path-goal theory was also influenced by the expectancy theory of motivation
developed by Victor Vroom in 1964.
According to the original theory, the manager’s job is viewed as guiding workers to
choose the best paths to reach their goals, as well as the organizational goals. The theory
argues that leaders will have to engage in different types of leadership behavior
depending on the nature and the demands of a particular situation. It is the leader’s job to
assist followers in attaining goals and to provide the direction and support needed to
ensure that their goals are compatible with the organization’s goals.[4]
The directive path-goal clarifying leader behavior refers to situations where the
leader lets followers know what is expected of them and tells them how to
perform their tasks. The theory argues that this behavior has the most positive
effect when the subordinates' role and task demands are ambiguous and
intrinsically satisfying.[5]
The participative leader behavior involves leaders consulting with followers and
asking for their suggestions before making a decision. This behavior is
predominant when subordinates are highly personally involved in their work.[2]
Path-goal theory assumes that leaders are flexible and that they can change their style as
situations require. The theory proposes two classes of contingency variables, environment
and follower characteristics, that moderate the leader behavior-outcome relationship.
Environmental factors, which lie outside the follower's control, include the task structure,
the authority system, and the work group; they determine the type of leader behavior
required if follower outcomes are to be maximized. Follower characteristics are the locus
of control, experience, and perceived ability. Personal characteristics of subordinates determine how
the environment and leader are interpreted. Effective leaders clarify the path to help their
followers achieve goals and make the journey easier by reducing roadblocks and
pitfalls. Research demonstrates that employee performance and satisfaction are positively
influenced when the leader compensates for the shortcomings in either the employee or
the work setting.
In contrast to the Fiedler contingency model, the path-goal model states that the four
leadership styles are fluid, and that leaders can adopt any of the four depending on what
the situation demands.
The Path-Goal Theory of Leadership was developed to describe the way that leaders
encourage and support their followers in achieving the goals they have been set by
making the path that they should take clear and easy.
In particular, leaders: clarify the path, so followers know which way to go; remove
roadblocks that are stopping them from getting there; and increase the rewards along the
route.
Leaders can take a strong or limited approach in these. In clarifying the path, they may be
directive or give vague hints. In removing roadblocks, they may scour the path or help
the follower move the bigger blocks. In increasing rewards, they may give occasional
encouragement or pave the way with gold.
This variation in approach will depend on the situation, including the follower's
capability and motivation, as well as the difficulty of the job and other contextual factors.
Supportive leadership
Considering the needs of the follower, showing concern for their welfare and creating a
friendly working environment. This includes increasing the follower's self-esteem and
making the job more interesting. This approach is best when the work is stressful, boring
or hazardous.
Directive leadership
Telling followers what needs to be done and giving appropriate guidance along the way.
This includes giving them schedules of specific work to be done at specific times.
Rewards may also be increased as needed and role ambiguity decreased (by telling them
what they should be doing).
This may be used when the task is unstructured and complex and the follower is
inexperienced. This increases the follower's sense of security and control and hence is
appropriate to the situation.
Participative leadership
Consulting with followers and taking their ideas into account when making decisions and
taking particular actions. This approach is best when the followers are expert and their
advice is both needed and they expect to be able to give it.
Achievement-oriented leadership
Setting challenging goals, both in work and in self-improvement (and often together).
High standards are demonstrated and expected. The leader shows faith in the capabilities
of the follower to succeed. This approach is best when the task is complex.
RACI is an acronym derived from the four key responsibilities most typically used:
Responsible, Accountable, Consulted, and Informed.
Responsible
Those who do the work to achieve the task. There is typically one role with a
participation type of Responsible, although others can be delegated to assist in the work
required (see also RASCI below for separately identifying those who participate in a
supporting role).
Accountable
The one ultimately accountable for the correct and thorough completion of the
deliverable or task, and the one to whom Responsible is accountable. In other words,
an Accountable must sign off (Approve) on work that Responsible provides.
There must be only one Accountable specified for each task or deliverable.
Consulted
Those whose opinions are sought; and with whom there is two-way communication.
Informed
Those who are kept up-to-date on progress, often only on completion of the task or
deliverable; and with whom there is just one-way communication.
Very often the role that is Accountable for a task or deliverable may also
be Responsible for completing it (indicated on the matrix by the task or deliverable
having a role Accountable for it, but no role Responsible for its completion, i.e. it is
implied). Outside of this exception, it is generally recommended that each role in the
project or process for each task receive, at most, just one of the participation types.
Where more than one participation type is shown, this generally implies that participation
has not yet been fully resolved, which can impede the value of this technique in clarifying
the participation of each role on each task.
Role Distinction
The matrix is typically created with a vertical axis (left-hand column) of tasks (e.g., from
a work breakdown structure WBS) or deliverables (e.g., from a product breakdown
structure PBS), and a horizontal axis (top row) of roles (e.g., from an organizational
chart) - as illustrated in the image of an example responsibility assignment (or RACI)
matrix.
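The assignment rules above (exactly one Accountable per task, and the exception that the Accountable may implicitly be Responsible) can be sketched as a simple consistency check. The tasks and roles below are hypothetical, purely for illustration.

```python
# Minimal sketch of a RACI consistency check over a hypothetical matrix.
# Rules encoded are the ones described above: every task needs exactly
# one Accountable, and a task with no Responsible is only acceptable
# when an Accountable exists to implicitly fill that role.

# A RACI matrix: task -> {role: participation type}
raci = {
    "Draft design":   {"Engineer": "R", "Team lead": "A", "QA": "C"},
    "Approve budget": {"Finance": "R", "Director": "A", "Team lead": "I"},
}

def check_raci(matrix):
    """Return a list of problems found in the matrix."""
    problems = []
    for task, assignments in matrix.items():
        accountable = [r for r, t in assignments.items() if t == "A"]
        if len(accountable) != 1:
            problems.append(
                f"{task}: needs exactly one Accountable, found {len(accountable)}")
        # The Accountable may implicitly be Responsible, so a missing 'R'
        # is flagged only when there is no Accountable to fall back on.
        responsible = [r for r, t in assignments.items() if t == "R"]
        if not responsible and not accountable:
            problems.append(f"{task}: no Responsible role")
    return problems

print(check_raci(raci))  # → []
```

An empty result means the matrix is consistent under these two rules; anything returned names the offending task.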
The Seven Habits of Highly Effective People
The Seven Habits of Highly Effective People, first published in 1989, is a self-help book
written by Stephen R. Covey. It has sold over 15 million copies in 38 languages since its
first publication; a 15th anniversary edition was released in 2004.
Covey presents an approach to being effective in attaining goals by aligning oneself to
what he calls "true north" principles of a character ethic that he presents as universal and
timeless.
Each chapter is dedicated to one of the habits, which are represented by the following
imperatives:
The first three habits concern moving from dependence to independence (i.e. self-mastery).
Habit 1: Be Proactive
Synopsis: Take the initiative in life by realizing that your decisions (and how they align
with life's principles) are the primary determining factor for effectiveness in your life.
Take responsibility for your choices and the consequences that follow.
Habit 2: Begin with the End in Mind
Synopsis: Self-discover and clarify your deeply important character values and life goals.
Envision the ideal characteristics for each of your various roles and relationships in life.
Habit 3: Put First Things First
Synopsis: Planning, prioritizing, and executing your week's tasks based on importance
rather than urgency. Evaluating if your efforts exemplify your desired character values,
propel you towards goals, and enrich the roles and relationships elaborated in Habit 2.
The next three habits are to do with interdependence (i.e. working with others).
Habit 7: Sharpen the Saw
Synopsis: The balancing and renewal of your resources, energy, and health to create a
sustainable, long-term, effective lifestyle.
Covey coined the term abundance mentality or abundance mindset, a concept in which
a person believes there are enough resources and success to share with others. It is
commonly contrasted with the scarcity mindset (i.e. destructive and unnecessary
competition), which is founded on the idea that, if someone else wins or is successful in a
situation, that means you lose; not considering the possibility of all parties winning (in
some way or another) in a given situation. Individuals with an abundance mentality are
able to celebrate the success of others rather than be threatened by it.
A number of books appearing in the business press since then have discussed the idea. The
abundance mentality is believed to arise from having a high self-worth and security (see
Habits 1, 2, and 3), and leads to the sharing of profits, recognition and responsibility.
Organizations may also apply an abundance mentality while doing business.
Covey explains the "Upward Spiral" model in the sharpening the saw section. Through
our conscience, along with meaningful and consistent progress, the spiral will result in
growth, change, and constant improvement. In essence, one is always attempting to
integrate and master the principles outlined in The 7 Habits at progressively higher levels
at each iteration. Subsequent development on any habit will render a different experience
and you will learn the principles with a deeper understanding. The Upward Spiral model
consists of three parts: learn, commit, do. According to Covey, one must be increasingly
educating the conscience in order to grow and develop on the upward spiral. The idea of
renewal by education will propel one along the path of personal freedom, security,
wisdom, and power.
Dr Stephen Covey is a hugely influential management guru whose book The Seven
Habits Of Highly Effective People became a blueprint for personal development when it
was published in 1989. The Seven Habits are said by some to be easy to understand but
not as easy to apply. Don't let the challenge daunt you: the 'Seven Habits' are a
remarkable set of inspirational and aspirational standards for anyone who seeks to live a
full, purposeful and good life, and are applicable today more than ever, as the business
world becomes more attuned to humanist concepts. Covey's values are full of integrity
and humanity, and contrast strongly with the process-based ideologies that characterised
management thinking in earlier times.
Stephen Covey, as well as being a renowned writer, speaker, academic and humanist, has
also built a huge training and consultancy products and services business, Franklin
Covey, which has a global reach and has at one time or another consulted with and
provided training services to most of the world's leading corporations.
Habit 1 - be proactive®
This is the ability to control one's environment, rather than have it control you, as is so
often the case. Self-determination, choice, and the power to decide your response to
stimulus, conditions and circumstances.
Habit 2 - begin with the end in mind®
Covey calls this the habit of personal leadership - leading oneself that is, towards what
you consider your aims. By developing the habit of concentrating on relevant activities
you will build a platform to avoid distractions and become more productive and
successful.
Habit 3 - put first things first®
Covey calls this the habit of personal management. This is about organizing and
implementing activities in line with the aims established in habit 2. Covey says that habit
2 is the first or mental creation; habit 3 is the second or physical creation. (See the section
on time management.)
Habit 4 - think win-win®
Covey calls this the habit of interpersonal leadership, necessary because achievements are
largely dependent on co-operative efforts with others. He says that win-win is based on
the assumption that there is plenty for everyone, and that success follows a co-operative
approach more naturally than the confrontation of win-or-lose.
Habit 5 - seek first to understand and then to be understood®
One of the great maxims of the modern age. This is Covey's habit of communication, and
it's extremely powerful. Covey helps to explain this in his simple analogy 'diagnose
before you prescribe'. Simple and effective, and essential for developing and maintaining
positive relationships in all aspects of life. (See the associated sections on Empathy,
Transactional Analysis, and the Johari Window.)
Habit 6 - synergize®
Covey says this is the habit of creative co-operation - the principle that the whole is
greater than the sum of its parts, which implicitly lays down the challenge to see the good
and potential in the other person's contribution.
Habit 7 - sharpen the saw®
This is the habit of self renewal, says Covey, and it necessarily surrounds all the other
habits, enabling and encouraging them to happen and grow. Covey interprets the self into
four parts: the spiritual, mental, physical and the social/emotional, which all need feeding
and developing.
Stephen Covey's Seven Habits are a simple set of rules for life - inter-related and
synergistic, and yet each one powerful and worthy of adopting and following in its own
right. For many people, reading Covey's work, or listening to him speak, literally changes
their lives. This is powerful stuff indeed and highly recommended.
This 7 Habits summary is just a brief overview - the full work is fascinating,
comprehensive, and thoroughly uplifting. Read the book, or listen to the full audio series
if you can get hold of it.
In his more recent book 'The 8th Habit', Stephen Covey introduced (logically) an
eighth habit, which deals with personal fulfilment and helping others to achieve
fulfilment too, which aligns helpfully with Maslow's notions of 'Self-Actualization' and
'Transcendence' in the Hierarchy of Needs model, and also with the later life-stages in
Erikson's Psychosocial Life-Stage Theory. The 8th Habit book also focuses on
leadership, another distinct aspect of fulfilment through helping others. Time will tell
whether the 8th Habit achieves recognition and reputation close to Covey's classic
original 7 Habits work.
Stephen R. Covey (born October 24, 1932 in Salt Lake City, Utah) is the author of the
best-selling book, The Seven Habits of Highly Effective People. Other books he has
written include First Things First, Principle-Centered Leadership, and The Seven Habits
of Highly Effective Families. In 2004, Covey released The 8th Habit. In 2008, Covey
released The Leader In Me—How Schools and Parents Around the World Are Inspiring
Greatness, One Child at a Time. He is currently a professor at the Jon M. Huntsman
School of Business at Utah State University.
Michael Porter, Professor at Harvard Business School and acknowledged as the most
influential living management thinker, has seven surprises for new CEOs.
He says: “As a newly minted CEO, you may think you finally have the power to set
strategy, the authority to make things happen, and full access to the finer points of your
business. But if you expect the job to be as simple as that, you're in for an awakening.
Even though you bear full responsibility for your company's wellbeing, you are a few
steps removed from many of the factors that drive results.
“You have more power than anybody else in the corporation, but you need to use it with
extreme caution. Nothing - not even running a large business within the company - fully
prepares a person to be the chief executive.”
Professor Porter will return to South Africa on July 3 for a full-day event organised by
Global Leaders, following his half-day workshop for the Global Leaders Africa Summit a
year ago. He will present a cutting-edge programme covering corporate strategy, South
Africa’s global competitiveness and CSR initiatives.
He explained: “These surprises carry some important and subtle lessons. First, you must
learn to manage organisational context rather than focus on daily operations. Second, you
must recognise that your position does not confer the right to lead, nor does it guarantee
the loyalty of the organisation.
“Finally, you must remember that you are subject to a host of limitations, even though
others might treat you as omnipotent. How well and how quickly you understand, accept,
and confront the seven surprises will have a lot to do with your success or failure as a
CEO.”
Attitudes and their connection with industrial mental health are related
to Maslow's theory of motivation. Herzberg's findings have had a considerable theoretical,
as well as a practical, influence on attitudes toward administration[2]. According to Herzberg,
individuals are not content with the satisfaction of lower-order needs at work, for
example, those associated with minimum salary levels or safe and pleasant working
conditions. Rather, individuals look for the gratification of higher-level psychological
needs having to do with achievement, recognition, responsibility, advancement, and the
nature of the work itself. So far, this appears to parallel Maslow's theory of a need
hierarchy. However, Herzberg added a new dimension to this theory by proposing a
two-factor model of motivation, based on the notion that the presence of one set of job
characteristics or incentives leads to worker satisfaction at work, while another and
separate set of job characteristics leads to dissatisfaction at work. Thus, satisfaction and
dissatisfaction are not on a continuum with one increasing as the other diminishes, but are
independent phenomena. This theory suggests that to improve job attitudes and
productivity, administrators must recognize and attend to both sets of characteristics and
not assume that an increase in satisfaction leads to a decrease in dissatisfaction.
Briefly, we asked our respondents to describe periods in their lives when they were
exceedingly happy and unhappy with their jobs. Each respondent gave as many
"sequences of events" as he could that met certain criteria—including a marked change in
feeling, a beginning and an end, and contained some substantive description other than
feelings and interpretations…
The proposed hypothesis appears verified. The factors on the right that led to satisfaction
(achievement, intrinsic interest in the work, responsibility, and advancement) are mostly
unipolar; that is, they contribute very little to job dissatisfaction. Conversely, the dis-
satisfiers (company policy and administrative practices, supervision, interpersonal
relationships, working conditions, and salary) contribute very little to job satisfaction[3].
Hygiene factors (e.g. status, job security, salary and fringe benefits) do not give positive
satisfaction, though dissatisfaction results from their absence. These are extrinsic to the
work itself, and include aspects such as company policies, supervisory practices, or
wages/salary[4].
Unlike Maslow, who offered little data to support his ideas, Herzberg and others have
presented considerable empirical evidence to confirm the motivation-hygiene theory,
although their work has been criticized on methodological grounds.
Skandia Navigator: Measuring Intangible Assets Software
The leading feature of Skandia's Navigator is its flexibility. The companies that comprise
the Skandia Corporation, maybe even departments in these companies, are not required to
adopt a set form or number of measures. They are not even required to report on the same
indicators from year to year, because the Navigator is primarily seen as a navigation tool
and not one that provides detailed implementation guidelines. Despite its pioneering work
and leadership in measuring IC, Skandia still believes in the value of learning through
taking an experimental approach. Nevertheless, the Navigator is adopted widely across
Skandia and has been incorporated in the MIS system of Skandia under the Dolphin
system.
Skandia applies the BSC idea to the Navigator by applying measures to monitor critical
business success factors under each of five focuses: financial, human, process, customer,
and renewal. Under the Navigator model, the measuring entity—whether the organization
or individual business units or departments—asks the question, "What are the critical
factors that enable us to achieve success under each of the focus areas?" Then a number
of indicators designed to reflect both present and future performance under these factors
are chosen.
Edvinsson explains that the measuring entity may also have a different starting point by
asking, "What are the key success factors for the measuring entity in general?" The entity
then asks, "What are the indicators that are needed to monitor present and future
performance for the chosen success factors?" Once these are determined, as many
measures as necessary are chosen to monitor them. Finally, these measures are examined
and placed under the five focuses depending on what they purport to measure.
For example, SkandiaLink asked senior managers to identify five separate key success
factors for the company in 1997. These included establishing long-term relationships with
satisfied customers, establishing long-term relationships with distributors (particularly
banks), implementing efficient administrative routines, creating an IT system that
supports operations, and employing satisfied and competent employees. Each of these
"success factors" generated a set of indicators, and a total of 24 were selected for
tracking. For the satisfied customer factor, for example, this generated the following
indicators:
• Customer barometer
• New sales
• Market share
• Lapse rate
These indicators are then grouped under the various focuses. As key success factors
change, the overall set of indicators for a certain period (strategic phase) that the
Navigator model monitors also changes. Not only does the Navigator allow this high
level of flexibility in the choice of indicators from time to time, but it also encourages
individual employees to express their goals and monitor their own and their team's
performance.
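The two-step selection described above (derive indicators from key success factors, then place each under one of the five focuses) can be sketched as a small data structure. The indicator names come from the SkandiaLink example; the focus assignments are illustrative assumptions, not Skandia's actual mapping.

```python
# Hypothetical sketch of the Navigator's grouping step: indicators are
# derived from key success factors, then placed under one of the five
# focuses. Focus assignments below are illustrative assumptions.

FOCUSES = ("financial", "human", "process", "customer", "renewal")

# indicator -> (success factor it tracks, focus it is placed under)
indicators = {
    "Customer barometer": ("satisfied customers", "customer"),
    "New sales":          ("satisfied customers", "financial"),
    "Market share":       ("satisfied customers", "customer"),
    "Lapse rate":         ("satisfied customers", "customer"),
}

def group_by_focus(inds):
    """Arrange chosen indicators under the five Navigator focuses."""
    grouped = {focus: [] for focus in FOCUSES}
    for name, (_factor, focus) in inds.items():
        grouped[focus].append(name)
    return grouped

print(group_by_focus(indicators))
```

Because the indicator set is just data, swapping indicators between strategic phases, as the Navigator encourages, amounts to editing the mapping rather than changing the framework.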
In one example, the Navigator model was used by Skandia's corporate IT to monitor its
vision of making IT the company's competitive edge. To that end, the IT department used
the following measures: Under the financial focus, the department measured return on
capital employed, operating results, and value added/employee. The customer focus
looked at the contracts that the department handled for Skandia-affiliated companies. The
indicators included number of contracts, savings/contract, surrender ratio, and points of
sale. The human focus tracked number of full-time employees, number of managers,
number of female managers, and training expense/ employee. Under the process focus the
department measured the number of contracts per employee, administrative
expense/gross premiums written, and IT/administrative expense.
Compared to the BSC model, where the measures are more or less prescribed, the
Navigator allows for multiple variations: its underlying philosophy is to provide the
highest level of flexibility within a defined framework.
Skandia wants the Navigator to be a tool for plotting a course rather than a detailed
guideline. The details can be filled in later as management steers the business toward
meeting its strategic goals. Being flexible and idiosyncratic to the needs of the measuring
unit, the Navigator ensures that the whole organization talks IC, while at the same time
allowing each measuring unit to develop its own dialect.
Despite inconsistencies and the huge number of indicators generated, Skandia automated
the Navigator, through the Dolphin system, and incorporated it into its management
information system (MIS). With time the Dolphin system will probably lead to
streamlining the various "navigators," and give rise to a more consistent set of indicators
through sharing and communication. It seems that Skandia is serious about
communication despite the inconsistency of the measures used, to the extent that it
reported these measures to external stakeholders. In 1993, Skandia appointed an IC (as
opposed to financial) controller to "systemically develop intellectual capital information
and accounting systems, which can then be integrated with traditional financial
accounting". Though IC reporting requires more consistent measures, or a well-defined
model, Skandia appears determined to balance its desire for transparency about how the
organization is run with its wish to continue experimenting with the Navigator.
In the King III Report (otherwise known as King Code of Governance for South Africa
2009), integrated reporting is referred to in this manner: "A key challenge for leadership
is to make sustainability issues mainstream. Strategy, risk, performance and sustainability
have become inseparable; hence the phrase ‘integrated reporting’ which is used
throughout this Report." [1]
Companies that produce integrated reports include BASF, Philips, Novo Nordisk, United
Technologies Corporation (UTC) and American Electric Power (AEP). In 2008, UTC
was the first Dow Jones Industrial Average member to produce an integrated report.
The Prince of Wales' Accounting for Sustainability project introduced the Connected
Reporting Framework in 2007. Companies reporting using this framework, which links
sustainability performance reporting with financial reporting and strategic direction in a
connected way, include Aviva, BT and HSBC.[2]
Altman Z-score
The Z-score formula for predicting bankruptcy was published in 1968 by Edward I.
Altman, who was, at the time, an Assistant Professor of Finance at New York University.
The formula may be used to predict the probability that a firm will go into bankruptcy
within two years. Z-scores are used to predict corporate defaults and as an easy-to-calculate
control measure for the financial distress status of companies in academic studies. The Z-
score uses multiple corporate income and balance sheet values to measure the financial
health of a company.
The Z-score is a linear combination of four or five common business ratios, weighted by
coefficients. The coefficients were estimated by identifying a set of firms which had
declared bankruptcy and then collecting a matched sample of firms which had survived,
with matching by industry and approximate size (assets).
The original data sample consisted of 66 firms, half of which had filed for bankruptcy
under Chapter 7. All businesses in the database were manufacturers, and small firms with
assets of <$1 million were eliminated.
T1 = Working Capital / Total Assets. Measures liquid assets in relation to the size of the
company.
T2 = Retained Earnings / Total Assets. Measures profitability that reflects the company's
age and earning power.
T3 = Earnings Before Interest and Taxes / Total Assets. Measures operating efficiency
apart from tax and leveraging factors. It recognizes operating earnings as being important
to long-term viability.
T4 = Market Value of Equity / Book Value of Total Liabilities. Adds a market dimension
that can show security price fluctuation as a possible red flag.
T5 = Sales/ Total Assets. Standard measure for sales turnover (varies greatly from
industry to industry).
Altman found that the ratio profile for the bankrupt group fell at -0.25 avg, and for the
non-bankrupt group at +4.48 avg.
Z Score Analysis
The original formula combines the five ratios with the weights Altman estimated:
Z = 1.2·T1 + 1.4·T2 + 3.3·T3 + 0.6·T4 + 1.0·T5
where T1 = (net working capital) / (total assets), T2 = (retained earnings) / (total assets),
T3 = EBIT / (total assets), T4 = (market value of common and preferred stock) / (book
value of debt), and T5 = sales / (total assets).
Although the weights are not equal, the higher each ratio, the higher the Z score and the
lower the probability of bankruptcy. Also called Zeta.
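As a rough sketch, the original (1968, manufacturing-firm) formula can be computed directly. The weights (1.2, 1.4, 3.3, 0.6, 1.0) are Altman's published coefficients; the interpretation bands used below (distress below 1.81, safe above 2.99) are the commonly cited cut-offs for this version of the model, and the firm's figures are invented for illustration.

```python
# Sketch of the original Altman Z-score with its published weights.
# The sample balance-sheet figures are hypothetical.

def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, total_liabilities,
             sales, total_assets):
    t1 = working_capital / total_assets       # liquidity vs. size
    t2 = retained_earnings / total_assets     # cumulative profitability
    t3 = ebit / total_assets                  # operating efficiency
    t4 = market_value_equity / total_liabilities  # market dimension
    t5 = sales / total_assets                 # asset turnover
    return 1.2 * t1 + 1.4 * t2 + 3.3 * t3 + 0.6 * t4 + 1.0 * t5

def zone(z):
    """Commonly cited zones for the original model."""
    if z > 2.99:
        return "safe"
    if z >= 1.81:
        return "grey"
    return "distress"

# Hypothetical firm (all figures in the same currency unit):
z = altman_z(working_capital=1_500, retained_earnings=2_000, ebit=800,
             market_value_equity=4_000, total_liabilities=3_500,
             sales=9_000, total_assets=10_000)
print(round(z, 2), zone(z))  # → 2.31 grey
```

A score in the grey zone would flag the firm for closer inspection rather than predict outright failure.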
Frank Bass published his paper "A new product growth model for consumer durables" in
1969.[2] Prior to this, Everett Rogers published Diffusion of Innovations, a highly
influential work that described the different stages of product adoption. Bass contributed
some mathematical ideas to the concept.
This model has been widely influential in marketing and management science. In 2004 it
was selected as one of the ten most frequently cited papers in the 50-year history of
Management Science [4]. It was ranked number five, and the only marketing paper in the
list. It was subsequently reprinted in the December 2004 issue of Management Science.
Model formulation
The model expresses the likelihood of adoption at time t as a linear function of the
fraction of the market that has already adopted:
f(t) / (1 − F(t)) = p + q·F(t) [2]
where F(t) is the installed base fraction and f(t) = dF/dt is its rate of change (i.e.
adoption). Sales is the rate of change of installed base multiplied by the ultimate market
potential m:
S(t) = m·f(t) [2]
Solving the differential equation gives:
F(t) = (1 − e^(−(p+q)·t)) / (1 + (q/p)·e^(−(p+q)·t)) [2]
Explanation
The coefficient p is called the coefficient of innovation, external influence or advertising
effect. The coefficient q is called the coefficient of imitation, internal influence or word-
of-mouth effect.
The average value of p has been found to be 0.03, and is often less than 0.01
The average value of q has been found to be 0.38, with a typical range between
0.3 and 0.5
Although many extensions of the model have been proposed, only one of these reduces to
the Bass model under ordinary circumstances.[4] This extension, the Generalized Bass
Model, was developed in 1994 by Frank Bass, Trichy Krishnan and Dipak Jain:
f(t) / (1 − F(t)) = (p + q·F(t))·x(t)
where x(t) captures the current marketing effort (such as price and advertising); when
x(t) = 1 it reduces to the ordinary Bass model.
Successive generations
Technology products succeed one another in generations. Norton and Bass extended the
model in 1987 for sales of products with continuous repeat purchasing. The formulation
for three generations is as follows:[4]
S_1(t) = F(t - tau_1) m_1 [1 - F(t - tau_2)]
S_2(t) = F(t - tau_2) [m_2 + F(t - tau_1) m_1] [1 - F(t - tau_3)]
S_3(t) = F(t - tau_3) {m_3 + F(t - tau_2) [m_2 + F(t - tau_1) m_1]}
where
m_i = a_i N_i
N_i is the incremental number of ultimate adopters of the ith generation product
a_i is the average (continuous) repeat buying rate among adopters of the ith
generation product
t - tau_i is the time since the introduction of the ith generation product
It has been found that the p and q terms are generally the same between successive
generations.
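A minimal sketch of the successive-generations idea, assuming the commonly cited Norton-Bass (1987) three-generation equations; the parameter values (m_i, introduction times tau_i) are illustrative, and p and q are held constant across generations as the text notes:

```python
import math

def F(t, p=0.03, q=0.38):
    """Bass cumulative adoption fraction; zero before introduction."""
    if t <= 0:
        return 0.0
    e = math.exp(-(p + q) * t)
    return (1 - e) / (1 + (q / p) * e)

def norton_bass_sales(t, m, tau, p=0.03, q=0.38):
    """Sales of three successive generations at time t.

    m[i]   -- a_i * N_i: repeat buying rate times incremental ultimate
              adopters of generation i (illustrative values)
    tau[i] -- introduction time of generation i (tau[0] = 0)
    """
    F1, F2, F3 = F(t - tau[0], p, q), F(t - tau[1], p, q), F(t - tau[2], p, q)
    s1 = F1 * m[0] * (1 - F2)                      # eroded by generation 2
    s2 = F2 * (m[1] + F1 * m[0]) * (1 - F3)        # inherits gen-1 buyers
    s3 = F3 * (m[2] + F2 * (m[1] + F1 * m[0]))     # inherits both
    return s1, s2, s3

# Hypothetical parameters: each generation adds market potential and
# launches five periods after the previous one.
m = [100, 150, 200]
tau = [0, 5, 10]
```

Each generation cannibalizes its predecessor: as F(t - tau_2) rises, generation-1 sales decay even though its own diffusion is complete.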
The Bass model is a special case of the Gamma/shifted Gompertz distribution (G/SG).
Use in online social networks
The rapid, recent (as of early 2007) growth in online
social networks (and other virtual communities) has led to an increased use of the Bass
diffusion model. The Bass diffusion model is used to estimate the size and growth rate of
these social networks.
Here's what Ms. Aaker, and her father before her, researched:
These are five dimensions: Sincerity, Excitement, Competence, Sophistication and
Ruggedness, each comprising a set of facets.
Each facet is in turn measured by a set of traits. The trait measurements are done using a
five point scale (1 = not at all descriptive, 5 = extremely descriptive) rating the extent to
which each trait describes the specific brand.
Sincerity
o Down-to-earth: down-to-earth, family-oriented, small-town
o Honest: honest, sincere, real
o Wholesome: wholesome, original
o Cheerful: cheerful, sentimental, friendly
Excitement
o Daring: daring, trendy, exciting
o Spirited: spirited, cool, young
o Imaginative: imaginative, unique
o Up-to-date: up-to-date, independent, contemporary
Competence
o Reliable: reliable, hard-working, secure
o Intelligent: intelligent, technical, corporate
o Successful: successful, leader, confident
Sophistication
o Upper class: upper class, glamorous, good-looking
o Charming: charming, feminine, smooth
Ruggedness
o Outdoorsy: outdoorsy, masculine, Western
o Tough: tough, rugged
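The facet/trait hierarchy and the five-point rating procedure described above can be sketched as a simple aggregation. Only two dimensions are spelled out here to keep it short, and the plain averaging is an illustrative assumption (Aaker's study derived the scales by factor analysis, not by simple means):

```python
# Subset of Aaker's facet/trait hierarchy (illustrative; two of the
# five dimensions shown, following the breakdown above).
BRAND_PERSONALITY = {
    "Sincerity": {
        "Down-to-earth": ["down-to-earth", "family-oriented", "small-town"],
        "Honest": ["honest", "sincere", "real"],
        "Wholesome": ["wholesome", "original"],
        "Cheerful": ["cheerful", "sentimental", "friendly"],
    },
    "Ruggedness": {
        "Outdoorsy": ["outdoorsy", "masculine", "Western"],
        "Tough": ["tough", "rugged"],
    },
}

def dimension_scores(ratings):
    """Roll 1-5 trait ratings up to facet means, then dimension means."""
    scores = {}
    for dim, facets in BRAND_PERSONALITY.items():
        facet_means = [
            sum(ratings[t] for t in traits) / len(traits)
            for traits in facets.values()
        ]
        scores[dim] = sum(facet_means) / len(facet_means)
    return scores
```

Feeding in a respondent's 1-5 ratings for each trait yields one score per dimension, which is the kind of profile the brand examples below (Burton, Chanel) are describing informally.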
It’s Saturday, and I have spent the morning reading and drinking coffee. I’m actually
feeling kind of lazy and thinking about the screening of the documentary I worked on
earlier this year. So, because I am feeling lazy, and not in the mood to post some lengthy
piece with loads of pictures and such, I am going to post a quick breakdown of the 5
dimensions of brand personality. (I just saw a bunch of you roll your eyes.)
How many times have you heard the statement, “the consumer owns the brand”?
It would probably be safe to say you’ve heard it a dozen or so times, and possibly uttered
it yourself, because it happens to be true. No matter what the product or service that an
organization is offering to its target audience, success or failure is dependent upon the
consumers’ buying in to what they’re selling.
Consumers make purchasing decisions based on any number of factors they associate
with individual brands, and companies spend millions on advertising and marketing
activities so that they can influence what those associations might be. Just as we each
choose our friends based on their personalities, brands can elicit the same sort of
response in consumers. In light of this, wouldn’t it be interesting to know
which human personality traits consumers tend to apply to brands?
Well, it’s a good thing for us that someone has studied this and given us a few answers:
The most exciting brands are daring, spirited, imaginative, and on the cutting edge. Not
only are Burton snowboards on the cutting edge of technology and performance, the
products bearing the Burton name are designed with their audience in mind. Funky
graphics and forward-thinking designs make Burton a leader in their competitive
industry.
A brand that is sophisticated is viewed as charming and fit for the upper classes. When it
comes to esteem and seemingly eternal longevity, the Chanel brand is unequaled. In
good times and bad, this brand remains strong as a symbol of a life lived in all the right
places, doing all the right things.
CRISIS MANAGEMENT
Crisis management is the process by which an organization deals with a major event that
threatens to harm the organization, its stakeholders, or the general public. Three elements
are common to most definitions of crisis: (a) a threat to the organization, (b) the element
of surprise, and (c) a short decision time.[1] Venette[2] argues that "crisis is a process of
transformation where the old system can no longer be maintained." Therefore the fourth
defining quality is the need for change. If change is not needed, the event could more
accurately be described as a failure or incident.
In contrast to risk management, which involves assessing potential threats and finding the
best ways to avoid those threats, crisis management involves dealing with threats after
they have occurred. It is a discipline within the broader context of management consisting
of skills and techniques required to identify, assess, understand, and cope with a serious
situation, especially from the moment it first occurs to the point that recovery procedures
start.
Establishing metrics to define what scenarios constitute a crisis and should consequently
trigger the necessary response mechanisms.
The related terms emergency management and business continuity management focus
respectively on the prompt but short lived "first aid" type of response (e.g. putting the fire
out) and the longer term recovery and restoration phases (e.g. moving operations to
another site). Crisis is also a facet of risk management, although it is probably untrue to
say that Crisis Management represents a failure of Risk Management since it will never
be possible to totally mitigate the chances of catastrophes occurring.
During the crisis management process, it is important to identify types of crises, in that
different crises necessitate the use of different crisis management strategies. The range of
potential crises is enormous, but crises can be clustered.
Natural disaster
Technological crises
Confrontation
Malevolence
Crisis of deception
Contingency planning
Preparing contingency plans in advance, as part of a crisis management plan, is the first
step to ensuring an organization is appropriately prepared for a crisis. Crisis management
teams can rehearse a crisis plan by developing a simulated scenario to use as a drill. The
plan should clearly stipulate that the only people to speak publicly about the crisis are the
designated persons, such as the company spokesperson or crisis team members. The first
hours after a crisis breaks are the most crucial, so working with speed and efficiency is
important, and the plan should indicate how quickly each function should be performed.
When preparing to offer a statement externally as well as internally, information should
be accurate. Providing incorrect or manipulated information has a tendency to backfire
and will greatly exacerbate the situation. The contingency plan should contain
information and guidance that will help decision makers to consider not only the short-
term consequences, but the long-term effects of every decision.
Tylenol (Johnson & Johnson)
In the fall of 1982, a murderer added 65 milligrams of cyanide to some Tylenol capsules
on store shelves, killing seven people, including three in one family. Johnson & Johnson
recalled and destroyed 31 million capsules at a cost of $100 million. The affable CEO,
James Burke, appeared in television ads and at news conferences informing consumers of
the company's actions. Tamper-resistant packaging was rapidly introduced, and Tylenol
sales swiftly bounced back to near pre-crisis levels.[16]
When another bottle of tainted Tylenol was discovered in a store, it took only a matter of
minutes for the manufacturer to issue a nationwide warning that people should not use the
medication in its capsule form.[17]
Odwalla Foods
When Odwalla's apple juice was thought to be the cause of an outbreak of E. coli
infection, the company lost a third of its market value. In October 1996, an outbreak of E.
coli bacteria in Washington state, California, Colorado and British Columbia was traced
to unpasteurized apple juice manufactured by natural juice maker Odwalla Inc. Forty-
nine cases were reported, including the death of a small child. Within 24 hours, Odwalla
conferred with the FDA and Washington state health officials; established a schedule of
daily press briefings; sent out press releases which announced the recall; expressed
remorse, concern and apology, and took responsibility for anyone harmed by their
products; detailed symptoms of E. coli poisoning; and explained what consumers should
do with any affected products. Odwalla then developed - through the help of consultants -
effective thermal processes that would not harm the products' flavors when production
resumed. All of these steps were communicated through close relations with the media
and through full-page newspaper ads.[18]
Mattel
Mattel Inc., the toy maker, has been plagued with more than 28 product recalls, and in
the summer of 2007, amid problems with exports from China, faced two product recalls in
two weeks. The company "did everything it could to get its message out, earning high
marks from consumers and retailers. Though upset by the situation, they were
appreciative of the company's response. At Mattel, just after the 7 a.m. recall
announcement by federal officials, a public relations staff of 16 was set to call reporters
at the 40 biggest media outlets. They told each to check their e-mail for a news release
outlining the recalls, invited them to a teleconference call with executives and scheduled
TV appearances or phone conversations with Mattel's chief executive. The Mattel CEO
Robert Eckert did 14 TV interviews on a Tuesday in August and about 20 calls with
individual reporters. By the week's end, Mattel had responded to more than 300 media
inquiries in the U.S. alone."[19]
Pepsi
The Pepsi Corporation faced a crisis in 1993 which started with claims of syringes being
found in cans of diet Pepsi. Pepsi urged stores not to remove the product from shelves
while it investigated the cans and the situation. This led to an arrest, which Pepsi
made public and then followed with their first video news release, showing the
production process to demonstrate that such tampering was impossible within their
factories. A second video news release displayed the man arrested. A third video news
release showed surveillance from a convenience store where a woman was caught
replicating the tampering incident. The company simultaneously publicly worked with
the FDA during the crisis. The corporation was completely open with the public
throughout, and every employee of Pepsi was kept aware of the details.[citation needed]
This made public communications effective throughout the crisis. After the crisis had
been resolved, the corporation ran a series of special campaigns designed to thank the
public for standing by the corporation, along with coupons for further compensation. This
case served as a design for how to handle other crisis situations.[20][citation needed]
Bhopal
The Bhopal disaster in which poor communication before, during, and after the crisis cost
thousands of lives, illustrates the importance of incorporating cross-cultural
communication in crisis management plans. According to American University’s Trade
Environmental Database Case Studies (1997), local residents were not sure how to react
to warnings of potential threats from the Union Carbide plant. Operating manuals printed
only in English is an extreme example of mismanagement but indicative of systemic
barriers to information diffusion. According to Union Carbide’s own chronology of the
incident (2006), a day after the crisis Union Carbide’s upper management arrived in India
but was unable to assist in the relief efforts because they were placed under house arrest
by the Indian government. Symbolic intervention can be counterproductive; a crisis
management strategy can help upper management make more calculated decisions in
how they should respond to disaster scenarios. The Bhopal incident illustrates the
difficulty in consistently applying management standards to multi-national operations and
the blame shifting that often results from the lack of a clear management plan.[21]
Ford and Firestone
The Ford-Firestone Tire and Rubber Company dispute transpired in August 2000. In
response to claims that their 15-inch Wilderness AT, radial ATX and ATX II tire treads
were separating from the tire core—leading to grisly, spectacular crashes—
Bridgestone/Firestone recalled 6.5 million tires. These tires were mostly used on the Ford
Explorer, the world's top-selling sport utility vehicle (SUV).[22]
The two companies committed three major blunders early on, say crisis experts. First,
they blamed consumers for not inflating their tires properly. Then they blamed each other
for faulty tires and faulty vehicle design. Then they said very little about what they were
doing to solve a problem that had caused more than 100 deaths—until they got called to
Washington to testify before Congress.[23]
Exxon
On March 24, 1989, a tanker belonging to the Exxon Corporation ran aground in the
Prince William Sound in Alaska. The Exxon Valdez spilled millions of gallons of crude
oil into the waters off Valdez, killing thousands of fish, fowl, and sea otters. Hundreds of
miles of coastline were polluted and salmon spawning runs disrupted; numerous
fishermen, especially Native Americans, lost their livelihoods. Exxon, by contrast, did
not react quickly in terms of dealing with the media and the public; the CEO, Lawrence
Rawl, did not become an active part of the public relations effort and actually shunned
public involvement; the company had neither a communication plan nor a
communication team in place to handle the event—in fact, the company did not appoint a
public relations manager to its management team until 1993, 4 years after the incident;
Exxon established its media center in Valdez, a location too small and too remote to
handle the onslaught of media attention; and the company acted defensively in its
response to its publics, even laying blame, at times, on other groups such as the Coast
Guard. These responses also happened within days of the incident.
POSITIONING (TROUT)
In marketing, positioning has come to mean the process by which marketers try to create
an image or identity in the minds of their target market for their product, brand, or
organization.
The original work on Positioning was consumer marketing oriented, and was not as much
focused on the question relative to competitive products as much as it was focused on
cutting through the ambient "noise" and establishing a moment of real contact with the
intended recipient. In the classic example of Avis claiming "No.2, We Try Harder", the
point was to say something so shocking (it was by the standards of the day) that it cleared
space in your brain and made you forget all about who was #1, and not to make some
philosophical point about being "hungry" for business.
The growth of high-tech marketing may have had much to do with the shift in definition
towards competitive positioning. An important component of hi-tech marketing in the age
of the World Wide Web is positioning in major search engines such as Google, Yahoo
and Bing, which can be accomplished through Search Engine Optimization, also known
as SEO. This is an especially important component when attempting to improve
competitive positioning among a younger demographic, which tends to be web oriented
in their shopping and purchasing habits as a result of being highly connected and
involved in social media in general.
Although there are different definitions of Positioning, probably the most common is:
identifying a market niche for a brand, product or service utilizing traditional marketing
placement strategies (i.e. price, promotion, distribution, packaging, and competition).
Positioning is also defined as the way by which marketers create an impression in the
customer's mind.
This differs slightly from the context in which the term was first published in 1969 by
Jack Trout in the paper "'Positioning' is a game people play in today’s me-too market
place" in the publication Industrial Marketing, in which the case is made that the typical
consumer is overwhelmed with unwanted advertising, and has a natural tendency to
discard all information that does not immediately find a comfortable (and empty) slot in
the consumer's mind. The concept was then expanded by Al Ries and Jack Trout into their
ground-breaking first book, "Positioning: The Battle for Your Mind," in which they define Positioning as "an
organized system for finding a window in the mind. It is based on the concept that
communication can only take place at the right time and under the right circumstances"
(p. 19 of 2001 paperback edition)
What most will agree on is that Positioning is something (perception) that happens in the
minds of the target market. It is the aggregate perception the market has of a particular
company, product or service in relation to their perceptions of the competitors in the
same category. It will happen whether or not a company's management is proactive,
reactive or passive about the on-going process of evolving a position. But a company can
positively influence the perceptions through enlightened strategic actions.
Defining the market in which the product or brand will compete (who the relevant buyers
are)
Identifying the attributes (also called dimensions) that define the product 'space'
Position.
(Faheem, 2010) The process is similar for positioning your company's services. Services,
however, don't have the physical attributes of products - that is, we can't feel them or
touch them or show nice product pictures. So you need to ask first your customers and
then yourself, what value do clients get from my services? How are they better off from
doing business with me? Also ask: is there a characteristic that makes my services
different?
Write out the value customers derive and the attributes your services offer to create the
first draft of your positioning. Test it on people who don't really know what you do or
what you sell, watch their facial expressions and listen for their response. When they
want to know more because you've piqued their interest and started a conversation, you'll
know you're on the right track.
1. Functional positions
Solve problems
Provide benefits to customers
Get favorable perception by investors (stock profile) and lenders
2. Symbolic positions
Self-image enhancement
Ego identification
Belongingness and social meaningfulness
Affective fulfillment
3. Experiential positions
Provide sensory stimulation
Provide cognitive stimulation
Brand management in highly competitive and dynamic markets will only be effective if
the brand itself stays close to its roots of uniqueness and core values, focuses on specific
market segments and captures a competitive position in a specific market. The two
brand management tools that can fulfil that role are brand identity and brand
positioning. Brand identity and brand positioning need to be connected within their own
specific functions: brand identity expresses the brand's tangible and intangible unique
characteristics over the long term, whereas brand positioning serves as a competition-
oriented combat tool in the short term. Positioning communicates a specific aspect of
identity at a given time in a given specific market segment within a field of competition.
Hence positioning derives from identity and may change over time and/or differ per
product (Kapferer, 2007:95-102).
Brand positioning is the sum of all activities that position the brand in the mind of the
customer relative to its competition. Positioning is not about creating something new or
different, but to manipulate the mind set and to retie existing connections (Ries & Trout,
2001:2-5). Kotler and Keller define brand positioning as an "act of designing the
company's offering and image to occupy a distinct place in the mind of the target
market." The objective of positioning is to locate the brand into the minds of
stakeholders, customers and prospects in particular. A recognizable and trusted customer-
focused value proposition can be the result of a successful positioning without doing
something to the product itself. It's the rational and persuasive reason to buy the brand in
highly competitive target markets (Kotler & Keller, 2006:310). Therefore it is essential to
understand and to know the position a brand owns in the mind of a customer instead of
defining what the brand stands for. To position a brand efficiently within its market, it is
critical to evaluate the brand objectively and assess how the brand is viewed by
customers and prospects (Ries & Trout, 2001:193-206).
Positioning the brand in highly over-communicated B2B environments can easily fail
when the minds of customers and prospects become confused. Trout suspects that
people filter the information they let into their minds as a self-defence mechanism. For
this reason, management needs to understand five mental elements in the positioning
process in order to position the brand successfully in the minds of customers and
prospects (Trout, 1995:3-8):
• According to social scientists our selective process has at least three rings of
defence: (1) selective exposure, (2) selective attention, (3) selective retention
(Trout, 1995:11-12).
• "Learning is simply remembering what we're interested in" (Trout, 1995:13).
• The mind accepts only (new) information which matches its current mindset,
every mismatch will be filtered out and blocked. (Ries & Trout, 2001:29).
• To avoid confusion emphasize simplicity and focus on the most obvious powerful
attribute and position it well into the mind (Trout, 1995:11-24).
• According to Petty and Cacioppo (as quoted in Trout, 1995:35), "...beliefs are thought
to provide the cognitive foundation of an attitude. In order to change an attitude,
then, it is presumably necessary to modify the information on which that attitude
rests. It is generally necessary, therefore, to change a person's belief, eliminate old
beliefs, or introduce new beliefs."
• Once the market has made up its mind about a brand, it is impossible to change
that mind (Trout, 1995:34).
• In general, the mind is sensitive to prior knowledge or experience (Ries & Trout,
2001:6). In the end it comes back to what the market is already familiar and
comfortable with (Trout, 1995:34-35).
• Variations to the brand, for example line extensions, can distort the mind's image
of the brand; in other words, the mind loses focus. To reinforce the mindset it is
necessary to stay focused and consistent on the key attributes of the brand.
Positioning is in essence a strategy to position the brand against other brands (Trout,
1995:146). Consequently, positioning requires a balance of ideal points-of-parity and
points-of-difference brand associations within the given market and competitive
environment. Establishing brand positioning starts with identifying: (1) the target market,
(2) the nature of competition, (3) the points of parity (POP), and (4) the points of
difference (POD) (Keller, 2006:98-99).
Marketers can use a brand mantra to emphasize the core brand associations that reflect
the "heart and soul" of the brand. The brand mantra is a three- to five-word phrase that
captures the indisputable essence or spirit of the brand positioning. The brand mantra
communicates what the brand is and what it is not. In addition, it can provide guidance
for appropriate product extensions, line extensions, acquisitions and mergers, and
internal communication. A brand mantra also provides a short list of crucial brand
considerations which leverage a consistent brand image and internal branding. Keller
distinguishes three determinant categories in designing a brand mantra: (1) the
emotional modifier, (2) the descriptive modifier, (3) the brand function (Keller,
2006:121-123).
Positioning results from an analytical process based on four questions: (1) a brand for
what, (2) a brand for whom, (3) a brand for when, (4) a brand against whom? Obviously
it indicates the brand's distinctive characteristics, essential points of difference,
attractiveness to the market and "raison d'être". Kapferer's standard approach to achieve
the desired positioning is based on four determinations; (1) definition of target market,
(2) definition of the frame of reference and subjective category, (3) promise or consumer
benefit, (4) reason to believe (Kapferer, 2007:100-102).
According to Ries and Trout the secret of a successful position is to balance a unique
position with an appeal that's not too narrow. Organizations should look for manageable
smaller targets which deliver the appropriate and unique value proposition rather than a
bigger homogeneous highly competitive market. Its success is captured by the
willingness to sacrifice a minor role in the total market in return for leadership in specific
oligopolistic market segments (Ries & Trout, 2001:208).
Treacy and Wiersema argue that leadership comes with a strong focus on delivering
excellent customer value. Based on a three-year study of 40 companies they distinguish
three value disciplines on which organizations should focus; (1) operational excellence,
(2) customer intimacy, or (3) product leadership. See figure 18. The challenge for
organizations is to sustain the chosen value discipline persistently throughout the
organization and to create internal long-term consistency. There are two golden rules for
success: (1) excel in one of the three value disciplines to create a leadership position and
(2) deliver an adequate level of excellence in the two other value disciplines (Treacy &
Wiersema, 1993:3-14).
According to Kapferer there is a direct connection between the brand essence, brand
identity and brand position that enables the brand to change over (long-term) time within
certain degrees of freedom and still remain itself. Brand positioning capitalizes on a
specific part of identity within the playing field which varies by segment, demography,
market dynamics and time. For that Kapferer argues that on a global level a unified
identity can initiate multiple specific market strategies for different markets without
jeopardizing the brand essence and identity (Kapferer, 2007:105).
The Zachman Framework is not a methodology in that it lacks specific methods and
processes for collecting, managing, or using the information that it describes. The
Framework is named after its creator John Zachman, who first developed the concept in
the 1980s at IBM. It has been updated several times since.
The Zachman "Framework" is a taxonomy for organizing architectural artifacts (in other
words, design documents, specifications, and models) that takes into account both whom
the artifact targets (for example, business owner and builder) and what particular issue
(for example, data and functionality) is being addressed.
The term "Zachman Framework" has multiple meanings. It can refer to any of the
frameworks proposed by John Zachman:
The Zachman Framework for Enterprise Architecture, an update of the 1987 original,
extended and renamed in the 1990s.[6]
One of the later versions of the Zachman Framework, offered by Zachman International
as an industry standard. In other sources the Zachman Framework is introduced as a
framework, originated by and named after John Zachman, represented in numerous ways.
This framework is explained as, for example:
The framework is a simple and logical structure for classifying and organizing the
descriptive representations of an enterprise. It is significant to both the management of
the enterprise, and the actors involved in the development of enterprise systems.[13]
While there is no order of priority for the columns of the Framework, the top-down order
of the rows is significant to the alignment of business concepts and the actual physical
enterprise. The level of detail in the Framework is a function of each cell (and not the
rows). When done by IT, the lower levels focus on information technology; however,
it can apply equally to physical material (ball valves, piping, transformers, fuse boxes for
example) and the associated physical processes, roles, locations etc. related to those
items.
In the 1997 paper "Concepts of the Framework for Enterprise Architecture" Zachman
explained that the framework should be referred to as a "Framework for Enterprise
Architecture", and should have been from the beginning. In the early 1980s, however,
according to Zachman, there was "little interest in the idea of Enterprise Reengineering
or Enterprise Modeling and the use of formalisms and models was generally limited to
some aspects of application development within the Information Systems community".[20]
In 2008 Zachman Enterprise introduced the Zachman Framework: The Official Concise
Definition as a new Zachman Framework standard.
Planner's View (Scope) - The first architectural sketch is a "bubble chart" or Venn
diagram, which depicts in gross terms the size, shape, partial relationships, and
basic purpose of the final structure. It corresponds to an executive summary for a
planner or investor who wants an overview or estimate of the scope of the system,
what it would cost, and how it would relate to the general environment in which it
will operate.
Owner's View (Enterprise or Business Model) - Next are the architect's drawings
that depict the final building from the perspective of the owner, who will have to
live with it in the daily routines of business. They correspond to the enterprise
(business) models, which constitute the designs of the business and show the
business entities and processes and how they relate.
Designer's View (Information Systems Model) - The architect's plans are the
translation of the drawings into detail requirements representations from the
designer's perspective. They correspond to the system model designed by a
systems analyst who must determine the data elements, logical process flows, and
functions that represent business entities and processes.
Builder's View (Technology Model) - The contractor must redraw the architect's
plans to represent the builder's perspective, with sufficient detail to understand the
constraints of tools, technology, and materials. The builder's plans correspond to
the technology models, which must adapt the information systems model to the
details of the programming languages, input/output (I/O) devices, or other
required supporting technology.
Subcontractor View (Detailed Specifications) - Subcontractors work from shop
plans that specify the details of parts or subsections. These correspond to the
detailed specifications that are given to programmers who code individual
modules without being concerned with the overall context or structure of the
system. Alternatively, they could represent the detailed requirements for
various commercial-off-the-shelf (COTS), government-off-the-shelf (GOTS), or
modular systems software components being procured and implemented rather
than built.
Actual System View or The Functioning Enterprise
Focus or Columns
In Zachman’s opinion, the single factor that makes his framework unique is that each
element on either axis of the matrix is explicitly distinguishable from all the other
elements on that axis. The representations in each cell of the matrix are not merely
successive levels of increasing detail, but actually are different representations —
different in context, meaning, motivation, and use. Because each of the elements on
either axis is explicitly different from the others, it is possible to define precisely what
belongs in each cell.
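The grid of explicitly distinct cells can be sketched as a small artifact registry. The six interrogative column names used here (What/How/Where/Who/When/Why) are the commonly cited ones and are an assumption, since the text above does not enumerate them:

```python
# Perspectives (rows) follow the views described above; the six focus
# columns are the commonly cited interrogatives (an assumption here).
PERSPECTIVES = ["Planner", "Owner", "Designer", "Builder",
                "Subcontractor", "Functioning Enterprise"]
FOCI = ["What (Data)", "How (Function)", "Where (Network)",
        "Who (People)", "When (Time)", "Why (Motivation)"]

class ZachmanGrid:
    """Minimal artifact registry: each cell holds its own distinct
    representations, not 'more detail' of the cell above it."""

    def __init__(self):
        self.cells = {(p, f): [] for p in PERSPECTIVES for f in FOCI}

    def add_artifact(self, perspective, focus, artifact):
        if (perspective, focus) not in self.cells:
            raise KeyError(f"unknown cell: {perspective!r} / {focus!r}")
        self.cells[(perspective, focus)].append(artifact)

# Hypothetical usage: file an owner-level data artifact in its cell.
grid = ZachmanGrid()
grid.add_artifact("Owner", "What (Data)", "business entity model")
```

Keying artifacts by (perspective, focus) enforces the point Zachman makes: because each axis element is explicitly distinct, every artifact has exactly one cell it belongs in.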
We believe that the focus of this type of training should be on the situational use of
leadership styles, and the flexing of those styles to varying circumstances at work.
For example, what is the most effective style to use when placed in a certain
situation? This is one of the guiding principles behind the various models of
leadership styles.
This last point is an important one. Research has demonstrated that the leader's
ability to adapt his or her leadership style to the situation at hand is important to their
organization's success. The best leaders are skilled at several styles, and instinctively
understand when to use them at work.
Choosing a Leadership Style
In the following sections we are going to explain the six different leadership styles
that were identified by Daniel Goleman in connection with his theory of emotional
intelligence. We've chosen Goleman's model of leadership style because it's both
simple and all-encompassing.
In his writings, Goleman described a total of six different leadership styles. Much of
this information already appears in our article on situational leadership. If you're
interested in the effective application of different leadership styles, then you might
want to look at that article too because it also speaks to the theory put forth by Ken
Blanchard and Paul Hersey.
The examples of leadership styles appearing below contain a brief description of the
leader's characteristics, as well as an example of when the styles are most effective.
Coaching Leaders
In the Coaching Leadership Style the leader focuses on helping others in their
personal development, and in their job-related activities. The coaching leader aids
others to get up to speed by working closely with them to make sure they have the
knowledge and tools to be successful. This situational leadership style works best
when the employee already understands their weaknesses, and is receptive to
improvement suggestions or ideas.
Pacesetting Leaders
When employees are self-motivated and highly skilled, the Pacesetting Leadership
Style is extremely effective. The pacesetting leader sets very high performance
standards for themselves and the group. They exemplify the behaviors they are
seeking from other members of the group. This leadership style needs to be used
sparingly since workers can often "burn out" due to the demanding pace of this style.
Democratic Leaders
The Democratic Leadership Style gives members of the work group a vote, or a say,
in nearly every decision the team makes. When used effectively, the democratic
leader builds flexibility and responsibility. They can help identify new ways to do
things with fresh ideas. Be careful with this style, however, because the level of
involvement required by this approach, as well as the decision-making process, can
be very time consuming.
Affiliative Leaders
The Affiliative Leadership Style is most effective in situations where morale is low
or teambuilding is needed. This leader is easily recognized by their theme of
"employee first." Employees can expect much praise from this style; unfortunately,
poor performance may also go without correction.
Authoritative Leaders
The Authoritative Leadership Style mobilizes people toward a shared vision. The
authoritative leader sets the destination but gives people the freedom to choose their
own way of reaching it, which makes this style most effective when a team or
organization needs a new direction.
Coercive Leaders
The Coercive Leadership Style should be used with caution because it's based on the
concept of "command and control," which usually causes a decrease in motivation
among those interacting with this type of manager. The coercive leader is most
effective in situations where the company or group requires a complete turnaround. It
is also effective during disasters, or when dealing with underperforming employees -
usually as a last resort.
The formula for a leader's success is really quite simple: the more leadership styles
you are able to master, the better a leader you will become. Certainly the ability to
switch between styles, as situations warrant, will produce superior results and a
healthier workplace climate.
In fact, Goleman's research revealed that leaders who were able to master four or
more leadership styles - especially the democratic, authoritative, affiliative and
coaching styles - often achieved superior performance from their followers as well as
a healthy climate in which to work.
It's not easy to master multiple leadership styles. In order to master a new way of
leading others, we may need to unlearn old habits. This is especially important for
leaders who fall back on the pacesetting and coercive leadership styles, which have a
negative effect on the work environment.
Learning a new leadership style therefore takes practice and perseverance. The more
often the new style or behavior is repeated, the stronger the link between the situation
at hand and the desired reaction.
You can work with a coach, a mentor, or keep your own notes on how you reacted
under certain conditions. Learning a new skill requires time, patience, feedback, and
even rewards to stay motivated. If you're going to attempt to learn a different
leadership style make sure your approach contains each of these elements.
Daniel Goleman, Richard Boyatzis and Annie McKee, in Primal Leadership, describe
six styles of leading that have different effects on the emotions of the target followers.
These are styles, not types. Any leader can use any style, and a good mix that is
customised to the situation is generally the most effective approach.
The Visionary Leader moves people towards a shared vision, telling them where to go
but not how to get there - thus motivating them to struggle forwards. They openly
share information, hence giving knowledge power to others.
They can fail when trying to motivate more experienced experts or peers.
The Affiliative Leader creates people connections and thus harmony within the
organization. It is a very collaborative style which focuses on emotional needs over
work needs.
It is best used for healing rifts and getting through stressful situations.
The Democratic Leader builds commitment and surfaces new ideas through
participation, listening to both good and bad news.
It is best used to gain buy-in, or when simple inputs are needed (when you yourself
are uncertain).
When done badly, it looks like lots of listening but very little effective action.
The Pace-setting Leader builds challenge and exciting goals for people, expecting
excellence and often exemplifying it themselves. They identify poor performers and
demand more of them. If necessary, they will roll up their sleeves and rescue the
situation themselves.
They tend to be low on guidance, expecting people to know what to do. They get
short term results but over the long term this style can lead to exhaustion and decline.
It often has a very negative effect on climate (because it is often poorly done).
The Commanding Leader soothes fears and gives clear directions by his or her
powerful stance, commanding and expecting full compliance (agreement is not
needed). They need emotional self-control for success and can seem cold and distant.
This approach is best in times of crisis when you need unquestioned rapid action and
with problem employees who do not respond to other methods.
JUST IN TIME
An inventory strategy companies employ to increase efficiency and decrease waste
by receiving goods only as they are needed in the production process, thereby reducing
inventory costs.
This method requires that producers are able to accurately forecast demand.
A good example would be a car manufacturer that operates with very low inventory
levels, relying on its supply chain to deliver the parts it needs to build cars. The
parts needed to manufacture the cars arrive neither before nor after they are needed;
rather, they arrive just as they are needed.
This inventory supply system represents a shift away from the older "just in case"
strategy where producers carried large inventories in case higher demand had to be
met.
Prompt notice of stock depletion, so that personnel can order new stock, is critical to
the inventory reduction at the center of JIT. This saves warehouse space and costs.
However, the complete mechanism for making this work is often misunderstood.
For instance, its effective application cannot be independent of other key components
of a lean manufacturing system or it can "...end up with the opposite of the desired
result."[1] In recent years manufacturers have continued to hone forecasting
methods (such as applying a trailing 13-week average as a better predictor for JIT
planning),[2] though some research demonstrates that basing JIT on the presumption
of stability is inherently flawed.
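The trailing-average forecasting mentioned above can be sketched in a few lines. This is a minimal illustration of the idea only; the function name and the demand figures are invented for the example, not taken from the cited research:

```python
def trailing_average_forecast(weekly_demand, window=13):
    """Forecast next week's demand as the mean of the last `window` weeks."""
    if len(weekly_demand) < window:
        raise ValueError(f"need at least {window} weeks of history")
    recent = weekly_demand[-window:]
    return sum(recent) / window

# Invented example: 15 weeks of observed demand for one part.
history = [100, 98, 105, 110, 95, 102, 99, 101, 104, 97, 103, 100, 106, 108, 96]
forecast = trailing_average_forecast(history)
print(round(forecast, 1))
```

A smoothing window this wide dampens week-to-week noise, which is exactly why it can fail under the instability the research above points to: a genuine shift in demand takes weeks to show up in the forecast.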
The philosophy of JIT is simple: inventory is waste. JIT inventory systems expose hidden
causes of inventory keeping, and are therefore not a simple solution for a company to
adopt. The company must follow an array of new methods to manage the
consequences of the change. The ideas in this way of working come from many
different disciplines including statistics, industrial engineering, production
management, and behavioral science. The JIT inventory philosophy defines how
inventory is viewed and how it relates to management.
Inventory is seen as incurring costs, or waste, instead of adding and storing value,
contrary to traditional accounting. This does not mean to say JIT is implemented
without an awareness that removing inventory exposes pre-existing manufacturing
issues. This way of working encourages businesses to eliminate inventory that does
not compensate for manufacturing process issues, and to constantly improve those
processes to require less inventory. Secondly, allowing any stock habituates
management to stock keeping. Management may be tempted to keep stock to hide
production problems. These problems include backups at work centers, machine
reliability, process variability, lack of flexibility of employees and equipment, and
inadequate capacity.
In short, the just-in-time inventory system focuses on having “the right material, at the
right time, at the right place, and in the exact amount” (Ryan Grabosky), without the
safety net of inventory. The JIT system has broad implications for implementers.
During the birth of JIT, multiple daily deliveries were often made by bicycle.
Increased scale has required a move to vans and lorries (trucks). Cusumano (1994)
highlighted the potential and actual problems this causes with regard to gridlock and
burning of fossil fuels. This violates three JIT waste guidelines.
Benefits
Reduced setup time. Cutting setup time allows the company to reduce or eliminate
inventory for "changeover" time. The tool used here is SMED (single-minute
exchange of dies).
The flow of goods from warehouse to shelves improves. Small or individual piece
lot sizes reduce lot delay inventories, which simplifies inventory flow and its
management.
Employees with multiple skills are used more efficiently. Having employees
trained to work on different parts of the process allows companies to move
workers where they are needed.
Production scheduling and work hour consistency synchronized with demand. If
there is no demand for a product at the time, it is not made. This saves the
company money, either by not having to pay workers overtime or by having them
focus on other work or participate in training.
Increased emphasis on supplier relationships. A company without inventory does
not want a supply system problem that creates a part shortage. This makes
supplier relationships extremely important.
Supplies come in at regular intervals throughout the production day. Supply is
synchronized with production demand and the optimal amount of inventory is on
hand at any time. When parts move directly from the truck to the point of
assembly, the need for storage facilities is reduced.
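The "reduced setup time" benefit above can be made quantitative with the classic economic order quantity (EOQ) formula from standard inventory theory (not from this text): the cost-minimizing batch size is sqrt(2DS/H) for annual demand D, setup cost S, and per-unit holding cost H. A small sketch with invented figures:

```python
import math

def eoq(annual_demand, setup_cost, holding_cost):
    """Classic economic order quantity: the batch size that minimizes
    total setup cost plus inventory holding cost."""
    return math.sqrt(2 * annual_demand * setup_cost / holding_cost)

# Invented figures: 10,000 units/year demand, $2 per unit per year holding cost.
# As SMED drives the setup cost down, the optimal batch size shrinks with it.
for setup_cost in [400.0, 100.0, 1.0]:
    print(round(eoq(10_000, setup_cost, 2.0)))
```

Because batch size scales with the square root of setup cost, cutting setup cost by a factor of four only halves the batch; driving batches toward the JIT ideal of one therefore requires near-elimination of setup time, which is the point of SMED.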
• Continuous improvement.
o Attacking fundamental problems - anything that does not add value to the
product.
o Devising systems to identify problems.
o Striving for simplicity - simpler systems may be easier to understand,
easier to manage and less likely to go wrong.
o A product-oriented layout - reduces time spent moving materials
and parts.
o Quality control at source - each worker is responsible for the quality of
their own output.
o Poka-yoke - `foolproof' tools, methods, jigs etc. that prevent mistakes.
o Preventative maintenance, Total productive maintenance - ensuring
machinery and equipment functions perfectly when it is required, and
continually improving it.
• Eliminating waste. There are seven types of waste:
o waste from overproduction.
o waste of waiting time.
o transportation waste.
o processing waste.
o inventory waste.
o waste of motion.
o waste from product defects.
• Good housekeeping - workplace cleanliness and organisation.
• Set-up time reduction - increases flexibility and allows smaller batches. Ideal
batch size is 1 item.
• Multi-process handling - a multi-skilled workforce has greater productivity,
flexibility and job satisfaction.
• Levelled / mixed production - to smooth the flow of products through the factory.
• Kanbans - simple tools to `pull' products and components through the process.
• Jidoka (Autonomation) - providing machines with the autonomous capability to
use judgement, so workers can do more useful things than standing watching
them work.
• Andon (trouble lights) - to signal problems to initiate corrective action.
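The kanban "pull" mechanism listed above can be sketched as a toy simulation. All names and quantities here are invented for illustration; real kanban systems track cards, bins, and suppliers in far more detail:

```python
class Workstation:
    """Upstream station that replenishes a bin only when a kanban signal arrives."""
    def __init__(self, bin_size):
        self.bin_size = bin_size
        self.produced = 0

    def replenish(self):
        # Production is triggered by the kanban signal, never by forecast alone.
        self.produced += self.bin_size
        return self.bin_size

def run_line(demand_per_step, steps, bin_size):
    """Consume parts downstream; send a kanban upstream each time the bin empties."""
    upstream = Workstation(bin_size)
    stock = upstream.replenish()      # start with one full bin
    kanban_signals = 0
    for _ in range(steps):
        stock -= demand_per_step      # downstream consumption pulls parts
        if stock <= 0:
            stock += upstream.replenish()
            kanban_signals += 1
    return upstream.produced, kanban_signals

# Invented example: demand of 2 parts per step for 10 steps, bins of 5 parts.
produced, signals = run_line(demand_per_step=2, steps=10, bin_size=5)
print(produced, signals)
```

Note that production (25 parts) tracks consumption (20 parts) with at most one bin of slack, rather than building ahead to a forecast - that is the pull principle.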
The Just in Time (JIT) method creates the movement of material into a specific location
at the required time, i.e. just before the material is needed in the manufacturing
process. The technique works when each operation is closely synchronized with the
subsequent ones to make that operation possible. JIT is a method of inventory control
that brings material into the production process, warehouse or to the customer just in
time to be used, which reduces the need to store excessive levels of material in the
warehouse.
Just in time is a ‘pull’ system of production, so actual orders provide a signal for
when a product should be manufactured. Demand-pull enables a firm to produce only
what is required, in the correct quantity and at the correct time.
This means that stock levels of raw materials, components, work in progress and
finished goods can be kept to a minimum. This requires a carefully planned
scheduling and flow of resources through the production process. Modern
manufacturing firms use sophisticated production scheduling software to plan
production for each period of time, which includes ordering the correct stock.
Information is exchanged with suppliers and customers through EDI (Electronic
Data Interchange) to help ensure that every detail is correct.
Supplies are delivered right to the production line only when they are needed. For
example, a car manufacturing plant might receive exactly the right number and type
of tyres for one day’s production, and the supplier would be expected to deliver them
to the correct loading bay on the production line within a very narrow time slot.
Advantages of JIT
• Lower stock holding means a reduction in storage space which saves rent and
insurance costs
• As stock is only obtained when it is needed, less working capital is tied up in
stock
• There is less likelihood of stock perishing, becoming obsolete or out of date
• Avoids the build-up of unsold finished product that can occur with sudden
changes in demand
• Less time is spent on checking and re-working the product of others as the
emphasis is on getting the work right first time
Disadvantages of JIT
• There is little room for mistakes as minimal stock is kept for re-working faulty
product
• Production is very reliant on suppliers and if stock is not delivered on time, the
whole production schedule can be delayed
• There is no spare finished product available to meet unexpected orders, because
all product is made to meet actual orders – however, JIT is a very responsive
method of production
Butler and Waldroop identified four dimensions of relational work:
1. Influence.
2. Interpersonal facilitation.
3. Relational creativity.
4. Team leadership.
Many of us are strong in at least one of these areas – but we may be strong in several
areas, or in none of them.
It doesn't matter which area is strongest. What matters is that if we, or our team
members, have a strength in one area, we should try to match the work to that
strength.
Butler and Waldroop argue that a good match will make both the manager and the
team happier, because everyone will be using their natural strengths. This should also
improve the team's performance and productivity.
1. Influence
People who are strong in this dimension enjoy being able to influence others. They're
great at negotiating and persuading, and they love having knowledge and ideas that
they can share. Influencers are also good at creating networks: they excel at making
strategic friendships and connections.
Influencers don't always have to be in a sales role to use this strength effectively.
Perhaps a team member always seems able to "lift" tired colleagues. Or maybe a
manager can be relied on to persuade clients to give his team a little more time on a
deadline. Both are effective influencers.
2. Interpersonal Facilitation
Team members who are strong in this area are often "behind the scenes" workers.
They're good at sensing people's emotions and motivations. They're also skilled at
helping others cope with emotional issues and conflict.
For instance, if you suspect that someone you're dealing with has a "hidden agenda"
during group meetings, then you may need to ask for help from someone on your
team who is strong in interpersonal facilitation. A person with strong intuition will
likely have some insight into what is motivating this other team member.
3. Relational Creativity
People who are strong in this dimension are masters at using pictures and words to
create emotion, build relationships, or motivate others to act.
4. Team Leadership
Team members who are strong in team leadership succeed through their interactions
with others.
This area also might sound like the influencing dimension, but there's an important
difference. Influencers thrive on the end result and the role they play in closing a deal.
But team leaders thrive on working through other people to accomplish goals, and
they're more interested in the people and processes necessary to reach the goal.
Tip:
You can also apply the Four Dimensions of Relational Work to yourself when
thinking about your own career development. For example, if you're strong in
interpersonal facilitation, you may decide to pursue a career that uses that strength.
It's generally easy to evaluate technical skills when you're recruiting or reviewing a
team member's work history. However, identifying someone's interpersonal skills and
strengths takes more effort.
Use the following tips to help you to assess your current team members, or to ensure
that you're hiring the right person for a position.
• Listen carefully - For example, when you ask a job candidate to explain the best
moment at her last job, listen closely. If she talks about when she influenced a key
decision, she might be strong in the influence dimension. Remember, influencers
love to impact and shape decisions, so also try to find out if she's ever served on a
committee or executive board.
• Structure your conversation around a specific skill - For instance, if you need
to find a new team member who is strong in interpersonal facilitation, then
structure your interview or performance appraisal around that skill. Ask the
candidate to describe how he would resolve a conflict between two other
colleagues. You could even try role playing.
• Ask when the person experiences "flow" - Finding someone skilled at relational
creativity can be difficult. This is because someone may be strong in this area, but
has never had a job, project, or task that used this strength. Ask your team
member or candidate to describe a time when she experienced "flow." If her task at
that time was creative, she might be strong in relational creativity.
• Notice how the person makes you feel - It's often easy to identify a person
skilled in team leadership, even if he has never held a management position. Pay
attention to how you feel when talking to this person, and how that person
interacts with other members of his team. If he gets people excited and motivated
about their work, or about the opportunities that the organization faces, then he
might excel at team leadership.
As well as using the four dimensions to build your team, and assign tasks and projects
to the most appropriate people, you can also use the model to reward your
team effectively. Relational work is often ignored or undervalued. But these
interpersonal traits are what make the organization function effectively.
It's important to compensate your team members for these skills, because the more
they're rewarded, the more they'll use those skills.
Start by educating your team members about their own dimension. You could do this
in informal, one-on-one conversations or during their performance appraisals. Try
to connect some type of compensation to their skill, and make sure they understand
that they'll be rewarded for using their strengths.
You can also reward team members by giving them work that uses their strength.
This may require you to create a new role, or mean simply reshaping the role that a
person has now. It doesn't have to be a huge change; adding tasks or projects that use
people's strengths can dramatically influence how satisfied they are with their jobs –
and with the organization.
Tip 1:
To help ensure balance, try to structure your teams so that all four dimensions are
represented by someone. (Of course, this may not be a suitable approach for all teams - so
use your best judgment.)
Tip 2:
When you look for people to fill each dimension, don't make decisions based on job
titles, because team members may not currently be in roles or positions that use their
strengths.
Key Points
The Four Dimensions of Relational Work can help you understand team members'
interpersonal strengths, as well as your own strengths. The four dimensions are
influence, interpersonal facilitation, relational creativity, and team leadership.
Matching people's strongest dimensions with the work they do benefits everyone.
When you and your team are using your strengths, you're all more satisfied and
excited about what you're doing – and your organization benefits from improved
productivity and engagement.