BUSINESS RESEARCH YEARBOOK
Global Business Perspectives
Volume XIII
2006

MARJORIE G. ADAMS
ABBASS ALKHAFAJI
EDITORS

A Publication of the International
Academy of Business Disciplines
Editors
Marjorie G. Adams
Morgan State University
Abbass F. Alkhafaji
Slippery Rock University
IABD
Copyright 2006
by the
International Academy of Business Disciplines
International Graphics
10710 Tucker Street
Beltsville, MD 20705
(301) 595-5999 office
ISBN 1-889754-10-2
PREFACE
This volume contains an extensive summary of most of the papers presented during
the Thirteenth Annual Conference of the International Academy of Business Disciplines held
in San Diego, California April 6 – 9, 2006. This volume is part of the continuing efforts of
IABD to make available current research findings and other contributions to practitioners and
academics.
The IABD has evolved into a strong global organization during the past eighteen
years, thanks to immense support provided by many dedicated individuals and institutions.
The objectives and far-reaching visions of the IABD have created interest and excitement
among people from all over the world.
The Academy is indebted to all those responsible for this year’s program, particularly
Ahmad Tootoonchi, Frostburg State University, who served as Program Chair, and to those
who served as active track chairs. Those individuals did an excellent job of coordinating the
review process and organizing the sessions. A special thanks also goes to the IABD officers
and Board of Directors for their continuing dedication to this conference.
Our appreciation also extends to the authors of the papers presented at the conference.
The high quality of papers submitted for presentation attests to the Academy’s growing
reputation, and provides the means for publishing this current volume.
The editors would like to extend their personal thanks to Dr. Otis Thomas, Dean of
the School of Business and Management, Morgan State University, for his support.
TABLE OF CONTENTS
Preface
Table of Contents
CHAPTER 1: INTRODUCTION....................................................................................... 1
An Analysis of Investment Performance And Malmquist Productivity Index For Life Insurers
In Taiwan
Shu-Hua Hsiao, Leader University
Yi-Feng Yang, Leader University
Grant G.L. Yang, Leader University ........................................................................... 27
Connecting ABI Acceptance Measures to Task Complexity, Ease of Use, User Involvement
and Training
Aretha Y. Hill, Florida A&M University
Ira W. Bates, Florida A&M University ....................................................................... 32
The Effects of Ambient Scent on Perceived Time: Implications for Retail and Gaming
John E. Gault, West Chester University of Pennsylvania............................................ 45
The Relationship Between Age, Education, Gender, Marital Status and Ethics
Ziad Swaidan, University of Houston-Victoria
Peggy A. Cloninger, University of Houston-Victoria
Mihai Nica, Jackson State University.......................................................................... 52
Pick a Flick: Moviegoers’ Use and Trust of Advertising and Uncontrolled Sources
Thomas Kim Hixson, University of Wisconsin-Whitewater....................................... 70
University Brand Identity: A Content Analysis of Four-Year U.S. Higher Education Web Site
Home Pages
Andy Lynch, American University Of Sharjah ........................................................... 82
Brand Knowledge, Brand Attitude, Purchases & Amount Willing To Pay For Self & Others:
Third-Person Perception & The Brand
Thomas J. Prinsen, The University Of South Dakota.................................................. 88
Decision Support Systems: An Investigation of Characteristics
Roger L. Hayen, Central Michigan University
Monica C. Holmes, Central Michigan University .....................................................117
Tourism Market Potential of Small Resource-Based Economies: The Case of Fiji Islands
Erdener Kaynak, The Pennsylvania State University at Harrisburg
Raghuvar D. Pathak, The University Of The South Pacific .....................................123
I’m With the Broadband: The Economic Impact of Broadband Internet Access on the Music
Industry
Matthew A. Gilbert, Clear Pixel Communications....................................................160
U.S. Attempts to Slow Global Expansion of Internet Retailing Meets Legal Resistance
Theodore R. Bolema, Central Michigan University ..................................................166
Segmenting Cell Phone Users by Gender, Perceptions, and Attitude toward Internet and
Wireless Promotions
Alex Wang, University of Connecticut
Adams Acar, University of Connecticut....................................................................172
E-Business Based SME Growth: Virtual Partnerships and Knowledge Equivalency
Zoe Dann, Liverpool John Moores University
Paul Otterson, Liverpool John Moores University
Keith Porter, Liverpool John Moores University ......................................................178
CHAPTER 7: ECONOMICS...........................................................................................185
The Valuation Abilities of the Price-Earnings-To-Growth Ratio and Its Association with
Executive Compensation
Essam Elshafie, University Of Texas At Brownsville
Pervaiz Alam, Kent State University .........................................................................207
Genetic Engineering, Biotechnology, and Indian Agriculture, IPR Issues in Focus
Prabir Bagchi, Sims, Ghaziabad ................................................................................246
Response of Building Costs to Unexpected Changes in Real Economic Activity and Risk
Bradley T. Ewing, Texas Tech University
Daan Liang, Texas Tech University
Mark A. Thompson, University Of Arkansas-Little Rock ........................................251
The Perils of Strategic Alliances: The Case of Performance Dimensions International, LLC
Robert A. Page, Jr., Southern Connecticut State University
Edward W. Tamson, Performance Dimensions International LLC
Edward H. Hernandez, California State University, Stanislaus
Alfred R. Petrosky, California State University, Stanislaus ......................................273
Intelligent Agents-Belief, Desire, and Intent Framework Using Lora: A Program Independent
Approach
Fred Mills, Bowie State University
Jagannathan V. Iyengar, North Carolina Central University.....................................280
The Propensity for Military Service of the American Youth: An Application of Generalized
Exchange Theory
Ulysses J. Brown, III, Savannah State University
Dharam S. Rana, Jackson State University................................................................286
CHAPTER 10: FINANCE .................................................................................................302
The Short Term and Long Term Impact of the Stock Recommendations Published in
Barron’s
Francis Cai, William Paterson University
Wenhui Li, Baruch College, CUNY ............................................................303
Predicting Internet Use: Technology Acceptance Facilitating Group Projects in a Web Design
Course
Azad I. Ali, Indiana University of Pennsylvania .......................................................325
Predicting Internet Use with the Technology Acceptance Model and the Theory of Planned
Behavior
Marcelline Fusilier, Northwestern State University of Louisiana
Subhash Durlabhji, Northwestern State University of Louisiana..............................330
Government Regulation of the Oath of Hippocrates: How Far Can the Government Go?
Roy Whitehead, University of Central Arkansas
Kenneth Griffin, University of Central Arkansas
Phillip Balsmeier, Nicholls State University .............................................................358
What Are The Benefits, Challenges, And Motivational Issues Of Academic Teams?
Blaise J. Bergiel, Nicholls State University
Erich B. Bergiel, Mississippi State University ..........................................................362
Navigating Illness by Navigating the Net: Seeking Information about Sexually Transmitted
Infections
Kelly A. Dorgan, East Tennessee State University
Linda E. Bambino, East Tennessee State University.................................................380
Communication as Cause and Cure: Sources of Anxiety for International Medical Graduates
in Rural Appalachia
Kelly A. Dorgan, East Tennessee State University
Linda E. Bambino, East Tennessee State University
Michael Floyd, East Tennessee State University.......................................................385
Integrated Social Marketing and Visual Messages of Breast Cancer Information to African
American Women
S. Diane Mc Farland, Ph.D., Buffalo State College, SUNY.......................................390
Implications of the Fairpay Overtime Initiative to Human Resource Management
C. W. Von Bergen, Southeastern Oklahoma State University
Patricia W. Pool, Southeastern Oklahoma State University
Kitty Campbell, Southeastern Oklahoma State University........................................420
Changing the Media in the Middle East: Lebanon Improves Journalism and Mass
Communication Education
Ali Kanso, University of Texas at San Antonio ........................................................442
The Protean Career Module: Applied Management and Finance Exercises for Aspiring
Professionals
Angela J. Murphy, Florida A & M University...........................................................447
CHAPTER 15: INTERNATIONAL BUSINESS AND MARKETING.........................482
Domain Knowledge Specificity and Joint New Product Development: Mediating Effect of
Relational Capital
Pi-Chuan Sun, Tatung University
Yung Sung Wen, Chiang Kai-Shek International Airport Office..............................494
Student Leadership at the Local, National and Global Level: Engaging the Public and Making
a Difference
J. Gregory Payne, Emerson College
David Twomey, Emerson College.............................................................................512
The Effect of Gender on Apology Strategies
Astrid M. Beckers, Jackson State University
Mohammad Z. Bsat, Jackson State University ..........................................................535
Does Charity Truly Begin at Home?
Louis K. Falk, University of Texas at Brownsville
Hy Sockel, Youngstown State University
John A. Cook, University of Texas at Brownsville ...................................................594
Newspaper Endorsements and Election Result Headlines in the 2004 U.S. Presidential
Election
John Mark King, East Tennessee State University
Adriane Dishner Flanary, East Tennessee State University ......................................607
The Effect of Ambiguous Understanding of Problem and Instructions on Service Quality and
Productivity
Palaniappan Thiagarajan, Jackson State University
Yegammai Thiagarajan Esq.
Sheila C. Porterfield, Jackson State University .........................................................613
Karma-Yoga and Its Implications for Management Thought and Institutional Reform
Rashmi Prasad, University of Alaska Anchorage
Irfan Ahmed, Sam Houston State University ......................................................................635
Good Game, Good Game: Applying Servqual to/and Assessing an NFL Concession’s Service
Quality
Brian V. Larson, Widener University
Doug Seymour, Widener University..........................................................................641
Convergence In Mississippi: A Spatial Approach.
Mihai Nica, Jackson State University
Ziad Swaidan, University of Houston Victoria..........................................................647
Cultural Adaptation of Austrian and U.S.-American Websites: A Comparison Using
Hofstede’s Cultural Patterns
Wesley McMahon, California State University, Chico
Dominik Maurer, California State University, Chico ...............................................714
CHAPTER 1
INTRODUCTION
THE BUSINESS RESEARCH YEARBOOK
PUBLICATION PERSPECTIVES
The International Academy of Business Disciplines (IABD) is hosting its 18th Annual
Conference in 2006. We face many challenges in the new millennium, among them the rapid
evolution of technology, the globalization of the marketplace, green marketing, and the
increasing diversity of the global workforce. We as teachers and scholars should strive hard
to meet and overcome these challenges. We hope that IABD will continue to provide an
interactive forum to identify, discuss, and evaluate alternative solutions to many of these
challenges.
The selection process leading to publication is rigorous. The overall acceptance rate for
submissions to BRY is about 60%. All papers accepted for presentation at the IABD annual
conference, with the exception of special invited workshops, go through peer review using a
double-blind procedure typical of all the better academic organizations. Based upon the
recommendations of the reviewers, the track chair may accept a paper, reject it, or request
revisions. Once a paper is accepted for presentation, it is eligible to be considered for
publication in the Business Research Yearbook.
The Editors
CHAPTER 2
ACCOUNTING THEORY
THE 28% CAPITAL GAINS TAX - AN ANTIQUE?
ABSTRACT
Transactions involving collectibles are proliferating because of new venues for buying
and selling, such as the Internet. The tax rules categorize collectors as hobbyists, investors or
dealers. The criteria for the above categories do not seem to take into account the ease with
which collectors can add to or dispose of items from a collection today. The long-term capital
gains rate of 28 percent on collectibles and limitation on losses and related expenses are not
favorable to collectors. In many instances, the collector is better off attempting to engage in
enough activity to be a dealer with a profit motive, rather than being taxed as an investor on
profitable dispositions.
I. COLLECTIBLES
The ease of market access through internet auction sites, the treasure-from-trash
excitement aired weekly on Antiques Roadshow, maturing baby boomers needing investment
income: these and other factors combine to make the collectibles market, once the province
of Larsonesque stamp enthusiasts, DAR ladies, and rich Sotheby’s patrons, a booming one.
In 2003, for example, average Americans spent over two billion dollars buying and selling
merchandise on eBay alone, with a large percentage of those items going to hobbyists and collectors.
An example of the collectors’ boom was chronicled by the Wall Street Journal
(January 23, 2004), which outlined the art market’s recent 15-year rise using a cross-section
of 50 artists of varying styles and periods whose works had traded frequently in those years.
The artist whose works had appreciated the most was the photographer Cindy Sherman, with a
372% increase over a 10-year period. In comparison, the Dow Jones Industrial Average
(DJIA) grew 179% over the same period, according to the article, and the average gain for all
artists’ works in the survey was 102%. Although the increase in appreciation for paintings
was not equal to that of the Dow, we must consider that the rise in the Dow over that period
was unprecedented and unlikely to be repeated. Although the Jobs and Growth Tax Relief
Reconciliation Act of 2003 reduced the rate on long-term capital gains, it left collectibles as
28% property. The gains of the last 15 years in the art market, one small sector of the
collectors’ field, make collecting a worthwhile area to consider for anyone who derives
pleasure from owning and living with material possessions.
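The rate differential just described can be made concrete with a short sketch. The purchase price and sale price are hypothetical (the 372% appreciation figure is borrowed from the Sherman example above), and the 15% regular long-term rate is the reduced rate for most brackets after the 2003 Act; this is an illustration, not tax advice.

```python
# Compare federal tax on a long-term gain taxed as a collectible (28%
# maximum rate) versus as regular long-term capital-gain property (15%).
COLLECTIBLES_RATE = 0.28
REGULAR_LTCG_RATE = 0.15

def tax_on_gain(basis: float, proceeds: float, rate: float) -> float:
    """Federal tax on a realized long-term gain at a flat rate (no loss benefit)."""
    return max(proceeds - basis, 0.0) * rate

# Hypothetical: an artwork bought for $10,000 that appreciated 372%
# and sold for $47,200, i.e. a $37,200 gain.
basis, proceeds = 10_000.0, 47_200.0
as_collectible = tax_on_gain(basis, proceeds, COLLECTIBLES_RATE)  # $10,416
as_regular = tax_on_gain(basis, proceeds, REGULAR_LTCG_RATE)      # $5,580
print(f"extra tax as a collectible: ${as_collectible - as_regular:,.0f}")
```

The spread, nearly $5,000 on this single sale, is the "antique" penalty the paper's title questions.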
Collectible objects fall into one of three categories, depending upon their economic
performance during a downturn. The most collectible items never, or infrequently, appear on
the market. Such items are rare and unique and their current owners have little desire to part
with them. The market for these items (old master paintings, 17th- and 18th-century
furniture, large, flawless diamonds) will decline little in a financial downturn and will
inevitably appreciate to higher levels than before. The second tier of collectibles, valuable
but not the ultimate, rare but not unique, includes many items we see for sale in the papers
or on the Internet daily: fine paintings, porcelains, and collectibles that can be replaced.
There is a thick market for these items with less price support, and knowledgeable buyers
will wait until the prices of these objects hit bottom before buying. The third level,
illustrated aptly by the Wall Street Journal article, is more faddish and frequently follows
fashion, as the hapless buyer of a painting by a modern artist, whose works had recently been
acquired by glamorous people, found. The market collapsed when the stars stopped buying, and
his painting didn’t command nearly the price he had anticipated. These items really do not
develop much of a market; they have no price support, and when the price drops, there is
usually no recovery (Crumbley, 1981, 77-79).
Second- and third-level collectors gain expertise in their area and can make money by an
occasional sale or by trading. Becoming knowledgeable in a specialty and trading are now much
easier than in the past with the plethora of publications, newspaper articles, museum events,
and internet information. Individuals engaging in such transactions, however, have limited
guidance with respect to the tax treatment of their collecting activities. The tax consequences
of investing in and trading collectibles are confusing and uncertain. The rules are subjective
and limit expenses and losses resulting from collecting activities unless the taxpayer can
demonstrate a profit motive. Neither Congress nor the Internal Revenue Service has promulgated
new rules that address the ease with which individuals can find buyers for an array of
collectibles over the Internet. It is timely to review the myriad of tax rules that currently exist.
Section 408(m)(2) defines collectibles as any work of art, any rug or antique, any
metal or gem, any stamp or coin, any alcoholic beverage, or any other tangible personal
property specified by the Secretary and held more than one year. Certain types of coins are
specifically excluded. As an investor in collectibles, how do you manage your investment to
take the maximum deduction for your expenditures, pay the least amount of tax on your gains,
and write off any losses you may incur? There are three possible classifications for a
collector: hobbyist, investor, or dealer. The hobbyist has no profit motive, whereas the
investor and dealer do. A loss or expense associated with a hobby is considered personal and
is therefore deductible only to the extent of hobby income, but if an individual is able to
show a profit motive for an activity, losses from the activity are fully deductible.
• Amounts that would be deductible under other sections if the activity had been engaged in
for profit, but only if those amounts do not affect adjusted basis. Examples include
maintenance, utilities, and supplies.
• Amounts that affect adjusted basis and would be deductible under other sections if the
activity had been engaged in for profit. Examples include depreciation, amortization,
and depletion.
These deductions are taken from adjusted gross income (AGI) as itemized deductions to the
extent they exceed 2 percent of AGI. If the taxpayer uses the standard deduction rather than
itemizing, all hobby expenses are nondeductible even though the revenues from sales must
still be reported elsewhere on the return.
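The two limitations just described, the cap at hobby income and the 2%-of-AGI floor, compose as in the sketch below. It is a simplification that assumes, hypothetically, that the hobby expenses are the taxpayer's only miscellaneous itemized deduction subject to the floor.

```python
# Simplified sketch of the hobby-expense limitation (pre-2018 rules):
# expenses are capped at hobby income, then allowed only to the extent
# the total exceeds 2% of AGI, and only if the taxpayer itemizes.
def deductible_hobby_expenses(agi: float, hobby_income: float,
                              hobby_expenses: float,
                              itemizes: bool = True) -> float:
    """Portion of hobby expenses deductible this year."""
    if not itemizes:
        return 0.0                                 # standard deduction: nothing allowed
    allowed = min(hobby_expenses, hobby_income)    # hobby-loss limitation
    floor = 0.02 * agi                             # 2%-of-AGI floor
    return max(allowed - floor, 0.0)

# $80,000 AGI, $3,000 of hobby sales, $5,000 of expenses:
# capped at $3,000, less the $1,600 floor, leaves $1,400 deductible.
print(deductible_hobby_expenses(80_000, 3_000, 5_000))
```

Note how quickly the floor erodes the deduction for high-AGI collectors: at $200,000 of AGI the same $3,000 of capped expenses is entirely absorbed by the $4,000 floor.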
There are certain measures you can take to establish the earnestness of your business endeavor
in an effort to avoid the hobby loss rules. Some of them include:
• Maintain a businesslike attitude in all transactions.
• Emphasize the profit potential of your collection or hobby rather than its pleasant or
recreational aspects.
• Keep any aspects of your collection or hobby that are personal and not for investment or
profit separate, with separate records, including separate checking accounts.
• Record all transactions and preserve the records.
• Ask dealer contacts to make investment suggestions, and note their suggestions.
• Do not attempt to deduct literature that emphasizes the hobby aspect of your activity.
• Do not use investment property for personal use (do not keep collections bought as an
investment in the home).
• The greater your wealth, the harder it will be to prove that profit, not pleasure, is the
primary goal of your hobby or collection.
• If all else fails and you are classified as a hobbyist, sell enough of your collection each
year to pay for your maintenance expenses. (Crumbley, 1981, 18-19)
It has been said that in order to make money in collectibles you need to buy like a dealer
(as close to wholesale as possible), think like a hobbyist (avoid fads and look for steady
increases in value over a period of years), and keep records like an investor (with proper
records, you will lose less to taxes) (Crumbley, 1981, 43). Should something go amiss,
however, and you incur a net loss, the tax law provides three categories under which losses
can be deducted:
• The loss is incurred in a trade or business (applicable to dealers).
• The loss is incurred in a transaction entered into for profit (investor).
• The loss is not connected to a trade or business and arose from fire, storm, shipwreck,
casualty, or theft (dealer, investor, or hobbyist).
Suppose you have made the mistake anyone who has done much collecting has made and bought
a counterfeit item. If you paid $1,800 for your counterfeit item and find it is worth $100,
the loss is deducted when you sell it if you are a dealer or an investor. If you are a
collector, you may take a deduction as a theft loss in the year you made the bad purchase,
since there is no reasonable prospect of recovery. If you are a collector and sell it, the
loss is characterized as a capital loss, if it can be deducted at all. Capital losses are of
limited benefit unless the taxpayer has enough long-term or short-term capital gain to offset the loss.
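The limited benefit of a capital loss can be sketched as follows. The $3,000 figure is the familiar annual limit for an individual's capital losses against ordinary income; the function is a simplification that ignores the short-term/long-term character of carryovers and other details.

```python
# Sketch of how a capital loss is absorbed: first against capital gains,
# then up to $3,000 per year against ordinary income; the remainder
# carries over to future years.
def capital_loss_usable(loss: float, capital_gains: float,
                        ordinary_limit: float = 3_000.0) -> tuple[float, float]:
    """Return (loss usable this year, carryover to future years)."""
    against_gains = min(loss, capital_gains)        # offset capital gains first
    remaining = loss - against_gains
    against_ordinary = min(remaining, ordinary_limit)
    return against_gains + against_ordinary, remaining - against_ordinary

# The $1,800 counterfeit now worth $100: a $1,700 capital loss on sale.
# With no capital gains to absorb it, the full amount is still usable
# this year because it falls under the $3,000 limit.
usable, carryover = capital_loss_usable(1_700.0, capital_gains=0.0)
print(usable, carryover)
```

A larger loss, say $5,000 with no gains, would yield only $3,000 of current benefit and a $2,000 carryover, which is why the text calls capital losses "of limited benefit."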
If you have followed the criteria for a business and the IRS is still interested in your
records, the Supreme Court has given certain illustrations of conduct from which an attempt
to evade or defeat any tax can be inferred (IRS Manual 9.1.3.3.2.2.2):
• Keeping a double set of books.
• Making false entries or alterations of invoices or documents.
• Destroying books and records.
• Concealing assets or covering up sources of income.
• Handling one’s affairs to avoid making the records that are usual in transactions of the kind.
• Any conduct the likely effect of which would be to mislead or conceal.
Violations of the types given above make the headlines every day, but many cases that end up
in court did not involve acts as blatantly fraudulent as the ones listed. They involved,
instead, mostly serious collectors who were making, or intending to make, money on their
collections and honestly believed they should receive a deduction for expenses and losses
associated with their collections.
The Wrightsmans incurred expenses in their collection activities for which they wanted to
claim a tax deduction. In support of their contention that their collecting activity was for
investment purposes, they relied upon a previous court case, George F. Tyler (March 6, 1947).
In 1947, Mr. Tyler won the ruling in his case that expenses and losses associated with his
stamp collection were deductible, since the collection was held for investment purposes. The
court found that Mr. Tyler exhibited scant knowledge of, or interest in, stamps, and that he
did not interact with stamp hobbyists. Although he got some pleasure out of the stamps, his
activities were undertaken primarily for profit. Mr. Tyler’s case is a rare exception. In an
overview of profit-versus-pleasure cases over a 51-year span, his is one of the few in which
the tax courts ruled in the taxpayer’s favor. The Wrightsmans lost their case.
V. FINAL COMMENTS
Definitions in this area are subjective and vague. Existing case law was decided in an era
with limited outlets for disposing of collectibles at their fair value, so it really does not
address today’s approach to buying and selling all kinds of items that may come under the
heading of collectibles. Today’s collector may also incur additional expenses in connection
with advertising, freight, and travel. How does this new approach to buying and selling
collectibles fit within the hobbyist, dealer, or investor classifications?
REFERENCES
Charles B. Wrightsman and Jayne Wrightsman v. The United States, U.S. Court of Claims,
No. 364-66, 428 F.2d 1316, July 15, 1970.
Crumbley, Larry, and Jerry Curtis. Donate Less to the IRS. The Vestal Press Ltd., 1981.
G.F. Tyler, 6 TCM 275, Dec 15,671 (M).
Internal Revenue Manual. Attempt to Evade or Defeat Any Tax, Section 9.1.3.3.2.2.2.
Revision date: 2003-08-11.
Jackson v. Commissioner, 59 T.C. 312. 1972.
The Wall Street Journal, January 23, 2004.
DOES THE IMPLEMENTATION OF SFAS NO. 131
CONVEY USEFUL INFORMATION?
ABSTRACT
The purpose of this paper is to test the information content of Statement of Financial
Accounting Standards (SFAS) No. 131, “Disclosures about Segments of an Enterprise and
Related Information.” A random sample of one hundred and eleven companies listed in the
Business Week Global 1000 for the year 1997 was selected. Two statistical techniques, a
dummy variable and analysis of covariance, were used to test whether the new standard
conveys useful information. The results indicate that application of the new standard does
not convey statistically significant additional useful information. Either investors and
other users already had access to the information disclosed under SFAS No. 131, or managers
can find ways to avoid the hidden costs of disclosing information that may harm the company
but benefit investors and competitors.
I. INTRODUCTION
In 1976 the FASB issued SFAS No. 14, “Financial Reporting for Segments of a Business
Enterprise.” It required listed companies to disclose segment information by both line of
business and geographic area in their annual reports. The absence of a precise definition of
a business segment, the lack of consideration of the internal organization of the company,
and the relatively high cost of providing such information led interested parties to express
great dissatisfaction with the statement. Many, including the Association for Investment
Management and Research (AIMR, 1993), complained that the definition of a segment is
imprecise and that there are many practical problems in applying it. The AIMR also
recommended that segment disclosure in the annual report be based on the internal
organization of the company.
The AICPA Special Committee on Financial Reporting (1994) provided similar
recommendations and asked standard-setting bodies to give the highest priority to this
issue. SFAS No. 14 was subsequently amended by SFAS No. 94, “Consolidation of All
Majority-Owned Subsidiaries,” to remove the special disclosure requirements for previously
unconsolidated subsidiaries, and later superseded by SFAS No. 131, “Disclosures about
Segments of an Enterprise and Related Information,” which retains the requirement to report
information about major customers. The new standard requires that a public business
enterprise report financial and descriptive information about its operating segments. SFAS
No. 131 implemented a management approach, focusing on the way in which management organizes
segments internally to make operating decisions and to assess performance. The objective of
this approach is to harmonize internal and external reporting. The statement became
effective for fiscal years beginning after December 1997.
Recent studies by Herrmann and Thomas (2000) and Street et al. (2002) show that, with
the application of the new standard, companies are reporting a greater number of
line-of-business segments and more information about each segment, and that segment
information is more consistent with other parts of the annual report. What is missing in
previous studies is an analysis of the behavior of the other side of the market, that is,
the demand side of information. The purpose of this study is to focus on the market’s
reaction to the standard.
II. LITERATURE REVIEW
Researchers on SFAS No. 131 have examined different aspects of its application in order to
evaluate its usefulness. Herrmann and Thomas (2000) surveyed the annual reports of a sample
of U.S. multisegment firms listed in the 1998 Fortune 500 to compare the segment disclosures
under SFAS No. 131 with those reported the previous year under SFAS No. 14. They found that
over two-thirds of the sample firms changed segment definitions upon adoption of the new
standard. They showed that the application of the management approach has resulted in
several improvements. First, the new standard has increased the number of firms disclosing
segment information. Second, companies are disclosing more items for each segment.
Street et al. (2000) assessed the 1997 and 1998 annual reports of a sample of the largest
publicly traded U.S. companies to determine whether SFAS No. 131 adequately addressed user
concerns about segment disclosures and the extent to which the expected benefits set forth
in the new standard materialized. The findings suggest that, in general, the new standard
has improved business reporting. The improvements include an increase in the number of
reported segments and significantly greater consistency of segment information in 1998
compared to the year before.
In general, previous studies examined the magnitude of the information disclosed by
suppliers of information under the new standard, but they did not test the usefulness of
such information to users. The purpose of this paper is to test the usefulness to investors
of the application of SFAS No. 131.
IV. METHODOLOGY
The objective of this paper is to measure the structural change, if any, in beta due to the
segmental data disclosure. There are three ways to do this: a dummy-variable technique,
analysis of covariance, and contingency-table analysis. With respect to the dummy variable,
the subject of this inquiry is the single-index market model shown in equation (1).
Rit = αi + βi Rmt + eit    (1)

where Rit denotes the return for the ith firm in week t; Rmt is the market return for week t;
αi and βi are the regression intercept and slope; and eit is the unsystematic (firm-specific)
error. Equation (2) is a modified version of the single-index market model in equation (1),
formulated to test for a structural change in beta.
Rit = αi + βi1 Rmt + βi2 (Dt Rmt) + uit    (2)

The variable Dt in equation (2) is binary, taking the value one in the period of segmental
reporting disclosure and zero elsewhere. The coefficient βi2 on the interaction term
(Dt Rmt) measures the differential effect of segmental reporting disclosure on the beta,
βi1, of the ith firm; over the non-disclosure period, equation (2) reduces to equation (1).
If the beta of the ith firm differs between the non-disclosure and disclosure periods, βi2
will be significantly different from zero. A t-test is used to test the significance of
individual coefficients, and an F-test to test the significance of the whole regression;
the F-test must be significant for any hypothesis to be accepted or rejected.
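The dummy-variable test in equation (2) can be sketched in a few lines of ordinary least squares. This is not the authors' code: the weekly returns below are simulated, and the +0.3 shift in beta during the "disclosure" period is an arbitrary illustration of the structural change the t-test is meant to detect.

```python
import numpy as np

# Regress firm returns on the market return plus the interaction Dt*Rmt,
# where Dt = 1 in the disclosure period; the t-statistic on the
# interaction coefficient (beta_i2) tests for a structural change in beta.
rng = np.random.default_rng(0)
T = 104                                   # two years of weekly returns
rm = rng.normal(0.002, 0.02, size=T)      # market return Rmt
d = np.zeros(T)
d[T // 2:] = 1.0                          # Dt: disclosure-period dummy
# Simulated firm: beta = 1.0 pre-disclosure, shifting by +0.3 after
ri = 0.001 + (1.0 + 0.3 * d) * rm + rng.normal(0.0, 0.01, size=T)

X = np.column_stack([np.ones(T), rm, d * rm])   # [alpha, beta_i1, beta_i2]
coef, _, _, _ = np.linalg.lstsq(X, ri, rcond=None)
resid = ri - X @ coef
s2 = resid @ resid / (T - X.shape[1])           # residual variance
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
t_beta2 = coef[2] / se[2]   # |t| > ~1.98 rejects H0: no change in beta
print(f"beta_i1={coef[1]:.3f}  beta_i2={coef[2]:.3f}  t={t_beta2:.2f}")
```

Over the non-disclosure half the interaction column is zero, so the fit there is exactly the single-index model of equation (1), mirroring the reduction described in the text.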
An alternative test procedure is to use analysis of covariance (ANCOVA). Equations (1) and
(2) and the analysis of covariance will be used to test the following hypothesis:
HYPOTHESIS:
There is no difference in the firm's perceived risk before and after the implementation
of SFAS No. 131. That is
H 0 : β1 = β2
where β1 and β2 represent the firm's perceived risk before and after the disclosure of
segmental data, respectively.
V. RESULTS
The decision rule is that if the coefficient of the dummy variable is significant at the .05 level
for individual firms in this group, then the hypothesis that the application of SFAS No. 131
conveys useful information will be accepted. A t-test will be used to determine whether beta
changes significantly with line-of-business (LOB) segmental reporting disclosure.
The computed betas, together with the computed t-statistics, are reported in Table I
below. The mean of the dummy-variable coefficients is 0.00806, while the mean t-statistic is
-0.106, which is insignificant at the 95% level of confidence.
Table I
Descriptive Statistics: Coefficient and T-test
VARIABLE   N   MEAN   MEDIAN   TRMEAN   STDEV   SE MEAN
Table II shows the results of the F-test for the whole regression. The mean F-statistic is 10.57,
which indicates that the relationship between the independent variables and the dependent
variable is weak. When the regression is run without the dummy variable, the mean F-statistic
is significantly higher, 32.78.
Table II
Descriptive Statistics: F-test for the whole regression
VARIABLE   N     MEAN    MEDIAN   TRMEAN   STDEV   SE MEAN
F-TEST*    111   10.57   2.50     8.72     14.17   1.34
When ANCOVA is applied, the results do not change. Table III shows the results of the
ANCOVA. The mean F-statistic is 0.7843, which is insignificant at the 95% level of confidence.
Table III
Descriptive Statistics: Analysis of Variance
VARIABLE   N     MEAN     MEDIAN   TRMEAN   STDEV    SE MEAN
F-test     111   0.7843   0.3400   0.6489   1.0312   0.0949
Based on the foregoing results, the hypothesis that there is no difference in the firm's
perceived risk before and after the implementation of SFAS No. 131 cannot be rejected. The
foregoing results are consistent with the logic of information. It is important to differentiate
between two impacts of the usefulness of information. The first is when the role of
information is confined only to the reduction of uncertainty surrounding the decision. In this
case, the decision is right but the decision-maker is uncertain. That is because there is
insufficient information. The decision-maker at first utilizes the quantitative information given
in the annual reports and information from other sources. The role of information (segmental
reporting) here is to confirm the previous decision by reducing the spread of probability
distribution around the mean. In this case, the information is new and useful, but there is no
significant reaction in the market to the release of useful information. Its impact can be
measured by asking investors and other users of financial statements to assign a probability
distribution to their expected return.
The second impact is when the effect of information released extends to induce
significant revision in the previous decision. It is this kind of information whose effect can be
captured indirectly by measuring the movement in share prices. Thus, even when the
information released is useful, share prices are not necessarily expected to be affected. The
current results reflect these facts.
Moreover, there are situations in which information has no impact at all. These could
occur when the information disclosed is perceived as useless or redundant. In this case, share
prices would not be expected to react to the release of such information.
VI. CONCLUSION
The results of this study indicate that the application of SFAS No. 131 does not convey
statistically significant additional information relevant to investors and other users of
financial statements. Investors and other users of financial statements probably already had
access to the information disclosed under this standard.
REFERENCES
Street, Donna L., and Nancy B. Nichols. "LOB and Geographical Segment Disclosures: An
        Analysis of the Impact of IAS 14 Revised." Journal of International Accounting,
        Auditing and Taxation, Vol. 11, Issue 2, 2002, pp. 91-123.
Street, Donna L., Nancy B. Nichols, and Sidney Gray. "Segment Disclosure under SFAS No.
        131: Has Business Segment Reporting Improved?" Accounting Horizons, Vol. 14,
        No. 3, September 2000, pp. 259-285.
Chen, Peter F., and Guochang Zhang. "Heterogeneous Investment Opportunities in Multi-
        Segment Firms and the Incremental Value Relevance of Segment Accounting Data."
        The Accounting Review, Vol. 78, No. 2, 2003, pp. 397-428.
INDUSTRY AND MARKET’S EFFECTS ON THE
USEFULNESS OF BOOK VALUE
ABSTRACT
This study explored industry and market effects on the usefulness of book value
of equity in the valuation of companies reporting negative earnings. Our evidence suggests
that the existence of the anomalous negative price-earnings relation is conditional on both the
economic environment and the industry group, and is consistently present only for high-tech
companies throughout our sample period. In addition, the usefulness of book value of equity
in eliminating this anomaly and improving the model's explanatory power varies across
industry sectors.
I. INTRODUCTION
Although Collins et al. argued in their 1999 paper that the anomalous negative
relationship between stock prices and negative earnings was induced by misspecification of
the simple earnings capitalization model, and could be eliminated by augmenting the model
with book value of equity, we doubt this conclusion holds universally across industry groups
under different market conditions. In the first place, with the rapid growth of the high-tech
industry, especially over the past decade, high-tech companies, which invest heavily in
intangible assets and R&D (expenditures that must be fully expensed), account for a much
larger share of loss-reporting firms. For high-tech firms, therefore, earnings are not as
important, and book value can hardly measure a firm's true wealth (Barron et al., 2002).
Low-tech firms, by contrast, usually have stable income and large long-term capital
investments; book value of equity is therefore expected to play a bigger role in their valuation.
Secondly, after climbing to its historical peak of 5,048.62 in March 2000, the Nasdaq suffered
a severe fall beginning in April 2000, which started the bear market period. In just one year,
the Nasdaq Composite Index dropped by more than 3,000 points, touching bottom in the last
quarter of 2002. Many high-tech stocks fell as much as 90% during this stock market crash.
Apparently, technology stocks had been dramatically overvalued, which makes the valuation
of high-tech stocks an even bigger puzzle. Meanwhile, how the 2000 Nasdaq fall affected the
value relevance of accounting information provided by low-tech firms is another question of
interest.
To further explore the industry and market effects indicated above, we investigated
the anomalous negative price-earnings relation and the usefulness of book value of equity for
both high-tech and low-tech firms in two distinct time periods: before and after the 2000
stock market crash. Our evidence suggests that the existence of the anomalous negative
price-earnings relation is conditional on both the economic environment and the industry
group. Meanwhile, we found that the anomaly persists in the high-tech industry group after
the inclusion of book value of equity, in both the pre- and post-crash periods. By contrast, we
found that book value of equity plays a more significant role in the valuation of "low-tech"
loss firms: including book value in the simple earnings capitalization model not only
eliminates the negative price-earnings relation on average but also improves the model's
explanatory power dramatically over our 10-year sample period.
The rest of this paper is organized as follows: Section II introduces the sample
selection and data. Section III presents and analyzes the empirical evidence for the price-
earnings relation using the simple earnings capitalization model. Section IV reports the results
and findings from including book value in the valuation model. Section V concludes this
research.
II. SAMPLE SELECTION AND DATA
In this study, we used a recent sample period, 1995 to 2004, chosen to (1) provide a
period long enough for an association study and (2) ensure an identical number of event years
before and after the 2000 Nasdaq crash for comparison. To accomplish our test, we created
two contrasting industry groups, high-tech and low-tech, using 3-digit SIC codes (see Table I
below). This industry grouping is based on, among other considerations, whether firms in the
industry are likely to have significant intangible assets, reported or unreported, which is
consistent with Francis and Schipper's 1999 study. For each industry group, we used all
firm-year observations during the 10-year sample period from the COMPUSTAT current and
research databases to construct our initial data set and eliminated observations, among other
criteria, for which: (1) December is not the fiscal year-end (to mitigate the temporal effect of
stock market fluctuation on stock price); (2) stock price, EPS, or book value per share is
missing or lies three standard deviations from its respective mean; or (3) the total number of
common shares outstanding decreases by one-third from the previous year, which is
suggestive of a reverse stock split to manage EPS. Our data selection process finally generated
10,785 usable observations in the high-tech sub-sample (55% loss firms) and 2,413 usable
observations in the low-tech sub-sample (30% loss firms).
Table I
High-Tech And Low-Tech Industry Groups

Industries        3-digit SIC codes
High-tech Group   283, 357, 360, 361, 362, 363, 364, 365, 366, 367, 368, 481, 737, 873
Low-tech Group    020, 160, 170, 202, 220, 240, 245, 260, 300, 307, 324, 331, 356, 371, 399, 401, 421, 440, 451, 541
III. THE PRICE-EARNINGS RELATION
We first replicated the price-earnings relation study in each of our industry groups
using the simple earnings capitalization model:
Pt = α + β Et + ε t (1)
where Pt is the firm's cum-dividend stock price three months after the end of fiscal year t (the
stock price plus the dividend per share for year t), and Et is bottom-line earnings per share,
including discontinued operations, extraordinary items, and accounting changes.
Our empirical results partially confirmed Jan and Ou's and Collins et al.'s research:
although for all firms combined the earnings coefficients are significantly positive in most
years (unreported), when the sample is divided into "loss firms" and "profit firms," an
anomalous negative price-earnings relation exists for loss firms taken together (unreported).
However, as shown in Table II below, this anomalous negative relation is present in high-tech
loss firms throughout the whole sample period; for the low-tech sector, it is present only in
the market-boom period, the five years prior to the Nasdaq's fall.
Table II
Coefficient Estimates From Regressing Price On
Negative Earnings – Model (1)
Pt = α + β Et + ε t
Specifically, for high-tech firms taken together, the estimated coefficients on
earnings are significantly positive in 7 of the 10 sample years (unreported). For high-tech
"profit" firms considered separately, there is a positive and homogeneous relation between
price and earnings (unreported). But for high-tech "loss" firms, the estimated coefficient on
earnings is significantly negative in 8 of the 10 sample years, as it is for the 10-year pooled
data, which supports the existence of the anomalous negative price-earnings relation. For
low-tech "loss" firms, however, the results differ across sub-periods: the earnings coefficient
is negative in all 5 years prior to the Nasdaq fall, and significant in 3 of those 5 years. In the
bear market period, by contrast, the earnings coefficients are not significantly different from
zero, indicating that a homogeneous negative price-earnings relation is doubtful given distinct
market conditions, even within the same industry group. These results suggest that the
existence of the negative price-earnings anomaly is conditional on both industry and market
conditions.
IV. THE ROLE OF BOOK VALUE OF EQUITY
To further examine the usefulness of book value of equity and its ability to eliminate
the price-earnings anomaly for loss firms, as argued in prior studies, we included BVt-1, book
value per share at the beginning of year t, in the simple earnings capitalization model. This
second regression model is derived from Ohlson's 1995 model and the clean surplus relation,
as explained by Collins et al. in the appendix to their 1999 work.
Pt = α + β Et + γ BVt-1 + ε t (2)
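To illustrate how adding book value can remove a spurious negative earnings coefficient, the sketch below fits models (1) and (2) by OLS on simulated loss-firm data. The data-generating process, the helper function, and every number here are purely hypothetical illustrations and are not the paper's sample or results.

```python
import numpy as np

def ols_adj_r2(y, X):
    """OLS fit (with intercept) returning coefficients and adjusted R-squared."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    n, k = X1.shape
    r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    adj = 1 - (1 - r2) * (n - 1) / (n - k)
    return coef, adj

# Hypothetical loss firms: price is driven by book value, while losses
# deepen with firm scale, so earnings proxy for omitted book value.
rng = np.random.default_rng(1)
bv = rng.uniform(1.0, 5.0, 300)             # book value per share
e = -0.3 * bv + rng.normal(0.0, 0.1, 300)   # negative earnings per share
p = 2.0 * bv + rng.normal(0.0, 0.5, 300)    # stock price

c1, adj1 = ols_adj_r2(p, e[:, None])                # model (1): P on E
c2, adj2 = ols_adj_r2(p, np.column_stack([e, bv]))  # model (2): P on E and BV
# c1[1] is negative (omitted-variable bias); in model (2) the earnings
# coefficient c2[1] is near zero and adjusted R-squared rises.
```

The mechanics mirror the paper's low-tech finding: once book value enters the specification, the anomalous negative earnings coefficient disappears and explanatory power improves.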
Our test results, presented in Table III on the next page, revealed different valuation
patterns for different industry sectors in bull and bear markets. In particular, for high-tech
loss firms, inconsistent with Collins et al.'s argument, we found that the inclusion of book
value does not eliminate the anomalous negative relationship between price and earnings: the
estimated coefficient on negative earnings remained negative and significant in 7 of the 10
years of our sample period. Nor is the overall explanatory power of the valuation model
improved as argued by other researchers: the adjusted R2 increased by only about 1%, from
9% to 10%. However, our findings showed some evidence that the usefulness of book value
of equity improves in a recessionary market as compared with a flourishing one. Specifically,
in the pre-2000 period, the estimated coefficient on book value is both positive and significant
in only 2 of the 5 years; in the post-crash period, however, the coefficient on book value is
significantly positive in 4 years, suggesting a shift of emphasis toward book value in the
valuation of high-tech loss firms in a bear market. For low-tech firms, our test results are
quite different. First, taken together, the estimated coefficient on earnings remains both
negative and significant in only 2 years, and although negative, the overall estimated
coefficient on earnings is not significantly different from zero with the addition of book value,
indicating the elimination of the negative price-earnings anomaly.
Table III
Coefficient Estimates From Regressing Price On
Negative Earnings And Book Value – Model (2)
Pt = α + β Et + γ BVt-1 + ε t
Meanwhile, the estimated coefficient on book value is significantly positive in 7 of the 10
years from 1995 to 2004, evenly distributed across the rising and declining periods, with no
significant change in investors' reliance on book value over time. In addition, the adjusted R2
increased from 8% to 28.4% after the inclusion of book value in the regression model. The
results indicate that for low-tech loss firms, book value of equity is useful information in the
valuation process as an additional proxy for expected future earnings.
V. CONCLUSION
This study explored industry and market effects on the usefulness of book value
of equity in valuing companies reporting negative earnings. Our evidence from the high-tech
sector demonstrates the persistence of the anomalous negative relation even with the inclusion
of book value in the model, although the usefulness of book value improved marginally
during the market recession. By contrast, evidence from the low-tech sector suggests the
anomalous price-earnings relation exists only in the pre-crash period, when the market was
flourishing; in the down period, there is no significant evidence of such a relation. In addition,
for low-tech loss firms, book value of equity plays a significant role in valuation: including
book value of equity in the valuation specification not only eliminates the anomalous
price-earnings relation but also improves the model's explanatory power dramatically.
REFERENCES
Ball, R. and P. Brown. “An Empirical Evaluation of Accounting Income Numbers.” Journal
of Accounting Research, Autumn, 1968, 159-78
Barron, O., D. Byard, C. Kile, and E. Riedl. “High-technology Intangibles and Analysts'
Forecasts” Journal of Accounting Research, 40, 2002, 289-312.
Burgstahler, D., and I. Dichev. “Earnings, adaptation and equity value.” The Accounting
Review, 72, 1997, 187-215.
Collins, D., M. Pincus, and H. Xie. “Equity Valuation and Negative Earnings: The Role of
        Book Value of Equity.” The Accounting Review, 74, 1999, 29-61.
Francis, J., and K. Schipper. “Have Financial Statements Lost Their Relevance?” Journal of
Accounting Research, 37, 1999, 319-352.
Jan, C., and J. Ou. “The Role of Negative Earnings in the Valuation of Equity Stocks.”
Working Paper, 1995, New York Univ. and Santa Clara Univ.
Jahnke, W. “Valuing New Economy Stocks.” Journal of Financial Planning, 13 (6), 2000, 46-
48.
Kothari, S. P. “Capital Markets Research in Accounting.” Journal of Accounting and
Economics, 31, 2001, 105-231.
Lev, B. and Ohlson, J. “Market Based Empirical Research in Accounting: a Review,
Interpretation, and Extensions.” Journal of Accounting Research, 20 (Supplement),
1982, 249-322.
Lev, B. “On the Usefulness of Earnings and Earnings Research: Lessons and Directions from
Two Decades of Empirical Research”, Journal of Accounting Research, 27
(Supplement), 1989, 153-192
Lev, B. and T. Sougiannis. “Capitalization, Amortization, and Value-Relevance of R&D.”
Journal of Accounting & Economics, 21, 1996, 107-138.
Ohlson, J. “Earnings, Book Values, and Dividends in Security Valuation.” Contemporary
        Accounting Research, 11, 1995, 661-687.
EXAMINING PERCEPTIONS OF STUDENT SOLUTION STRATEGIES IN AN
INTRODUCTORY ACCOUNTING COURSE
ABSTRACT
The purpose of this research is to examine more closely the reasoning ability of
students in an introductory financial accounting course using protocol analysis. Prior research
has shown that accounting faculty increasingly believe that accounting students have poor
reasoning ability. This research finds that the shift in accounting professors’ perceptions that
students have poor reasoning ability is not warranted: the problem is not students’ poor
reasoning ability but their lack of prior training in how businesses view economic
transactions.
I. INTRODUCTION
In 1985, Tanner and Caruth examined faculty perceptions regarding the “academic
preparation and motivation of their students.” This assessment was conducted by distributing
a questionnaire with Likert-type attitudinal statements to measure accounting faculty
responses. In 1995, a decade later, Tanner, Tontaro, and Wilson reexamined this issue by
sending the questionnaire to another set of accounting faculty.
The responses of the new study were compared against the responses of the previous
study to determine if any significant differences existed. One of the major findings of this
comparative study is that “while both faculty groups disagreed that accounting majors had
poor reasoning ability, the 1985 faculty respondents showed a significantly stronger level of
disagreement.” This shift indicates increasing skepticism about the reasoning ability of
accounting students.
The shift in perceptions over time is the foundation of this research. Essentially,
previous research revealed that accounting professors disagreed that accounting majors
displayed poor reasoning ability; yet over a span of ten years professors disagreed less that
accounting majors had poor reasoning skills. Clearly, this indicates a perceptual shift that
attributes poor performance to the reasoning ability of students. In this study, we ask, “Is the
shift in accounting professors’ perceptions of students’ reasoning ability warranted?”
The purpose of this research is to examine more closely the reasoning ability of
students in an introductory financial accounting course using protocol analysis. Protocol
analysis was used because we wanted to compare the solution strategies of students with
those of accounting professors. This study therefore contributes by assessing both accounting
professors’ and students’ solution strategies. Whereas Tanner et al. (1985, 1995-96) utilized
a quantitative analysis of faculty perceptions regarding student skills, this research extends
previous studies by making a qualitative assessment of both faculty and student perceptions.
II. METHOD
This research was conducted at a large southwestern university. Two junior
business administration students, one male and one female, participated in the study. Since
the objective was to understand solution strategies, that is, what participants think about how
to solve problems, a qualitative methodology was used to assess those perceptions. Protocol
analysis was employed because, unlike the typical quantitative survey, it affords an in-depth
examination of participants’ rationales.
The two students who participated in the protocol analysis had completed both an
introductory financial accounting course and an introductory managerial accounting course.
The students were then instructed to solve by hand multiple-choice exam questions
previously given on a summative introductory accounting exam. These questions were
selected because anecdotal evidence suggests that
these concepts are difficult for introductory accounting students. In particular, the questions
involve counterintuitive reasoning processes, logical inferences, and timing sequences.
Solution strategies employed by students could possibly assist in understanding why these
concepts seem more difficult for students to comprehend. The students were instructed to
verbalize their thoughts and mental processes as they solved the questions. The verbalizations
of the students solving the problems were taped, transcribed, and analyzed in a process matrix.
The following section presents the process matrix: the question statement, the professor’s
solution strategy, the students’ solution strategies, and an analysis of the student protocols
against the professor’s ideal response.
III. RESULTS
An exam question (see Table I) asked the students to select the option that contains
one of the criteria for revenue recognition under the accrual method of accounting (additional
questions and responses are available from the author upon request). The question asked the
student, when should an accounting entity record or include in revenue the results of an
economic business event or transaction under the accrual method of accounting? Of the 350
students who took this exam, 57.4% (201) selected “A” (the correct response), 18% (63)
selected “B”, 17.7% (62) selected “C”, and 6.8% (24) selected “D”. Of those students who
missed this question, a similar number of students picked option “B” (63) or “C” (62).
Which of the following is one of the criteria for revenue recognition under the accrual
method?
a. A measurable asset is received.
b. Cash is collected.
c. An agreement is signed for a service.
d. Revenues must exceed expenses.
The basic two-pronged rule of revenue recognition under the accrual method is that (1)
goods and/or services must be delivered to the customer (i.e., revenue must be earned) and (2)
the monetary value of the good and/or service must be known. Those students who chose
option “B” may either have had difficulty understanding the “accrual basis” of revenue
recognition or had difficulty distinguishing between the “cash basis” of revenue recognition
and the “accrual basis” of revenue recognition. Under the accrual basis of accounting, revenue
is recognized when it is earned, regardless of when cash is received while under the cash basis
of accounting, revenue is recognized when cash is received. If the students either did not
understand the accrual basis of accounting or were unable to differentiate between the cash
basis and the accrual basis of revenue recognition, they may have believed that revenue
should be recognized whenever cash is received for services that either were rendered or will
be rendered.
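The distinction the question tests can be stated mechanically. The toy function below is our own illustration (its name and signature are assumptions, not standard accounting software); it captures the two recognition bases described above:

```python
def revenue_recognized(basis, earned, cash_received):
    """Toy rule: the accrual basis recognizes revenue when it is earned,
    regardless of when cash is received; the cash basis recognizes revenue
    when cash is received."""
    if basis == "accrual":
        return earned
    if basis == "cash":
        return cash_received
    raise ValueError(f"unknown basis: {basis}")

# Cash collected in advance of delivering the service:
# accrual basis -> no revenue yet; cash basis -> revenue recognized.
```

A student who conflates the two bases would, in effect, apply the "cash" branch while believing it to be the accrual rule.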
The students who selected option “C” may have assumed that a signed agreement
would be both enforceable and state a monetary value for either goods to be received or
services to be performed. Under this assumption, only one of the rules of revenue recognition
would have been met and the economic event could not be recognized as revenue. What is
similar in options “B” and “C” is that both may be done during the business transaction and
neither option requires that the goods and/or services be delivered. This may account for the
relatively similar number who selected either option “B” or “C”. Because so few students
picked option “D”, this option is not included in the analysis.
Student protocol responses are shown in Table II. Both students recognized that the
key to answering this question was the differentiation between the accrual and cash method of
revenue recognition. However, both students failed to demonstrate an understanding that
revenue is recognized, under the accrual method, when it is earned.
Student 1: C, I can’t even remember the difference between the cash and accrual method, but
I want to give it a shot. Um, a measurable asset is received, no. Cash is collected, no that is
the cash method. An agreement is signed for service, yes. That would be revenue under the
accrual method.

Student 2: C, Um, I need to know the difference between accrual vs cash so I know that “B”
is not an answer. Cash collected could be for a prior year agreement, and it wouldn’t be
counted as revenue. A measurable asset is received that doesn’t necessarily mean revenue,
you could have gone out and purchased a building, that leaves “C” an agreement is signed for
services. I worked for a construction company and I know that you needed the contracts
signed and lined up in order to start the work so you can get paid.
The researcher’s inference was that students chose option “C” because they believed
that a signed agreement would be both enforceable and state a monetary value for goods to
be received or services to be performed. The result of the protocol analysis is consistent with
this inferred test-item solution strategy.
IV. CONCLUSION
Prior research has shown that accounting faculty increasingly believe that
accounting students have poor reasoning ability. Our research does not confirm that students
in an introductory accounting course have poor reasoning ability. An analysis of the student
protocols reveals that errors in problem solutions resulted from students not understanding
counterintuitive terminology or the concepts that underlie how business transactions are
viewed from an accounting perspective.
Therefore, this research finds that the shift in accounting professors’ perceptions that
students have poor reasoning ability is not warranted. Prior research may have fallen victim
to the fundamental attribution error (Kelley, 1972; Miller & Lawson, 1989), the tendency
to underestimate external factors and overestimate the influence of internal or personal traits.
Essentially, this research suggests that the problem is not students’ poor reasoning ability but
their lack of prior training in how businesses view economic transactions. Future research
should examine this relationship.
REFERENCES
THE IMPACT OF MERIT PAY ON RESEARCH
OUTCOMES FOR ACCOUNTING PROFESSORS
ABSTRACT
Merit pay for professors to encourage better teaching, research and service is
controversial. Its effectiveness can be examined empirically. In this study, the existence of a
merit plan and ACT scores of incoming freshmen were strongly associated with measurable
research outcomes. Additional study is needed to test the association of merit systems with the
other dimensions of faculty performance, teaching and service.
I. INTRODUCTION
An article in the Santa Rosa, California, Press Democrat, dated October 24, 2001,
suggests merit pay for professors is controversial:
“Faculty at Sonoma State University staged a protest of their own on Tuesday.
Frustrated by the administration's ‘corporate’ management style, professors held a daylong
teach-in at the Student Union and central quad to draw attention to a laundry list of
grievances. Speakers at the event railed against the merit system. The merit pay system,
established in 1995, is a major sticking point in stalled contract negotiations with professors.
The faculty association wants to scrap the system, which bases pay raises partially on reviews
by both faculty and administrators. ‘The system pits professors against one another and
rewards those who pander to administrators,’ said Rick Luttmann, the Sonoma State faculty
chair (sic) and math professor.”
Compensation practices vary widely across colleges and universities. Periodically, the
College and University Personnel Association (CUPA) surveys over 3,000 higher education
institutions regarding their policies and methods for adjusting individual salaries. Methods
considered in the survey included: annual general wage adjustment, automatic length-of-
service adjustment, a merit pay plan, lump-sum incentive payment, bonus, gainsharing, skill-
and competency-based pay, team incentives, and combined across-the-board and merit pay
plans. The CUPA data published in 1999 indicate that merit systems are used by 23.7 percent
of responding institutions and plans that combine across-the-board pay raises with merit pay
are used by another 26.4 percent of institutions. Since the data is aggregated, there is no way
of knowing which individual schools use a merit-based or partially merit-based pay system.
The rationale behind merit systems is to reward and thus encourage better performance
in the key areas of teaching, research and service. Typically, teaching performance is
measured with student evaluations or outcomes assessment tests such as the ETS Major Field
Tests. Research performance is most frequently measured with a count of a professor’s
publications. An acceptable service measure has been elusive. Given the ongoing controversy
over the usefulness of merit pay plans, we are now asking whether the presence of a merit
system might be an institutional determinant of faculty research output.
Increasing restrictions on public funding and the desire of university administrators for
greater discretion over faculty salaries have led to a move away from traditional seniority-
based compensation systems (Grant, 1998). For a merit plan to be feasible, however, there
must be a clear link between individual effort and performance, and that performance must be
accurately measured (Heneman and Young, 1991). It has been vociferously argued that merit
pay schemes are just not practical in a university setting, because the performance of
individual faculty members is too difficult or specialized to measure objectively (Johnston,
1978).
A field study of public school deans showed they do believe merit systems promote
better teachers and higher quality research output (Taylor, Hunnicutt, and Keefe, 1991).
However, this study, like the faculty protests at Sonoma State University, offers only
opinions. We suggest that, at least in the context of an accounting program, the question of the
value or effectiveness of merit pay can be addressed as an empirical issue.
Of the three areas of faculty productivity -- research, teaching and service -- this study
is intended to develop empirical evidence of the impact of merit pay systems on research
outputs. If merit pay systems have the desired impact of improving faculty performance in
measured areas, then schools with merit systems would be expected to exhibit stronger than
average faculty performance in this area.
III. HYPOTHESES
H1: Ceteris paribus, there is a statistically significant relationship between the research
output of the faculty of an accounting program and the presence or absence of a
faculty merit pay plan.
The average ACT score of incoming freshmen was taken to represent the quality of each
institution's incoming student body, and its relationship with a measure of faculty research
output was tested in a second hypothesis:
H2: Ceteris paribus, there is a statistically significant positive association between the
average ACT score of a school’s incoming freshmen and the research output of the
school’s accounting faculty.
IV. METHODOLOGY
The e-mail addresses of department chairs of 500 of the 800+ accounting programs in
the United States were identified using Hasselback’s Accounting Faculty Directory 2003-
2004. Each of the 500 chairs was e-mailed a survey using the CUPA taxonomy of methods
currently used to adjust individual salary rates. The chair’s response to this survey revealed
whether or not a merit plan was in place at that school.
Average ACT scores were obtained from Profiles of American Colleges, 2002
published by Barron’s. If the ACT score was not reported, the California State University
System’s Eligibility Index Table for California High School Graduates or Residents of
California was used to convert the SAT score into an ACT score.
The names of individuals comprising the accounting faculty of each school were
obtained from the Hasselback directory. The research output of each of these individuals was
obtained from the Economic Literature Database 2002 compiled by Jean Louis Heck. The
average number of publications per faculty member per school was then calculated.
A regression was then run. The dependent variable is the average number of
publications per faculty member of a school. In the regression, the two independent variables
are: an indicator variable assigned the value of 0 if the school does not have a merit program,
and a value of 1 if it does; and the school’s mean ACT score of incoming freshmen.
Therefore, the model to be tested is:
Publications = b0 + b1(Merit) + b2(ACT) + e
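As an illustration, such a two-variable model can be estimated by ordinary least squares. The sketch below uses simulated data, since the study's survey responses are not reproduced here; the coefficient values, seed, and distributions are assumptions:

```python
import numpy as np

# Hypothetical data for: Publications = b0 + b1*Merit + b2*ACT + e
rng = np.random.default_rng(0)
n = 50                                    # matches the 50 usable survey responses
merit = rng.integers(0, 2, size=n)        # 1 if the school has a merit plan, else 0
act = rng.normal(24.0, 3.0, size=n)       # mean ACT score of incoming freshmen
pubs = -8.0 + 1.9 * merit + 0.46 * act + rng.normal(0.0, 1.0, size=n)

X = np.column_stack([np.ones(n), merit, act])     # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, pubs, rcond=None)   # OLS estimates [b0, b1, b2]
print(beta.round(2))
```

With real data, a statistics package would also report the F statistic, adjusted R square, and coefficient p-values of the kind discussed in the results.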
Sixty-one of the 500 surveys (12%) were returned. Eleven of these were not usable,
leaving 50 usable surveys (10%). Only four types of faculty salary adjustments were reported,
and some schools used multiple methods. As seen in Table I, correlation coefficients show that
schools with merit programs tend not to offer ‘time in grade’ pay adjustments.
In the regression, the F is 6.642 and significant at the .004 level. Therefore, it is highly
unlikely that all of the regression coefficients are equal to zero. The adjusted R square is .267,
so about a quarter of the variance of the publications variable is explained by the variance of
the ACT and Merit variables. The estimated coefficient on the Merit variable is 1.946 and
significant at the .007 level. The estimated coefficient on the ACT variable is .464 and
significant at the .012 level. These results are consistent with both hypotheses.
Table I
N = 50
COLA STEPS MERIT BONUS
COLA 1.0
Legend:
COLA = Annual General Wage Adjustment, used by 31 (62%) schools
STEPS = Automatic Length of Service Adjustment, used by 8 (16%) schools
MERIT = Merit Pay Plan, used by 34 (68%) schools
BONUS = Bonus Plan, used by 2 (4%) schools
** = Coefficient is significant at the .01 level
These are quite robust results indicating that there is a strong relationship between a
faculty's research output and the existence of a merit system as well as a strong relationship
between the quality of the student body and faculty research output. If the only purpose of
merit pay were to encourage additional research productivity, it would be easy to conclude
that such systems are effective.
The minutes of a faculty discussion posted on the Drew University website in 2005
show that some faculty regard merit pay as an incentive to encourage and focus their work
while others believe it is simply a means of “recognition” of work that would otherwise have
been accomplished. However faculty interpret their merit system, merit pay for faculty
remains a controversial means to encourage and/or reward faculty efforts and excellence in
multiple dimensions including teaching and service as well as in research.
The results developed here certainly suggest that those campuses more attractive to
higher performing students, as measured by ACT scores, also seem to attract more productive
faculty scholars, as measured by research output. In addition, campuses with a merit system in
place clearly have faculties with higher research outputs.
These simple tests could have been influenced by unidentified confounding factors.
More to the point, additional tests are needed to determine whether merit pay systems are also
associated with better outcomes for the other dimensions of rewarded faculty performance:
teaching and service. Both faculty and administrators need to continue to examine the design
and implementation of merit systems. Perhaps additional empirical work will make the
continued discussion less adversarial than it was at Sonoma State University in 2001.
REFERENCES
AN ANALYSIS OF INVESTMENT PERFORMANCE AND MALMQUIST
PRODUCTIVITY INDEX FOR LIFE INSURERS IN TAIWAN
ABSTRACT
After the insurance market opened in 1987, the whole market structure changed. Facing
more intense competition, life insurers in Taiwan set a goal of higher efficiency in investment
performance and profitability. Notably, insurers may become insolvent when investment
performance is inefficient, because declining profit can lead to serious interest spread loss.
The main purpose of this study is to determine capital investment efficiency based on DEA
results and the Malmquist Productivity Index. A further goal is to explore whether there are
statistically significant differences among different groups. Results show that some life
insurers achieve efficient investment performance in terms of overall efficiency, scale
efficiency, and/or pure technical efficiency.
I. INTRODUCTION
It is important to study the profitability and investment performance of life insurers,
because companies may become insolvent when declining profit leads to serious interest
spread loss. In Taiwan, the main source of a life insurer’s profit, financial receipts, depends on
investment performance. Premiums received only cover commissions and business expenses,
although they amount to about eighty percent of total income (Yen, Sheu, & Cheng, 2001).
Thus, whether investment performance is efficient is a key factor in the overall performance
of the business.
The purpose of this study is to adopt DEA, developed by Charnes, Cooper, and Rhodes
(1978), to measure the relative efficiency and investment performance of life insurers in
Taiwan. Further, the MPI, developed by Fare, Grosskopf, Lindgren, and Ross (1989), is used
to show the efficiency change of companies from 1998 to 2002. Based on the MPI, which
includes technical efficiency change, technological change, pure technical efficiency change,
scale efficiency change, and total factor productivity (TFP) change, life insurers can revise
their input and output factors. Thirdly, the investment performance of life insurers is
compared among the original domestic, new entrant, and foreign companies. Finally, the
results can inform strategies for raising competitive ability.
III. METHODOLOGY
The participants of this study, drawn from the annual reports of life insurers in Taiwan,
were classified into three groups: eight original domestic companies, nine new entrant
domestic companies, and nine foreign life insurers. The analysis covers the years from 1998
to 2002. Kuo Hua Life Insurance was eliminated because of missing or incomplete data in its
financial annual report.
DEA is a non-parametric technique for measuring relative efficiency. There are two
models in DEA: the CCR model and the BCC model. Under the insurance laws in Taiwan,
the investment targets of life insurers include: deposits in banks, securities, real estate, loans
to policyholders, mortgages, loans, foreign investments, and authorized projects or public
investments. However, not every category was used by every insurer. Thus, the input
variables in this study are classified into deposits, securities, loans, and four other items. The
output variable is financial receipts, which comprise three items: interest income, gain on
investment in securities, and gain on investment in real estate.
DEA is limited to analyzing performance within a single year, whereas the MPI extends
the analysis to productivity change across consecutive years. Based on Fare, Grosskopf,
Norris, and Zhang (1994), the MPI provides five indices: technical efficiency change (effch),
technological change (techch), pure technical efficiency change (pech), scale efficiency
change (sech), and total factor productivity change (tfpch).
Investment trends show that securities are the most important investment tool for life
insurers in Taiwan, followed by mortgage loans. Authorized projects or public investment
remained the smallest category each year. Deposits were an important investment item before
1997, but the securities item increased sharply after 1997. Further, foreign investment has
also grown very quickly since 2000. Moreover, government and treasury bonds account for
the largest share of securities investment; corporate bonds and benefit certificates appear to
be less important than government and treasury bonds.
The Pearson correlation coefficients for life insurers are summarized in TABLE 1. The
correlation coefficients between input and output variables are all greater than 0.80. The
performance measurement for life insurers from 1998 to 2002 is shown in TABLE 2. Both
Nan Shan and Hontai Life have an overall efficiency and scale efficiency of 100%. Most life
insurers possess a pure technical efficiency of 100%; they include Cathay, Nan Shan,
Manulife, American, and Hontai Life. Only Shin Kong Life exhibits increasing returns to
scale. The insurers with constant returns to scale are Cathay, Nan Shan, Hontai, American,
and Manulife. Furthermore, TABLE 3 lists the efficient investment performance of life
insurers for each year. The DEA model can be expressed as follows:
Financial receipts = f (deposits in banks, securities, mortgages, other).
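For reference, the input-oriented CCR model can be solved as a small linear program. The sketch below uses a single input and output with made-up data for three hypothetical DMUs (not the insurers' figures) and assumes scipy is available:

```python
from scipy.optimize import linprog

# Input-oriented CCR for DMU k: minimize theta subject to
#   sum_j lam_j * x_j <= theta * x_k   (inputs contracted to a fraction theta)
#   sum_j lam_j * y_j >= y_k           (reference set produces at least y_k)
#   lam_j >= 0
x = [1.0, 2.0, 1.0]   # inputs of hypothetical DMUs A, B, C
y = [2.0, 2.0, 1.0]   # outputs of hypothetical DMUs A, B, C
n = len(x)

def ccr_efficiency(k):
    c = [1.0] + [0.0] * n                  # decision vars: [theta, lam_1..lam_n]
    A_ub = [[-x[k]] + x,                   # input constraint rearranged to <= 0
            [0.0] + [-v for v in y]]       # output constraint flipped to <=
    b_ub = [0.0, -y[k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return round(res.x[0], 3)

effs = [ccr_efficiency(k) for k in range(n)]
print(effs)   # DMU A lies on the CRS frontier; B and C are dominated
```

An efficiency score of 1.0 marks a frontier DMU; scores below 1.0 show how far inputs could be contracted while keeping output.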
Table 1 Pearson Correlation Coefficients
Input Variables
Year Deposit Securities Loan Other
1998 0.9815 0.9458 0.9829 0.9984
1999 0.9776 0.8996 0.9832 0.9952
2000 0.9699 0.8770 0.9823 0.9897
2001 0.8928 0.8028 0.9573 0.9929
2002 0.8432 0.8842 0.9614 0.9540
Note: Output variable is the financial receipt of insurers
This study used DEA to calculate the optimal input for inefficient insurers during 1998-
2002. For example, the two least efficient insurers are shown in TABLE 4. Insurers 23 and 25
could improve their overall efficiency by up to 907% and 1210%, respectively, and insurers
16 and 1 could improve their pure technical efficiency by up to 591% and 177%, respectively.
Thus, it is necessary for them to revise their investments to raise their efficiency. TABLE 5
reports the yearly means of the five MPI indices, which vary considerably from year to year.
Table 5 Means of Malmquist Productivity Indices (MPI)
Year effch techch pech sech tfpch
1998-1999 1.29 0.769 1.178 1.095 0.993
1999-2000 1.226 0.573 0.967 1.268 0.702
2000-2001 1.725 0.742 1.292 1.335 1.280
2001-2002 1.133 1.284 1.055 1.074 1.455
Mean 1.344 0.842 1.123 1.193 1.108
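A useful consistency check: in the Fare et al. (1994) framework these indices satisfy effch = pech × sech and tfpch = effch × techch, and the yearly means in Table 5 obey both identities to rounding error:

```python
# Values copied from Table 5: (effch, techch, pech, sech, tfpch) per period.
rows = {
    "1998-1999": (1.290, 0.769, 1.178, 1.095, 0.993),
    "1999-2000": (1.226, 0.573, 0.967, 1.268, 0.702),
    "2000-2001": (1.725, 0.742, 1.292, 1.335, 1.280),
    "2001-2002": (1.133, 1.284, 1.055, 1.074, 1.455),
}
for period, (effch, techch, pech, sech, tfpch) in rows.items():
    assert abs(effch - pech * sech) < 0.01, period     # catch-up = pure x scale
    assert abs(tfpch - effch * techch) < 0.01, period  # TFP = catch-up x frontier shift
print("decomposition holds for all four periods")
```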
This research adopts ANOVA and non-parametric methods to test the hypotheses. To
achieve these objectives, the following null hypotheses are tested:
Ho1: There is no significant difference in the rank of CCR efficiency between domestic
and foreign life insurers for each year.
Ho2: There is no significant difference in the rank of BCC efficiency between domestic
and foreign life insurers for each year.
Ho3: There is no significant difference in the rank of CCR efficiency among the five
years from 1998 to 2002.
Ho4: There is no significant difference in the rank of BCC efficiency among the five
years from 1998 to 2002.
Ho5: There is no significant difference in MPI among original domestic companies, new
entrant domestic companies, and foreign companies.
As shown in TABLE 6, the outcomes for hypotheses one and two indicate that there is
no significant difference in the rank of overall efficiency or pure technical efficiency between
domestic and foreign life insurers. Furthermore, TABLE 7 shows that there are significant
differences in the rank of overall efficiency during the periods 1998-1999, 2000-2001, and
2001-2002, but no significant differences across the five-year period from 1998 to 2002.
Finally, the results for hypothesis five show no significant difference in MPI among the three
groups; the p-values for effch, techch, pech, sech, and tfpch were 0.780, 0.191, 0.200, 0.079,
and 0.776, respectively.
VI. CONCLUSION
The financial solvency of life insurers in Taiwan has been worsening since 1997. For
example, Hong Fu Life Ins. Co. experienced a financial crisis in 1997 due to a deteriorating
investment environment, resulting in interest spread loss and poor capital investment
performance after the Southeast Asian financial crisis. A more competitive climate has
formed because of four financial impacts: the declining interest rate, liberalization and
internationalization, natural or man-made catastrophes, and the “fuzzy boundary” of the
industry. Given those impacts, life insurers must maintain their profitability and financial
solvency.
The total income of life insurers is generally classified into two parts: financial receipts
and premiums received. Financial receipts from investment are the main profit source of life
insurers. Premiums received account for about eighty percent of total income, but they only
cover commissions and business expenses (Yen, Sheu, & Cheng, 2001). Thus, investment
performance and efficiency are very important, since they are determinants of business
performance, and poor investment strategies can end in business failure. In Taiwan, the
capital structure of life insurers’ investments from 1998 to 2002 shows that securities held the
largest proportion, 33.42%; mortgages, deposits, and loans accounted for 14.85%, 13.1%, and
18.97%, respectively, and the remaining categories were each less than 10%. Clearly,
securities risk assessment and management are extremely important, because higher returns
entail higher risk. Thus, it is appropriate to use DEA to evaluate overall efficiency, pure
technical efficiency, scale efficiency, and returns to scale, and the MPI to express the
productivity changes.
REFERENCES
Charnes, A., W. W. Cooper, and E. Rhodes. “Measuring the efficiency of decision making
units.” European Journal of Operational Research, 2, 1978, 429-444.
Fare, R., S. Grosskopf, B. Lindgren, and P. Ross. “Productivity developments in Swedish
hospitals: A Malmquist output index approach.” In Charnes, A., Cooper, W. W., Lewin,
A. Y., and Seiford (eds.), DEA: Theory, Methodology and Applications, 1989.
Fare, R., S. Grosskopf, M. Norris, and Z. Zhang. “Productivity growth, technical progress,
and efficiency change in industrialized countries.” American Economic Review, 84(1),
1994, 66-83.
Yen, L. J., H. J. Sheu, and C. L. Cheng. “Measuring the investment performance of the postal
simple life insurance department.” Insurance Monograph, 66, 2001, 48-69.
CONNECTING ABI ACCEPTANCE MEASURES TO TASK COMPLEXITY, EASE
OF USE, USER INVOLVEMENT AND TRAINING
ABSTRACT
This study contends that activity-based costing (ABC) success ultimately depends on
user acceptance of activity based information (ABI) in the early stages of the ABC system
implementation. The results of this study reveal that the level of effort required to use ABI
will have a significant influence on user acceptance of ABI. The findings also suggest that the
expected benefits of using ABI, use of ABI, and satisfaction with ABI are related to the
complexity of the users’ task activities and level of involvement in the ABC system design.
I. INTRODUCTION
In recent years both practitioners and researchers have increasingly questioned whether
activity-based cost management (ABCM) is effective as a cost management strategy. This
skepticism is attributed to growing evidence that many organizations are achieving
performance gains from their ABC systems while others are not (Shields, 1995; Gosselin,
1997). The aim of this study is to further understand the individual-level factors that may
influence the effective use of ABI. In this paper, user acceptance of ABI is considered the
foremost antecedent of the success of an ABC system. This study explores task complexity,
ease of use, user involvement, and training as determinants of user acceptance of ABI.
Univariate results of one-way analysis of variance (ANOVA) and regression techniques are
used to analyze data collected from employees of a large telecommunication firm.
Many ABC systems are implemented and to some extent used, but may not be
considered successful because no action is taken on the information provided (e.g.,
elimination of non-value-added activities) and decision-making performance does not
improve. The results of several studies provide evidence that an increasing number of ABC
adopters are experiencing problems getting their employees to take actions based on activity-
based information (ABI) (Shields, 1995; Anderson and Young, 1999). It is probable that the
uniqueness of the information provided by the ABC system affects the extent to which users
accept ABI and process the information, or instead believe that it is unreliable or irrelevant
for decision-making. We suggest that employees may reject ABI if they face complex job
tasks, believe that the information is difficult to use, experience task overload, and/or did not
receive effective training in the selection and effective use of ABI. The research framework is
depicted in Figure I.
FIGURE I THEORETICAL FRAMEWORK OF ABI ACCEPTANCE
Contextual Factors: Task Complexity, Perceived Ease of Use
Implementation Factors: User Involvement, ABI Training
ABI Acceptance: Perceived Usefulness, ABI Use, User Satisfaction
Task Complexity. Task complexity refers to the analyzability of the tasks and the
extent to which the task can be performed by following well-defined procedures or steps (Kim
et al., 1998). We suggest that the use of ABI when task complexity is low may create
unfavorable user perceptions. When individuals are faced with more analyzable and routine
task activities, the use of detailed “standardized” ABC reports may be viewed as irrelevant or
redundant and interfere with their simple information needs. Likewise, the use of voluminous
broad scope yet detailed “standardized” and irrelevant ABI in highly uncertain complex
environments may result in task overload, unfavorable perceptions, and possibly sub-optimal
use of ABI and less satisfaction.
H1a: Perceived usefulness of ABI is negatively associated with task complexity.
H1b: ABI usage is negatively associated with task complexity.
H1c: Users’ satisfaction with ABI is negatively associated with task complexity.
Ease of Use. Ease of use refers to the degree to which an individual believes that
using a system and its output is effortless (Davis 1993). Self-efficacy theory suggests that
perceived ease of use is one of the basic determinants of information system use behaviors
and perceived usefulness (Davis 1993; Igbaria et al. 1997). It is anticipated that individuals
are more likely to use and have favorable opinions of ABI to the extent that the information is
fairly easy to use.
H2a: The perceived usefulness of ABI is positively associated with the ease of use.
H2b: ABI usage is positively associated with the ease of use.
H2c: Users’ satisfaction with ABI is positively associated with the ease of use.
User Involvement. Prior literature suggests that user involvement has a positive effect
on system success (i.e., ABC) by providing increased knowledge about the user groups and
may reduce unrealistic expectations and resistance to the change (Guimaraes et al., 1992,
McGowan 1997). As such, when employees participate in the development of the ABC
system, they are more likely to accept ABI.
H3a: The perceived usefulness of ABI is positively associated with user
involvement.
H3b: ABI usage is positively associated with their involvement in the system
design.
H3c: Users’ satisfaction with ABI is positively associated with their involvement.
ABI Training. The results of numerous management information system (MIS) and
ABC implementation studies suggest that there is a significant and positive relationship
between the adequacy of user training received in using ABI and system success (Igbaria et
al., 1997).
H4a: The perceived usefulness of ABI is positively associated with the adequacy of
training received.
H4b: ABI usage is positively associated with the adequacy of training received.
H4c: Users’ satisfaction is positively associated with the adequacy of training
received.
In addition, the following “fit” hypotheses reflect the expectation that the level of task
complexity, as well as the adequacy of the ABI training received, will affect an individual’s
information needs, perceptions of information sources, intentions, and behaviors.
H5: Perceptions and acceptance of ABI will be more favorable for individuals
facing highly complex task activities than for individuals facing less
complex tasks.
H6: Users’ perceptions and acceptance of ABI will differ significantly across the
adequacy of the ABI training received.
III. METHODOLOGY
IV. RESULTS
The descriptive statistics indicated that ABI users, on average, have somewhat neutral
opinions about the level of effort required to use ABI (mean = 2.95), had little involvement in
the design of the ABC system (mean = 1.66), rate the training received for using ABI as less
than adequate (mean = 2.30) and their tasks are characterized by a moderate degree of
complexity (mean = 2.63). The means for perceived usefulness (2.81), ABI use (2.19) and
user satisfaction (2.69), indicate that, overall, the sampled ABI users have somewhat neutral
opinions about the usefulness of ABI, do not extensively use ABI, and are generally less often
satisfied with ABI.
Hypotheses (H1-H4) Testing. To test the specified hypotheses (H1-H4), a series of
multiple regression analyses were run using the three measures of ABI acceptance (perceived
usefulness, ABI use and user satisfaction). The results of the regression analyses presented in
Table I provide support for several determinants of ABI acceptance. The complexity of users’
job-related tasks is not significantly linked (although the coefficient is negative) to the
perceived usefulness of ABI (H1a; B = -0.20, p = 0.194). However, as hypothesized in H1b
and H1c, task complexity is significantly and negatively linked with reported ABI usage (B =
-0.20, p = 0.089) and user satisfaction with ABI (B = -0.28, p = 0.049). The results support the
hypotheses that the ease of using and comprehending ABI is positively and significantly
related to favorable perceptions regarding the usefulness of ABI (H2a; B = 0.60, p = 0.001);
use of ABI (H2b; B = 0.24, p = 0.054); and user satisfaction with ABI (H2c; B = 0.26, p =
0.095). Although the results do not support the hypothesis that user involvement in the ABC
system design will promote favorable perceptions regarding the usefulness of ABI (H3a; B =
0.14, p = 0.252), the results do provide support for the hypotheses that user involvement is
positively and significantly related to use of ABI (H3b; B = 0.28, p =0.003) and user
satisfaction with ABI (H3c; B = 0.20, p = 0.089). Finally, the results indicate that the
adequacy of ABI training received is not significantly linked to the three ABI acceptance
measures, therefore H4a, H4b and H4c are not supported.
TABLE I
REGRESSION ANALYSIS
X5 (ABI Acceptance) = αo + B1X1 (Task Complexity) + B2X2 (Perceived Ease of Use) + B3X3 (User Involvement) +
B4X4(ABI Training) + е
Coefficient
Variable (predicted sign) Estimate t-value p Tolerance VIF
Panel A: Perceived Usefulness
Intercept αo 1.81 2.67 0.009 - -
Task Complexity (H1a) B1 (-) -0.20 -1.31 0.194 0.79 1.26
Perceived Ease of Use (H2a) B2 (+) 0.61 3.55 0.001 0.68 1.46
User Involvement (H3a) B3(+) 0.14 1.16 0.252 0.86 1.16
User Training (H4a) B4 (+) -0.22 -1.63 0.108 0.73 1.38
R2 = 0.47, Adjusted R2 = 0.221, F[4, 65] = 4.614, p = 0.002
Panel B: ABI Use
Intercept αo 1.44 2.96 0.004 - -
Task Complexity (H1b) B1 (-) -0.19 -1.73 0.089 0.79 1.26
Perceived Ease of Use (H2b) B2 (+) 0.24 1.96 0.054 0.68 1.46
User Involvement (H3b) B3 (+) 0.28 3.10 0.003 0.86 1.16
User Training (H4b) B4 (+) 0.02 0.32 0.753 0.73 1.38
R2 = 0.48, Adjusted R2 = 0.235, F[4, 65] = 4.982, p = 0.001
Panel C: User Satisfaction
Intercept αo 2.00 3.28 0.002 - -
Task Complexity (H1c) B1 (-) -0.28 -2.01 0.049 0.79 1.26
Perceived Ease of Use (H2c) Β2 (+) 0.26 1.70 0.095 0.68 1.46
User Involvement (H3c) B3 (+) 0.20 1.73 0.089 0.86 1.16
User Training (H4c) B4 (+) 0.14 1.67 0.247 0.73 1.38
R2 = 0.47, Adjusted R2 = 0.224, F[4, 65] = 4.685, p = 0.002
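The Tolerance and VIF columns are standard collinearity diagnostics: each predictor is regressed on the remaining predictors, and VIF = 1/(1 − R²) with Tolerance = 1/VIF. A sketch on synthetic predictors (not the study's data):

```python
import numpy as np

# Three synthetic predictors; column 2 is made partly collinear with column 0.
rng = np.random.default_rng(1)
n = 70
X = rng.normal(size=(n, 3))
X[:, 2] = 0.6 * X[:, 0] + rng.normal(scale=0.8, size=n)

def vif(X, j):
    """Regress predictor j on the others; return 1 / (1 - R^2)."""
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(X)), others])     # add intercept
    beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
    resid = X[:, j] - A @ beta
    r2 = 1.0 - resid.var() / X[:, j].var()
    return 1.0 / (1.0 - r2)

vifs = [vif(X, j) for j in range(3)]
tolerances = [1.0 / v for v in vifs]
print([round(v, 2) for v in vifs])
```

Values near 1.0, as in the table above, indicate that the predictors are largely independent of one another.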
Fit Hypotheses (H5 and H6). ABI users’ tasks, on average, are characterized by a
moderate degree of complexity. As predicted, the one-way analysis of variance results
presented in Table II reveal significant differences in users’ perceptions of ABI, involvement
in the ABC system design, and satisfaction with ABI, contrasted by whether they face highly
complex or low complex tasks and decision-making activities. ABI users facing less complex
tasks and decision-making activities (mean = 3.36) perceive ABI as being easier to use than
users facing more complex task and decision-making activities (mean = 2.60) (t = 4.08, p =
0.00). User involvement in the ABC implementation, in general, was significantly greater for
users facing highly complex tasks and decisions (mean = 1.92) than users facing less complex
tasks and decisions (mean = 1.38) (t = 2.122, p = 0.03). With regards to acceptance of ABI,
users facing less complex tasks perceived ABI as being more useful in performing task-related
activities (mean = 3.08) than users facing more complex tasks (mean = 2.62) (t = 1.77, p =
0.08). Also, on average, users facing less task complexity (mean = 3.09) are significantly
more satisfied with ABI (t = 2.43, p = 0.02) than users facing high task complexity (mean =
2.54). These results thereby provide partial support for H5.
TABLE II
ANALYSIS OF DIFFERENCE BETWEEN MEANS: COMPLEXITY AND TRAINING
Panel A: Analysis of Differences between Means: Task Complexity
High Task Low Task t
Variable Complexity Complexity Statistic Sig.
Perceived Ease of Use 2.60 3.36 4.08 0.00
User Involvement 1.92 1.38 2.22 0.03
User Training 2.09 2.45 1.37 0.18
Perceived Usefulness 2.62 3.08 1.77 0.08
ABI Use 1.17 2.38 1.13 0.27
User Satisfaction 2.54 3.09 2.43 0.02
Panel B: Analysis of Differences between Means: Adequacy of ABI Training
Not very Moderately Very
Adequate Adequate Adequate F Sig.
Perceived Ease of Use 2.41 2.81 3.55 14.68 0.00
Perceived Usefulness 3.05 2.49 3.15 3.34 0.04
ABI Use 2.17 2.06 2.57 3.50 0.04
User Satisfaction 2.33 2.57 3.27 6.69 0.00
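For reference, the F statistics in Panel B come from a one-way ANOVA across the three training-adequacy groups. A minimal sketch of the computation, using synthetic scores for three hypothetical groups (not the study's data):

```python
import numpy as np

# Synthetic satisfaction-style scores for three hypothetical training groups.
groups = [np.array([2.2, 2.5, 2.4, 2.6]),   # not very adequate
          np.array([2.5, 2.7, 2.6, 2.8]),   # moderately adequate
          np.array([3.1, 3.3, 3.2, 3.5])]   # very adequate

grand = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_between = len(groups) - 1
df_within = sum(len(g) for g in groups) - len(groups)
F = (ss_between / df_between) / (ss_within / df_within)   # between-group vs within-group variance
print(round(F, 2))
```

A large F relative to its critical value (small Sig.) indicates that at least one group mean differs from the others.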
ABI users, overall, rate the training received and their preparedness for using ABI as
less than adequate; however, the comparison of means (Table II) across the adequacy-of-ABI-
training categories indicates significant differences. All mean scores on the ease of use and
ABI acceptance measures are significantly greater for users who rate the ABI training
received as very adequate. As expected, it appears that users who receive more adequate
training in selecting and using relevant ABI are better prepared to use the information and
have more favorable perceptions regarding the ease of using and comprehending it.
There are significant differences in user perceptions regarding the usefulness of ABI in
performing job-related task activities across ABI training categories (F = 3.34, p = 0.04): not
very adequate (mean = 3.05), moderately adequate (mean = 2.49), and very adequate (mean =
3.15). With regard to reported ABI use (F = 3.50, p = 0.04), it appears that users who receive
more adequate training in selecting and using relevant ABI actually use the information to a
greater extent than users whose ABI training was less adequate: not very adequate (mean =
2.17), moderately adequate (mean = 2.06), and very adequate (mean = 2.57). Users receiving
very adequate ABI training (mean = 3.27) are also significantly more satisfied with the
information (F = 6.69, p = 0.00) than users receiving moderately adequate (mean = 2.57) or
not very adequate (mean = 2.33) ABI training. The results do not indicate any statistically
significant differences in the adequacy of user training and ABI use based on task
complexity, thus supporting H6.
V. CONCLUSION
The study’s findings suggest that the extent to which individuals use and rely on ABI,
as well as their perceptions and level of overall satisfaction with the information, is largely
influenced by individual-level contextual factors such as task complexity, ease of use, and
user involvement. The relative associations of the contextual variables with the three ABI
acceptance measures (perceived usefulness, ABI use, and user satisfaction) indicate that
employees may use different standards to evaluate ABI and suggest that alternative measures
of ABC success are distinctly related to certain determinants (Foster and Swenson, 1997).
Ease of use is significantly associated with all three measures of ABI acceptance, whereas
task complexity and user involvement are significantly associated only with ABI use and user
satisfaction. User training is not significantly related to the three measures of ABI acceptance.
The results revealed that, compared to users facing less complex tasks and decisions,
users who perform more complex tasks and decisions are more likely to believe that ABI
does not have any performance benefits, not to use ABI, to be the least satisfied, and to be the
most demanding with regard to the level of effort necessary to comprehend and use ABI.
With this knowledge, consultants and managers may be better able to identify the specific
situations most conducive to the application of ABI and the extent to which certain types of
ABI will be used most effectively. If properly addressed, these concerns may facilitate early
ABI acceptance.
REFERENCES
Anderson, S. W. and Young, S.M. “The Impact of Contextual and Process Factors on the
Evaluation of Activity-Based Costing Systems.” Accounting, Organizations and Society,
24, 1999, 525-559.
Davis, F.D. “User Acceptance of Information Technology: System Characteristics, User
Perceptions and Behavioral Impacts.” International Journal of Man-Machine Studies, 38,
1993, 475-487.
Foster, G., and Swenson, D.W. “Measuring the Success of Activity-Based Cost Management
and Its Determinants.” Journal of Management Accounting Research, 9, 1997, 107-139.
Gosselin, M. “The Effect of Strategy and Organizational Structure on the Adoption and
Implementation of Activity-Based Costing.” Accounting, Organizations and Society, 22,
1997, 105-122.
Guimaraes, T., Igbaria, M. and Lu, M. “The Determinants of DSS Success: An Integrated
Model.” Decision Sciences, 23, 1992, 409-430.
Hair, J. F., Anderson, R.E., Tatham, R.L., and Black, W.C. Multivariate Data Analysis with
Readings. Englewood Cliffs, NJ: Prentice-Hall, 1998.
Igbaria, M., Zinatelli, N., Cragg, P., and Cavaye, A. L. “Personal Computing Acceptance
Factors in Small Firms: A Structural Equation Model.” MIS Quarterly, 21, 1997, 279-305.
Kim, C., Suh, K., and Lee, J. “Utilization and User Satisfaction in End-User Computing: A
Task Contingent Model.” Information Resources Management Journal, 11, 1998, 11-24.
Klammer, T.P., et al. “Satisfaction with Activity-Based Cost Management Implementation.”
Journal of Management Accounting Research, 9, 1997, 217-237.
McGowan, A.S. “Perceived Benefits of ABC Implementation.” Accounting Horizons, 12,
1997, 31-50.
Shields, M. D. “An Empirical Analysis of Firms’ Implementation Experiences with Activity-
Based Costing.” Journal of Management Accounting Research, 7, 1995, 148-166.
THE IRS CRACKS DOWN ON DEDUCTIONS FOR MBA EDUCATION COSTS
ABSTRACT
Accounting professionals often choose to complete the MBA degree after they enter
the workforce. The number of MBA degrees awarded has grown from 35,000 in 1974 to
more than 120,000 in 2002. Congress allows various tax incentives to encourage education,
but many of these provisions limit the deductible amount as well as the type of expenses that
are deductible. MBA students typically utilize IRC (Internal Revenue Code of 1986) §
(Section) 162 and deduct their qualified educational expenses as an itemized deduction
subject to a two percent of AGI limitation. The purpose of this paper is to discuss the
provisions of IRC §162 and to summarize recent challenges by the IRS (Internal Revenue
Service) and the courts in deducting the costs of obtaining an MBA under IRC §162.
I. INTRODUCTION
Accounting professionals who choose careers outside public accounting often find that
completing an MBA degree offers an edge over pursuing the CPA certification or adds value
to their CPA designation because of the broad knowledge base gained through an MBA
curriculum. As the business environment has become more complex and challenging, the
number of MBAs awarded has risen. In 1974, 35,000 MBA degrees were awarded, compared
to more than 90,000 in 1995 (Johnson 1997). The number has continued to rise, with over
120,000 MBA degrees conferred in 2002 (Kim 2004).
The MBA has traditionally provided graduates with an edge in the job market in the
areas of salary and position and has enabled older executives to remain competitive with
younger colleagues who hold graduate degrees upon entering the work force. In the eight
years prior to 2004, average salaries for MBA graduates increased globally by 27 percent and,
in 2004, companies expected MBA salaries to rise by more than nine percent from 2003 to
over $82,000 (Quacquarelli and Saldanha 2004). Many corporations are willing to foot the
bill for MBA degrees for their middle- and upper-level managers. However, with executive
development budgets tightening, a growing number of companies will pick up only a portion
of the cost of post-graduate studies and some require employees to sign an agreement to stay
with the company for several years or repay the education costs (Johnson 1997).
The purpose of this paper is twofold: to discuss the provisions of IRC §162 and to
summarize recent challenges by the IRS and the courts in deducting the costs of obtaining an
MBA under IRC §162. Section II provides a summary of the provisions found in IRC §162.
Section III presents several IRS decisions and court rulings challenging the deductibility of
MBA educational costs under IRC §162. Section IV recaps tax incentives available for
taxpayers to consider as alternatives to the IRC §162 deduction. Section V presents brief
concluding remarks.
II. PROVISIONS OF IRC §162

One of the oldest and most widely used provisions, IRC §162, allows an employee to
deduct expenses incurred for education as an ordinary and necessary business expense. In the
past, this provision has been a popular way for accountants and other executives to defray at
least a portion of the costs of their MBA education. However, the deduction is classified as an
itemized deduction subject to a two percent of adjusted gross income threshold. Although the
IRC §162 deduction is phased out for higher-income bracket taxpayers, taxpayers often
choose it over other provisions because the phase-out begins at a higher income level, thereby
providing a larger deduction. IRC §162 permits an employee to deduct expenses incurred for
education as an ordinary and necessary business expense if the expenses are incurred for
either (1) maintaining or improving existing skills in their present job or (2) meeting the
express requirements of the employer or those imposed by law necessary to retain their
employment status. Many taxpayers who take a deduction for expenses incurred in the pursuit
of an MBA degree do so with the intent of meeting the first criterion: improving existing
skills in their present job.
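To make the mechanics of the two-percent floor concrete, here is a minimal Python sketch. The function name and all dollar figures are illustrative assumptions, not figures from the paper, and this is not tax advice.

```python
# Hedged sketch of the 2%-of-AGI floor on miscellaneous itemized deductions.
# Function name and dollar amounts are hypothetical; not tax advice.

def sec162_deductible(agi: float, education_costs: float,
                      other_misc_deductions: float = 0.0) -> float:
    """Portion of miscellaneous itemized deductions (including qualified
    education costs) that survives the two-percent-of-AGI floor."""
    total_misc = education_costs + other_misc_deductions
    floor = 0.02 * agi
    return max(0.0, total_misc - floor)

# Example: $80,000 AGI and $20,000 of qualifying MBA costs.
# The floor is $1,600, leaving $18,400 deductible.
print(sec162_deductible(80_000, 20_000))  # 18400.0
```

Note that when total miscellaneous deductions fall below the floor, as with small expenses at a high AGI, the deductible amount is zero.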
In spite of the above disadvantages, §162 has been a very popular method for
taxpayers to write off their education expenditures. First, although the deduction is subject to
the two percent floor, it is not limited to a particular dollar amount. Second, the deduction is
allowable for more than tuition, books and related expenses. Education expenses deducted
under §162 are considered to be ordinary, necessary business expenses and include not only
books and tuition, but also transportation to and from school and travel expenses such as
meals and lodging while away from home. In addition, a taxpayer can deduct under §162 any
excess qualifying costs not eligible for a deduction or credit under another provision of the
Code.
The rationale for deducting the cost of an MBA degree by most accounting and
business executives rests on two assumptions. First, an MBA would presumably improve the
skills of a business person or accountant. Second, since many older executives do not hold an
MBA degree, it is not viewed as a minimum requirement for a job. As discussed in the
following section, education costs deducted under §162 have recently been challenged by the
IRS, and those challenges have been upheld by the U.S. Tax Court. A pattern has been developing that leads many to
believe that the IRS is sending the message that taxpayers already working in a business field
will no longer be able to deduct the cost of obtaining an MBA. Robert Willens, a tax and
accounting expert at Lehman Brothers Holdings, Inc. speculates that those claiming the MBA
deduction may be more susceptible to IRS audits and suggests that it is going to be virtually
impossible to take a deduction for MBA-related education expenses (Kim 2004).
III. IRS DECISIONS AND COURT RULINGS

In a 1986 letter ruling, the IRS concluded that educational expenses incurred while
pursuing an MBA degree were not business expenses for purposes of the deduction provided
by §162 of the Code (Letter Ruling 8714064). In this instance, the taxpayer entered a two-
year MBA program as a full-time student after electing to resign his position as a machine
sales representative when he was unable to obtain a leave of absence. The IRS considered
several factors:
• the length of time the taxpayer was out of the work force while he incurred the
educational expenses;
• whether the period of study was a temporary suspension of employment; and
• whether the studies were pursued at the request or advice of the employer in order to
maintain or improve the skills required by the taxpayer’s position.
The IRS concluded that the taxpayer was not required or advised by the employer to obtain an
MBA degree in order to maintain or improve the skills required for the taxpayer’s position
and there was no indication that the taxpayer expected to resume employment with that
company at any definite time in the future. The IRS viewed the taxpayer’s period of studies
as an interruption of his employment rather than a temporary suspension of such employment
or trade or business. Therefore the costs were not “business expenses” as defined in §162.
In 2002, the IRS denied a deduction for education expenses in the pursuit of an MBA
degree by an employee in the telecommunications industry (TC Summary Opinion 2002-49).
The taxpayer quit his job to complete an MBA degree. The taxpayer argued that he needed to
improve his accounting, financial, and general business administration skills as his job
required him to negotiate complex contracts, and he argued that he did not pursue other
positions. The Tax Court disallowed the deduction, reasoning that if a course of study
qualifies the taxpayer for a new trade or business, the expenditures are not deductible, even if
the studies are required by the employer, the taxpayer does not intend to enter a new field of
endeavor, or the taxpayer’s duties are not significantly different after the education from what
they had been before.
In 2003 the Tax Court denied the deduction for an attorney’s education expenses in
pursuit of the MBA degree (TC Memo 2003-68). The taxpayer obtained an LLM in corporate
finance, a JD and an MBA, enrolling in each degree program immediately after completion of
the preceding one. He was advised to obtain a JD to increase his marketability in a
competitive environment. Subsequently he chose to extend his studies by one year in order to
obtain joint JD/MBA degrees. Although the taxpayer had worked as a summer associate for
law firms while a member of the state bar, the IRS contended that he did not establish that his
assignments and compensation were different from other full-time associates (not practicing
attorneys) and thus he had not established himself in his trade or business of practicing law.
The Court reasoned that it was necessary to break the education cycle and engage in a trade or
business before deducting education expenses.
In 2004, the Tax Court again denied a deduction for education expenses in the pursuit
of an MBA degree by a financial analyst working in the investment banking industry (TC
Summary Opinion 2004-107). An employee with a BA degree working as a financial analyst
quit her job in the investment banking industry and pursued an MBA full-time. She
concluded that it was impractical to continue working because of the long hours required of
financial analysts. To be promoted from “financial analyst” to “associate” the investment
banking industry required an MBA degree. Upon completion of her MBA degree, she did not
return to the investment banking industry, but instead took a position in the manufacturing
industry in a “General Management Program.” Acceptance into the program required an
MBA or equivalent. The taxpayer argued that the expenditures were incurred to maintain and
improve her skills and that the expenditures were required as a condition to the retention of an
existing employment relationship, status, or rate of compensation. She focused on the
similarities between her duties as a financial analyst and those of the associates at the
investment banking firms where she worked.
The IRS contended that the fact that an individual is already performing service in an
employment status does not establish that she has met the minimum educational requirements
for qualification in that employment. Although the duties of analyst and associate overlapped,
the analyst position was a subordinate temporary position lasting for a maximum of three
years. In its decision, the Court cited Income Tax Regs §1.162-5(b)(3) stating that the
expenses were not deductible even though the taxpayer did not intend to enter a new field of
endeavor, and even though the taxpayer’s duties were not significantly different after the
education from what they had been before the education. In each of these cases, the courts
were clear that the costs of pursuing an MBA degree were not deductible education expenses.
Under Income Tax Regs. §1.162-5(b)(3), education expenses are not deductible if the course
of study qualifies the taxpayer for a new trade or business, even if the studies are required by
the employer, the taxpayer does not intend to enter a new field of endeavor, or the taxpayer’s
duties are not significantly different after the education from what they had been before.
IV. ALTERNATIVE TAX INCENTIVES

In addition to IRC §162, there are several other provisions for exclusions, deductions,
or credits that might be available while pursuing an MBA degree. Listed below is a brief
recap of these provisions:
• IRC §127 allows an exclusion for qualified employer-paid educational costs. However,
the company must have a separate written educational assistance plan, the assistance must be for the
exclusive benefit of its employees, and the exclusion is subject to a $5,250 annual
ceiling.
• For qualified employee-paid educational costs, an MBA-pursuing taxpayer may be
eligible for the lifetime learning credit under IRC §25A. This credit is phased out for
higher income taxpayers.
• An above-the-line deduction of up to $4,000 for qualified educational expenses is
available. The deduction is available for obtaining a basic skill, an improvement over
the §162 deduction discussed in the previous section. It is disallowed for taxpayers
who choose the IRC §25A credit for the tax year. The deduction is phased out for
higher-income taxpayers, but may be more attractive for many accounting executives
since the phase-out begins at a higher income level than for the IRC §25A credit.
However, the phase-out begins at a lower income level than for the §162 deduction.
• Congress has created an income exclusion associated with state college tuition
prepayment programs under IRC §529 to exclude the earnings of contributed funds if
they are used for qualified higher education costs.
• The interest on Series EE U.S. government savings bonds may be excluded from
income under IRC §135 if the proceeds are used to pay qualified higher education
expenses.
• IRC §530 allows a nondeductible contribution of $2,000 to an education IRA.
Earnings are excluded from taxable income and distributions to pay qualified higher
education costs are generally made tax-free.
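The phase-out mechanics that distinguish these provisions can be sketched as a simple linear reduction. The thresholds and maximum amounts below are placeholders chosen for illustration, not current-law figures; only the comparison logic, that a benefit whose phase-out begins at a higher income level retains value longer, comes from the discussion above.

```python
# Hedged sketch of linear phase-out mechanics. Thresholds and maximum
# amounts below are placeholders, NOT current-law figures.

def phased_benefit(max_benefit: float, magi: float,
                   phaseout_start: float, phaseout_end: float) -> float:
    """Reduce max_benefit linearly to zero as modified AGI crosses the
    phase-out range [phaseout_start, phaseout_end]."""
    if magi <= phaseout_start:
        return max_benefit
    if magi >= phaseout_end:
        return 0.0
    fraction_lost = (magi - phaseout_start) / (phaseout_end - phaseout_start)
    return max_benefit * (1.0 - fraction_lost)

# Hypothetical taxpayer with MAGI of $70,000: a deduction whose phase-out
# starts at a higher income level still has value, while a credit whose
# phase-out range has already been passed is worth nothing.
deduction = phased_benefit(4_000, 70_000, 65_000, 80_000)
credit = phased_benefit(2_000, 70_000, 45_000, 55_000)
print(round(deduction, 2), credit)
```

This is why, as noted above, higher-earning accounting executives may prefer the provision whose phase-out begins later even when its nominal maximum is similar.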
Several restrictions apply to most of the above incentives. First, phase-outs of the credits
and deductions may reduce or eliminate the benefit for middle-income and high-income
taxpayers. Many accounting executives who pursue an MBA may not be able to take
advantage of the benefits because their salaries are above the phase-out levels. Second,
taxpayers cannot “double-dip.” That is, a deduction for an expenditure can be taken under
only one provision. Third, deductions created by the provisions of the Economic Growth and
Tax Relief Reconciliation Act of 2001 expire in 2010. Fourth, specific criteria must be met
for some provisions. For example, educational savings bonds must be issued after December
31, 1989 to individuals who are at least 24 years old at the time of the issuance. Finally, some
of the provisions are not available to taxpayers who are married, filing separately. For
example, the $4,000 above-the-line deduction and the lifetime learning credit are not available
to those who choose to file married, filing separately.
V. CONCLUSION
Although the IRS and Courts have denied the deduction for several business
professionals and an attorney, will accounting professionals who are already in the work force
be able to argue that an MBA improves their skills, but does not qualify them for a new trade
or business? For those who become CPAs upon completion of an MBA, will the IRS
challenge that the MBA has simply allowed them to meet the minimum educational standards
for an existing job? Will the newest rulings denying the deductibility of MBA expenses
change the landscape of our graduate business schools? Or will accounting executives see the
MBA degree as a benefit to their professional career that surpasses the additional costs
resulting from their possible inability to deduct their expenses? Only time will tell.
REFERENCES
CHAPTER 3
THE EFFECTS OF AMBIENT SCENT ON PERCEIVED TIME:
IMPLICATIONS FOR RETAIL AND GAMING
ABSTRACT
Olfaction has long been regarded as a very powerful and enduring sense, yet attempts
to harness the power of scent have proved rather elusive. While manipulating ambient scent
may influence consumptive behavior, consistently predicting the presence and magnitude of
effects has proved rather difficult. Casino operators accept the risk, and regularly use scent to
alter perceived time to induce gamblers to play longer. The scientific evidence to support this
use of scent is scant and mixed. The current study aims to reduce the ambiguity by
experimentally testing how ambient scent influences the enjoyment of, and willingness to
remain in, an environment. The eight treatment cells cross pleasant and unpleasant scents of
low and high arousal with situations of low and high involvement. Results indicate significant
effects for scent pleasantness and subjects’ level of involvement. There was, however, no
significant difference for arousal.
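As we read the abstract, the eight treatment cells appear to cross four scent conditions (three experimental scents plus the no-scent control) with two levels of purchase decision involvement; the pairing of scents to affective properties below is taken from the Scent Selection section later in the paper. A short enumeration sketch:

```python
from itertools import product

# Sketch of the eight treatment cells as we read the design: four scent
# conditions crossed with two purchase decision involvement (PDI) levels.
scent_conditions = [
    "clementine (pleasant/arousing)",
    "vanilla (pleasant/relaxing)",
    "galbanum (unpleasant)",
    "no experimental scent (control)",
]
pdi_levels = ["low PDI", "high PDI"]

cells = list(product(scent_conditions, pdi_levels))
print(len(cells))  # 8
for scent, pdi in cells:
    print(f"{scent} x {pdi}")
```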
I. INTRODUCTION
Further research on involvement in attitude change (Petty, Cacioppo, and Schuman
1983) and situational effects (Belk 1974, 1975, 1984), suggests situational influences such as
task involvement may moderate the relationship between environmental stimuli and
consumptive responses. Purchase decision involvement (PDI) is therefore a potential
moderator of affective induction. The literature also suggests a scent’s activation (arousal)
property may interact with involvement to influence the intensity of any environmentally
induced affective state.
Arousal theory states that activation (arousal) reinforces other affective feeling states
(Gilligan and Bower 1984) and so may influence evaluations of an environment (Russell and
Pratt 1980) and perception of time spent in the environment (Kellaris and Kent, 1992).
Olfactory researchers Ludvigson and Rottman (1989) found scents low in activation induced a
relaxing effect. Arousal’s influence may be tested by comparing a pleasant/arousing scent to
a pleasant/relaxing one. ELM and arousal theory suggest increased positive responses to a
pleasant/arousing scent vs. the no experimental scent control in situations of low PDI. Effects
of a pleasant/relaxing scent should lie between those of a pleasant/arousing scent and the no
scent control. PDI will moderate the relationship between a scent’s pleasant/arousal qualities
and a shopper’s enjoyment of and willingness to remain in an environment as illustrated in
Figure 2.
II. HYPOTHESES
H1a: Evaluations of the environment are higher in the presence of pleasant ambient
scents than in the no-experimental scent (control) condition under low rather than high PDI.
H1c: Evaluations of the environment are lower in the presence of unpleasant ambient
scent than in the no-experimental scent (control) under low rather than high PDI.
FIGURE 2
Hypothesized effects: satisfaction with, and likeliness to remain in, the (scented)
environment (vertical axis, low to high) plotted against Purchase Decision Involvement
(PDI; horizontal axis, low to high). Predicted ordering, from highest to lowest response:
high pleasure/high arousal scent; high pleasure/relaxing scent; no experimental scent
(control); unpleasant scent.
H2a: Perceived time is less than actual time in the presence of pleasant ambient scent
when purchase decision involvement is low rather than high.
H2b: Perceived time falls further below actual time for pleasant/high-arousal scents
than for pleasant/relaxing scents when purchase decision involvement is low rather than
high.
H2c: Perceived time is greater than actual time in the presence of unpleasant ambient
scent when purchase decision involvement is low rather than high.
III. METHODOLOGY
The next section describes the methodology used to test the hypotheses, including the
research design, subjects, setting, scent stimuli, cover story, and statistical models.
High PDI Cover Story – A professor in a white lab coat told the students that a new type of
vending machine was about to be test marketed on their campus, that it would accept debit
cards in addition to cash, and that their responses would determine the product lines sold.
Low PDI Cover Story – A casually dressed undergraduate lab assistant told the
students that a school supply company was interested in marketing its products to students in
other states and would not be on their campus for another two years. No mention was made of
the new vending machine, and the students were asked not to spend too much time on their
responses.
Scent Selection
Scent recommendations were made by experts at a well-known smell institute housed
at a nearby major research university, and by senior project scientists at the world’s largest
producer of flavors and fragrances. Pre-testing quickly narrowed the selection to two pleasant
scents, clementine and vanilla, known for inducing high levels of arousal and pleasantness
respectively. Galbanum was selected as the unpleasant scent for its potent yet non-toxic
noxious odor.
IV. RESULTS

Hypotheses H1a and H1c are supported, as the facility’s environmental evaluations
were significantly higher in the pleasant/arousing scent condition and significantly lower in
the unpleasant scent condition. Although the difference between pleasant/relaxing vanilla and
the no scent control was not significant (p=.075), the combined means of vanilla and
clementine (i.e., pleasant scents) was significantly higher than the control group at the p=.01
level, so H1a is supported. H1b was, however, rejected, as subjects exposed to
pleasant/arousing clementine reported facility evaluation scores not significantly different
from those exposed to pleasant/relaxing vanilla.
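The pooling logic described above, combining the two pleasant-scent groups and comparing them against the no-scent control, can be sketched with a Welch t statistic. The ratings below are invented for illustration only; the paper does not report its raw data, and its actual analysis may have used a different test.

```python
import statistics as st

# Invented facility-evaluation ratings for illustration; not the study's data.
vanilla = [5.1, 4.8, 5.4, 5.0, 4.9]      # pleasant/relaxing condition
clementine = [5.3, 5.6, 5.2, 5.5, 5.4]   # pleasant/arousing condition
control = [4.2, 4.5, 4.1, 4.4, 4.3]      # no experimental scent

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = st.variance(a), st.variance(b)
    return (st.mean(a) - st.mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

# Pool the two pleasant-scent groups and compare against the control,
# mirroring the combined-means comparison described above.
pleasant = vanilla + clementine
t = welch_t(pleasant, control)
print(t > 2.0)  # a large positive t favors the pleasant-scent groups
```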
The effect of scent pleasantness on perceived time was not significant for pleasant/relaxing vanilla (H2a) or
the pleasant/arousing clementine scent (H2b) vs. the no scent control. The only significant
difference was for the unpleasant/arousing galbanum scent condition (H2c).
V. CONCLUSION
Theoretical Implications
This investigation advances the application of theory by expanding the research
linking environmentally induced feelings and consumptive behavior. The study provides
evidence that situational purchase decision involvement (PDI) was a significant moderator of
the relationship between the underlying dimensions of ambient scent and shoppers’ affective
feeling states, and the subsequent transfer of these feelings to evaluations of the environment
and perceived time.
Managerial Implications
Retailer choice may precede brand choice (Darden 1983), and atmospherics plays a
key role in attracting customers to a particular establishment (Kotler 1973). With many
decisions finalized at point-of-purchase (Sarel 1981; Keller 1987), lighting, music, color, and
scent may have more immediate effects on decision making than non point-of-purchase
marketing inputs such as radio, print and television advertising (Baker et al. 1994). The
results for unpleasant scent suggest that when offensive odors are present (e.g., from tobacco
smoke, nearby sewage plants) managers should consider using pleasant scent as an odor
masking technique. For casino operators, the results indicate using pleasant scent to extend
playing time is more likely with lower involvement games such as slot machines or wheels of
chance, than with higher involvement games such as blackjack, poker, craps, and roulette.
Finally, smaller casinos with lower service levels and client interaction may benefit
significantly from improved atmospherics.
REFERENCES
Baker, J., A. Parasuraman, D. Grewal, & G. B. Voss (2002), “The Influence of Multiple Store
Environment Cues on Perceived Merchandise Values & Patronage Intentions.” Journal
of Marketing, Vol.66 (April), 120-141.
Baker, J., & A. Parasuraman (1994), “The Influence of Store Environment on Quality
Inferences and Store Image,” Journal of Academy of Marketing Sciences, 22 (4) 328-
339.
Belk, R. (1975), “Situational Variables and Consumer Behavior,” Journal of Consumer
Research, 11(2), (December), 157-164.
Machleit, Karen A. and Sevgin A. Eroglu (2000), “Describing and Measuring Emotional
Response to Shopping Experience.” Journal of Business Research, Vol. 49, Issue 2,
(August), 101-111.
Mazursky, D. and Y. Ganzach (1998), “Does Involvement Moderate Time-Dependent Biases
in Consumer Multi-Attribute Judgment?” Journal of Business Research, Vol. 41, 95-
103.
Mehrabian, A. and J. A. Russell (1974), An Approach To Environmental Psychology.
Cambridge, MA: MIT Press.
Michon, R., J. Chebat, and L. Turley (2004), “Mall Atmospherics: The Interaction Effects of
Mall Environment on Shopping Behavior,” Journal of Business Research, Vol. 57, 883-
892.
Petty, R. E., J. T. Cacioppo, and D. Schuman (1983), “Central and Peripheral Routes to
Advertising Effectiveness: The Moderating Role of Involvement,” Journal of
Consumer Research, 10 (Sept.), 135-146.
Petty, R. E., and J. T. Cacioppo (1986), Elaboration Likelihood Model of Persuasion. In L.
Berkowitz (Ed.), Advances in Experimental Social Psychology, 19, NY: Academic
Press.
Spangenberg, E., A. Crowley and P. Henderson (1996), “Improving the Store Environment:
Do Olfactory Cues Affect Evaluations and Behaviors?” Journal of Marketing, 60
(April), 67-80.
THE RELATIONSHIP BETWEEN AGE, EDUCATION, GENDER,
MARITAL STATUS AND ETHICS
ABSTRACT
This research proposes that personal factors influence consumer ethics and are
important variables that help provide a theoretical base for designing more effective marketing
strategies. Specifically, this study explores the relationship between four personal variables
(i.e., age, education, gender, and marital status) and the Muncy and Vitell consumer ethics
model (i.e., illegal, active, passive, and no harm). This study obtained and analyzed data from
seven hundred sixty-one African-Americans. The findings indicate that females and married
participants were more sensitive to ethical issues than males and single consumers.
I. INTRODUCTION
Studying the relationship between personal factors and consumer ethics is vital for
marketing decision makers. Demographic segmentation is the most popular method to
segment consumer markets. One reason that demographic segmentation is so popular is that
consumer wants and needs are often associated with demographic characteristics.
Demographic variables are also easier to measure than other variables such as psychographic
variables. This research proposes and tests the proposition that demographics also influence
consumer ethics. Bartels (1967, p. 21) defined ethics as “a standard by which business action
may be judged ‘right’ or ‘wrong.’” Standards differ from one consumer to another, and so
actions regarded as “right” by one consumer may conflict with, and be judged unethical by,
another consumer. Thus, this research contends that personal factors are important variables
that influence a consumer’s judgment about the “rightness” or “wrongness” of consumer
dealings. In summary, this research proposes that a greater understanding of how
demographics influence consumer ethics will allow marketers to develop better strategies that
include consumers’ ethical characteristics. This study explores the relationship between
consumer ethics and four important demographic variables: age, education, gender, and
marital status.
II. LITERATURE REVIEW

Many studies have investigated ethics in the marketplace; however, most of these
studies have focused primarily on the seller side of the buyer/seller dyad. After reviewing
research in marketing ethics, Murphy and Laczniak (1981) concluded that the vast majority of
studies had examined ethics of businesses. Past research emphasized that what we know
about consumers’ ethical decision-making is still very limited (Vitell, Singhapakdi, &
Thomas, 2001). In short, relatively few studies have examined consumer ethics in the
marketplace, yet consumers are the most important component of the business process.
Ignoring consumer ethics in research may result in the development of wrong marketing
strategies since all aspects of consumer behavior (e.g., the acquisition, use and disposition of
goods) have an integral ethical component. In this study, the focus is on ethics of final
consumers.
III. HYPOTHESES
Hunt and Vitell (1986) in their general theory of marketing ethics proposed that
personal characteristics affect individuals’ ethical beliefs and ethical decisions. Vitell (2003)
encouraged researchers to explore the relationship between demographic variables and
consumer ethics. Thus, several demographic characteristics (age, education, gender, and
marital status) have been included in this study.
Age. Evidence suggests that consumer ethics change with age. Past research has
indicated that older individuals are more ethical than younger ones (e.g., Vitell, 2003). For
example, Fullerton et al. (1996) found that younger consumers were more accepting of
unethical behavior, Rawwas and Singhapakdi (1998) found that adults 20-79 years old were
more ethical than teenagers, Ruegger and King (1992) found that older students were more
ethical than younger ones, and Serwinek (1992) found that older workers had stricter
interpretations of ethical standards. Similarly in a study of Japanese consumers, Erffmeyer et
al. (1999) found that younger Japanese consumers were more relativistic and that they tended
to perceive questionable consumer activities as less wrong. These findings are in keeping
with Kohlberg’s (1984) suggestion that mature people have a “higher level” of moral
reasoning than younger people. Therefore, this research proposes:
H1: Older consumers will be less tolerant of questionable consumer activities than their
younger counterparts.
Education. Past research has suggested that education is associated with consumer
ethical beliefs. A great deal of research has found that more educated subjects tend to make
more ethical decisions (Goolsby & Hunt, 1992; Kelley, Ferrell, & Skinner, 1990). Similarly,
Browning and Zabriskie (1983) found that purchasing managers with more years of education
viewed gifts and favors as more unethical than less educated managers. These findings are
consistent with Kohlberg’s (1984) assertions that exposure to, and interaction with, more
sophisticated and complex moral situations increases one’s ability to render more appropriate
moral decisions. In short, education improves one’s ability to make moral judgments.
Therefore, this study proposes,
H2: Consumers with higher levels of education will be less tolerant of questionable consumer
activities than their counterparts with lower levels of education.
Gender. The variable gender has frequently been researched (Ford & Richardson,
1994) and found to be related to ethical beliefs (Vitell, 2003). Overall, the evidence strongly
suggests that women are less tolerant of ethically questionable behaviors than men (Franke et
al. 1997). In studies of students, for example, Beltramini, Peterson, and Kozmetsky (1984)
established that female students were more concerned with ethical issues than male students
were. Singhapakdi (2004) found that male students tend to be less ethical in their intentions
than female students. And Ruegger and King (1992) found that female business students tend
to be more ethical than male business students in their evaluation of different hypothetical
business situations. In studies of managers and marketers, Chonko and Hunt (1985) reported
that female managers noticed more ethical problems than males did. Ferrell and Skinner
(1988) reported that female marketing researchers exhibited higher levels of ethical behavior.
And Jones and Gautschi (1988) reported that females were less likely to be loyal to their
company in an ethically questionable environment. Women have also been reported to
demonstrate higher levels of moral reasoning than men (Loe & Weeks, 2000) and to be more
critical of ethical issues than men (Whipple & Swords, 1992). Ang et al. (2001), for instance,
found that males were more likely to have favorable attitudes towards piracy than women.
Based on this evidence we hypothesize that,
H3: Female consumers will be less tolerant of questionable consumer activities than their
male counterparts.
Marital Status. Previous studies of the relationship between marital status and ethics
have yielded mixed results. For example, in studying academic misconduct, Rawwas and
Isakson (2000) did not find a relationship between marital status and ethics. Serwinek (1992)
revealed similar results when studying the ethical views of small businesses. Similarly,
marital status has failed to predict psychologists’ attitudes toward the ethicality of sexual
contact scores (Collins, 1999). On the other hand, other studies have reported that married
consumers have different ethics than unmarried, divorced, or widowed consumers. For
example, Erffmeyer, Keillor, and LeClair (1999) reported that married consumers were more
likely to be classified as either relativists or Machiavellians than unmarried ones. In other
studies, divorced persons were found to have less relational trust than those who are married
(Hargrave & Bomba, 1993). Fournier (2000) found that a widow's psychological marital
status, whether she perceives herself as still married or single-again, appears to be an
important predictor of questionable intimacy. Finally, Poorsoltan, Amin, and Tootoonchi
(1991) reported that married students are more conservative and moral than their unmarried
counterparts. The above research strongly suggests that marital status deserves further
study. Although the evidence is mixed, on balance it appears to support the proposition
that married individuals tend to be more ethical than unmarried consumers. Consequently, we
hypothesize that:
H4: Married consumers will be less tolerant of questionable consumer activities than their
non-married counterparts.
IV. METHODOLOGY
Sample. The data used in this study were collected in the U.S. via a direct-interception
method. All surveys were hand-distributed and collected in sealed boxes by marketing
research assistants from an American university. Assistants were trained to distribute and
collect surveys in simulated and actual settings before this study. The sampling occurred at a
variety of public locations (e.g., shopping areas) over a wide range of days and times. The
sample consisted of 761 consumers. Most of the participants were females (61.4%). Almost
half of the sample was married (49.8%). Many of the participants were between the ages of
25 and 34 (39.7%), 15.3% were between the ages of 18 and 24, 34.6% were between the ages
of 35 and 49, and 10.4% were over 50 years old. Over forty-two percent of the respondents
held four-year college degrees (42.8%), 32.4% had less than four years of college, and 24.7%
held graduate degrees.
Measurement of the Constructs. Constructs in this study were measured using a one-
page questionnaire. The instrument consisted of two parts. The first part measured the
ethical beliefs of participants using the Muncy-Vitell questionnaire (hereafter MVQ); the
second part measured the demographics of the participants. The MVQ was used to measure
consumers' beliefs regarding 26 statements that have potential ethical implications. The
questionnaire, developed by Muncy and Vitell (1992), has since been used and validated in
various studies (e.g., Kenhove, Vermeir & Verniers, 2001). Evidence suggests that the MVQ
is a well-validated measurement scale applicable to studying ethical behaviors in a wide
variety of situations (Chan et al., 1998). Responses to MVQ statements were coded so that a
high score indicates high ethical beliefs and a low score indicates low ethical beliefs, using
a five-point Likert scale with descriptive anchors ranging from "strongly believe that it is
NOT wrong" (coded 1) to "strongly believe that it is wrong" (coded 5). The MVQ is
categorized along four dimensions. The first dimension is
“benefiting from illegal activities” (ILLEGAL). Actions in this dimension are initiated by
consumers and are either illegal or likely to be perceived as illegal by most consumers. The
second dimension, “benefiting from questionable activities” (ACTIVE), is also one where the
consumer initiates the action. While these actions are not as likely to be perceived as illegal,
they are still morally questionable. The third dimension, "passively benefiting from
questionable activities" (PASV), is one where consumers benefit from sellers' mistakes rather
than from their own actions. Finally, the fourth dimension is "no harm/indirect harm
questionable activities" (NOHARM). These are actions that most consumers perceive as not
resulting in any harm and, therefore, many consumers regard as acceptable. The Cronbach
alpha coefficients for ILLEGAL, ACTIVE, PASV, and NOHARM suggest that these
dimensions are internally consistent.
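The internal-consistency check just described can be illustrated with a short sketch. This is not the authors' code, and the responses below are hypothetical; it simply shows how Cronbach's alpha would be computed for one MVQ dimension, assuming item responses are coded 1-5 as described:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the dimension
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 1-5 responses from five consumers to three ILLEGAL items
illegal_items = np.array([
    [5, 5, 4],
    [4, 4, 4],
    [5, 4, 5],
    [2, 3, 2],
    [4, 4, 3],
])
print(round(cronbach_alpha(illegal_items), 2))  # → 0.9, i.e. internally consistent
```

An alpha near or above the conventional 0.7 threshold would support treating the items as one dimension.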
V. DATA ANALYSIS
ANOVA was used to explore the relationship between the demographic variables and
consumer ethics. The four personal factors (i.e., age, education, gender, and marital status)
were the independent variables and the four dimensions of the MVQ (i.e., ILLEGAL,
ACTIVE, PASV, and NOHARM) were the dependent variables. As shown in Table 1, the
ANOVA results indicate that the relationships between the four age categories and the
illegal, active, passive, and no harm ethical dimensions are significant in this sample.
Means of the four age categories across the four consumer ethics dependent variables show
that older consumers reject illegal, active, passive, and no harm activities more than younger
consumers. For example, participants who are 50 years and older are the most sensitive, and
consumers between the ages of 18 and 24 are the least sensitive, to MVQ ethical statements.
These results support Hypothesis 1. ANOVA found that the relationships between education
categories and illegal, active, and passive activities are significant in this sample. More
educated consumers reject illegal, active, and passive activities more than less educated
consumers. The only relationship that was not significant was the relationship between
education and no harm activities. These results mostly support Hypothesis 2. In this
sample, means of females and males across the four consumer ethics variables indicate that
females reject illegal, active, passive, and no harm activities more than males. These results
support Hypothesis 3. Means of married and single consumers indicate that married
consumers reject illegal, active, and passive activities more than single consumers. Again,
the only relationship that was not significant was the one between marital status and no harm
activities. These results mostly support Hypothesis 4.
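The analysis above can be sketched in code. The scores below are simulated stand-ins (the study's raw responses are not reproduced here), with group sizes roughly matching the reported sample proportions; the sketch only shows the form of a one-way ANOVA with age group as the factor and the ILLEGAL score as the dependent variable:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)

# Hypothetical MVQ ILLEGAL scores (1 = not wrong ... 5 = wrong) by age group,
# simulated so that older groups reject illegal activities more strongly
groups = {
    "18-24": rng.normal(3.7, 0.5, 116),
    "25-34": rng.normal(3.9, 0.5, 302),
    "35-49": rng.normal(4.1, 0.5, 263),
    "50+":   rng.normal(4.3, 0.5, 80),
}
f_stat, p_value = f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
# A significant p-value, as reported in Table 1, indicates that mean
# ethical beliefs differ across the four age categories.
```

The same call, repeated for each demographic factor and each MVQ dimension, yields the sixteen tests summarized in Table 1.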
Table 1: ANOVA Results
Independent Variables | Dependent Variables & Means: Illegal, Active, Passive, No harm
[cell means not preserved]
VI. CONCLUSION
There is a strong belief that if marketers can develop a better comprehension of the
variety of individual factors that exist within their target markets, they will develop better
marketing mixes that meet the needs and wants of those markets. The findings of this
research provide strong support that older, more educated, female, and married consumers are
more sensitive to ethically questionable activities than younger, less educated, male, and
unmarried consumers. Each of these results has implications for international marketers.
The results presented in this study suggest that demographics are related to consumer ethics
and provide additional evidence that marketers should focus on the demographics of
individuals and individual subgroups when developing their marketing strategies. The
results of this study have important theoretical and practical implications. This research
confirms that marketers concerned with international ethical decision-making must study
demographic variables. While this study makes an initial contribution by exploring the
relationship between four personal factors and consumer ethics, much research remains to be
done on the relationship between personal characteristics and ethical orientations.
REFERENCES
Beltramini, R.F., R.A. Peterson, and G. Kozmetsky. "Concerns of College Students Regarding
Business Ethics." Journal of Business Ethics, 3, (3), 1984, 195-200.
Erffmeyer, Robert C.; Bruce D. Keillor; and Debbie Thorne LeClair. “An Empirical
Investigation of Japanese Consumer Ethics.” Journal of Business Ethics, 18, (1), 1999, 35-
50.
Ford, Robert C.; Woodrow D. Richardson. “Ethical decision making: A review of the
empirical literature.” Journal of Business Ethics, 13, (3), 1994, 205-221.
Franke, George R., Deborah F. Crown, and Deborah F. Spake. “Gender Differences in Ethical
Perceptions of Business Practices,” Journal of Applied Psychology, 82, (December),
1997, 920-34.
Kenhove, Patrick Van; Iris Vermeir; and Steven Verniers. “An Empirical Investigation of the
Relationships between Ethical Beliefs, Ethical Ideology, Political Preference and Need
for Closure,” Journal of Business Ethics, 32, (4), 2001, 347-361.
Muncy, James A.; and Scott J. Vitell. “Consumer Ethics: An Investigation of the Ethical
Beliefs of the Final Consumer,” Journal of Business Research, 24, (4), 1992, 297-311.
Rawwas, Mohammed Y. A.; and Anusorn Singhapakdi. “Do Consumers' Ethical Beliefs Vary
with Age? A Substantiation of Kohlberg's Typology in Marketing,” Journal of
Marketing Theory and Practice, 6, (2), 1998, 26-38.
Rawwas, M.Y.A., and H. Isakson. "Ethics of Tomorrow's Business Managers: The Influence
of Personal Beliefs and Values, Individual Characteristics, and Situational Factors," Journal
of Education for Business, 75, (6), 2000, 321-330.
Vitell, S. J. “Consumer Ethics Research: Review, Synthesis and Suggestions for the Future,”
Journal of Business Ethics, 43, 2003, 33–47.
Vitell, S. J., Anusorn Singhapakdi, and James Thomas. "Consumer Ethics: An
Application and Empirical Testing of the Hunt-Vitell Theory of Ethics," The Journal
of Consumer Marketing, 2, (18), 2001, 153-178.
A CONTENT ANALYSIS OF AN ATTEMPT BY VICTORIA’S SECRET
TO GENERATE BRAND MENTIONS THROUGH PROVOCATIVE DISPLAYS
ABSTRACT
A content analysis of newspapers from around the world and transcripts of American
TV shows was performed to measure increases in brand mentions of Victoria’s Secret after
the company used provocative window displays in two shopping malls in Wisconsin and
Virginia in the fall of 2005. Results showed differences in press coverage after the
controversial tactic was employed. Brand mentions increased 15 percent overall, but there
appeared to be a media effect; frequency of brand mentions decreased 19 percent in
newspapers, but increased 38 percent in TV shows. Brand mentions were significantly more
frequently negative overall in the week after the displays were launched than the week before.
Brand mentions in both newspapers and television stories were significantly more negative
after the displays than before. The controversial strategy had no effect on whether the brand
mention landed on front pages or in story leads.
I. INTRODUCTION
This study examined how effective the controversial window-display strategy was for
Victoria's Secret by measuring the frequency, placement, and tone of brand mentions in
newspapers from around the world and in transcripts of American television shows during
the week before and the week after the displays appeared. The displays attracted protesters
of all ages who threatened to boycott the shopping centers.
The use of such controversial displays was likely to attract media coverage, but was
the change in the frequency of Victoria’s Secret brand mentions substantial? Did the brand
mentions appear more prominently in newspaper and television stories after the displays were
used? Was the media coverage more frequently negative or positive? Was there a difference
between newspaper and television coverage? This research aimed to answer these questions.
II. LITERATURE REVIEW
Store window displays, such as those employed by Victoria’s Secret, may be related to
store atmospherics, branding theory, mere exposure theory, and the use of controversial
strategies and tactics in some public relations efforts.
Kotler (1973) defined atmospherics as a conscious effort to design space for the
purpose of creating specific effects among buyers. Mehrabian and Russell (1974) introduced
the concept that consumers would react in one of two ways in regard to response to store
environment: approach or avoidance. Donovan and Rossiter (1982) determined as arousal
increases, enjoyment, money spent in the store and time spent in the store increased.
The physical attractiveness of the store has been shown to generate higher purchase
intentions (Baker, Levy, and Grewal, 1992). Many retail purchase decisions are made in the
store environment (Keller, 1987). It has been suggested that emotional responses produced by
the store environment can influence purchase intention (Donovan, Rossiter, Marcoolyn, and
Nesdale, 1994). Misuse of just one atmospheric element could have a negative impact on
purchase behavior (Turley and Chebat, 2002). Consumers will shop in an environment that is
unattractive, but they will spend less money (Sherman, Mathur, and Smith, 1997). If
consumers enjoy the environment, they are likely to re-patronize that environment (Wakefield
and Baker, 1998).
Victoria's Secret planners may have been banking on branding theory when they used
the provocative displays in an attempt to attract media attention. This process has proved
effective whether it is used for within-class or across-class products, and whether the
resulting publicity is negative or positive. Across-class branding (Van Auken and Adams,
2005, page 165) has been verified as an effective strategic option when one wants to anchor
two products with different advantages to the public. Negative publicity for a product has
long been the worst fear of professionals in the field (Weinberger, Allen, and Dillon, 1981,
page 20), but Victoria's Secret planners may have disregarded this.
Brands that have strong connections with consumers may become embedded in
consumers' memory. The larger the number of cues linked to the brand or information,
the greater the likelihood the information or brand will be recalled (Isen, 1992). Fueling cues
to a brand is also supported by mere exposure theory, which states that simply
exposing consumers to products may stimulate product purchases and that the more exposure
one has to a stimulus, the more one will tend to like it, and eventually choose it (Zajonc,
1968). "…Mere exposure effects persist when initial exposures to brand names and product
packages are incidental, devoid of any intentional effort to process the brand information"
(Janiszewski, 1993, page 389). Houston and Scott (1984, page 27) found a "…decay in
attention to an advertisement as advertising volume in an issue increases." Furthermore, there
have been studies on the impact of small numbers of exposures. Lastovicka (1983, page 333)
tested Krugman's three-exposure theory: initially, decoding of the product occurs; the
second exposure provides the recognition and evaluation phase; and the third acts as a
reinforcement of the previous evaluation. Gibson (1996) later raised the issue of the
single-exposure effect and concluded that there is a growing body of evidence for the
effectiveness of a single exposure to a TV advertisement. "Behavioral response to advertising
exposure is probably nonlinear, like the attitudinal and cognitive responses found in
numerous experimental studies," however, "two to three exposures is optimum" (Tellis,
1988, page 134). In addition to these theories,
there is also the use of controversial public relations strategies and tactics to attract media
coverage. The planners for Victoria’s Secret were not the first to use this method of
inexpensive promotion. The Chrysler Group produced a controversial ad for Dodge trucks
three years ago that received huge media coverage (Halliday, 2002). A billboard
advertisement for a movie produced a lot of commotion in Los Angeles over its controversial
lines (PR Week, 2004). One of the most recent controversial campaigns was conducted last
year by Benetton, which attracted not only free publicity but also analysis. The major
observation was that, without exception, the marketing representatives denied having any
strategy and expressed their amazement at such media coverage (PR Week, 2005). These
supporting theories and practices in the field of advertising and PR provided the basis on
which the efficacy of Victoria’s Secret strategy was analyzed.
III. HYPOTHESES
H1: Using the controversial display method will result in more mentions in the media for the
Victoria's Secret brand, as compared to the mentions before the display.
H2: Using the controversial display method will result in more prominent page placement in
the media for the Victoria's Secret brand name, as compared to page placement before the
event.
H3: Using the controversial display method will result in more prominent story placement in
the media for the Victoria's Secret brand name, as compared to story placement before the
event.
H4: Using the controversial display method will result in more negative attributes in the
media for the Victoria's Secret brand name, as compared to the attributes before the event.
Exploratory question 1: Will using the controversial display result in more negative attributes
in the newspapers for the Victoria's Secret brand name, as compared to the attributes before
the event?
Exploratory question 2: Will using the controversial display result in more negative attributes
on American TV for the Victoria's Secret brand name, as compared to the attributes before
the event?
IV. METHODOLOGY
Researchers conducted a content analysis of newspapers from all over the world and
transcripts of American TV shows, using LexisNexis. Articles and TV transcripts analyzed
covered September 8-21, 2005, one week before the display and one week after. The displays
were launched on Sept. 15, 2005. The unit of analysis was any mention of the Victoria’s
Secret brand. The independent variables were the brand name and the date. Dependent
variables were page placement, story position, and tone (positive, negative or neutral). An
intercoder reliability test achieved 100 percent agreement on each of the variables after two
rounds.
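The reliability check described above amounts to computing percent agreement between two coders. A minimal, generic sketch with hypothetical tone codes (this is not the study's coding instrument):

```python
def percent_agreement(coder_a, coder_b):
    """Share of units on which two coders assigned the same category."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must rate the same units")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical tone codes for ten brand mentions after the second round of coding
coder_a = ["neg", "neu", "pos", "neu", "neg", "neu", "neu", "pos", "neg", "neu"]
coder_b = list(coder_a)  # full agreement, as the study reports for its final round
print(percent_agreement(coder_a, coder_b))  # → 1.0
```

A value of 1.0 corresponds to the 100 percent agreement achieved after two rounds.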
V. RESULTS
Table I supports the first hypothesis, which predicted that in the week after the
controversial displays, the Victoria's Secret brand would get more mentions in the media
than before the exhibits. However, the increase from 116 to 133 mentions is not large,
representing a 15 percent increase overall.
H2 and H3 were not supported; there was no effect on page placement or story
position.
Table III supports H4, which predicted that the brand mentions after the display would
become more negative. Negative mentions increased dramatically from 5.2 percent to 33.8
percent, while positive mentions decreased dramatically from 26.7 percent to 2.3 percent.
Table IV reflects the first exploratory question. Negative mentions increased from
12.5 percent to 23.1 percent, and positives dropped from 20.8 percent to 2.6 percent in
newspapers.
Table IV: Tone of Newspaper Story Before/After Displays
Date             Negative     Neutral      Positive
Before displays  6 (12.5%)    32 (66.7%)   10 (20.8%)
After displays   9 (23.1%)    29 (74.4%)   1 (2.6%)
Note. N = 87; Chi-Square = 7.26; df = 2; p < .05.
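The statistic in the note can be reproduced directly from the table's counts. A sketch using SciPy (the authors' software is not specified, so this is an independent check, not their code):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Observed tone counts from Table IV
# rows: before/after displays; columns: negative, neutral, positive
observed = np.array([
    [6, 32, 10],   # before displays (n = 48)
    [9, 29,  1],   # after displays  (n = 39)
])
chi2, p, dof, expected = chi2_contingency(observed)
print(f"Chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
# → Chi-square = 7.26, df = 2, p = 0.027, matching the note under Table IV
```

The significant result supports the claim that newspaper tone shifted toward negative after the displays.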
VI. CONCLUSION
If the strategists at Victoria’s Secret were more interested in quantity of exposure than the
quality of exposure, then the strategy had moderate success. If they were concerned about
how the media characterized the controversial displays, then the strategy largely failed.
REFERENCES
Auty, S., and Lewis, C. “Exploring Children's Choices: The Remainder Effect of Product
Placement.” Psychology & Marketing, 21, (9), 2004, 697-713.
Baker, J., Levy, M., and Grewal, D. “An Experimental Approach to Making Retail Store
Environment Decisions.” Journal of Retailing, 68, 1992, 445-460.
Belk, R. "Possessions and the Extended Self." Journal of Consumer Research, 15, 1988, 139-
168.
Bettman, J. An Information Processing Theory of Consumer Choice. Reading, MA: Addison-
Wesley, 1979.
"Billboards Turn Indie Flick into Social, Political Story." PR Week, May 10, 2004, 3.
Bloemer, J., and Ruyter, K. “On the Relationship between Store Image, Store Satisfaction and
Store Loyalty.” European Journal of Marketing, 32, 1998, 499-513.
Chattopadhyaya, A., and Alba, J. “The Situation Importance of Recall and Inference in
Consumer Decision Making.” Journal of Consumer Behavior, 15, 1988, 1-12.
Donovan, R., and Rossiter, J. “Store Atmosphere: An Environmental Approach.” Journal of
Retailing, 58, 1982, 34-57.
Donovan, R., Rossiter, J., Marcoolyn, G., and Nesdale, A. “Store Atmosphere and Purchase
Behavior.” Journal of Retailing, 70, 1994, 283-294.
Fink, E., Monahan, J., and Kaplowitz, S. “A Spatial Model of the Mere Exposure Effect.”
Communication Research, 16, (6), 1989, 746-769.
Gardner, B., and Levy, S. "The Product and the Brand." Harvard Business Review, 33, March-
April, 1955.
Gibson, L. “What Can One TV Exposure Do?” Journal of Advertising Research, 36, (2),
1996, 9-18.
Gregory, J., and Sellers, L. “Building Corporate Brands.” Pharmaceutical Executive, 2002,
38-44.
Halliday, J. “Dodge Spot Courts Controversy; ‘Urinating boy’ by BBDO Scores High in
Awareness Study.” Advertising Age, 73, (40), October 7, 2002, 3.
Holmes, P. “Controversy Is an Acceptable by-Product for a Company That Stays True to Its
Core Values.” PR Week, September 5, 2005, 11.
Houston, F., and Scott, D. “The Determinants of Advertising Page Exposure.” Journal of
Advertising, 13, (2), 1984, 27-33.
Isen, A. “The Influence of Positive Affect on Cognitive Organization: Some Implications for
the Influence of Advertising on Decisions about Products and Brands. In A.
Mitchell, ed., Advertising Exposure, Memory and Choice. Hillsdale, NJ: Lawrence
Erlbaum, 1992.
Janiszewski, C. “Preattentive Mere Exposure Effects.” Journal of Consumer Research, 20,
1993, 376-392.
Keller, K. “Memory Factors in Advertising: The Effects of Advertising Retrieval Cues on
Brand Evaluations.” Journal of Consumer Research, 14, 1987, 316-333.
Keller, K. “Conceptualizing, Measuring and Managing Customer-based Brand Equity.”
Journal of Marketing, 57, 1993, 1-22.
Kotler, P. “Atmospherics as a Marketing Tool.” Journal of Retailing, 49, 1973, 48-64.
Lastovicka, J. “A Pilot Test of Krugman's Three-Exposure Theory.” In Percy, L. and
Woodside, A., eds., Advertising and Consumer Psychology. Lexington, MA: D.C.
Heath, 1983, 333-344.
Mehrabian, A., and J. Russell. An Approach to Environmental Psychology. MIT Press, 1974.
Obermiller, C. “Varieties of Mere Exposure: the Effects of Processing Style and Repetition on
Affective Response.” Journal of Consumer Research, 12, (1), 1985, 17-30.
Riondino, M. “Branding on the Web: a Real Revolution?” Journal of Brand Management, 9,
(1), 2001, 8-19.
Rossiter, J., and L. Percy. Advertising and Promotion Management. New York: McGraw-
Hill, 1987.
Sherman, E., Mathur, A., and Smith, R. “Store Environment and Consumer Purchase
Behavior: Mediating Role of Consumer Emotions.” Psychology and Marketing, 14,
1997, 361-378.
Tellis, G. “Advertising Exposure, Loyalty, and Brand Purchase: A Two-Stage Model of
Choice.” Journal of Marketing Research, 25, (2), 1988, 134-144.
Turley, L., and Chebat, J. “Linking Retail Strategy, Atmospheric Design, and Shopping
Behavior.” Journal of Marketing Management, 18, 2002, 125-144.
Van Auken, S., and Adams, A. “Validating Across-Class Brand Anchoring Theory: Issues and
Implications.” Journal of Brand Management, 12, (3), 2005, 165-176.
Wakefield, K., and Baker, J. “Excitement at the Mall: Determinants and Effects on Shopping
Response.” Journal of Retailing, 74, 1998, 515-533.
EFFECTIVENESS OF EMOTIONAL ADVERTISING:
A REVIEW PAPER ON THE STATE OF THE ART
ABSTRACT
I. INTRODUCTION
Many companies have had difficulty basing their competitive advantage on the
functional aspects of their products, and have begun to rely on more emotional advertising to
attract consumers' attention. Review of the literature shows that the focus of research has
been in several areas. Research and marketplace findings have identified that the consumers'
emotional response toward a brand and/or ad can be a powerful motivator to purchase,
influence brand recall, and determine brand differentiation (Hazlett & Hazlett, 1999). Today,
creators of advertisements are using an array of sensory images including computer graphics,
music, drama, and emotion to grab the attention of the viewer. Some of the most common
emotional appeals focus on humor, fear, and self-idealization. Zeitlin and Westwood (1986)
emphasized the use of fear as a motivator to influence consumer response to advertising
messages. The fear appeal can range in intensity from mild to severe. Their research
suggests that, in order to be effective, a fear-based message has to be followed by a reasonable
solution which the advertised product or service can provide. Although some advertisers view
the use of humor in advertising as critical to getting consumers to attend to messages, research
tends to temper that judgment. A study by Kover, Goldberg, and James (1995) indicates that
the humorous side of the message may result in the loss of the product message. The
investigation also indicated that ads based on consumers' desire to accomplish personal
enhancement tend to be highly effective.
Another stream of research is aimed at comparing response to emotional advertising
across different countries (Chan, 1996; Morris, 1995; De Pelsmacker and Geuens, 1998;
Huang, 1998). The ultimate goal of any advertisement is to create strong linkages between
the brand and the viewer. The advertisement should convince viewers that the brand is
relevant to them, should influence how good they feel about buying and using the brand, and
should influence their pre-dispositions toward purchasing the brand. (Kamp & MacInnis,
1995) Past advertising research has often focused on conscious, deliberate, and rational
processing of product information, but in actuality, the consumer is often unaware of what
elements of an ad or attributes of a brand influenced their choice. (Hazlett & Hazlett, 1999)
Most processing of advertising messages is subconscious, implicit, and intuitive. (Hazlett &
Hazlett, 1999) Based on those assumptions, emotion is primarily a motivator of consumer
behavior and that the affect attached to the ad or brand may play a more critical role in an ad's
effectiveness than the attitude or thoughts about the brand. (Hazlett & Hazlett, 1999).
Another stream of research can be described as the modeling approach or a comprehensive
attempt to explain how emotional advertising works and what are the underlying forces that
shape this process (Stout and Lockenby, 1986; Kamp and MacInnis, 1995.
This paper investigated three models that explain the role of emotions in advertising
response. Each model is explained in detail, followed by its results. The second part
compares and discusses the research methodologies used in the emotional models of
advertising. The final portion of the paper reflects on what the models contributed to the
understanding of this topic.
Stout and Leckenby found that respondents who had an experiential emotional
response had a better attitude toward the ad itself, the brand, purchase intent, brand recall,
and ad content playback. Respondents who had an empathic emotional response had more
brand recall and ad content playback. Respondents who had a descriptive emotional
response had a better brand attitude, more purchase intent, and more ad content playback.
From these results one can conclude that consumers experiencing an emotional response
have better brand recall, ad content playback, attitudes toward the brand, and purchase
intentions.
However, a follow-up article by a different group of authors (Page et al., 1988)
addressed some of the findings in this study. They questioned the validity of the constructs
(descriptive, empathic, and experiential response to advertising) used in the Stout and
Leckenby article, and the research findings indicating a strong connection between emotional
advertising and its effectiveness.
IV. THE KAMP AND MACINNIS MODEL
Kamp and MacInnis (1995) developed a slightly different approach to how
viewers respond to emotional advertisements. They determined that what was depicted in the
ad also affected what viewers felt in response to the ad. They identified two constructs
related to the portrayal of emotions in advertisements: emotional flow and emotional
integration. Both constructs have been shown by Kamp and MacInnis to influence the nature
and intensity of consumers' emotional responses, and to affect involvement with the brand,
attitudes, self-brand image, and purchase intentions. Emotional flow can be defined as "the
extent to which emotions portrayed in the advertisement are perceived to change in their
nature and/or intensity during the course of the advertisement" (Kamp & MacInnis, 1995).
Advertisements can vary in emotional intensity from negative to positive and from low to
high arousal.
The Burke and Edell model (1989) focuses more on a viewer's attitude toward
feelings. Burke and Edell believe that understanding a consumer's feelings is as important as
understanding their thoughts. The main idea of their model is that feelings generated by the
ad are different from thoughts about the ad, and that both are important and contribute to
explaining the effects of advertising (Burke & Edell, 1989). Thinking and feeling are two
independent evaluation systems; respondents tend to report the emotions they see in the ad,
not how it makes them feel. Three dimensions of feelings were identified: upbeat, warm, and
negative; and three types of judgments were identified: evaluation, activity, and gentleness.
The model does not indicate whether judgments or feelings occur first, or whether judgments
of the ad's characteristics are made at the same time as evaluations of the brand's attributes.
Analysis of the effects of feelings on judgments about the ad showed that all three of the
feelings were related to all three of the judgments: feelings contribute to the evaluation,
activity, and gentleness judgments of an ad. The influence of upbeat feelings is generally
positive. Ads that produce upbeat feelings are evaluated positively in terms of both
evaluation and activity, and upbeat ads lead to positive brand attribute evaluations and
evaluation judgments. In conclusion, brands with ads that generate upbeat feelings are
evaluated more positively.
The influence of warm feelings works through the evaluation judgment to positively
affect attitudes toward the ad and brand attribute evaluations. Warm feelings have a
positive effect on attitudes toward the brand that occurs through attitudes toward the ad and
brand attribute evaluations. Negative feelings affect attitudes toward brands directly
through an effect on attitudes toward the ad. For example, if a consumer views an ad that
evokes negative feelings, those feelings are immediately transferred to the brand. All of the
effects of negative feelings, both indirect and direct, are negative. In conclusion, positive
emotions such as warmth and upbeat feelings lead to more positive attitudes, evaluations,
and judgments than negative emotions in ads.
The three studies required participants to view a series of advertisements and answer
questions following the ads. The studies asked a set of questions that required the participants
to respond to the advertisements themselves, and questions involving any emotional responses
they may have had. There was a significant difference in the sample sizes; the probable
reason was cost. This conclusion can be made based on the fact that Stout & Leckenby's
study had the largest sample size (1498), and they did not offer any money to their
participants. Kamp & MacInnis, and Edell & Burke, had smaller sample sizes, but chose to
pay their participants. Stout & Leckenby's and Kamp & MacInnis' sample sizes both had a
preponderance of female participants. Stout & Leckenby's and Kamp & MacInnis’ studies
took place in several malls located in U.S. cities. Malls, however, may not have been the ideal
locations for these studies unless the researchers intentionally planned to recruit more females
than males. Unfortunately, the reason for having more female participants was not included
in Stout & Leckenby's study. Kamp & MacInnis' sample was 75 percent female because the
ads featured products used primarily by females. The products were not revealed in the
study due to legal issues. Edell &
Burke's study included 191 respondents from a University campus, with no specific
male/female ratio mentioned.
Stout & Leckenby's and Kamp & MacInnis' studies used samples obtained from mall
intercepts in several U.S. cities. Drawing a sample from several cities decreases the
probability of having participants with similar values, ethics, and attitudes.
Edell & Burke's sample was drawn from a University campus.
VII. CONCLUSION
The purpose of this paper was to investigate the influence of emotional advertising on
brand/advertisement recall, purchase intentions, and the development of positive attitudes
towards an advertisement and its sponsor. Review and analysis of the literature indicates that this is a
very complex topic and that the results are not always conclusive. It seems that the bulk of
research indicates that emotional advertising works to improve advertising effectiveness.
Different models were examined. The literature review suggests that emotions play a
significant role in the design of advertising messages.
Different emotions lead to different results in brand recognition, recall, attitudes, and
purchase intent. Non-emotional advertisements produced the least favorable emotional
reactions; consumers showed less interest in non-emotional ads than in any of the emotional
executions examined. Positive emotions, especially humor, showed the most favorable results.
Humor outperformed all the other emotions with respect to ad recognition, purchase intent,
and attitudes towards the brand. However, even this element was not without its dissenting
voices. We can conclude that humor/positive emotions in advertising result in the most
desired advertising outcomes. Warm and negative feelings should be used in ads under
certain circumstances.
REFERENCES
Morris, John D. "Observations SAM: The Self-Assessment Manikin-An Efficient Cross-
Cultural Measurement of Emotional Response." Journal of Advertising Research, 6,
November-December, 1995, 63-68.
Page, T., Daugherty, P., Erogoly, D., Hartman, D., Johnson, S.D., and Lee, D. "Measuring
Emotional Advertising: A Comment on Stout and Leckenby." Journal of Advertising,
17, (4), 1988, 49-52.
Stout, Patricia A. and Leckenby, John D. "Measuring Emotional Response to Advertising."
Journal of Advertising, 15, (4), 1986, 35-43.
Stout, Patricia and Rust, Roland T. "Emotional Feelings and Evaluative Dimensions of
Advertising: Are They Related?" Journal of Advertising, 22, 1993, 61-71.
Zeitlin, David and Westwood, Richard A. "Measuring Emotional Response." Journal of
Advertising Research, 5, October-November, 1986, 34-44.
PICK A FLICK: MOVIEGOERS’ USE AND TRUST OF ADVERTISING
AND UNCONTROLLED SOURCES
ABSTRACT
Moviegoers have a variety of advertising and uncontrolled sources to use for information
about movies, but which do they prefer? One hundred seventy-five moviegoers were
surveyed on the frequency of use, level of trust, and level of utility they have for a variety of
movie information sources. Word-of-mouth communication and advertising sources that
provide a "sample" of the movie were the most used, most useful, and most trusted in helping
moviegoers choose a movie. Surprisingly, Internet movie information sources were little used,
little trusted, and not regarded as very useful.
I. INTRODUCTION
Movie consumers can use a variety of sources to help them select a movie to attend.
Some sources, such as advertising, are controlled by movie marketers, but other sources, such
as word-of-mouth are not under their control. Scholars, marketers and industry pundits,
however, have disagreed about which sources are most effective in influencing a moviegoer’s
decision about what movie to attend. The advent of the Internet has movie advertisers
reevaluating the value of traditional media (Galloway, 2003). Some movie marketers believe
that controlled (advertising) sources are becoming more important due to the necessity of a
strong opening weekend for a movie (Kuklenski, 2004) as opposed to letting uncontrolled
word-of-mouth “buzz” spread. This study will examine which sources are most trusted, most
useful, and most often used in movie selection. It also will examine which sources are most
often consulted for screening times and locations. Furthermore, comparisons will be made of
the use of advertising and other sources by moviegoers with Internet access and those with no
access.
The movie marketplace sees an average of nine new movies released each week
(Motion Picture Association, 2005, p. 12). Movie advertisers want to make their movie the
one their target audience wants to see this week. Achieving this goal usually requires massive
amounts of advertising. Indeed, a positive relationship has been found between advertising
expenditures for a movie and its opening week box-office success (Elberse & Eliashberg,
2003). The costs of advertising and promoting a movie have risen from an average of $13.9
million in 1994 to $30.6 million in 2004 (Motion Picture Association, 2005, p. 20).
Movie trailers have been found to be a powerful media source in attracting people to a
movie (Faber & O’Guinn, 1984; Hixson, 2000). Conversely, two movie industry studies
found that less than 10% of moviegoers are made aware of a movie through trailers (Klady,
1994). Movie trailers can be targeted to moviegoers based on their behavior of attending
movies at that specific theater (Goodale, 1998) and also on their movie genre preferences
(Hixson, 2005).
Movies were once heavily advertised in newspapers. Now, many movie executives
believe that newspaper ads are no longer useful in motivating people to attend a particular
movie (Dotinga, 2001) but are useful in providing the movie location and time. The telephone
is used to advertise movies and theaters as moviegoers have traditionally telephoned theaters
for show times and locations. Moviefone is the largest interactive movie guide/ticketing
service, with telephone and online services. The increase in advertising spending per movie on
the Internet from $168,000 in 2000 to $735,000 in 2004 (Motion Picture Association, 2005, p.
21) seems logical, as heavy computer usage has been associated with a greater use of movies
(Robinson, Barth & Kohut, 1997). Also, young adult moviegoers depend more on the Internet
than other movie information sources (Friedman, 2002). Despite the convenience and the
Internet's penetration into many households, people are accustomed to using newspapers to
find movie times and locations. Therefore, the following hypotheses and research question are
generated:
H1: Moviegoers will trust trailers in the theater more than other sources to give them a
true sense of what a movie is really like.
H2: Moviegoers will consult newspapers more often than other sources to learn movie
locations and times.
RQ1: Is there any difference in the use and trust of movie advertising sources by
those who have Internet access and those who do not?
Movie studios release press kits and value the publicity that can be generated. An
online poll found, however, that publicity is not especially valuable in “selling” a movie
(Friedman, 2002). Movie reviewers/critics play a publicity role and other roles in their
relationship with moviegoers. These roles include creating awareness, assessing entertainment
value, and providing movie information (Austin, 1988). However, movie reviewers are of little
value to most moviegoers (Friedman, 2002). Based on this information the following
hypotheses are generated:
H3: Moviegoers will find word-of-mouth to be more useful than other sources in
making a movie attendance decision.
H4: Moviegoers will use word-of-mouth more often than other sources in making a
movie attendance decision.
III. METHODOLOGY
The instrument consisted of six banks of items: (1) movie-viewing habits; (2) the degree to
which respondents trusted a source of information to give them a "true sense of what a movie
is really like"; (3) the degree to which respondents found a source of information useful in
helping them decide which movie to attend (the Cronbach's alphas for banks 2 and 3 were
.832 and .848, respectively); (4) how often respondents used a source to help them decide
which movie to attend; (5) how often respondents consulted a source to find a show time and
location of a movie already selected (the Cronbach's alphas for banks 4 and 5 were .839 and
.581, respectively); and (6) demographic information, including computer ownership and use.
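The reliability coefficients reported above can be computed from the raw item responses. The following is a minimal sketch (not the authors' code) of Cronbach's alpha for a bank of Likert-type items; the response matrix is hypothetical.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the bank
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# hypothetical 7-point responses: 5 respondents x 3 items
responses = np.array([
    [7, 6, 7],
    [5, 5, 6],
    [3, 4, 3],
    [6, 5, 6],
    [2, 3, 2],
])
print(round(cronbach_alpha(responses), 3))  # → 0.958
```

Values above roughly .7, like the .832 and .848 reported here, are conventionally taken to indicate acceptable internal consistency; the .581 for the fifth bank falls short of that threshold.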
IV. RESULTS
Participating in the survey were 175 moviegoers. Nine out of ten participants were
Caucasian. The mean age was 39.8, with 31% under age 30. Females accounted for 51% of
the participants. Three out of ten (31.4%) participants reported an income of more than
$70,000 per year, while 32.8% reported incomes of less than $40,000 per year. Six out of ten
(61.7%) had no college degree, while 14.9% had a graduate degree. One hundred fifty-two (87%)
had Internet access either at home, work or school.
One-sample t-tests were used to compare the means in testing Hypotheses 1-4. The
test value used for each set of variables was the largest mean in that set. Hypothesis 1 was
not supported: participants trusted word-of-mouth most to provide them the truest sense of
what a movie is like (Table I). Hypothesis 3 was supported: word-of-mouth was also the
source of information most useful in helping the participants decide which movie to
attend (Table II). Hypothesis 4 was also supported: word-of-mouth was used more often
than other sources to help participants make a movie attendance decision (Table III).
Hypothesis 2 was also supported: participants consulted newspaper ads and listings most often
to learn the location and time of a movie they wanted to see (Table IV).
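The comparison procedure can be sketched with SciPy's one-sample t-test: each source's ratings are tested against the largest mean in the set. The ratings below are hypothetical; only the word-of-mouth test value (5.62) is taken from Table I.

```python
import numpy as np
from scipy import stats

# hypothetical 7-point trust ratings for one source (e.g., TV ads)
ratings = np.array([5, 4, 3, 5, 4, 4, 6, 3, 4, 5], dtype=float)

# test value: the largest mean in the set (word-of-mouth, 5.62 in Table I)
wom_mean = 5.62

t_stat, p_value = stats.ttest_1samp(ratings, popmean=wom_mean)
# a negative t with small p indicates this source is trusted
# significantly less than word-of-mouth
```

A significantly negative t-statistic, as for every non-word-of-mouth source in Tables I and II, means that source's mean falls reliably below the word-of-mouth benchmark.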
For research question 1, only three significant differences were found when comparing
moviegoers with Internet access and those without in how often those sources (listed in Table
IV) were consulted for movie locations and show times. One was Moviefone. Those with
Internet access (mean = 2.33) consulted it more than moviegoers with no access (mean = 1.40),
F(1, 162) = 4.31, p < .05. Not surprisingly, the two other sources that had significant
differences were found to be online information sources: theater/chain websites and a
movie’s website. Noteworthy is that no significant differences were found when comparing
moviegoers with Internet access and those with no access in how often newspaper ads/listings
and phoning a theater were consulted for movie locations and show times. As these two
sources have long been used by most people for this purpose, we can surmise that media use
habits are hard to break. There were significant differences in two sources of information in
TABLE I
TRUST OF THE SOURCE TO PROVIDE A "TRUE SENSE" OF A MOVIE

Source              n     mean   S.D.   t
Word-of-mouth       171   5.62   1.59   ---
Theater trailers    173   4.53   1.78   -8.04
TV ads              170   4.50   1.77   -8.24
Clips/TV shows      172   4.37   1.71   -9.62
DVD/vid trailers    169   4.26   1.93   -9.15
Critics TV          169   3.67   1.80   -14.11
Critics nsp/mag     166   3.66   1.72   -14.69
Internet trailers   161   3.42   1.76   -15.90
Movie website       161   3.40   1.95   -14.48
Radio Ads           166   3.33   1.57   -18.84
Magazine Ads        168   3.31   1.53   -19.60
Newspaper Ads       162   3.16   1.69   -18.56

note: All t-values significant (p<.001) in comparison of means to the word-of-mouth
mean in a one-sample t-test. 7-point Likert-type scales were used with
1 = Not a true sense, 7 = A very true sense.

TABLE II
USEFULNESS OF INFORMATION SOURCES IN MAKING A MOVIE ATTENDANCE DECISION

Source              n     mean   S.D.   t
Word-of-mouth       169   5.40   1.76   ---
Theater trailers    173   4.94   1.65   -3.66
DVD/vid trailers    167   4.50   1.93   -6.06
Clips/TV shows      169   4.36   1.90   -7.14
TV ads              170   4.35   1.75   -7.82
Critics nsp/mag     169   3.69   1.81   -12.31
Critics TV          168   3.67   1.73   -12.95
Newspaper Ads       170   3.56   1.63   -14.17
Radio Ads           173   3.38   1.69   -15.75
Internet trailers   160   3.26   1.81   -14.96
Magazine Ads        168   3.18   1.74   -16.51
Movie website       164   3.01   1.84   -16.59

note: All t-values significant (p<.001) in comparison of means to the word-of-mouth
mean in a one-sample t-test. 7-point Likert-type scales were used with
1 = Not very useful, 7 = Very useful.
“trust in the source to provide a true sense of what the movie is really like” when the means
of moviegoers with Internet access and those with no access were compared. Those with
Internet access (mean = 3.57) trusted trailers on the Internet more than moviegoers with no
access (mean = 2.06), F(1, 159) = 11.19, p < .001. Likewise, those with Internet access (mean
= 3.52) trusted the movie's website more than those with no access (mean = 2.39), F(1, 159) =
5.56, p = .019. Those moviegoers with Internet access (mean = 5.05) were significantly more
likely to use trailers in a theater "to help make a decision about whether to attend a particular
movie" than those moviegoers with no Internet access (mean = 4.23), F(1, 171) = 4.87, p =
.029. Only one significant difference was found when comparing those who have Internet
access (mean = 3.33) and those who have no access (mean = 4.30) in their use of newspaper
critics in "how often a source is used to decide which movie to see," F(1, 166) = 4.58, p = .034.
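The group comparisons reported above are one-way F-tests (equivalent, for two groups, to a t-test). A minimal SciPy sketch with hypothetical ratings, not the study's data, looks like this:

```python
import numpy as np
from scipy import stats

# hypothetical frequency-of-use ratings (1-7) for one source (e.g., Moviefone)
with_access = np.array([3., 2., 4., 2., 3., 2., 3., 4., 2., 3.])
no_access = np.array([1., 2., 1., 1., 2., 1., 2., 1.])

# one-way ANOVA across the two access groups: F(1, n1 + n2 - 2)
f_stat, p_value = stats.f_oneway(with_access, no_access)
```

A p-value below .05, as for Moviefone and the two website sources in the study, indicates the two access groups differ in how often they consult that source.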
V. CONCLUSION
Word-of-mouth is the source most trusted, most useful and most used by moviegoers
in helping them decide which movie to attend. These findings support those of earlier studies
(Faber & O’Guinn, 1984; Austin 1988), but contradict at least one industry poll (Klady, 1994)
and seem to be at odds with the movie industry’s practice of opening movies in wide release
that requires a heavy advertising effort. More in line with industry practice, those sources
that provide moviegoers a “sample” of the movie, such as trailers in the theater, television ads,
movie clips on television programs, and trailers on DVD/video, ranked next on these three
variables. Interestingly, among the least useful and least used sources were trailers on the
Internet that do provide a "sample" of the movie. Perhaps explaining moviegoers' less than
enthusiastic use of this medium is the fact that many of these Internet trailers are displayed
on a very small screen within a computer screen.
In fact, a surprising finding was how little trusted, useful, and used a movie's website
was. This sample had a higher percentage (87%) of Internet access than the U.S. adult
population (63%) and a higher percentage of households (82%) with the Internet than the U.S.
population (55%) (U.S. Census Bureau, 2004), yet, despite the convenience of the Internet for
movie information these participants used it very little. Another interesting finding was that
moviegoers with no Internet access had a significantly higher use of newspaper
critic/reviewers than those with Internet access. Taking into consideration the convenience of
using the Internet coupled with its low use in this study for information access, it would seem
that those with no Internet access had a greater need for information. They gratified this need
more than those with Internet access who seemingly have information at their mouse-
manipulating fingertips.
There are several implications of this study for the movie industry. Because of current
market conditions, movie marketers often cannot have a limited release of a movie to enable
word-of-mouth, the most trusted, useful, and used source in this study, to spread. Therefore,
they should continue to try to provide moviegoers a "sample" of the movie as much as
possible as these advertising sources were the next most trusted, useful and used sources.
Movie marketers should also emphasize their websites more in traditional advertising to drive
Internet users to their movie’s website. This online activity needs to be emphasized to train
moviegoers to seek out movie information from the Internet. Perhaps, the low use of the
Internet for movie information is due to the several locations on the Internet where
information might be found.
REFERENCES
Austin, Bruce A. “Which Show to See?” Boxoffice, 124. Oct., 1988, 57-66.
Dotinga, R. “Film Strip.” Editor & Publisher. Jan. 15, 2001, 19-25.
Elberse, Anita. and Eliashberg, Jehoshua. “Demand and Supply Dynamics for Sequentially
Released Products in International Markets: The Case of Motion Pictures.” Marketing
Science, 22, (3), 2003, 329-354.
Friedman, Wayne. “Trailers Lead the Way to Some Films.” Ad Age, 73, May 27, 2002, 43.
Faber, Ronald J. and O’Guinn, Thomas C. “Effect of Media Advertising and Other Sources on
Movie Selection.” Journalism Quarterly, 61, 1984, 371-7.
Galloway, Stephen. “A Tangled Web.” Hollywood Reporter, 378, May 20, 2003, S-1.
Goodale, Gloria. “Coming Attractions May Not Be Suitable for Children.” Christian Science
Monitor, 90, n133, June 5, 1998, b6.
Hixson, Thomas K. The Effects of Motion Picture Trailers as an Advertising Medium on
Moviegoers’ Expected Gratifications. Unpublished doctoral dissertation, SIU-
Carbondale, 2000.
Hixson, Thomas K. “Targeting Movie Audiences Through Behavior and Preference
Segmentation.” Business Research Yearbook, 12,(1), 2005, 81-85.
Klady, Leonard. “Tyranny of TV Still Governs Movie Choices.” Variety, 354, June 27,
1994, 1.
Kuklenski, V. “Believe the Hype.” The Daily News of Los Angeles, June 3, 2004, u4.
available:
http://web.lexisnexis.com/universe/document?_m=332cb68b523fe3b0ad651f0
Motion Picture Association of America “U.S. Entertainment Industry: 2004 MPAA Market
Statistics.” available through www.mpaa.org, 2005.
Robinson, John P., Barth, K., and Kohut, A. “Social Impact Research: Personal Computers,
Mass Media, and Use of Time.” Social Science Computer Review, 15, n1, Sp, 1997,
65-82.
U.S. Census Bureau. “Internet Access and Usage and Online Service Usage: 2003.” Statistical
Abstract of the United States: The National Data Book, 1156. available at:
www.census.gov/statab, 2004.
Zufryden, Fred S. “Linking Advertising to Box Office Performance of New Film Releases: A
Marketing Planning Model.” Journal of Advertising Research, July/August, 1996.
29-41.
WHEN WEB PAGES INFLUENCE WEB USABILITY
ABSTRACT
The purpose of this study is to examine the effects of strategic communication in the
context of web usability by comparing single and multiple publicity and advertising messages.
The study employed a four-condition experiment with 325 participants, comparing the effects
of exposure to a single publicity article, a single ad, similar messages from a publicity article
and an ad, and varied messages from a publicity article and an ad. The results suggested that
when similar or varied advertising and publicity messages were integrated and linked in a
website, the message effects in either condition operated like repetition effects: the messages
generated more positive effects on web usability (measured by attitude toward the website,
purchase intention, communication sharing, and future website use) than a single-message
condition, especially the single publicity article.
I. INTRODUCTION
The present research seeks to study priming effects in the context of multiple messages
on web usability by comparing priming to repetition effects. Research has provided different
perspectives on multiple messages’ effects and repetition effects. Mere repetitions, generally
considered as an opportunity enhancement strategy, facilitate learning and contribute to liking
a message since multiple exposures increase consumers’ familiarity with the messages,
leading consumers to increase their positive associations with the messages (Harkins and
Petty, 1987). The priming literature suggests that the first message has the potential to frame
the issue in a particular manner, orienting consumers to interpret the second message in a
manner consistent with the frame in the first message (Domke, Shah, and Wackman, 1998). Focusing
on the case of similar messages, the first, priming message may enhance the effectiveness of
the subsequent message. Alternatively, the two messages may merely have the same effect as
viewing the second message alone would have had. Thus, multiple but similar messages may
work through increased familiarity to enhance liking of the messages. For instance, Moorthy
and Hawkins (2003) found consumers liked an ad more after repeated exposures.
In addition to merely repeating the same message, presenting information in different
sources can stimulate thinking (Anand and Sternthal, 1990). Hearing about a subject from two
or more independent sources, or hearing different executions of the same message with the
same theme or story line can result in greater cognitive effort, more elaboration, and higher
learning (Harkins and Petty, 1987). Following a priming explanation for varied messages, a
prime framed in a particular way sets up certain evaluative criteria consistent with the frame
and enhances the effectiveness of a subsequent and different message beyond what happens
from mere repetition. In the same way, viewing different publicity and advertising messages
may be more effective than seeing the same ad or the same article twice. Thus, integrating
advertising and product publicity to carry varied messages may increase consumers’ positive
attitudes toward the messages. This proposition echoes the concept behind synergy in strategic
communication, a coordination of messages for delivering more impact (Moriarty, 1996).
“This impact is created through synergy-the linkages that are created in a receiver's mind as a
result of messages that connect to create impact beyond the power of any one message on its
own” (Moriarty, 1996, page 333). This study builds on the previous research (Wang, 2005) to
study the effectiveness of integrating advertising and product publicity messages on
consumers’ perceived web usability.
Web usability research suggests that evaluations of web usability include attitude
toward a website (ATW), purchase intention, communication sharing with others, and future
use of a website (Hallahan, 2001; Wang, 2005). This study uses these variables as indicators
of individuals’ behaviors beneficial to advertisers since ultimate measures of successful web
relationship-building are consumers’ long-term engagements in behaviors that help advertisers
achieve their financial goals (Ba and Pavlou, 2002; Gefen, Karahanna, and Straub, 2003). In
sum, this study tests two hypotheses and a research question.
H1: Exposure to varied advertising and publicity messages will have a better effect on
consumers’ (a) ATW, (b) purchase intentions, (c) communication sharing, and (d)
future website uses than exposure only to either advertising or publicity messages.
H2: Exposure to similar advertising and publicity messages will have a better effect on
consumers’ (a) ATW, (b) purchase intentions, (c) communication sharing, and (d)
future website uses than exposure only to either advertising or publicity messages.
RQ: Are there significant differences between the varied and similar messages
conditions regarding consumers’ ATW, purchase intentions, communication sharing,
and future website uses?
III. METHOD
evaluations of a website and a tennis racquet featured in the website. The second page of the
booklet informed them that they would follow the instructions and review specific
information about the tennis racquet. An ad and an article were selected as two
communication forms presenting advertising and product publicity. The article was embedded
into Tennis Magazine’s website and shown in the center when participants clicked on the
article link. The same situation applied to the ad when participants clicked on the ad link
except the ad was not embedded into any third-party organizations’ websites. No participants
had heard of or bought the tested racquet before.
Participants’ attitudes toward the website were measured by asking the participants
about “how the website is for buying tennis products” where 1 indicated ‘not a good website
at all’ and 7 indicated ‘an extremely good website’ (Chen and Wells, 1999). Two questions
were asked about whether the participants would “recommend the racquet to a friend” and
“tell another friend about the racquet featured in the website” where 1 indicated ‘not likely at
all’ and 7 indicated ‘extremely likely.’ These two questions measured participants’
communication sharing with others. Finally, participants’ purchase intentions were measured
by asking the participants whether they would buy the racquet. Participants’ future website
uses were measured by asking the participants whether they would ‘use the website for future
information search about tennis products’ and ‘return to the website.’ Likert-type scales were
used to measure participants’ purchase intentions and future website uses yielding scores
ranging from 1 (extremely disagree) to 7 (extremely agree) for each.
IV. RESULTS
Table I documents the main constructs measured in this study; all available Cronbach's α
values exceeded 0.73. To test for main effects, a MANCOVA procedure was
used with participants’ ATW, purchase intentions, communication sharing, and future website
uses as the dependent variables. Message manipulation (single ad, single publicity, similar ad
and publicity, and varied ad and publicity) was used as the independent variable. The results
showed that there was a main effect for message manipulation, Wilks’ λ = .91, F (4, 311) =
6.57, p = .000, η2 = .08; the mean vectors were not equal and the set of means among
conditions were different. Thus, statistically significant differences in conditions with respect
to the dependent variables were established.
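The omnibus test described above can be sketched with statsmodels (as a plain MANOVA, since no covariate is specified here); the data frame below is simulated, not the study's data, and the condition labels are illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# simulated data: four message conditions, four 7-point dependent measures
rng = np.random.default_rng(42)
n = 120
df = pd.DataFrame({
    "condition": rng.choice(
        ["single_ad", "single_publicity", "similar", "varied"], size=n),
    "atw": rng.normal(4.5, 1.2, n),
    "pi": rng.normal(4.0, 1.4, n),
    "sharing": rng.normal(4.8, 1.5, n),
    "future_use": rng.normal(4.6, 1.5, n),
})

# omnibus multivariate test; the summary reports Wilks' lambda among others
result = MANOVA.from_formula(
    "atw + pi + sharing + future_use ~ condition", data=df).mv_test()
print(result)
```

A significant Wilks' λ for the condition term, as in the study (λ = .91, p = .000), licenses the follow-up univariate F-tests reported next.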
The tests of between-participants effects based on the individual univariate tests were
reported in Table II. H1 and H2 were partially supported, as effects of message
manipulation were established for three of the four dependent variables: ATW, F (3, 312) =
1.65, p = .178, η2 = .016 (not significant); PI, F (3, 312) = 3.17, p = .025, η2 = .03; communication
sharing, F (3, 312) = 6.13, p = .001, η2 = .06; and future website use, F (3, 312) = 5.83, p = .001, η2 = .05.
Several post hoc tests further disclosed that the participants exposed to varied messages (M =
5.25, SD = 1.07) exhibited a higher inclination toward communication sharing than the
participants exposed only to the article (M = 4.32, SD = 1.8, p = .000) or the ad (M = 4.76, SD
= 1.44, p = .005). The participants exposed to similar messages (M = 5, SD = 1.38) had a
higher inclination toward communication sharing than the participants exposed only to the
article (M = 4.32, SD = 1.8, p = .002). Moreover, the participants exposed only to the article
(M = 3.97, SD = 1.78) had a lower inclination toward future website use than the participants
exposed only to the ad (M = 4.66, SD = 1.55, p = .03), to similar messages (M = 4.83, SD =
1.52, p = .003), or to varied messages (M = 4.8, SD = 1.55, p = .000).
The research question queried whether significant differences between the varied and
similar messages conditions would materialize on participants’ ATW, purchase intentions,
communication sharing, and future website uses. The results revealed that there were no
differences between the varied and similar messages conditions on participants’ ATW,
purchase intentions, communication sharing, and future website uses.
V. CONCLUSION
The most striking findings in this study were those that illustrated the effects of
exposure to similar or varied multiple messages: the varied or similar messages had
significant effects on participants’ purchase intentions, communication sharing, and future
website uses but not on their ATW. Particularly, exposure to similar or varied messages
resulted in no different levels of ATW, purchase intention, communication sharing, and future
website use. Moreover, publicity messages seemed to generate the weakest effect on purchase
intention, communication sharing, and future website use. It seemed that participants were not
overly concerned with the publicity, and thus the priming effect did not materialize for the
varied advertising and publicity condition. In other words, varied and similar advertising and
publicity messages affected participants in the same way that repetition effects would. These
results might suggest that product review articles endorsing products on the basis of industry
research sources, like advertising, might not convey significant usability or relevance to
consumers whose evaluations of a website are influenced by a host of other factors with a
stronger influence on their ATW. For instance, the payment method, security, or interactivity
of a website could all have a strong impact on consumers' ATW.
Another possible explanation for the article's weak influence is in fact closely related to
the unregulated nature of Internet publishing. Most consumers can receive product news from
various online sources, in addition to a growing number of sites affiliated with blogs. Given
the ease of accessing the multiplicity of product news online, which at times provides
contradictory information, it is likely that consumers may not perceive product information or
news from many of these sources as highly credible. This is especially worrisome given the
recent proliferation of online articles that have little or no sourcing information. Thus, the
editorial independence of publicity from the advertisers in this multi-sourced product
information environment may also be perceived as dubious. As consumers often receive
multiple product reviews, this situation may overburden their capacity for information
processing. As such, consumers may simply ignore the significance of the product publicity.
Some limitations of this study should be acknowledged. First, the differences in the
effect sizes were very small, so the findings should be replicated before much is made of them.
One limitation stems from the study’s laboratory settings. This study suffers from the generic
limitations of all laboratory studies with forced exposure to the ad and the article. It is possible
that, under natural viewing conditions, consumers will choose not to read either the ad or the
article, and thus the observed effects of integration may not materialize. The study also did not
control for participants' perceptions of the credibility of the article and the ad, or of the
relative credibility of paid advertising versus product review articles. It is possible that
participants perceived product review articles as low in credibility and hence dismissed any
possible priming effect on the ad.
Future research can also consider other pairings of communication formats and their
degree of difference online, such as using game-playing as a priming activity and an ad as the
message. How similar do a prime and a message need to be, in order to have an increased
effect over simply viewing the similar message? Are there conditions under which a prime
plus a message will outperform a repetition effect? It is unknown whether the effects of
priming plus subsequent ad viewing or of mere repetition last longer. Thus, it is beneficial for future
research to examine the priming and repetition effects over time. In addition, websites vary
considerably based on their functionalities or the degree to which the technology can be used
adequately. Future work in designing aspects of web usability, examining the effect of such
moderating variables or their interactive effects, can further contribute to the ongoing
development of web communication strategy.
REFERENCES
UNIVERSITY BRAND IDENTITY: A CONTENT ANALYSIS OF
FOUR-YEAR U.S. HIGHER EDUCATION WEB SITE HOME PAGES
ABSTRACT
This exploratory content analysis of 1329 HEI home pages established a snapshot of
current brand identity practices of U.S. four-year HEI categories (National, Liberal Arts,
Masters, and Comprehensive). Findings indicate that the majority of HEI’s spell out their
brand name fully, utilize positioning statements, and incorporate brand symbols on their home
pages. More prominent institutions are more likely to focus attention on their brand name and
traditional academic iconic symbols (crests/shields) while less-established HEI’s aggressively
incorporate positioning statements and modern logos to establish a distinctive brand identity.
I. INTRODUCTION
HEI’s have long positioned themselves guided by the four P’s of marketing: product,
price, place, and promotion (McCarthy, 1960). Modern marketing concepts extend these four
pillars with a consumer focus, adding a parallel four C’s: consumer, cost, convenience, and
communication (Duncan, 2002). To varying degrees, all product-focused and customer-focused
variables are considered when developing an effective brand position, and this should be
done in comparison to the marketing mix of competing services (Palmer, 2005).
Students, parents, faculty, staff, and donors who experience one or more brand
messages form an image of the institution (Braxton, 1979; Kotler & Fox, 1995). The image
portrayed by HEI’s plays a critical role in public perception of the institution (Yavas &
Shemwell, 1996; Landrum et al., 1998). Many elite U.S. “National” and “Liberal Arts” HEI’s
with an image built on academic rigor and the prestige of their alumni annually turn away
thousands of applicants (Table I) (Boshier et al., 2001). Gutman and Miaoulis (2003) found
that older HEI’s in the United Kingdom (UK) are product-oriented and focus on marketing
their academic products and overall reputation. In contrast, many U.S. “Masters” and
“Comprehensive” HEI’s aggressively market themselves through traditional media and by
investing in highly visible areas of the institution such as athletics programs (Table I)
(Palmer, 2005). Newer, less-established HEI’s in the UK emphasize selling their service to
individual prospective student groups and personalizing their marketing with targeted
promotional activities (Gutman & Miaoulis, 2003). These aggressive marketing tactics are
used by HEI’s to maintain or develop a distinct image and create a competitive advantage in
an increasingly competitive market (Parameswaran & Glowacka, 1995).
TABLE I: OPERATIONAL DEFINITIONS

HEI Category: Higher Education Institution category.

National University: 248 HEI’s that offer a wide range of undergraduate majors as well as
master's and doctoral degrees; many strongly emphasize research.

Liberal Arts Colleges: 215 HEI’s that emphasize undergraduate education and award at least
50 percent of their degrees in the liberal arts.

Universities-Masters: 570 HEI’s that provide a full range of undergraduate and master's
programs but offer few, if any, doctoral programs.

Comprehensive Colleges-Bachelors: 324 HEI’s that focus primarily on undergraduate education,
just as the liberal arts colleges do, but grant fewer than 50 percent of their degrees in
liberal arts disciplines.

Brand Name Execution: Spelled-out (e.g., Harvard University); Acronym (e.g., USC);
Combination (e.g., UNC Asheville).

Positioning Statement: Intangible (e.g., “dream a little…”); Product-focus (e.g., “A
world-class education”); Customer-focus (e.g., “You Belong Here”).

Positioning Statement Presence: Positioning statement appears with the brand name, or is
isolated elsewhere on the home page.

Brand Symbol: Traditional academic icon (e.g., crest/shield) or modern logo (example images
not reproduced).
Studying online brand identity practices will be helpful in assessing how HEI’s
position themselves online in this highly competitive market. The Internet has emerged as
the single most important marketing communication tool for reaching students (Integrating
high-tech tools, 2002) and is preferred to traditional print marketing materials for its
reliability and usefulness (Wolff & Bryant, 1999).
Based on the literature, four research questions were developed for U.S. four-year HEI
home pages. RQ1: How do HEI’s execute their brand names? RQ2: To what extent do HEI’s
incorporate positioning statements into their online brand identity strategies? RQ3: To what
extent do HEI’s incorporate brand symbols into their brand identity strategies? RQ4: To what
extent do HEI positioning statements incorporate tangible and intangible attributes?
Four sets of hypotheses were developed to support these research questions. H1: The
number of National HEI’s utilizing positioning statements will be significantly less than
the number of National HEI’s that do not utilize positioning statements. H1a: The number of
Liberal Arts HEI’s utilizing positioning statements will be significantly less than the
number of Liberal Arts HEI’s that do not utilize positioning statements. H1b: The number of
Masters HEI’s utilizing positioning statements will be significantly more than the number of
Masters HEI’s that do not utilize positioning statements. H1c: The number of Comprehensive
HEI’s utilizing positioning statements will be significantly more than the number of
Comprehensive HEI’s that do not utilize positioning statements.
H2: The number of National HEI’s isolating positioning statements on their home
pages will be significantly greater than National HEI’s that associate positioning statements
directly with the brand name. H2a: The number of Liberal Arts HEI’s isolating positioning
statements on their home pages will be significantly greater than Liberal Arts HEI’s that
associate positioning statements directly with the brand name. H2b: The number of Masters
HEI’s associating their positioning statements directly with the brand name will be
significantly greater than Masters HEI’s that isolate positioning statements from their brand
name. H2c: The number of Comprehensive HEI’s associating their positioning statements
directly with the brand name will be significantly greater than Comprehensive HEI’s that
isolate positioning statements from their brand name.
H3: National HEI’s will utilize traditional academic icon brand symbols significantly
more than modern brand symbols. H3a: Liberal Arts HEI’s will utilize traditional academic
icon brand symbols significantly more than modern brand symbols. H3b: Masters HEI’s will
utilize modern brand symbols significantly more than traditional academic icon brand
symbols. H3c: Comprehensive HEI’s will utilize modern brand symbols significantly more
than traditional academic icon brand symbols.
H4: National HEI’s will use product-focused strategies more frequently than
consumer-focused strategies. H4a: Liberal Arts HEI’s will use product-focused strategies
more frequently than consumer-focused strategies. H4b: Masters HEI’s will use consumer-
focused strategies more frequently than product-focused strategies. H4c: Comprehensive
HEI’s will use consumer-focused strategies more frequently than product-focused strategies.
IV. METHODOLOGY
Two trained research assistants electronically captured all four-year U.S. HEI home
pages listed in the U.S. News and World Report database (www.usnews.com) from Jan. 15 to
Jan. 30, 2005, using Snag-it software. U.S. News and World Report first assigns schools to a
group of their peers, based on categories developed by the Carnegie Foundation for the
Advancement of Teaching. All home pages were analyzed to provide a snapshot of U.S. four-
year HEI brand identity strategies. The HEI home page was selected as the unit of analysis
because of its ability to project brand identity to all visitors (Boshier et al., 2001).
The independent variable analyzed was HEI “category” (Table I). Dependent
variables analyzed were “brand name execution,” “positioning statement,” “positioning
statement placement,” “brand symbol,” and “positioning statement strategy.” After three
rounds of coding, the two coders achieved 100 percent agreement on all variables; individual
items on which the coders could not reach agreement were removed from the study.
Chi-square analysis was used to explore any association between HEI category
(National; Liberal Arts; Masters; Comprehensive) and all dependent variables (brand name,
positioning statement, brand symbol, strategy). The Chi-square test was deemed appropriate
because all variables are categorical. Chi-square tests can be misleading if any cell has an
expected value of less than 1.0 or more than 20% of the cells have expected values less than 5.
All Chi-square tests were valid and did not exceed these parameters.
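This expected-frequency validity rule can be checked directly when running the test. The
sketch below is illustrative, not the authors' analysis: the cell counts are approximations
reconstructed from the category sizes in Table I and the percentages reported in the
Results, and the chi-square statistic is computed by hand with numpy.

```python
import numpy as np

# Approximate 4x2 table: HEI category (rows) by positioning-statement use
# (columns: yes, no). Counts reconstructed from reported percentages;
# illustrative only, not the study's exact data.
observed = np.array([
    [ 89, 159],  # National
    [100, 115],  # Liberal Arts
    [299, 271],  # Masters
    [211, 113],  # Comprehensive
], dtype=float)

row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
grand_total = observed.sum()

# Expected counts under independence, then the chi-square statistic.
expected = row_totals * col_totals / grand_total
chi2 = ((observed - expected) ** 2 / expected).sum()
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)

# Validity rule from the text: no expected cell below 1.0, and no more
# than 20% of cells with expected values below 5.
valid = expected.min() >= 1.0 and (expected < 5).mean() <= 0.20
print(f"chi2 = {chi2:.2f}, dof = {dof}, valid = {valid}")
```

With cell counts of this magnitude every expected value is far above 5, so the test is valid
by the stated rule; a p-value would then be read from the chi-square distribution with 3
degrees of freedom.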
V. RESULTS
This exploratory content analysis of 1329 HEI home pages (Table II) found that the
majority of institutions “spell out” their complete brand name (94.4%) (Table III),
incorporate “positioning statements” (51.5%) (Table IV), and utilize a “brand symbol”
(66.3%) (Table V) to establish their online brand identity.
Table VI indicates that all HEI categories feature their complete brand name
significantly more than alternative executions.
H1 and H1a were both supported (Table VII). The majority of “National” (64.2%) and
“Liberal Arts” (53.3%) HEI’s do not use positioning statements to establish their online brand
identity. H1b and H1c were also supported. The majority of “Masters” (52.5%) and
“Comprehensive” (65.0%) HEI’s incorporate positioning statements into their online brand
identity strategies.
brand name execution. H2b and H2c were not supported. The majority of “Masters” (54.1%)
and “Comprehensive” (52.8%) HEI’s isolate their positioning statements away from their
brand name more than through direct association.
VI. CONCLUSIONS
Overall, U.S. four-year HEI’s utilize a variety of brand name executions, positioning
statements, and brand symbol strategies to project brand identity to visitors of their respective
home pages. Understanding the brand identity strategies implemented by each HEI category
will provide a foundation for future research and useful information for HEI’s planning
marketing strategy relative to their competition.
Future studies should replicate portions of this study to analyze marketing
communication trends across all HEI classifications. Expanding the analysis to two-year and
international HEI’s is another direction in which this study points future researchers.
Higher education is a product that affects a great number of consumers, and this study
identified ways in which U.S. four-year HEI’s differ in their online brand identity
marketing communication strategies.
REFERENCES
Boshier, R., Brand, S., Dabiri, A., Fujitsuka, T. & Tsai, C. (2001) Virtual universities
revealed: More than just a pretty interface. Distance Education, 22 (2), pp. 212-231.
Braxton, J. M. (1979) The influence of student recruitment activities: Relationship between
experiencing an activity and enrollment. Paper presented at the annual forum of the
Association for Institutional Research (19th, San Diego, California, May 13-19).
Cook, R. & Zallocoo, R. (1983) Predicting university preference and attendance: applied
marketing in higher education administration. Research in Higher Education, 19 (2), pp.
197-211.
Duncan, T. (2002) IMC: Using advertising, & promotion to build brands. New York:
McGraw-Hill Irwin.
Evelyn, J. (2002) For many community colleges, enrollment equals capacity. Chronicle of
Higher Education, 48 (33), A41.
Gutman, J. & Miaoulis, G. (2003) Communicating a quality position in service delivery: An
application in higher education. Managing Service Quality, 13 (2), pp. 105-111.
Integrating High-Tech Tools with Traditional Recruitment Strategies (March 2002). Project
Connect: A study by Carnegie Communications, LLC, 1-10. [Online]. Available:
www.carnegiecomm.com.
Kotler, P. & Fox, K. (1985) Strategic marketing for educational institutions. Englewood
Cliffs, NJ: Prentice-Hall.
BRAND KNOWLEDGE, BRAND ATTITUDE, PURCHASES & AMOUNT WILLING
TO PAY FOR SELF & OTHERS: THIRD-PERSON PERCEPTION & THE BRAND
ABSTRACT
This study illustrates the relationship between the third-person perception hypothesis
and branding theory. Survey respondents thought others would 1) have more knowledge about
the Nike brand, 2) have a more positive attitude toward Nike, 3) be more likely to purchase
Nike shoes the next time they purchase shoes, 4) pay more for Nike shoes, 5) own more Nike
shoes, 6) be more likely to have their image of self influenced by wearing Nike shoes, and 7)
be more likely to have their image of others influenced by the wearing of Nike shoes by
others. Also, “perceived differences between self and others” and “respondents’ attitudes
toward being influenced by branding” are correlated with many aspects of branding.
I. INTRODUCTION
This study was designed to test for a relationship between third-person perception and
branding. Branding influence was divided into brand knowledge, brand attitudes, purchase
intention, amount willing to pay, past purchases, self-image, image of others, and attitude
toward being influenced by branding. The third-person effect posits that people tend to see
others as more influenced by media messages than they themselves are. The difference
between the amount of perceived influence on self and others should be evident in each of the
previously mentioned areas of branding.
Davison (1983) wrote, “individuals who are members of an audience that is exposed to
a persuasive communication (whether or not this communication is intended to be persuasive)
will expect the communication to have a greater effect on others than on themselves” (p. 3).
Since 1983, this “third-person effect” has been found in research conducted on social issues
such as: censorship of pornography (Gunther, 1995), censorship of rap lyrics (McLeod,
Eveland, & Nathanson, 1997), body image (Choi & Leshner, 2003; David & Johnson, 1998),
television violence (Hoffner, et al., 2001), direct-to-consumer prescription drug advertising
(Huh, 2003) and others.
Gender has been a variable in several third-person studies. Tiedge, Silverblatt, Havice
and Rosenfeld (1991) found that gender had no significant effect on perceived first-person
effects, perceived third-person effects, or discrepancy scores when respondents were asked
about media effects. Rojas, Shah and Faber (1996) found no significant difference in third-
person perception between male and female respondents but did find that females were more
willing to censor pornography. When surveyed about the O.J. Simpson trial and their ability
to serve as impartial jurors, women were more likely than men to exhibit a third-person
perception (Driscoll & Salwen, 1997).
Attitude toward being influenced has been a variable in third-person studies. Salwen
and Dupagne (1999), David and Johnson (1998) and Brosius and Engel (1996) found that
individuals perceive that being influenced by the media is a negative phenomenon. Perloff
(1993) found that the third-person effect is most likely to appear when people find the
communication message to not be personally beneficial, when the message is personally
important, and when they feel that the source has a negative bias. Many third-person studies
have focused on negative issues such as alcohol (David, Liu, & Myser, 2004), misogynic rap
lyrics (McLeod et al., 1997), controversial sex tapes (Chia, Lu, & McLeod, 2004),
pornography (Gunther, 1995), and television violence (Hoffner, et al., 2001). Preservation of
self-image is similar to attitude toward being influenced. If the effect of being influenced is
seen as negative, it would degrade a person’s self-image to admit to being influenced
(Gunther & Mundy, 1993). Being influenced by media or other outside forces indicates a loss
of one’s freedom (Brosius and Engel, 1996).
III. BRANDING
The next step beyond brand knowledge is brand attitude. “The most abstract and
highest-level type of brand associations are attitudes” (Keller, 1998, p. 100). If
consumers’ attitudes toward brands are positive, there is a greater likelihood of a
purchase, and a purchase is, after all, the goal of the marketing campaign. There are
several different definitions of
brand equity but “in a general sense, most marketing observers agree that brand equity is
defined in terms of the marketing effects uniquely attributable to the brand” (Keller, 1998, p.
42). Keller goes on to specifically define customer-based brand equity “as the differential
effect that brand knowledge has on consumer response to the marketing of that brand” (p. 45).
One key area in which brand equity manifests itself is pricing. Brand equity allows companies
to charge more for their branded products. Purchase intention is related to brand knowledge.
Consumers who are more familiar with a brand are more likely to show an intention to
purchase that brand (Laroche, Kim, & Zhou, 1996). Grewal, Krishnan, Baker, and Borin
(1998) also found a relationship. Respondents with high brand knowledge were more
influenced by brand name. Low knowledge respondents were likely to be influenced by
techniques like price discounts. Although respondents with high brand knowledge are more
influenced by brand name, they might not be willing to admit it. Repeat purchases can have a
significant impact on the profitability of a company. Kirmani, Sood, and Bridges (1999)
studied the relationship between current owners and line stretches. “Compared with
nonowners, most owners are likely to have greater liking, familiarity, knowledge, and
involvement with the brand” (p. 2). These attributes of owners make them likely to become
repeat purchasers.
“A consumer’s self-concept (self-image) can be defined, maintained, and enhanced
through the products they purchase and use” (Graeff, 1996). This concept makes image
congruence important to marketers. Marketers must match the image of their product with the
self-images of their consumers to have a strong appeal. Related to the concept of self-image
are the concepts of how people see others and how people think others see them. Essentially,
people are concerned about how they fit into society. Jim Crimmins, DDB’s worldwide brand
planning director, states that “We’re not marketing just to isolated individuals. We’re
marketing to society. How I feel about a brand is directly related to and affected by how
others feel about that brand” (Kranhold, 2000, p. 1).
IV. HYPOTHESES
1. Respondents will estimate that others will have more knowledge about Nike shoes than
self.
2. Respondents will estimate that others will have a more positive attitude toward Nike
shoes than self.
3a. Respondents will think that others will be more likely to purchase Nike shoes than self.
3b. Respondents will estimate that others will be willing to pay more for a pair of Nike shoes
than self. (third-person perception)
4. Respondents will estimate that others own more Nike shoes than themselves.
5a. Respondents will think that wearing Nike shoes has a stronger influence on the "image of
self" in others than in themselves.
5b. Respondents will think that others’ image of “others” will be more influenced if “others”
were wearing Nike shoes than self's image of others in the same situation.
A secondary hypothesis concerning gender was added to each of the previously noted
hypotheses. For example: H1B would read “Female respondents will show more evidence of
third-person perception, in the estimate of self and others’ knowledge about Nike, than male
respondents.”
6. Perceived "difference" between self and "others" will be positively related to the
amount of third-person perception, as measured by brand familiarity (H6A), brand attitude
(H6B), purchase intention (H6C), amount willing to pay (H6D), past purchases (H6E),
“image of self” (H6F), and “image of others” (H6G).
7. The amount of positive attitude toward being influenced by brand will be negatively
related to the amount of third-person perception, as measured by brand familiarity (H7A),
brand attitude (H7B), purchase intention (H7C), amount willing to pay (H7D), past
purchases (H7E), “image of self” (H7F), and “image of others” (H7G).
V. METHOD
The questionnaire was divided into segments according to the aspects of branding
noted in each hypothesis, including brand knowledge, brand attitude, purchase intention,
amount willing to pay, past purchases, self-image, and image of others. Cronbach’s alpha was
computed for each scale, and questions were dropped as appropriate until alpha levels were
acceptable. Response-set bias was avoided by including both positively and negatively worded
statements and by varying the order of responses. “Self” statements were placed before
“others” statements so that question order did not determine the results.
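The alpha-driven item-dropping step described above can be sketched as follows. This is a
minimal illustration, not the authors' procedure: the .70 acceptability threshold is an
assumed conventional cutoff, and `refine_scale` is a hypothetical helper name.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of scale totals
    return k / (k - 1) * (1 - item_vars / total_var)

def refine_scale(items: np.ndarray, threshold: float = 0.70) -> list:
    """Drop items one at a time until alpha reaches the threshold.

    Hypothetical helper: keeps at least two items, and at each step removes
    the item whose deletion raises alpha the most.
    """
    cols = list(range(items.shape[1]))
    while len(cols) > 2 and cronbach_alpha(items[:, cols]) < threshold:
        drop = max(cols, key=lambda c: cronbach_alpha(
            items[:, [x for x in cols if x != c]]))
        cols.remove(drop)
    return cols
```

For perfectly consistent items (identical columns) alpha is exactly 1.0, so no items are
dropped; noisier items pull alpha down and are candidates for removal.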
VI. RESULTS
Hypotheses 1A, 2A, 3aA, 4A, 5aA, and 5bA were supported. Third-person perception
was evident in brand knowledge F(1, 173) = 33.46, p < .01, brand attitude F(1, 172) = 41.76,
p < .01, purchase intention F(1, 173) = 105.65, p < .01, amount willing to pay F(1, 164) =
93.47, p < .01, past purchases F(1, 167) = 6.07, p < .05, image of self F(1, 174) = 33.13, p <
.01, and image of others F(1, 173) = 16.90, p < .01.
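The F(1, df) values above are within-subject self-versus-others contrasts; with a single
two-level factor, F equals the square of a paired t statistic. A minimal numpy sketch with
hypothetical ratings (not the study's data):

```python
import numpy as np

# Hypothetical 1-7 ratings of perceived branding influence on self versus
# on others, one pair per respondent (illustrative only).
self_scores  = np.array([3, 2, 4, 3, 2, 3, 4, 2, 3, 3], dtype=float)
other_scores = np.array([5, 4, 5, 4, 4, 5, 6, 4, 5, 4], dtype=float)

d = other_scores - self_scores            # third-person discrepancy per respondent
t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))   # paired t statistic
F = t ** 2                                # equals F(1, n-1) for this contrast

# A positive t (others rated higher than self) is the third-person pattern.
```

The p-value would then come from the t distribution with n - 1 degrees of freedom (or,
equivalently, the F distribution with 1 and n - 1 degrees of freedom).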
Gender was significant only for hypothesis 3aB, F(1, 173) = 4.10, p = .05. Female
respondents showed more evidence of third-person perception than male respondents as
associated with purchase intention. Hypotheses 1B, 2B, 3bB, 4B, 5aB, and 5bB were not
supported.
A mixed relationship was found between the perceived difference between self and
others and the amount of third-person perception associated with each dependent variable.
Dependent variables with a significant relationship are brand knowledge (hypothesis 6A;
r=.20; p=.01), brand attitude (hypothesis 6B; r=.18; p=.02), past purchases (hypothesis 6E; r=-
.17; p=.03), image of self (hypothesis 6F; r=.19; p=.01), and image of others (hypothesis 6G;
r=.25; p=.00). However, a significant relationship was not found with purchase intention
(hypothesis 6C) or amount willing to pay (hypothesis 6D).
A mixed relationship was also found between the amount of positive attitude toward
being influenced by brand and the amount of third-person perception associated with each
dependent variable. Dependent variables with a significant relationship are brand knowledge
(hypothesis 7A; r=.26; p=.00), brand attitude (hypothesis 7B; r=.32; p=.02), purchase intention
(hypothesis 7C; r=.26; p=.00), past purchases (hypothesis 7E; r=-.36; p=.00), and image of
self (hypothesis 7F; r=.20; p=.01). The significant relationships, except past purchases, were
in the opposite direction of the proposed relationship. A significant relationship was not found
with amount willing to pay (hypothesis 7D) or image of others (hypothesis 7G).
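The r values reported in these paragraphs are correlations between per-respondent scores; a
minimal sketch with hypothetical scores (not the study's data):

```python
import numpy as np

# Hypothetical per-respondent scores: perceived self-other "difference"
# and a third-person perception discrepancy score (others minus self).
perceived_difference = np.array([1, 2, 2, 3, 4, 4, 5, 5, 6, 7], dtype=float)
tpp_discrepancy      = np.array([0, 1, 1, 2, 2, 3, 3, 4, 4, 5], dtype=float)

# Pearson correlation; a positive r means respondents who see themselves
# as more different from others also show a larger third-person discrepancy.
r = np.corrcoef(perceived_difference, tpp_discrepancy)[0, 1]
```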
VII. CONCLUSION
This study illustrates the relationship between the third-person perception hypothesis
and branding theory. Respondents perceived others as being more influenced than self on
every aspect of branding. Respondents thought others would 1) have more knowledge about
the Nike brand, 2) have a more positive attitude toward Nike, 3) be more likely to purchase
Nike shoes the next time they purchase shoes, 4) pay more ($34.77 more on average) for Nike
shoes, 5) own more Nike shoes, 6) be more likely to have their image of self influenced by
wearing Nike shoes, and 7) be more likely to have their image of others influenced by the
wearing of Nike shoes by others. The findings underscore the importance of the old phrase
that “advertising is a battle of perception.” Consumers’ own thoughts about a brand may not
be as important as what they think others think about the brand. This finding is also
consistent with the “double jeopardy” theory in which the leading brand maintains its position
because it is the leading brand. Perceived difference between self and others is an important
aspect of third-person perception and was found to be correlated with many of the aspects of
branding mentioned in this study. These correlations show that branding and third-person
perception are also related to one another and possibly share some of the same principles.
Future research can further explore this relationship to test if brand influence can be measured
by measuring third-person perception.
Rather than the hypothesized negative relationship between a positive attitude toward
being influenced by branding and the branding variables, a positive relationship emerged.
The more respondents knew about Nike shoes, liked Nike shoes, intended to purchase Nike
shoes, and felt their self-image was improved by wearing Nike shoes, the more positive their
attitude toward being influenced by branding. It is impossible to determine exactly how
respondents interpreted the survey statements, but it is possible that the final scale was
answered in the context of the Nike survey rather than as an attitude toward being
influenced by branding in general; being positively influenced by a brand one knows and
likes is plausible. The reliability of the scale was low and should be improved in future
research on this topic.
REFERENCES
Brosius, H. B., and Engel, D. The causes of third-person effects: Unrealistic optimism,
impersonal impact, or generalized negative attitudes towards media influence?.
International Journal of Public Opinion Research., 8, 1996, 142-163.
Chia, S. C., Lu, K., & McLeod, D. M. Sex, lies, and video compact disc: A case study of
third-person perception and motivations for media censorship. Communication
Research., 31, (1), 2004, 109-130.
Choi, Y., & Leshner, G. Who are the “others”? Third-person effects of idealized body image
in magazine advertisements. Paper presented at the Association for Education in
Journalism and Mass Communication Conference, Kansas City, MO. 2003, August.
David, P., and Johnson, M. A. The role of self in third-person effects about body image.
Journal of Communication., 48, (4), 1998, 37-58.
David, P., Liu, K., & Myser, M. Methodological artifact or persistent bias?
Testing the robustness of the third-person and reverse third-person effects for alcohol
messages. Communication Research., 31, (2), 2004, 206-233.
Davison, W. P. The third-person effect in communication. Public Opinion Quarterly., 47,
1983, 1-15.
Driscoll, P. D., Salwen, M. B. Consequences of third-person perception in support of press
restrictions in the O.J. Simpson trial. Journal of Communication., 47, (2), 1997, 60-78.
Duncan, T. Principles of Advertising & IMC (2nd ed.). Boston: McGraw-Hill, 2005.
Eveland Jr., W. P., Nathanson, A. I., Detenber, B. H., & McLeod, D. M. Rethinking the social
    distance corollary: Perceived likelihood of exposure and the third-person perception.
    Communication Research, 26, (3), 1999, 275.
Graeff, T. R. Using promotional messages to manage the effects of brand and self-image on
brand evaluations. Journal of Consumer Marketing., 13, 1996, 4-18.
Grewal, D., Krishnan, R., Baker, J., & Borin, N. The effect of store name, brand name and
    price discounts on consumers’ evaluations and purchase intentions. Journal of
    Retailing., 74, (3), 1998.
Gunther, A. C. Overrating the X-rating: The third-person perception and support for
censorship of pornography. Journal of Communication., 45,(1), 1995, 27-39.
Gunther, A. C., & Mundy, P. Biased optimism and the third-person effect. Journal of
Communication., 70, 1993, 58-67.
Hoffner, C., Plotkin, R. S., Buchanan, M., Anderson, J. D., Kamigaki, S. K., Hubbs, L. A., et
al. The third-person effect in perceptions of the influence of television violence.
Journal of Communication., 51, (2), 2001, 283-299.
Huh, J. (2003). Perceived effects, mediating influences, and behavioral outcomes of
direct-to-consumer prescription drug advertising: Applying the third-person effect
framework (Doctoral dissertation, The University of Georgia, 2003). Dissertation
Abstracts International, 64(4), 2690.
Keller, K. L. Strategic brand management: Building, measuring, and managing brand
    equity. Upper Saddle River, NJ: Prentice Hall, 1998.
Kirmani, A., Sood, S., & Bridges, S. The ownership effect in consumer responses to brand
line stretches. Journal of Marketing., 63, (1), 1999, 88-101.
Kohli, C., & Thakor, M. Branding consumer goods: insights from theory and practice.
Journal of Consumer Marketing., 14, (3), 1997, 206-219.
Kranhold, K. Agencies beef up research to identify customer preferences. Wall Street
Journal., 2000, March 9, pp. B14.
Laroche, M., Kim, C., & Zhou, L. Brand familiarity and confidence as determinants of
purchase intention: An empirical test in a multiple brand context. Journal of Business
Research., 37, 1996, 115-121.
McLeod, D. M., Eveland Jr., W. P., & Nathanson, A. I. Support for censorship of violent and
misogynic rap lyrics: An analysis of the third-person effect. Communication
Research., 24, (2), 1997, 153-174.
Perloff, R. M. Third-person effect research 1983-1992: A review and synthesis.
International Journal of Public Opinion Research., 5, (5), 1993, 167-184.
Price, V. and Tewksbury, D. (1996). Measuring the third-person effect of news: the impact of
question order, contrast and knowledge. International Journal of Public Opinion
Research, 8, 120-142.
Price, V., Tewksbury, D., and Huang, L. Third-person effects on publication of a Holocaust-
    denial advertisement. Journal of Communication., 48, (2), 1998, 3-26.
Richards, I., Foster, D., & Morgan, R. Brand knowledge management: Growing brand equity.
Journal of Knowledge Management., 2, 1998, 47-54.
CONGRUENCY IN STRATEGIC CORPORATE SOCIAL RESPONSIBILITY:
CONSUMER ATTITUDE TOWARD THE COMPANY & PURCHASE INTENTION
ABSTRACT
This study examines the effects of the corporate sponsorship that supports social
marketing programs as a part of corporate social responsibility (CSR) and the congruency
effect of sponsorship linkage that impacts consumers’ attitudes toward the sponsor and
purchase intentions. Through empirical research, this study found that Public Service
Advertisements (PSAs) with a congruent linkage between the sponsor and the sponsored
marketing program are more persuasive than those with an incongruent linkage. Participants
who watched a PSA congruent with the sponsoring company viewed the company more favorably
and reported greater purchase intention.
I. INTRODUCTION
Social responsibility in modern society requires a company to be a corporate citizen
conforming to the needs of customers in addition to supplying good products at a reasonable
price and thus contributing to society (Lee, 2002). In the long term, pro-social activities
are profitable not only to society and people, but also to stakeholders and the company
itself, by improving the company image and further increasing sales. Some scholars who take
this economic view of corporate engagement in good deeds call it “social investment” (Stump,
1999). In this view, profit and social responsibility are not separate issues. Companies
consider their own interests before the desire to do good deeds: if activities such as
encouraging employees, improving the company image, and reducing government intervention are
profitable, companies will participate in social responsibility more actively.
II. REVIEW OF LITERATURE
Corporate Social Responsibility
Business is a social institution and thus obliged to use its power responsibly.
Wood (1991) described three driving principles of social responsibility, which are:
• society grants legitimacy and power to business, and businesses that abuse that power
stand to lose it;
• businesses are responsible for the outcomes relating to their areas of involvement with
society; and
• individual managers are moral agents who are obliged to exercise discretion in their
decision making.
Lantos (2001) classified CSR into three types based on their nature (required versus optional)
and purpose (for stakeholders’ good, for the firm’s good, or for both): ethical CSR, altruistic
CSR, and strategic CSR. Ethical CSR is “morally mandatory and goes beyond fulfilling a
firm’s economic and legal obligations, to its responsibilities to avoid harms or social injuries,
even if the business might not benefit from this” (p. 605).
Theoretical Framework
Studies of successful social marketing programs have found that social marketing
programs that are strongly related to products and the category of the sponsoring company
were effective in achieving desired short-term effects. Here, we can assume that if the
message of a social marketing program and the category of the sponsoring company or its
product are congruent (matched), it might be possible to increase the effectiveness of the
message, while at the same time increasing awareness of the product or company. To
understand the match-up effect, it is necessary to touch on associative learning theory. The key
variable in an associative link between two concepts (such as a brand and an endorser) is
“belongness, relatedness, fit, or similarity” (Till & Busler, 2000). Generally, the more similar
two concepts are, the more likely the two concepts will become integrated within an
associative network (e.g., Hamm, Vaitl, & Lang, 1989). The associative link between a brand
and an endorser predicts endorser effects and match-up effects.
III. METHOD
The purpose of this study was to examine how the congruency of PSA campaign and
the category of the sponsoring company affect viewer attitudes toward the company and
viewer intentions to purchase products from the company.
Instrumentation
Testing Stimuli. Two Ad Council PSA campaigns were selected for this experiment.
One is a PSA campaign that has possible semantic relevance for for-profit companies. The
other PSA campaign was not relevant to for-profit companies. In this study, the PSA
campaign selected from the first category was infant and child nutrition promotion, while the
PSA campaign selected from the second category was preventing drunk driving. In addition,
among many product categories, two food companies were selected for congruence with the
infant and child nutrition campaign. One company was familiar, and the other was not. This
research project was part of a larger investigation that also included the participant’s
familiarity with the company. The familiarity results are not presented in this article.
Treatment. Participants watched a 14-minute video clip containing three news segments taken
from a CNN Headline News broadcast and three breaks. Breaks #1 and #3 consisted of
randomly assigned commercials and PSAs without for-profit sponsorship and were presented
identically to all participants. Break #2, the treatment, consisted of two commercials (the same
for all participants) and a PSA that was manipulated for the experiment. Sponsorship was
manipulated by showing brief text (e.g., “sponsored by …. “) with the sponsor’s logo on a
black screen at the end of each PSA. The logo of the unfamiliar sponsor was created for the
study.
The first group saw a PSA that was congruent with its sponsor, which was a familiar
company; the second group saw a PSA that was also congruent with its sponsor, but which
was an unfamiliar company; the third group saw a PSA that was incongruent with its sponsor,
which was a familiar company; and the last group saw a PSA that was incongruent with its
sponsor, which was an unfamiliar company.
IV. RESULTS
To test the hypotheses, a factorial ANOVA (analysis of variance) was conducted with
familiarity of the sponsoring company (familiar versus unfamiliar) and congruency of the
PSA campaign with the category of the sponsoring company (congruent versus incongruent)
as between-subjects factors. Aggregated scores were divided by the number of students in
each group, and group scores were compared across familiar versus unfamiliar sponsors and
congruent versus incongruent sponsor-PSA pairings to determine the significance of the main
effects and the interaction at the .05 level. First, reliability for the attitude and purchase
intention measures was checked and the congruency and familiarity manipulations were
verified. Then the main effects were analyzed.
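The 2 × 2 between-subjects factorial analysis described above can be sketched as follows.
This is a minimal pure-Python illustration using hypothetical attitude scores (the study's raw
data are not reproduced here), and it assumes a balanced design for simplicity:

```python
# Sketch of a 2 x 2 between-subjects factorial ANOVA (congruency x familiarity).
# Data are illustrative only, not the study's.

def two_way_anova(cells):
    """cells: dict mapping (factor_a_level, factor_b_level) -> list of scores.
    Returns F ratios for factor A, factor B, and the A x B interaction.
    Assumes a balanced 2 x 2 design."""
    scores = [x for obs in cells.values() for x in obs]
    n_total = len(scores)
    grand = sum(scores) / n_total

    a_levels = sorted({a for a, _ in cells})
    b_levels = sorted({b for _, b in cells})

    def level_ss(levels, index):
        # between-groups sum of squares for one factor's marginal means
        ss = 0.0
        for lv in levels:
            group = [x for key, obs in cells.items() if key[index] == lv for x in obs]
            m = sum(group) / len(group)
            ss += len(group) * (m - grand) ** 2
        return ss

    ss_a = level_ss(a_levels, 0)
    ss_b = level_ss(b_levels, 1)

    ss_cells = 0.0
    ss_within = 0.0
    for obs in cells.values():
        m = sum(obs) / len(obs)
        ss_cells += len(obs) * (m - grand) ** 2
        ss_within += sum((x - m) ** 2 for x in obs)
    ss_ab = ss_cells - ss_a - ss_b  # interaction sum of squares

    df_within = n_total - len(cells)
    ms_within = ss_within / df_within
    # each factor has 2 levels, so df = 1 for A, B, and A x B
    return ss_a / ms_within, ss_b / ms_within, ss_ab / ms_within

# Illustrative data: congruent PSAs yield higher attitude scores.
data = {
    ("congruent", "familiar"):     [5.2, 5.0, 5.3, 4.9],
    ("congruent", "unfamiliar"):   [5.1, 4.8, 5.2, 5.0],
    ("incongruent", "familiar"):   [3.9, 4.1, 3.8, 4.0],
    ("incongruent", "unfamiliar"): [4.0, 3.7, 3.9, 4.1],
}
f_congruency, f_familiarity, f_interaction = two_way_anova(data)
```

With data patterned like the reported results, the congruency F ratio dominates while the
familiarity main effect and the interaction stay small, mirroring Table 1.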
Preliminary Analysis
Reliability. Four items for attitude toward the company and four items for purchase
intention for products of the company were checked for reliability. The alpha scores were
.819 for attitude and .839 for purchase intention, both higher than the .75 recommended for
basic research by Wimmer and Dominick (2003).
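The reliability coefficient reported here is Cronbach's alpha. A minimal sketch of the
computation, using hypothetical four-item ratings rather than the study's data:

```python
# Cronbach's alpha for a multi-item scale: k/(k-1) * (1 - sum(item variances)
# / variance of total scores). Ratings below are hypothetical.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(item_scores):
    """item_scores: list of per-item score lists, one inner list per item."""
    k = len(item_scores)
    totals = [sum(person) for person in zip(*item_scores)]  # per-respondent totals
    item_var = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Four attitude items rated by six hypothetical respondents (1-7 scale).
items = [
    [5, 6, 4, 7, 5, 3],
    [5, 7, 4, 6, 5, 3],
    [4, 6, 5, 7, 4, 3],
    [5, 6, 4, 6, 5, 2],
]
alpha = cronbach_alpha(items)
```

Because the hypothetical items covary strongly, alpha lands above the .75 threshold the
authors cite.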
Attention to news items. To determine how many subjects paid attention to the experimental
stimuli, their ability to choose the correct answers was analyzed. A cumulative index score
was created by adding the number of correct answers across seven news-item questions.
Analysis of participants’ recall of the news items showed that the overall level of attention to
the experiment did not differ significantly across groups.
Participants
The participants in this study were 193 graduate and undergraduate students (45.3%
male, 54.7% female) from various departments at a Midwestern university. Participants were
awarded a nominal amount of extra credit in exchange for their participation and were
randomly assigned to four groups. Twenty responses were discarded because the participants
indicated they had never heard of the company, and one was discarded because of incomplete
answers, leaving responses from 172 participants (89%) for analysis.
Main Analyses
Hypothesis 1 predicted that a PSA congruent with the category of the company would
generate a more favorable attitude than an incongruent PSA. According to the ANOVA
results, a congruent PSA (the child nutrition PSA sponsored by Post or D-food) (M = 5.08)
created a more favorable attitude toward the company (F(1, 172) = 47.452, p < .001) than an
incongruent PSA (the preventing-drunk-driving PSA sponsored by Post or D-food)
(M = 3.91). Therefore, H1 was supported (Table 1).
Table 1. The ANOVA results for attitude toward the sponsoring companies.

                                  F        p
Corrected Model              16.224     .000
Intercept                  2777.876     .000
Congruency                   47.452     .000
Familiarity                    .096     .757
Congruency × Familiarity       .391     .532
Hypothesis 2 stated that a congruent PSA would generate higher purchase intentions
than an incongruent PSA. ANOVA results (see Table 2) showed that subjects who watched a
PSA congruent with its sponsor (M = 4.31) reported higher purchase intentions (F(1, 172) =
7.015, p = .009) than those who watched an incongruent PSA (M = 3.92). Therefore, H2 was
also supported.
Table 2. The ANOVA results for purchase intention for products of the sponsoring company.

                                  F        p
V. CONCLUSION
This research examined the congruency effect of sponsor and sponsored social
marketing programs in terms of attitude toward the sponsor and purchase intention for the
products of the sponsor. Overall, the results of the current study suggest that congruency
between sponsor and sponsored social marketing program can be an important factor for
effective marketing communication. Hypotheses 1 and 2 were supported, showing that
sponsors who are closely associated with a congruent PSA campaign are more persuasive than
when they are associated with an incongruent PSA campaign. A congruent PSA was more
likely to generate favorable attitudes and purchase intentions for the products of the sponsor
than an incongruent PSA.
Empirical research on the match-up hypothesis asserts that a positive image from
“doing good things” is transferred to purchase intentions by matching sponsor to a relevant
social marketing program (Kahle & Homer, 1985). The match-up hypothesis asserts the
attitude-to-behavior process. This study confirms that the congruency effect also applies to
the sponsorship linkage between a company and a sponsored social marketing program,
supporting the findings of McDaniel (1999) that a matched sponsor-sponsored event
relationship is more effective than a mismatched relationship in terms of attitude and
purchase intentions.
REFERENCES
Andreasen, A. R. “Social marketing: Its definition and domain.” Journal of Public Policy and
Marketing, 13(1), 1994, 108-114.
Brenkert, G.B. “Private corporations and public welfare.” In R.A. Larmer. (ed.). Ethics in the
workplace: Selected readings in business ethics. Minneapolis/ St. Paul, MN: West
Publishing Company, 1996.
D’Astous, A., and Bitz, P. “Consumer evaluations of sponsorship programmes.” European
Journal of Marketing, 29(12), 1995, 6-22.
Hamm, A.O., Vaitl, D., and Lang, P.J. “Fear conditioning, meaning, and belongingness: A
selective association analysis.” Journal of Abnormal Psychology, 98(4), 1989, 395-
406.
Hastings, G.B. “Sponsorship works differently from advertising.” International Journal of
Advertising, 3, 1984, 171-176.
Kahle, L.R., and Homer, P. “Physical attractiveness of the celebrity endorser: A social
adaptation perspective.” Journal of Consumer Research, 11(March), 1985, 954-961.
Lantos, G.P. “The boundaries of strategic corporate social responsibility.” Journal of
Consumer Marketing, 18(7), 2001, 595-630.
Lee, S.M. “Corporate social responsibility: The comparison of corporate social contribution
between Korea and America.” Korean Sociological Association, 36(2), 2002, 77-111.
McDaniel, S.R. “An investigation of match-up effects in sports sponsorship advertising: the
implications of consumer advertising schemas.” Psychology and Marketing, 16(2),
1999, 163-184.
McWilliams, A., and Siegal, D. “Corporate social responsibility: a theory of the firm
perspective.” Academy of Management Review, 26(1), 2001, 117-127.
Meenaghan, T. “The role of sponsorship in the marketing communication mix.” International
Journal of Advertising, 10(1), 1991, 35-47.
Quester, P.G., and Thompson, B. “Advertising and promotion leverage on arts sponsorship
effectiveness.” Journal of Advertising Research, 41(1), 2001, 33-47.
Rodgers, S. “The effects of sponsor relevance on consumer reactions to internet
sponsorships.” Journal of Advertising, 32(4), 2003-4, 67-76.
INTERNET ADVERTISING AND ITS REFLECTION OF
AMERICAN CULTURAL VALUES
ABSTRACT
This study explores dominant cultural values in Internet advertising of the top 100
U.S. Web sites. The findings reveal that Internet advertising reflects more utilitarian values
than symbolic values. The study also found that the type of advertising appeal is associated
with product categories. The results indicate that Internet advertising reflects a convergence of
the typical cultural norms of the American society and the particular features of Internet
advertising medium. The dominance of utilitarian values in the U.S. banner advertisements
fits the American low-context culture, which prefers logical and factual manners to
communicate thoughts and actions.
I. INTRODUCTION
Cultural value is a complex and multifaceted construct. The term “value” has been
defined as “an enduring belief that a specific mode of conduct or end-state of existence is
personally or socially preferable to alternative modes of conduct or end-states of existence”
(Rokeach, 1968, p. 160). Pollay (1983) developed a measurement scheme to describe the
cultural character of advertising. He recognized that cultural values, norms, and characteristics
are integrated into advertising appeals, which are specific approaches advertisers use to
communicate how their products will satisfy customer needs. Constructing a list of 42 ad
appeals, Pollay’s measurement covered almost all common cultural values in advertising and
was conventionally applied in later advertising cultural studies. For example, Cheng &
Schweitzer (1996) conducted a comparative study of the cultural values reflected in Chinese
and U.S. television commercials. They constructed a list of 30 cultural values based on
Pollay’s measurement of the advertising appeals. Their study found three dominant appeals in
U.S. TV commercials: “enjoyment,” “individualism,” and “economy.” Described as the two
most common approaches used in advertising (Snyder & DeBono, 1985), utilitarian and
symbolic ad appeals have been discussed extensively in the advertising literature. Symbolic
appeal holds a creative objective to produce an image of the generalized user of the advertised
product and evoke emotions and thinking. Utilitarian appeal highlights the functional features
of the product, such as the performance, quality and price (Johar & Sirgy, 1991).
1996), which refers to the progression of a certain advertising industry. Leiss et al. (1990)
identified a historical pattern of advertisement growth in a historical analysis of U.S.
advertisements. The pattern indicates a shift from “informational to symbolic presentation” of
human values in advertising, as the advertising industry grows “mature.” However, the
advertising evolution discussed relates only to the pre-Internet advertising industry.
RQ1: What are the dominant cultural values presented in U.S. online banner advertising?
According to the literature, U.S. Internet advertising, like advertising in other media,
will reflect cultural values of the American society that creates it. At the same time, Internet
advertising is expected to convey some special themes such as “technology” and
“information” with respect to its unique communication technology characteristics.
H1: Internet advertising reflects more utilitarian values than symbolic values.
According to Leiss et al.’s (1990) historical pattern of advertising evolution, the preference
for utilitarian or symbolic values reflects the maturity level of an advertising industry. Unlike
users of other media, Internet users actively look for information (Sterne, 1997), which may
lead Internet advertising to carry more utilitarian values that consumers can perceive
immediately.
H2: The type of advertising appeal is associated with the type of products (product
categories).
As the literature indicates, product-related characteristics have much to do with advertising
appeal. This hypothesis would test whether the type of advertising appeal in the Internet
advertising is associated with product-related characteristics, in this case, product categories.
II. METHOD
This study employed a content analysis of banner ads selected from the top 100
U.S. Web sites. The ranking of the top 100 Web sites was provided by the monthly Internet
Ratings report from PC DataOnline, one of the three leading companies that specialize in
ranking Web site popularity. Web site popularity was ranked by unduplicated audience reach
and number of unique visitors. The sampling process lasted 10 days, from May 19, 2003 to
May 28, 2003. All banner ads on the home page or the front page were collected. A total of
268 banner ads from the top 100 most popular U.S. Web sites were downloaded for the study.
Banner Ad refers to a typically rectangular graphic element that acts as an advertisement and
entices the viewer to click it for further information. They often use GIF, Java, or Shockwave
animations and include some text, such as a phrase or a slogan and the advertiser’s name or a
Web address. If the user finds the ad intriguing enough, he or she may click on the ad, which
activates an embedded link, to visit the advertiser’s Web site (Sterne, 1997).
Cultural value refers to intangible beliefs, norms and characteristics that are embedded in
advertising appeals (Zhang and Gelb, 1996). The measurement of cultural value in this study
is largely based on Cheng and Schweitzer’s (1996) measurement instrument. It contains 32
items, of which 29 were borrowed or modified from Cheng and Schweitzer’s study.
Advertising appeals are the specific approaches advertisers use to communicate how their
products will satisfy customer needs. Advertising appeals are typically carried in the
illustration and headlines of the ad (Arens & Bovee, 1994). They are divided into two groups:
utilitarian value and symbolic value. Utilitarian value involves informing consumers of one or
more key benefits that are perceived to be highly functional or important to the target
consumers. Utilitarian value emphasizes product features or qualities, such as “convenience,”
“economy,” and “effectiveness.” Symbolic value evokes a wide range of emotional responses
from the ad’s audience. Symbolic values are those suggesting human emotions such as
“enjoyment,” “individualism,” and “social status.”
Product Categories. The products or services advertised in Internet banner ads are divided
into 16 categories. The product categories of Cheng and Schweitzer’s (1996) study were
modified to create the list for this study by adding Internet-related categories such as online
shopping and online community/service.
The unit of analysis is one banner ad on the homepage or front page of each Web site. If a
Web site did not post ads on its homepage, the front page was used to collect the banner
ads. Two coders participated in the coding. A subsample of 40 banner ads was used to test
intercoder reliability, calculated with Scott’s (1955) pi formula. Intercoder agreement
averaged 87.7%, which is higher than the 85% standard for content analysis (Kassarjian,
1977).
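Scott's (1955) pi corrects observed agreement for the agreement expected by chance, with
expected agreement computed from category proportions pooled across both coders. A
minimal sketch using hypothetical codes rather than the study's coding data:

```python
# Scott's pi for two coders' nominal codes on the same units.
# pi = (Po - Pe) / (1 - Pe), where Pe pools category proportions over both coders.

def scotts_pi(coder1, coder2):
    assert len(coder1) == len(coder2)
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    # expected agreement uses category proportions pooled over both coders
    pooled = coder1 + coder2
    expected = sum((pooled.count(c) / (2 * n)) ** 2 for c in set(pooled))
    return (observed - expected) / (1 - expected)

# Hypothetical codes ("U" = utilitarian, "S" = symbolic) for 10 banner ads.
c1 = ["U", "U", "S", "U", "S", "U", "U", "S", "U", "U"]
c2 = ["U", "U", "S", "U", "U", "U", "U", "S", "U", "U"]
pi = scotts_pi(c1, c2)
```

Here the coders agree on 9 of 10 ads (Po = .90), but pi is lower because chance agreement
is substantial when one category dominates.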
III. FINDINGS
The analysis of the data revealed the six most dominant values: “economy” (25.7%),
“effectiveness” (12.3%), “incentive” (11.2%), “enjoyment” (10.1%), “informative” (6.7%),
and “convenience” (6.7%). The four most dominant product categories were “business and
finance” (17.6%), “computer and Internet product” (16.0%), “online community/service”
(14.6%), and “entertainment” (9.4%).
Among the 16 product categories examined, 13 were more utilitarian-centered and three
were more symbolic-centered in percentage terms. The hypothesis that online advertising
reflects more utilitarian values than symbolic values was supported: about three-fourths
(75%) of banner ads reflected utilitarian values, whereas 25% displayed symbolic values.
The findings also supported the second hypothesis that the type of advertising appeal is
related to the type of product (product category) (χ² = 27.74, p < .05). The chi-square test
shows a statistically significant relationship between the type of advertising appeal
(utilitarian versus symbolic values) and the product category (the type of product/service).
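The chi-square test of independence used here compares observed cell counts with the counts
expected if appeal type and product category were unrelated. A minimal sketch with
hypothetical counts (the study's full contingency table is not reproduced in this summary):

```python
# Pearson chi-square statistic for a contingency table (list of rows).
# Expected count for cell (i, j) = row_total_i * col_total_j / grand_total.

def chi_square(table):
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / n
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical counts: rows = product categories, cols = (utilitarian, symbolic).
counts = [
    [40, 7],   # business and finance
    [36, 7],   # computer and Internet product
    [30, 9],   # online community/service
    [8, 17],   # entertainment
]
x2 = chi_square(counts)
```

With counts patterned like the findings (entertainment skewing symbolic, the rest
utilitarian), the statistic comfortably exceeds the .05 critical value of 7.815 for df = 3.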
Table 1. Cultural Values In Internet Advertising of the Top 100 U.S. Web Sites
The results showed that the type of advertising appeal was more likely to be associated
with the product categories that possess the characteristics of the type of advertising appeal.
For example, “online service/ community,” “computer and Internet product,” and “business
and finance” are utilitarian-centered, whereas “entertainment” is more symbolic-oriented.
“Online service/community” ads reflected utilitarian values such as “informative” and
“economy”; “computer and Internet products” ads demonstrated utilitarian values such as
“effectiveness” and “economy”; and “business and finance” ads featured utilitarian values
such as “economy.” Meanwhile, “entertainment” ads reflected more symbolic values such as
“enjoyment.”
IV. CONCLUSION
This study found six dominant cultural values reflected in U.S. online advertising. This
value profile showed a convergence of the typical culture of the American society and the
particular features of Internet advertising environment. The findings are consistent with
Cheng and Schweitzer’s (1996) television advertising study on dominant cultural values in
advertising. To survive in the highly competitive U.S. market, advertisers have to provide
solid information in advertising to attract their target consumers. Symbolic values evoke
human emotions, which are often intangible in a quick glimpse. It is not surprising that
utilitarian values, which convey more tangible and straightforward information, would be
favored in online ads strategies.
The United States is regarded as a “low-context culture,” which relies heavily on its
Western rhetoric and logic tradition to relate thoughts and actions to people and their
environment (Hall and Hall, 1987). Accordingly, advertisements in the United States are
presented in a more factual and logical manner. The low-context culture of the United States
orients its Internet banner ads toward utilitarian values that emphasize product attributes
and quality. The dominance of utilitarian values also fits the unique
characteristics of the online advertising environment. The small amount of space for banner
ads on a Web page requires conciseness and brevity. Also, different from other media users,
Internet users are actively looking for information (Sterne, 1997). Online users browse Web
pages and links so quickly that most of them seldom give a second glance to ads. Therefore,
messages in banner ads have to be direct and enticing instead of being symbolic and subtle.
Banner ads containing “economy,” “convenience” or “effectiveness” are usually clear,
straightforward, and compelling.
The United States is regarded as typical of the Western culture (Belk et al., 1985). The
value profile of the six dominant values in this study indicates that online advertising in many
ways conveys cultural norms of the Western societies. Like other traditional media, U.S.
online advertisers have tailored ads to reflect the prevalent cultural perceptions of the Western
societies, such as “enjoyment” and “incentive.” Meanwhile, the particular characteristics of
the Internet medium in some way have influenced Internet advertisers’ preference of cultural
values such as “informative.”
The result of hypothesis 2 yielded evidence consistent with previous studies of the
relationship between product categories and cultural values. Cultural values vary greatly
from one ad category to another. By looking into the cultural value differences across ad
categories, this
study found that the type of advertising appeal was associated with the product categories.
This observed tendency seems to be consistent with the traditional view of advertising
effectiveness that a particular ad appeal is contingent on the type of product being advertised.
Johar and Sirgy (1991) found that value-expressive (symbolic) advertising appeals are
effective when the product is value-expressive (symbolic), whereas utilitarian appeals are
effective when the product is utilitarian. Although the overall value percentages of these
categories exhibit a pattern consistent with Johar and Sirgy’s findings, parts of the findings
of this study suggested something different. For example, the value with the highest
percentage in the “business and finance” category is “competition,” which is not a utilitarian
value but a symbolic value. Also, the most frequently found value in “entertainment” is
“incentive,” which is not a symbolic value but a utilitarian value. As Johar and Sirgy (1991)
pointed out,
the pattern is moderated by a variety of factors including product-related characteristics such
as product life cycle, scarcity, and differentiation or consumer-related factors such as
consumer involvement, prior knowledge and self-monitoring. In other words, under some
situations, advertising utilitarian/symbolic appeals may not match the product’s utilitarian or
symbolic characteristics. Johar and Sirgy’s observation on pattern moderation offers some
explanation for the unexpectedly high percentages of unmatched values in some product
categories in this study.
REFERENCES
Andrén, G., Ericsson, L., Ohlsson, R. & Tännsjö, T. (1978). Rhetoric and ideology in
	advertising: A content analytical study of American advertising. Sweden: LiberFörlag.
Arens, W. F. & Bovee, C. (1994). Contemporary Advertising (5th Ed.). Burr Ridge, Illinois:
	Irwin.
Cheng, H. & Schweitzer, J. (1996). Cultural values reflected in Chinese and U.S. television
commercials. Journal of Advertising Research, 36(3), 27-36.
Gross, M. (Ed.). (1996). Advertising and culture: Theoretical perspectives. Connecticut:
Praeger.
Hall, E. & Hall, R. (1987). Hidden difference: Doing business with the Japanese. New York:
Anchor Press.
Holbrook, M. & O’Shaughnessy, J. (1984). The role of emotion in advertising. Psychology
and Marketing, 1(Summer), 45-64.
Johar, J. S. and Sirgy, M. J. (1991). Value Expressive Versus Utilitarian Advertising Appeals:
When and Why to Use Which Appeal. Journal of Advertising, (September), 23-34.
McCracken, G. (1986). Culture and consumption: A theoretical account of the structure and
movement of the cultural meaning of consumer goods. Journal of Consumer Research,
13(1), 71-84.
Pollay, R.W. (1983). Measuring the cultural values manifest in advertising. Current Issues and
Research in Advertising. James H. L. and Claude R. M., eds, Ann Arbor, MI:
University of Michigan Press. 71-92.
Pollay, R. & Gallagher, K. (1990). Advertising and cultural values: Reflections in the
distorted mirror. International Journal of Advertising, 9, 359-372.
Rokeach, M. (1968). Beliefs, Attitudes and Values. San Francisco: Jossey-Bass.
Scott, W. (1955). Reliability of content analysis: the case of nominal scale coding. Public
Opinion Quarterly, 17, 321-325.
Sterne, J. (1997). What makes people click: advertising on the web. Indianapolis: Que
Corporation.
Zhang, Y. B. & Harwood, J. (2004). Modernization and tradition in an age of globalization:
cultural values in Chinese television commercials, Journal of Communication, 54(1),
156-172.
CHAPTER 4
STUDENT ONLINE PURCHASE DECISION MAKING:
AN ANALYSIS BY PRODUCT CATEGORY
ABSTRACT
I. INTRODUCTION
“To shop online or not to shop online.” That question could be viewed as one of the
new Shakespearean dilemmas for today’s consumer. Previous research has suggested that
only 40% of undergraduates, for example, purchase products online (King and Case, 2005).
Of those students who shopped on the Internet, 43% purchased only one item. These findings
contrast with a study by Feedback Research, a market research firm, which found that a
majority of students (62%) buy or plan to buy books or textbooks online (Syllabus.com,
2004). Moreover, 73% of students who bought or planned to buy books/textbooks indicated
they researched online before purchasing.
Privacy and security are factors that affect the decision to purchase online. In a June
2005 survey of 2,322 U.S. adults, 67% decided not to register at a Web site or shop because
the privacy policy was unclear (PC Magazine, 2005). In addition, 64% chose not to buy
online at least once because of concerns over personal information. Furthermore, fear of
identity theft may also cause individuals to avoid shopping online. In the same survey, 20%
of respondents stated that they had been victims of identity theft. A study of undergraduates
found, however, that only 3% of respondents indicated being a victim of Internet fraud (Case
and King, 2004). Forty-four percent of students perceived Internet purchasing as highly
secure.
In an effort to increase online sales, companies have begun to provide "live help"
functions, through instant messaging or text chatting, on their Web sites to facilitate
interactions between online consumers and customer service representatives (CSRs). Because
text-based communication limits nonverbal communication with consumers and the social
contexts for the information conveyed, emerging multimedia technologies (such as computer-
generated voice and humanoid avatars) are being used to enrich the interactive experiences of
customers. One study demonstrated that the presence of text-to-speech voice with a 3-
dimensional avatar (humanoid representation of a CSR) significantly increases consumers'
cognitive and emotional trust toward the CSR (Lingyun and Benbasat, 2005).
Prior research has examined facets such as customer service and willingness to buy. A
March 2005 market survey conducted in the U.S. by the National Retail Federation’s NRF
Foundation and American Express Company found that 99% of shoppers believe customer
service is at least somewhat important when deciding to make a purchase (Zid, 2005).
Results indicate that shoppers are more satisfied with the customer service
they receive online than they are with service at traditional retail stores. Only 16% of
traditional, retail shoppers surveyed were extremely satisfied with their most recent customer
service experience, while an additional 51% were very satisfied. However, online shoppers
were nearly three times (44%) as likely to be extremely satisfied, while 45% were very
satisfied. The most important component of good customer service for 88% of online
shoppers was that the web site was safe and secure.
III. METHOD
This study employs a survey research design. The research was conducted at a
private, northeastern U.S. university. A Student Internet Purchasing Survey instrument was
developed and administered in March 2004 and April 2005 to students enrolled in a School of
Business course. A convenience sample of class sections was selected. The courses
included Business Information Systems, Business Telecommunications, Introduction to
Managerial Accounting, Statistics II, Business Policy, and Entrepreneurship.
The survey instrument was utilized to collect student demographic data and examine
student Internet purchasing behavior. The survey requested that each student list the details of
each product purchased during the study month. The survey was distributed at the end of the
prior month and collected at the beginning of the month following the study month. In the log,
purchase detail such as purchase date, item description, quantity, total price, method of
payment, and reason for using the Internet was collected. Survey data was subsequently
entered into a microcomputer-based database management system to aid in data analysis. All
surveys were anonymous and students were informed that results would have no effect on
their semester grade.
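The per-category aggregation behind the results tables (total quantity, transaction count,
average quantity per transaction, average price per item) can be sketched as follows. The
record fields and sample entries are assumptions for illustration, not the study's actual
database schema:

```python
# Aggregate a purchase log by category: total quantity, number of
# transactions, average quantity per transaction, average price per item.
from collections import defaultdict

def summarize(log):
    stats = defaultdict(lambda: {"qty": 0, "transactions": 0, "spend": 0.0})
    for rec in log:
        s = stats[rec["category"]]
        s["qty"] += rec["quantity"]
        s["transactions"] += 1
        s["spend"] += rec["total_price"]
    return {
        cat: {
            "quantity": s["qty"],
            "transactions": s["transactions"],
            "avg_qty_per_transaction": s["qty"] / s["transactions"],
            "avg_price_per_item": s["spend"] / s["qty"],
        }
        for cat, s in stats.items()
    }

# Hypothetical log entries (field names assumed for illustration).
log = [
    {"category": "clothes", "quantity": 2, "total_price": 60.00},
    {"category": "clothes", "quantity": 3, "total_price": 75.00},
    {"category": "tickets", "quantity": 4, "total_price": 252.40},
]
summary = summarize(log)
```

Running the aggregation over the full month of logs would yield the per-category figures
reported in Table II.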
IV. RESULTS
A sample of 220 usable surveys was obtained; 123 respondents (56%) were male and
97 (44%) were female. The response rate indicates that respondents were relatively equally
distributed by class (Table I): 27% were Freshmen, 25% were Sophomores, 29% were
Juniors, and 19% were Seniors.
Table I. Respondents by academic class.

Class          n      %
Freshman      60    27%
Sophomore     55    25%
Junior        64    29%
Senior        41    19%
Total        220   100%
To quantify behavior, students were asked to itemize each day the quantity, price, and
reason for each online purchase. Students were given 12 fixed purchase categories and four
fixed choices for why the purchase was made via the Internet. The reason options included:
convenience, price, variety of choice, and cannot find the item in store. Table II details the
quantity, number of transactions, average quantity per transaction, average price per item, and
purchase reason (summarized by percentage of incidence) for each category of purchase.
Results show that clothes were the most common transaction category (36 of 146 transactions)
and the highest-volume item (85 of 300 total quantity). Concert/event tickets were second in
transactions (14) and quantity (51). All other categories were much less common except the
“other” category (44 transactions). In terms of average quantity per transaction, antiques were
highest (4.0), followed by concert/event tickets (3.6); antiques, however, accounted for only
two transactions. The categories with the highest average price per item were air/rail/hotel
($437.94), other ($108.29), computer equipment ($71.67), and concert/event tickets ($63.10).
The most common decision-making reason within a category was “price” (highest in four
categories), followed by “no response” (four categories), “cannot find in store” (1.5),
“convenience” (1), and “variety” (.5).
Table II. Internet purchases by category: total quantity, number of transactions, average
quantity per transaction, average price per item, and purchase-reason percentages (excerpt).
Vitamins/crèmes 5 3 1.7 26.80 0% 33% 0% 0% 67%
Games 5 4 1.3 20.00 0% 50% 0% 25% 25%
Calling cards 1 1 1.0 20.00 0% 0% 0% 100% 0%
Jewelry 0 0 0 0 0% 0% 0% 0% 0%
Other 79 44 1.8 108.29 18% 27% 2% 16% 36%
Overall 300 146 2.1 $78.54 15% 26% 6% 16% 37%
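The per-category statistics reported in Table II follow from a straightforward aggregation over the purchase log. A minimal sketch in Python, assuming each log entry records the category, quantity, unit price, and stated reason; the entries shown are illustrative, not the study's data:

```python
from collections import defaultdict

# Hypothetical purchase-log entries: (category, quantity, price per item, reason)
log = [
    ("clothes", 3, 25.00, "price"),
    ("clothes", 2, 40.00, "convenience"),
    ("games",   1, 20.00, "price"),
]

# Accumulate totals per category
stats = defaultdict(lambda: {"qty": 0, "txns": 0, "spend": 0.0,
                             "reasons": defaultdict(int)})
for category, qty, price, reason in log:
    s = stats[category]
    s["qty"] += qty                  # total quantity purchased
    s["txns"] += 1                   # number of transactions
    s["spend"] += qty * price        # total spend in the category
    s["reasons"][reason] += 1        # incidence count per reason

# Derive the Table II columns for each category
summary = {}
for category, s in stats.items():
    summary[category] = {
        "avg_qty_per_txn": s["qty"] / s["txns"],
        "avg_price_per_item": s["spend"] / s["qty"],
        "reason_pct": {r: n / s["txns"] for r, n in s["reasons"].items()},
    }
```

With the illustrative log above, the "clothes" row would show 5 items over 2 transactions, an average quantity of 2.5 per transaction, and an average price of $31.00 per item.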
The study also found that the majority of students (69%) did not purchase online. The
primary reason, indicated by 41% of respondents, was lack of money. Table III shows
that 26% of students gave no specific reason, 11% did not have a credit card, 9%
perceived a lack of security, 8% wanted to see the product in a store before purchasing, 3%
could not find the product they desired, and 1% indicated a reason not listed in the
survey.
V. CONCLUSION
Consistent with previous findings, this study found that the majority of students (69%)
did not purchase online. Forty-one percent indicated that the primary reason was lack of
money; only 9% perceived a lack of security.
There are two important implications as a result of these findings. One implication is
that since student online decision-making behavior varies by purchase category, marketing
DSS developers and users may need to refine their information systems to better measure and
predict purchase activity. Type of product may be a key factor in the purchase decision-
making process. Future research is needed to explore if categories should be further
subdivided for each market segment and if purchase reason can be manipulated to maximize
profit for the organization. Moreover, research needs to be conducted to determine how to
model the online decision-making process within the DSS.
A second implication is that the student online market is largely untapped and a
marketing opportunity. Most students indicated that they did not purchase online during the
survey month. Further research is necessary to determine if the survey months are anomalies
or representative of general purchase behavior.
The limitations of this study are primarily a function of sample size and type of
research. Even though responses were relatively equally distributed among academic class
and gender, a larger sample size and use of additional universities would increase the
robustness of results. Another limitation relates to the self-reported nature of the survey. This
limitation is minimized through respondent anonymity and the use of a log to collect data as it
occurs. The final weakness relates to the months chosen. Purchase activity may not be
representative of other months; thus, replication in other months would improve the study's
conclusions. Future research is also needed to explore what items are included in the "other"
category and why students gave no reason for certain purchases.
REFERENCES
Case, Carl J. and Darwin L. King. “Student Online Auction Fraud: Perception
and Reality.” Business Research Yearbook, Global Business Perspectives Volume
XI, 2004, 163-167.
King, Darwin L. and Carl J. Case. “A Review of Student Internet Purchase
Behavior.” Proceedings of the American Society of Business and Behavioral
Sciences, 12, (1), Las Vegas, NV, February 24-27, 2005, 972-978.
Lingyun, Qiu and Izak Benbasat. “Online Consumer Trust and Live Help Interfaces:
The Effects of Text-to-Speech and Three-Dimensional Avatars.” International
Journal of Human-Computer Interaction, 19, (1), September, 2005, 75-94.
Teo, Thompson S.H. and Yuanyou Yu. “Online Buying Behavior: A Transaction Cost
Economics Perspective.” Omega, 33, (5), October 2005, 451-465.
Zid, Linda Abu-Shalback. “Another Satisfied Customer.” Marketing Management, 14, (2),
March/April, 2005, 5.
“The Perils of Online Shopping.” PC Magazine, 24, (14), August 23, 2005, 23.
“Two-Thirds Turn to the Internet for Back-to-School Textbooks.” Syllabus.com, September
14, 2004, SyllabusNewsUpdate@newsletters.101com.com.
ANALYZING THE ROLE OF OPERATIONS RESEARCH MODELS
IN BANKING
ABSTRACT
I. INTRODUCTION
Nearly twenty years have passed since Zanakis, Mavrides, and Roussakis' (1986) research
on Management Science (MS) in banking was published. At that time, bankers were
experiencing significant competitive pressures due to the deregulation of institutional
proceeds (Zanakis et al., 1986). The result was the adoption of Management Science
techniques by banking managers seeking sustainable competitive advantage. Management
Science allows decision makers to model real-life situations, evaluate different alternatives,
and determine the best course of action under a model's assumptions, helping bankers with
decision making (Zanakis et al., 1986). The objective of this study is to determine how MS
models are currently utilized in the banking industry to optimize performance and
productivity while achieving sustained competitive advantage. Zanakis et al. (1986) created
a two-way classification scheme that directed MS research toward banking areas not yet fully
developed. Consequently, decision makers in the banking industry may seek to revisit MS
strategies of the past to facilitate substantive performance gains now and in the future.
The American economy has flirted with recession over the past four years,
causing instability in the global economy. The road to recovery has been rocky, and
another crisis like that of the Savings & Loans in the 1980s could throw the U.S. and world
economies into a downward spiral. The connection between banks and the economy was made
by Cohen et al. (1981), who argued that banks play a crucial role in the nation's financial
system. Beck and Levine (2004) recently performed a longitudinal investigation of banks and
economic growth from 1976 to 1998 that supported Cohen et al.'s (1981) argument:
they found that stock markets and banks positively influence economic growth. A direct
lesson on this causal relationship can be learned from the Japanese banking crisis, which sent
its economy into a tailspin. In fact, Kawamoto (2004) discussed the need for structural reform
of Japan's financial system to ensure long-term economic growth. In summary, it behooves
international and domestic bankers to fully employ MS techniques in underutilized banking
application areas to ensure that performance and productivity levels are met and maintained.
This research attempts to provide banking managers with a comprehensive view of the MS
techniques and tools available for application in different areas of banking. The current
research provides valuable information to banking decision makers in the following areas:
1) identifying trends in the application of MS models in banking, and
2) identifying likely skills gaps, or areas needing improvement, where MS
techniques have not yet been applied.
This research provides valuable insight into current trends in the banking industry.
The results of the data analysis indicate that banking priorities have shifted from
corporate planning for financial/liquidity management to bank operations (e.g., check
processing, profitability, productivity). In contrast, the study by Zanakis et al. (1986) found
that MS/OR was used most frequently in the banking application areas of financial
management, portfolio management, customer credit scoring, and check operations. The most
frequently used techniques in their study were statistical analysis, linear programming,
forecasting, and simulation. At that time, more research was needed in
productivity/profitability operations and international activities (arbitrage and currency
swaps). In our study, forecasting and simulation are not among the most frequently used MS
techniques. Our findings show that technological advances predicted by Zanakis
et al. (1986) brought forth sophisticated information systems to aid bankers in daily operations
and decision making (DeFarrari and Palmer, 2001). However, few empirical studies have
assessed the quantitative benefits of such systems; this may identify an
area suitable for further study by MS researchers. Between 1986 and 2004, numerous
researchers employed MS techniques to investigate improvements in productivity and
profitability. Our findings reveal that MS tools were used extensively by banks to achieve
greater productivity and performance.
The results of our research do not fully support the conjecture of Zanakis et al. (1986)
that there would be greater usage of MIS/DSS in banking in the future. In fact, this study
found that the techniques preferred 20 years ago (statistical analysis and linear programming)
are still the top techniques in use today. Linear programming has increased in popularity,
partly through data envelopment analysis (DEA), a linear
programming technique for measuring the performance of organizational units where the
presence of multiple inputs and outputs makes comparisons difficult (Ali, 1990). Kantor
and Maital (1999) contended that DEA is used extensively to provide
quantitative measures of each branch's efficiency relative to other similar branches. The most
frequently used MS/OR techniques (as shown in Table I) are linear programming (23.99%),
statistical analysis (18.07%), MIS/EDP (10.59%), and simulation (7.48%). User familiarity
with certain techniques may also have played a role in these results.
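To illustrate the technique, the input-oriented CCR envelopment form of DEA can be solved with an off-the-shelf linear programming solver: for each branch, minimize the input-contraction factor θ subject to a convex combination of peer branches producing at least the branch's outputs with at most θ times its inputs. A minimal sketch using SciPy's `linprog`, with hypothetical branch data (inputs: staff, operating cost; output: loan volume); the function and numbers are illustrative, not drawn from the studies cited:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, unit):
    """Input-oriented CCR DEA efficiency score for one unit.
    X: (n_units, n_inputs) input matrix; Y: (n_units, n_outputs) output matrix."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1 .. lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                          # minimize theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    # Inputs: sum_j lambda_j * x_ij <= theta * x_i,unit
    A_ub[:m, 0] = -X[unit]
    A_ub[:m, 1:] = X.T
    # Outputs: sum_j lambda_j * y_rj >= y_r,unit
    A_ub[m:, 1:] = -Y.T
    b_ub[m:] = -Y[unit]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Four hypothetical branches: two inputs (staff, operating cost), one output (loans)
X = np.array([[5.0, 100.0], [8.0, 160.0], [6.0, 90.0], [9.0, 200.0]])
Y = np.array([[200.0], [296.0], [240.0], [300.0]])
scores = [round(dea_ccr_efficiency(X, Y, k), 3) for k in range(4)]
```

A score of 1.0 marks a branch on the efficient frontier; a score below 1.0 gives the proportion to which the branch could shrink all inputs while sustaining its outputs, relative to the best-practice peers.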
Table I
FREQUENCY OF MS/OR TECHNIQUE UTILIZATION
1986 – 2004
The Zanakis et al. (1986) study generated substantial research in the banking operations
area. Table II illustrates the banking areas where MS/OR techniques were most frequently
applied, including operations, loan management, and financial planning. This research found
that over the last twenty years, 38% of published articles were in the banking operations area,
15% covered loan management, and 12% investigated financial
planning. By far, most MS/OR studies during the past twenty years focused on bank
profitability, performance, and efficiency. DeFarrari and Palmer (2001) contended that
globalization, regulatory restrictions, and increased competition in financial markets triggered
consolidation, which brought forth larger banks. The trend toward larger, universal banks
(Rime and Stiroh, 2003) sparked an intense preoccupation with performance, efficiency, and
productivity gains (hereafter, the big three).
Table II
FREQUENCY OF MS USE IN BANKING APPLICATION AREAS
Banking Area % use¹
Marketing and Organization 7%
Liquidity Management 3%
Loan Management³ 15%
Investment Management 8%
Financial Planning 12%
Balance Sheet Management 7%
Operations² 38%
Manpower Planning 6%
Miscellaneous 3%
Total: 100%
¹Frequency of MS techniques applied to banking areas (1986 – 2004).
²Operations sub-area where MS was most frequently applied: productivity and
profitability assessment and improvement (27%).
³Loan Management sub-area where MS was most frequently applied: risk measurement
and management (6%).
IV. CONCLUSION
Results from this study echo those of Kumbhakar and Sarkar (2002), who found
that in the 1980s the banking sector underwent mass liberalization to boost performance,
productivity, and efficiency. The two-way classification scheme presented denotes a
continued emphasis on improvement by banks throughout the 1990s and into the new
millennium. Domestic banks may respond to the threat of foreign banks by further streamlining
operations, since researchers report that foreign entry into domestic markets lowers interest rate
spreads and margins (Unite and Sullivan, 2003). Subsequently, this may increase the usage of
those MS techniques that forecast profitability and productivity. More research is still needed
in the international arena. However, a recent study by Berger, Dai, Ongena, and Smith (2003)
suggests that the future of bank globalization will be limited, since firms frequently use
host-nation banks for cash management services.
Results of this research clearly show that Linear Programming and Statistical Analysis
were the most frequently used techniques, followed by MIS/EDP, Simulation, and Stochastic
Programming; Integer Programming, Goal Programming, Queuing Theory and Heuristics
were the least preferred techniques. A review of MS applications in different sub areas of
banking shows that banking Operations (Productivity, Profitability Assessment and
Improvement) had the highest usage of MS models. It was followed by Loan Management,
Financial Planning, Investment Management, Balance Sheet Management, and Marketing and
Organization.
REFERENCES
Berger, A. N., Dai, Q., Ongena S., and Smith, D. C. “To what extent will the banking industry
be globalized? A study of bank nationality and reach in 20 European nations.”
Journal of Banking & Finance, 27, (3), 2003, 383.
Cohen, K.J., Maier, Steven F., and Vander Weide, J.H. “Recent Developments in
Management Science in Banking.” Management Science, 27, (10), 1981, 1097-1119.
DeFarrari, Lisa M., and Palmer, David E. “Supervision of Large Complex Banking
Organizations.” Federal Reserve Bulletin, 87, (2), 2001, 47-57.
Kantor, J. and Maital, S. “Measuring Efficiency by Product Group: Integrating DEA with
Activity-Based Accounting in a Large Mideast Bank”, Interfaces, 29,
(3), 1999, 27-36.
Kawamoto, Yuko, “Fixing Japan’s Banking System.” McKinsey Quarterly, 3, 2004, 118-122.
Kumbhakar, S. C., and Sarkar, S. “Deregulation, Ownership, and Productivity Growth in the
Banking Industry: Evidence from India”, Journal of Money, Credit, and Banking, 35,
(3), 2002, 403-424.
Rime, Bertrand, and Stiroh, Kevin J. “The Performance Of Universal Banks: Evidence From
Switzerland.” Journal of Banking & Finance, 27, (11), 2003, 2121-2150.
Unite, A. A., and Sullivan, M. J. “The Effect Of Foreign Entry And Ownership Structure On
The Philippine Domestic Banking Market.” Journal of Banking & Finance, 27, (12),
2003, 2323-2345.
Zanakis, Stelios H., Mavrides, Lazaros P., and Roussakis, Emmanuel N. “Notes and
Communications Applications of Management Science in Banking.” Decision
Sciences, 17, 1986, 114-128.
Note: This paper is part of ongoing research. A copy of the complete paper can be obtained
from Dr. Rana at: dsrana@jsums.edu
DECISION SUPPORT SYSTEMS:
AN INVESTIGATION OF CHARACTERISTICS
Roger L. Hayen, Central Michigan University
roger.hayen@cmich.edu
ABSTRACT
Since the early 1970s decision support system (DSS) frameworks have been
formulated to describe DSS characteristics. This research examines several of those
frameworks using published case-based research. The purpose is to determine whether the
characteristics outlined by the frameworks are observed in these published cases. This is
useful in determining those characteristics of actual DSS that lead to their classification.
Overall, the research supports the framework characteristics. However, the results indicate
that the source and scope of information lack a strong relationship to the ad hoc and
institutional DSS decision categories.
Keywords: DSS, decision support systems, characteristics of information
I. INTRODUCTION
Decision support systems (DSS) frameworks have been developed to describe the
characteristics of DSS and their relationships, providing a view that assists in
classifying DSS. However, advances in computer technology are rapid and
affect information system (IS) applications, including DSS, resulting in a suite of constantly
changing DSS applications. This dynamic nature of DSS applications makes it
difficult for chief information officers and other managers to clearly define a static suite of
DSS applications through their characteristics. Yet the identification of DSS applications is
critical in planning organizational strategy for the deployment of IT. This study examines
several frameworks to determine their efficacy, employing case-based research of DSS
applications to provide a perspective on DSS characteristics. First, the definition of DSS is
examined; next, using a framework of information requirements by decision category, data
are analyzed to evaluate DSS characteristics; finally, the findings on DSS characteristics
are summarized.
A DSS is defined as the use of a computer to (1) assist
managers with their decision processes in semi-structured tasks; (2) support, rather than
replace, managerial judgment; and (3) improve the effectiveness of decision making rather
than its efficiency (Keen and Scott Morton, 1978, p. 1). Others (Marakas, 2003; Power, 2002;
Sprague and Carlson, 1982) have also provided definitions of a DSS. Their definitions support
Keen and Scott Morton's (1978) definition, making it acceptable for this analysis.
Gorry and Scott Morton (1989) provide a context for the semi-structured characteristic
of this DSS definition, relating the work of Simon and Newell to a framework of structured
and unstructured decision-making processes. A fully structured problem is one where all
three decision-making phases – intelligence, design, and choice – are structured. A fully
unstructured problem is one where all three phases are unstructured. A semi-structured
problem is one where one or two, but not all, of the decision-making phases are
unstructured. They define IS that are largely structured as Structured Decision Systems
(SDS), whereas those that are semi-structured or unstructured are DSS. This viewpoint is
reinforced and summarized by Power (2002): any IS that is not an SDS/TPS (transaction
processing system) is frequently labeled a DSS. Therefore, the definition of a DSS is
qualified by (1) the categories of use and (2) movement along the structured/unstructured
continuum. Furthermore, DSS can be divided meaningfully into two categories: institutional
DSS, which deal with decisions of a recurring nature (repetitive), and ad hoc DSS, which deal
with specific decisions that are not usually anticipated or recurring (one-shot) (Donovan &
Madnick, 1977; Keen & Scott Morton, 1978).
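The structured/semi-structured/unstructured distinction above can be expressed as a small decision rule over the three phases. A minimal sketch; the function name and boolean encoding are our own illustration, not part of the framework itself:

```python
def classify(intelligence: bool, design: bool, choice: bool) -> str:
    """Gorry and Scott Morton classification sketch.

    Each argument is True if that decision-making phase
    (intelligence, design, choice) is structured.
    """
    phases = [intelligence, design, choice]
    if all(phases):
        # All three phases structured -> Structured Decision System
        return "structured (SDS)"
    if not any(phases):
        # All three phases unstructured -> fully unstructured DSS
        return "unstructured (DSS)"
    # One or two (but not all) phases unstructured -> semi-structured DSS
    return "semi-structured (DSS)"
```

For example, a problem with a structured intelligence phase but an unstructured design phase falls into the semi-structured category and is thus DSS territory rather than an SDS/TPS.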
For this analysis, the DSS application is distinct from the DSS tool. The DSS tool, or
DSS generator, is the software used in the creation of a specific DSS application; it is the
enabling technology. The application is the specific system that actually accomplishes the
work and supplies a decision maker with the required information. Modern DSS utilize
computer-based tools to create more advanced DSS applications (Eom, 1999). An IS tool
that at one time is used primarily for building an ad hoc DSS may at a later time be used
primarily for an institutional DSS. Because a tool was initially created for use in
building an ad hoc DSS, it does not follow that all IS subsequently created using that tool are
DSS. The fundamental DSS definition needs to be applied in determining whether or not an
application is, indeed, a DSS.
The systems development life cycle (SDLC) and prototyping are the main development
strategies employed in constructing a DSS (Marakas, 2003; Power, 2002). The development
strategy will be one of the parameters investigated. The frameworks are examined to determine
the primary attributes to be included when reviewing DSS case applications (use cases).
IV. METHODOLOGY
V. RESEARCH ANALYSIS
Several general characteristics of DSS were analyzed. There was a nearly even split
between ad hoc cases (53%) and institutional cases (47%). Nearly all of these
DSS (96%) were developed using prototyping, whereas the systems development life cycle
(SDLC) was used for the remaining developments (4%), all of which were institutional DSS.
So, essentially all the observed cases were developed using prototyping. This suggests that the
SDLC has no significant use in the development of DSS, regardless of the type of DSS
application. Several other overall characteristics of the evaluated DSS were observed,
including a measure of the extent of structure of the decision. Of the observed cases, 69%
were determined to be semi-structured and 27% structured. All ad hoc DSS were
semi-structured, whereas all structured DSS fell into the institutional category.
The primary types of DSS applications were observed for the use cases. Modeling
(47%) and expert systems (24%) were the most common types of DSS applications. The
occurrences of these two categories over time indicate that modeling case
applications have been reported in a continuous stream from 1972 forward. Expert systems
applications, on the other hand, were reported from 1988 to 1996; the rise
and fall of expert systems in DSS appears to have occurred during this period.
The Gorry and Scott Morton framework (Table 1, presented above) was examined
using the characteristic data collected on the case-based applications. This evaluation
confirmed the information-characteristic requirements set forth in that framework. The
operational control level was recorded as institutional DSS, whereas the strategic planning
level was recorded as ad hoc DSS. The framework was examined first using neural
network exploratory modeling to determine whether the framework characteristics could
predict the category of an application as either ad hoc or institutional.
The first neural network model included all the characteristics of the Gorry and Scott
Morton framework. However, the result of Model One was not significant (χ², p = 0.12865).
Each characteristic parameter was then evaluated individually to determine the relationship
between the ad hoc/institutional categorization and the characteristic using a pairwise
analysis. In Figures 1 and 2, along with Table 2, the occurrence of both the source of
information and the scope of information appears to be nearly equal for ad hoc and
institutional applications. In Table 2, neither the source of information nor the scope of
information produces a significant relationship. This suggests that ad hoc and institutional
DSS make equal use of internal and external information and are equally likely to have a
narrow or a wide scope of information. It is unlikely that these two characteristics distinguish
ad hoc from institutional applications; both categories are about equally likely to include
either information component.
[Figures 1 and 2. Source of Information and Scope of Information: Ad Hoc vs. Institutional]
Using the same procedure, each remaining characteristic was examined individually against
the ad hoc/institutional decision type. Figures 3 and 4, together with Table 2, present the
results of that analysis. Each of these information characteristics is significantly related to
the ad hoc and institutional DSS types, distinguishing between the two. As a result, source
and scope were removed as characteristic parameters for neural network Model Two
(Figure 5), producing a significant model (χ², p = 0.02981) for predicting the decision type
from the remaining characteristic parameters of this framework.
Table 2. Significance of Information Characteristics
Characteristic of Information    χ² probability    Significant at p = 0.05
Source                             0.7724           No
Scope                              0.5644           No
Aggregation Level                < 0.0001           Yes
Time Horizon                       0.0037           Yes
Currency                           0.0041           Yes
Required Accuracy                < 0.0001           Yes
Frequency of Use                 < 0.0001           Yes
[Figures 3 and 4. Time Horizon (Historical/Current/Projected) and Currency (Current/Old):
Ad Hoc vs. Institutional]
VI. CONCLUSION
REFERENCES
Adam, F., Fahy, M., and Murphy, C. (1998). A framework for the classification of DSS
usage across organizations, Decision Support Systems, 22(1), 1-13.
Donovan, J. & Madnick, S. (1977). Institutional and ad hoc DSS and their effective use,
Database, 8(3), 79-88.
Eom, S. B. (1999). Decision support systems research: current state and trends, Industrial
Management & Data Systems, 99(5), 213-220.
Gorry, G. A., & Scott Morton, M. S. (1989). A framework for management information
systems. Sloan Management Review, 31(3), 49-61.
Keen, P. G. W., & Scott Morton, M. S. (1978). Decision support systems: An organizational
perspective. Reading, MA: Addison-Wesley.
Marakas, G. M. (2003). Decision support systems in the 21st century (2nd ed.). Upper Saddle
River, NJ: Prentice-Hall.
Meredith, J. (1998). Building operations management theory through case and field research.
Journal of Operations Management, 16, 441-454.
Power, D. J. (2002). Decision support systems: Concepts and resources for managers.
Westport, CN: Quorum Books.
Sprague, R. H., & Carlson, E. D. (1982). Building Effective Decision Support Systems,
Englewood Cliffs, NJ: Prentice Hall.
Voss, C., Tsikriktsis, N., & Frohlich, M. (2002). Case research in operations management.
International Journal of Operations & Production Management, 22(2), 195-219.
Yin, R. K. (1993). Applications of case study research. Newbury Park, CA: Sage Publications.
TOURISM MARKET POTENTIAL OF SMALL RESOURCE-BASED ECONOMIES:
THE CASE OF FIJI ISLANDS
ABSTRACT
A Delphi survey was designed as a market research tool to project the future Fiji
Islands tourism scenario from 2001 through the year 2020. The Fijian tourism
industry is expected to grow and prosper substantially over the next five years due to
increased tourist demand for non-traditional holiday destinations. There will also be high
demand for activity-based tourism. Although the demand projections predict positive
trends in the short term, tourism demand is subject to a host of uncontrollable factors that
make long-term projections difficult. In view of these developments and
changes, tourism industry operators and planners need scientifically accepted projection bases
for tourism investment and effective operational tourism decisions.
In recent years, rapid socio-economic changes and market developments have taken
place in the Fiji Islands. Notably, Fiji Islands tourism has grown substantially over the
last decade to become a major source of income, exceeding agriculture as a source of foreign
exchange earnings. Because a large portion of tourist spending leaks out of the economy as
payment for imports, the net economic impact of tourism is smaller than tourist
spending figures imply. Tourism nevertheless remains a critical source of jobs and
foreign exchange and a means of access to the rest of the world.
In 1999, Fiji Islands visitor arrivals exceeded 400,000 and were estimated to pass
the half-million mark by the end of 2003 (Fiji Bureau of Statistics, July 2002). However, a
civilian coup on May 19, 2000 (which overthrew a democratically elected government and
held parliamentarians hostage for over 50 days) shook visitors' confidence in their safety
while holidaying in the Fiji Islands, and as a result visitor arrivals dropped drastically to
294,000 for 2000. The signs are that the industry is picking up again: in 2001,
arrivals exceeded 350,000, and through the first half of 2002 visitor arrivals totaled
183,000, more than in the same period of 2001. All indications
are that Fijian tourism will grow and prosper in the future. How much and how fast it will
grow depends on many factors, both internal and external.
Australia and New Zealand have so far been the Fiji Islands' major markets, supplying
over 40 percent of its tourist arrivals as recently as 1997. Expanding these neighboring
markets should be relatively easy because of their proximity and close economic ties to Fiji
Islands. However, expanding other markets in Japan, North America, United Kingdom,
Continental Europe, South Korea, and other Pacific neighbors will remain a challenge but one
that is critical to further market development, growth and diversification. One way to expand
and diversify tourist markets, especially in North America and Japan, could be to attract more
global and regional hotel chains. Eco-tourism is another growing sector of the industry that
looks to play a major part in Fijian tourism in the years to come. A new $1.5 million Island
Sanctuary in the Mamanucas opened for business in 2002. The eco-tourism resort offers tent,
dormitory, and private-room style accommodation. Bounty Island is home to an
endangered bird, the Banded Rail, and is a nesting ground for the endangered hawksbill turtle.
Government policies on sustainable development, popular opinion on the benefits of eco-
tourism, and the growing tendency of tourists to support only ecologically friendly
operations have given rise to more eco-friendly hotels, trekking, and adventure tourism.
The purpose of this study was to examine the tourism market potential of the Fiji Islands
through the year 2020 by utilizing the well-known qualitative Delphi forecasting technique.
Forecasting is especially needed in the tourism industry because the industry is affected by a
host of uncontrollable factors. Current patterns of tourism in the Fiji Islands, and worldwide,
are undergoing drastic change as the new travelers are more diverse in their
interests and more discriminating, demanding, and value conscious. Along with new
attitudes toward travel, the economic, socio-demographic, socio-political, governmental,
and technological environments are also changing, and new developments in each of these
areas have an impact on the tourism outlook for 2000 and beyond. In particular, tourism
demand in the Fiji Islands has been most affected by the degree of political stability and
visitors' confidence in their safety while holidaying there (in the aftermath of the coups of
1987 and 2000).
Kaynak and Marandu (2006) indicate that for orderly future planning, market
demand should be measured with reference to a stated period of time. The longer the
forecasting interval, the more tenuous the forecast: every forecast is based on a set of
assumptions about environmental and marketing conditions, and the chance that some of
these assumptions will not be fulfilled increases with the length of the forecast period. Much
interest in predicting the future market environment is being stimulated by
futurists, but at the same time it is important to heed caution (Yong and Leng 1988).
The panel of experts was selected by knowledgeable international and national tourism
analysts for their knowledge of the subject under review. To ensure a wide range of ideas
and views, the selected industry experts were not permitted to interact with one another.
Questions to be answered about the tourism market potential of the Fiji Islands were grouped
into five broad categories:
(i) To what extent will Fijian society undergo changes and transformations in its value
systems from 2001 through 2020?
(ii) How will the structure of the Fijian tourism industry change from
2001 through 2020?
(iii) Which tourism events and scenarios have a potential impact on tourism
development and tourism training in the Fiji Islands from 2001 through 2020, in
terms of likelihood of occurrence, year of probable occurrence, and
importance to tourism training?
(iv) Which tourist regions and businesses will grow in importance in the
coming years?
(v) What direction should the Government's tourism department take in regard to
tourism market development?
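Each Delphi round aggregates the panel's independent ratings into summary statistics, which are then fed back to the experts until responses stabilize. A minimal sketch of one round's aggregation using the study's 1-5 rating scale; the ratings below are illustrative, not actual panel responses, and the interquartile-range consensus rule is a common Delphi convention rather than a detail reported by this study:

```python
import statistics

# Hypothetical ratings by eight panel experts for one item, on the study's
# five-point scale (1 = significant increase ... 5 = significant decrease)
ratings = [4, 5, 4, 3, 4, 5, 4, 4]

mean = statistics.mean(ratings)       # reported as the panel projection
sd = statistics.stdev(ratings)        # spread of opinion across the panel
q1, _, q3 = statistics.quantiles(ratings, n=4)
consensus = (q3 - q1) <= 1            # narrow IQR -> stop iterating rounds
```

The mean and standard deviation correspond to the first two numeric columns of the result tables below; when the consensus condition fails, the summary is returned to the panel for another anonymous round.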
III. RESEARCH FINDINGS AND CONCLUSION
The purpose of this study was to present the results and recommendations of an
empirical study based on the Delphi qualitative forecasting technique. The study was
conducted among 72 tourism experts; of the total, 69 respondents provided numerous
projections of various perspectives on the tourism industry of the Fiji Islands. The tourism
experts indicated a slight decrease in traditionalism in work, family, and education; in hard
work as a virtue; in authoritarianism in family decision making; and in materialism; and a
slight increase in other values in Fijian society. With the changing infrastructure of the Fiji
Islands, tourism experts believe that motels, tourist homes, campgrounds and trailer parks,
airline traffic, inter-city bus traffic, handicraft centers, national heritage centers (e.g., sand
dunes, momiguns), cultural and educational attractions, museums, and backpacker/budget
properties will become increasingly popular owing to increased tourist potential. The future
scenario of the Fijian tourism industry was delineated by seeking the experts'
views on events having major impact on tourism development and tourism education. Some
of the most important trends and developments in the Fijian tourism industry are listed under
four major categories as follows:
Many factors may influence the future growth of the tourism industry: a larger
segment of the population having the time and resources to travel; a narrowing of distances
because of continuing transportation improvements (which also open new and/or cheaper
tourism destinations to compete with the Fiji Islands); increased government recognition of
the impact of tourism and its role as the major provider of revenue for the economy; rapid
advances in communications technology that allow global coordination of the tourism
industry; and the growing importance of leisure and travel in consumers' lives as a result of
changing attitudes towards work, women's roles, affluence, and inflation. Because tourism is
the mainstay of the Fiji Islands' economy, this study assumes greater importance by finding
answers to questions that have a direct bearing on the tourism market potential of the Fiji
Islands.
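Descriptive statistics of the kind reported in the study's tables (a mean and standard deviation per survey item) can be reproduced from raw panel ratings. A minimal sketch in Python, using hypothetical ratings rather than the study's data:

```python
import statistics

# Hypothetical ratings from a Delphi panel on the study's 1-5 scale
# (1 = significant increase ... 5 = significant decrease),
# one rating per expert for a single survey item.
ratings = [4, 5, 4, 4, 3, 5, 4, 4, 5, 4]

mean = statistics.mean(ratings)
sd = statistics.stdev(ratings)      # sample standard deviation
median = statistics.median(ratings)

print(f"mean={mean:.2f} sd={sd:.3f} median={median}")
```

In a multi-round Delphi, these per-item summaries would be fed back to the panel between rounds until the spread (the standard deviation) narrows toward consensus.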
Tourism sector | Mean | S.D. | Median | Change | Impact
Duty free shops | 4.18 | 0.757 | 4 | slight (+) | highest
Travel agencies | 4.09 | 0.866 | 4 | slight (+) | highest
Eco-tourism | 4.61 | 0.602 | 4 | slight (+) | highest
Handicraft centers | 4.16 | 0.828 | 5 | significant (+) | highest
National heritage centers (i.e., sand dunes, momiguns) | 4.11 | 0.704 | 4 | significant (+) | highest
Theme parks | 3.79 | 0.713 | 4 | slight (+) | high
Historical parks | 3.77 | 0.679 | 4 | slight (+) | high
National and botanical parks | 3.86 | 0.699 | 4 | slight (+) | high
Local entertainment establishments | 4.29 | 0.780 | 4 | slight (+) | highest
Personal services (i.e., tourist guides, sports trainers, etc.) | 4.35 | 0.734 | 4 | slight (+) | highest
Cultural and educational attractions | 4.28 | 0.740 | 4 | significant (+) | highest
Cruise ship traffic | 4.18 | 0.875 | 4 | slight (+) | highest
Water sport facilities | 4.27 | 0.851 | 4 | slight (+) | highest
Museums | 3.61 | 0.653 | 4 | significant (+) | moderate
Festivals and events | 4.11 | 0.930 | 4 | none | high
Passenger car traffic | 3.98 | 0.886 | 4 | slight (+) | moderate
Bus tours | 3.89 | 0.862 | 4 | slight (+) | high
Government involvement in tourism | 4.56 | 0.914 | 4 | slight (+) | highest
Backpacker/budget properties | 4.56 | 0.787 | 5 | significant (+) | highest
Others | 4.71 | 0.488 | 5 | significant (+) | highest
Notes: A five-point scale was used, where 1 = significant increase; 2 = slight increase; 3 = no
change; 4 = slight decrease; 5 = significant decrease.
a Indicates the degree of impact that the value changes in Fiji Islands society will have on
tourism: 0 = not important; 10 = critically important. Under the impact of tourism, 4 to 5
signifies medium, 5 to 6 moderate, 6 to 7 high, and above 7 highest; (-) = decrease, (+) = increase.
b Other factors indicated by the experts that could have a slight or significant increase in the
structure of the Fijian tourism industry and a high impact on Fijian tourism are indicated as
follows:
REFERENCES
WHO SAYS DECISION-MAKING IS RATIONAL: IMPLICATIONS FOR
RESPONDING TO AN IMPENDING FORESEEABLE DISASTER
ABSTRACT
This paper focuses on the concept of rational decision-making and how it collapses
during an imminent natural disaster. The authors have devised a model that incorporates the
main decision-making processes and illustrates how they become intertwined with a "Black
Box" that may suspend rational decision-making. Hurricane Katrina and the Federal, State,
and Local government responses exemplify this process. Various factors that influence the
Black Box processes are examined, and suggestions are offered to minimize the Black Box
effect.
I. INTRODUCTION
Impending natural disasters, as their name implies, are neither initiated nor
governed by man. Some disasters strike with minimal or no notice at all. They are categorized
as unforeseeable, and regardless of preparation, there is negligible protection against
lightning, tornadoes, tsunamis, and earthquakes. Conversely, other natural disasters, while
equally destructive, have foreseeable windows ranging from hours to weeks. These
foreseeable natural disasters include hurricanes, heavy rains that may result in flooding,
forest fires, significant snowfall/blizzards, and volcanic eruptions.
II. PURPOSE
The purpose of this paper is to ascertain if rational decision-making occurs when
responding to foreseeable natural disasters.
To combat inaction, the World Health Organization (2002) recommends that any
program of disaster prevention and preparedness should promote optimum coordination
between the various governmental, nongovernmental, and private organizations involved.
While planning is integral to preparedness, Rosenthal (1998) contends that industrial society
is especially susceptible to natural disasters, a susceptibility exacerbated by policy makers
who have not prepared themselves or the public for appropriate responses once tragedy
strikes. Risk, uncertainty, crisis, collective stress, and "normal accidents" now need to be
incorporated into a broader understanding of how governments and decision-makers respond to crises and
their concomitants: unpleasantness in unexpected circumstances, representing unscheduled
events, unprecedented in their implications and, by normal routine standards, almost
unmanageable.
Denis (1995) highlights six major types of activities on which disaster managers
should focus: (1) obtaining information about the situation; (2) getting advice on the best
course of action; (3) choosing: the decision to do something; (4) authorizing the action; (5)
having the action executed; and (6) explaining and communicating the action. These steps
attempt to logically frame the action in the midst of a potentially emotional crisis. However,
McCarthy (2003) found that the experience of crisis gave rise to a more rational, planned
approach to the strategy-making process. In the aftermath of crisis, entrepreneurs had to spend
more time communicating with, explaining and justifying their actions to key stakeholders.
Since people are not mindless automatons, the human behavior components must be
included in the rational decision-making process. French (2005) argued that while emergency
modeling incorporates technology, it does not accurately take into account the social aspects
and behavior of people. Previously, Walls (2002) cited the methods of slowing down,
listening, learning and feeling to augment rational decision-making.
Conversely, Simon (1957) noted that limits exist on our time, resources, and ability to
process information. He introduced the bounded rationality model, in which, rather than a
logical, optimized solution, a satisficing or "good enough" solution is selected.
Decisions are at the heart of leader accomplishment and may range from incidental to
life-saving. A decision may be tricky, puzzling, or nerve-racking; it may possess clear
boundaries or be high in ambiguity; and making it is both an art and a science. At times, the
boldest decisions may not be the safest, but they show the leader's mettle. To analyze the
decision-making process in responding to an impending foreseeable disaster, the authors
present the conceptual decision-making model shown in Figure 1. This model was developed
through an extensive literature review and the models of other authors cited by Arsham
(2005). According to the model, the decision-making process in any foreseeable disaster is
influenced by a "Black Box Effect."
Figure 1: Decision-making model in any foreseeable disaster. The model comprises: (1) range
of objectives; (2) uncontrollable input (explicit or tacit information); (3) controllable input
(resources, capability, etc.); (4) alternative choices; (5) information analysis by experts; (6)
action (the best pick); and (7) the Black Box.
3. Controllable Input: Decision makers need to objectively evaluate their own capability,
capacity, and resource limitations. Every decision maker should evaluate the situation at
hand to determine whether he or she is able to make a rational decision. To be capable, the
decision maker must have the personal attributes, especially the mental power, required to
perform the task. The phrase "decision-making capacity" refers to the ability to understand
the nature and consequences of situations and to make and communicate decisions based
on that understanding. It also refers to the alternatives specified by the decision maker.
There are four levels of capacity: the ability to communicate a choice; the ability to
understand information; the ability to comprehend the situation; and the ability to weigh
information in a rationally defensible way. When making decisions, many people tend to
overlook their resource limitations. Some decisions are made irrationally, and some
decision makers do not really know whether they have all the information needed to
approach the task. At times the decision maker may be emotionally attached to the
situation and fail to consider all the information at hand. Before confronting a situation, all
controllable inputs must be evaluated.
4. Alternative Choices: Carefully weigh the costs and risks of the negative as well as
positive consequences that could flow from each alternative: (1) list all the alternatives
under consideration; (2) list all of the values or criteria that will be affected by the
decision; (3) evaluate each alternative against each criterion or value; and (4) choose the
alternative predicted to satisfy the criteria best.
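The four steps above amount to a simple weighted decision matrix. A sketch in Python, with hypothetical alternatives, criteria, weights, and scores that are illustrative only, not taken from the paper:

```python
# Step 2: criteria affected by the decision, with weights summing to 1.
criteria = {"cost": 0.2, "speed": 0.5, "safety": 0.3}

# Step 1: the alternatives under consideration, each scored 0-10
# per criterion (all values hypothetical).
alternatives = {
    "evacuate_now":      {"cost": 3, "speed": 9, "safety": 8},
    "shelter_in_place":  {"cost": 8, "speed": 6, "safety": 4},
    "staged_evacuation": {"cost": 5, "speed": 7, "safety": 7},
}

def weighted_score(scores):
    # Step 3: evaluate an alternative against every criterion.
    return sum(weight * scores[name] for name, weight in criteria.items())

# Step 4: choose the alternative that best satisfies the criteria.
best = max(alternatives, key=lambda a: weighted_score(alternatives[a]))
print(best, weighted_score(alternatives[best]))
```

The point of making the weights explicit is that the trade-offs (here, speed over cost) are visible and debatable before the choice is made, rather than buried in intuition.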
5. Information Analysis: In conducting the information analysis, the objective is to opt for
the best course of action. Once the decision maker has chosen an alternative, he or she has
to make sure the logic and reasoning behind the decision are sound. Multi-perspective
analysis is used to look at decisions from a number of important perspectives, thereby
moving us outside our habitual thinking style and helping us get a more rounded view of a
situation. With this, there are infinite alternatives from which the
decision maker can choose. Many successful people think from a very rational and
positive viewpoint which may be part of the reason for their success. Often they may fail
to look at a problem from an emotional, intuitive, creative or negative viewpoint. This can
mean that they underestimate resistance to plans, fail to make creative leaps, and do not
make essential contingency plans. Similarly, pessimists may be excessively defensive.
Emotions may limit calm and rational decisions.
6. Action: The decision maker must first know what information and/or resources are
needed at each step to implement the plan. All choices come with obstacles, so the decision
maker must consider every obstacle and how each one can be overcome. Next, all steps for
implementing the decision have to be determined, along with an understanding of when
each step begins and ends. Lastly, the decision maker needs to identify all information and
resources needed for each step. After thoroughly considering a wide range of possible
alternative courses of action, choose the one that best fits the set of goals, desires, lifestyle,
values, and so on. Make sure a contingency plan is created in case the best-pick option
does not work out as intended.
7. Black Box Effect: The Black Box affects the rationality of decision making as well as
the outcomes of a decision. Typical Black Box factors are politics, time constraints, ego,
ideology, assumptions of competency, media scrutiny, bureaucratic structure, false hope,
and a fishbowl mentality, among others.
Select examples of Black Box effects during and after Hurricane Katrina:
Political:
• Democratic Mayor Ray Nagin (the Mayor and Governor did not get along)
• Democratic Governor Kathleen Blanco (lack of preparedness and mishandling of the
rescue and relief operations)
• Republican President George Bush (political appointment to FEMA)
Distortion by the media:
• Overstated rapes and murders at the Superdome
• Looting (widespread, including by law enforcement)
• Inflated death count predictions
The Black Box Effect stems from influences on human beings and can therefore never be
totally eliminated. However, steps can be taken to minimize human error. The authors suggest
that the utilization of experts and technologies is critical to overcoming human error
(explained in the conclusion and recommendations section).
V. CONCLUSION
Some elected and appointed officials have the primary responsibility of safeguarding
public safety. Planning, simulations, and contingency and evacuation protocols are all
worthwhile measures, and we naively assume that government paternalism will protect us
from any challenge. As a result, we place our trust in officials' competence and expect them
to make clear, rational decisions.
While the day-to-day minutiae of government may be carried out with aplomb,
imminent natural disasters have a way of disrupting public safety, economies, and rational
decision-making. The ever-present watchful eye of television has made the country, if not the
world, a Monday-morning quarterback in almost every situation. Nearly all decisions are
scrutinized and filtered through ego, assumed competency, and political, ideological, and
economic magnifying lenses. The slow and unwieldy bureaucratic structure provides false
hope while, in many major cases, there is a leadership vacuum.
Experts who have been trained and field-tested in a variety of crises will be able to
analyze and evaluate potential courses of action more quickly than political candidates and
appointees who may lack the needed skills. Further, closer alignment with 911 services,
FEMA, and Homeland Security, empowering local agents with the freedom of action to cope
with an impending crisis, is imperative.
There’s an adage that “anything can be accomplished if no one is worried about who
gets the credit”. However, a more apt maxim has become “nothing gets done so that no one
gets the blame”. Making rational decisions follows a series of logical steps. However,
rationality may be suspended in the Black Box!
REFERENCES
CHAPTER 5
AN ALTERNATIVE APPROACH FOR DEVELOPING AND MANAGING AN
INFORMATION SECURITY PROGRAM
I. INTRODUCTION
information security management that refers to the structured process for the
implementation and ongoing management of information security in an organization
(Vermeulen, et al., 2002).
Information security management should be treated like any other business function, with its
activities based upon business needs and policies (Mason, 2000). The policies should be
developed by the management steering committee, which will have the authority to
establish the overall direction of the organization's information systems (Scott, 1986).
Policies provide the guidelines for operational security, as well as physical and technical
security (Vermeulen, et al., 2002).
Activities and decisions involved in information security management are the ultimate
responsibility of the top management (Dinnie, 1999). Standards are very important and
information security management standards must be considered in the development and
management of information security (Solms, 1999).
The first phase of the new approach is security planning, which involves policy,
standards, and plans. A security policy is a statement indicating:
• What is to be protected
• Who is responsible
• When does the policy take effect
• Where within the organization the policy reaches
• Why was the policy developed, and
• How will a breach of the policy be treated
The security policy statement does not need to be long. Policy scope must be clearly
defined, responsibilities must be formally assigned, and the policy must be related to other
active policies such as physical security and systems responsibilities. The policy should not
define procedures or other implementation details and should not specify titles or names
that may become obsolete through reorganization. A policy is effective only if it is
communicated and enforced. Line managers should be required to show solid evidence of
policy implementation in their individual areas.
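As one illustration (not from the paper), the six elements a policy statement must cover could be captured in a simple record type; the field names and sample values here are assumptions for the sketch:

```python
from dataclasses import dataclass

# Illustrative record of the six elements a security policy statement
# should address (what, who, when, where, why, how). Field names and
# values are hypothetical, not taken from the paper.
@dataclass
class SecurityPolicy:
    what_is_protected: str   # what
    who_is_responsible: str  # who
    effective_date: str      # when the policy takes effect
    scope: str               # where within the organization it reaches
    rationale: str           # why it was developed
    breach_handling: str     # how a breach will be treated

policy = SecurityPolicy(
    what_is_protected="Customer records and payment data",
    who_is_responsible="Line managers; security officer for oversight",
    effective_date="2006-01-01",
    scope="All business units and contractors",
    rationale="Regulatory compliance and customer trust",
    breach_handling="Report to the security officer within 24 hours",
)
print(policy.scope)
```

Keeping the record this terse mirrors the text's advice: the policy names responsibilities and scope but deliberately omits procedures, titles, and other details that reorganization would make obsolete.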
Standards are used to set objectives which need to be achieved. Systems are often
converted to production mode without adequate security or internal controls, and there is
often a lack of quality procedures and assurance in IT production, leading to defective,
unstable products (Tryfonas, et al., 2001). Control is better when system development
standards address security and control. Work steps should identify security and control
requirements and relate security requirements to the specific controls that will be
implemented.
Planning is a process that helps to ensure that security is addressed in a
comprehensive manner throughout a system’s life cycle. There are two main tasks that are
central to planning computer security: assignment of responsibility for security planning
and assessment of risks. Responsibility for direction and oversight should be assigned to a
management steering committee, with individuals in strategic positions from each of the
high priority areas to be addressed by the overall security program. To be effective, the
committee needs visibility and clout. Representation from the highest level of management
(officers or board members, even if by proxy) is essential.
Security is the state of being free from unacceptable business risk. The
management committee, with members from various departments and disciplines,
can design a mission statement for an information security group. Responsibility for
ongoing security administration should be assigned to a security officer who must be placed
sufficiently high up in the organization to accomplish the job without undue influence by
organizational politics. Because the responsibilities of the position will vary, no one
organizational arrangement will work for every company. Ultimately, responsibility for the
security of specific information and information systems should be assigned to the line
managers who are primarily responsible for the systems. Information security
responsibilities should be seen as an integral part of line management responsibility, as it is
for other company resources.
Risk analysis is the process of identifying an organization’s greatest risks. There are
many risks attached to system development and there is a need for risk analysis (Maguire,
2002). Classic risk analysis, however, can be expensive and time-consuming. The results
are often misleading, their usefulness is outdated quickly, and seldom, if ever, are the
results as exact as they appear. A highly quantitative approach to security risk analysis can
be justified only when both the cost of implementing security measures and the potential
benefits of these measures are very large, and when the decision of whether to implement
the measures is not obvious. In all cases, a qualitative approach to risk analysis is more
efficient and effective.
The first step in qualitative risk analysis is to gain a business perspective on the
company’s most critical business functions and assets. Opinions from the CEO, top
financial and operating officers, and directors of major departments should be obtained.
These functions and assets are then related to the company’s high-volume transactions,
proprietary information, cash flow, and customer, creditor, and employee relationships.
Next, the systems and resources used for executing the critical function are determined. The
procedures of the information services department are reviewed to see how well existing
security and controls work. Finally, deficiencies that could lead to loss or compromise of
these critical assets and functions are identified. With this qualitative method, meaningful
priorities for the security program can be established with only a modest investment in risk
analysis.
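The qualitative steps above can be sketched as a simple ranking that pairs business criticality with control deficiencies; the asset names, ratings, and scoring rule below are illustrative assumptions, not the paper's method:

```python
# Hypothetical assets rated qualitatively after interviewing executives:
# criticality on a 1-5 scale and a count of identified control
# deficiencies. Their product gives a rough program priority.
assets = [
    {"name": "billing system",    "criticality": 5, "deficiencies": 2},
    {"name": "customer database", "criticality": 5, "deficiencies": 3},
    {"name": "intranet wiki",     "criticality": 2, "deficiencies": 4},
]

for asset in assets:
    asset["priority"] = asset["criticality"] * asset["deficiencies"]

# Highest priority first: where critical functions meet weak controls.
ranked = sorted(assets, key=lambda a: a["priority"], reverse=True)
print([a["name"] for a in ranked])
```

This is the "modest investment" the text describes: coarse ordinal judgments are enough to order the security program's work, without the expense of a fully quantified risk model.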
IV. SECURITY ADMINISTRATION
Technical controls are those that are related to and dependent on software and computer
hardware systems. These include data integrity and systems software integrity, embodying:
• Reliability of the data source
• Source data preparation
• Data entry control
• Data input acceptance control
• Output controls
• Auditability
• Evidence of correctness of software
• Evidence of robustness
• Evidence of trustworthiness
• Programming standards in operation
Physical controls relate to physical access to the computer systems and the computer
room, and the prevention of unauthorized personnel from gaining physical access to the
computer system.
It is necessary to keep records for the purpose of tracing activities harmful to the
security and integrity of the computer system and its information. The audit and threat
monitoring activities involve threat and risk evaluation, vulnerability analysis, and controls
analysis.
Many writers consider risk as a factor to be addressed after the implementation of the
system (Curtis, 1998; Simon, 2001). Threat and risk evaluation is necessary in order not
to waste resources in protecting information from nonexistent and superficial threats, while
ignoring potential and actual threats. It is also necessary to consider the amount of
resources allocated to a given threat so that it would be proportional to the risk involved.
Vulnerability analysis involves efforts to locate the areas where the computer system is
vulnerable to compromise. This analysis is necessary during an audit because it is important
to determine where a threat can affect computer security.
Control analysis is the determination of whether the controls in place are adequate to
prevent any breaches of security. This analysis is done after determining the threats and
vulnerable areas that can affect computer security.
Lastly, audit and threat monitoring includes post-processing controls and interactive
controls such as:
• Audit of accounting
• Mapping
• Tracing
• Storage of information at a remote location
• Online system performance evaluation
• Real-time error detection
• Monitoring of adequacy of controls
• Monitoring unusual conditions and insertions
Audits and monitoring are two basic methods of operational assurance. An audit is a one-
time or periodic event to evaluate security, while monitoring refers to an ongoing activity
that examines either the system or the users. Monitoring usually applies to activities
performed in real time.
VI. CONCLUSION
Security is never perfect and there are always changes and new ways and methods to
subvert security. Management should be aware of these changes and prepare for them.
Planning is the crucial step for defining and executing security needs. But without security
administration, followed by monitoring of the entire cycle, significant progress toward
information security depends more on chance than on the true capabilities of the
organization.
REFERENCES
UTILIZATION OF INFORMATION RESOURCES FOR STRATEGIC
MANAGEMENT IN A GLOBAL ENTERPRISE
ABSTRACT
I. INTRODUCTION
on information systems but are unable to develop adequate and usable functions (Lee, 2001).
Strategic investments in information are now made to gain competitive advantage (Weill et
al., 2003). An organization's investment in information technology must be in line with its
strategic objectives, building the capabilities necessary to deliver business value (Weill et al.,
2003). Information has become a major economic good, frequently exchanged in concert
with, or even in place of, tangible goods and services (Applegate, 2003). An information
system used in an efficient process brings more value to performance than the same
information system used in an inefficient process. Information resources can be considered as
"the services, packages, support technologies, and systems used to generate, store, organize,
move and display information" (Orna, 1990).
From the 1960s to the 1990s, IS strategy was driven by the internal needs of the
organization, to lower existing transaction costs. Later, it was used to provide support for
managers by collecting and distributing information. Any advantage gained from using IS
diminished as competitors built similar or newer systems. Organizations not only seek
applications that give them an advantage over the competition, but also applications that keep
them from being outmaneuvered by new start-ups with innovative business models or by
traditional companies entering new markets.
Given the lack of empirical support for a positive economic impact of IT on
organizations, the productivity paradox of IT necessitates that we consider the value of
information resources (Jon-Arild, 1997). The market forces of supply and demand determine
the value of information resources, which lies in the value of the actions an organization
takes as a result of having the information (Myburgh, 2002). The value of information is
determined by:
a. Assessing the quality of the information itself with regard to accuracy,
comprehensiveness, credibility, relevance, simplicity, and validity;
b. Evaluating the impact of the information on productivity; and
c. Assessing the impact on the effectiveness of the organization in terms of
contributing to new markets, improved customer satisfaction, meeting targets and
objectives, and promoting more harmonious relationships.
Scarcity does not add value unless there is a need driven by the competitive position
of the organization within its industry. Scarce information resources are likely to be either
hard to copy or hard to substitute, and a demand must exist for the resources.
The value chain model helps in determining where a resource's value lies. The value
chain is a model for depicting the increasing importance of activities in a process. The
information management value chain focuses on the discrete activities that incur costs in
order to add value to information. The purpose of value determination is to improve the
usefulness of information to its ultimate users, helping them make better decisions (Cisco,
1999). The value chain is the chain of
activities that creates customer value and can be divided into two broad categories - support
and primary activities. Primary activities relate directly to the value created in a product or
service, while support activities make it possible for the primary activities to exist and remain
coordinated. Each activity affects how other activities are performed, suggesting that
information resources should not be applied in isolation. If information resources are focused
too narrowly on a specific activity then the expected value increase may not be realized if other
parts of the chain are not adjusted. The value chain framework suggests that competition stems
from two sources: lowering the cost to perform activities and adding value to a product or
service so that buyers will pay more. To achieve true competitive advantage, an organization
requires accurate information on elements outside itself. Lowering activity costs only achieves
advantage if the organization possesses information on its competitors’ cost structures.
Developing unique knowledge-sharing processes can help reduce the impact of the loss
of an employee, where the management relies on the individual’s IT skills. Such reliance
exposes a firm to the risk that key individuals will leave the organization, taking the resource
with them. Recording the lessons learned from all team members after the completion of each
project is one attempt at lowering this risk.
Strategy is the creation of a unique and valuable position, involving a different set of
activities; the essence of strategic positioning is to choose activities that are different from
rivals' activities (Porter, 1998). Strategy is creating a relationship among an organization's
activities. There are three views on the alignment of IS strategy with business strategy. The
first view strategically directs information resources to alter the competitive forces benefiting
the organization's position. The second view uses the value chain model to assess the internal
operations of the organization (Porter, 1980). The third view uses the theory of strategic
thrusts to understand how organizations choose to compete in their selected markets.
• Differentiation thrusts that focus resources on product or service gaps not filled by
competitors. These will allow value to be created and offered to customers in a new
form.
• Cost thrusts that focus resources on reducing costs incurred by the firm, by
suppliers, or by customers, or on increasing the costs of a competitor.
• Innovation thrusts that focus resources on creating new products to sell or on
creating new processes of creating, producing, or delivering a product.
• Growth thrusts that focus resources on acquisition, joint venture, or agreement. The
purpose of an alliance is to create one or more of the following four generic
advantages in the market: product integration, product distribution, product
extension, and product development. Alliances require coordinating information
resources of different organizations over extended periods of time.
These thrusts represent the strategic purposes that drive the use of the organization’s
resources. The organization has two choices when applying a strategic thrust. Either the thrust
acts offensively to improve the competitive advantage of the firm or defensively to reduce the
opportunities available to competitors. Each choice requires a different perspective on
collecting, organizing, and using information in the organization based on the purpose of the
thrust. An organization has two choices for direction - use the information system itself or
provide the system to the chosen target. Many examples exist where organizations began using
a system and then gained further advantages by providing the system to a new target. There are
two basic commercial strategies to fuel this growth: product differentiation and low
cost/price. For above-average performance, management has four generic strategies: cost
leadership, differentiation, cost focus, and focused differentiation.
Competitive advantage within these generic strategies stems from the way an
organization organizes and performs particular activities. These activities in turn result in
strategic advantage supported by technology and information systems. Focusing on strategic
thrusts helps ensure that information resources are used with the same intent as the rest of the
organization’s resources.
VII. CONCLUSION
Organizations face more competition today than ever before, and organizations that use
information strategically will enjoy an edge in achieving success. Such organizations must
process and use all the information resources available to them. To better understand the type
of advantages the information resources might create, organizations need to consider the value
of information resources, the appropriateness of the value, the rate of depreciation of the value
of the information resources and the mobility of these information resources. Understanding
the forces that influence an organization’s competitive environment and using value chain
model to assess the internal operations of the organization help management to align IS
strategy with business strategy. Information resources could be used to support the strategic
thrusts of the organization to achieve a competitive edge in the industry.
MANAGING THE ENTERPRISE NETWORK: PERFORMANCE OF ROUTING
VERSUS SWITCHING ON A STATE OF THE ART SWITCH
ABSTRACT
Given the recent increased dependence on data networks for Internet and other
business needs, there is an increased need to address the functionality and security of the
devices which allow the transmission of data. To that end, this paper examines the relative
efficiencies of both switches and routers utilizing data obtained in a controlled laboratory
experiment. A Force 10 E-300 configurable switch was used to gather data with eight
configurations ranging from a 1 by 3 to a 2 by 6. The data collected suggest that while the
router and switch configurations differ in packet inter-arrival time and mean packet
intensity, there is no difference in mean throughput between the two.
I. INTRODUCTION
Ethernet emerged as the primary local area network (LAN) architecture in the 1990’s.
During this time, networks made the transition from a shared media in which workstations
were concentrated with hubs to an environment where a switched configuration was utilized
(Guster and Holden, 1996). What drove this conversion was a desire for better performance
because each port received a dedicated bandwidth allocation instead of sharing the available
bandwidth with every other port on the switch. Also, the switch offered better security
because a packet sniffer connected to a port on a hub could see all traffic coming in and out of
that hub. In contrast, on a switched network, the sniffer could only see the traffic coming in
and out of the port it is connected to. This transition was further aided by the fact that many
switches were designed to be self-configuring. Because a hub is essentially a multiport
repeater and requires no configuration, a hub could effectively be replaced with a switch.
Switches have added functionality that allows them to send out address-probing packets
and, based on the responses, learn the required configuration parameters on their own
(Guster and Hall, 2000). Moreover, existing network management personnel need little or
no additional training. For these reasons, it made business sense to utilize switches.
In this new environment, there was also a trend to minimize routing. Historically, in
hub based LAN’s there was a need in certain applications to exercise more control than could
be provided with a hub. For example, a company may have developed LAN’s for each of its
departments independently and needed to implement a different access control policy for each
department (Yuan and Strayer, 2001). In that case, a router could easily provide the
connecting point for all of these independent LAN’s and be programmed to control access to
different LAN segments by the network address structure (OSI layer 3). The switch
revolution of the 90’s and the need to recover internet address licenses because of the
limitations of IP version 4’s 32 bit address scheme relegated routing on the LAN level to
providing the interface to the wide area network (WAN) or the Internet (Guster and Shilts,
1998). Because this implementation allowed forwarding based on the physical instead of the
network address, this solution provides better performance and requires fewer personnel
resources to implement and manage (Tanenbaum, 1996).
With the near ubiquity of the internet and other networks, there is a new paradigm in
place in regard to network security (Schmidt & Arnett, 2005). Effective security paradigms
necessarily require mechanisms for deterrence, prevention, detection, and remediation (Straub
and Welke, 1998). The implementation of security should involve a holistic approach and
needs to be addressed prior to a system’s implementation. Indeed, security issues can
significantly impede system performance. In fact, after a short period of network activity, it
was found that certain metrics including disk access time increased by a whopping 20%
(Arnett and Schmidt, 2005). If one accepts the premise that the added management
capabilities are required in future enterprise networks to help mitigate security concerns, then
data is needed about the performance differences between switched and router based networks
so that decision makers can make informed and objective decisions regarding the most
economic, efficient, and effective network implementations. To that end, the purpose of this
paper is to run a series of controlled experiments with a high-end switch that is programmable
as either a switch or a router to ascertain its performance capabilities in each category.
II. METHOD
Several test-bed networks of various sizes were configured and provided the
destination mix for the various experiments. A workload generation program was used to
offer consistent network loads across the multiple experiments. A Force 10 E-300 switch
was used to forward the packets. The manner in which this switch was programmed (either as
a switch or a router) became the experimental treatment. The test-beds were configured in the
four following ways: three devices sending to one device, six devices sending to one device,
three devices sending to two devices and six devices sending to two devices. The devices
were standard Intel based PCs running the Linux operating system. A packet sniffer was
placed on the receiving device(s) and it collected arrival time, size and source/destination
addresses from each packet as it traveled the network.
TCPDUMP was used to capture packets generated with eight different configurations
ranging from a 1 by 3 to a 2 by 6. These eight configurations were divided into 36 sets of
100,000 packets each, totaling 3.6 million captured packet records. The first six sets were
generated with the Force E300 configured as a router. Each set consisted of 100,000 captured
packets. The first three sets were captured by one machine with three other machines
generating traffic. The next three sets were captured by one machine from six machines
generating traffic. These same experiments were then repeated with the Force E300
configured as a switch.
The next phase in the experiment consisted of two machines capturing traffic
simultaneously. First, two machines capturing data from three machines, and then from six.
Similar to the aforementioned experiment, the E300 was configured first as a router and then
as a switch with runs of 100,000 packets each. Like machines runs were then combined into
files of 200,000 packets each.
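The per-packet statistics described above can be derived from a capture log of the kind TCPDUMP produces. A minimal sketch (the record format and sample values here are hypothetical; the paper does not describe its actual post-processing code):

```python
# Compute mean inter-arrival time and total bytes from a list of
# (timestamp_seconds, packet_length_bytes) records, such as might be
# parsed out of TCPDUMP output.

def capture_stats(records):
    timestamps = [t for t, _ in records]
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean_interarrival = sum(gaps) / len(gaps)
    total_bytes = sum(length for _, length in records)
    return mean_interarrival, total_bytes

# Hypothetical three-packet capture: arrivals at 0.0000, 0.0009, 0.0021 s.
records = [(0.0000, 1514), (0.0009, 54), (0.0021, 1514)]
mean_gap, total = capture_stats(records)
print(round(mean_gap, 6))  # 0.00105 s, i.e. roughly one millisecond per packet
print(total)               # 3082 bytes
```

Dividing total bytes by elapsed time yields the throughput figures reported in the next section; dividing the packet count by elapsed time yields packet intensity.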
III. RESULTS
The results from the eight experimental trials are depicted in Table I. To help ensure
reliability of the experiments, the trial for each row in Table I below was run three times to
check for consistency. It was determined that the variation within each of the row trials was
minimal and consistent with what would be expected from a workload distribution generator
that uses a random number to devise its distribution. To account for this slight variation, the
values reported below come from the row trial with the median mean inter-arrival time.
Table I
Means and Standard Deviation for the Various Device Mixes
The data from Table I will be used to address the following research propositions:
P1. There is no difference in packet inter-arrival time between a switch and a router
configuration.
P2. There is no difference in mean throughput between a switch and a router
configuration.
P3. There is no difference in mean packet intensity between a switch and a router
configuration.
After a thorough analysis of the data, it is apparent that there is a difference in packet
inter-arrival time. That difference shows that the inter-arrival times on the switch
configuration are smaller which would be expected because the switch only needs to read to
the OSI layer 2 header, whereas the router must read to the layer 3 header. However, the
differences in every mix but one (1x6) are less than .0005 milliseconds. To put these values
in perspective, it is useful to look at the suggested values from an efficient network
application. Oppenheimer (1999) reports that the maximum target response time to an end
user should fall into the 400-800 millisecond range. If responses start exceeding this target by
two or three standard deviations then end users become dissatisfied and may resubmit the
same request again or abandon the application entirely. It is quite impressive that the Force
10 switch was able to pump packets through when configured as a switch on average in less
than one millisecond, which would take up just a small portion of the 400-800 millisecond
end-to-end target. Reconfiguring the Force 10 device as a router consistently added delay;
however, that delay was minimal, typically in the one-millisecond range.
Therefore, it appears that P1 can be rejected and there is a difference in performance between
the switch and the router configurations in regard to inter-arrival time.
The results of the throughput data are less clear, and there is no consistent pattern. In
some cases more throughput was observed in the switch configuration; in others, more
throughput was observed in the router configuration. Furthermore, the workload generation
program is, in theory, supposed to send about the same amount of data; it simply arrives more
quickly with the switch. The variation in the results can therefore be attributed in part to the
workload generation program, which offers a workload that is generally consistent within the
boundaries of a distribution parameter. The mean throughputs ranged from about
17 to about 25 million bytes, which would appear to be consistent with a synthetically
generated distribution. These values may also be affected by sample size. In the one-by
samples the number of packets collected was 100,000, whereas in the two-by samples
200,000 packets were collected (100,000 at each of the receiving stations). On the surface
this sounds like adequate data, but when one considers that the switch is designed to support
the transfer of data in the terabit per-second range on its backplane the 100,000 packets
collected could easily not be representative of the massive amount of data that might be
transferred. Based on the data collected herein one finds little evidence to reject P2.
Therefore, it appears that there is no statistical difference in throughput between the switch
and the router configuration.
The packet intensity data is in many ways similar to the packet inter-arrival data. In
all cases the intensity was greater for the switch. Why then are the throughput values so
inconsistent and sometimes favor the router configuration? In fact, there is less consistency in
the intensity values, which range from about 35 to 56 thousand packets. However, one needs
to evaluate these values in light of the fact that the packets are variable in size. Because these
packets were sent over standard Ethernet it would be expected that the packets would vary
approximately from a minimum of 54 to a maximum of 1514 bytes. Therefore, variation in
packet size would mean that one could send the same amount of data with fewer packets if the
packets sizes were larger. This is evident in the data, especially in the 1x6 level. In that
instance, the switch uses about 55 thousand packets to yield throughput of about 22 million
bytes, whereas the router attains about the same throughput with just 35 thousand packets.
Furthermore, this situation makes sense if one remembers that switches and routers evaluate
the destination address of a packet and forward the whole packet accordingly, rather than
evaluate the destination of each byte in the stream. Of course larger packets take more time to
transfer, but it is the speed at which the device can evaluate and forward packets that is of
prime concern. Therefore, based on the data collected it appears that P3 can be rejected.
Accordingly it appears there is a difference in packet intensity between the switch and the
router configurations.
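The 1x6 comparison above amounts to a simple average-packet-size calculation, using the rounded totals quoted in the text:

```python
# Approximate average packet size for the 1x6 destination mix: roughly
# equal throughput was achieved with very different packet counts.

switch_bytes, switch_packets = 22_000_000, 55_000
router_bytes, router_packets = 22_000_000, 35_000

print(switch_bytes // switch_packets)  # 400 bytes per packet
print(router_bytes // router_packets)  # 628 bytes per packet
```

Both averages fall inside standard Ethernet's 54- to 1514-byte frame range, so larger packets let the router move the same amount of data with far fewer forwarding decisions.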
Although knowing the means for the packet inter-arrival rates provides a good
idea of central tendency within the router and switch configurations it was felt that
distribution plots might provide additional useful information. An examination of all
destination mixes reveals a strikingly similar distribution pattern, although there is an
occasional spike that favors a specific configuration. Given the small magnitude of the packet
inter-arrival means and their associated standard deviations one might expect plots of this
type. In all cases the switch activity quits before the router activity. This is explained by the
fact that the experiment was run until a specific number of packets was received: 100,000 for
the one-by runs and 200,000 for the two-by runs. Therefore, the switch configuration was able to
forward the prescribed number of packets more quickly than the router configuration.
IV. CONCLUSION
It was clear that there was a performance difference between the switch and router
configuration in favor of the switch. This would be expected because the switch only needs to
evaluate the physical address, whereas the router must evaluate the network address.
However, the magnitude of the difference was minimal on this high end enterprise level
switch. In fact, it was typically less than one millisecond on average. This value when
considered as a part of the total inquiry to response delay target of 400-800 milliseconds is
almost insignificant. Certainly, it is small enough that a company considering converting its
LAN forwarding logic to gain the added control of routing would not be discouraged by the
potential performance loss.
In the near future one can expect that the Internet will remain the dominant networking
infrastructure and the traditional 80/20 traffic model will continue to erode. Further,
applications will continue to grow in bandwidth requirements in part because of the trend to
provide more interactivity through multimedia. Unfortunately, one can expect hacking to
continue to grow as well.
All of these factors further reinforce the need for totally routed networks to support the
management, addressing performance and security needs of the future. In 1992 the largest
threats to information systems were internal in origin (Loch, Carr, and Warkentin, 1992); this
continues to be the case today (Whitman, 2003). A router allows for more control and
filtering by specific address. Due to this increased control, it is one mechanism that can be
utilized in an effort to reduce the internal threat. The results contained herein are
encouraging: they demonstrate that a routing configuration does not add significantly to
delay, at least on a lightly loaded switch. At this point additional research is needed to explore
these factors on a work group level switch and the effects of scaling on performance as
workload increases.
REFERENCES
Arnett, Kirk P. and Schmidt, Mark B. "Busting the Ghost in the Machine." Communications of
the ACM., 48, (8), 2005, 92-95.
Guster, Dennis, and Holden, Mark. “Integrating High-Speed Switches into Existing Networks as a
Means of Bridging to the 100Mbs World.” A paper presented at the Small College Computing
Symposium, St. Cloud, MN., 1996, April 18-20.
Guster, Dennis, and Hall, Charles. “Integrating High-Speed WAN Transmission Technologies
into a College Computer Networking Curriculum.” A paper presented at the 33rd Annual
Conference of the Midwest Instruction and Computing Symposium, St. Paul, MN., 2000,
April 13-15.
Guster, Dennis, and Shilts, Aaron. “Deploying Port Manageable Hubs to Create Virtual LAN’s to
Improve Network Performance in a General Education Computer Lab.” A paper presented at
the Small College Computing Symposium, Fargo, ND., 1998, April 16-18.
Loch, Karen D.; Carr, Houston H.; and Warkentin, Merrill E. “Threats to Information Systems:
Today’s Reality, Yesterday’s Understanding.” MIS Quarterly., 16, (2), 1992, 173-186.
Oppenheimer, Priscilla. “Analyzing Technical Goals and Constraints.” In Top-Down Network
Design: A Systems Analysis Approach to Enterprise Network Design. Macmillan Technical
Publishing (Cisco Press), 1999.
Schmidt, Mark B. and Arnett, Kirk P. “SPYWARE: A Little Knowledge is a Wonderful Thing.”
Communications of the ACM., 48, (8), 2005, 67-70.
Straub, Detmar W., and Welke Richard J. “Coping with Systems Risk: Security Planning
Models for Management Decision Making.” MIS Quarterly., 22, (4), 1998, 441-469.
Tanenbaum, Andrew. Distributed Operating Systems. Englewood Cliffs, NJ: Prentice Hall,
1996.
Whitman, Michael E. “Enemy at the Gate: Threats to Information Security.” Communications of
the ACM., 46, (8), 2003, 91-95.
Yuan, Raixi, and Strayer, Timothy. Virtual Private Networks. Reading, MA: Addison Wesley,
2001.
The authors would like to acknowledge Force 10 Inc. for their support of our research.
AN EMPIRICAL INVESTIGATION OF ROOTKIT AWARENESS
ABSTRACT
Despite the recent increased attention afforded rootkits by media outlets such as CNN
Headline News (2005), there appears to be a dearth of awareness and understanding of the
rootkit security paradigm. This paper defines and describes rootkits and the threat they pose.
Next, it presents a study that utilizes an instrument which was used in two prior studies.
Results are presented based on data collected from 210 IT users from three geographically
separate institutes of higher learning. The results indicate that knowledge of rootkits is well
below that of spyware and viruses. Fortunately, even though rootkits are
potentially as damaging as other malware, users may not suffer the full effect of rootkits if the
security community can raise awareness to the point where end users will utilize rootkit
detection and removal tools as part of their overall computing paradigm.
I. INTRODUCTION
Given today’s reliance on computer networks and the Internet it is no surprise that
more attention is being given to malware such as viruses, worms, and spyware. The
professional literature has seen an increase in the number of journals and special issues that
deal with security issues. Among those outlets devoting content toward the pursuit of this
topic are ACM Transactions on Information and Systems Security, Computer Fraud and
Security, Computer Law and Security, Computers and Security, Computer Security Journal,
International Journal of Information and Computer Security, IEEE Security And Privacy,
Information Management and Computer Security, Information Systems Security, International
Journal of Information Security, Journal of Computer Security, Journal of Internet Security,
Journal of Information Privacy & Security, Journal of Information System Security, and
Journal of Privacy Technology to name a few (see http://www.misprofessor.com/).
Additionally, editors are publishing special issues of journals with a security focus. For
example, in August 2005, Communications of the ACM had a special issue on spyware.
Concurrent with the pervasiveness of modern security threats, recent research indicates
that corporate IT officials are finally starting to devote an increasing amount of resources to
threat detection and amelioration (Whitman, 2003). A recent survey of 301 IT executives
found that security concerns are increasing on the list of managements’ most important
concerns (Luftman and McLean, 2004). Increases in the number of formal security audits,
financial commitments to holistic security practices, and interest in security awareness
training are indeed steps in the right direction (Gordon et al., 2005).
The purpose of this paper is to present a comparison of concern for rootkits and other
security threats in an effort to increase the level of knowledge of the rootkit phenomenon, and
thereby further our progress in the struggle to effectively cope with the threat. The remainder
of this paper is organized as follows: the next section presents a discussion of rootkits,
followed by a description of the survey and respondents. Then, findings are analyzed and
discussed.
A rootkit is a “type of Trojan that keeps itself, other files, registry keys and network
connections hidden from detection. It runs at the lowest level of the machine and typically
intercepts common API calls. For example, it can intercept requests to a file manager such as
Explorer and cause it to keep certain files hidden from display, even reporting false file counts
and sizes to the user” (TechWeb, 2005). This malware has its origins in the UNIX world, and
because it allows access at the lowest level (or root level), was termed rootkit.
Rootkits were developed circa 1995 and originally targeted UNIX systems, and until
recently they have been relatively rare on Windows machines (Roberts, 2005). Perhaps in an
effort to increase their depth and reach, rootkits more recently began beleaguering systems
running Microsoft Windows. It
is likely that the trend to focus on Windows based machines will continue into the near future
(Seltzer, 2005).
Specifically, a rootkit refers to a piece of code that is intended to hide files, processes,
or registry data, most often in an attempt to mask intrusion and to surreptitiously gain
administrative rights to a computer system. However, rootkits can also provide the
mechanism by which various forms of malware, including viruses, spyware, and Trojans,
attempt to conceal their existence from detection utilities such as anti-spyware and anti-virus
applications. The combination of two or more malware programs, such as rootkits, spyware,
viruses, and worms, is referred to as a blended threat. For instance, the product of a
spyware/rootkit blended threat is malware that contains, from the hacker’s perspective, the
best of both worlds. The spyware/rootkit blended threat would include the mobility and
payload of spyware with the stealth like nature and persistence of a rootkit. The resulting
threat is much more difficult to detect and remove.
Although it is difficult to say with 100% certainty when rootkits targeting Windows
first appeared, a program manager for Microsoft Solutions for Security indicates that a rootkit
targeting Windows NT was introduced in 1999 by Greg Hoglund (Dillard, 2005). Still
interested in rootkits, Hoglund also maintains rootkit.com, a popular website for
disseminating information concerning rootkit exploits. Dillard (2005) contends that rootkits
target the extensible nature of operating systems, applying the same principles for application
development as found in legitimate feature-rich software. Unfortunately, in the case of the
rootkit, the purpose is solely intended to benefit the potential hacker.
The following section describes the details of the survey. The survey was administered
to 210 IT users at three institutions of higher learning and the results are detailed in
subsequent sections.
This study has its roots in two previous studies (Jones et al., 1993; Schmidt and Arnett,
2005). Both of these had a goal of examining relatively new malware as it emerged on the
computing landscape. The original study (Jones et al., 1993) focused on users’ perceptions of
computer viruses. More recently, Schmidt and Arnett (2005) utilized a similar instrument to
assess users’ perceptions of spyware. The focus of this study was similar in that it
investigated the relatively new phenomenon of rootkits. Specifically, this study examined IT
users’ perceptions of rootkits, spyware, and viruses. The following section describes the
survey, its subjects and their demographics, and the analysis process that followed.
The survey used in this research is based on a survey that was originally published in a
1993 Computers and Security article (Jones et al., 1993) that discussed the knowledge of
computer viruses and was later used in a 2005 Communications of the ACM article (Schmidt
and Arnett, 2005) that examined user knowledge of spyware. The survey used a five-
point Likert scale (1 = Strongly Disagree, 3 = Neutral, 5 = Strongly Agree) for the research
items and contained additional demographic items including gender, age, computer
experience, education and occupation.
The surveys were administered in class to students, who were assured of the
confidentiality of their responses. IT professionals were asked to complete the survey in their
workplace. Respondents were asked to circle the answer that most closely described their
answer for each question. In an effort to increase our understanding of rootkit awareness and
knowledge, a survey was conducted involving 210 faculty, staff (including IT professionals),
and students from three public institutes of higher learning from various geographical regions
within the United States.
The majority of respondents were male (57%), with 5-10 years of computer experience
(43%). A large number (97%) of respondents have used the Microsoft Windows environment
for at least one year with 69% having at least five years of windows experience. Conversely,
considering UNIX, the platform from which rootkits evolved, more than 78% of respondents
have less than one year experience if any at all.
ANOVA results. The ANOVA results indicate that there are differences between the self-reported
knowledge levels of these three types of malware.
ANOVA
Source of Variation    SS            df    MS           F            P-value     F crit
Between Groups         1038.158428   2     519.079214   596.771045   9.11E-146   3.010114242
Within Groups          544.5029392   626   0.869813002
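The table's internal arithmetic can be checked directly, since each mean square is SS/df and F is the ratio of the two mean squares:

```python
# Verify the ANOVA table arithmetic: MS = SS / df, F = MS_between / MS_within.

ss_between, df_between = 1038.158428, 2
ss_within, df_within = 544.5029392, 626

ms_between = ss_between / df_between
ms_within = ss_within / df_within
f_stat = ms_between / ms_within

print(round(ms_between, 6))  # 519.079214, matching the table
print(round(ms_within, 9))   # 0.869813002, matching the table
print(round(f_stat, 3))      # 596.771, matching the reported F statistic
```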
After finding highly significant results (P &lt; .001), the next step was to determine where
the source of the differences lies. Tukey’s Honestly Significant Difference (HSD) procedure
was used: if the absolute difference between two means is greater than .25, then there is a
difference in user perceptions between the two means in question. Table II depicts the means
and the absolute difference between them.
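The Tukey-style decision rule described above reduces to a simple threshold check. A sketch using the spyware and virus means reported later in the text (4.13 and 4.21); the rootkit mean shown is a hypothetical value for illustration, as Table II did not survive extraction:

```python
# Pairwise comparison using the paper's decision rule: two means are
# considered different when their absolute difference exceeds 0.25.

HSD_THRESHOLD = 0.25

def differs(mean_a, mean_b):
    return abs(mean_a - mean_b) > HSD_THRESHOLD

# Rootkit mean is hypothetical; spyware and virus means are from the text.
means = {"spyware": 4.13, "viruses": 4.21, "rootkits": 2.50}

print(differs(means["spyware"], means["viruses"]))   # False: |0.08| <= 0.25
print(differs(means["rootkits"], means["spyware"]))  # True
print(differs(means["rootkits"], means["viruses"]))  # True
```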
Respondents were more familiar with spyware than they were with rootkits.
Respondents were more familiar with viruses than rootkits. It appears that it takes some time
for the awareness levels of new malware to reach a point where IT users are cognizant of the
threats to a level which they can adequately protect themselves. A historical view of viruses
finds that 79.6% of respondents were aware of viruses for one or more years when viruses
were approximately 10 years old (Jones et al., 1993). More recently Schmidt & Arnett (2005)
found that even though spyware is substantially less than 10 years old, only 6% of
respondents were aware of spyware for less than a year. It appears that as time progresses,
users are more aware of malware and experience a compressed time period from the
introduction of a particular malware to widespread awareness. Time will tell as to whether or
not this pattern holds with rootkit awareness.
Interestingly, there is no statistically significant difference between self-reported
knowledge of spyware and viruses. Specifically, on the five-point Likert scale users report an
awareness level of 4.13 for spyware and a level of 4.21 for viruses. This difference is not
statistically significant which points to the conclusion that users report that they are as
knowledgeable regarding spyware as they are regarding viruses.
Even with the increase in the popular press of reports of incidents involving rootkits,
awareness regarding the pervasiveness and threats posed by rootkits remains low. Of those
survey respondents that reported a familiarity with rootkits, only 59% were able to accurately
report a rootkit’s ability to manipulate logs, 56% knew that rootkits could be used to provide
false feedback to detection utilities, and only 62% indicated that rootkits can be used to
provide hackers with administrative rights.
V. CONCLUSION
Rootkits have the potential to cause a great deal of harm because they are designed not
only to conceal themselves but also to conceal other symbiotic malware such as viruses and
spyware (Seltzer, 2005). Because consumers are not demanding rootkit detection and
removal methods, antivirus software developers have been slow to add rootkit features to their
protection tools. However, some companies are now moving in that direction. For instance,
F-Secure (http://www.f-secure.com/) now includes “BlackLight,” a rootkit detection tool with
its “F-Secure Internet Security 2006” security suite. It seems likely that, perhaps due to
recent high-profile rootkit abuses, user awareness of rootkits will increase. As knowledge
levels increase, it is logical to assume that consumers will demand more adequate protection
tools.
It is evident to many that rootkits pose a significant threat to computer security. Given
the current levels of awareness and knowledge within the user community, this threat will
likely continue to emerge much as virus and spyware threats did in their beginnings.
Unfortunately, it is likely that security professionals’ attempts to mitigate this threat will
encounter many of the same challenges we face in our efforts against viruses and spyware.
Given that the signature of a rootkit is to conceal its presence and activities, many current
protection mechanisms are largely ineffective because they cannot mitigate what they cannot
detect. Unfortunately, a lack of knowledge of computer security threats negatively affects an
organization’s ability to counter those threats (Straub and Welke, 1998). Given the
aforementioned findings, it appears as though effective widespread rootkit threat amelioration
will likely be a phenomenon of the future. But, the future may be closer than we think!
REFERENCES
Luftman, Jerry, and McLean, Ephraim R. "Key Issues for It Executives." MIS Quarterly
Executive., 3, (2), 2004, 89-104.
CNN Headline News. Rootkit Report, 2005.
Roberts, Paul F. "Rootkits Sprout on Networks." eWeek., October 17, 2005, 25.
Schmidt, Mark B., and Arnett, Kirk P. "Spyware: A Little Knowledge Is a Wonderful Thing."
Communications of the ACM., 48, (8), 2005, 67-70.
Seltzer, Larry. "Rootkits: The Ultimate Stealth Attack." PC Magazine., 24, (8), 2005, 76.
Stafford, Thomas F. "Spyware." Communications of the ACM., 48, (8), 2005, 34-35.
Straub, Detmar W., and Welke, Richard J. "Coping with Systems Risk: Security Planning
Models for Management Decision Making." MIS Quarterly., 22 (4), 1998, 441-69.
TechWeb. 2005. <http://www.techweb.com/encyclopedia/>.
Whitman, Michael E. "Enemy at the Gate: Threat to Information Security." Communications
of the ACM., 46, (8), 2003, 91-95.
CHAPTER 6
E-BUSINESS
I’M WITH THE BROADBAND: THE ECONOMIC IMPACT OF BROADBAND
INTERNET ACCESS ON THE MUSIC INDUSTRY
ABSTRACT
Despite its illegal origins, by 2005 digital music distribution and online file sharing – in
concert with the growth of broadband Internet access – revolutionized the music industry.
Musicians, consumers, and record companies are now beginning to fully grasp the significance
of this popular new paradigm. This paper explores the origins of P2P systems, investigates their
effects on the music industry and surveys the future of this brave new world of music
production, promotion and distribution.
I. I WANT MY MP3!
When Napster launched in 1999, it fundamentally changed the way by which people
obtained music (Zentner, 2003). Leveraging a peer-to-peer (“P2P”) platform, Napster directly
connected two or more computers, enabling them to share files and resources (Jacover, 2002).
Offering increased speed, unparalleled selection, and unequaled affordability (it was free),
Napster simplified and centralized the process by which MP3 files could be exchanged around
the world. The benefits of MP3 compression were one of the fundamental factors in Napster’s
success. Moore and McMullen (2004) explain the mechanics and benefits of MP3 files as
follows:
Prior to employing the MP3 compression algorithm, a music file stored on a
computer could be as large as 40 to 45 megabytes in size, and would take
around one and one-half hours to transfer over a phone line…After using the
MP3 algorithm, the same music file would be around 3 to 5 MB and would
take around 8 to 15 minutes to transfer. (p. 3)
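As a rough check on those figures, the quoted transfer times can be reproduced with a few
lines of arithmetic (the 56 kbps modem rate is an assumption; the excerpt says only "a phone
line"):

```python
def transfer_minutes(size_mb, link_kbps):
    """Minutes to move a file of size_mb megabytes over a link_kbps line.

    Treats 1 MB as 10^6 bytes; real-world protocol overhead would add to these times.
    """
    bits = size_mb * 8_000_000
    return bits / (link_kbps * 1000) / 60

# Uncompressed track (~40 MB) over a 56 kbps phone-line modem:
print(round(transfer_minutes(40, 56)))  # ~95 minutes, i.e. about an hour and a half
# The same track as a ~4 MB MP3:
print(round(transfer_minutes(4, 56)))   # ~10 minutes, inside the quoted 8-15 range
```

The roughly tenfold reduction in file size translates directly into the tenfold reduction in
transfer time that made Napster-era sharing practical over dial-up.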
In true viral fashion, Napster quickly generated a user base of over 20 million unique
accounts at its peak, with more than 500,000 unique IP addresses connected to the system at
any one time (Blackburn, 2004). Prior to Napster, the music industry in the United States was
growing after several years of stagnation (Blackburn, 2004). However, the gains made in the
years prior to 1999 quickly disappeared once Napster launched.
Moore and McMullan (2004) highlight a 2002 congressional report estimating that there
were more than 3 million users swapping 2.6 billion songs per year, costing songwriters $240
million a month – an amount predicted to balloon to $3.1 billion annually by 2005. Had the
amount of music available online been reduced by 30 percent, sales could have been
approximately 10 percent higher in 2003 (Blackburn, 2004).
Nielsen SoundScan also recorded a significant drop in music sales near college
campuses between 1997 and 2000 (Blackburn, 2004). College campuses are the key to
understanding digital music distribution – partly due to the early adopter tendencies of
students when it comes to technology, music and popular culture, but also because most
institutions of higher learning provide high speed Internet access at little or no charge to
students. Zentner (2003) offers the following insight into the situation:
Universities have very fast connections and Napster and its successors were
banned in many of them because file swapping was consuming much of the
available bandwidth. In the case of the University of Illinois at Urbana-
Champaign, this amounted to 75 percent of the total bandwidth. (p. 3)
Overall, Zentner (2004) estimates that file sharing networks reduce the probability of
purchasing music by 30 percent. However, the RIAA’s legal tactics have worked: Blackburn (2004)
estimates the lawsuits increased album sales 2.9 percent during the 23-week period after the
strategy was announced. Zentner (2004) adds that the RIAA’s pursuit of individual users
resulted in increased record sales. On March 5, 2001 Napster was ordered to cease operations,
and by July the system was shuttered.
Interestingly, despite the tremendous fiscal impact file sharing has on music sales, it
is not viewed negatively by all artists. In fact, some actually welcome and fully
support it. Blackburn (2004) explains this dichotomy as it relates to established and aspiring
musicians:
First, there is a direct substitution effect on sales as some consumers
download rather than purchase music. Second, there is a penetration effect
which increases sales, as the spread of an artist’s works helps to make the
artist more well-known…The first effect is strongest for well-known artists,
while the second is strongest for unknown artists. The overall negative
impact of file sharing arises because aggregate sales are dominated by sales
of well-known artists. (p. 1)
Perhaps it is partly because of this that illegal file sharing systems continue to flourish
while several sanctioned options have also become available (many as official distribution
channels of large media corporations).
Overall, the number of users on all worldwide P2P networks is estimated to have
reached nearly 10 million by October 2004 (Working Party on the Information Economy,
2005). The United States accounted for nearly 50 percent of all users (Working Party on the
Information Economy, 2005).
Broadband is most widely available from the cable company via existing coaxial lines,
or from the phone company (or similar provider) through a digital subscriber line (DSL) that
makes use of telephone wiring. Cable offers download speeds as high as 8 megabits per
second (mbps) and an upload rate as high as 768 kbps. DSL services offer users download
speeds between 384 kbps and 1.5 mbps and upload availability of 128 kbps to 384 kbps.
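To put those rates in context, a short sketch can compare how long a typical album of MP3s
would take to download at each tier (the ~50 MB album size is an assumption for
illustration, not a figure from the sources above):

```python
def download_minutes(size_mb, rate_kbps):
    # size in megabytes (10^6 bytes), rate in kilobits per second
    return size_mb * 8000 / rate_kbps / 60

album_mb = 50  # hypothetical album: roughly a dozen MP3 tracks at ~4 MB each
for name, kbps in [("56k dial-up", 56), ("DSL, 1.5 mbps", 1500), ("cable, 8 mbps", 8000)]:
    print(f"{name}: {download_minutes(album_mb, kbps):.1f} min")
# 56k dial-up: ~119 min; DSL: ~4.4 min; cable: ~0.8 min
```

At the fastest cable tier an album arrives in under a minute, which is why broadband adoption
and file sharing grew hand in hand.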
Cost remains a significant, though not necessarily limiting, factor. Generally cable plans
are more expensive than DSL – especially since AT&T dropped its rates to $14.95 per
month. Cable costs between $30 and $50 per month, but offers additional benefits that justify
the cost for certain consumers. Additional expenses may include special equipment,
installation fees and other related charges.
However, even without the recent pricing reductions, cost might not be a barrier to entry
into the broadband world for some consumers. Hatfield et al. (2003) explain how
microeconomic theory illustrates that most rational consumers prefer goods and services that
maximize their utility. After an initial awareness of a need is developed – faster Internet
access in this case – a consumer researches options that meet that need, weighing features and
benefits against strengths and weaknesses. The utility function helps consumers decide which
goods to purchase given their income, time and other constraints. The result of this cost-versus-
benefit assessment ultimately influences a consumer’s decision.
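The decision process described above can be caricatured in a few lines. The plan names,
prices, and the linear utility form below are illustrative assumptions for this sketch, not
anything taken from Hatfield et al. (2003):

```python
def utility(speed_mbps, monthly_price, value_per_mbps):
    """A toy linear utility: dollar value placed on speed, minus monthly cost."""
    return value_per_mbps * speed_mbps - monthly_price

# Hypothetical plans: (download speed in mbps, monthly price in dollars)
plans = {"dial-up": (0.056, 9.95), "DSL": (1.5, 14.95), "cable": (8.0, 40.00)}

value_per_mbps = 6.0  # assumed dollar value this consumer places on each mbps of speed
best = max(plans, key=lambda name: utility(*plans[name], value_per_mbps))
print(best)  # with these numbers, cable maximizes utility despite its higher price
```

A consumer who values speed less (a lower `value_per_mbps`) would rank the cheaper DSL or
dial-up plans first, which is the cost-versus-benefit trade-off the paragraph describes.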
Atkinson, Ham and Newkirk (2002) add that “broadband users…use the Internet for
telecommuting, distance learning, and multimedia applications such as television, movies, and
music. These represent the pressure points of broadband demand” (p. 6).
Broadband Internet access was clearly a catalyst for online music sharing and
distribution. As accessibility, affordability and reliability improve, the number of subscribers
grows. The Pew Internet & American Life Project (2004a) found that 68 million adult
Americans use broadband Internet access at home or work, while 48 million adult Americans
have broadband access at home. Data from March 2005 indicate that 50 percent of all home
Internet users now have high-speed access (Pew Internet & American Life Project, 2005).
In light of recent developments, Moore and McMullan (2004) question the need for
continued use of the MP3 format. This is an especially relevant question since “cable modems
and Digital Subscriber Lines (DSL)…allow for transfers of data at speeds greater than 50
times that of traditional phone modems. Both forms of broadband Internet access are
becoming more commonplace in residential establishments, making file sharing an even faster
activity” (p. 4).
Realistically, P2P networks and broadband Internet access represent a genuine paradigm
shift in the production and distribution of music and other forms of entertainment. Moving
music online presents a vast and largely untapped marketing medium. It is an opportunity, not
a challenge. Research presented by Pew Internet & American Life Project (2004b) supports
the belief that digital is the domain in which musicians should be operating:
Artists and musicians are more likely to say that the internet has made it
possible for them to make more money from their art than they are to say it
has made it harder to protect their work from piracy or unlawful use. (p. ii)
Times are changing. In light of the current legal climate, more consumers now use paid
music and movie services than their illegal counterparts. According to the Pew Internet &
American Life Project (2005), “34 percent of current music downloaders…now use paid
services and 9 percent say they have tried them in the past” (p. 6).
Distribution channels have evolved significantly since the advent of the Internet and file
sharing systems. According to Blackburn (2004), in 1999, 51 percent of albums were sold in
retail music stores and 34 percent were purchased in other retail establishments. By 2003, the
percentage of sales in music stores dropped to approximately 35 percent with more than 50
percent sold in other types of stores. Most significantly, by 2003, fully 5 percent of all music
sales occurred through the Internet – a figure that has continued to grow in recent years.
Generally, the core of music sales is shifting away from stores that exclusively sell
music toward more general merchandisers who offer items that complement or encourage a music
purchase. Predominantly, large electronics chains like Best Buy and Circuit City, in addition
to general merchandisers such as Wal-Mart, are reaping the rewards.
Despite the negative effect online file sharing may have on sales of music CDs, it
represents an entirely new distribution model for the future. Beyond the basics and mechanics
of this electronic delivery system, offering music online – in a paid scenario – opens the doors
to increased sales of complementary items and impulse purchases. Whereas MP3s are
substitutes for CDs (just as CDs were substitutes for LPs), migrating to online music delivery
provides new paths to revenue. In addition, the Internet gets music to more people in more
places than a single CD ever could. Curien et al. (2004) explain:
An increase in piracy should generate a drop in CDs sales but an increase in
expenses in ancillary products. [For example] DVDs and songs used as rings
for mobile phone…encountered a strong growth over the recent period. (p.
16)
Revenue from live concerts has increased in parallel with the explosion of online file
sharing and MP3 availability. So, despite the downturn in CD sales, the digital distribution of
music has ultimately resulted in more revenue. Curien et al. (2004) offer the following
explanation:
Between 2000 and 2003, the 47 percent increase in revenues generated by
live shows has been much more important than the drop in CDs
sales. Since the average price of a concert increased by 24 percent over the
same period…between 2000 and 2003, both the volume of sold units and
revenues do increase. (p. 16-17)
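Taking the quoted figures at face value, the implied change in the number of tickets sold
follows from dividing the revenue growth by the price growth:

```python
revenue_growth = 0.47  # live-show revenues, 2000-2003 (Curien et al., 2004)
price_growth = 0.24    # average concert ticket price over the same period

# Revenue = price x volume, so the volume ratio is the revenue ratio over the price ratio.
volume_growth = (1 + revenue_growth) / (1 + price_growth) - 1
print(f"{volume_growth:.1%}")  # roughly 18.5%: more tickets sold, not just pricier ones
```

That is, the revenue gain was not purely a price effect; attendance itself rose during the
period of peak file sharing, consistent with the quote's claim that both volume and revenues
increased.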
Despite its illegal origins, by 2005 digital music distribution and online file sharing – in
concert with the growth of broadband Internet access – had opened new doors for the music
industry. Online delivery is a considerably more user-friendly experience because it enables
consumers to choose the music they want to listen to in an order and grouping of their choice.
While many aspects of this new paradigm are not yet vetted, in general it appears that there is
a light at the end of the tunnel for musicians, consumers and record companies alike. Rock
on!
REFERENCES
Atkinson, R., Ham, S. and Newkirk, B. (2002, September). “Unleashing the Potential of the
High-Speed Internet: Strategies to Boost Broadband Demand.” Progressive Policy Institute
Technology & New Economy Project.
Berger, S. (2001). “The use of the internet to ‘share’ copyrighted material and its effect on
copyright law.” Journal of Legal Advocacy & Practice, 3, 92-105.
Blackburn, D. (2004, December 30). “On-line Piracy and Recorded Music Sales.” Harvard
University: Department of Economics.
Curien, N., Laffond, G., Lainé, J., and Moreau, F. (2004, November 11). “Towards a New
Business Model for the Music Industry: Accommodating Piracy through Ancillary Products.”
Laboratoire d’économétrie, Conservatoire National des Arts et Métiers.
Hatfield, D., Jackson, M., Lookabaugh, T., Savage, S., Sicker, D., and Waldman, D. (2003,
February 8). “Broadband Internet Access, Awareness, and Use: Analysis of United States
Household Data.” University of Colorado, Boulder.
Jacover, A. (2002). “I want my mp3! Creating a legal and practical scheme to combat copyright
infringement on peer-to-peer internet applications.” Georgetown Law Journal, 90, 2207-
2254.
Kooser, A. (2005, September). “Answer the Call: Two Companies Ring In Much-Needed New
Phone Systems.” Entrepreneur, p. 32.
Moore, R. and McMullan, E. (2004, October 29). “Perceptions of Peer-to-Peer File Sharing
Among University Students.” School of Criminal Justice, University at Albany, State
University of New York.
Pew Internet & American Life Project. (2004a, April). “Pew Internet Project Data Memo: 55%
of Adult Internet Users Have Broadband at Home or Work; Home Broadband Adoption has
Increased 60% in Past Year and Use of DSL Lines is Surging.”
Pew Internet & American Life Project. (2004b, December 5). “Artists, Musicians and the
Internet: They have embraced the internet as a tool that helps them create, promote, and sell
their work. However, they are divided about the impact and importance of free filesharing
and other copyright issues.”
Pew Internet & American Life Project. (2005, March). “Pew Internet Project Data Memo:
Music and video downloading moves beyond P2P.”
Working Party on the Information Economy. (2005, June 8). “Digital Broadband Content:
Music.” Organisation for Economic Co-operation and Development.
Zentner, A. (2003, June). “Measuring the Effect of Music Downloads on Music Purchases.”
University of Chicago.
U.S. ATTEMPTS TO SLOW GLOBAL EXPANSION OF INTERNET RETAILING
MEET LEGAL RESISTANCE
ABSTRACT
E-commerce has thrived in some sectors of the U.S. economy, including airline and
hotel ticketing, books, music recordings, and computers. In other sectors, Internet-based
commerce has not taken hold to the same extent. One factor in explaining slow expansion of
e-commerce in certain sectors has been the use of legal impediments to Internet commerce.
Entrenched business interests have sought to block entrepreneurs and consumers from
developing competitive alternatives in e-commerce. The antitrust laws limit how far
companies acting on their own may go in imposing non-regulatory limitations on e-
commerce. Where competitors have had more success in slowing e-commerce competition
has been through state law barriers in such industries as wine, contact lenses, automobiles,
caskets, real estate, mortgages, and financial services. Recent U.S. court decisions
demonstrate that courts are increasingly willing to strike down protectionist state and local
laws that impede Internet commerce.
I. INTRODUCTION
While e-commerce has thrived in some sectors, on-line retailing has not taken hold to the
same extent in other areas of retail commerce. The Federal Trade Commission in October of 2002
held hearings on state-imposed impediments to e-commerce. The hearings addressed ten areas
where state laws are holding back the growth of e-commerce, including wine, contact lenses,
automobiles, caskets, online legal services, health care, real estate, mortgages, and financial
services. Many participants testified that e-commerce sales in these areas are being held back
for reasons that have little to do with products being unsuitable for Internet purchasing, but
rather due to outdated or deliberately protectionist barriers that impede Internet purchases.
(Smith, 2003).
One factor, although certainly not the only factor, in explaining slow expansion of e-
commerce in certain sectors of the economy has been efforts of non-Internet competitors to
impede competition from on-line competitors. Besides the private efforts of entrenched
competitors, state laws have been used, often at the urging of entrenched business interests, to
effectively block entrepreneurs and consumers from developing competitive alternatives in e-
commerce. (Ribstein & Kobayashi, 2001).
Since the FTC hearings, federal courts have indicated greater willingness to strike
down state and local laws that impede commerce. Impediments to e-commerce by private
firms have always been subject to the antitrust laws. (See, e.g., ABA Section of Antitrust law,
2002). Another important area of law for the analysis in this paper is state franchise and
dealership law. The cases discussed below are an example of local interests using such laws
to effectively prevent the purchase of wine via the Internet from out-of-state sources. (See,
e.g., Brimer, 2004). With such laws, wineries had little incentive to develop Internet-based
distribution.
Limitations on e-commerce retail sales generally come from three sources, all of which are
considered below. (See, e.g., Atkinson and Wilhelm, 2001, Foer, 2001). The first is
private efforts by potentially competing businesses to hinder competition from Internet
retailers. These private efforts may be attempted unilaterally by firms with market power or
collectively by similarly-positioned firms, and are subject to the U.S. antitrust laws. The
second is by governmental authorities to place regulatory restrictions on e-commerce, or
alternatively to continue outdated regulatory restrictions that are more burdensome on e-
commerce retailers than on more traditional types of retailers. The third is a combination of
the first two—actions by competitors to lobby governmental regulators to restrict competition
from Internet retailers, which is also generally exempt from the U.S. antitrust laws. Of
course, such limitations are not confined to U.S. regulations, and pose challenges in the global
economy. (Frynas, 2002).
Efforts by businesses to limit Internet retailing take several forms. Manufacturers may
try to prevent Internet retail sales of their products by distributors. Or distributors may try to
require that their suppliers refuse to deal with Internet retail competitors, as Chrysler dealers
did in an early e-commerce case (discussed below). Trade associations and other such groups
may also be used as a mechanism for limiting competition from Internet retailers, as may have
been the case with the U.S. National Automobile Dealers Association. Each of these sources
of restrictions on Internet sales in the United States is generally subject to the antitrust laws.
The Sherman Antitrust Act of 1890 is the most relevant law for evaluating e-
commerce constraints. Section 1 of the Sherman Act, 15 USCS § 1 (2005), is rather broad in
its wording and makes illegal "every contract, combination in the form of trust or otherwise,
or conspiracy in restraint of trade or commerce." Despite this broad language, Section 1 of
the Sherman Act has consistently been interpreted to prohibit only those restraints of trade
that unreasonably restrict competition. See, e.g., Arizona v. Maricopa County Med. Soc’y,
457 U.S. 332 (1982). Section 2 of the Sherman Act, 15 USCS § 2 (2005), prohibits activities
by "every person who shall monopolize, or attempt to monopolize…any part of trade or
commerce.” Since most (but not all) companies seeking to hinder Internet competition do not
have significant market power, Section 1 of the Sherman Act is the most likely to be relevant
for this analysis.
A variety of manufacturer restrictions on the resale of products have been analyzed under the
Sherman Act and usually upheld by the U.S. courts. Such restrictions on Internet sales may
include refusing to sell to Internet resellers, limiting Internet sales to certain resellers, or
restricting a distributor’s Internet sales to specified territories.
Whether this is a good or bad business decision is not the issue. If restrictions are aimed at
enabling a manufacturer to compete more effectively with its competitors, such restrictions
usually will be found to be reasonable. See, e.g., GTE Sylvania Inc. v. Continental TV, Inc.,
433 U.S. 36 (1977).
For a manufacturer or distributor with market power, even what might appear to be a
unilateral refusal to do business is subject to more scrutiny under the antitrust laws. In
particular, manufacturers or distributors with significant market power face significant legal
issues if they condition sales or purchases on an assurance of not dealing with a competitor or
class of competitors. For example, Toys “R” Us, an important U.S. toy retailer, was found to
have abused its market power. The FTC claimed that Toys “R” Us pressured toy
manufacturers into not making the toys offered by Toys “R” Us available to warehouse club
stores. Toys “R” Us responded that it should be free to choose the suppliers with which it
does business. The Court disagreed, and held that Toys “R” Us had used its
market power to hinder competing retailers through the unilateral threat not to do business
with suppliers who sold to competitors of Toys “R” Us. Toys “R” Us v. FTC, 221 F.3d 928
(7th Cir. 2000).
The rules change when the conduct is not unilateral. For example, if a manufacturer
(regardless of market power) enters into an agreement with a competing manufacturer not to
sell to Internet resellers, that agreement could be characterized as an unlawful agreement not
to compete under § 1 of the Sherman Act. In an early e-commerce antitrust case, 25 Chrysler
dealers in and around Idaho were charged by the FTC with entering into an illegal conspiracy
when they used their association, Fair Allocation System, Inc. (“FAS”), to demand that
Chrysler allocate new vehicles on a different basis. FAS demanded that Chrysler change its
allocation formula to disfavor an Idaho dealer with substantial Internet sales. This demand
was accompanied by threats of a boycott of certain models and refusals to provide certain
warranty repairs. The matter was resolved with a consent decree prohibiting FAS from
threatening such a boycott. In re Fair Allocation System, Inc., 63 Fed. Reg. 43,183 (August
12, 1998).
When federal, state, or local government regulations are in conflict with free and open
competition, Congress and U.S. Courts have generally resolved these conflicts in favor of the
regulations over the antitrust laws. Important recent court decisions, however, suggest that
courts are in the process of clarifying greater limitations on states’ ability to interfere with
e-commerce.
Under the “state action” doctrine, states have been allowed to impose regulatory
requirements mandating conduct that would otherwise violate the antitrust laws. The state
action doctrine is often called “Parker immunity,” after Parker v. Brown, 317 U.S. 341, 352
(1943). To qualify for Parker immunity, (1) the state must clearly articulate and affirmatively
express the restraint as state policy, and (2) the policy must be actively supervised by the state
itself. California Retail Liquor Dealers Ass’n v. Midcal Aluminum, Inc., 445 U.S. 97 (1980).
To the extent that state laws meet this test, the courts will recognize the restraint as being
within the legitimate regulatory power of the state.
Many of the state-level restrictions on e-commerce are in the form of state franchise
laws. State franchise laws were originally adopted in response to concerns about vulnerability
of franchisees to abuses by franchisors. The first state franchise law, the California Franchise
Registration and Disclosure Act, was passed in 1971. Currently, eighteen states have statutes
of general applicability prohibiting termination of a “franchise” or “dealer,” as the terms are
defined in the statutes, without good cause. Many states also have comparable statutes
applicable to the distribution of wine and alcoholic beverages, retail automobile sales and
gasoline and related petroleum products.
State franchise laws may override the contractual provisions in agreements between a
manufacturer and its dealer or distributor. For example, in To-Am Equipment Co. v.
Mitsubishi Caterpillar Forklift America, Inc., 152 F.3d 658 (7th Cir. 1998), the Court found
that even though a manufacturer strictly complied with the termination provisions of its
contract with its exclusive distributor in parts of Illinois (which the contract stated were to be
subject to Texas law), the manufacturer was subject to the Illinois Franchise Disclosure Act
and liable for violating the distributor’s rights under the Illinois state franchise law.
Notably, the National Automobile Dealers Association (“NADA”), the largest trade
association representing new car dealers, has taken the position that most state franchise laws
effectively prohibit automobile manufacturers from engaging in any direct Internet sales.
(NADA 2002). In 45 states, automobile dealer franchise laws contain “relevant market area”
provisions placing the burden on the automobile manufacturer to justify to a state agency any
attempt to allow a new party, such as an Internet seller, to sell in an existing dealer’s territory.
Courts have upheld the constitutionality of these relevant market area provisions. See, e.g.,
New Motor Vehicle Board v. Orrin W. Fox Co., 439 U.S. 96 (1978).
While some federal courts have struck down certain state laws and regulations
impeding the growth of e-commerce as violations of the commerce clause of the U.S.
Constitution, other federal courts have upheld such restrictions. See, e.g., Bridenbaugh v.
Freeman-Wilson, 227 F.3d 848 (7th Cir. 2000) (upholding constitutionality of Indiana's
alcoholic beverage statute); Dickerson v. Bailey, 212 F. Supp. 2d 673 (S.D. Tex. 2002)
(holding unconstitutional Texas's statutory ban on direct importation of wine by Texas
residents).
A key case in this battle involves Michigan and New York restrictions on direct
shipments of wine into the state, such as by Internet sales. Eleanor Heald, a wine collector,
challenged Michigan’s Liquor Control Code as violating the Commerce Clause in Article I
of the U.S. Constitution by prohibiting out-of-state wineries from shipping wine directly to
Michigan residents, while allowing in-state wineries to make such direct shipments. The same
argument was made in a separate case in New York. In a 5-4 decision,
the Supreme Court ruled both the Michigan and the New York laws unconstitutional.
Granholm v. Heald, 125 S.Ct. 1885 (2005). While states had the power to regulate alcohol
sales, states did not have the power to discriminate against out-of-state interests. The
Supreme Court concluded that any justifications offered by the states were undermined by the
unequal application which allowed Internet sales by in-state wineries but not by out-of-state
wineries.
Court opposition to protectionist state laws has extended into other areas and other
legal grounds. Perhaps the most significant is Craigmiles v. Giles, 312 F.3d 220 (2002), in
which the Sixth Circuit Court of Appeals struck down a Tennessee law requiring caskets be
sold only by Tennessee-licensed funeral directors. The Court noted that funeral homes at the
time typically marked up the cost of caskets by 250 to 300 percent, while the plaintiffs
challenging the regulation typically sold caskets elsewhere for much lower prices. In
enjoining the enforcement of the casket sales restriction, the Court pointed to the Equal
Protection and Due Process clauses of the Fourteenth Amendment to the U.S. Constitution. The
Court found that the statute contained an obvious protectionist bias and no rational basis,
since any legitimate purpose of the regulation could be achieved by much less intrusive
means. Note that
the restrictions on casket sales applied to both in-state and out-of-state interests, so that the
Sixth Circuit enjoined the enforcement of the Tennessee statute on a broader basis than the
U.S. Supreme Court used in the wine cases.
Thus, courts have been willing to use at least two Constitutional bases for striking
down state laws that restrict Internet commerce—the Commerce Clause in Article I of the
U.S. Constitution (used in the wine cases) and the Due Process and Equal Protection clauses
in the Fourteenth Amendment (used in the Tennessee casket sales case).
Nonetheless, despite the antitrust immunity granted for political activity under the Noerr
decision, the antitrust enforcement agencies do not sit silently when such restrictions on
competition are proposed. For example, the Antitrust Division and FTC jointly advised the
Rhode Island legislature of their opposition to a proposed law to restrict real estate closings from
being performed by non-attorneys. (U.S. Department of Justice, 2002). Similarly, the FTC
opposed regulations to restrict the online sale of replacement contact lenses. According to the
FTC, requiring Internet-based sellers to obtain optical establishment licenses "would likely
increase consumer costs while producing no offsetting health benefits." (FTC, 2002). Rather
than improve consumer optical health, increased licensing costs could lead to higher prices,
which could lead consumers to replace their contact lenses less frequently.
V. CONCLUSION
Attempts to erect barriers to e-commerce within the U.S., which also affect overseas
trade, often have anticompetitive effects on the marketplace. While proponents of
such barriers may claim to provide consumer protection through the restriction of e-
commerce, the primary purpose is often the protection of local interests, at the expense of out-
of-state or international competitors. Business managers seeking to limit Internet competition
should be on notice that United States policies have shifted and are now less likely to allow
such restraints.
REFERENCES
ABA Section of Antitrust Law. Antitrust Law Developments, 5th ed. Chicago, IL: American
Bar Association, 2002.
Atkinson, Robert D., & Wilhelm, Thomas G. The Best States for E-Commerce. Washington,
DC: Progressive Policy Institute, 2001.
Brimer, Jeffrey A., and Smith-Porter, Leslie. Annual Franchise and Distribution Law
Developments, 2004 ed. Chicago, IL: American Bar Association, 2004.
Foer, Albert A. “Antitrust Meets E-Commerce: A Primer.” Journal of Public Policy and
Marketing, 20(1), 2001, 51-63.
Federal Trade Commission. “FTC provides Connecticut with Comments on the Sale of
Contact Lenses by Out-of-State Sources,” March 28, 2002, available at
http://www.ftc.gov/opa/2002/03/contactlenses.htm.
Frynas, Jedrzej G. “The Limits of Globalization: Legal and Political Issues in E-Commerce.”
Journal of Management History, 40(9), 2002, 871-880.
National Automobile Dealers Association. “Comments Submitted by the National Automobile
Dealers Association Regarding Competition,” submitted to the Federal Trade
Commission’s Public Workshop, October, 2002, available at
http://www.ftc.gov/opp/ecommerce/anticompetitive/comments/nada.pdf.
Ribstein, Larry E., & Kobayashi, Bruce H. “State Regulation of Electronic Commerce.”
Emory Law Journal, 51, (1), 2001, 1-82.
Smith, David H. “Consumer Protection or Veiled Protectionism? An Overview of Recent
Challenges to State Restrictions on E-Commerce.” Loyola Consumer Law Review, 15,
2003, 359-375.
United States Department of Commerce. “News,” November 22, 2005, available at
http://www.census.gov/mrts/www/data/html/05Q3.html.
United States Department of Justice. “Letter about Legislation Concerning Non-Lawyer
Competition for Real Estate Closings,” March 29, 2002, available at
http://www.usdoj.gov/atr/public/comments/10905.htm.
SEGMENTING CELL PHONE USERS BY GENDER, PERCEPTIONS, & ATTITUDE
TOWARD INTERNET & WIRELESS PROMOTIONS
ABSTRACT
Cellular phone carriers offer many wireless promotions as incentives to elicit cellular
phone purchases among college students, one of the primary target audiences in the cellular
phone market. Because limited research is available, it is important to understand and
examine college students’ attitudes toward existing and potential wireless promotions for
strategic planning. Therefore, this study investigates college students’ practical perspectives
on what they are looking for in a cellular phone. Moreover, it examines the relations among
college students’ gender, experiences with the Internet and wireless promotions, and their
attitudes toward various wireless promotions. The results provide practical implications and
directions for future research.
I. INTRODUCTION
The increasing consumer interest in mobile devices and rapidly changing developments
in wireless technologies make the cellular phone market very profitable. Research
reveals that the world’s 1.4 billion mobile phone subscribers send over 350 billion text
messages a month (www.cita.org), 15% of which are commercial messages. In addition, 54%
of cellular phone owners use one or more mobile services including messaging, Internet
access, downloadable ring tones, and mobile gaming.
While cellular phone carriers use advertising campaigns intensively to recruit consumers
with a variety of services, creating persuasive messages that gain favorable entry into
consumers’ minds has been challenging. Do consumers perceive a service offer as a benefit?
If they do, what factors may enhance their favorable attitudes? The main purpose of this
study is to find empirical evidence documenting factors that may affect consumers’ attitudes
toward existing and potential wireless promotions, which plays a significant and constructive
role in the development of future wireless marketing communication strategies. Thus, the
study’s methodology aims at investigating consumers’ cellular phone usage based on gender
and the information search behavior that may influence their attitudes toward wireless
promotions.
II. LITERATURE REVIEW AND HYPOTHESES
Past studies suggest that males and females use their cellular phones differently
(Khoo and Senn, 2004; Madell and Muncer, 2004). Madell and Muncer (2004) found that
females were more likely than males to own a cellular phone, and they found a significant
relationship between the ways consumers used messaging and e-mail services. Khoo and
Senn (2004) found that females showed more negative attitudes than males when exposed
to sex-related text messages.
H1: Males and females would display different attitudes toward mobile promotions.
Past studies have indicated that consumers form their attitudes toward Internet
advertising based on their experiences with it, and that informativeness and enjoyment are
two main determinants of those attitudes (Brackett and Carr, 2001; Schlosser, Shavitt, and
Kanfer, 1999). Brackett and Carr (2001) found that Internet advertising was perceived as a
valuable source of information but also as more irritating than advertising carried by
traditional media. Schlosser, Shavitt, and Kanfer (1999), by contrast, found that Internet
advertising was perceived as more informative and trustworthy than advertising carried by
traditional media.
Tsang, Ho, and Liang (2004) measured consumers’ attitudes toward mobile
advertising and revealed that consumers perceived receiving advertisers’ text messages as
disturbing. However, they also found that the entertainment, informativeness, irritation, and
credibility of messages directly influenced consumers’ positive attitudes when advertisers
delivered text messages with consumers’ permission. Similarly, Barwise and Strong (2002)
found that mobile advertising generating favorable impressions improved consumers’ brand
attitudes and increased brand awareness when consumers permitted its delivery. In other
words, with prior permission, most cellular phone users not only read incoming text
messages but also responded to them.
H2: There would be a correlation between consumers’ experiences with cellular phone
usages and attitudes toward mobile promotions.
Research has examined the similarities and differences between online and mobile
marketing. Tsang, Ho, and Liang (2004) note that “Internet and mobile advertising are
emerging media used to deliver digital texts, images, and voices with interactive,
immediate, personalized, and responsive capabilities” (p. 68). Although both channels
allow advertisers to personalize their messages and analyze behavior patterns, Internet
advertising can deliver an unlimited amount of data at little cost (Yoon and Kim, 2001),
whereas mobile advertising is better suited to campaigns that are time and location sensitive
(Tsang, Ho, and Liang, 2004).
H3: There would be a correlation between consumers’ experiences with Internet promotions
and attitudes toward mobile promotions.
RQ: Is there a relationship between exploratory information seeking behavior and
consumers’ attitudes toward mobile marketing?
III. METHODOLOGY
A survey was developed from items gathered from the literature, adapted where
necessary to the specific focus of this study. All respondents received an informed consent
form from the study’s principal investigators prior to participating. Once respondents agreed
to participate, they completed the questionnaire at their own pace. A total of 205 college
students were recruited from communication and psychology classes at a northeastern
university, and 184 responses were used; twenty-one were dropped because of missing data.
Research credits were given to students who participated in the study.
The survey administered to the respondents used published scales (Baumgartner and
Steenkamp, 1996; Tsang, Ho, and Liang, 2004). First, the respondents were asked several
questions regarding their experiences and attitudes toward their cellular phone usages. Next,
respondents’ perceptions about the Internet and specific functionalities of a cellular phone
were measured. The third section of the questionnaire measured respondents’ past experiences
with Internet promotions and cellular phone usages and their behavioral intents in accepting
certain promotional offers.
Seven dependent variables were measured. Respondents were asked, on a bipolar
7-point semantic differential scale, whether they would be interested in reading breaking
news on their cellular phones at no additional charge. The other six dependent variables
were respondents’ attitudes toward various wireless promotions: receiving text messages,
receiving multi-media messages, receiving coupons, participating in sweepstakes, receiving
product information, and receiving free downloads. Respondents’ gender was recorded as
the independent variable, and nine covariates were also used.
IV. RESULTS
This study used a single MANCOVA test with respondents’ interests in reading
breaking news on their cellular phones, attitudes toward receiving mobile text messages,
receiving mobile multi-media messages, receiving mobile coupon, participating in mobile
sweepstakes, receiving mobile product information, and receiving mobile free downloads as
the dependent variables. Gender was used as the fixed variable, and nine covariates were used
for data analysis. An advantage of using a single MANCOVA test was that the correlations
among the set of seven dependent variables were considered simultaneously.
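As an illustration of the multivariate logic behind such a test, the sketch below computes Wilks’ lambda for a simple two-group design with NumPy. This is a simplified one-way MANOVA without the study’s nine covariates, and the data, group sizes, and `wilks_lambda` helper are hypothetical, not the authors’ actual analysis:

```python
import numpy as np

def wilks_lambda(X, groups):
    """Wilks' lambda for a one-way multivariate design.

    X: (n, p) array of dependent variables; groups: (n,) group labels.
    Returns det(W) / det(W + B), where W and B are the within- and
    between-group sums-of-squares-and-cross-products (SSCP) matrices.
    """
    X = np.asarray(X, dtype=float)
    grand_mean = X.mean(axis=0)
    p = X.shape[1]
    W = np.zeros((p, p))
    B = np.zeros((p, p))
    for g in np.unique(groups):
        Xg = X[groups == g]
        mg = Xg.mean(axis=0)
        centered = Xg - mg
        W += centered.T @ centered                 # within-group SSCP
        d = (mg - grand_mean).reshape(-1, 1)
        B += len(Xg) * (d @ d.T)                   # between-group SSCP
    return np.linalg.det(W) / np.linalg.det(W + B)

# Hypothetical data: 184 respondents, 7 attitude scores, two gender groups
rng = np.random.default_rng(0)
groups = np.repeat([0, 1], [90, 94])
X = rng.normal(size=(184, 7))
X[groups == 1] += 0.5                              # shift one group's means
lam = wilks_lambda(X, groups)
# Lambda lies in (0, 1]; values well below 1 indicate group separation
```

Because the seven dependent variables enter one determinant ratio, their correlations are handled jointly, which is the advantage of the multivariate test noted above.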
Box’s test of equality of covariance matrices (p = .052) indicated that the observed
covariance matrices of the dependent variables were equal across groups. H1 was supported:
the multivariate tests showed a main effect for the independent variable, gender (Wilks’ λ =
.829), F(7, 167) = 4.916, p < .001 (Table I). It was concluded that the mean vectors for the
gender variable were not equal; overall, the sets of means for male and female respondents
differed. Gender differences with respect to the dependent variables were thus established,
and the individual univariate tests of between-subjects effects could further determine on
which variables male and female respondents differed. H2 and H3 were also supported,
since two covariates, frequency of receiving mobile text messages (Wilks’ λ = .861),
F(7, 167) = 3.85, p = .001, and respondents’ attitudes toward participating in sweepstakes
(Wilks’ λ = .681), F(7, 167) = 10.883, p < .001, were statistically significant in the
multivariate tests.
Finally, the study asked a research question about the relationship between
information seeking behavior and respondents’ attitudes toward mobile promotions. The
results revealed a main effect for information seeking behavior as a covariate (Wilks’ λ =
.904), F(7, 167) = 2.529, p = .017. Based on the individual univariate tests, the higher the
information seeking behavior respondents exhibited, the more willing they were to read
breaking news on their cellular phones, F(1, 183) = 6.303, p = .013. Respondents with
higher information seeking behavior also formed better attitudes toward receiving mobile
coupons, F(1, 183) = 7.816, p = .006, and receiving free downloads, F(1, 183) = 7.359,
p = .007.
V. CONCLUSION
This study posed hypotheses and a research question about how consumers’ gender,
attitudes toward Internet promotions, cellular phone usage, and information seeking
behavior affect their attitudes toward mobile promotions. The results suggest that male
consumers may be more interested than female consumers in reading breaking news and in
receiving product information on their cellular phones. However, the study also suggests
that gender is neither a reliable predictor nor an appropriate segmentation criterion for
consumers’ attitudes toward receiving text messages, multi-media messages, mobile
coupons, or free downloads, or toward participating in sweepstakes.
The more often consumers receive text messages on their cellular phones, the more
willing they are to read breaking news and receive text messages there, which suggests that
consumers may become accustomed to receiving text messages. Moreover, the better the
attitudes consumers hold toward participating in sweepstakes by giving out their personal
e-mail addresses on the Internet, the better the attitudes they may form toward receiving
messages, mobile coupons, product information, and free downloads and toward
participating in mobile sweepstakes. Consumers with a higher tendency to seek information
are more likely to read breaking news and to receive mobile coupons and free downloads
on their cellular phones.
Some limitations should be considered when interpreting these findings. One
concerns the geographical representation of the sample: the study used a college-based
sample and made no assumptions about network coverage in different countries, so the
results cannot be reliably extrapolated to other countries. A related issue concerns mobile
tariff plans, since carriers around the globe charge different fees for receiving text messages.
This study did not examine male and female consumers’ acceptable levels of mobile plans,
and irritation levels may differ between genders if male and female consumers hold
different expectations about plans and fees. These issues suggest that future research should
examine cultural differences among cellular phone users. As one reviewer suggested, future
research should also attempt to obtain market research statistics from carriers such as
Vodafone for international comparisons of cellular phone usage. Finally, this study did not
account for the differing levels of product involvement that respondents might have with
their cellular phones, a factor that might have moderated their attitudes toward mobile
promotions.
Consumers may develop a variety of brand associations that are subsequently paired based on
their perceptions of wireless promotions. In this case, an additional extension of this study lies
in the interactions between different types of pre-exposed brand association and different
types of post-exposed brand association among cellular phone brands. It is possible that
different types of brand association will materialize and mediate consumers’ attitudes toward
mobile promotions.
REFERENCES
E-BUSINESS BASED SME GROWTH:
VIRTUAL PARTNERSHIPS AND KNOWLEDGE EQUIVALENCY
ABSTRACT
I. INTRODUCTION
Most will agree that e-business and the Internet offer potential new ways of working,
both intra-organisationally and inter-organisationally, to deliver growth and development.
These new ways of working revolve around partnerships and virtual networks. To initiate
and facilitate these partnerships, SMEs have to address some major issues:
• Understanding and exploitation of e-business systems and technology requirements
• Knowledge equivalency: The need for comparative levels of capability in
key areas.
SMEs generate a substantial share of European GDP and they are a key source of new
jobs as well as a fertile breeding ground for entrepreneurship and new business ideas. There is
therefore cause for genuine concern about the consequences if SMEs were to miss the
opportunities offered by ICT and e-business to raise productivity and to foster innovation.
The UK’s Small Business Service (2001) reviewed the use of SME websites and found that:
• 34% of SMEs used a website to advertise products/services
• 34% used them for general publicity
• 14% used them for customer support and liaison
• 18% of them were actually trading online
• Only 4% considered themselves to be very successful in e-business
Thus, most SMEs are in the very early stages of e-business, with few fully utilising
their websites as an efficient marketing tool, and even fewer have integrated e-business.
Jeffcoate et al (2002) claim that most SMEs have been slow to adapt to the Web. UK SMEs
have almost the same total sales turnover as large companies with fully integrated e-business
systems, but fewer UK SMEs have adopted e-business than in the USA, and only 5% have
what may be described as a full, integrated e-business system.
The key benefits of e-business for SMEs appear to be new business and customers,
improved profitability, competitiveness, improved efficiency, and the ability to create
partnerships. Despite these advantages, uptake of e-business by UK SMEs has been
relatively poor, with lack of knowledge a key barrier. Any advantages gained derive from
using e-business as an extension of business strategy rather than from being technology
driven. In this research we wanted to explore this new way of working via e-based
partnerships. We have been working with some 200 SMEs based in Merseyside, UK, to
define how they can use e-business and e-technologies as the basis for collaboration in
SME partnerships, and in particular how they approach working in partnerships and
networks. Success seems to revolve around SME knowledge equivalency: the concept of
Core Competence Knowledge Equivalency (CCKE) is a key element of success in “virtual”
partnerships. Our development and use of a CCKE self-assessment tool is described via a
case study of five SME partnerships involving some 25 SMEs. Finally, we look at factors
other than e-capability that influenced the clusters.
The sheer number of critical success factors in any e-business transformation is itself
a limiting influence. SMEs must have an appropriate level of e-business competence to
develop their e-business systems and technology, and this is further complicated when they
try to enter SME partnerships: if partners do not have equivalent skills and knowledge,
problems occur, particularly with respect to the level of e-business uptake.
The significance of the computer-mediated data networks that enable such networked
organisations (Snow et al 1992) and co-ordination at arms' length was recognised by Fulk and
de Sanctis (1995). Network technology has evolved from ‘hard wired’ systems such as
Electronic Data Interchange (EDI) to the ubiquity of the Internet and its common denominator
protocol TCP/IP. This in turn has led to the ability to easily and cheaply dissolve and re-
establish virtual co-ordinating relationships and to the development of dynamic network
organisations (Benjamin and Wigand, 1995). The huge benefit of the dynamic network
organisation is the reduced asset specificity and the resultant increase in flexibility. This is
the essence of the “Virtual Enterprise” (VE). Whilst our work is not true VE, we use the VE
characteristics as its basis. Camarinha-Matos and Afsarmanesh (1999) describe a VE as a
"temporary alliance of enterprises that come together to share resources and skills or core
competencies in order to better respond to business opportunities, and whose cooperation is
supported by computer networks." They emphasise its transitory nature, as does Byrne
(1993).
We wished to develop a method that would allow us to compare the levels of a range
of CKFs within a group of SMEs participating in a potential e-based partnership. After
much debate, we turned to the Supply Chain Management (SCM) concept. SCM is
inextricably linked with the concept of core competence best practice. For supply chains to
function to the maximum benefit of all partners, there is much evidence (Kanter, 1994;
Macbeth and Ferguson, 1994; Roy, 1992) of the need for close relationships between
partners. This led to the need to establish realistic working standards and practices between
companies (Lamming, 1993).
Andersen (1999) describes an exercise to benchmark best practice in SCM (which
closely relates to our objectives) and also identifies key supplier characteristics. That work
happened to be in an area where SMEs predominate.
The details of the development of the assessment tool have been reported elsewhere
(Porter & Barclay, 2003). In the next section we explain how it was used within SME
partnerships.
As far as possible, the e-based clusters were left to organise themselves. During the
initial cluster organisation meetings, one of the research group took the role of facilitator to
encourage individuals to participate. A provisional agenda was drawn up to provide some
focus for the meetings. Essentially these first meetings were used to introduce individuals and
discuss areas of potential interest and activity.
The CCKE was run on all the companies both as individual SMEs and as part of a
cluster. Examples of the results from an individual cluster are given below in Table 1. This
shows the actual score of one of the SMEs as compared to the score needed to meet 50% of
the “idealised” supply chain benchmark standard.
This shows significant knowledge shortfalls in Logistics and the related IT Systems.
Thus, if this company wished to enter an e-based partnership where logistics (especially) was
important, there would be a prohibitive knowledge gap that had to be addressed.
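The gap check described here can be sketched as a simple comparison of each competence score against a fraction of the benchmark. The sketch below is a hypothetical illustration, not the CCKE tool itself: the `knowledge_gaps` helper, the competence scores, and the benchmark values are all assumptions made for the example.

```python
# Hedged sketch of a CCKE-style gap check: compare an SME's score in each
# core competence against a target (here, 50% of an "idealised" supply
# chain benchmark). All scores and benchmark values are hypothetical.

def knowledge_gaps(scores, benchmark, fraction=0.5):
    """Return {competence: shortfall} wherever the SME score falls below
    fraction * benchmark; an empty dict means no knowledge gaps."""
    gaps = {}
    for competence, ideal in benchmark.items():
        target = fraction * ideal
        score = scores.get(competence, 0)
        if score < target:
            gaps[competence] = target - score
    return gaps

# Hypothetical benchmark (each competence scored out of 100) and SME scores
benchmark = {"Logistics": 100, "IT Systems": 100, "Quality": 100}
sme = {"Logistics": 35, "IT Systems": 40, "Quality": 60}

gaps = knowledge_gaps(sme, benchmark)
# Logistics and IT Systems fall short of the 50-point target; Quality does not
```

A shortfall in Logistics or IT Systems flagged this way is exactly the kind of prohibitive gap discussed above for a company entering a logistics-dependent partnership.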
This analysis was run at the start of the cluster activity on each individual company
(within each cluster) for self-awareness and development. By identifying knowledge gaps, we
were able to draw up development programmes and methodologies to address and/or
compensate for shortfalls. We will not go into the detail of this, as this is not the purpose of
this article.
After 12 months, all the partnerships had succeeded in working together to some extent
as e-based partnerships and had attracted business (some more than others) that they would
not have gained as individual SMEs. As well as assessing the business growth generated by
their collaborative work, we also asked them how they felt their partnership was working.
Using these two factors, we assessed Cluster 3 as the best cluster and Cluster 1 as the worst.
Table 2 shows the initial assessment scores for both of these clusters, together with the
average score for all five clusters:
Core Competencies               Code   Average Score   Cluster 1   Cluster 3
Business Considerations           1         19             20          21
Financial                         2         43             32          37
Logistics                         3         35             41          50
Customer Service Management       4         30             27          33
Technical                         5         26             21          27
Quality                           6         33             31          35
Suppliers                         7         37             30          36
IT Systems                        8         33             28          40
Development                       9         14             12          16
Human Resources                  10         29             25          32
Total                                      299            267         327
Table 2. Results From Cluster Application
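To illustrate how such cluster profiles can be compared, the sketch below stores the Table 2 scores and finds the competence with the largest difference between the two clusters. The figures are transcribed from Table 2; the comparison code itself is a hypothetical illustration, not part of the CCKE tool.

```python
# Scores transcribed from Table 2: (average, Cluster 1, Cluster 3)
table2 = {
    "Business Considerations": (19, 20, 21),
    "Financial": (43, 32, 37),
    "Logistics": (35, 41, 50),
    "Customer Service Management": (30, 27, 33),
    "Technical": (26, 21, 27),
    "Quality": (33, 31, 35),
    "Suppliers": (37, 30, 36),
    "IT Systems": (33, 28, 40),
    "Development": (14, 12, 16),
    "Human Resources": (29, 25, 32),
}

# Column totals reproduce the Total row of Table 2
totals = [sum(col) for col in zip(*table2.values())]

# Competence where Cluster 3 leads Cluster 1 by the widest margin
biggest = max(table2, key=lambda k: table2[k][2] - table2[k][1])
# In these figures the widest Cluster 3 lead is in IT Systems (40 vs 28)
```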
Cluster 1 had the lowest core competence score, and this is reflected in problems
with IT Systems and Logistics use (the biggest single differences between the two clusters).
It is clear that the best performing cluster, Cluster 3, has a much higher average core
competencies score with fewer knowledge gaps. Of interest is the fact that the score for
Business Considerations is almost the same for all clusters. Additionally, Cluster 1 scored
better than any other cluster on Financial Management.
Most importantly, these were all supposedly e-based clusters, and the scores for IT
Systems and Logistics were critical and reflect the success of Cluster 3. All the SMEs in
Cluster 3 had in-house ICT expertise.
Cluster 3 was the most successful in commercial terms as it bid for and got a £1.3M
contract that none of them as individual SMEs would have been able to deliver.
Cluster 1 was the only cluster that did not hold regular meetings of all participating
staff.
The initial learning from this limited data and time frame is:
• The results from the CCKE tool correlate well with the operational evidence
• Face-to-face cooperation (not by e-mail) is important
• Having an expert in the key area of the cluster is beneficial
• The mentoring/facilitating roles are important.
Knowledge (and expertise) is clearly critical to the success of any enterprise, and a
company needs to be aware of the amount and depth of knowledge at its disposal. The
CCKE does appear sensitive enough to identify the operational knowledge gaps in the
clusters. However, it is too early to call it a definitive measure, as personal interaction is
obviously an important element. We found it immensely difficult for any cluster to operate
without external intervention that the companies trusted (i.e., the university staff). Once a
basic trust had been established, however, the SMEs were much more likely to collaborate
on projects. What can reasonably be asserted is that they were much more likely to enter a
“partnership” in which they saw that they had a share of the power.
VI. CONCLUSIONS
The CCKE Assessment Tool was most useful in determining equivalency of capability
for assembling the clusters. It was also useful in allowing the companies to assess their
expertise and deficiencies. All the companies saw the potential advantage of working in
collaboration. However, despite the proven and apparent advantages, we met with significant
resistance to this whole process. The reasons given for this were:
• Too much time, effort and energy would be expended to meet any minimum standards
requirement: “This could be better used developing the business by traditional routes.”
• Entering into such a process “removes the control of our destiny from us”.
• “We don’t trust them”, e.g. the lead bidder may take future business.
• The head of any supply chain we may access is too powerful.
Thus it was the issues of trust and power that were the major problems.
While the work we did was not truly virtual enterprise based, it was e-business based
and holds lessons for virtual enterprise activity. From our experience with this project, the
development of SME partnerships turns on some major issues:
• It is unlikely to happen unless a significant business opportunity arises.
• If the opportunity arises outside of existing working relationships, then there are
the problems of power and especially, trust.
• An external catalyst seems to be essential in creating the partnership.
• Higher-level technical capability seemed to promote good working relationships.
• Formation of partnerships tends to be driven by community, enterprise or
technology factors.
This last observation is in broad agreement with the findings of Lockett & Brown (2003).
Getting SMEs to collaborate outside of a major contract allows them to build mutual
trust to the benefit of all, but achieving this is an immensely difficult task. Whilst some will
become involved in a formal partnership such as a supply chain, most will not, because they
have no power in such a system unless they have “expertise leverage” (which most do not).
The extreme alternative, “everybody doing their own thing”, means reliance on organic
growth and rules out the high growth potential of partnerships.
REFERENCES
Kanter R M "Collaborative advantage: the art of alliances", Harvard Business Review, July-
August, 1994, pp 96-108.
Lamming, R C, Beyond Partnership: Strategies for Innovation and Lean Supply, Prentice-
Hall, Hemel Hempstead, 1993.
Lockett N J, & Brown D H, “Innovations affecting SMEs and E-business with reference to
Strategic Networks, Aggregations & Intermediaries” Lancaster University
Management School Working Paper, 2003/020
Macbeth D K & Ferguson N., Partnership sourcing: An integrated supply chain Approach,
Pitman, Financial Times, London, 1994
Macpherson A & Wilson A, “Enhancing SMEs’ capability: opportunities in supply chain
relationships”, Journal of Small Business and Enterprise Development, Vol 10, No 2,
2003, pp 167-179.
Moore J F, The death of competition : Leadership and Strategy in the Age of Business
Ecosystems, Harper Business, London, 1996
Roy R & Whelan R, "Successful recycling through value chain collaboration", Long Range
Planning, Vol 25, No 4, 1992, pp 75-87.
Small Business Service, “Small and medium-sized enterprise (SME) Statistics for the UK,
2001,” http://www.sbs.gov.uk/default.php?page=/press/news90.php
Snow C C. Miles R E &. Coleman H J, “Managing 21st century network organizations”,
Organizational Dynamics, Vol.20, No.3, 1992, pp. 5-20
Acknowledgement
The authors would like to thank Professor I Barclay, Director, Merseyside SME Development
Centre for his input in preparation of this paper.
CHAPTER 7
ECONOMICS
IMPORTANT CHANGES IN THE U.S. FINANCIAL SYSTEM
ABSTRACT
Despite a number of sharp changes in its economic environment during this new
century, the United States remains a well-positioned and growing economy. Among the
world’s major countries, the United States continues to grow rapidly, even though it has
faced a decline in its stock market prices of more than $7 trillion (over a 3-year period) and
a sharp move in its balance of payments that has left it in a large negative trade position.
Since the beginning of 2000, the balance of payments deficit has risen from roughly $380
billion to $624 billion in 2004. The domestic economy, despite an early recession, has been
strengthened by a sharp increase in real estate assets of well over $3 trillion. A key issue for
the country is how these important changes will affect the economy in future years.
I. INTRODUCTION
During the past five years, large and important changes have taken place in the
United States, in both the real and financial sectors of the economy. Over the period, the
U.S. government has moved from a surplus of almost $300 billion in 2000 to a deficit
exceeding $360 billion in 2004. The balance of payments deficit has also risen sharply: in
2000, imports exceeded exports by $379.5 billion; in 2004, by $624 billion. In the first half
of 2005, imports were running at a rate suggesting they could exceed exports by $700
billion for the year (Flow of Funds Accounts of the United States).
Households and nonprofit organizations began 2000 with more than $17.2 trillion in
equity shares and ended 2002 with slightly under $10 trillion. Since then, equity holdings
have risen to nearly $14.4 trillion at the end of 2004. These swings reflect important
changes in the U.S. economy during the past five years and indicate important issues ahead
for the U.S. government. Despite these large movements, the U.S. economy suffered only a
minor recession; indeed, so minor that it is questionable whether there really was a
recession. Even today, it is questionable whether the large increases in oil and gas prices
will push the economy into recession; the majority view is that they will slow the growth of
GDP, but that growth will remain positive during the period.
Earlier studies have shown that while changes in GDP are important for understanding
what is happening in the economy, other factors, independent of GDP, can also be
important, both for understanding present economic activity and for understanding future
activity. For example, in the United States in 1990 the consumer price index rose 6.1%. The
Federal Reserve responded by tightening monetary policy and sent the country into a
recession. Interestingly, during the same year, the prices of a number of existing assets,
including land and commercial and residential real estate, were falling and were expected
to (and did) decline further. Commercial real estate prices in the North-East, for example,
fell 7% in 1990 and 18% in 1991, and in the Pacific region they declined 2% in 1990 and
11% in 1991 (Bank for International Settlements).
Using a measure that combined these movements in asset prices (appropriately
weighted) with expected flows in the economy might well have led policymakers to
approach their decision to tighten with more caution. The economy might have experienced
a slower rate of growth, but it might also have avoided the recession that occurred. A key
lesson for the future is that while changes in GDP are important, it is also important to
assess changes in other macroeconomic variables, such as the prices of real estate and of
stocks outstanding. Fortunately, the changes in these variables had a relatively less serious
effect on the U.S. economy, largely because of compensating changes in other variables.
The effect of the huge drop in stock market prices between 2000 and 2002, roughly $7.3
trillion, was limited by several important offsetting factors. First, the value of other
household assets increased by $6.3 trillion, of which real estate assets accounted for roughly
$3.3 trillion (Goon, Massaro). In addition, the federal budget position changed from a
surplus of 1.4 percent of GDP in 2000 to a deficit of 4.6 percent of GDP in 2003. Also
helping the economy, the federal funds rate declined from 6.5 percent at the beginning of
2001 to 1 percent in 2003 (The Economist). These developments help to explain why the
decline in GDP was not more serious than the huge decline in the stock market would have
suggested. The United States was, in retrospect, lucky that the sharp decline in over-valued
stock market prices was largely offset, in part by rising house prices.
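The offsetting magnitudes above can be checked with simple arithmetic; the sketch below nets the stock market loss against the gain in other household assets, using the figures cited in the text (in trillions of dollars):

```python
# Net change in household asset values, 2000-2002, in trillions of dollars
# (figures taken from the text above)
stock_market_change = -7.3      # decline in equity values
other_asset_gain = 6.3          # rise in other household assets
real_estate_share = 3.3         # portion of that gain from real estate

net_change = stock_market_change + other_asset_gain
real_estate_fraction = real_estate_share / other_asset_gain
# The $6.3 trillion gain offsets most of the $7.3 trillion equity loss,
# leaving a net decline of roughly $1 trillion; real estate supplied a
# little over half of the offset.
```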
The overvalued stock market in Japan fell even more sharply in 1989, and real estate
prices also declined sharply. After a very difficult fifteen years, the Japanese stock market
has now risen to slightly over one-third of its 1989 value (it had been well below one-third
of that value before its recent increase). On October 13, 2005, the Tokyo Nikkei stock
average was 13,449.24, a net increase of 1,960.48 during the year (The Wall Street Journal,
October 14, 2005). Interestingly, prior to the collapse that began in 1990, the Japanese
economy was a favorable model for the rest of the world, with apparently good growth and
little inflation throughout the 1980s. The tip-off of coming changes was that the prices of
stocks and homes tripled between 1985 and 1990. The obvious question, in retrospect, is
whether policymakers should have responded earlier to these changes, even though
traditional measures of inflation and growth remained well positioned for future economic
activity. Japan’s excellent economic growth in the 1980s was replaced by a 15-year period
of relatively slow growth.
IV. CONCLUSIONS
There have been substantial changes in important sectors of the United States
economy during the past six years. Equity prices have changed substantially, both up and
down. The balance of payments deficit of the country has risen sharply. Despite many large
changes in flow and asset prices, the rates of growth of the United States economy have not
been seriously affected. A key lesson from the behavior of the economy is that while changes
in the country’s GDP are important, it is also important to examine changes in other key
variables such as changes in real estate and stock market prices. It is also important to
examine policy changes that will affect the future growth of the economy, such as sharp
changes in tax policy and in government surpluses and deficits.
Future economic activity in the United States will be heavily influenced by whether
the large rise in real estate prices will continue or be met by a correction. Movements in stock
market prices and the prices of other financial assets will also be important.
REFERENCES
A. BOOKS:
Bank for International Settlements, 63rd Annual Report, Basle, 14 June 1993.
Federal Reserve Statistical Release, Flow of Funds Accounts of the United States,
Washington, D.C., September 21, 2005.
International Academy of Business Disciplines, Business Research Yearbook, Vol. VII, 2000,
        Massaro, Vincent G., "What We've Learned from Derivatives."
International Academy of Business Disciplines, Business Research Yearbook, Vol. XI, 2004,
        Goon, Robert, and Vincent G. Massaro, "The Stock Market Bubble and Financial
        Challenges Ahead."
International Academy of Business Disciplines, Business Research Yearbook, Vol. XII, 2005,
        Goon, Robert, and Vincent G. Massaro, "A Financial Framework for Measuring
        Inflation."
B. JOURNAL ARTICLES:
The Economist, “Breaking the Deflationary Spell,” June 2003.
Federal Reserve Bank of New York Quarterly Review, Autumn 1993,
McDonough, William J., “The Global Derivatives Market.”
Federal Reserve Bank of Kansas City, Economic Review, Third Quarter 2000,
Filardo, Andrew J. “Monetary Policy and Asset Prices.”
The Wall Street Journal, February 21, 1995, pp. C1, 14, McGee, Suzanne, "Got a
        Bundle to Invest Fast? Think Stock-Index Futures."
The Wall Street Journal, Friday, October 14, 2005, Section C, p. 10.
ACCOUNTING FOR SUCCESS IN SPORTS FRANCHISING
ABSTRACT
I. INTRODUCTION
Franchising in the United States can be generally defined as a method of doing
business in which the franchisor grants the use of its brand name, and sometimes operational
protocols, to a franchisee in return for fees and royalties. Recent years have brought about a plethora of
research articles and books about franchising (Preble and Hoffman, 1995; Swartz, 2000; Alon,
2005). A number of observations can be made from this research:
(1) Franchising accounts for 10% of the United States’ private sector
(2) Franchising is becoming a global phenomenon
(3) Franchising is growing in many sectors of the economy (as many as 70 industries
are engaged in franchising)
(4) Franchising has a significant or dominant presence in selected service industries
(5) Franchising is a successful model for doing business
While we know many of the external/environmental factors of "success" in franchising -- such
as regulations, economic conditions, consumer demand, intellectual property protection, etc. --
little research has been done documenting organizational correlates of franchising success,
and even less has been done to empirically assess sports franchising success. An article by
Alon (2004) examined the key success factors of franchising in the retail sector and concluded
that organizational factors of the franchise system (age, royalties, fees, years to franchising,
proportion of franchising, and internationalization) can help explain the relative success rates
of companies. Other franchising researchers focused on failure rates (Bates, 1995;
Castrogiovanni et al., 1993).
It is our intention in this paper to add to the body of evidence that exists on franchising
success/failure, in general, and to help explain selected sports franchising success metrics –
franchising value, total revenue, and operating income, using a number of available
organizational factors, including the percentage of media income, player costs, non-player
operating costs, and league type.
II. METHODOLOGY
Study Sample
The sample under study in this investigation consisted of the 113 “Major League”
professional sports franchises in North America, in the sports of American Football,
Basketball, Baseball and Ice Hockey, governed by the NFL, NBA, MLB and NHL
respectively. The data was taken in the 2000 – 2001 period, and there were 26, 29, 28 and 30
teams in the four respective leagues at that time. The data consisted of a number of financial
measures for the franchises, including operating income, total revenue, franchise value, player
costs, total operating expenses and media revenues.
Analysis
In an effort to understand the operational factors that contribute to franchise success,
three major financial measures, total revenue, operating income and franchise value, were
individually modeled, using Ordinary Least Squares (OLS) Regression Analysis, as functions
of the other measures listed above. Media revenue was taken as a proportion of total revenue
and introduced as an explanatory variable. Additionally, two components of operating
expenses, player cost and non-player costs, were taken separately as explanatory variables.
Lastly, three dummy variables were used to indicate whether or not the sport played by the
franchise was football, baseball or basketball. If significant, coefficients for these dummy
variables would indicate the advantage or disadvantage of fielding teams in any of these three
sports with respect to the excluded sport of ice hockey.
Table I shows descriptive statistics for the financial variables discussed above. As can
be seen, these are all positively skewed, most quite significantly. To mitigate this positive
skew in the data, and to produce regression coefficients that could be interpreted as percentage
change in the response variable for each percentage change in the explanatory variables, Log-
Log regression was performed by taking the base-10 logarithm of each of the input and output
(non-dummy) variables described above. The variable Operating Income was offset by a value
of +15 in order to make all of its values positive prior to taking the base-10 logarithm.
Coefficients for the dummy variables for each sport (retained in original form) could be
interpreted as the percentage change in the response variable attributable to fielding a team in
the associated sport.
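As an illustrative sketch only (the study used SPSS, and the figures below are invented rather than drawn from the sample), the log transformation with the +15 offset and the league dummy coding described above might look like this in Python:

```python
import numpy as np

def prepare_log_log(values, offset=0.0):
    """Base-10 log transform used in the Log-Log regressions.

    `offset` shifts the series so every value is positive before the
    logarithm (the paper adds +15 to Operating Income).
    """
    shifted = np.asarray(values, dtype=float) + offset
    if np.any(shifted <= 0):
        raise ValueError("all values must be positive after the offset")
    return np.log10(shifted)

# Invented figures, in millions of dollars; one loss-making franchise.
operating_income = [-12.0, 3.5, 20.0]
log_op_inc = prepare_log_log(operating_income, offset=15.0)

# Dummy coding for league, with Ice Hockey as the omitted category.
leagues = ["NFL", "NHL", "NBA", "MLB"]
football = [1 if lg == "NFL" else 0 for lg in leagues]
basketball = [1 if lg == "NBA" else 0 for lg in leagues]
baseball = [1 if lg == "MLB" else 0 for lg in leagues]
```

Each transformed response would then be regressed on the transformed explanatory variables plus the three dummies, with the dummies retained in their original 0/1 form.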
that a number of the correlations are significant, and a few (specifically some of the
correlations to the Football dummy variable) are large. In fact, in each of our regression
models the coefficient for the variable Football had a VIF (variance inflation factor) value that
was greater than 10, a sign of potential multicollinearity issues. This could negatively impact
our ability to make accurate predictions from regression models based on these variables.
However, if we alter the choice of omitted dummy variable (from Ice Hockey to Basketball
for instance) the VIF values all fall within acceptable limits, and the remaining coefficients of
our models remain virtually unchanged. The existence of equivalent models, with changes
only in dummy variables, suggests that any potential multicollinearity issues may not have
significant impact on the models.
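The VIF diagnostic described above can be computed directly from the design matrix. The sketch below is our own illustration with synthetic data (not the authors' SPSS output): `vif` regresses one column on the remaining columns and applies VIF = 1/(1 - R²):

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor for column j of design matrix X:
    regress column j on the remaining columns (plus an intercept)
    and return 1 / (1 - R^2)."""
    X = np.asarray(X, dtype=float)
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

# Synthetic example: x2 is nearly collinear with x1, x3 is independent.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.05 * rng.normal(size=200)
x3 = rng.normal(size=200)
X = np.column_stack([x1, x2, x3])
```

Here `vif(X, 0)` and `vif(X, 1)` far exceed the usual threshold of 10, while `vif(X, 2)` stays near 1 -- mirroring how one problematic dummy variable can flag multicollinearity while the remaining regressors stay within acceptable limits.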
Three unique regression models were created using SPSS, one each for the base-10 log
variables for Franchise Value, Total Revenue and Operating Income. Two additional models
separately investigated the relationships for the cases where Operating Income was positive
and negative. The general form of all the models (with the response Y standing for Franchise
Value, Total Revenue, or offset Operating Income) is given by:

Log(Y) = b0 + b1 LogMdPct + b2 LogPlyrCst + b3 LogNonPlrOpEx + b4 Basketball + b5 Baseball + b6 Football + e,

where LogMdPct is the log of media revenue as a proportion of total revenue, LogPlyrCst and
LogNonPlrOpEx are the logs of the player and non-player components of operating expenses,
and Basketball, Baseball, and Football are the league dummy variables.
TABLE III. OLS REGRESSION COEFFICIENTS

                           Response Variable in the Model
Model Coefficients   LogFrVal   LogTotRev   LogOpInc   LogOpIncPos  LogOpIncNeg
Constant              0.684      0.281       0.711     -0.866        0.490
LogMdPct             -0.139*    -0.138**    -0.364*
                    (-2.481)   (-2.719)    (-2.026)
LogPlyrCst            0.366**    0.438**    -0.545**
                     (6.728)    (8.870)    (-3.121)
LogNonPlrOpEx         0.760**    0.742**     1.359**    1.366*      -2.776*
                    (12.659)   (13.618)     (7.048)    (2.346)     (-2.560)
Basketball            0.207**    0.113**     0.300**    0.543*
                     (7.250)    (4.369)     (3.276)    (2.145)
Baseball              0.058      0.063*
                     (1.793)    (2.142)
Football              0.255**    0.136**     0.385**
                     (5.883)    (3.460)     (2.759)

Notes: Coefficients with p-values > 10% not shown. T-values shown in parentheses.
* Coefficient significant below 5% level.
** Coefficient significant below 1% level.
Non-Player Operating Expenses, more than double the effect that was observed in the overall
case for this independent variable. No other financial or dummy variables were significant for
this response variable however. Note that there were only 26 non-profitable franchises among
our data, a smaller sample size than we would ideally be comfortable with.
IV. CONCLUSION
An examination of each of our variables across the multiple success measures yields
interesting results for some of our models. The percentage of media revenue in total revenue
had, in all cases, a negative impact on our success measures of value, revenue, and income.
The largest negative impact is on operating income. There appears to be a negative financial
return to overemphasizing media as a revenue stream. In other words, the more the owners of
a franchise seek to make media revenue grow as a percentage of total revenue, the more likely
their total revenues, valuation, and operating income will suffer.
Player costs help valuation and revenues but not operating income, according to our
models. This means that franchise owners who concentrate on bringing in premium-priced
players can benefit their valuation and total revenue, but may suffer a loss in operating
income. Premiums on celebrity players have skyrocketed, leading to criticism from the
general public and franchise owners alike. However, our data show that despite increases in
premiums, franchise values and revenues strongly depend on these expensive players. Player
costs have the largest negative coefficient in the operating income model.
Non-player operating expenses had a positive effect in all but one of our models.
Generally speaking, spending on various non-player related items to enhance the image of the
franchise pays off in terms of valuation, revenue, and most of all operating income. This is
true particularly when the franchise is already operating at a loss. Then, non-player operating
expenses have a negative impact on the magnitude of the operating loss, equivalent to an
increase in operating income.
In terms of league, we used dummy variables with a comparison league of Ice Hockey.
In general, Basketball and Football show higher valuations, total revenues, and operating
income. Valuation is the most impacted by league. Baseball, too, had higher total revenues
compared to Ice Hockey, but insignificant difference in terms of valuation.
In summary, our various models were able to explain a large amount of the variation
in our dependent success variables. We explained over 90% of the variation in the franchise
value and total revenue using six organizational variables. Our model for operating income
explained somewhat less of the variability in that response variable, but still exhibited an R-
squared value around 45%. Our results show that even in a field as narrow as sports
franchising, on the one hand, wide variations exist in the relative impact of organizational
factors within sub-sectors of the industry – i.e., the different leagues – but, on the other hand,
a common structural framework can be developed to explain much of the variation of success
across the different leagues.
REFERENCES
Alon, Ilan. Service Franchising: A Global Perspective, New York: Springer, 2005.
Alon, Ilan. “Key Success Factors in the Franchising Sector in the Retailing Sector,”
Proceedings of the Southwest Academy of Management, (45th Annual Meeting)
Orlando, Florida (March 3-6), 2004.
Bates, Timothy. "Analysis of Survival Rates Among Franchise and Independent Small
        Business Startups." Journal of Small Business Management, 33, (2), 1995, 26-36.
Castrogiovanni, Gary J., Justis, Robert T., and Julian, Scott D. "Franchise Failure Rates: An
        Assessment of Magnitude and Influencing Factor." Journal of Small Business
        Management, 16, 1993, 105-114.
Combs, James G., and Gary J. Castrogiovanni. "Franchisor Strategy: A Proposed Model and
        Empirical Test of Franchise Versus Company Ownership." Journal of Small Business
        Management, 32, (2), 1994, 37-48.
Falbe, Cecilia M., Dandridge, Thomas C., and Kumar, Ajith. "The Effect of Organizational
        Context on Entrepreneurial Strategies in Franchising." Journal of Business Venturing,
        14, 1998, 125-140.
Falbe, Cecilia M. and Welsh, Dianne H. B. "NAFTA and Franchising: A Comparison of
        Franchisor Perceptions of Characteristics Associated with Franchisee Success and
        Failure in Canada, Mexico, and the United States." Journal of Business Venturing, 13,
        1998, 151-171.
Preble, John F. and Hoffman, Richard C. "Franchising Systems Around the Globe: A Status
        Report." Journal of Small Business Management, 33, (2), 1995, 80-88.
Sen, Kabir C. "The Use of Franchising as a Growth Strategy by US Restaurant Franchisors."
        Journal of Consumer Marketing, 15, (4), 1998, 397-407.
Swartz, Leonard N. "Franchising Successfully Circles the Globe." Franchising World, 32,
        (5), 2000, 36-37.
DO CHINESE INVESTORS APPRECIATE
MARKET POWER OR COMPETITIVE CAPACITY?
ABSTRACT
This paper examines the market-based and resource-based views of the value-generating
process of Chinese public companies, a sample of firms operating in a transition economy
with controlled institutional differences. Our study, by incorporating institutional influences
into the value-generating process, finds that the traditional resource-based variables seem to
play a more important role in determining firm value than the market-based view, regardless
of the government stake in the firm. The firm's value is mainly determined by the size of the
firm, management ownership that is not traded publicly, and the share held by the largest
shareholders. The results also indicate that China's stock market is still at an immature stage:
the market valuation of the firm is mainly determined by the firm's internal resources.
I. INTRODUCTION
The market-based view (MBV) and the resource-based view (RBV) of the firm are two
competing theories of firms' value-generating strategies. The market-based view argues that
firms are able to generate higher value through competitive market advantages such as
monopoly, barriers to entry, and bargaining power (Grant, 1991). The resource-based view
argues that firms can increase value through tangible and intangible assets that facilitate
strategies enhancing efficiency and effectiveness (Barney, 1991).
Empirical studies on industrialized countries usually find that both a firm's market
position and its competitive capacity are important in affecting firm performance. Because
these two sets of factors are entangled with each other, it is hard to distinguish which strategy
plays the more important role (Powell, 1996; McGahan & Porter, 1997). Some empirical
studies on emerging economies, in contrast, find evidence supporting the resource-based view
of the firm (Makhija, 2003), suggesting that a firm's unsubstitutable resources are the major
determinants of firm value in a state of flux.
empirical evidence on the role of institutions in transition economies is limited (Hoskisson et
al., 2000; Djankov & Murrell, 2002). Because institutional factors have many dimensions and
each can change differently during economic transition, it is hard to capture them through
appropriate measurement in time-series analysis. Because of the complexity of the
institutional environment in transition economies, different strategies can be expected of
enterprises. Previous studies of strategies in emerging markets usually carried out cross-
sectional analyses, in which the institutional framework is considered the same for all
enterprises, thus ignoring institutional influences on enterprise strategies.
Publicly traded companies in China provide an ideal case for a study incorporating
institutional impact into enterprise strategies. In such a large-scale transition economy, the
government exercises different levels of control over different industries.
The paper follows Makhija's (2003) effort to distinguish the MBV and RBV of the
firm, applied there to the economic transition of the Czech Republic. Our data contain most of
the companies publicly traded on the Shanghai Stock Exchange and the Shenzhen Stock
Exchange in China during 2001 to 2003. We selected industries with at least 18 companies
listed on the exchanges, since a small number of companies within an industry may create
bias in terms of market share and the number of firms. The final data include 969 companies
that have been publicly traded since 2001. We calculated the average percentage of
government stakes for each industry, divided the sample into three groups, and selected two
of the groups for comparison. The first group, with less government control, includes 316
companies, and the second group, with more government control, includes 312 companies.
MBV predicts that Firm Size, Profitability, and Market Share are positively related to
firm value, while Variability of profitability, Leverage, and No. of Rivals are negatively
related to firm value. On the other hand, RBV predicts that Variability of profitability,
Leverage, Management Ownership, Foreign Ownership, Government Ownership, Share held
by the largest shareholder, and Management Efficiency are positively related to firm value,
while Firm Size is negatively related to firm value.
Since we attempt to examine the impact of resource-based variables and market based
variables on the market valuation of the firms, we use Tobin’s q as the measure of the listed
companies’ value. We used the data from 2003 to calculate all related variables. Tobin’s q in
our study is calculated by the following formula:
Tobin’s q = (ASP × NS + DEBT)/ ASSETS,
where ASP is the average stock price for 2003, NS is the number of shares issued, DEBT is the
total debt for 2003, and ASSETS is the total assets for 2003.
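The formula translates directly into code. A minimal sketch with invented figures (not a firm from the sample):

```python
def tobins_q(avg_stock_price, num_shares, total_debt, total_assets):
    """Tobin's q as defined above: q = (ASP * NS + DEBT) / ASSETS."""
    return (avg_stock_price * num_shares + total_debt) / total_assets

# Hypothetical firm: average price 8.0, 100m shares, 400m debt, 1,000m assets.
q = tobins_q(8.0, 100e6, 400e6, 1000e6)
```

Here q = (800m + 400m) / 1,000m = 1.2; a value above 1 means the market values the firm above the book value of its assets.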
Table I shows the results for the full model using both MBV and RBV variables. In
both models, the coefficient of ROA is positive, as predicted by MBV, but it is not significant.
This result is similar to the results found in Makhija's study (2003). Market share is
supposed to be positively related to the firm's value. However, for both groups, market share
is negatively related to the firm's value. Two reasons probably contribute to this result. First,
we calculated market share based on publicly traded companies only; for some industries
there may still be many non-public companies, weakening the explanatory power of the test.
Second, companies with bigger market shares are those privatized from state-owned
enterprises (SOEs), and they may not work as efficiently as new companies started from
scratch.
Three variables are common to both the market-based and resource-based views.
Although the market-based view predicts a positive sign for firm size, the coefficient on firm
size is negatively related to the firm's value at the 1% significance level. Although the
market-based view predicts that firm leverage is negatively related to the firm's value, the
coefficient for firm leverage is positive, though statistically insignificant. And while the
variance of profitability is expected to be negative under the market-based view, its
coefficient is negative but not significant for both groups.
Table I. Testing the Full Model Using Both MBV and RBV Variables
In comparison, the resource-based variables predict the firm's value better. We found
that firm size, firm leverage, management ownership, and management efficiency carry the
predicted signs, and that firm size, management ownership, and the share held by the largest
shareholder are significant predictors. However, government ownership and the share held by
the largest shareholder have signs opposite to those predicted by the resource-based view. The
sign of the firm-size coefficient, which is consistent with the resource-based view, indicates
that larger firms, with larger bureaucracies, are less responsive. In China, public firms may
suffer from agency problems; newly started, smaller firms may be more efficient, and their
value may be higher. The signs on foreign ownership differ between the two groups. For
companies in which the government has small stakes, the sign is opposite to that expected by
the resource-based view. As the resource-based view predicts, foreign ownership can bring
entrepreneurial skills and new knowledge, so the firm's value should be positively related to
foreign ownership. However, the Chinese stock markets are still restricted to foreign
investors: only 2.9% of the companies have any foreign investors, and foreign investors hold
only a very small portion of ownership. This may explain the non-significant effect of foreign
ownership on the firm's value. Government ownership has a negative impact on the firm's
value for both groups. These results contradict the expectation from the resource-based view
but are consistent with the traditional view that government has a negative effect on a firm's
efficiency and effectiveness.
Table II contains the results for the MBV model only. Each R2 is less than the
corresponding R2 for the full model. The signs for the two groups are not exactly the same as
estimated in the full model. For group 1, there are four variables with signs opposite to what
MBV predicts. For group 2, there are three variables with signs opposite to what MBV
predicts: firm size, firm leverage, and number of rivals. It seems that
MBV can predict the firm’s value better for companies in which the government has a big
stake. Table III contains the estimations from the RBV model only for the two groups. The
coefficients estimated from RBV models are considerably higher than coefficients estimated
from MBV models for both groups. For both groups, the signs of coefficients estimated by the
RBV are consistent with those estimated by the full models.
Table III. Testing for RBV Variables Alone
We also conducted F tests to examine the contributions of the MBV and RBV
variables to the full model for both groups. To test the MBV variables, the full model was
first estimated, and then we tested the constraint that the coefficients of the variables relating
to profitability (ROA), variance of profitability (VAR), firm size (SIZE), firm leverage
(LEV), market share (MKTSH), and number of rivals in the industry (NRIVAL) are all zero.
This hypothesis is rejected for both groups, with an estimated F(6, 304) = 3.6 and a p-value =
0.0018 for group 1, and F(6, 300) = 6.56 and a p-value < .001 for group 2. Since SIZE, VAR,
and LEV are also RBV variables, the test was repeated with only the coefficients of ROA,
MKTSH, and NRIVAL hypothesized to be zero. This hypothesis cannot be rejected for either
group, with an estimated F(3, 304) = 0.45 and p-value = 0.98 for group 1, and F(3, 300) =
0.26 and p-value = .85 for group 2. Thus, inclusion of ROA, MKTSH, and NRIVAL does not
contribute to the full model, while inclusion of SIZE, VAR, and LEV does. However, we
cannot determine whether the contribution of these three variables should be attributed to the
MBV or the RBV. The F
tests were conducted for the RBV variables in a similar way. When all variables from RBV
are constrained to be zero, the hypothesis is rejected for both groups with F = 3.55, and p-
value <.001 for group 1 and F = 6.6, and p-value <.001 for group 2. When only MOW, FOR,
GOV, LOWN, and MGNEFF are constrained to be zero, the hypothesis is rejected again for
both groups with F = 3.28 and p-value = .0067 for group 1 and F = 8.79 and p-value <.001 for
group 2. Thus, the RBV variables make significant contributions to the full model whether
one includes variables unique to RBV or those common to both theories.
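The restriction tests above follow the standard partial F statistic, built from the residual sums of squares of the restricted and full models. A hedged sketch (the SSR figures below are invented for illustration and are not the study's values):

```python
def partial_f_test(ssr_restricted, ssr_full, n_restrictions, df_full):
    """Partial F statistic for H0: a subset of coefficients is jointly zero.

    F = ((SSR_r - SSR_f) / q) / (SSR_f / df_f), where q is the number of
    restrictions and df_f is the residual degrees of freedom of the full model.
    """
    return ((ssr_restricted - ssr_full) / n_restrictions) / (ssr_full / df_full)

# Invented residual sums of squares, matched to the F(6, 304) layout above.
f_stat = partial_f_test(110.0, 100.0, 6, 304)
```

The statistic would then be compared against the F(q, df_f) distribution to obtain p-values of the kind reported above.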
IV. CONCLUSION
In this study, we examined the determinants of firm valuation from both the market-
based view and the resource-based view. Since China's stock market is still new and its
economy is in transition, the traditional resource-based variables seem to play a more
important role in determining firm valuation than the market-based view, regardless of the
government stake in the firm. The firm's value is mainly determined by the size of the firm,
management ownership that is not traded publicly, and the share held by the largest
shareholders. Other variables are not significantly related to the firm's valuation. The results
from this study indicate that China's stock market is still at an immature stage: the market
valuation of the firm is mainly determined by the firm's internal resources.
REFERENCES
Barney, J., "Firm Resources and Sustained Competitive Advantage." Journal of Management,
17, (1), 1991, 99-120.
Djankov, S., and Murrell, P., “Enterprise Restructuring in Transition: A Quantitative Survey.”
Journal of Economic Literature, 40, (3), 2002, 739-792.
Grant, R. M., “A Resource-Based Perspective of Competitive Advantage.” California
Management Review, 33, 1991, 114–135.
Hoskisson, R., Eden, L., Lau, C., and Wright, M., “Strategy in Emerging Economies.”
Academy of Management Journal, 43, (3), 2000, 249-267.
Makhija, M., “Comparing the Resource-Based and Market-Based Views of the Firm:
Empirical Evidence from Czech Privatization.” Strategic Management Journal, 24, (5),
2003, 433-451.
McGahan, A. M., and Porter, M., “How Much Does Industry Matter, Really?” Strategic
Management Journal, 18, (Summer Special), 1997, 15-30.
Mehra, A., “Resource and Market Based Determinants of Performance in the U.S. Banking
Industry.” Strategic Management Journal, 17, (4), 1996, 307–322.
Powell, T. C., “How Much Does Industry Matter? An Alternative Empirical Test.” Strategic
Management Journal, 17, (4), 1996, 323-334.
Suhomlinova, O., "Constructive Destruction: Transformation of Russian State-Owned
Construction Enterprises during Market Transition." Organization Studies, 20, 1999,
451-484.
CONSUMER ETHNOCENTRISM AND
EVALUATION OF INTERNATIONAL AIRLINES
ABSTRACT
I. INTRODUCTION
Shimp and Sharma (1987) first introduced the concept of consumer ethnocentrism as
the beliefs held by consumers about the appropriateness, indeed morality, of purchasing
foreign-made products. Consumer ethnocentrism may therefore be viewed as a way to
differentiate between consumer groups that prefer domestic to foreign products (Shimp and
Sharma, 1987). These ethnocentric tendencies may lead to negative attitudes towards foreign
products.
This paper begins with a brief review of the literature related to ethnocentrism and
consumer ethnocentrism. A number of hypotheses are then proposed. The research design
used to test the hypotheses, the results of the study, and a discussion follow.
II. ETHNOCENTRISM
ethnocentrism, can be defined as the tendency to view one's in-group as the standard against which
other groups are judged.
According to Shimp and Sharma (1987), consumer ethnocentrism results from the love
and concern for one’s own country and the fear of losing control of one’s economic interests
as a result of the harmful effects that imports may bring to one's countrymen. Ethnocentric
consumers prefer domestic goods either because they believe that products from their own
country are the best (Klein et al., 1998), or because a concern for morality leads them to
purchase domestic products even when the quality is lower than that of imports (Wall and
Heslop, 1986). Consumer ethnocentrism may play a significant role when people believe that
their personal or national well-being is under threat from imports (Sharma et al., 1995; Shimp
and Sharma, 1987). The more importance a consumer places on whether or not a product is
made in his/her home country, the higher his/her ethnocentric tendency (Huddleston et al.,
2001). Research from the US and other developed countries generally supports that highly
ethnocentric consumers overestimate domestic products, underestimate imports, and feel a
moral obligation to buy domestic merchandise (Netemeyer et al., 1991; Sharma et al., 1995;
Shimp and Sharma, 1987).
Shimp and Sharma (1987) developed the CETSCALE to measure consumer
ethnocentrism. The scale has 17 items, each measured on a 7-point Likert scale. Its validity
was first tested by Shimp and Sharma in the US and later tested across nations -- the US,
France, Japan, and West Germany -- by Netemeyer et al. (1991).
According to Tajfel (1978), social identity is defined as the part of an individual’s self-
concept that derives from his knowledge of his membership in a social group together with the
value and emotional significance attached to that membership. A fundamental prediction of
social identity theory is that discriminatory behavior is related to an individual’s degree of in-
group identification (Tajfel, 1978). Social identity is a core ingredient of ethnocentrism
because it seeks to enhance self-esteem. The categorization and enhancement associated with
social identity supports ethnocentrism, i.e. in-group favoritism and out-group discrimination
(Perreault and Bourhis, 1999). Lantz and Loeb (1999) extend this effect to the notion of the
economic well-being and suggest that ethnocentrism is related to the social economic well-
being of the group.
We propose that national identity and economic well-being are two key antecedents of
consumer ethnocentrism. The consequences of consumer ethnocentrism will be reflected in
the evaluation of domestic and foreign-made products. The following hypotheses emerge from the
literature:
H1: There is a positive relation between national identity and consumer ethnocentrism.
H2: There is a positive relation between people’s interest in national economic well-being and
consumer ethnocentrism.
H3: There is a positive impact of consumer ethnocentrism on the evaluation of domestic
products.
H4: There is a negative impact of consumer ethnocentrism on the evaluation of foreign
products both from socially close countries and socially distant countries.
Data were collected from 4,975 air travelers at airports in 18 cities in three countries
(USA, Canada, and Mexico) through personal interviews. During an interview, respondents
were asked to complete a structured questionnaire consisting of multiple items designed to
operationalize our conceptual framework. The total sample comprised 2,393 respondents in
the USA, 1,996 in Canada, and 586 in Mexico. Participants were asked to evaluate airline
services from the home country, socially close countries, and socially distant countries.
Established scales were used to measure the level of national identity, economic well-being,
and consumer ethnocentrism.
To test the reliability of the measurement scales used in this study, we computed
Cronbach's alpha of each scale across the samples. The highest Cronbach's alpha is .96 and
the lowest is .67. The CETSCALE's reliability of .88 is comparable to the reliabilities
reported by Shimp and Sharma (1987). Generally speaking, the Cronbach's alphas of the
measurement scales are acceptable.
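Cronbach's alpha is computed from the item variances and the variance of the total score. A sketch with simulated responses (the items and data below are synthetic, not the survey's):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) array:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Simulate 300 respondents answering 5 items driven by one latent trait.
rng = np.random.default_rng(1)
trait = rng.normal(size=300)
items = np.column_stack([trait + 0.3 * rng.normal(size=300) for _ in range(5)])
alpha = cronbach_alpha(items)
```

Internally consistent items such as these yield an alpha near the top of the .67-.96 range reported above; uncorrelated items would drive alpha toward zero.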
Hypothesis 3 argues that there is a positive impact of consumer ethnocentrism on the
evaluation of domestic products (PE-home); hypothesis 4 states that there is a negative impact
of consumer ethnocentrism on the evaluation of foreign products, both from socially close
(PE-close) and distant countries (PE-distant). We ran a regression analysis of product
evaluation (PE) of home country products as a function of consumer ethnocentrism. As
expected, for the total sample (see Table 1), as well as for each individual country (see Table
2), consumer ethnocentrism is found to be positively related to evaluation of domestic
products. In other words, consumers who score high on consumer ethnocentrism tend to give
higher evaluation for home country products. Hypothesis 4 is supported by the total sample
results (see Table 1), showing that consumer ethnocentrism has a negative impact on the
evaluation of foreign products, regardless of whether the products are from socially close or
socially distant countries. The respective results from the U.S. sample and the Canadian
sample support H4 as well. The results from the Mexican sample show a slight deviation (see Table
2). The negative impact of consumer ethnocentrism on product evaluation of socially distant
countries is not significant, though the result indicates the same tendency (β= -.041).
Furthermore, the Mexican data also reveals a tendency that consumer ethnocentrism is
positively related to product evaluation of socially close countries (β= +.027), though the
result is not significant. Therefore, H4 is supported by results from the total sample and from
U.S. and Canada.
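The regression test of H3 can be sketched as follows on synthetic data. The scale ranges, the sample size, and the assumed slope of 0.4 are invented for illustration and are not the paper's estimates; the point is only that a positive fitted slope corresponds to the reported positive impact of ethnocentrism on home-country evaluations.

```python
import numpy as np

# Synthetic respondents: ethnocentrism scores on a 1-7 scale, and a
# home-country product evaluation that (by assumption) rises with them.
rng = np.random.default_rng(3)
cet = rng.uniform(1, 7, 300)                         # ethnocentrism scores
pe_home = 3.0 + 0.4 * cet + rng.normal(0, 0.5, 300)  # assumed relation + noise

# Simple OLS fit: a positive slope is the pattern reported for H3.
slope, intercept = np.polyfit(cet, pe_home, 1)
print(slope > 0)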
VIII. CONCLUSION
First and foremost, results of this study confirm that consumer ethnocentrism does
have an impact on consumers’ evaluation of products. Consumer ethnocentrism has a positive
impact on the evaluation of home-country products and, at the same time, a negative impact
on the evaluation of foreign products. Consumers who scored high on consumer
ethnocentrism tend to evaluate home-country products more positively than foreign products.
In other words, ethnocentric consumers prefer home-country products to imports. This
finding is consistent with the literature (Shimp and Sharma 1987;
Klein et al. 1998; Sharma et al. 1995; Bruning 1997). Furthermore, we extend the scope of
consumer ethnocentrism research to a global environment by studying consumers from the US,
Canada, and Mexico. Results from each country suggest that the positive impact of consumer
ethnocentrism on the evaluation of home-country products is prominent both in the US and
Canada, two developed countries, and in Mexico, a developing country.
Second, our research suggests that consumers’ perception of their national identity and
their interest in national economic well-being can be viewed as antecedents to consumer
ethnocentrism. Consumer ethnocentrism is positively correlated with people’s perception of
their national identity. In addition, consumers’ interest in national economic well-being has a
positive impact on consumer ethnocentrism. Previous studies mainly focused on the impact of
national identity on in-group favoritism and out-group discrimination (Perreault and Bourhis
1999). Few researchers have examined the influence of both national identity and interest in
national economic well-being on consumer ethnocentrism. The result of our study helps to
establish the antecedent relationship between the constructs of national identity and interest in
national economic well-being, and consumer ethnocentrism.
When we study the impact of consumer ethnocentrism on the evaluation of foreign
products, we find that the respective results from the U.S. and Canadian samples support our
hypothesis that consumer ethnocentrism has a negative impact on the evaluation of foreign
products, consistent with the total sample. The Mexican sample deviates slightly: the negative
impact of consumer ethnocentrism on the product evaluation of socially distant countries is
not significant, though the result indicates the same tendency, and the Mexican data also
reveal a tendency for consumer ethnocentrism to be positively related to the product
evaluation of socially close countries, though that result is not significant either. Unlike the
U.S. and Canada, Mexico is a developing country, and consumers from developing countries
tend to evaluate imported products more highly (Wang and Chen, 2004). This may help
explain the differences between Mexico and the other two countries reflected in this study.
As with any study, this research has limitations. The relationship between consumer
ethnocentrism and product evaluation was tested with only one product. Future research could
explore this relationship with other products, e.g., convenience goods. Moreover, this research
investigated only the evaluation of home-country products versus foreign products in general;
we did not examine the evaluation of products from a specific foreign country. Future research
could investigate the relationship between consumer ethnocentrism and product evaluation in
the context of specific countries.
REFERENCES
Bruning, E. "Country of origin, national loyalty and product choice: the case of international
air travel", International Marketing Review., 14(1), 1997, 59-74.
Elchardus M., L. Huyse, and E. van Dael. Het Maatschappelijk Middenveld in Vlaanderen:
Een Onderzoek Naar de Sociale Constructie van Democratisch Burgerschap. Brussels:
VUB Press, 2000.
Tajfel, Henri. Differentiation Between Social Groups. London: Academic Press, 1978.
Huddleston P., Good L. K., and Stoel L. "Consumer Ethnocentrism, Product Necessity and
Polish Consumers' Perceptions of Quality." International Journal of Retail &
Distribution Management., 29 (5), 2001, 236-46.
Klein, Jill Gabrielle, Richard Ettenson, and Marlene D. Morris. "The Animosity Model of
Foreign Product Purchase: An Empirical Test in the People's Republic of China."
Journal of Marketing., 62 (1), 1998, 89-100.
Lantz G. and Loeb S. "Country of Origin and Ethnocentrism: An Analysis of Canadian and
American Preferences Using Social Identity Theory." Advances in Consumer
Research., 23, 1996, 374-78.
LeVine R. and Campbell D. T. Ethnocentrism: Theories of Conflict, Ethnic Attitude and
Group Behavior. London: John Wiley, 1972.
Matsumoto, D. Culture & Psychology. London: Wadsworth, 2000.
Netemeyer R. G., Durvasula S., and Lichtenstein D. R. "A Cross-National Assessment of the
Reliability and Validity of the CETSCALE." Journal of Marketing Research., 28,
1991, 320-28.
Perreault S. and Bourhis R. "Ethnocentrism, Social Identification, and Discrimination."
Journal of Personality and Social Psychology Bulletin., 25 (1), 1999, 92-103.
Rothbart, M. Intergroup Perception and Social Conflict. Chicago: Nelson-Hall, 1993.
Sharma S., Shimp T., and Shin J. "Consumer Ethnocentrism: A Test of Antecedents and
Moderators." Journal of the Academy of Marketing Science., 23 (1), 1995, 26-37.
Shimp T. and Sharma S. "Consumer Ethnocentrism: Construction and Validation of the
CETSCALE." Journal of Marketing Research., 24, 1987, 280-89.
Sumner, W.G. Folkways. New York: Ginn, 1906.
Wall M. and Heslop L. A. "Consumer Attitudes toward Canadian-made versus Imported
Products." Journal of the Academy of Marketing Science., 14 (Summer), 1986, 27-36.
Wang, Cheng Lu, and Zhen Xiong Chen. "Consumer Ethnocentrism and Willingness to Buy
Domestic Products in a Developing Country Setting: Testing Moderating Effects."
Journal of Consumer Marketing., 21 (6), 2004, 391-400.
Worchel S. and Simpson J. A. Conflict between People & Groups: Causes, Processes, and
Resolutions. Chicago: Nelson-Hall Publishers, 1993.
THE VALUATION ABILITIES OF THE PRICE-EARNINGS-TO-GROWTH RATIO
AND ITS ASSOCIATION WITH EXECUTIVE COMPENSATION
ABSTRACT
The objective of this study is to examine the valuation abilities of the price-earnings-
to-growth (PEG) ratio and its association with executive compensation. Financial analysts are
found to use price-multiple heuristics such as the PEG ratio, rather than residual income
valuation (RIV) models, to support their recommendations (Bradshaw, 2002). This study
measures the valuation abilities of the PEG ratio relative to a RIV model and extends prior
research on the PEG ratio to examine its association with executive compensation. The
results do not support the superiority of the RIV model over the model based on the PEG
ratio. However, the results provide support for the existence of a relationship between the
PEG ratio and executive compensation.
I. INTRODUCTION
The PEG ratio is defined as: PEG = (P/E) / LTG, where P/E is the forward price-to-
earnings ratio (i.e., price divided by forecasted earnings), and LTG is the analysts’ projection
of long-term annual earnings growth stated in percent (Bradshaw, 2004). Two surveys by
Block (1999) and Bradshaw (2002) indicate that financial analysts use models based on the
PEG ratio, rather than residual income valuation (RIV) models, to support their
recommendations. However, evidence on the association between the PEG ratio and firm
value is limited. The objective of this study is twofold: (1) to examine the valuation abilities
of the PEG ratio and (2) to investigate its association with executive compensation.
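As a worked example of the definition above, with invented numbers: a stock trading at $30 with a forecast EPS of $2 has a forward P/E of 15; combined with a projected long-term growth rate of 15%, its PEG ratio is 15/15 = 1.0.

```python
def peg_ratio(price, forecast_eps, ltg_pct):
    """PEG = forward P/E divided by long-term growth stated in percent."""
    forward_pe = price / forecast_eps   # forward price-to-earnings ratio
    return forward_pe / ltg_pct

# Illustrative inputs: $30 price, $2 forecast EPS, 15% long-term growth.
print(peg_ratio(30.0, 2.0, 15.0))  # → 1.0
```

A PEG near 1.0 is the conventional benchmark analysts use when calling a stock fairly priced relative to its growth.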
Among the motivations behind examining the valuation abilities of the PEG ratio is
that most ratio-based prediction literature focuses on using financial ratios to predict future
earnings (e.g., Ou and Penman, 1989a, b). This study instead extends the ratio-based
prediction literature by investigating the association between financial ratios and stock prices.
This study considers the association of the PEG ratio with executive compensation for several
reasons, among them that the PEG ratio provides an indicator of how the firm is being
valued by the market. The results of regression and portfolio analyses do not provide
evidence supporting the hypothesis that the PEG ratio is superior as a valuation tool to
the RIV model. On the other hand, the results show that the association of
compensation with the PEG ratio is significant.
Ohlson and Juettner-Nauroth (2000) propose a model of forward P/E and earnings
growth in a valuation setting. They consider how next period earnings per share and earnings
per share growth relate to a firm’s current price per share. Easton (2004) indicates that
analysts still pervasively focus on forecasts of earnings and earnings growth rather than book
value and book value growth implicit in the RIV models.
The dividend-discounting model defines share price as the present value of expected
future dividends discounted at their risk-adjusted expected rate of return. Starting from the
dividend-discounting model, Ohlson (1995) formulates the RIV model to express firm
value as the sum of current book value and the discounted present value of expected abnormal
earnings. A large number of studies have examined the RIV model and provide evidence on
its validity (e.g., Bernard, 1995).
Early studies in the compensation area focused on documenting the relation between
CEO pay and company performance (e.g., Jensen and Murphy, 1990a). Jensen and Murphy
(1990 a, b) find a modest relation between CEO compensation and corporate performance.
Lambert (1993) indicates that since stock prices reflect many factors such as macroeconomic
shocks and changes in interest rates, managerial remuneration is often based on performance
measures such as accounting earnings, which reflect firm-specific changes in value that result
from managerial actions.
Bradshaw (2004) shows that a valuation model based on the PEG ratio explains
analysts’ recommendations. We extend Bradshaw (2004) by testing whether a valuation
model based on the PEG ratio can explain firm value better than the traditional residual
income valuation model.
H1: Firm value is better explained by a model based on the PEG ratio than by the residual
income valuation model.
Because the PEG ratio takes into consideration growth expectations, which are
considered in executive compensation, because analysts support their recommendations with
the PEG model, and because analysts’ estimates affect market prices (Abarbanell and Bushee,
1997), we expect executive compensation to be associated with the PEG ratio. This
expectation is stated in the following hypothesis:
H2: Executive compensation is significantly associated with the PEG ratio.
To examine H1, we follow a method similar to that used by Bradshaw (2004). Two
values are estimated: one using the RIV model, and the other using a model based on the
PEG ratio. First, a value based on the following residual income model is estimated:
V RI it = BVPS it + Σ(τ=1..3) E it[RI t+τ] / (1 + r)^τ + E it[TV t+3] / (1 + r)^3     (3-1)
where V RI it is the estimate of firm intrinsic value and BVPS it is the book value per share of
firm i at time t. The second term is the present value of expected residual income, E it[RI t+τ],
over a period of three years. (We used three years because of the small number of firms that
have forecasts on I/B/E/S beyond a three-year horizon.) The third term is the terminal value.
We use a two-year-ahead earnings forecast and expected long-term growth in earnings to
estimate the firm value based on the PEG ratio:
VPEG it = E it[EPS t+2] * LTG it * 100     (3-2)
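The two value estimates can be sketched as follows. The clean-surplus book-value updating, the payout ratio k, the treatment of the third-year residual income as a perpetuity in the terminal value, and all input numbers are illustrative assumptions for this sketch, not the paper's data or exact implementation.

```python
def residual_income_value(bvps, eps_forecasts, r, payout=0.0):
    """Sketch of eq. (3-1): current book value, plus three years of
    discounted residual income, plus a terminal value that treats the
    third-year residual income as a perpetuity (an assumption here)."""
    value, bv = bvps, bvps
    ri = 0.0
    for tau, eps in enumerate(eps_forecasts, start=1):
        ri = eps - r * bv                  # residual income E[RI_{t+tau}]
        value += ri / (1 + r) ** tau       # discounted residual income
        bv += eps * (1 - payout)           # clean-surplus book value update
    value += ri / (r * (1 + r) ** len(eps_forecasts))   # terminal value
    return value

def peg_value(eps_two_year, ltg):
    """Sketch of eq. (3-2); LTG is expressed as a decimal here (0.12 = 12%)."""
    return eps_two_year * ltg * 100

# Invented inputs: $10 book value, rising EPS forecasts, 10% cost of equity.
print(round(residual_income_value(10.0, [1.5, 1.7, 1.9], r=0.10), 2))  # → 15.7
print(round(peg_value(1.7, 0.12), 2))                                  # → 20.4
```

Comparing the two outputs for the same firm is exactly the kind of disagreement the regressions below are designed to adjudicate.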
To test which model has stronger valuation abilities, the following regressions are estimated:
P it = α + β1 VRI it + ε it     (3-3)
P it = α + β2 VPEG it + ε it     (3-4)
Ret it = α + β1 VRI it / P it-1 + ε it     (3-5)
Ret it = α + β2 VPEG it / P it-1 + ε it     (3-6)
The significance of the coefficients, the adjusted R2, and the Vuong (1989) test are used to
compare the adjusted explanatory powers of the models. In addition, a portfolio analysis is
conducted to compare the size-adjusted abnormal return provided by a trading strategy based
on the PEG ratio versus that provided by a trading strategy based on the RIV model.
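The model comparison can be sketched as below, assuming normal OLS likelihoods: the Vuong statistic is built from per-observation log-likelihood differences between two non-nested models of the same dependent variable. This is a simplified single-regressor version on synthetic data, not the paper's implementation.

```python
import numpy as np

def vuong_z(y, x1, x2):
    """Simplified Vuong (1989) statistic comparing two non-nested OLS
    models, y ~ x1 versus y ~ x2.  A large positive z favors model 1."""
    n = len(y)
    def loglik_per_obs(x):
        X = np.column_stack([np.ones(n), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / n
        # Per-observation normal log-likelihood at the OLS estimates.
        return -0.5 * np.log(2 * np.pi * sigma2) - resid ** 2 / (2 * sigma2)
    m = loglik_per_obs(x1) - loglik_per_obs(x2)
    return np.sqrt(n) * m.mean() / m.std(ddof=1)

# Synthetic data in which model 1 is the true model and x2 is pure noise.
rng = np.random.default_rng(1)
x1 = rng.normal(size=500)
x2 = rng.normal(size=500)
y = 2.0 * x1 + rng.normal(size=500)
print(vuong_z(y, x1, x2) > 0)   # model 1 should be strongly favored
```

This is the same logic as comparing adjusted R2 values, but with a formal significance test for whether the gap in explanatory power is real.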
To test for the association of compensation with the PEG ratio (H2), the following
model is estimated:
Δln Comp it = α + β1 Δln PEG it + β2 Δln E it + β3 ln Ret it + ε it     (3-7)
where Δln Comp it is the change in the natural log of compensation, Δln PEG it is
the change in the natural log of the PEG ratio, Δln E it is the change in the natural log of
earnings per share before extraordinary items, and ln Ret it is the natural log of stock returns.
The model is estimated twice, using data on the five highest-paid executives and using data on
CEOs only.
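The elasticity regression (3-7) can be sketched with ordinary least squares on synthetic data. The coefficient values (0.15, 0.3, 0.2), the distributions, and the sample size are invented for illustration; the point is that the coefficient on the log change in the PEG ratio is read as an elasticity, the percentage change in compensation associated with a one percent change in the PEG ratio.

```python
import numpy as np

# Synthetic panel of executive-year observations (all numbers assumed).
rng = np.random.default_rng(2)
n = 400
d_ln_peg = rng.normal(0, 0.2, n)    # change in ln(PEG)
d_ln_e = rng.normal(0, 0.1, n)      # change in ln(EPS)
ln_ret = rng.normal(0.08, 0.2, n)   # ln(stock return)
d_ln_comp = (0.15 * d_ln_peg + 0.3 * d_ln_e + 0.2 * ln_ret
             + rng.normal(0, 0.1, n))   # assumed true elasticities + noise

# OLS estimate of eq. (3-7); beta[1] is the PEG elasticity.
X = np.column_stack([np.ones(n), d_ln_peg, d_ln_e, ln_ret])
beta, *_ = np.linalg.lstsq(X, d_ln_comp, rcond=None)
print(round(beta[1], 2))   # recovers a value near the assumed 0.15
```
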
IV. RESULTS
The data were collected from Compustat, I/B/E/S, CRSP, and ExecuComp. Table 1
shows the results of estimating the price and return models.
Table I
Comparing the Residual Income Valuation Model and the PEG Ratio in Terms of Their
Association with Stock Prices and Returns
PRICE RETURN
(3-3) (3-4) (3-5) (3-6)
Variable VRI VPEG VRI/P VPEG/P
Intercept 27.85*** 29.77*** 0.15*** 0.15***
(33.42) (37.77) (9.03) (9.75)
VRI it 0.54*** 0.011
(19.31) (0.88)
VPEG it 0.41*** 0.016***
(18.47) (2.58)
N 2,618 2,618 2,618 2,618
Adj. R2 12.45% 11.50% -0.01% 0.22%
F-value 373.02 341.12 0.77 6.64
p-value <0.0001 <0.0001 0.3803 0.01
Vuong's Z 3.76*** 1.71*
Where VRI it is the firm value estimated from a residual income valuation model, calculated as
follows: VRI it = BVPS it + (EPS it+1 − r·BVPS it) / (1+r)
+ (EPS it+2 − r·(BVPS it + EPS it+1·(1−k))) / (1+r)^2
+ (EPS it+3 − r·(BVPS it + EPS it+1·(1−k) + EPS it+2·(1−k))) / (1+r)^3
+ (EPS it+3 − r·(BVPS it + EPS it+1·(1−k) + EPS it+2·(1−k))) / (r·(1+r)^3),
VPEG it is the firm value estimated from the PEG ratio, calculated as VPEG it = E it[EPS t+2] *
LTG it * 100, Ret it is the stock return, and P it is the stock price at the end of the fiscal year.
Price models (3-3) and (3-4) are significant with p-values of less than 0.0001. The
coefficients on both VRI it and VPEG it are significant. The adjusted R2 of (3-3), 12.45%, is
higher than the adjusted R2 of (3-4), 11.50%. In addition, the significant Vuong's Z-statistic
of 3.76 indicates that the explanatory power of model (3-3) is higher than that of (3-4), which
leads to the rejection of H1. On the other hand, the association between stock returns and
VPEG it is stronger than their association with VRI it, and Vuong's Z-statistic is significant,
indicating the higher explanatory power of model (3-6). These conflicting results do not
provide support for H1. To further examine the valuation
ability of the two models, we conduct a portfolio analysis to see whether there is a difference
between the results of two trading strategies, where one strategy is based on using the PEG
ratio while the second strategy is based on the RIV model. Table 2 shows the results of
portfolios based on taking a short position in low VPEG/P and VRI/P ratios and taking a long
position in high VPEG/P and VRI/P ratios.
Table II
Portfolio Analysis:
Measuring the Characteristics of Quintile Portfolios Formed by the Residual Income
Valuation-Based Value-to-Price (VRI / P) and the PEG-Based Value-to-price (VPEG / P)
                         VRI                                  VPEG
             n    VRI/P     RET     AB_RET       n    VPEG/P    RET     AB_RET
Lowest      521   -0.228    0.281    0.192      521   -1.162    0.173    0.122
t-stat            (-4.17)   (4.47)   (3.34)           (-5.77)   (3.22)   (2.06)
Var.               1.562     2.064    1.579            21.14     1.499    1.54
Quintile 2  526    0.280     0.109    0.320     526    0.358     0.106    0.022
t-stat            (98.40)   (4.89)   (1.53)           (86.56)   (4.49)   (0.95)
Var.               0.004     0.261    0.222            0.009     0.297    0.26
Quintile 3  528    0.469     0.143    0.070     528    0.576     0.135    0.060
t-stat           (130.28)   (6.74)   (3.13)          (102.59)   (7.20)   (3.28)
Var.               0.006     0.237    0.235            0.0166    0.186    0.166
Quintile 4  526    0.693     0.083    0.019     526    0.867     0.170    0.073
t-stat             (1.42)   (5.01)   (0.95)          (100.97)   (4.85)   (3.08)
Var.               0.012     0.145    0.172            0.038     0.645    0.255
Highest     523    1.672     0.180    0.097     523    2.018     0.210    0.145
t-stat            (21.03)   (7.71)   (3.90)           (20.56)   (7.80)   (4.93)
Var.               3.307     0.284    0.279            4.990     0.380    0.391
Hedge                       -0.10    -0.10                       0.04     0.02
t-stat                      -0.27    -0.23                       0.07     0.05
Where VRI is firm value estimated using a residual income valuation model, VPEG is firm value
estimated using the PEG ratio, P is price at beginning of the period, RET is stock return over
the year from CRSP, AB_RET is size adjusted abnormal return from CRSP.
t-statistic = (x̄1 − x̄2) / √(s1²/n1 + s2²/n2), where x̄1, x̄2; s1², s2²; and n1, n2 are the mean
values, the variances, and the numbers of observations in the highest and lowest portfolios,
respectively (Howell, D. C. 1997).
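The portfolio-comparison statistic above can be computed directly, as in the sketch below. The two return series are invented for illustration; they stand in for the highest- and lowest-quintile portfolio returns.

```python
import math

def welch_t(x1, x2):
    """Two-sample t-statistic with unequal variances (Howell, 1997),
    as used above to compare the highest and lowest portfolios."""
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    v1 = sum((v - m1) ** 2 for v in x1) / (n1 - 1)   # sample variance of x1
    v2 = sum((v - m2) ** 2 for v in x2) / (n2 - 1)   # sample variance of x2
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# Illustrative "highest" vs "lowest" portfolio returns (assumed numbers).
print(round(welch_t([0.21, 0.18, 0.25, 0.16], [0.17, 0.12, 0.19, 0.10]), 2))  # → 1.91
```
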
To test H2, data are collected on the five highest-paid executives in all firms in the
ExecuComp database over the period from 1992 to 2002. Under H2, a positive and
significant relationship between the change in the PEG ratio and executive compensation
indicates that managers are compensated for keeping the firm highly valued by the market.
Table 3 shows the results of measuring the association of the executive compensation with the
PEG ratio.
Table III
Testing the Elasticity of Executive Compensation to the PEG Ratio
Δln Comp it = α + β1 Δln PEG it + β2 Δln E it + β3 ln Ret it + ε it     (3-7)
Where Comp is the cash compensation, which equals the total of cash salary and cash bonus;
the PEG is the price-earnings-to-growth ratio, calculated by dividing the forward price-
earnings ratio (stock price over the two-year-ahead forecasted earnings) by the expected
long-term earnings growth; E is earnings per share before extraordinary items and
discontinued operations (Compustat item #A58); and Ret is the annual stock return. Δln
Comp, Δln PEG, and Δln E are the changes in the natural logs of the cash compensation, the
PEG ratio, and earnings, respectively, and ln Ret is the natural log of the stock return.
The results show that the coefficient on the natural log of the PEG ratio is significant at the
0.01 level. After controlling for earnings and stock returns, the coefficient on the change in
the PEG ratio remains significant. When the analysis is restricted to data on the firms' CEOs,
the results show that the elasticity of compensation to changes in the PEG ratio is significant,
with and without controlling for earnings and stock returns. The interpretation of this
positive elasticity is that managers are compensated for having the firm highly valued by the
market. These results provide support for H2.
V. CONCLUSION
The results do not provide evidence of the superiority of the PEG ratio, as a valuation
tool, over the RIV model. While the association between firm values estimated using the RIV
model and stock prices is stronger than the association of firm values estimated using the
PEG ratio with stock prices, stock returns are more strongly associated with firm values
estimated using the PEG ratio than with firm values estimated using the RIV model. In
addition, a trading strategy based on either valuation tool does not provide a significant
abnormal return. Finally, the results provide support for the existence of a relationship
between executive compensation and the PEG ratio, which suggests that managers are
compensated for keeping their firms highly valued by the market.
REFERENCES
Abarbanell, J. S., and B. J. Bushee. “Fundamental Analysis, Future Earnings, and Stock
Prices.” Journal of Accounting Research. 35, (1), 1997, 1-24.
Bernard, V. “The Feltham–Ohlson Framework: Implications For Empiricists.” Contemporary
Accounting Research., 11, (2), 1995, 733–747.
Block, S. B. “A Study of Financial Analysts: Practice and Theory.” Financial Analysts
Journal., 55, (4), 1999, 86-95.
Bradshaw, M. T. “The Use of Target Prices to Justify Sell-Side Analysts’ Stock
Recommendations.” Accounting Horizons., 16, (1), 2002, 27-41.
Bradshaw, M. T. “How Do Analysts Use Their Earnings Forecasts in Generating Stock
Recommendations?” The Accounting Review., 79, (January), 2004, 25-50.
Easton, P. D. “PE Ratios, PEG Ratios, and Estimating the Implied Expected Rate of Return
on Equity Capital.” The Accounting Review., 79, (January), 2004, 73-96.
Feltham, G., and J. Ohlson. “Valuation and Clean Surplus Accounting for Operating and
Financial Activities.” Contemporary Accounting Research., 11, (2), 1995, 689–731.
Howell, D. C. Statistical Methods for Psychology. Fourth edition. Belmont, CA: Duxbury
Press, 1997.
Jensen, M., and K. J. Murphy. “CEO Incentives: It’s Not How Much, But How.” Harvard
Business Review., 68, (3), 1990a,138-153.
Jensen, M., and K. J. Murphy. “Performance Pay and Top Management Incentives.” Journal
of Political Economy., 98, (2), 1990b, 225-264.
Lambert, R. A. “The Use of Accounting and Security Price Measures of Performance in
Managerial Compensation Contracts.” Journal of Accounting and Economics., 16, (1-
3), 1993, 55-101.
A SIMPLE NASH EQUILIBRIUM FROM “A BEAUTIFUL MIND”
ABSTRACT
This paper quantifies a scene from the movie about John Nash’s life, “A
Beautiful Mind.” In that scene, John Nash’s character envisions a special type of
equilibrium where it is better to cooperate with other market participants than to
compete. The popularity of this movie is seen as an opportunity to reveal the
importance of Nash’s work, and to topically cover the mathematical principles
involved. It is written in a form that is accessible to both undergraduate and graduate
students.
I. INTRODUCTION
The Academy Award-winning film “A Beautiful Mind” is more a celebration of John
Nash’s life than of his work (although the movie takes great artistic liberties). To me, the
popularity of this movie and the myth of Nash’s special equilibrium opened the door to show
students the power of his work. Not often do these opportunities present themselves, so I took
a few moments to define and solve a problem I interpreted from the movie. It is my intention
to use this example as a pedagogical tool to show the decision framework, as well as the
underlying mathematical principles.
Before beginning the formal definition of the problem and its solution, I would be
remiss if I were to not mention the impact such a decision framework has on our everyday
lives. For many years, firms and individuals were considered to make decisions on wealth
allocations, levels of production, etc., in isolation, without regard for the actions that might be
taken by other decision makers or competitors. Nash’s work brought us closer to the reality of
decision-making where an individual would, at the very least, try to guess the strategy of his
competitor and make a decision based on that conjecture. Thus, decisions are not made in
solitude, but with rational thought about what another individual or firm might do. This is
true even for young men fighting over the affections of a young lady.
II. REVIEW OF NASH EQUILIBRIUM
Most textbooks cover “game theory” applications at some level. Varian (1992)
provides very nice coverage of this topic, and the following descriptions borrow liberally
from his text. Varian defines the following:
Nash Equilibrium
A Nash Equilibrium consists of probability beliefs (π_r, π_c) over strategies “r” and
“c”, and probabilities of choosing strategies (p_r, p_c), such that:
(i) the beliefs are correct, p_r = π_r and p_c = π_c; and
(ii) each player chooses p_r and p_c to maximize his expected utility.
An interesting special case of a Nash Equilibrium is one of “pure strategies” in which the
probability of choosing a particular strategy is 1.0.
Pure Strategies
A Nash Equilibrium in pure strategies is a pair (r*, c*) such that U_r(r*, c*) ≥ U_r(r, c*)
for all row strategies “r,” and U_c(r*, c*) ≥ U_c(r*, c) for all column strategies “c.”
In one of its most poignant moments, “A Beautiful Mind” interprets John Nash’s
vision through a pub scene where either real or imagined (it doesn’t matter) women enter the
pub. One is blonde and considered to be the most beautiful and desirable across social
standards. The others, brunettes, are sufficient in number to be paired with the young men in
the room without anyone being left alone. I am aware that the following could be construed
as having a misogynistic slant, but nothing could be further from the truth; I am merely using
a scene from a movie whose character does, indeed, exhibit questionable
behavior. That the women consider these gentlemen to be equally acceptable suitors, if at all
suitable, requires some heroic assumptions, but to advance the story, let’s allow the
assumptions to stand without question or debate. Nash’s character follows this line of
reasoning:
If we all compete for the blonde, we will block each other and no one will get
her. It will cost us a great deal of money and effort, and we will all end up
with nothing. At this result, we turn to the brunettes, but they will reject us
because no one wants to be second choice. Therefore, the initial decision to go
after the blonde causes an undesirable result in the first and second attempts.
The only way for everyone to “win” is to choose a strategy of co-operation
where we bypass the blonde and seek the brunettes.
Problem Definition
John and Robert are two young men in a pub who spot a group of attractive young
ladies - one blonde and two brunettes. If both John and Robert try to attract the blonde, they
will expend time and energy, not to mention a great deal of money. It is suggested that they
will block each other and neither will achieve the desired objective. A subsequent attempt to
attract the brunettes will be pointless because the brunettes will not be interested. After all,
no one wants to be the second choice. By both attempting to go after the blonde, John and
Robert will end up with a utility of -3 each.
If John chooses the blonde he will get 2 units of utility but Robert will get 0. Even
though Robert can get a brunette, he “loses face” because he did not compete for the blonde;
hence, there is zero utility. If, however, John and Robert both forego the blonde, they will
each succeed and get 1 unit of utility. The payoffs for each strategy are shown in the table
below, where the number on the left-hand side is the payoff to John and the number on the
right-hand side is the payoff to Robert:
                            Robert
                   Blonde (L)    Brunette (R)
John  Blonde (T)     -3, -3         2, 0
      Brunette (B)    0, 2          1, 1
Maximizing (2a) with respect to p_T and p_B gives the Kuhn-Tucker conditions for the
Lagrange multipliers. (Note: Kuhn and Tucker were colleagues of John Nash at Princeton.)
∂L_John / ∂p_T = 0 ⇒ 2p_R − 3p_L = λ1 + λ2     (2b)
∂L_John / ∂p_B = 0 ⇒ p_R = λ1 + λ3     (2c)
From the complementary slackness conditions, λ2 = λ3 = 0 (see Appendix); thus, (2b) and (2c)
can be solved for p_L = 1/4 and p_R = 3/4. Returning these probabilities to (1a) gives an
expected utility of 3/4 for a mixed strategy.
Similar to John, Robert’s decision is made to maximize his expected utility, shown in (3a, b):
Max E(U_Robert) = p_L[(−3)p_T + (2)p_B] + p_R[(0)p_T + (1)p_B]     (3a)
subject to:
p_L + p_R = 1, p_L ≥ 0, p_R ≥ 0     (3b)
∂L_Robert / ∂p_L = 0 ⇒ 2p_B − 3p_T = λ1 + λ2     (4b)
∂L_Robert / ∂p_R = 0 ⇒ p_B = λ1 + λ3     (4c)
Again, complementary slackness requires that λ2 = λ3 = 0, so (4b) and (4c) can be solved
for p_T = 1/4 and p_B = 3/4. Returning these probabilities to (3a) gives an expected utility of
3/4 for a mixed strategy.
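The derived mixture can be checked numerically: at a mixed equilibrium each player is indifferent between his pure strategies, so both of John's pure strategies (and both of Robert's) should yield the expected utility of 3/4 computed above. The following is a small verification sketch of the game as stated, not part of the original paper.

```python
from fractions import Fraction

# Payoffs from the table: keys are (John's row, Robert's column).
john = {('T', 'L'): -3, ('T', 'R'): 2, ('B', 'L'): 0, ('B', 'R'): 1}
robert = {('T', 'L'): -3, ('T', 'R'): 0, ('B', 'L'): 2, ('B', 'R'): 1}

pT, pB = Fraction(1, 4), Fraction(3, 4)   # John's derived mixture
pL, pR = Fraction(1, 4), Fraction(3, 4)   # Robert's derived mixture

# Expected utility of each pure strategy against the opponent's mixture.
eu_john_T = john[('T', 'L')] * pL + john[('T', 'R')] * pR
eu_john_B = john[('B', 'L')] * pL + john[('B', 'R')] * pR
eu_rob_L = robert[('T', 'L')] * pT + robert[('B', 'L')] * pB
eu_rob_R = robert[('T', 'R')] * pT + robert[('B', 'R')] * pB
print(eu_john_T, eu_john_B, eu_rob_L, eu_rob_R)  # all equal 3/4
```

Exact rational arithmetic via `Fraction` avoids any floating-point doubt about the indifference condition.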
REFERENCES
ECONOMICS OF/AND LOVE:
AN ANALYSIS INTO DOWRY PRICING IN EAST AFRICA
ABSTRACT
This paper addresses dowry (bride price) payment in East Africa. It addresses a
cultural issue from an economic perspective. It is assumed that dowry payment is driven by
economic fundamentals, such as recessions and booms, income levels, exchange rates, and
location, and has intergenerational effects. By collecting and using data on what a sample of
men paid as dowry to their in-laws, the paper attempts to develop a robust dowry pricing
model. Other issues that are raised but left for future studies include the role of dowry as an
impediment to the social and cultural progress of women, the future of dowry payment, and
its role as a form of social security, akin to the US Social Security system.
I. INTRODUCTION
In 2001, The New York Times carried an intriguing story in which peasants in
Zimbabwe were responding to economic problems in a unique way: they demanded that the
dowry (bride price) be paid in US dollars instead of the less stable Zimbabwe dollars.
Another more perplexing and sad story appeared from the same country: “Patrick
Mupedzi (42) of Mupondi village in Masvingo allegedly beat to death his 25-year-old son
Zachariah Mupedzi following a 30-minute fist fight. The son had allegedly paid lobola
(dowry) to his in-laws with a cow belonging to the father without his consent” (Daily News,
May 10, 2001).
In the Indian Subcontinent, it is the other way round: the bride’s side pays dowry to the man.
Cases of bride burning, when the in-laws would not accept the bride price, were all too
common until the Indian Government stepped in. In Kenya, gender activists are up in arms
against the practice of dowry payment; they say it “commoditises” women and denies them
dignity.
Bride price, which some people prefer to call bride wealth or dowry, has been well
researched by anthropologists, sociologists, and other social scientists. Jomo Kenyatta (1938)
wrote extensively about marriage among the Kikuyus of Kenya, but economists, particularly
in East Africa, have paid scant attention to the issue. Some economists, like Gary Becker,
have, however, written extensively about marriage, divorce, and other “soft issues”.
While there are studies that have addressed the issue of dowry, mostly in the Indian
subcontinent, few if any have addressed it in East Africa, where dowry payment is common
and where the man’s side pays the dowry, unlike in India where the lady’s side pays it. Yet
each year millions of dollars change hands as daughters “change hands” and the purchasing
power of their fathers improves. Dowry payment is not just an anthropological or social
issue; it is also an economic issue, and should be addressed from that perspective. There are
a number of reasons why economists should pay attention to dowry payment.
Some economists argue that dowry payment belongs to the underground economy,
because it is not taxed and is not represented in official statistics; hence, like housework, its
omission understates the measured GDP of a given country.
In dowry payment there is a market with rules and regulations; e.g., in most
communities one cannot pay the entire dowry at once, but pays it over a long period of time.
If a man dies without clearing the dowry, his children take over! The bride and the
bridegroom are not involved in the dowry negotiation. But as Banerjee (1999) observes, this
disadvantages the marriage market; he cites the lack of marriageable men.
Dowry payment can be seen as a form of social security (Dekker, 2002). Since dowry
is paid over the years, the parents of the girl can always call upon the “boy” in case of any
financial problem. Dowry can also be seen from a social security point of view, where the
parents of the girl see her as an investment that can be recouped for the rest of their lives.
Conversely, the man’s side sees the lady as an investment: she will contribute in terms of
money, in addition to children (the traditional perspective).
Dowry is often priced in currency, like dollars or shillings, and is thus affected by
economic conditions such as inflation. One may therefore ask whether fathers have been
getting fair "prices" for their daughters. Rao (1993) notes that in rural India dowries have
risen to about 50% of household assets. However, Edlund (1997) suggests this rise could be
due to increases in wealth, not scarcity of men.
There is intermediation: the marrying parties never negotiate for themselves. Is there
an agency problem?
Dowry payment affects a man's earning and investment potential. He is at a big
disadvantage because he often starts paying dowry when he has nothing; most people marry
after school. Instead of investing his money, he uses it to pay dowry to his in-laws. Dowry
can therefore be seen as a competitor for scarce resources. Some say dowry creates
dependency on the part of the woman's side. In some regions of the world, cohabitation may
have been used as an escape from the obligation to pay dowry; however, Clarkberg (1999)
found that cohabitators are likely to be having economic problems.
How is dowry priced? What factors do the negotiators consider in arriving at a
"fair" price? Though the amount of dowry was traditionally fixed, negotiators now have a lot
of latitude in deciding the amount. In East Africa, for example, educated girls command a
premium, with some people speculating that this is "pricing them out of the marriage
market."
The churches are quiet on dowry, while the legal system recognizes dowry payment
as a legal marriage rite, equivalent to a marriage certificate or license.
Though dowry payment is an economic issue, few studies have examined it from an
economic perspective in East Africa, even though there are plenty of such studies in the
Indian subcontinent, where the woman's side pays dowry, the opposite of what happens in
East Africa. This paper will attempt to achieve the following objectives:
• Identify the variables that determine the size of dowry in East Africa.
• Develop a dowry pricing model.
• Investigate how dowry has been an impediment to the progress of women in East Africa.
• Investigate the future of dowry in East Africa.
• Investigate how the agency problem is resolved in dowry negotiations.
• Make a contribution to cultural economics.
IV. METHODOLOGY
The value of dowry (bride price) = what is paid on negotiation day + the net present value of
future payments:

D = D0 + Σ (t = 1 to T) Dt / (1 + r)^t
where
D0 = the dowry payment on negotiation day,
Dt = the dowry payment at time t, and
r = the interest (discount) rate.
The main factors that determine the value of dowry are the interest rate r (and the
exchange rate), time t, and Dt, the stream of dowries paid later. These later payments are
made at no agreed time and are rarely standardized. Other determinants of D are the
education of the man and the lady, their ages, their net incomes, the age of the parents, the
economic situation (recession versus boom), and the lady's position in the family: is she the
only girl, a first born, or a last born? The interest rate r can be obtained from national
statistics; the rest can be obtained from a survey.
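The valuation above can be sketched in a few lines of Python. The installment schedule and the 10% discount rate below are illustrative assumptions, not figures from the study:

```python
# Sketch of the paper's dowry valuation: the amount paid on negotiation day
# plus the net present value of the agreed future installments.
def dowry_value(d0, future_payments, r):
    """Total dowry D = D0 + sum_t Dt / (1 + r)**t.

    d0: amount paid on negotiation day
    future_payments: list of (t, Dt) pairs, with t in years from now
    r: annual interest (discount) rate
    """
    return d0 + sum(dt / (1 + r) ** t for t, dt in future_payments)

# Hypothetical example: 20,000 KSh up front, then 10,000 KSh installments
# in years 2 and 5, discounted at 10%.
total = dowry_value(20_000, [(2, 10_000), (5, 10_000)], 0.10)
```

The discounting matters: the two 10,000 KSh installments are together worth well under 20,000 KSh today, which is one reason the paper treats the day-one payment as the dominant component.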
Therefore, D = f(r, t, Ed, Age1, Age2, Age3, Econ, Location, Position), where Ed is
the education of the wife; Age1 and Age2 are the ages of the wife and husband; Age3 is the
average age of the parents when the daughter married; Location is a dummy depending on
the region of the country or ethnic group; and Econ refers to the economic situation,
recession or boom.
Since we cannot really observe what will be paid in the future, we use what is paid at
the beginning, D0, as a proxy for the dowry: in most cases it is a good predictor of what will
be paid later, and in most societies it is considered the most important part of the dowry. Our
main focus therefore becomes a model that can predict this payment. Data will be collected
from Kenya, Uganda and Tanzania, with each country providing 100 couples; this approach
is a variation of the method used by Dekker (2002). The couples will be of varying ages, i.e.
the study will be cross-sectional.
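The proposed estimation can be sketched as an ordinary least squares regression of the day-one payment on a subset of the covariates. The data below are synthetic stand-ins for the planned 300-couple sample, and every coefficient in the data-generating process is hypothetical:

```python
import numpy as np

# Synthetic stand-in for the planned Kenya/Uganda/Tanzania survey sample.
rng = np.random.default_rng(0)
n = 300
ed = rng.integers(0, 16, n).astype(float)     # wife's years of education
age1 = rng.integers(18, 40, n).astype(float)  # wife's age at marriage
econ = rng.integers(0, 2, n).astype(float)    # 1 = boom, 0 = recession
loc = rng.integers(0, 2, n).astype(float)     # region/ethnic-group dummy

# Hypothetical data-generating process: education carries a dowry premium.
d0 = 200 + 40 * ed + 5 * age1 + 50 * econ + 30 * loc + rng.normal(0, 25, n)

# OLS: regress the day-one payment D0 on the covariates.
X = np.column_stack([np.ones(n), ed, age1, econ, loc])
beta, *_ = np.linalg.lstsq(X, d0, rcond=None)
```

With real survey data the estimated coefficient on education would speak directly to the "educated girls command a premium" conjecture discussed earlier.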
V. PRELIMINARY RESULTS
Age of Husband    Dowry (KSh)    Dowry (US$)
28                36,000         480.00
34                15,000         200.00
35                74,000         986.67
35                56,000         746.67
36                100,000        1,333.33
45                34,600         461.33
45                137,000        1,826.67
55                54,000         720.00
65                30,000         400.00
85                40,000         533.33
_________________________________________________________
Note: US$1 = about 75 Kenya shillings.

[Figure: scatter plot of dowry in US$ (0-2,000) against age of husband (0-100)]
It appears that younger people are paying higher dowries, but the exchange rate has
not been factored in: the Kenyan shilling was stronger in the past, at about 7 KSh to the US
dollar in 1977 versus 75 KSh now, and this data was collected in 2004-2005.
To achieve the other objectives, a questionnaire will be designed and the couples and
other opinion makers interviewed. Actual dowry negotiations will be observed to gain
deeper insights into the negotiation process. Further, focus groups will be interviewed in
both rural and urban areas to get a modern perspective on this phenomenon. Interestingly,
dowry negotiations do not vary much between urban and rural areas, but attitudes toward
dowry do. Note: this is an ongoing study, and more data will be collected and analyzed.
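The exchange-rate caveat can be illustrated with the two rates the text mentions (7 KSh/$ in 1977, 75 KSh/$ at the time of data collection); any other year's rate would be an assumption. Converting the table's first payment, 36,000 KSh, at each rate shows how much the choice matters:

```python
# KSh per US$, the only two rates stated in the text.
rates = {1977: 7.0, 2005: 75.0}

def dowry_in_usd(amount_ksh, year):
    """Convert a KSh dowry to US$ at the stated rate for the given year."""
    return amount_ksh / rates[year]

old = dowry_in_usd(36_000, 1977)  # valued at the 1977 rate
new = dowry_in_usd(36_000, 2005)  # valued at the current rate, as in the table
```

The same nominal payment is worth roughly ten times as many dollars at the 1977 rate, so converting every observation at today's rate systematically understates the older dowries.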
REFERENCES
Anderson, S. “The Economics of Dowry Payment in Pakistan,” Tilburg University Center for
Economic Research, 2000.
Banerjee, K. “Gender Stratification and the Contemporary Marriage Market in India,” Journal
of Family Issues, September 1999.
Bell, D. and Song, S. "Characteristics of Bride Wealth Under Restricted Exchange," Journal
of Quantitative Anthropology, 2, 1990.
Borroah, V. "Does Employment Make Men less Marriageable?" Applied Economics, 34,
London, August 15, 2002: 1571-1582.
Clarkberg, M. “The Price of Partnering: The role of Economic Well Being in Young Adults’
First Union Experience,” Social Forces 77 (3), 1999: 945-968.
Coles, M. and Burdett, K. "Marriage and Class," The Quarterly Journal of Economics, 112 (1),
February 1997: 141-168.
Dekker, M. "Bride Wealth and Household Security in Rural Zimbabwe," a paper presented at
Cambridge University, March 2002.
Edlund, E. “Dowry Inflation: A Comment,” A Working Paper Series in Economics and
Finance No.193, September, 1997, Stockholm School of Economics.
Foster, J. “Is Medical Practice a Marriage Breaker?” Medical Economics 75 (7), April 13,
1998: pp 14.
Iraki, X.N. “Economics of Dowry and the Question of Taxing it”, The East African Standard,
August 3, 2005: Nairobi, Kenya.
Iraki, X.N. “Should Bride Price Be on Market Terms or do we Abolish it?” The People, June
3, 2002: Nairobi, Kenya.
Loscocco, K. and Spitze, G. "Women's Position in the Household," Quarterly Journal of
Economics and Finance, 39, 1999: 647-661.
Oriang, L. “Ladies, no Surrender, no Retreat” The Daily Nation, March 11, 2005: Nairobi,
Kenya.
Rao, V. “The Rising Price of Husbands: A Hedonic Analysis of Dowry Increase in Rural
India,” University of Chicago Press, 1993.
Rowtorn, R. “Marriage and Trust, Some Lessons From Economics,” Cambridge Journal of
Economics 23 (5), September, 1999: London: 661-691.
Taneka, M. “The Economics of Marriage,” Japan Echo, 1996.
Tertilt, M. “The Economics of Bride Price and Dowry: A Marriage Market Analysis,”
University of Minnesota, April 2002.
INTERNATIONAL TRADE GROWTH AND CHANGES
IN U.S. MANUFACTURING CONCENTRATION
ABSTRACT
The impact of increased trade upon both the degree of regional specialization in U.S.
manufacturing sectors as measured by Hoover’s coefficient of localization, and the degree of
spatial concentration as measured by the Gini coefficient, is tested. The paper improves upon
prior work that was strictly cross-sectional by estimating over a panel of state-level data.
Contrary to prior published work, increases in trade activity in a manufacturing sector are
associated with declining levels of both regional specialization and spatial concentration.
These results do not support much of the theoretical work of the past decade that postulates
increasing regional specialization and spatial concentration as a consequence of rising
international economic integration.
I. INTRODUCTION
Porter (1990) has been an influential writer on the impact of increased trade activity
upon regional clustering. Porter’s theory emphasizes that a nation’s successful industries will
be linked through both vertical and horizontal relationships. Spillover economies, or external
economies of scale, are critical to Porter’s explanation of why successful firms tend to cluster
regionally. Porter goes on to posit that “As more industries are exposed to international
competition in the economy, the more pronounced the movement toward clustering will
become.” (Porter, 1990, page 152). Krugman (1991a,b) reaches conclusions similar to Porter.
In his model, a threshold value for transport costs will exist where above (below) the
threshold value manufacturing activity will be diffuse (concentrated). Trade barrier
reductions are treated as a decline in transport costs. Krugman notes, however, that extending
the model to include more than two regions introduces ambiguity into the results.
To date, limited empirical work has been done assessing the validity of the above
theories on the impact of increased trade upon regional specialization or concentration of
manufacturing activity, and the findings are mixed. Greenaway and Hine (1991) find
convergence, not divergence, in the overall similarity of industrial production structure from
1970-1985 for 18 of 22 OECD nations studied. In contrast, Brulhart (1998) analyzes each of
166 subnational EU regions, using the centrality index of Keeble et al. (1986) as the
geographic concentration measure of economic activity, and finds a positive correlation
between these centrality measures and the importance of intraindustry trade in these regions
over the 1961-90 period.
This paper combines national-level trade flow data by industry with matching
manufacturing industry specialization and concentration indices built from state-level
manufacturing earnings for 21 manufacturing sectors. These are the 20 two-digit SIC sectors,
with transportation divided into motor vehicle and other transportation sectors. The
coefficient of localization and the spatial Gini coefficients are computed for each of the 21
industries at the national level for every year in the data set. The trade activity variables,
exports to domestic shipments and imports to domestic shipments, are taken from the National
Bureau of Economic Research Database “Imports and Exports By SIC Category, 1958-94”.
The overlap of the two data sets gives this study a sample period of 1969-94 with annual data.
The specialization measures follow Kim (1995). For a particular industry j, the coefficient of
localization is derived from the industry's location quotients across the k states. The location
quotient for industry j, state k, is defined as:

Ljk = (Ejk / Ek) / (EjU.S. / EU.S.)

where Ejk is earnings in industry j for state k, Ek is total manufacturing earnings for state k,
EjU.S. is national earnings in industry j, and EU.S. is total national manufacturing earnings. After
solving Ljk for all k states, the localization curve for industry j is constructed. It is identical in
concept to the better-known Lorenz curve for income distributions. The coefficient of
localization for industry j, LCj from equation (1), is related to the localization curve in the
same manner as the Gini coefficient is derived from the Lorenz curve. If LCj equals zero,
then industry j is dispersed across the states in direct proportion to total manufacturing
activity. If LCj equals one, then the industry is completely localized in one state. The formula
of Pyatt et al. (1980) has been widely used to estimate Gini coefficients, so it was used to
estimate LCj.
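The construction can be sketched on toy data. Note that the paper estimates LCj with the Pyatt et al. (1980) Gini formula; the snippet below instead uses the common "half the sum of absolute share differences" form of Hoover's coefficient, which captures the same zero-to-one dispersed-to-localized scale:

```python
import numpy as np

def location_quotients(e_jk, e_k):
    """L_jk = (E_jk / E_k) / (E_jUS / E_US) for each state k."""
    return (e_jk / e_k) / (e_jk.sum() / e_k.sum())

def hoover_localization(e_jk, e_k):
    """Hoover's coefficient: 0.5 * sum_k |E_jk/E_jUS - E_k/E_US|.

    0 = industry spread in proportion to total manufacturing;
    values near 1 = industry concentrated in a single state.
    """
    return 0.5 * np.abs(e_jk / e_jk.sum() - e_k / e_k.sum()).sum()

# Toy data: industry j's earnings and total manufacturing earnings, 4 states.
e_jk = np.array([80.0, 10.0, 5.0, 5.0])
e_k = np.array([100.0, 100.0, 100.0, 100.0])
lq = location_quotients(e_jk, e_k)   # state 1's quotient is well above 1
lc = hoover_localization(e_jk, e_k)  # high: industry j clusters in state 1
```

An industry distributed exactly like total manufacturing would score zero, matching the interpretation of LCj in the text.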
LCj is the most commonly used measure of regional specialization by industry sector,
but it does have some unavoidable shortcomings because states are an imperfect regional unit
of measure given the vast differences in population size across states. An alternative to LCj
that avoids this small-state versus large-state problem is simply to estimate a Gini coefficient
of the spatial concentration of manufacturing sector j's activity for each year in the sample
period, and then examine how, if at all, changes in an industry's trade activity are associated
with changes in the distribution of that activity across the U.S.
The empirical focus of this paper is narrow. The aim is not to explain the variation
across manufacturing sectors in their average values for either regional manufacturing
specialization (localization coefficient) or spatial concentration (spatial concentration gini).
These differences across industries are taken as a given and the impact upon these industry
measures from changes in trade activity is estimated using fixed effects panel data models.
By allowing the intercept coefficients to vary by industry, differences in industries' tendencies
to cluster or specialize regionally due to economies of scale or resource availability issues
are in large part captured. Moreover, any changes over time in LCjt or GCjt values that are
common across industries due to changes in transport costs, information costs, regulatory
environments, or other economy-wide factors can be captured through the use of time period
dummies, thereby aiding in isolating the impact from the trade activity variables.
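The fixed-effects setup can be sketched as a dummy-variable regression on synthetic data. The panel dimensions match the paper (21 sectors, 1969-94), but the industry intercepts and the -0.4 trade effect in the data-generating process are illustrative assumptions, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ind, n_yr = 21, 26                       # 21 SIC sectors, 1969-94
ind = np.repeat(np.arange(n_ind), n_yr)    # industry index for each obs

xs = rng.uniform(0.0, 0.5, n_ind * n_yr)   # export/shipments share (X/S)jt
alpha = rng.uniform(0.2, 0.6, n_ind)       # industry-specific intercepts
# Hypothetical DGP: more export exposure lowers localization.
lc = alpha[ind] - 0.4 * xs + rng.normal(0.0, 0.01, n_ind * n_yr)

# Fixed effects via industry dummies, plus the trade-activity regressor.
D = np.eye(n_ind)[ind]
X = np.column_stack([D, xs])
beta, *_ = np.linalg.lstsq(X, lc, rcond=None)
trade_effect = beta[-1]                    # slope on export share
```

Time-period dummies would be added as extra columns in the same way, which is how the paper isolates economy-wide shifts from the trade variables.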
The key findings from the regression analysis are summarized in Table 1. Not
surprisingly, the industry-specific intercept terms jointly are highly significant in every
regression equation. The trade activity variables also are statistically significant in every
regression equation and the overall fit of the model, as evidenced by the adjusted R2, is quite
high. The impact of increased trade activity in this model is clear: an increase in either
industry imports or exports is associated with reductions in the degree of both regional
specialization (LC) and spatial concentration (GC) in the industry. While the inclusion of
time dummies reduces the magnitude of the trade variables’ effects, they still remain negative
and statistically significant.
The findings also suggest that increasing exports have a larger impact on both the LC
and GC measures than does a rise in import activity for an industry. In both the with- and
without-time-dummy versions of the estimating equations, the coefficient on the export share
variable is always larger (in absolute value) than the import share parameter. Moreover,
this difference always is statistically significant as can be seen in the results for equations 3a
and 4a. The gap between the export share parameter (-.394) from equation 3a and the import
share parameter (-.163) from equation 4a is significant at the 1% level given the standard
deviations of the two parameter estimates. Comparisons of equations 3b versus 4b, 5a versus
6a, and 5b versus 6b yield the same conclusion.
As a check on the sensitivity of the above results to the inclusion in the data set of a
few outlying industries, the data set was modified. Equations 3a-4b were estimated excluding
from the sample the two industries with the highest LC values and the two industries with the
lowest LC values. Similarly, equations 5a-6b were estimated excluding from the sample the
industries with the two highest and two lowest GC values. There are no material differences
in these results and those of Table 1. If anything, the estimated magnitudes of the trade
variables’ impact are slightly higher (results not presented due to space constraints).
Table 1: Summary of Key Regression Analysis Results

Equation  Dep. Vbl.  Time Dummies  (X/S)jt*         (M/S)jt*         F-Stat** (p-value)  Adj. R2
3a        LCjt       No            -.394 (-12.44)   --               1036.7 (.000)       .97
3b        LCjt       Yes           -.274 (-6.31)    --               1197.1 (.000)       .98
4a        LCjt       No            --               -.163 (-11.13)   966.9 (.000)        .97
4b        LCjt       Yes           --               -.053 (-2.94)    1078.4 (.000)       .98
5a        GCjt       No            -.543 (-17.25)   --               377.4 (.000)        .93
5b        GCjt       Yes           -.371 (-8.81)    --               429.6 (.000)        .95
6a        GCjt       No            --               -.270 (-20.23)   434.3 (.000)        .94
6b        GCjt       Yes           --               -.182 (-11.22)   494.9 (.000)        .95

*t-statistics are given in parentheses; **F-stat is the F-statistic for the hypothesis test that
all industry-specific intercepts are equal. For equations excluding the time dummies the
critical value is for F(20,524) and for equations including time dummies it is for F(20,499).
V. CONCLUSION
This paper extends previous work on the determinants of U.S. manufacturing regional
specialization and spatial concentration by focusing upon the impact of changes in trade
activity upon these patterns. Constructed annual industry measures of regional specialization
and spatial concentration based on state-level earnings data, correctly matched with national
industry trade flow data, allowed for more accurate testing of trade’s impact than in prior
cross-sectional work. The findings provide little support for Porter’s conjecture that increased
international economic integration will stimulate additional regional clustering of
manufacturing sectors. Gini coefficient measures of spatial concentration for industries (GC)
are negatively, not positively, related to rising import and export levels. Nor is much support
provided for Krugman’s simple two-region model in which rising economic integration leads
to increasing regional specialization of manufacturing activity. Since 1969 in the U.S. there
have been ongoing improvements in the transportation infrastructure. Even more dramatic
has been the decline in communication and information exchange costs across regions. The
ease of economic integration across U.S. states since 1969 must have risen. Yet this paper
finds that rising trade activity, a measure of economic integration, is associated with falling
not rising regional specialization of manufacturing sectors as measured by Hoover’s
coefficient of localization (LC).
It is possible that the spatial dispersion of manufacturing activity associated with rising
trade is due to significant spatial variation in the growth of U.S. trade activity. The relatively
rapid growth of the Pacific Rim region over the sample period, for example, would favor West
Coast firms over East Coast firms, helping to disperse manufacturing activity beyond its
Midwest and Northeast core. Similar arguments could be made for Mexico and the U.S.
Southwest. This study’s primary contribution is to show that by standard measures of spatial
concentration (GC) or regional specialization (LC), rising trade activity in the U.S. since 1969
is associated with reductions in both the spatial concentration and specialization of U.S.
manufacturing sectors. Hopefully, these results will be reflected in future theoretical efforts at
modeling the impact of trade activity upon regional manufacturing patterns.
REFERENCES
THRESHOLD EFFECTS BETWEEN GERMAN
INFLATION AND PRODUCTIVITY GROWTH
ABSTRACT
I. INTRODUCTION
One of the widespread macroeconomic policy success stories for industrialized nations
in the 1980’s and 1990’s was the reduction of inflation rates from double to low single digits.
There has remained in some quarters, however, the call for further inflation reductions,
perhaps even to zero, with the expectation of improved economic growth and productivity as a
consequence of even further reductions in inflation. This paper examines for Germany
whether there is empirical support for the position that reducing already low inflation rates
further will lead to increased labor productivity growth and thereby increased overall
economic growth. Germany has had one of the lowest average rates of inflation in the world
over the past 40 years, so it is a prime candidate for investigating the productivity benefits, if any,
from further inflation reductions.
A number of potential channels for inflation to reduce productivity growth have been
identified (Feldstein, 1982; Fischer, 1986; Briault, 1995; Thornton, 1996). While it is quite
plausible that adverse inflationary effects would arise through these channels at high rates of
inflation, it is much less certain that inflation in the low single digits adversely impacts
productivity growth in a meaningful way. Previous empirical work on this issue imposed
linear relationships in the testing for inflation-productivity linkages (Smyth, 1995; Freeman &
Yerger, 1997; Freeman & Yerger, 2000). After accounting for the impact of business cycle
effects upon measured productivity, these papers fail to find an adverse impact from inflation
upon measured productivity growth for Germany.
This paper extends the previous literature by testing for the existence of ‘Threshold
Effects’ from inflation upon measured productivity growth. If the impact of inflation upon
productivity growth varies depending upon the initial level of inflation itself, then it is
possible that the absence of findings in the prior literature was due to estimation techniques
that forced the same relationship across the ‘high’ versus ‘low’ inflation regimes.
II. LITERATURE REVIEW
Among the earliest relevant studies are those of Clark (1982) and Ram (1984), who
found that inflation negatively Granger-caused productivity growth in the U.S., while Jarret
and Selody (1982) found the same effect for Canada. Follow-up studies appeared to confirm
these earlier findings. Simios and Triantis (1988) analyzed U.S. data through 1986 while
Saunders and Biswas (1990) examined U.K. manufacturing productivity through 1985, with
both studies finding a significant negative impact from inflation upon productivity growth.
Smyth (1995a, 1995b) analyzed both German and U.S. multifactor productivity data and
found a significant negative effect from contemporaneous inflation.
The conclusions to be drawn from these studies are limited, however, by several
factors. First, these studies for the most part include only the run-up of inflation through the
early 1980’s and not the subsequent decline. Second, most of these papers failed to control
for potentially relevant business cycle effects. Lastly, these studies did not test for stationarity
of the data, a necessary condition for their causality tests to be valid. Several more recent
papers have addressed some or all of the above issues, and their findings call into question the
conclusions of the prior literature. Among the studies failing to find evidence of a negative
impact from inflation upon productivity growth are Sbordone and Kuttner (1994), Cameron,
et al (1996), and Freeman and Yerger (1997, 1998, 2000).
While these more recent time series studies fail to find any robust relationship between
inflation and productivity growth, it remains possible that such a relationship exists but only
after inflation increases past a certain threshold. A number of cross-sectional based analyses
support this conjecture, but their estimated threshold inflation values vary widely, from 2.5%
to more than 24%, across these studies. See Fischer (1993), Bruno (1995), Sarel (1996), Ghosh
and Phillips (1998), Christoffersen and Doyle (1998), Bruno and Easterly (1998), and Khan
and Senhadji (2000) for technical details. These cross-sectional results suggest at least two
conclusions for the present study: inflationary threshold effects are likely to exist for a
number of nations; and, inflationary threshold critical values probably vary across nations.
Given the many differences across nations in their tax systems, labor market rigidities, and
inflation histories, the second conclusion is not surprising. Rather than impose the same
inflationary threshold value across multiple nations as in a panel setting, this paper returns to a
time series analysis of a single nation, Germany, but modifies the estimation technique to
allow for the existence of threshold effects. While the results here cannot be generalized to
other nations, the approach used could be replicated on other industrialized nations and the
consistency of the findings compared.
III. DATA
We use quarterly data for Germany from 1962 Q1 to 1998 Q4. This end date permits
restricting the German data to just the former West Germany, eliminating a structural break in
the data unrelated to the inflation regime itself. All variables are expressed as annual growth
rates on a year-over-year quarterly basis. Inflation is the CPI growth rate; productivity growth
is the growth rate of real output per worker in the manufacturing and mining sector. To control for
potential spurious correlation between inflation and business cycle effects, the model also
includes the growth rate of Germany’s industrial output index. All growth rate variables were
tested for stationarity using the Augmented Dickey-Fuller test statistic with a constant term
included in the estimating equation. As seen in Table 1, the null hypothesis of non-stationarity
is strongly rejected for each of the three variables, so the estimation techniques utilized in this
paper are valid.
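The stationarity check can be sketched with a bare-bones Dickey-Fuller regression. Unlike the paper's augmented test, this version omits the lagged-difference terms; it is applied here to a simulated stationary AR(1) series rather than the German data:

```python
import numpy as np

def dickey_fuller_t(y):
    """t-statistic on the lagged level in: diff(y_t) = c + gamma * y_{t-1} + e.

    A strongly negative statistic is evidence against a unit root.
    """
    dy = np.diff(y)
    x = y[:-1]
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)          # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)           # OLS covariance matrix
    return beta[1] / np.sqrt(cov[1, 1])

# Simulated stationary AR(1) series (persistence 0.5).
rng = np.random.default_rng(2)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.5 * y[t - 1] + rng.normal()

t_stat = dickey_fuller_t(y)   # deeply negative for a stationary series
```

In practice one would add lagged differences (the "augmented" terms) and compare against Dickey-Fuller critical values, which are more negative than standard t critical values.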
The general form of the threshold model used in this paper is given in equation one
below. The model includes industrial output growth as an explanatory variable in order to
account for the potential cyclicality of measured productivity growth with the business cycle.
Failing to include a control for business cycle effects may lead to findings of spurious
correlation between inflation and productivity growth (Sbordone & Kuttner, 1994; Freeman &
Yerger, 2000).
In the model, the parameter estimates on all variables can differ depending upon
whether the observation comes from the ‘low’ or the ‘high’ inflation regime. Observations
are assigned into the low (high) inflation regime if the value of their inflation threshold
variable is below (above) the critical switching value used to divide the sample into two
inflation regimes. If in Germany the adverse effects of inflation upon productivity growth are
notably more pronounced in the upper end of the inflation range Germany experienced, then
the threshold model should find a more deleterious impact from inflation in the high inflation
regime.
(1) prodt = A0(L)prodt-1 + B0(L)Xt + {A1(L)prodt-1 + B1(L)Xt}*DUM(thresh > critval) + et
A0(L), A1(L), B0(L), and B1(L) are lag polynomials; prodt is the productivity growth rate; Xt
is the vector containing the inflation and industrial output growth measures; thresh is the
threshold variable used to sort inflation regimes; critval is the selected critical value against
which thresh is compared; and DUM(thresh > critval) = 1 if thresh > critval and 0 otherwise.
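Equation (1) can be sketched as a dummy-interaction regression. The snippet drops the autoregressive productivity lags for brevity and uses synthetic quarterly data (n = 148, matching 1962 Q1-1998 Q4); the -0.3 high-regime inflation effect in the data-generating process is an assumption that merely mimics the magnitude reported later in the text:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 148                                   # quarters, 1962 Q1 - 1998 Q4
infl = rng.uniform(0.0, 7.0, n)           # inflation, %
out = rng.normal(2.0, 1.0, n)             # industrial output growth, %
high = (infl > 2.95).astype(float)        # DUM(thresh > critval)

# Hypothetical DGP: inflation hurts productivity only in the high regime.
prod = 2.0 + 0.3 * out - 0.3 * infl * high + rng.normal(0.0, 0.2, n)

# Low-regime coefficients apply everywhere; interactions add regime shifts.
X = np.column_stack([np.ones(n), infl, out, high, high * infl, high * out])
beta, *_ = np.linalg.lstsq(X, prod, rcond=None)

low_infl_effect = beta[1]                 # inflation effect, low regime
high_infl_effect = beta[1] + beta[4]      # inflation effect, high regime
```

The regime-specific effects are read off exactly as in the paper: the low-regime slope directly, and the high-regime slope as the base coefficient plus its interaction shift.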
Standard inference techniques are not appropriate here since under the null hypothesis
of no threshold effect the variables thresh and critval are not identified. Consequently, we
utilize the test developed by Andrews (1993) for tests of parameter instability when the
change point is unknown, or known to lie in a restricted interval. His sup-LR test statistic is
the largest maximum likelihood ratio statistic found over the tested range of all possible
threshold variables and threshold values. In his paper critical values are given for the
rejection of the linearity null hypothesis in favor of the alternative hypothesis that threshold
effects exist. These critical values are used for the linearity tests on equation (1) in this paper.
Since economic theory does not support any particular lag length structure a priori, we
proceeded as follows. The lag length on all variables was varied from one to six lags. Also,
consistent with standard practices, one to six lags of inflation was tested as the threshold
variable. For each inflation lag threshold variable, the threshold critical value was varied by
initially setting the threshold critical value at 1.60% and then raising the threshold critical
value in 0.05% steps until the final iteration utilized a threshold critical value of 5.15%.
These boundaries keep at least 15% of the observations in each inflationary regime.
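The search can be sketched as follows. It sweeps only the threshold critical value over the stated 1.60%-5.15% range in 0.05 steps, not the lag lengths or alternative threshold lags the paper also varies, and it runs on synthetic data with an assumed true break at 3.0%:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 148
infl = rng.uniform(0.0, 7.0, n)
# Hypothetical DGP with a genuine threshold at 3.0% inflation.
prod = 2.0 - 0.3 * infl * (infl > 3.0) + rng.normal(0.0, 0.2, n)

def ssr(X, y):
    """Sum of squared residuals from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(((y - X @ beta) ** 2).sum())

ssr0 = ssr(np.column_stack([np.ones(n), infl]), prod)  # linear (no-threshold) model

best_lr, best_c = -np.inf, None
for c in np.arange(1.60, 5.1501, 0.05):        # candidate critical values
    high = (infl > c).astype(float)
    X = np.column_stack([np.ones(n), infl, high, high * infl])
    lr = n * np.log(ssr0 / ssr(X, prod))       # LR statistic for this split
    if lr > best_lr:
        best_lr, best_c = lr, c
```

The largest statistic over the grid is the sup-LR value that is then compared against the Andrews (1993) critical values, since standard chi-squared inference is invalid when the threshold is not identified under the null.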
The 3 lag specification of equation (1) with 1 lag of inflation as the threshold variable,
and 2.95% as the threshold critical value, generated the largest maximum likelihood ratio test
statistic over the range of tested models. This is the value needed to compare against the
critical values provided by Andrews (1993). As seen in Table 2, the linearity null hypothesis
for equation (1) is strongly rejected as the max-LR test statistic of 31.55 is well above even
the 1% critical value of 25.75. The findings imply a threshold critical value of 2.95% as the
switching point between the low and high inflation regimes.
Model’s
Estimated Sup-LR
Sup-LR Critical Values
Test Statistic 10% 5% 1%
31.5 18.08 20.35 25.75
In the low inflation regime, the inflation coefficients sum to a positive 0.060, but the
sum is not statistically significant (p-value = 0.79). In the high inflation regime, however, the
impact of inflation upon productivity growth is both negative and statistically significant with
a parameter value of -.297 and p-value of 0.03. These results contradict previous findings of
no impact from inflation upon German productivity growth in studies that imposed a constant
linear relationship between inflation and productivity growth. Apparently, the prior findings
of no adverse effects from inflation were due to the mixing of the low inflation and high
inflation regimes’ observations in the same regression estimation.
V. CONCLUSION
We do not argue that an inflation rate of 2.95% strictly divides the German data into
low versus high inflationary regimes across which the impact of inflation upon productivity
growth differs as a consequence of some abrupt regime change. Instead, we interpret these
results as being broadly consistent with negative effects from inflation upon productivity
growth emerging for Germany once the inflation rate moves into the upper one-half of the
inflation range experienced by Germany over this sample period. The threshold switching
model’s abrupt regime change approach likely is approximating less abrupt changes in the
underlying inflation-productivity relationship. We check the robustness of Table 2’s findings
by analyzing the results for all other inflation threshold critical values between 1.60% and
5.15% in 0.05% steps (not reported here due to space constraints). In the range of 2.0% to
4.0% threshold values, inflation in the ‘high inflation’ regime consistently has a statistically
significant negative impact upon productivity growth, but no adverse impact is found when
the inflation threshold is set below 2.0%.
These findings support those who argue for low inflation rates as a means to aid
productivity growth. In particular, they are consistent with the view that the German Central
Bank’s strong commitment to low inflation in the post World War II era contributed to
Germany’s strong record of productivity growth over this time. The finding suggests that if
the European Central Bank ultimately is as successful as was Germany’s Central Bank at
maintaining low inflation, then productivity growth across the entire Euro zone will be
enhanced. At the same time, however, this paper’s findings do not support a policy goal of
zero inflation as has been called for by some. Once inflation reaches the low 2% range or
below, this paper finds no evidence that further inflation reductions would aid productivity
growth.
REFERENCES
Andrews, Donald. “Tests for Parameter Instability and Structural Change With Unknown
Change Point.” Econometrica, 61, 4, 1993, 821-856.
Briault, Clive. “The Costs of Inflation.” Bank of England Quarterly Bulletin, February 1995,
33-45.
Bruno, Michael. “Does Inflation Really Lower Growth?” Finance and Development, Sept.
1995, 35-38.
Bruno, Michael and Easterly, William. “Inflation Crises and Long-Run Growth.” Journal of
Monetary Economics, 41, Feb. 1998, 3-26.
Cameron, Norman, Hum, Derek, and Simpson, Wayne. “Stylized Facts and Stylized Illusions:
Inflation and Productivity Revisited.” Canadian Journal of Economics, 29, 1, Feb.
1996, 152-62.
Christoffersen, Peter and Doyle, Peter. “From Inflation to Growth: Eight Years of Transition.”
IMF Working Paper 98/99, International Monetary Fund, 1998.
Clark, Peter. “Inflation and the Productivity Decline.” American Economic Review, 72, 2,
May 1982, 149-54.
Feldstein, Martin. “Inflation, Tax Rules, and Investment: Some Econometric Evidence.”
Econometrica, 50, 1982, 825-62.
Fischer, Stanley. Indexing, Inflation, and Economic Policy. Cambridge, MA: MIT Press,
1986.
Fischer, Stanley. “The Role of Macroeconomic Factors in Growth.” Journal of Monetary
Economics, 32, Dec. 1993, 485-512.
Freeman, Donald and Yerger, David. “Inflation and Total Factor Productivity in Germany: A
Response to Smyth.” Weltwirtschaftliches Archiv, 133, 1, 1997, 158-63.
Freeman, Donald and Yerger, David. “Inflation and Multifactor Productivity Growth: A
Response to Smyth.” Applied Economics Letters, 5, 1998, 271-74.
Freeman, Donald and Yerger, David. “Does Inflation Lower Productivity? Time Series
Evidence on the Impact of Inflation on Labor Productivity in 12 OECD Nations.”
Atlantic Economic Journal, 28, 3, 2000, 315-332.
Ghosh, Atish and Phillips, Steven. “Warning: Inflation May Be Harmful to Your Growth.”
IMF Staff Papers, International Monetary Fund, 45, 4, 1998, 672-710.
Jarret, Peter and Selody, Jack. “The Productivity-Inflation Nexus in Canada.” The Review of
Economics and Statistics, 64, 3, August 1982, 361-67.
Khan, Mohsin and Senhadji, Abdelhak. “Threshold Effects in the Relationship Between
Inflation and Growth.” IMF Working Paper 00/110, 2000.
Potter, Simon. “A Nonlinear Approach to U.S. GNP.” Journal of Applied Econometrics, 10,
1995, 109-125.
Ram, Rati. “Causal Ordering Across Inflation and Productivity Growth in the Postwar United
States.” The Review of Economics and Statistics, 66, 1986, 472-77.
Sarel, Michael. “Nonlinear Effects of Inflation on Economic Growth.” IMF Staff Papers,
International Monetary Fund, 43, March 1996, 199-215.
Saunders, Peter and Biswas, Basudeb. “An Empirical Note on the Relationship Between
Inflation and Productivity in the United Kingdom.” British Review of Economic
Issues, 12, 8, October 1990, 67-77.
Sbordone, Argia and Kuttner, Kenneth. “Does Inflation Reduce Productivity?” Federal
Reserve Bank of Chicago Economic Perspectives, Nov/Dec 1994, 2-14.
Simios, Evangelos and Triantis, John. “A Note on Productivity, Inflation, and Causality.”
Rivista Internazionale di Scienze Economiche a Commerciali, 35, 9, 1988, 839-46.
Smyth, David. “Inflation and Total Factor Productivity in Germany.” Weltwirtschaftliches
Archiv, 131, 2, 1995a, 403-05.
Smyth, David. “The Supply Side Effects of Inflation in the United States: Evidence from
Multifactor Productivity.” Applied Economics Letters, 2, 1995b, 482-83.
Thornton, Daniel. “The Costs and Benefits of Price Stability: An Assessment of Howitt’s
Rule.” Federal Reserve Bank of St. Louis Review, March/April 1996, 23-28.
Tsay, Ruey. “Testing and Modeling Threshold Autoregressive Processes.” Journal of the
American Statistical Association, 84, 405, 1989, 231-240.
TRADE AND GROWTH SINCE THE NINETIES:
THE INTERNATIONAL EXPERIENCE
ABSTRACT
This paper examines the effect of exports, export structure, and export concentration/
diversification on growth across countries (developing, developed, and all) in 1992 and
2002, with the hypothesis that different dimensions of trade, including commodity
composition and country concentration, have a bearing on trade performance and growth.
The study finds that primary exports, the concentration index, and the diversification index
negatively and significantly affected economic growth, while manufactured exports affected
it positively and significantly.
I. INTRODUCTION
Trade liberalisation has been examined in terms of three indicators, viz. (i) the exports to
gross domestic product (X-GDP) ratio, (ii) the imports to GDP (M-GDP) ratio, and (iii) the
(X+M), i.e. trade, to GDP (T-GDP) ratio. Export structure has been examined in terms of (i)
percentage share of primary exports in total exports (XP/XT) and (ii) percentage share of
manufactured exports in total exports (XM/XT). To examine commodity diversification and
market concentration, diversification (DIx) and market concentration indexes (CIx) have been
used. The diversification index reveals the extent of difference between the structure of trade
of a country and the world average; an index value closer to one indicates a bigger difference
from the world average. The index is calculated by measuring the absolute deviation of the
country's commodity shares from the world structure as follows:

S_j = ( Σ_i | h_ij − h_i | ) / 2

where h_ij = share of commodity i in total exports of country j, and
h_i = share of commodity i in total world exports.
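As a sketch, the diversification index defined above can be computed directly from commodity shares (the shares below are illustrative, not actual trade data):

```python
def diversification_index(country_shares, world_shares):
    """S_j = sum_i |h_ij - h_i| / 2, where each list of shares sums to 1."""
    return sum(abs(h_ij - h_i)
               for h_ij, h_i in zip(country_shares, world_shares)) / 2

# A country whose export structure matches the world average scores 0;
# greater divergence from the world structure pushes the index toward 1.
world = [0.5, 0.3, 0.2]
identical = diversification_index([0.5, 0.3, 0.2], world)   # 0.0
divergent = diversification_index([1.0, 0.0, 0.0], world)   # 0.5
```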
Further, the Herfindahl-Hirschman index, a measure of market concentration, has been
computed as follows:

H_j = [ Σ_{i=1}^{239} (x_i / X)^2 − 1/239 ] / (1 − 1/239)

where H_j = country index, with value ranging from 0 to 1 (maximum concentration),
x_i = value of exports of product i, X = Σ_{i=1}^{239} x_i, and 239 = the number of products
(at the three-digit level of SITC, Revision 2).
The count of products exported includes only those products with value greater than $100,000
or more than 0.3 percent of the country's total exports. Data for all these variables were available
for 61 countries (29 less developed and 32 developed countries) for 1992 and for 97 countries
(51 less developed and 46 developed countries) for 2002, from various issues of World
Development Report, World Development Indicators and World Tables published by World
Bank and Statistical Annual Yearbook and UNCTAD Handbook of Statistics published by
U.N.
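Following the normalization given in the text, the concentration index can be sketched as follows (the export values are illustrative):

```python
def concentration_index(export_values, n_products=239):
    """Normalized Herfindahl-Hirschman index as defined in the text:
    H_j = (sum_i (x_i / X)^2 - 1/n) / (1 - 1/n), ranging from 0 to 1."""
    total = sum(export_values)
    hhi = sum((x / total) ** 2 for x in export_values)
    return (hhi - 1.0 / n_products) / (1.0 - 1.0 / n_products)

# A country exporting a single product scores 1 (maximum concentration);
# exports spread evenly over all 239 products score 0.
single_product = concentration_index([100.0])        # 1.0
evenly_spread = concentration_index([1.0] * 239)     # 0.0
```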
In order to study the effect of these variables on economic growth, measured in terms
of gross national income/product per capita (PCY), simple regression analysis and step-wise
(step-up) multiple linear regression analysis were carried out for all countries, less developed
countries, and developed countries as follows:

PCY = f (X-GDP, M-GDP, T-GDP, XP/XT, XM/XT, DIx, CIx)

It is hypothesised that

∂(PCY)/∂(X-GDP), ∂(PCY)/∂(M-GDP), ∂(PCY)/∂(T-GDP) and ∂(PCY)/∂(XM/XT) > 0,

while ∂(PCY)/∂(XP/XT), ∂(PCY)/∂(DIx) and ∂(PCY)/∂(CIx) < 0.
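A minimal sketch of such a cross-country regression, on synthetic data (the study itself uses World Bank and UNCTAD data for 61 countries in 1992 and 97 in 2002; the coefficients below are invented for the illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 97
XM_XT = rng.uniform(0.1, 0.9, n)     # manufactured-export share of exports
DI = rng.uniform(0.2, 0.9, n)        # diversification index
# Synthetic per-capita income with the hypothesised signs built in.
PCY = 5000 + 12000 * XM_XT - 6000 * DI + rng.normal(0, 1000, n)

# Least-squares fit with an intercept.
X = np.column_stack([np.ones(n), XM_XT, DI])
coefs, *_ = np.linalg.lstsq(X, PCY, rcond=None)
# coefs[1] recovers a positive effect of manufactured exports and
# coefs[2] a negative effect of the diversification index, matching
# the signs hypothesised in the text.
```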
The simple regression results given in Table I show that in 1992, the variables X-GDP,
T-GDP, XP/XT and CIx affected economic growth positively but non-significantly (T-GDP in
isolation as well as in combination with XP/XT, XM/XT, CIx and X-GDP; X-GDP, XP/XT
and CIx turned negative but remained non-significant in combination with other variables).
Three variables, M-GDP, XM/XT and DIx, affected economic growth negatively but non-
significantly (XM/XT in isolation as well as in any combination with XP/XT, T-GDP, X-GDP
and CIx). Of the seven variables considered, only T-GDP affected economic growth positively
and only XM/XT negatively. Overall, these variables had little impact on economic growth
during the nineties.
In 2002, the variables X-GDP, M-GDP, T-GDP and CIx, in isolation as well as in
combination, affected economic growth positively but non-significantly. The variable XM/XT
in isolation affected economic growth positively and significantly, while XP/XT and DIx
affected it negatively and significantly, indicating an increased role of manufactured exports
and a reduced role of primary exports and the diversification index in the economic growth
of developing countries.
In 1992, about 40 percent of countries, and in 2002, 30 percent of countries, had a lower
diversification index than the world average, indicating a trade structure closer to the world
average for these countries. The market concentration index was lower for 50 percent of the
countries in 1992 and 60 percent of the countries in 2002, indicating diversification of the
markets of developed countries.
The regression results given in Table II show that in 1992 as well as in 2002, the variables
M-GDP and CIx affected economic growth negatively but non-significantly, XP/XT and DIx
affected it negatively and significantly (DIx in 2002 only), while XM/XT affected it positively
and significantly. The variables X-GDP and T-GDP affected it negatively and non-
significantly in 1992, but positively, though non-significantly, in 2002. This indicates a limited
role of trade-related variables in affecting growth. Primary exports and the diversification
index contributed towards reducing economic growth, while manufactured exports helped
promote economic growth in developed countries as well.
For all countries, the regression results given in Table III show that in 1992, X-GDP and
M-GDP affected economic growth negatively and non-significantly; in 2002, X-GDP affected
it positively and significantly, while M-GDP and T-GDP affected it positively and non-
significantly (T-GDP in 1992 also). Thus, the X-GDP ratio helped promote the growth of all
countries in 2002. The variables XP/XT, CIx and DIx negatively and significantly affected
economic growth in 1992 as well as in 2002. DIx and CIx, in isolation as well as in
combination with other variables (with X-GDP and XP/XT in 1992 and with X-GDP and
XM/XT in 2002), negatively and significantly affected the economic growth of all countries.
The study suggests that industrial and trade policy should aim at promoting manufactured
exports even when comparative advantage might lie in primary goods. Developing countries
should shed their dependence on primary exports through industrialisation and lay stress on
technology-intensive industries (such as electronics), as these have higher potential for
positive externalities (technology and knowledge spillovers) coupled with higher productivity
levels (due to efficiency gains, economies of scale, etc.). Countries should strive to bring their
trade structure closer to the world structure (especially the developing countries) and to
diversify markets (especially the developed countries) for better gains from trade.
REFERENCES
Greenaway, D., W. Morgan and P. Wright. "Exports, Export Composition and Growth."
Journal of International Trade and Economic Development, 8 (1), 1999, 41-51.
Krueger, A.O. "Trade Policy as an Input to Development." American Economic Review, 70,
1980, 288-292.
Wacziarg, Romain. Trade Competition and Market Size. Mimeo, Harvard University, Nov. 1997.
World Bank. World Development Report,1994.
World Bank, World Development Indicators, 2004.
UN, UNCTAD Handbook of Statistics, 2004.
INVESTOR RELATIONS CHALLENGES WITHIN
THE LIFE SCIENCES CATEGORY
ABSTRACT
This paper addresses unique challenges that investor relations (IR) practitioners face in the
life sciences categories of biotechnology and pharmaceuticals. Within this business segment,
companies are overseen by both the Securities and Exchange Commission and the Food and
Drug Administration. The paper goes on to highlight specific IR changes resulting from such
government regulation.
to certain groups in private. Corporate executives must remain vigilant in making sure they
follow these new regulations in order to spare themselves enforcement action from the SEC.
Sarbanes-Oxley 2002
The second important event contributing to the current state of investor relations is the
Sarbanes-Oxley Act of 2002. This Act (called SOX by industry professionals) is intended to
create an independent regulatory structure for the accounting industry, set higher standards
for corporate governance, increase the independence of securities analysts, improve the
transparency of financial reporting, and provide new civil and criminal remedies for violations
of the federal securities laws (http://www.ffhsj.com/secreg/pdf/sc020730-2.pdf). This Act
came as a response to the Enron and WorldCom scandals and other high profile fraud cases
that truly shook the financial community to its core. In an effort to curb corporate
misconduct, stricter regulations have been placed on the chief executive officers of public
companies. They are now required to certify important documents such as the Form 10-Q and
Form 10-K, and are held accountable for the accuracy of the information. These changes are
part of the sweeping reforms dedicated to ensuring corporate accountability.
trials, followed by a series of phases of clinical trials that eventually result in human testing.
The final stage is FDA approval of the drug for sale to the public.
In the past, investors were willing to take risks on small biotechnology companies
with the assumption that the healthcare industry is a target-rich environment with many
opportunities for new breakthrough products that could become quickly and widely accepted.
Many early stage investors based their expectations upon the success model of start-ups
within the high tech industry in recent past decades. There have been numerous examples of
hardware, software and Internet-oriented brands rocketing from early stage companies to
global entities in very short time periods, oftentimes with very lucrative ROI’s for their early
investors.
In regard to the chart listed above, a crucial phase for investor relations practitioners
is when clinical trials are occurring. During this phase, new drugs are being tested and
scrutinized by the FDA. A notorious incident that reflects the pitfalls that can occur at this
stage was the situation with ImClone. In late December 2001 the FDA rejected the
company's application for approval of its cancer drug, Erbitux, citing shoddy data and poor
clinical trial research design. This negative news on its own was
a challenge for ImClone’s investor relations team. What happened next exacerbated the
problem when a second federal regulatory agency became involved. In January 2002 the
SEC launched an investigation into insider trading by company founder and CEO, Sam
Waksal (Forbes Magazine, Sept. 2004).
While not always true, it is generally accepted that investors are more likely to invest
in pharmaceutical and biotechnology companies that already have a drug or product in the
last stages of development. This strategy makes it slightly more likely that the investors will
see a return on their investment. Therefore, investor relations practitioners have to convince
their investors and the rest of the financial community that they have viable products in the
pipeline. This process is aided through the use of third party physicians who write research
articles in prestigious medical journals to validate the claims made by these biotechnology
and pharmaceutical companies about their drugs. Additionally, the side effects of certain
medications can either thwart their development or cause them to be taken off the market
even after they have been approved by the FDA. Such has been the case with the highly
publicized problems Merck faced with Vioxx, which was withdrawn in 2004 after being
linked to an increased risk of heart attack. Recently, The New England Journal of Medicine
published an unusual expression of concern regarding the fact that Merck had excised data
on patients suffering heart attacks from a crucial study on Vioxx published in
2000. This incident has implications back to the early days of investor relations when a
primary IR responsibility was to oversee a company’s credibility in the eyes of various public
constituencies (New England Journal of Medicine, Dec. 29, 2005).
Another recent example of less than full disclosure occurred with a study done in 2004
of GlaxoSmithKline's Paxil. New York State Attorney General Eliot Spitzer charged the
company with “repeated and persistent fraud” for concealing problematic issues of
efficacy and safety when children took Paxil for depression (Wall Street Journal, December
30, 2005, page 5).
IV. CONCLUSION
The life science and pharmaceutical industry as a whole has encountered some major
setbacks in public opinion. In January 2006 the FDA indicated that new guidelines for
preliminary phases of drug testing might be considered. These new guidelines could reduce
the amount of mandatory testing done before giving experimental medicines to humans.
Such a reduction of early phase studies could lead to cost savings as well as shortened time
frames relating to arduous testing cycles.
While drug companies should not be responsible for saving everyone afflicted with
disease, they most definitely should be held accountable for full disclosure of clinical trial
data prior to and after their products receive FDA approval. While the biotechnology
category is future-oriented, it is still subject to the founding tenets of IR practitioners. It is
imperative that these companies follow the mandates set forth by the SEC and the FDA so
that they are perceived to be credible by investors in particular and the public in general.
REFERENCES
A. BOOKS
Pincus, Theodore H. Investor Relations: A Strategic Approach. Prentice Hall, 1982.
Marcus, Bruce W., and Wallace, Sherwood Lee. New Dimensions in Investor Relations:
Competing for Capital in the 21st Century. Wiley, 1997.
B. ARTICLES
Sullivan & Cromwell. “Regulation FD – Practical Issues Raised by the SEC’s New Selective
Disclosure Rule”. September 7, 2000.
http://www.ffhsj.com/secreg/pdf/sc020730-2.pdf
http://slate.com/id/2122187
Forbes Magazine. September, 2004, page 37.
New England Journal of Medicine. December 29, 2005, Volume 353, Number 26.
Wall Street Journal. December 30, 2005.
GENETIC ENGINEERING, BIOTECHNOLOGY AND INDIAN
AGRICULTURE: IPR ISSUES IN FOCUS
ABSTRACT
I. INTRODUCTION
Hypothesis: Genetic contamination of seeds of rice by non-Indians will reduce India’s market
share in the international market.
After an in-depth analysis of the issues concerning the impact of IPR provisions on
agriculture, the questionnaire was sent for an initial screening to the National Council of
Agricultural Policy and Research, Pusa Road, New Delhi, and to the Indian Institute of
Science, Bangalore. After the first screening, the questionnaire was checked by Shri
Devendra Sharma, an agricultural expert and journalist in Delhi. In the third and final
stage the questionnaire was finalized by Shri Biswajit Dhar, an international expert on IPR
issues presently working as chief of the WTO division at IIFT, New Delhi. Data were
collected through personal interviews, mail surveys, and telephonic interviews. The sample
consists of the following five segments:
1. Non Governmental Organization and Farmers Organisations
2. Agricultural Scientists
3. Professors and Academicians
4. Seed companies
5. Experts
NGOs/Farmers' Organizations
For NGOs, the respondents were chosen carefully from the directory of NGOs in
India. All the NGOs were chosen from Northern India because rice is grown mainly in this
region. For farmers' organizations, the respondents were chosen from the rice fields of
Karnal and Palwal in the state of Haryana and Pilibhit and Ghaziabad in the state of
U.P. These two states are incidentally among the major rice-producing states of India.
Agricultural Scientists: Under the aegis of the Indian Council of Agricultural Research
(ICAR), the Seed Technology Division of the National Council of Agricultural Policy
Research has been doing research on seed varieties over the years. Most of the respondents
were chosen from this institute. Other respondents were chosen from ICRISAT, Pune, and
from the ICAR campus, Pusa Road, New Delhi.
Professors and Academicians: Professors were chosen from universities and institutes
where research is being done on agricultural issues and international business. Gobind
Ballabh Pant University, Pant Nagar; the Indian Institutes of Management, Bangalore and
Ahmedabad; the Indian Institute of Foreign Trade, New Delhi; RIS, New Delhi; and TERI,
New Delhi were the prominent institutions from which respondents were chosen.
Seed Companies: Respondents were chosen mainly from SeedQuest Yellow Pages, an
international organization that maintains a database of seed companies in India, and from
other organizations in Delhi.
Experts: Respondents were chosen mainly from institutions such as IGIDR, Mumbai and
Gujarat; the Indian Institute of Pulses Research, Kanpur; and the Indian Statistical Research
Institute, New Delhi.
The Sample
Fifty respondents were chosen from each of the groups mentioned above. The entire
data set of 250 respondents was analyzed using the chi-square test. The chi-square test was
used because it verifies the degree of difference among the data collected; as the respondents
come from various parts of the country, no other test seemed better suited. The quantitative
data have been put in tables and the subjective responses have been analyzed thereunder as
per the following legend:
P – Professors F - Farmers / NGOS
E – Experts A -Agricultural scientists
S - Seed Companies
The following issues were raised with the respondents.
The freedom of local Indian farmers to use imported germplasm of rice seeds may be
constrained by breeders' business interests, particularly in the case of export-oriented crops,
because of genetic contamination (Saxena and Dhillon, 2002); exports of India's basmati rice
can be affected significantly. Contract farming will encourage development of seeds along
commercial lines on a large scale (Ghosh, 2003). Genetic contamination may affect many
small farmers, and small exporters in India exporting basmati rice may be affected. In
biotechnology, private sector firms have emerged as technological leaders in a number of
important areas, and agricultural research centers have comparatively become minor players;
this may reduce their ability to provide technological support to very poor farmers in India.
For the country as a whole, increasing reliance on a narrow genetic range of crops can limit
the varieties of rice available for export. Given that both basmati and non-basmati rice is
grown in India, patenting of seeds by non-Indians abroad can adversely affect India's
competitive position in markets abroad (Bagchi, Banerjee and Bhattacharya, 2004). Rice
seeds from harvested crops will be experimented upon and bred with locally adapted varieties
by plant breeders (Rangnekar, 2002); this may enrich the biodiversity of rice. Reduction of
agricultural subsidies by developed countries will provide a competitive advantage to Indian
exporters of basmati rice in the international market. Smaller companies will find it
increasingly difficult to compete because the market for seeds is becoming fickle, and plant
variety protection will scotch a market in second and subsequent generations of both open-
pollinated and hybrid seeds (Gupta and Kumar, 2002). With IPRs in seeds, the aspect of
genetic pollution arises; quality-conscious buyers of rice may turn down export orders from
India on quality grounds. Large-scale commercial farming by plant breeders may tempt them
to acquire larger land holdings, and therefore small farmers producing for export may be hurt
(Rao, 1997).
Testing of hypothesis
Hypothesis: Genetic contamination of seeds of rice by non-Indians will reduce India’s market
share in the international market.
Responses have been grouped in the format given below.
 58    75   1296   17.28
 85   124   1521   12.2
180   124   1521   12.2
123   124   1521   12.2
102   124   1521   12.2
131   124   1521   12.2
147.5
v = (r − 1) × (c − 1) = 4 × 2 = 8
For v = 8, the table value of χ² is 15.507.
The calculated value is much lower than the table value, so the hypothesis stands null and
void: protection of rice seeds will not affect India's market share in the international market.
Similarly, protection of seeds of non-basmati rice by non-Indians will not reduce India's
market share of non-basmati rice in the international market.
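The chi-square test of independence used above can be sketched as follows; the counts here are hypothetical, since the paper's own cell counts are not fully recoverable from the text, and the critical value 15.507 (v = 8, 5% level) is taken from the text:

```python
def chi_square_statistic(observed, expected):
    """Sum over cells of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical response counts across five respondent groups.
observed = [20, 30, 25, 25, 25]
expected = [25, 25, 25, 25, 25]
stat = chi_square_statistic(observed, expected)   # 2.0

# A statistic below the tabulated critical value fails to reject the
# null hypothesis of no association.
rejected = stat > 15.507   # False here
```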
IV. CONCLUSION
Although the research suggests that genetic contamination may not be dangerous in the
near future (Srinivasan, 2004), we still cannot take the situation casually. Organizations like
Monsanto are eyeing the Indian market seriously. Indian research institutes must step up
their research plans to upgrade the quality of seeds. At the same time, the Indian Government
should look closely into the provisions concerning plant breeders' rights, farmers' rights, and
the farmers' privilege. To be on the safe side, farmers' organizations, NGOs, and government
organizations should be fully involved in responding to the post-WTO scenario concerning
Indian agriculture.
REFERENCES
Bagchi, A.K., P. Banerjee, and P.K. Bhattacharya (1984). "Indian Patents and Its Relation to
Technological Development in India: A Preliminary Investigation." Economic and
Political Weekly, 19(7): 287-304.
Cohen, J. I.(1999) “Managing intellectual property - Challenges and responses for
agricultural research institutes” Agricultural Biotechnology and the Poor: Addressing
Research Program Needs and Policy Implications. J. I. Cohen. Wallingford, OXON,
CAB International: 209-217.
Rangnekar, Dwijen (2005). "No Pills for Poor People? Understanding the Disembowelment
of India's Patent Regime." CSGR Working Paper No. 176/05, October.
Ghose, Janak Rana (2003). "The Right to Save Seed." International Development Research
Centre (IDRC), Ottawa, and Gene Campaign, New Delhi.
Gupta, Sanjeev and Shiv Kumar (2002). "Protection of Plant Varieties and Indian
Agriculture." Yojana, December.
Pray, C.E., B. Ramaswami, and T. Kelley, 2001. "The Impact of Economic Reforms on R&D
by the Indian Seed Industry." Food Policy, 26:587-598.
Rao, Niranjan C. (1997). "Plant Variety Protection and Plant Biotechnology Patents: Options
for India." Policy Paper No. 29, UNDP-funded Project LARGE, UNDP, New Delhi.
Rao, C.H.H. and A. Gulati (1994). Indian Agriculture: Emerging Perspectives and Policy
Issues. New Delhi: Indian Council of Agricultural Research, and Washington, D.C.:
International Food Policy Research Institute.
Saxena, S. and Dhillon, B.S. (2002). A Critical Appraisal of the Protection of Plant Varieties
and Farmers' Rights Act 2001, India. NATP Trainers Training, Jan. 2002, Compilation
of Experts' Lecture Notes, NBPGR, New Delhi, p. 9.
Srinivasan, C.S., (2004)."Plant Variety Protection, Innovation and transferability: Some
Empirical Evidence." Review of Agricultural Economics, 28(4): 44520.
RESPONSE OF BUILDING COSTS TO UNEXPECTED CHANGES IN REAL
ECONOMIC ACTIVITY AND RISK
ABSTRACT
The construction industry is a major driving force of the U.S. economy. As such, it is
important to identify major determinants of construction costs and to know how these costs
change over time. This study examines the response of a popular building cost index to
unexpected changes in economic activity and risk.
I. INTRODUCTION
II. DATA
Our analysis examines the response of the growth rate in building costs to unexpected
changes in real economic activity and a measure of corporate or business-related risk. The
sample period is January 1989-August 2005. The Building Cost Index (BCI) is maintained
by Engineering News Record (ENR), a subsidiary of McGraw Hill. The BCI uses the
following when constructing the index: 68.38 hours of skilled labor at the 20-city average of
bricklayers, carpenters and structural ironworkers rates, plus 25 cwt of standard structural
steel shapes at the mill price prior to 1996 and the fabricated 20-city price from 1996, plus
1.128 tons of portland cement at the 20-city price, plus 1,088 board-ft of 2 x 4 lumber at the
20-city price. The BCI is used by contractors around the United States to gauge the state of
construction costs. The growth rate in industrial production (IPGROWTH) is used to
measure changes in real economic activity. The spread between Baa and Aaa corporate bond
rates is used to proxy for corporate or business risk in the economy and is denoted BAA-
AAA (Ewing, 2002). As we focus on growth rates in BCI and IP, the usable sample period is
February 1989-August 2005 for a total of 199 observations. Furthermore, we seasonally
adjusted the BCI index before computing the growth rate, denoted BCGROWTH. Table 1
presents descriptive statistics for the variables used in this study. Growth in construction
costs has exceeded the growth rate of industrial production over the sample period by nearly
9 percent, but is generally more stable.
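The fixed market basket behind the BCI described above can be sketched as follows; the component quantities come from the text, while the prices (and the assumption that lumber is priced per thousand board feet, MBF) are hypothetical:

```python
def bci_basket_cost(labor_rate, steel_per_cwt, cement_per_ton, lumber_per_mbf):
    """Dollar cost of the fixed BCI basket: 68.38 hours of skilled labor,
    25 cwt of structural steel, 1.128 tons of portland cement, and
    1,088 board-ft (1.088 MBF) of 2 x 4 lumber."""
    return (68.38 * labor_rate
            + 25 * steel_per_cwt
            + 1.128 * cement_per_ton
            + 1.088 * lumber_per_mbf)

# An index is typically the basket cost relative to a base period.
base = bci_basket_cost(30.0, 40.0, 90.0, 400.0)
now = bci_basket_cost(33.0, 44.0, 95.0, 420.0)
index_change_pct = 100.0 * (now / base - 1.0)   # positive when costs rise
```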
III. METHODOLOGY
Table 2 presents results from the estimation of the three-equation VAR. The order of
the VAR was chosen to be 3. Intuitively, the VAR is a reduced form model and can be
thought of as predicting the current value of BCGROWTH based on information available at
time t, in this case, past values of each of the variables. The expectation of BCGROWTH is
thus dependent solely on past observations of itself as well as past values of BAA-AAA and
IPGROWTH. Deviations from this expectation are captured by the error term. The results for
the BCGROWTH equation indicate that the current construction cost growth rate depends on
past values of itself (indicating some persistence in cost growth), on the growth in real
economic activity, and on the corporate risk measure.
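A minimal sketch of how such a three-equation VAR(3) can be estimated equation-by-equation via OLS, on synthetic data (in practice a library such as statsmodels would be used; the placeholder series below are not the paper's data):

```python
import numpy as np

def fit_var(data, p=3):
    """data: (T, k) array of observations. Returns intercepts (k,) and a
    coefficient matrix (k, k*p) from equation-by-equation least squares."""
    T, k = data.shape
    # Stack lagged regressors [y_{t-1}, ..., y_{t-p}] plus a constant.
    X = np.column_stack([data[p - j - 1:T - j - 1] for j in range(p)])
    X = np.column_stack([np.ones(T - p), X])
    Y = data[p:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return B[0], B[1:].T

rng = np.random.default_rng(1)
y = rng.normal(size=(199, 3))   # placeholder for the 199 monthly observations
intercepts, coefs = fit_var(y, p=3)
```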
BCGROWTH IPGROWTH BAA-AAA
BCGROWTH(-1) 0.357680 0.095276 -0.000708
[ 4.98660] [ 1.23756] [-0.86781]
Figure 1 presents the generalized impulse response functions generated from the VAR
model. A significant impulse response function provides useful information about future
values of the growth in BCI. The generalized impulse response functions provide
information about the response of construction cost changes (BCGROWTH) to unanticipated
changes in the macroeconomic variables. In particular, unexpected changes in the state
variables constitute "news" and, thus, the generalized impulse response functions show how
long and to what extent BCGROWTH reacts to unanticipated changes in real output
(IPGROWTH) and the corporate risk measure (BAA-AAA). Statistical significance is
determined by the use of confidence intervals representing plus/minus two standard
deviations. At points where the confidence bands do not straddle zero, the impulse response
is considered to be different from zero (Runkle, 1987). The horizon is in months and is
represented on the horizontal axis.
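The mechanics of an impulse response can be sketched by writing the VAR in first-order (companion) form and propagating a one-time shock forward; the coefficient matrix and shock below are illustrative, not the paper's estimates:

```python
import numpy as np

def impulse_responses(companion, shock, horizon=10):
    """Response of the state vector to an initial shock over `horizon` steps."""
    responses, state = [], shock.copy()
    for _ in range(horizon):
        responses.append(state.copy())
        state = companion @ state
    return np.array(responses)

# A stable two-variable VAR(1) for brevity: responses decay geometrically,
# consistent with the short-lived shocks reported above.
A = np.array([[0.36, 0.10],
              [0.05, 0.20]])
irf = impulse_responses(A, np.array([1.0, 0.0]))
```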
CHART 1. GENERALIZED IMPULSE RESPONSE FUNCTIONS
Response to Generalized One S.D. Innovations ± 2 S.E.
[Three panels, each plotted over a horizon of 1 to 10 periods.]
The top panel of Figure 1 indicates that a one standard deviation shock to
BCGROWTH persists for two months. Thus, project managers and cost estimators can rest
assured that growth in the BCI returns to its long-run value (i.e., unconditional mean)
relatively quickly, so that longer-horizon projections of the BCI based on simple time series
models should be fairly accurate.
The bottom panel indicates a lagged BCGROWTH response to a risk shock of about 2
months. The response from an unexpected increase in macroeconomic business risk is
negative and persists for about 2 months. This finding is consistent with there being a decline
in demand for materials and labor when corporations are more likely to default on loans.
Generally speaking, the results presented in this paper are consistent with standard
macroeconomic theory. Unexpected increases in real economic activity place inflationary
pressure on building costs while increases in corporate risk depress the growth rate in
construction costs. The findings of this paper may help project managers and participants in
the construction industry better plan for unexpected changes in the economy. Moreover, the
results suggest that economic shocks have a relatively short-lived, and somewhat lagged,
impact on future changes in building costs.
REFERENCES
Asiedu, Y. and P. Gu. “Product Life Cycle Cost Analysis: State of the Art Review.” International
Journal of Production Research. 36, 1998, 883-908.
Dahlen, P. and G. Bolmsjo. “Life-cycle Cost Analysis of the Labor Factor.” International
Journal of Production Economics. 46-47, 1996, 459-467.
Ewing, B. T. “Macroeconomic News and the Returns of Financial Companies,” Managerial and
Decision Economics. 23, 2002, 439-446.
Hastak, M. and E. Baim. “Risk Factors Affecting Management and Maintenance Cost of
Urban Infrastructure.” Journal of Infrastructure Systems. 7, 2001, 67-76.
Lutkenpohl, H. Introduction to Multiple Time Series Analysis. Berlin: Springer-Verlag, 1991.
Meiarashi, S., I. Nishizak, and T. Kishima. “Life-Cycle Cost of All-Composite Suspension
Bridge.” Journal of Composites for Construction. 6, 2002, 206-214.
Pesaran, M. H. and Y. Shin. "Generalized Impulse Response Analysis in Linear Multivariate
Models." Economics Letters. 58, 1998, 17-29.
Runkle, D. E. “Vector Autoregressions and Reality.” Journal of Business and Economic
Statistics. 5, 1987, 437-442.
United States Department of Transportation, Federal Highway Administration, Office of
Asset Management. Life-Cycle Cost Primer., 2002
Williams, T. P. “Bidding Rations to Predict Highway Project Costs.” Engineering Construction
and Architectural Management. 12, 2005, 38-51.
CHAPTER 8
ENTREPRENEURSHIP/SMALL BUSINESS
AN ANALYSIS OF FUNDING SOURCES FOR ENTREPRENEURSHIP IN THE
BIOTECHNOLOGY INDUSTRY
ABSTRACT
Entrepreneurship and innovation have driven the recent phenomenal growth in the
biotechnology industry. However, due to the very sophisticated technologies and processes
required to accomplish innovations, biotechnology is a very capital-intensive industry, and
sources of human capital and financing are critical components for survival and eventual
success of companies. This paper explores recent secondary data on the global biotechnology
industry, and it also presents data from a primary data collection project on sources of
funding.
I. INTRODUCTION
Yearly US Biotechnology Financing ($M)

                           2004    2003   2002   2001    2000   1999   1998
Initial Public Offering   1,618     448    456    208     685    260    260
Follow-on Financing       2,846   2,825    838  1,695  14,964  3,680    500
Other Sources             8,964   8,306  5,242  3,635   9,987  2,969    787
Venture Capital           3,551   2,826  2,164  2,392   2,773  1,435  1,219
Total                    16,979  14,405  8,699  7,930  32,722  8,769  2,766
II. BACKGROUND
Niosi (2003) surveyed 60 Canadian biotechnology firms and found that “Access to
Capital” was the number one perceived obstacle to growth. Coombs and Deeds (2000)
indicate the importance of the capital markets and venture capital, but focus on alliances and
direct investment from large pharmaceutical companies (big pharma), both domestically and
internationally. Duca and Yucel (2002) report on the importance of venture capital to the
biotech industry, but also indicate the important role of public funding in this area. Dibner
and Howell (2002) looked at the various forms of funding biotechnology firms exploit during
their growth and the importance of venture capital firms versus initial public offerings.
Powell et al. (2002) looked at how close physical access to funding sources drives
geographical location of biotechnology firms. Friedman and Seline (2005) discuss how
private and public funding sources limit collaboration between biotechnology firms. Much
earlier, Paugh and Lafrance (1997) discussed sources of funding for biotechnology
companies. Included in their list were public equity offerings, partnerships with other
companies (both big pharma and biotechnology) and venture capital. The US Department of
Commerce (2003), in its survey of barriers to competitiveness in the biotech industry, found
that access to capital ranked third, behind the regulatory approval process and R&D costs.
More recently, Levinson, chairman and CEO of Genentech Inc., points out the decline
in the profitability of big pharma in recent years and argues for the R&D efficiency of biotech
firms (Ernst and Young 2005). He states, "In 21 of the last 25 years large pharma was the
most profitable industry on Fortune's 'most profitable' industry list." He goes on to point
out that this has not been the case in recent years. He uses the number of new molecular
entities (NMEs) produced by companies relative to R&D spending to highlight the efficiency
of biotech's R&D relative to that of big pharma. In 2003 the biotech industry surpassed big
pharma in the number of NMEs. In 2004 it is estimated that big pharma spent over $50
billion on R&D compared to an estimate of $20 billion for biotech (Ernst and Young 2005).
In 2005 it is estimated that 35 new products with a sales potential of at least $150 million
each will hit the market. Of these, 20 are expected to come out of biotech R&D. This
disparity in R&D efficiency between the big pharma and biotech industries is also
highlighted by Moses et al. (2005).
The stock market also seems to reflect investors' sentiment concerning the disparity.
Figure I illustrates a two-year comparison of the performance of the Pharmaceutical (^DRG),
NASDAQ (^IXIC) and Biotechnology (^BTK) indices. Clearly, the BTK has outperformed
both the NASDAQ and the DRG indices. The indices demonstrate that the capital markets
have been an important source of funding for public biotech firms, and that these firms have
attracted more interest and investment than NASDAQ or DRG firms in recent years.
However, an important funding issue relates to "private" biotech firms, which do not yet
have access to the public capital markets. What are their perceptions of the sources of funding
available to them? This study explores sources of funding for both private and public
companies. The Ernst and Young Global Biotechnology Report (2005) provides a wealth of
information based on industry statistics and the opinions of top industry executives and
experts in biotech investment and funding. However, as expected, its focus is often on
larger companies with greater financial potential. It is understandable why "top-tier"
companies are of greater interest to venture capitalists and fund managers. The current study,
while modest in scope, has the advantage of avoiding this "top-tier" company bias. Rather,
during the primary data collection we sought to obtain a much broader sample base.
Figure I. Two-Year Comparison of Pharmaceutical (^DRG), NASDAQ (^IXIC)
and Biotechnology (^BTK) Indices
Secondary data analysis and in-depth interviews were the two methods utilized to
conduct exploratory research. The main source of secondary data was obtained from Beyond
Borders: The Global Biotechnology Report 2005 (Ernst and Young 2005). The primary data
was obtained by interviewing senior executives of selected private biotechnology companies.
Based on the results of these interviews a structured survey was developed for a descriptive
research design. The structured survey was posted on the university’s web site. An email
distribution list was developed to include executives and scientists from both public and
private companies. Electronic mailings were sent out to the sample with a link to the survey
site. There were a total of 48 responses. Both SPSS and Excel were used for data analysis.
The sample was predominantly US-based with a small percentage from Europe.
Approximately 60 percent of respondents were with private companies and about 40 percent
were public companies.
Data was collected on a number of important issues facing the biotech industry.
However, the main focus of this paper relates to funding issues. Respondents were given a
list of six sources of funding (developed from the exploratory research process), plus an
"Other" category, and were asked to rate the importance of each funding source to their
company. A five-point rating scale was used, with 1 = not important and 5 = extremely
important. The six sources of funding were (1) Venture Capital, (2) Private Investors,
(3) Founder Investment, (4) Revenue, (5) Licensing, and (6) Stock Equity. Figure II ranks
the sources of funding based on three measures of central tendency: mean, median and mode.
Based on Figure II, Revenue is the most important source of funding in the sample, scoring
an average of 4 on the five-point scale. Stock Equity is the second most important source of
funding. Founder Investment is ranked third, Private Investors fourth, Licensing fifth and
Venture Capital sixth.
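As an illustration of this ranking procedure, the central-tendency comparison can be reproduced with a short script. The ratings below are hypothetical examples, not the study's survey responses (the authors used SPSS and Excel):

```python
import statistics

# Hypothetical 1-5 importance ratings for two of the funding sources --
# illustrative numbers only, not the study's raw survey data.
ratings = {
    "Revenue":         [5, 5, 4, 5, 3, 4, 5, 4],
    "Venture Capital": [1, 2, 1, 3, 1, 2, 5, 1],
}

# Summarize each source with the three measures of central tendency
summary = {
    source: {
        "mean": statistics.mean(r),
        "median": statistics.median(r),
        "mode": statistics.mode(r),
    }
    for source, r in ratings.items()
}

# Rank the sources by mean importance, highest first
ranked = sorted(summary, key=lambda s: summary[s]["mean"], reverse=True)
```

With these hypothetical numbers, Revenue ranks first on all three measures, mirroring the pattern reported for the actual sample.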
FIGURE II. COMPARISON OF RELATIVE IMPORTANCE OF SOURCES OF
FUNDING FOR OVERALL SAMPLE BASED ON CENTRAL TENDENCY
[Bar chart comparing the mean, median and mode ratings (0-5 scale) for Revenue, Stock
Equity, Founder Investment, Private Investors, Licensing and Venture Capital.]
Figure III illustrates the relative importance of sources of funding based on the
extremes of the five-point scale. Revenue is again ranked number one because sixty-three
percent of firms rated it as extremely important. The second most important source is
Founder Investment (29.5 percent rated it as extremely important). The third most important
source is Stock Equity (23.3 percent rated it as extremely important). The fourth most
important source is Venture Capital (22.7 percent rated it as extremely important). The fifth
most important source is Private Investors (18.2 percent rated it as extremely important).
Licensing is the lowest ranked (sixth) at 13.6 percent.
FIGURE III. PERCENT OF FIRMS RATING EACH SOURCE AT THE SCALE EXTREMES

                      Revenue  Founder  Stock  Venture  Private  Licensing
Not Important            17.4     47.7   37.2     54.5     38.6       31.8
Extremely Important      63.0     29.5   23.3     22.7     18.2       13.6
Table III summarizes the differences between public and private companies in their
assessment of sources of funding. Revenue has the highest importance rating for both public
(mean = 4.12) and private (mean = 3.89) companies, but there is no statistically significant
difference between them. There are also no statistically significant differences in terms of
rating the importance of Venture Capital, Private Investors or Founder Investment (t values
< 1.64). However, public companies rate Licensing as having greater importance than do
private companies (t = 1.71). Stock Equity is also rated significantly higher by public
companies; this accounts for the most significant difference between the two groups (t = 4.93).
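The comparisons above rest on two-sample t tests of mean ratings. A minimal sketch of such a test, using hypothetical ratings rather than the study's data, is:

```python
import numpy as np

# Hypothetical 1-5 importance ratings for a funding source, split by company
# type. Group sizes roughly follow the paper's 40/60 public/private split of
# 48 respondents; the values themselves are invented for illustration.
rng = np.random.default_rng(1)
public = rng.integers(2, 6, size=19).astype(float)
private = rng.integers(1, 5, size=29).astype(float)

def welch_t(a, b):
    """Two-sample t statistic allowing unequal variances (Welch's test)."""
    va, vb = a.var(ddof=1), b.var(ddof=1)
    return (a.mean() - b.mean()) / np.sqrt(va / len(a) + vb / len(b))

t = welch_t(public, private)
# |t| > 1.64 corresponds to the significance benchmark used in the paper.
```

Whether the pooled-variance or Welch variant was used is not stated in the paper; the Welch form shown here is the more conservative choice when group variances differ.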
Even though Founder Investment is below the benchmark for statistical significance
at the 95.5% level (t = -1.58), it is interesting to note that private companies do view it as an
important source of funding relative to public companies. It is actually ranked as the second
most important source of funding for private companies. Based on anecdotal evidence, it is
believed that serial biotech entrepreneurs are an important source of funding for new firms.
They found companies that hit financial jackpots when those companies go public, and then
take some of their capital gains from the now-public companies to fund new start-up firms.
This is a vital source of capital for start-up companies because VCs and private equity firms
have become more risk-averse and more cautious before committing capital to firms, since
they want a much faster return on their investments. As a consequence, VCs are courting
firms that are much further along the development process, and newer and start-up firms can
no longer attract investment from these outside sources.
Another potential source of funding for many firms without the Amgen or Genentech
potential is the merger and acquisition (M&A) path. In recent years, more technology is being
acquired to supplement the product pipelines within larger firms. So, the prediction by many
is continued increase in M&A activity involving pharmaceutical and biotech companies, and
within the biotech sector itself (including both public and private firms). The recent
acquisition of Abgenix (ABGX) by Amgen (AMGN) is an illustration of what may lie ahead
for smaller biotechnology companies with promising drug pipelines. Amgen, the second
largest biotech company, chose to acquire Abgenix in a $2.2 billion deal. Following
positive phase 3 clinical trials for late-stage colorectal cancer therapies, Amgen concluded
that rather than sharing 50% of the profits with its partner Abgenix, it was better to buy the
company and keep 100% of the profits. The lingering question is whether an increase in
M&A activity is a two-edged sword. Does M&A activity involving entrepreneurial biotech
firms provide financial support while simultaneously reducing technological innovation by
those firms? Is the serial entrepreneur/founder investor needed as a critical contributor and
catalyst for nurturing entrepreneurship and innovation within firms?
REFERENCES
THE IMPACT OF TEAM DESIGN ON TEAM EFFECTIVENESS
ABSTRACT
This paper reports on field research comparing initial and on-going designs of work
groups in two different organizations. Four components including task structure, group
boundaries, norms, and authority, were specifically compared. Teams designed within the
context of these four components were much more effective than work groups designed
without consideration of these components. This field study, therefore, supports earlier
results that design activities have a positive impact on team performance and project
outcome. In addition, appropriate design activities can result in stronger team self-
management.
I. INTRODUCTION
Teams literature describes the positive impact that teams have on productivity,
conditions under which teams are successful and factors which lead to team success. Recent
empirical studies suggest that well-designed teams tend to be more effective than work
groups that do not account for critical design factors. This paper reports the findings of
field research that tested components of team design and their impact on team effectiveness. In
particular, we used the framework established by Hughes, Ginnett and Curphy (2006) and the
initial research by Hackman (1987) and Wageman (2001) which provided the basis for four
critical components.
Another critical skill in determining success of teams and often more critical for new
product development teams is the set of political skills. These skills include the ability to
gain support from key areas outside of the team, to gain acceptance of the team’s output, to
gather required resources which allow the teams to work towards its goal, and to protect the
team against external threats and overcome obstacles in the team’s path. Likewise, internal
political skills are required of team members to confront and overcome conflict issues as they
arise. An agreed upon conflict resolution process is necessary to provide opportunities for
intra-team cooperation and high performance levels (Katz, 1997).
Interpersonal skills comprise the sports analogy of team chemistry and are the
necessary component to allow for synergy. Synergy requires people to willingly and openly
share ideas, comments and criticism. Open communication and concentration on informal
networks separate technically skilled from high-performing teams. Katz (1997) identifies
effective communication as one of those characteristics usually associated with high
performing teams while Katzenbach and Smith (1993) suggest teams encourage open-ended
discussion and active problem solving.
II. FIELD RESEARCH
In the software company, teams were purposefully formed by identifying skill sets
and key roles necessary to create and develop software packages. All four variables
presented by Hughes, Ginnett and Curphy (2006) were purposefully included in the team
design phase. Task structure was the basis for team composition and membership. Top
management’s leadership style was inclusive and managers were concerned with leadership
development for all organizational members. Organizational culture encouraged open
questioning to ensure full understanding of organizational mission and task requirements.
Tasks were ambiguous by nature but unambiguous with respect to expectations and
performance criteria. Members perceived a climate conducive to open interaction and
commitment to both team and organizational members. Empowerment seemed to be an
everyday occurrence. As a result, team members were readily delegated autonomy necessary
to creatively solve problems that occurred and team members accepted leadership
opportunities.
Group boundary issues were openly considered. Skill set requirements were the basis
for determining team membership. Size was carefully controlled to ensure both efficient and
effective project completion. Organizational culture encouraged creative problem-solving,
highly cohesive teams, open communication, and effective conflict resolution. An inordinate
amount of time was spent on interpersonal skills to ensure member interaction would be both
positive and effective. Thus, both technical and interpersonal skills were included in team
design and seemed to have a positive impact on team output and performance.
As Hughes, Ginnett and Curphy (2006) suggest, team norms came from all three of
the possible sources. Team norms can: “(a) be imported from the organization existing
outside the team, (b) be instituted and reinforced by the leader or leaders of the team, or (c)
be developed by the team itself as the situation demands.” Importing norms from the larger
organization was a natural outcome of the culture since team members were highly
committed to the team and the organization; empowerment was an organizational norm; and,
team members readily created norms as required for task performance. Leadership styles of
top managers helped infuse team members with performance norms. All three of these
sources, therefore, were very apparent in our interviews with organizational members. High
cohesiveness levels further support the purposeful design of team norms in this whole process
of team development.
The last component of team design that was found to have a positive impact on team
performance is authority. In the computer software company teams were empowered to
create and make their own work rules with the authority necessary to collect needed
information and establish work processes to complete projects. The nature of client demands
required the flexibility and authority to respond immediately to changing situations. Authority did
not appear to ever become an issue of concern to either team members or top management.
All parties concerned with the projects responded to changing demands in a way that
appeared to be very effective. Performance outcomes and measurements support this
conclusion.
In the computer software company, because the norms of the team were consistent
with the culture of the organization, team members seemed to be loyal and committed to both
their team members and the organization. Authority differences were virtually non-existent
within the context of the workgroup. Leadership styles of management personnel naturally
supported self-managed teams within the context of true empowerment. These styles were
readily emulated by team members and were particularly apparent when the leadership role
rotated across team members.
The leadership style at the engineering design firm was more authoritarian and, as was
true in the computer software company, this style permeated throughout the workgroups.
This was most evident since a top manager typically directed each of the project groups that
he created. Members did not feel they could question management decisions and eventually
lost the willingness to do so. No real empowerment took place here and group members had
no real flexibility to modify task assignments, or how to do them.
Team effectiveness at the computer software company was rated high along a number
of dimensions. Internally, member satisfaction was very high. We would expect this given
the high level of team cohesiveness and autonomy. In addition, client satisfaction was very
high both with the specific team output and ability to interact with and modify task
requirements during the course of a given project. They appreciated the flexibility of these
teams and willingness of group members to undo, redo, or modify existing project modules.
Moreover, company management was delighted with how performance goals and
expectations were fully met and often surpassed. Since client and management feedback
often was provided directly to team members, everybody had the opportunity to gain
psychological rewards on the job.
the context of job performance and monetary rewards to be gained (extrinsic motivation)
rather than the satisfaction derived from intrinsic motivation of task accomplishment. Client
responses suggested work performance was adequate and minimally met expectations.
Likewise, management performance expectations were also met but rarely surpassed. Thus,
while all goals and expectations were met, there was little desire to go beyond a minimally
acceptable performance level. Faced with increasing competition, however, organizational
prospects were actually decreasing. Adequacy of performance suggested significantly lower
outcome results than found in the more effectively designed teams of the computer software
company.
IV. CONCLUSIONS
Results from this field study strongly support Wageman’s (2001) conclusion that
initial team design has a positive impact on team task performance. A lot of effort was put
into team design in the computer software company. Moreover, the specific design was
appropriate for self-managed teams and ultimately for team performance. All four of the
design components described by Hughes, Ginnett and Curphy (2006) were indeed present in
this firm. We could find support for all components being purposely included in team design.
Moreover, there seemed to be a conscious effort to create an appropriate organizational culture
to support team development and performance. This culture actually contained all four of
these components, or at least supported each of these four components. In any case, team
design seemed to lead to higher team performance and effectiveness in the computer software
company.
Group design in the engineering company lacked elements of each of the four
components tied to group effectiveness. For example, while the task structure included an
unambiguous project assignment for group members, there was insufficient autonomy to
complete it. Likewise, technical skills were present in group members to initially complete
assigned projects. Team size also seemed to be appropriate. Interpersonal skills, however,
were clearly lacking, perhaps, purposefully so. Moreover, when task requirements changed
as a result of client concerns, skill sets did not include abilities to respond and solve these
problems. In addition, any interpersonal issues were beyond the scope of members’
interpersonal skills. As noted earlier, group norms were purposefully discouraged in favor of
broader corporate norms. Finally, with respect to authority, members perceived a climate that
prevented leadership within the group or appropriate authority to effectively handle any
modification in the initial project definition.
REFERENCES
STRATEGIES IN STARTING YOUR OWN BUSINESS
ABSTRACT
I. INTRODUCTION
This paper evaluates the initial business idea and the questions that determine whether
the idea is feasible. Important steps when starting a business are to determine whether you
are an entrepreneur and which characteristics will lead to your success; to craft a vision and
a mission statement that clearly communicate the purpose of the business; to market the
business, beginning with an effective name, a business card and a plan for getting customers
to buy; and to develop a financial plan that determines how much money is needed to start
the business and what it will take to run it profitably. A team of professional advisors is
essential in formulating the structure and systems.
The beginning stages of a business are crucial in determining if in fact this business
will succeed. One must also consider if the dedication and desire is strong enough to pursue
owning a business.
Starting a business is a major life event that takes persistence, passion, motivation,
desire, initiative and knowledge. It involves a lot of planning, research and skill. As times
have changed, there is currently no job security. Statistics show that more and more people
are starting their own businesses. Corporate training gives those who have been downsized the
expertise, and owning a business allows them to control their own destiny.
Thinking about the business strategy, direction, what the business is, who the
customers are and why customers will do business with you are questions that need to be
answered. This is indeed the challenge. How can you differentiate your business from the
wide range of competition? How can you create excellent customer relationships and
customer service? The business idea can be formulated with the information presented in this
paper. Each step is part of the overall business plan and will bring the entrepreneur closer to
developing a feasible business. Business is not an exact science and it can change with time.
It is important to understand trends, the economy, society and the needs, which satisfy your
customers. What will make your business a success? It begins with the business idea, the
marketing plan, the financial plan, and the team of professional advisors.
Business flexibility and spontaneity must be realized, as not every idea works and
alternative strategies have to be considered. Having many different ideas on the business and
how to position the business is essential. The ability to think broadly and way outside the
expected norms will be helpful as there are customers in many markets. Talent, creativity,
flexibility and thousands of small ideas can create unlimited business opportunities. If you
pursue your dreams and learn all you can, you will succeed!
This information is for Entrepreneurs who have an idea that needs to be developed
and structured in order to formulate the business. It provides essential information that will
help determine if the business is realistic and feasible. All the information presented here
will inspire and motivate an entrepreneur with a plan and a strategy that can be implemented
to create a successful business.
Most businesses are developed when problems need to be solved. Come up with lots
of ideas and evaluate them, one by one, if one idea does not work then try another. Think
about a product or service that makes life easier and more enjoyable. If you believe in a
product or service, a strategy and a plan is what helps in developing a business. You learn as
you go, as there are no rules. There are many theories that make sense but to implement them
is much more challenging. Business is not an exact science. It is a general big picture of
opportunity that takes planning and a creative twist in order to make the business unique. A
good business idea is something that society finds useful and will support.
The vision and mission statement describes the business in three to four sentences. It
is what you would say to someone when you meet him or her. It is a short message that
distinguishes your business. Think about a message that is clear and describes what the
company does and how it will benefit the customer. The message has to be short, flexible,
and distinctive.
VI. CHOOSING THE RIGHT NAME FOR YOUR BUSINESS
Once you have chosen the business, the next step is the name, which represents the
character and the image of the business. A name reflects the essence of the business, and
what you stand for. It will create recognition of the company and the products and services
that you sell.
The business card reflects who you are and captures the essence of your business
image. Consider a logo, graphics or a photo. Collect business cards and identify what you
like and do not like. A business card can express the benefits you offer at a glance. It should
be persuasive and the message should be clear.
XI. CONCLUSION
Anyone starting a business needs to be prepared and understand the steps needed in
order to succeed. Everyone should pursue his or her passion in life. It takes time, planning
and constant thinking of the business strategy and angles to position a company. When all
risks and rewards are considered and the finances are in place, the business is ready to be
started.
REFERENCES
THE PERILS OF STRATEGIC ALLIANCES:
THE CASE OF PERFORMANCE DIMENSIONS INTERNATIONAL, LLC
ABSTRACT
competencies. As with many new consulting firms, generating viable marketing leads and
developing productive distribution channels became PDi’s greatest challenge (McMurty,
2003). This case describes a critical decision point where PDi has the option of continuing to
market new clients through in-house efforts, such as referrals, cold-calling, mass mailings/e-
mailings, and conference attendance, or through developing strategic alliances with other
firms, trading survey products and services for marketing leads, joint ventures and
distribution channels, as some experts recommend (Bergquist, Betwee & Meuel, 1995;
Forrest, 1990). PDi’s partnership proposition is as follows:
1. Competitive rivalry in providing online survey services has exploded, with customers
regarding online surveys as inexpensive commodities. For example Survey Monkey (the
WalMart of the Web) offers a reliable online survey for about $50.00 per administration
period.
2. Major clients now want one-stop shopping, or they will go and find someone else who
can offer the complete survey process package. Segmenting out front-end survey
development and back-end action planning from administration, data collection, and basic
reporting is no longer competitive. Customers can afford to be more demanding and
selective.
3. Old school differentiators would be particularly open to adding products and features
to their offerings to make themselves truly distinctive, and worth the additional cost in the
eyes of their clients. A set of nationally benchmarked surveys would increase customer
loyalty by raising the switching costs – customers would lose their ability to use the
questions, and compare their performance to the normative data if they switched survey
hosting services.
4. The longer customers used the benchmarks, the more addicted they would become to
the comparative data, and the more time series and trend analyses could be offered.
5. Pre-survey needs analysis and post-survey action planning would ensure the surveys
were measuring important issues, valid sampling strategies were employed, any needed
unique, customized questions were added, effective administration protocols were followed,
and survey results led to action plans and implementation, to guarantee an impressive return
on investment for the survey process. This would avoid the “garbage-in, garbage-out” survey
dilemmas that frustrate so many managers – they do not get what they need to make a
difference.
6. For PDi, the time necessary to populate a benchmark database would be considerably
reduced through partnering with an existing firm with many current survey customers.
A strategic alliance seemed intuitively attractive – PDi would provide the pre- and
post-administration services, as well as the survey content. Our web hosting partner would
provide the technologic platform to administer the survey and generate the reports. PDi
would build their business with market leads from the partner, while the partner would build
their business by offering distinctive product and service offerings most competitors could
not match (Hamel, Doz & Prahalad, 2002; Kanter, 2002). As we presented our consulting
services and benchmark surveys to potential partners, we received the following three
business alliance offers (Company names are changed to honor confidentiality agreements).
Please help us evaluate the pros and cons of each of these three offers, so we can either select
the right partner, or decide to continue to stay independent and keep all marketing and
distribution efforts in-house.
II. PROPOSED ALLIANCE WITH SCAN CORPORATION
Scan Corporation is one of the biggest survey processing houses in the world, and is
an industry leader in the design and printing of scannable forms. Founded in the 1970s, Scan
Corporation employs approximately 600 employees worldwide and reported $115
million in gross sales in 2004. Scan Corporation offers suites of fixed-form and
computer-adaptive online testing engines as well as test scoring machines and Optical Mark Readers
(OMRs). These services dominate the education market, and are common in the financial
services, healthcare, government and hospitality industries as well. Scan Corporation
differentiates itself from most competitors through size and scope, offering complete, one-
stop-shop product and service packages. As they state in their Mission Statement: “From
forms and scanners to maintenance, outsourcing services and application software, no other
company can match Scan Corporation’s range of products and services.”
Both Tamson and Page had worked with David Johnson, who became a Survey Services
Manager at the Scan Corporation in California. He was expected to establish a world class
consulting division to increase the scope and profitability of Scan Corporation’s survey
scanning business, including potentially offering benchmark surveys in employee and
customer satisfaction. Currently the primary offering of the Survey Services division is a
survey software package called “Survey Stats” which allows users to create a survey
electronically, administer the survey through a variety of media (Internet, LAN, e-mail,
paper, etc.), create a database, perform basic analyses (such as descriptive statistics and
demographic comparisons), and generate reports. Scan Corporation was interested in an
alliance with PDi to bolster sales of Survey Listen in three areas.
1. They needed comprehensive libraries of survey questions on different topics
(employee satisfaction, customer satisfaction, conference satisfaction, etc.) to offer in
conjunction with Survey Listen, as add-on modules. Each question-library shipment would
generate a royalty of $75.00, regardless of whether it was an actual sale or a marketing
give-away.
2. PDi could offer pre- and post-survey administration services such as testing, validation,
and action planning so Scan could offer clients a truly comprehensive survey processing
package. Scan would pay any related PDi travel expenses, and would receive 40% of the
gross of any direct joint or referral contracts PDi received from Scan Corporation leads.
3. Scan Corporation wanted to own a series of survey benchmarks on topics such as
customer and employee satisfaction. These benchmarks would consist of standardized,
validated surveys around which Scan Corporation would develop comprehensive databases,
allowing for reports on national and industry norms. Scan Corporation would pay PDi
$25,000 for each benchmark, and agreed to give PDi an exclusive right of first refusal on any
consulting services ordered in conjunction with those benchmarks.
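The economics of the three Scan terms can be roughed out quickly. The sketch below uses the rates stated in the offer ($75 royalty per library shipment, a 40% Scan share of joint-contract gross, $25,000 per benchmark); the annual volumes plugged in are purely hypothetical assumptions, not case data.

```python
# Illustrative sketch of PDi's annual take under the proposed Scan Corporation
# terms. Only the rates come from the case; all volumes below are assumptions.

ROYALTY_PER_LIBRARY = 75.00      # per library shipment, sale or give-away alike
SCAN_SHARE_OF_GROSS = 0.40       # Scan keeps 40% of joint/referral contract gross
BENCHMARK_FEE = 25_000           # one-time payment per benchmark sold to Scan

def pdi_annual_revenue(library_shipments, contract_gross, benchmarks_sold):
    """Return PDi's revenue for one year under the Scan offer."""
    royalties = library_shipments * ROYALTY_PER_LIBRARY
    consulting = contract_gross * (1 - SCAN_SHARE_OF_GROSS)  # PDi keeps 60%
    benchmarks = benchmarks_sold * BENCHMARK_FEE
    return royalties + consulting + benchmarks

# Hypothetical year: 200 library shipments, $150,000 of referral-contract
# gross, and two benchmarks sold outright.
total = pdi_annual_revenue(200, 150_000, 2)
print(total)  # 15000 + 90000 + 50000 = 155000.0
```

Note that the royalty term is volume-driven and thus hard for PDi to audit, which foreshadows the underreporting concern raised in the conclusion.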
This emphasis on ownership was reiterated in the proposed contract, where Scan
Corporation inserted a clause stating that Scan Corporation co-owned any custom surveys or
materials PDi subsequently produced for Scan Corporation clients. Preliminary
collaborations revealed the following observations:
• Scan Corporation’s products and services were premium-priced for the market.
• Scan Corporation reserved the exclusive right to arbitrarily adjust the contract to close a
deal. In some cases this meant deleting or reducing the contract for PDi services.
• Contacts with other consultants revealed that Scan Corporation managers tended to gang
up on their vendors: if there was a problem, about five different Scan Corporation managers
would call up separately, each demanding in-depth explanations
and updates.
• Scan Corporation’s sales force was undergoing both rapid consolidation and high rates of
turnover. Survey administrators and support personnel sometimes turned over mid-
project, causing occasional miscommunications and scheduling problems.
• Current external consultants/vendors also complained that Scan Corporation carried
“customer-focus” to an extreme. This means that the customer is always right and the
vendor is always wrong, even when subsequent customer demands are unreasonable,
ethically questionable and/or outside the scope of the contract. Scan Corporation’s
preferred method of dealing with these situations was to arbitrarily rebate part of the
vendor’s compensation back to the complaining customer without discussing the vendor’s
concerns with the customer. This heavy-handedness is a typical problem in strategic
alliances where one partner is significantly bigger and more powerful than the other
(Miles, 1999; Slowinski, 1996).
III. PROPOSED ALLIANCE WITH WEBMETRICS
Tamson began discussions with WebMetrics while evaluating various web hosting
services. WebMetrics had been providing basic survey software and supporting services at a
reasonable price for our on-line survey efforts. Founded in the late 1990s, WebMetrics
employs about 35 employees and has $4.1 million in annual sales. Most of their clients were
small to medium size businesses interested in a lower-cost “do-it-yourself” type of survey
effort. PDi felt that WebMetrics might be particularly responsive to our proposition since
they are little more than a web hosting platform. Given increasing levels of competitors
offering equivalent or superior data collection and reporting options at better prices, we felt
WebMetrics would recognize the potential value of incorporating our services and
differentiating their offerings.
After lengthy negotiations and several on-site visits, WebMetrics proposed a licensing
agreement where they would market PDi services in return for a percentage of any resultant
contracts.
Basically, the WebMetrics contract offered PDi a 66/34 split of net revenue, not gross
revenue. Any marketing expenses and sales commission costs would be split 50/50 between
WebMetrics and PDi. We contacted marketing research consultants from SR Marketing
already working with WebMetrics as strategic partners, and found that they felt the
percentages WebMetrics requested and the calculation of net revenues were reasonable.
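The WebMetrics terms can be sketched the same way. The 66/34 net-revenue split and the 50/50 sharing of marketing and commission costs come from the proposed contract; which party takes the 66% share is not stated, so the sketch assumes PDi does, and all dollar figures are invented for illustration.

```python
# Rough sketch of the WebMetrics licensing economics. The split percentages
# come from the case; the assignment of the 66% share to PDi and all dollar
# amounts are assumptions.

PDI_SHARE_OF_NET = 0.66          # assumed: PDi takes the larger share
EXPENSE_SHARE = 0.50             # marketing/commission costs split 50/50

def pdi_take(gross_revenue, direct_costs, marketing_and_commissions):
    """PDi's net proceeds from a WebMetrics-sourced contract."""
    net_revenue = gross_revenue - direct_costs   # split is on NET, not gross
    share = net_revenue * PDI_SHARE_OF_NET
    return share - marketing_and_commissions * EXPENSE_SHARE

# Hypothetical $50,000 contract with $10,000 of direct costs and $6,000 of
# shared marketing and commission expenses:
print(round(pdi_take(50_000, 10_000, 6_000), 2))
```

The key contrast with the Scan offer is that splitting net rather than gross revenue makes PDi's take sensitive to how the partner defines deductible costs, which is why the consultants' view that the net-revenue calculation was reasonable mattered.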
WebMetrics was particularly interested in developing a benchmarking website capability
around employee satisfaction, customer loyalty, and Sarbanes-Oxley ethical compliance.
They promised to dedicate a portion of their sales force to aggressively market these
products, and to make PDi products and services a top priority for some of their database
engineers and web programmers. Tamson and Dr. Page traveled to WebMetrics’ offices near
Washington, D.C. and met with the proposed PDi support team, and made the following
observations:
• Two of the three WebMetrics owners appeared to be genuinely enthusiastic at the
prospect of a partnership, and committed to making it a success, while one was lukewarm
at the prospect of sharing the wealth.
• WebMetrics’s sales force featured high rates of turnover. Their two senior salespeople
were high quality, but complained of having little to offer high end clients; they were very
enthusiastic about the increased sales potential PDi offered. Tamson conducted an initial
product and service introduction training program for all of the WebMetrics sales team.
Post-evaluation of the training
showed a very positive and motivated sales team who were very excited about the
potential alliance with PDi.
• Upon receipt of PDi’s copyrighted and trademarked surveys and related materials,
WebMetrics began the habit of dropping the PDi copyright notices and replacing them with
their own. In phone conversations and emails, the WebMetrics partners assured us that this
was an oversight, and that PDi would, of course, retain full copyright to all of our original
materials.
IV. PROPOSED ALLIANCE WITH GLOBAL TECHNOLOGY SOLUTIONS (GTS)
Tamson visited the San Francisco GTS office and made the following observations:
• The GTS office appeared to be in a state of disarray – cluttered, messy, and somewhat
unprofessional in appearance. Similarly the GTS employees were disorganized and
seemingly confused about where the company was headed.
• Key GTS decision makers, including the chief executive officer of the company, were
being replaced during the course of the negotiations (over a three month period).
• The GTS pricing structure was one of the highest in the industry. While the fee assessed
for the use of the survey software was consistent with high-end data collection services, the
additional GTS charge of $1.10 per survey respondent was not.
V. CONCLUSION
PDi tried each alliance sequentially. The first alliance was with Scan Corporation,
which lasted two years. PDi ended the relationship for several reasons: turnover precluded
the adequately trained staff needed to sell and support benchmark services; Scan’s cost
structure was unfavorable for its consulting partners because base charges were so high
that clients often balked at the prospect of adding PDi services; and royalty payments from
Library of Questions placements and sales seemed significantly underreported. WebMetrics
signed a licensing agreement so they could offer PDi benchmark survey instruments to their
customers. However, despite the licensing agreement, PDi copyrights continued to be
removed in favor of WebMetrics copyrights. When PDi demanded the reinsertion of PDi
copyrights, they claimed partial ownership due to minor editing changes made while posting
these services on their website. A detailed letter from a contract lawyer ended this illegal
attempt permanently. Lastly, we backed away from the Global Technology Solutions (GTS)
deal. The pricing structure was so high that it was unlikely to be competitive. It seemed
the technology-oriented leaders of this corporation overestimated the value of their
technology and underestimated the need for differentiation. Further, it was difficult to project
the marketing leads and potential income we could realize from being part of the Global
Technology Solutions (GTS) marketplace. We continue to develop our own marketing
contacts and regard future alliances with great caution.
REFERENCES
All corporate information is taken from corporate websites and from Hoovers, an online
business and industry database. Specific references will not be provided to honor
confidentiality.
Bergquist, William H., Julie Betwee and David Meuel, Building Strategic Relationships. San
Francisco: Jossey Bass, 1995.
Forrest, Janet E. “Strategic Alliances and the Small Technology-based Firm,” Journal of
Small Business Management, 28 (3), 1990, 37-45.
Hamel, Gary, Doz, Yves and Prahalad, C.K. “Collaborate With Your Competitors and Win,”
in Harvard Business Review on Strategic Alliances. Boston: Harvard Business School
Press, 2002, 3-27.
Kanter, Rosabeth Moss. “Collaborative Advantage: the Art of Alliances,” in Harvard
Business Review on Strategic Alliances. Boston: Harvard Business School Press, 2002,
113-131.
McMurtry, Jeannette M. Big Business Marketing For Small Business Budgets. New York:
McGraw-Hill, 2003.
Miles, Grant. “Dangers of Dependence: the Impact of Strategic Alliance Use by Small
Technology-based Firms,” Journal of Small Business Management, 37 (2), 1999, 20-27.
Nohria, Nitin and Robert G. Eccles. Networks and Organizations: Structure, Form, and
Action. Boston: Harvard Business School Press, 2002.
Slowinski, Gene. “Managing Technology-based Strategic Alliances Between Large and
Small firms,” SAM Advanced Management Journal, 61 (2), 1996, 42-59.
CHAPTER 9
INTELLIGENT AGENTS’ BELIEF, DESIRE, & INTENT FRAMEWORK USING
LORA: A PROGRAM-INDEPENDENT APPROACH
ABSTRACT
I. INTRODUCTION
The demand for intelligent agent software is likely to grow as both public and private
sector innovators seek to deploy adaptive, autonomous, information technologies to
production, scheduling, resource management, office assistance, information collection,
remote sensing, and other complex functions. Intelligent agents, also known as autonomous
agents, are distinguished from other software programs by their ability to respond to a
changing environment in pursuit of goals. One of the most critical stages of such intelligent
agent development is the basic research that goes into determining how to translate client
needs into agent software. Since programmers and operations managers have different
expertise, there is often a linguistic gap between the functional language of the customer and
the technical programming language of the agent developer. If the customer and agent
developer do not speak a common language, both time and money may be lost in needless
errors and misunderstandings. There is a need for a program-independent, unambiguous,
logically consistent language for describing the desired computer agent practical reasoning
abilities and behaviors. Program independence gives the customer maximum flexibility in
choosing a programming language to implement the desired functions. The avoidance of
ambiguity allows the customer to say exactly what she means. And logical consistency
ensures that only valid arguments will be generated by the agent’s knowledge base, preventing
false beliefs about the world from being deduced from true beliefs.
One approach suggested by some of the recent autonomous agent research is to combine
the practical reasoning tools of philosopher Michael Bratman’s belief, desire, and intention
(BDI) framework (1987) with some of the tools of first-order logic. This hybrid language is
usually referred to as BDI logic. There are a number of BDI logics under development; here
we employ the Logic of Rational Agents (LORA) developed by Michael Wooldridge (2000)
to illustrate how BDI logic works in practice.
The authors faced the top-level description problem at NASA during the basic
research stage of specifying the behaviors of a community of agents, in this case, the
Autonomous Nano Technology Swarm (ANTS). ANTS will be designed to engage in
practical reasoning and behaviors that implement the variety of functions necessary to
explore the asteroid belt and communicate observations to earth.
II. METHODOLOGY
Part one of this paper provides theoretical justification for the use of Bratman’s BDI
framework for understanding agent behaviors. Part two provides an explanation of how first
order logic can be combined with BDI and a dynamic component to account for agent
decisions in time. Part three presents a practical problem of describing the behavior of a
community of agents in a very complex scientific endeavor, in this case, the ANTS
exploration of the asteroid belt. We believe the insights presented here have practical
applications for the large variety of agents that will be developed in the near future.
The intentional stance treats a sufficiently complex system as if it had mental states
and engaged in practical reasoning. The term “intentionality,” first used in empirical
psychology by Franz Brentano (1973/1874), means directedness towards an object. As John
Searle points out (1984), one of the unique features of mental states is that they are always
directed towards or about some object. If I perceive, I perceive something. If I believe, I
believe that something is the case. If I intend something, I have some purpose in mind. We
often use the intentional stance casually, as when we say the car does not want to start. But
we do not really believe that a car has desires. Regardless of where one stands in the
philosophical debate regarding whether computer agents have real or just “as if”
intentionality, both sides generally agree that the intentional stance is an economical way to
describe a complex system that engages in practical reasoning. Daniel Dennett (1978) points
out that this economy of expression becomes clear when we contrast the intentional stance
with the engineering stance. ANTS provides a good example of the economy of expression
that results from opting for the intentional stance for top-level description of agent behaviors.
(Bel q PriorityObject(Psyche))
Here, agent #24 (represented by the symbol “q”) believes that Psyche is a priority object.
This belief can be updated or modified given additional information from the sensors or new
communications. In order to complete Bratman’s framework for practical reasoning, we now
add the intentional and emotive components: intentions and desires. Consider a desire as an
enduring goal. An agent may have a number of desires, only some of which are realizable at
any given time. An intention is a pro-attitude, that is, it is a movement toward an immediate
objective until that objective is fulfilled or some new event changes the current intention
(Wooldridge, 2000). This distinction between intentions and desires helps us to explain how
an agent may change course in response to environmental variables that change through time.
Here is an example of epistemic and emotive states combined, using LORA:
[Worker ANT functions, from Memo 04/05/01: communications, resource management,
navigation, local status (housekeeping), local …]
The ANTS is conceptualized here as a community of autonomous rational agents
whose behaviors are generated by a knowledge base (KB), a set of goals, inference
procedures, and percepts. Although each worker ANT is autonomous in terms of its own
function, it is subordinate to the function of the ruler. The intentions of the ruler are passed
on to the workers through the messengers and the workers actuate the plans that achieve the
goal. Each worker has as its permanent goal the appropriate and timely collection or
discovery of data from the target type of objects, the maintenance of health and safety, and
the timely communication of data to the messengers, who in turn report to the ruler and to
earth. The workers’ goals are sub-goals of the ruler’s goal, and the workers’ plan at any given
state of affairs is a sub-plan of the ruler’s plan.
In the following scenario, the Ruler has received information about an opportunity to
view Psyche under ideal conditions and there is a group of worker ants in the neighborhood
of Asteroid Psyche (A). The Ruler forms an intention to attain a goal as a result of a
deduction that employs beliefs in its KB, acquired beliefs derived from current percepts, and
its belief derived from communications with all ANTS. There is a potential for a group of
workers and allied messengers to achieve the goal. This is exactly the sort of scenario
supported by BDI-type logic! For simplicity I leave out temporal considerations,
quantification, proofs, and goal/sub-goal relations. I seek to illustrate here the usefulness
of BDI logic to model the practical reasoning or “mental states” of cooperating agents.
A = constant for Psyche
PfC = perception of the potential for cooperation
i = ruler agent
j = messenger agent
g = group of worker agents, each with specialized observation instruments
α = action
ϕ = goal to study Asteroid Psyche
achvs = attain through an action
Bel = belief
Int = intention
Des = desire
π = set of plans to be actuated (executed) by each worker to attain ϕ
ψ = preconditions for ϕ to become the next goal (science agenda from humans, combination
of percepts in relation to KB)
J-attempt = joint attempt
Assumption
The Ruler has formed an intention to achieve the goal of collecting data about
Asteroid Psyche. I do not here represent the deliberations that lead to this intention. We
begin with the process of mobilizing the workers to achieve the goal and a description of the
mental state of the community of agents. The Ruler forms an intention to attain a goal (to
study Psyche) as a result of ψ having been met. ψ, in this case, is a combination of the ruler’s
KB, mission priorities, current percepts, its belief that the goal has not yet been attained, its
belief that the ruler itself cannot or does not desire to achieve the goal by itself, and its belief
that there is a potential for a group of workers and allied messengers to achieve the goal. The
ruler will therefore intend not only the goal, but that the messenger intends the goal and that
the group of workers intends the goal and finally achieves the goal. This is exactly the sort of
scenario supported by LORA and other BDI-type logics!
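The mental-state propagation described above can also be sketched in ordinary code. This is a hedged illustration only: LORA is a specification logic, not an executable language, and the class, attribute, and string names below are invented for the sketch rather than taken from the ANTS design.

```python
# Minimal BDI sketch of the Ruler/messenger/worker scenario. Beliefs (Bel),
# desires (Des), and intentions (Int) are plain sets of statements; the
# deliberate() step promotes a desire to an intention when the preconditions
# (the role of ψ in the text) hold in the agent's beliefs.

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    beliefs: set = field(default_factory=set)     # Bel
    desires: set = field(default_factory=set)     # Des (enduring goals)
    intentions: set = field(default_factory=set)  # Int (current pro-attitudes)

    def deliberate(self):
        """Promote a desire to an intention when supporting beliefs hold."""
        for goal in self.desires:
            preconditions_met = (
                f"{goal} not yet attained" in self.beliefs
                and f"group can achieve {goal}" in self.beliefs
            )
            if preconditions_met:
                self.intentions.add(goal)

ruler = Agent("ruler")
ruler.desires.add("study Psyche")
ruler.beliefs.update({"study Psyche not yet attained",
                      "group can achieve study Psyche"})
ruler.deliberate()

# The ruler's intention is passed down the hierarchy, so the messenger and
# the workers come to intend the same goal.
messenger = Agent("messenger", intentions=set(ruler.intentions))
worker = Agent("worker-24", intentions=set(messenger.intentions))
print(worker.intentions)  # {'study Psyche'}
```

The point of the sketch is the one the paper makes: the intentional-stance vocabulary (believes, desires, intends) gives customer and developer a shared, program-independent description that any implementation language can then realize.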
REFERENCES
THE PROPENSITY FOR MILITARY SERVICE OF THE AMERICAN YOUTH: AN
APPLICATION OF GENERALIZED EXCHANGE THEORY
ABSTRACT
I. INTRODUCTION
Propensity of young people for military service is an important issue for our
government and armed forces. To attract young Americans between the ages of 18 and 24
for enlistment, the military offers a variety of career opportunities. Despite these abundant
career opportunities, young Americans are not as interested in military service as their
grandparents were a few generations ago. American young people have very little information
regarding the military and military life. Moreover, these young Americans have expressed a
declining propensity for military service (PMS) since the 1990s (Secretary of Defense
Annual Report, 2000). PMS is defined as one’s “interest and desires, as well as plans and
expectations regarding military service” (Segal, Burns, Falk, Silver, & Sharda, 1998, page
67). The decline in PMS, over an extended period of time, may adversely impact both
military and civilian sectors of American life. The shortage of qualified military personnel
can impact the mission of the branches of the military.
The military views its recruitment efforts primarily as a marketing campaign (Navy
Personnel Research Studies & Technology, 1998); and therefore, social exchange theory can
be applied to the military recruitment process. Social exchange theory holds that “given
certain conditions, people seek to reciprocate those who benefit them” (Bateman & Strasser,
1984, page 97). To the extent that military enlistees derive satisfaction from the actions of
their military supervisors and their occupation, assuming that this military action is not
contrived, then enlistees may feel obligated to reciprocate. This reciprocation may take the
form of prosocial behaviors and/or reenlisting in the military.
The current research utilizes the four factors employed by Marshall (1998) to measure
the generalized exchange construct. Marshall found that the four factors contributing to
generalized exchange were: 1) perceptions of social responsibility ethic, 2) perceptions of
social equity, 3) perceptions of effectiveness of organization’s performance, and 4)
perceptions of community benefits. The current study added a fifth factor, “Voluntary
Association Activities (VAA)” to the generalized exchange model. An extensive search of
the literature revealed that only one study (Brown & Rana, 2005) had linked these five
predictors with social exchange theory. Brown and Rana found that generalized exchange
was directly related to PMS. It is therefore posited that a positive relationship exists between
generalized exchange and propensity for military service.
Hypothesis 1: Generalized exchange will be positively related to propensity for military
service.
Prior exposure to the military may also impact one’s propensity for military service.
Some scholars have opined that PMS is a result of family members’ prior or current
military service and patriotic motivations. Faris (1995) found that “most soldiers join the
Army in part for patriotic motivations which are primarily the result of direct
interpersonal influence of persons, usually relatives, who have served in the military”
(page 415). Moreover, Legree, Gade, Martin, and Fischl (2000) found that the attitudes of
both youths and parents predicted PMS and subsequent military enlistment. The present
study argues that prospective enlistees who have family members with prior military
service will have higher levels of PMS.
Hypothesis 2: The relationship between generalized exchange and PMS
will be moderated by prior exposure to the military.
We argue that VAA will be positively related to PMS. Voluntary behaviors are
defined as free will activities that are characterized as paid/unpaid work performed in civic,
charitable, or religious organizations for the greater good of the community (Youniss,
McLellan, Su & Yates, 1999). Eccles & Barber (2001) found that teenagers who engaged in
prosocial activities such as attending church and volunteering liked high school, had a higher
GPA, and were more likely to be attending college at age 21 than their nonparticipating
counterparts. Notwithstanding the plethora of studies indicating the beneficial effects of
student participation in VAA, a search of the literature only found one study that had
examined the linkage between VAA, social exchange, and PMS; Brown and Rana (2005)
found a direct relationship between VAA and generalized exchange. In light of these
findings, we advance that respondents with high levels of VAA will also have higher levels
of generalized exchange and PMS. It can, therefore, be argued that these students may be the
prospective recruits that the military should target for enlistment.
Hypothesis 3: VAA will be positively related to generalized exchange.
The commitment of new military enlistees to finish their term of enlistment may
determine job performance, job satisfaction, and reenlistment intentions (Allen & Meyer,
1990; Ganzach, Pazy, Ohayun & Braynin, 2000). Organizational commitment, as defined by
Allen and Meyer (1990), consists of three components: affective, continuance and normative.
The current research limits its focus to normative commitment; thus, affective and
continuance aspects of commitment are not addressed. Normative commitment is
characterized by the employee’s belief that he or she is obligated to stay with a particular
organization because of personal loyalty and/or allegiance. We conjecture that commitment
will mediate the generalized exchange-PMS relationship. In addition we also posit that
commitment will be directly related to both generalized exchange and PMS.
Hypothesis 4: The relationship between generalized exchange and PMS will be
mediated by commitment.
Hypothesis 5: Generalized exchange will be positively related to commitment.
Hypothesis 6: Commitment will be positively related to PMS.
This study examines the role of social exchange theory in the military personnel
selection process. Hiring prospective military recruits who are committed and more likely to
reenlist requires a thoughtful personnel selection strategy. In the proposed conceptual model
(See Figure I), community benefits, military performance, social responsibility, social equity
and VAA are hypothesized to influence generalized exchange. Generalized exchange, along
with the factor of commitment, is expected to predict PMS. We argue that community
benefits, military performance, social responsibility, social equity, and VAA are constructs
that provide indirect benefits to the American society. Generalized exchange (indirect
benefits) may, therefore, have a positive relationship with these five constructs, as
generalized exchange benefits accrue to the larger society and not to the individual directly.
III. METHOD
A telephone questionnaire was used to collect the data. The current study used data
collected under a grant funded by the Office of Naval Research (Grant No. 140110363;
Marshall, Brown & Gillon, 2001). The study employed a random quota sample of 300 males
and 300 females, between the ages of 18-24 years, who were unmarried, resided in the
continental United States, were non-institutionalized, were currently enrolled in a 4-year
college or lesser institution, and who did not have their own children at home. The overall
response rate was 59.3%. Structural equation modeling (SEM) was employed to test all
hypotheses. The linear structural relationships (LISREL, version 8.30) program was used to
develop and test all structural models (Joreskog & Sorbom, 1993).
Figure I. Hypothesized model: five exogenous constructs (Community Benefits ξ1, Military
Performance ξ2, Social Responsibility ξ3, Social Equity ξ4, Voluntary Activities ξ5) predict
Generalized Exchange (η1) through paths γ11–γ51; Generalized Exchange predicts
Commitment (η2, path β21) and Propensity for Military Service (η3, path β31); Commitment
predicts Propensity for Military Service (path β32).
IV. RESULTS
As indicated by the following fit indices, the hypothesized model had an acceptable fit
to the data: chi-square = 375.731 (p = 0.09), CFI = 0.961, IFI = 0.962, GFI = 0.949, and
RMSEA = 0.033 (Joreskog & Sorbom, 1993). We now discuss the relationships among the
latent variables by inspecting the path coefficients. Hypothesis 1 stated that generalized
exchange would be positively related to propensity for military service. No support was
established for Hypothesis 1 because the path coefficient between PMS and Generalized
Exchange was not significant. Generalized Exchange did not influence PMS, indicating that
respondents who valued indirect benefits from the military may not necessarily have higher
levels of enlistment propensity as hypothesized. Support was established for Hypothesis 3,
which stated that VAA would be positively related to generalized exchange. Respondents
who engaged in voluntary behaviors were more likely to value indirect benefits from their
social exchanges. However, Social Responsibility and Social Equity were not significant
predictors of Generalized Exchange. These findings may be problematic for the military
branches that have traditionally depended upon patriotic and altruistic feelings among young
Americans to help facilitate enlistments.
A significant relationship was established between Community Benefits and
Generalized Exchange. The path coefficient between Generalized Exchange and Military
Performance was also statistically significant. Thus, respondents who valued indirect benefits
reported higher levels of Military Performance. In addition, the path coefficient between
Commitment and PMS was significant; thus, support was established for Hypothesis 6. The
Commitment of respondents predicted their level of PMS. Therefore, military recruiting
managers may want to select committed prospective recruits; committed
recruits have higher levels of job satisfaction, performance, reenlistment intentions, and are
easier to train (Allen & Meyer, 1990). As expected, there was a positive and significant path
coefficient between Generalized Exchange and Commitment, which established support for
Hypothesis 5. It implies that respondents who valued indirect benefits to the larger society
also had high levels of Commitment. Hypothesis Two was supported because a chi-square
difference test indicated that Prior Military Exposure moderated the Generalized Exchange-
PMS relationship; lastly, Commitment was found to be a mediator of the Generalized
Exchange-PMS relationship, indicating support for Hypothesis 4.
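The chi-square difference test behind the moderation finding can be illustrated with a short sketch. The fit statistics below are invented placeholders, not the study's actual multi-group values; only the logic of the test (comparing a constrained model against a freely estimated one) follows the text.

```python
# Sketch of a chi-square difference test for moderation. If constraining the
# Generalized Exchange -> PMS path to be equal across exposure groups worsens
# model fit by more than the critical chi-square for the lost degrees of
# freedom, the path differs across groups, i.e., moderation holds.

def chi_square_difference(chi2_constrained, df_constrained,
                          chi2_free, df_free, critical_value):
    """Return (delta chi-square, delta df, moderation flag)."""
    delta_chi2 = chi2_constrained - chi2_free   # constrained model fits worse
    delta_df = df_constrained - df_free
    return delta_chi2, delta_df, delta_chi2 > critical_value

# Hypothetical comparison: the equality constraint costs 1 df, and the
# critical chi-square at alpha = .05 for 1 df is 3.841.
delta, df, moderated = chi_square_difference(392.4, 251, 385.7, 250, 3.841)
print(delta, df, moderated)
```

In practice the two chi-square values would come from the SEM program's multi-group output (LISREL in this study); the comparison itself is this simple.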
V. CONCLUSION
These findings lead to the conclusion that prospective military recruits who have a
generalized exchange orientation may be more committed to the military and more likely to
reenlist as a result of their commitment. Our findings may also have practical implications for
military recruiting managers. Indeed, the military should consider directing its recruitment
efforts to young Americans who embody these generalized exchange characteristics (i.e., a
predisposition to voluntary behaviors and belief in the effectiveness of the military). Also, the military may
consider selecting prospective recruits with prior exposure to the military. This practice may
go a long way in assuring that recruits are endowed with the qualities identified as being
essential and predictive of propensity for military service. Furthermore, the results of this
study clearly indicate that parents influence the enlistment behavior of their children. If the
military has a good image in the eyes of the parents, then their children may be more inclined
to enlist in the military, ceteris paribus.
The current research, like other studies, has some limitations. First, the results are
limited to young Americans between the ages of 18 to 24 years. Thus, generalizability is
limited to this age group, and therefore social exchange theory may not explain the
propensity for military service outside of this age range. Another limitation of the study is
that the sample size did not contain a substantial number of African Americans (only 6.2%),
while they represent 22.44% of the active duty military (Department of Defense, 2002).
REFERENCES
Allen, Natalie J., and Meyer, John P. “The Measurement and Antecedents of Affective,
Continuance and Normative Commitment to the Organization.” Journal of
Occupational and Organizational Psychology, 63, (1), 1990, 18-38.
Bateman, Thomas S., and Strasser, Stephan. “A Longitudinal Analysis of the Antecedents of
Organizational Commitment.” Academy of Management Journal, 27, 1984, 95-112.
Brown, Ulysses J., III and Rana, Dharam S. “Generalized Exchange and Propensity for
Military Service: The Moderating Effect of Prior Military Exposure.” Journal of
Applied Statistics, 32, (3), 2005, 259-270.
Department of Defense. Department of Defense 27th Annual Population Representation in the
Military Services Report. 2002. http://dticaw.dtic.mil/prhome/poprep2000/.
Eccles, Jacquelynne S., and Barber, Bonnie L. “Student Council, Volunteering, Basketball, or
Marching Band: What Kind of Extracurricular Involvement Matters?” Journal of
Adolescent Research, 14, (1), 2001, 10-43.
Faris, John H. “The Looking-Glass Army: Patriotism in the Post-Cold War Era.”
Armed Forces and Society, 21, (3), 1995, 411-435.
THE MARYLAND WAL-MART BILL: A NEW LOOK
AT CORPORATE SOCIAL RESPONSIBILITY
ABSTRACT
Wal-Mart has had phenomenal success, becoming America’s largest and most
profitable corporate giant. It is estimated that by the end of 2005, it had created over
100,000 jobs, adding to its 1.2 million U.S. associates. It will pay billions in taxes, provide
health insurance to hundreds of thousands of employees, and support 100,000 charitable
organizations. It is widely recognized that the large modern corporation has many
stakeholder groups: employees, unions, suppliers, consumers, and the community in which
it operates. Related to this understanding is the view that such entities have a responsibility
to act in the best interests of their constituent groups. To the extent possible, the
corporation should treat its employees fairly, bargain honestly with unions, make its products
safe, and be a good citizen of the local community (Barnes, 2005). Overall, Wal-Mart has
made a significant contribution to the employment base of many communities and has
provided health insurance for many of its employees. However, Wal-Mart officials
condemned the 2005/2006 Maryland General Assembly for demanding that the corporation
act, in what the Assembly considers to be a responsible manner, toward its employees in the
state by assuming a fair share of their health care costs. Wal-Mart says it provides health care
coverage to approximately 45% of its U.S. workforce (Lazarick, 2005). This research
examines the image and ethical practices of Wal-Mart in light of the issues it confronts with
the Maryland General Assembly and other legislative bodies across the country.
I. INTRODUCTION
One of the most contentious issues affecting the 2005 Maryland legislative session
involved a bill that had a direct impact on a single company, Wal-Mart. The debate over the
specifics of the legislation continued well after the legislative session was over. The same
issue will be addressed in the 2006 legislative session, although a final resolution is unlikely
to be achieved for some years. Overall, the Governor, elected
officials, health care administrators and providers, consumer advocates, and the general
public are increasingly concerned about the growing population of uninsured and
underinsured, the rising cost of existing health plans, and their impact on the aging population.
The health care crisis has led many health care professionals and administrators, as well as
other consumer advocates, to scrutinize business practices to make sure that large,
profitable businesses that employ large numbers of people contribute their fair share to health
care costs for their employees. The impact of this legislation is nominal compared to the
number of Americans who are currently uninsured or underinsured. However, this legislation
sends a message to corporate America that it has a responsibility to be an active partner in
helping to lessen the health care crisis. Many Americans expect corporate giants like Wal-
Mart to provide basic affordable health coverage for the employees who help make Wal-Mart
the wealthiest corporation in the world.
II. WAL-MART AND THE NATIONAL HEALTH CARE CRISIS
Several states, facing rapidly increasing Medicaid costs, are turning to the private sector
to bear more of the cost. Wal-Mart, in particular, has been the focus of several states’
accusations that the company is providing substandard health benefits to its employees.
According to the New York Times, Wal-Mart full-time employees earn on average $1,200 a
month, or about $8 an hour. Some states claim many Wal-Mart employees end up on public
health programs such as Medicaid. A survey by Georgia officials found that more than
10,000 children of Wal-Mart employees were enrolled in the state’s Children’s Health
Insurance Program (CHIP) at a cost of $10 million annually. Similarly, a North Carolina
hospital found that 31% of 1,900 patients who said they were Wal-Mart employees were
enrolled in Medicaid, and an additional 16% were uninsured.
As a result, some states have turned to Wal-Mart to assume more of the financial
burden of its workers’ health care costs. California passed a law in 2003 that will require
most employers either to provide health coverage to employees or to pay into a state insurance
pool that would do so. Advocates of the law say Wal-Mart employees cost California health
insurance programs about $32 million annually.
According to the Daily Times, Wal-Mart says that its employees are mostly insured,
citing internal surveys showing that 90% of workers have health coverage, often through
Medicare or family members’ policies. Wal-Mart officials say the company provides health
coverage to about 537,000 people, or 45% of its total work force. By comparison,
Costco Wholesale provides health insurance to 96% of eligible employees.
Wal-Mart has conquered rural America with more than 3000 stores, but it desperately
needs to break into the urban market to maintain its phenomenal growth. Since its arrival in
the region 13 years ago, Wal-Mart has quickly planted 147 stores in Maryland and Virginia,
including 32 in the greater Washington area. It is now the number one private employer in
Virginia and one of the 10 largest employers in Maryland, with 52,000 workers. (Barbaro,
2005) Despite its more than 5000 stores and $285 billion in sales worldwide, Wal-Mart’s
future is closely tied to continued expansion and the importing of inexpensive Chinese
laptops, televisions, and clothing (Wagner, 2005). These items are a significant fraction of
the goods sold by Wal-Mart (Williams, 2005). Market analysts have indicated that Wal-
Mart’s image problem has had no measurable impact on consumers’ willingness to shop at
the chain. Sales grew 11 percent in 2004, and Wal-Mart estimates that 90 percent of
Americans, or 270 million people, shopped at one of the company’s divisions in 2004
(Wagner, 2005). This earned Wal-Mart $10 billion in profits in 2004 (Struever, 2005).
In 2004, the retailer lost battles to build stores in Inglewood, California; Chicago; and
New York City. During the same time, dozens of local governments, including Calvert,
Prince William, and Montgomery counties in the Washington, D.C. region, have passed
zoning rules making it difficult for Wal-Mart to expand in urban markets (Barbaro, 2005).
Two Maryland legislative mandates, SB790 and HB1284, as amended, established the
Fair Share Health Care Fund. The Fund is supported with monies received from employer
payments and other sources and is subject to audit by the Office of Legislative Audits.
The proposed Fair Share Health Care Act would force Wal-Mart to increase its spending on
health care coverage for its Maryland employees. According to published reports, 80% of
the employees are eligible for health benefits, but only 52% of eligible employees choose to
enroll in company-sponsored insurance, for which they pay part of the premium.
The Fair Share Health Care Act requires companies with 10,000 or more employees
in the State of Maryland to spend at least 8% of their payroll on worker health care, or to pay
the difference to the State’s Department of Labor, Licensing and Regulation (DLLR). Health
care costs are payments for medical care, prescription drugs, vision care, medical savings
accounts, and any other health benefits recognized under federal tax law. Proponents of the
Bill say that the measure is a crucial public policy statement about the responsibility of
employers to their workers. They say that because Wal-Mart pays relatively little for its
employees’ health care, other businesses and citizens foot its bill through higher
insurance premiums and taxes. Opponents, on the other hand, call the bill the first step in a move to
socialize health care. They argue that increasingly smaller companies will be subjected to the
rule and that the required spending will rise (Green, 2005).
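The Act’s payment rule reduces to a simple calculation. The sketch below is illustrative only; the payroll and spending figures are hypothetical and are not drawn from any company filing.

```python
def fair_share_shortfall(payroll, health_spending, threshold=0.08):
    """Amount owed to the DLLR when health care spending falls below
    the Act's threshold of 8% of Maryland payroll (zero otherwise)."""
    return max(0.0, threshold * payroll - health_spending)

# Hypothetical employer: $500M Maryland payroll, $30M (6%) spent on health care
owed = fair_share_shortfall(500_000_000, 30_000_000)
print(round(owed))  # 10000000 -- the two-percentage-point gap
```

In other words, an employer already spending at or above 8% of payroll owes nothing, while one below the threshold pays only the difference, not a flat penalty.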
Wal-Mart officials have told state lawmakers that the firm currently spends between
5% and 7% of its Maryland payroll on health coverage. The legislation would force the
retailer to raise that to 8 percent or pay the difference to the state. According to Thomas A.
Firey, senior fellow at the Maryland Public Policy Institute, the Act would extend coverage
to 1,000 to 3,000 people, thereby lowering the state’s uninsured rate by, at best, about
0.05 percent. Firey suggests that in order to achieve that gain, Wal-Mart will have to
increase its health insurance spending by approximately 5 million dollars a year, and that it
can free up the necessary money by cutting employee hours and skimping on raises, bonuses,
and other perks (Firey, 2005). There is little sympathy for that argument, especially since
2004 worldwide profits exceeded 10 billion dollars. Every person counts, and whenever we
can provide health coverage for someone who does not have it, we are creating a more
humane society and lowering the cost for those who pay premiums on a regular basis.
Stakeholders should want healthy employees in order to maximize profit potential and
demonstrate corporate responsibility. Firey also explained that the real problem in Maryland
is due to insurance mandates, state regulations, and the special tax on health care premiums.
This could be a valid argument if insurance rates were rising in Maryland and falling in the
other 49 states; however, insurance costs are rising on a national basis regardless of those
factors. A comprehensive national health insurance policy, tort reform, a reduction in
insurance fraud, fewer doctors ordering multiple procedures to ward off possible malpractice
lawsuits, and insurance company settlements rather than litigation are all factors that could
have an appreciable impact on reducing insurance costs.
Steven Pearlstein (2005), who writes for the Washington Post, indicated that the Wal-
Mart bill was purely a symbolic effort driven by the desire of Democratic politicians to
demonstrate solidarity with union workers who see Wal-Mart as a threat to wages and
benefits that have risen to uncompetitive levels. He further indicated that, as a society, if it is
believed that everyone should have health coverage within the context of an employer-based
system, small business should not be exempt (Pearlstein, 2005). Most small businesses, the
dominant employer business model, operate as sole proprietorships or partnerships, have
limited resources, and employ one, two, or at most three workers. To expect a business of
this size and resources to provide health coverage would in most cases severely hinder future
operations, especially when many of those businesses operate on very thin margins. Any bill
attempting to require such a mandate would for the most part be dead on arrival.
The Baltimore-Washington, D.C. area is one of the more expensive areas in which to live in
the United States. Housing, gasoline, food, and transportation costs have a tremendous
bearing on wages and benefits. Wal-Mart’s wages, health coverage package, and pension
benefits, as previously indicated, are not in line with the metropolitan area.
After every legislative session in Maryland and other states, the Governor sets several
days aside for bill-signing ceremonies. In 2005, the Governor held a veto ceremony in
Somerset County (on the Eastern Shore of Maryland) at the Circuit Court House in
Princess Anne, where Wal-Mart Stores, Inc. is planning to open a distribution center that could
employ hundreds of people (Fisher, 2005). The Governor has truly sided with Wal-Mart
executives by vetoing the Fair Share Health Care Bill. The bill passed in the State Senate
with one more vote than it needed to override the Governor’s veto, and was one vote short of
the mark in the House of Delegates. It should be noted that several delegates who supported
the bill were absent on the day of the vote (Green, 2005). Despite the vote, Governor Ehrlich
said that the Wal-Mart Bill “threatens the economic health of this terrific county.” Somerset
County is the poorest county in the state, and it anticipates the 800 jobs (paying an average
starting wage of $12 per hour) that will be generated by the distribution center Wal-Mart
plans to establish there. “Without employers, there are no employees. There is no health
insurance” (Hopkins, 2005). Governor Ehrlich characterized the issue as a fight over
future job growth in the state of Maryland, not just the Eastern Shore. The Governor’s remarks
place more emphasis on creating future jobs and less on retaining existing jobs that already
provide health care.
The Governor’s support of a seemingly less free market system that creates jobs
irrespective of the impact on the existing job market needs to be viewed with some
trepidation. States should protect existing jobs and expand job opportunities in areas that
offer competitive salaries and benefits. This is how states maintain a high standard of living
for their citizens and lessen demands on state services. Maryland’s approach to job creation
after the last recession focused on the new and emerging biotech industry, emerging
technologies, growing small firms, incubators, health care related entities, and expanding
government contracts. This approach has allowed states like California, New Jersey,
Massachusetts, and Maryland to move forward and create good-paying jobs. The evidence is
clear that Wal-Mart has significantly increased sales and expanded its market share in the
greater Baltimore-Washington, D.C. corridor. This expansion threatens employees’ quality
of life and standard of living. The union issue appears to be collateral to the overall debate.
Most employees and families are more concerned about being able to pay for higher
transportation costs, expensive housing, tuition increases, and other related expenditures.
III. CONCLUSION
Overall, Wal-Mart has made a significant contribution to the employment base of many
communities, has helped community organizations, and has provided health insurance for
most of its employees. However, Wal-Mart can do better. A 2004 report by the Democratic
Staff of the House Education and Workforce Committee, entitled “Everyday Low Wages: The
Hidden Price We All Pay for Wal-Mart,” analyzed the company’s books and assessed the
costs to U.S. taxpayers of the many employees so underpaid that they qualify for welfare
benefits. The report indicated that for a 200-employee Wal-Mart store, the government
spends $108,000 a year for children’s health care, $125,000 annually in tax credits and
deductions for low-income families, and $42,000 a year in housing assistance. The report
estimated that a 200-employee Wal-Mart store costs federal taxpayers $420,750 a year
(about $2,103 per Wal-Mart employee). This sum translates into a total annual corporate
welfare bill of $2.5 billion for Wal-Mart’s 1.2 million U.S. employees (Olesker, 2005).
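The report’s per-employee and nationwide figures follow directly from its store-level estimate; a quick check of the arithmetic:

```python
# Reproduce the cited report's arithmetic from its store-level estimate
store_cost = 420_750                # annual federal cost of one 200-employee store
per_employee = store_cost / 200
print(per_employee)                 # 2103.75, reported as "about $2,103"
total = per_employee * 1_200_000    # scaled to Wal-Mart's 1.2 million U.S. employees
print(total)                        # 2524500000.0, reported as "$2.5 billion"
```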
REFERENCES
Barbaro, M. (May 23, 2005). Putting on the Brakes, The Washington Post, E1 & E10.
Barnes, J. (2005). Business Ethics and Corporate Social Responsibility, Law for Business, 9th
edition, 67.
Buchanan, C. (June 20, 2005) Wal-Mart, Press Release.
Department of Legislative Services, (May 9, 2005). General Assembly of Maryland, Fiscal
and Policy Notes, 1-2.
Firey, Thomas A. (April 29, 2005). Maryland’s 0.05 Percent Solution, The Daily Record.
Fisher, J. ( May 20, 2005). Ehrlich: Wal-Mart Bill is Anti-Business, Daily Times.
Green, Andrew J. (November 18, 2005), Wal-Mart Hires More Lobbyists to Help Topple
Benefits Bill, The Baltimore Sun.
Hill, M. (July 24, 2005). Picture of Health, Perspective, The Baltimore Sun, C1.
Hopkins, J.( May 20, 2005). Wal-Mart to Delay Somerset Center, The Baltimore Sun.
Lazarick, L. (June 2005). Other Side of Wal-Mart Story, The Business Monthly, 9.
Olesker, Michael. (November 25, 2005). “Battling The Wal-Mart Behemoth Will Take
Guts,” The Baltimore Sun, B1.
Pearlstein, Steven (April 13, 2005). “‘Get Wal-Mart’ Bill is Just For Show,” Washington
Post, C1.
Struever, B. (2005). But Big Companies Now Must Pay Their Share of Health Care,
Washington Post.
Wagner, J. (April 6, 2005). Maryland Passes Rules on Wal-Mart Insurance, The Washington
Post.
Walker, A. (May 4, 2005). 500 Jobs to be Lost as Giant Revamps, The Baltimore Sun, 1.
Williams, L. (June 26, 2005). Danger Seen in China’s Economic Power, Perspective, The
Baltimore Sun, C1.
DISCRIMINATION, POLITICAL POWER, AND THE REAL WORLD
ABSTRACT
Every single person is a unique combination of genes, which are both the cause and
the consequence of our differences. This society was formed on the principles of
diversity and yet succeeded in becoming the world’s major economic and political power, long
before the complexity of our differences emerged as an issue. Over the years, minorities and
women have been viewed by society in many different ways regarding what their function in
society should truly be. There was wide disagreement about the particular jobs a
woman or minority might pursue. At one extreme, Americans believed that no woman or
minority should work unless compelled to by the absence of the male breadwinner, and that
very few jobs were appropriate for women. At the other extreme, a very small minority of
Americans believed that every woman and minority should be free to follow the career of her
or his choice. This paper therefore investigates whether discrimination and a gap between
women and minorities and the rest of this society exist, and why wage differentials exist
between white and nonwhite workers.
I. INTRODUCTION
During the early stage of our lives, we learn how to talk by reproducing what we hear,
we learn the values accepted as standards by our surroundings, and we learn to like and
dislike. Historically, American society was formed on the principles of diversity and
yet succeeded in becoming the world’s major economic and political power, long before the
complexity of our differences emerged as an issue.
Diversity in the workplace has become a popular buzzword in corporate America. But
what does it really mean? That is hard to say, given that different firms interpret the
notion of diversity differently (Wilson, p. 21). Diversity in the workplace comes in different
forms, such as race, gender, religion, age, sexual preference, social, economic, or ethnic
background, education, experience, etc. Our traditional perception of diversity in the
workplace has focused on minorities and women. Some companies are focusing on the
interests of women and have created advisory teams to deal with the issue (Henderson, p. 37).
But this approach has not been a solution for women and minorities, and only a very small
minority of Americans believed that every woman and minority should be free to follow the
career of her or his choice.
Over the years, women and minorities have been viewed by society in many different
ways as to what their function in society truly should be. Between 1890 and 1920, known as the
Progressive Era, nearly everyone believed that women’s first duty was to bear and raise
children and maintain the household, and that women were better fitted for these than for
any other function. Nearly everyone agreed that it was sometimes necessary and proper for
some women to work, and that some jobs outside the home were suitable for some women.
However, there was wide disagreement about the particular jobs a woman might pursue. At
one extreme, Americans believed that no female should work unless compelled to by the
absence of the white male breadwinner, and that very few jobs were appropriate for women.
Work and the relationships between men and women are two subjects that underlie questions of
women’s employment. We can be certain that changes in the types of women’s work have
been influenced by what people have thought and felt about women and about the nature of
work.
The employment of women and minorities was a major public issue around the turn of
the century. The increase in labor-force participation of women has been called the
outstanding phenomenon of our century. Women’s participation in the labor force affects
every aspect of life, including trends in fertility, marriage and divorce, patterns of marital
power and decision-making, and demand for supportive services in the economy. For this
reason, it is said that the greatest changes of the twentieth century may result not from atomic
energy or the conquest of space, but rather from the tremendous increase in the proportion of
women working outside the home (Rozen, 1979).
II. BACKGROUND
Earlier in the century it was virtually impossible for women and minorities to obtain
professional training or to get white-collar jobs. But now, at the end of the century, they are
working as lawyers, doctors, journalists, and in a variety of other white-collar and
professional occupations. People at all levels of the economy and social status were influenced to
some degree by the attitudes and ideas of both sides of the issue. The antagonistic view
toward the employment of women is perhaps best illustrated by the idea that almost all
women should retire from work when they marry.
The vast majority of working women are in a restricted number of jobs and in lower-
paying, lower-prestige, and lower-power positions than white men. Even within the same
major occupational groups, women’s earnings are lower than men’s (see Table). In addition
to receiving less pay, women are often excluded from important fringe benefits.
Median Weekly Earnings of Full-Time Wage and Salary Workers by Sex and Occupational Group*
___________________________________________________________________________
Occupational Group Women Men
Lawyers $624 $806
Managers, Marketing $470 $751
Engineers $580 $691
Accountants $398 $554
Secretaries $286 $322
Bus Driver $285 $389
Elementary School Teachers $415 $490
*Source: Earl F. Mellor, “Weekly Earnings in 1986,” pp. 41-46.
The Table shows the median weekly earnings in several of the higher- and lower-paid
occupations in 1986. The gap in earnings between the highest paid and lowest paid
occupations is quite large. The most obvious difference is that the weekly earnings of women
are, as a rule, considerably less than the earnings of men. A portion of the earnings gap
between men and women is attributable to the fact that the majority of women are employed
in a fairly narrow set of low-wage, female-dominated occupations, while a significant number
of men are employed in a different set of male-dominated, high-wage occupations. This
division of the occupational structure into “female” occupations and “male” occupations is
generally referred to as occupational segregation, although some economists prefer the less
pejorative term “occupational concentration.”
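The earnings gap can be made concrete by computing, from the Table’s own figures, women’s median weekly earnings as a fraction of men’s in each occupation. A simple sketch; the numbers are copied directly from the Table above:

```python
# Women's-to-men's median weekly earnings ratio, per occupation (from the Table)
earnings = {
    "Lawyers": (624, 806),
    "Managers, Marketing": (470, 751),
    "Engineers": (580, 691),
    "Accountants": (398, 554),
    "Secretaries": (286, 322),
    "Bus Driver": (285, 389),
    "Elementary School Teachers": (415, 490),
}
ratios = {occ: round(women / men, 2) for occ, (women, men) in earnings.items()}
# Every ratio is below 1.0: women earn less in each occupational group
print(ratios["Lawyers"])  # 0.77
```

Note that the ratio is below one even within a single occupation, which is why occupational segregation alone cannot account for the entire gap.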
From a human capital perspective, the process begins in school, where boys and girls
acquire different quantities and qualities of human capital. Few boys, but many girls, expect
to become homemakers, perhaps not for their entire working lives, but at least for a time
when their children are young. Because of these different expectations, girls take courses in
school (for example, home economics) that explicitly train them for work in the home after
graduation. From an institutional perspective, this is one reason that women do not have the
same opportunity to acquire as much human capital as men. It is true that boys and girls
follow different tracks in school, but more because of sexism and discrimination than rational
choice. On one level, the sexism in our culture socializes girls to prefer office education
classes over auto mechanics in high school, or, in college, English literature over civil
engineering. These sex-role stereotypes also influence parents and school counselors to push
boys but not girls into areas of study leading to careers. Girls also obtain less human capital
because of discrimination.
Because of the segmented nature of the labor market and the unequal education and
training opportunities provided to women and minorities, the majority of women and minorities
are crowded into a relatively small set of occupations in the labor market, resulting in intense
competition for jobs and low wages. Occupational crowding cannot totally explain the low
level of earnings in minority occupations, and in this case political power plays an important
role.
III. CONCLUSION
More than six billion people now live on our planet, and every single person is a
unique combination of genes, which is both the cause and the consequence of our differences.
In modern society particularly, work is a source of personal identity; people introduce
themselves by indicating the kind of work they do, and what one “does” becomes an
important basis of who one “is.”
Over the years, women and minorities have been viewed by society and political
power in different ways as to what their function in society truly should be. There was wide
disagreement about the particular jobs a woman might pursue. The increase in the labor-
force participation of minorities has been called the outstanding phenomenon of our century.
The growth of the industrial economy was on their side, as was the movement toward
democracy, which influenced much of the nineteenth century.
Minorities have therefore made progress, and today four out of ten workers in the U.S.
are women and minorities. But the vast majority of women and minorities are still working in
a restricted number of jobs and in low-paying, low-prestige, and low-power positions. Even
within the same major occupational groups, women’s earnings are lower than men’s, and
women do not have the same opportunity to acquire as much human capital as white men.
REFERENCES
Fox, Mary, & Hesse-Biber, Sharlene, Women at Work, pp. 1-12. Mayfield Publishing
Company, 1984.
Henderson, D. The Drive for Diversity. Air Transport World, p.32, June, 1995.
Kaufman, Bruce, The Economics of Labor Markets and Labor Relations, pp. 339-394. The
Dryden Press Co. 1989.
Osteman, Paul, “Sex Discrimination in Professional Employment: A Case Study.” Industrial
and Labor relations Review 32 no. 4, pp. 451-484, July, 1979.
Rosen, Friedas. “Women and Work Force: The Interaction of Myth and Reality.” In Eloise S.
Snyder ed., The Study of Women: Enlarging Perspectives of Social Reality, pp. 79-
102. New York: Harper & Row, 1979.
Szilagyi, Andrew, Management and Performance, pp. 352-355. Goodyear Publishing
Compny, 1981.
Wilson, M. Diversity in the Workplace, “Chain Store Age Executive with Shopping Centre
Age, “p.71, June 1995.
CHAPTER 10
FINANCE
THE SHORT TERM AND LONG TERM IMPACT OF THE STOCK
RECOMMENDATIONS PUBLISHED IN BARRON’S
ABSTRACT
I. INTRODUCTION
However, the fact that thousands of analysts employed by investment firms in the
US write investment reports and make stock recommendations every day shows that at least
investment firms must believe their recommendations work. Some academic researchers
suggest that superior returns are possible too. Brav and Lehavy (2003) document significant
abnormal returns around analysts’ target price changes. Jegadeesh, Kim, Krische, and Lee
(2004) show that quarterly changes in recommendations yield robust results in predicting
returns, and that stocks favorably recommended by analysts outperform stocks unfavorably
recommended by them. Womack (1996) shows that brokerage analysts’ recommendations
have investment value. Similar positive findings can also be found in Palmon, Sun and Tang
(1994); Wijmenga (1990); and Syed, Liu and Smith (1989). Barber, Lehavy, McNichols, and
Trueman (2001) document that investment strategies based on consensus recommendations,
in conjunction with active portfolio management, yield annual abnormal returns greater than
four percent.
We analyze daily abnormal returns in the U.S. markets from published stock
recommendations of the weekly financial magazine Barron’s. Using a sample from
January 2004 to December 2005, we examine how stock prices in the US stock markets
react to the stock recommendations. Using event study methodology and the market model as
a benchmark, we calculate abnormal returns to ascertain the impact of the recommendations
published in the Research Reports. We find: (1) there are no statistically significant short
term abnormal returns associated with the published recommendations in Barron’s based on
a two-week event window, and (2) there are no statistically significant long term abnormal
returns associated with the published recommendations in Barron’s based on six-month and
twelve-month event windows.
In this research, we ask the following six questions:
(1) Do security prices on the US markets react to the recommendations published in
Barron’s?
(2) Is there any information leakage prior to the publication of share recommendations?
(3) Do the recommendations possess real economic content or permanence, or are they
merely a 'self-fulfilling prophecy'?
(4) Can investors expect to profit by following these recommendations?
(5) Are there any significant positive or negative abnormal returns before and after
publication?
(6) Will there be any abnormal returns over 6-month and 12-month periods?
The plan of this paper is as follows. In Section II, we present our data. Section III
explains the model and methodology. The empirical results are analyzed in Section IV. A
summary and conclusions section ends the paper.
The analyst recommendations used in this study are from Barron’s Weekly. The data
for stock returns are from CRSP database. To test the impact of the publication of
recommendations on abnormal returns, we define event day (day 0) as publication date. The
period plus and minus 10 days surrounding the event day is the 'event window.'
Every week, Barron’s contains a section “Research Reports” in the Market Wrap.
Here is the description about “Research Reports” from Barron’s: “Before an investment firm
recommends a stock for purchase, they'll research the company to determine whether or not
it's a good investment. This column provides a sampling of research report information from
various investment firms and analysts.”
Our sample includes 484 recommendations from Research Reports from January 2004
to December 2004. Table A-1 summarizes the recommendations by upgrading, downgrading,
maintaining, and initiating. Among the 484 recommendations, 133 are buy ratings, 27 strong
buy, 6 overweight, 3 speculative buy, 106 outperform, 3 accumulate, 1 attractive, 37 market
perform, 44 hold, 32 neutral, 53 sell, 11 underweight, and 13 underperform. The
characteristics of our sample are consistent with the findings in McNichols and O’Brien
(1998), Barber, Lehavy, McNichols, and Trueman (2001), and Kadan, Madureira, Wang, and
Zach (2005) in that a relatively small percentage of the sample is sell recommendations.
The market model used to estimate the abnormal return for the jth stock in period t is as
follows:
Rj,t = αj + βj Rm,t + εj,t      (1)
where
Rj,t = return on security j on day t
Rm,t = return on market on day t
αj = a constant over time, stable component of security returns
βj = beta of stock j, assumed stable over time
εj,t = error term or return due to non-market forces (abnormal return).
Equation (1) is used to find the normal or expected returns. According to this model,
each security's return in period t is expressed as a linear function of the contemporaneous
return on the market and a random error term (εj,t) which reflects security-specific returns. The
coefficients of the linear market model (α, β) are estimated by regressing observed rates of
return for stock j on the corresponding rates of return for a market index. In computing these
parameters daily, instead of monthly, data are used because price adjustments may occur
within a few days after publication. Brown and Warner (1985) found that for daily returns the
market model was most successful in identifying abnormal performance.
We use a GARCH model to improve estimation accuracy, since ordinary least
squares (OLS) models assume homoscedastic error terms. A growing body of
literature indicates that many daily return series exhibit heteroscedasticity, and the variance
of the forecast error will depend on the size of the preceding disturbance. ARCH or GARCH
models have been widely used to deal with this heteroscedasticity problem in the time series
analysis.
We collected the 450 daily returns for each recommended stock. Then, we divided the
data into two time periods: the estimation period and the event window. The time line of
whole sample period is denoted from T=-189 to T=+260. We estimated market model
parameters over the estimation period beginning T = –189 through T = –11. This yields an
estimation period of 179 days. To investigate abnormal returns over a 21-day period (10 days
before and 10 days after the event), a 6-month period, and a 12-month period, we use three
event windows. The 21-day event window runs the twenty-one days from T = –10 to T = +10
including T = 0, the event day; the 6-month event window has 141 days from T = –10 to
T = +130 including T = 0; and the 12-month event window has 271 days from T = –10 to
T = +260 including T = 0.
Assuming that the estimated parameters α and β remain unchanged over our sample
period, the expected return E(Rj,t) is computed for each stock over the event window, from t =
–10 to t = +10 for 21 day window, from t = –10 to t = +130 for 6 month window, and from t
= –10 to t = +260 for 12 month window.
E(Rj,t) = αj + βj Rm,t      (2)
The abnormal return (ARj,t) of stock j is defined as the deviation of each return on the
stock j from its expected return, given the return earned by the market index during day t.
Using estimation period data to estimate market model parameters and assuming that these
parameters hold in the event-window, the abnormal returns in the event-window period are
estimated as follows:
ARj,t = Rj,t – E(Rj,t), or ARj,t = Rj,t – (αj + βj Rm,t)      (3)
where Rj,t and Rm,t are the observed daily returns for security j and the market index,
respectively, on day t during the event window.
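The estimation and abnormal-return steps of equations (1) and (3) can be sketched as follows; the return series, the assumed alpha and beta, and the 179-day estimation length are simulated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily returns: a 179-day estimation period for the market
# (Rm) and one stock (Rj), generated with assumed alpha = 0, beta = 1.2.
rm_est = rng.normal(0.0005, 0.01, 179)
rj_est = 0.0 + 1.2 * rm_est + rng.normal(0, 0.005, 179)

# Estimate alpha_j and beta_j by OLS regression of Rj on Rm (equation 1).
X = np.column_stack([np.ones_like(rm_est), rm_est])
alpha_j, beta_j = np.linalg.lstsq(X, rj_est, rcond=None)[0]

# Abnormal returns over a 21-day event window (equation 3):
# AR_jt = R_jt - (alpha_j + beta_j * R_mt)
rm_evt = rng.normal(0.0005, 0.01, 21)
rj_evt = 0.0 + 1.2 * rm_evt + rng.normal(0, 0.005, 21)
ar = rj_evt - (alpha_j + beta_j * rm_evt)

print(round(beta_j, 2))  # close to the assumed 1.2
```

With daily data and an estimation window of this length, the fitted beta recovers the assumed value closely, which is why daily rather than monthly data are preferred here.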
If publication of share recommendations has no impact on the sample stocks, then on
average, one would not expect any abnormal return:
E(ARj,t) = 0 (4)
assuming the standard assumptions hold.
To determine the statistical significance of abnormal returns on any event day t of stock j, we
first compute the Standardized Prediction Error (SPEj,t), an approach originally proposed by
Patell (1976) and Patell and Wolfson (1999) and popularized in the finance literature by
Dodd and Warner (1983). Next, we construct the test statistic Zt for every day t in the event
window, for all N stocks. As the SPEt for a particular event window day aggregates
observations from different periods and across all sample stocks, unfavorable and favorable
effects of confounding events may be offset.
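A simplified version of this standardization and aggregation can be sketched as follows; the abnormal returns and residual standard deviations are simulated, and the full Patell statistic additionally carries an out-of-sample forecast-error correction that is omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated inputs: abnormal returns for N stocks over a 21-day event
# window, and each stock's estimation-period residual standard deviation.
N, T = 484, 21
ar = rng.normal(0, 0.02, (N, T))   # AR_jt under the null of no effect
s_j = np.full(N, 0.02)             # residual std from the market model

# Simplified standardized prediction error (the full Patell SPE also
# includes an out-of-sample forecast-error correction, omitted here).
spe = ar / s_j[:, None]

# Z statistic for each event day t, aggregated across all N stocks;
# under H0 each Z_t is approximately standard normal.
z_t = spe.sum(axis=0) / np.sqrt(N)
print(z_t.shape)  # (21,)
```

Averaging across stocks whose event days fall in different calendar periods is what allows favorable and unfavorable confounding events to offset.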
Assuming that abnormal returns (ARj,t) are independently distributed with finite
variance, the null hypothesis is that publication of share recommendations in Barron’s has no
systematic effect on recommended stocks' prices:
H0: publication of stock recommendations in Barron’s has no statistically significant
impact on stock price
H1: publication of stock recommendations in Barron’s has a statistically-significant
impact on stock price
We also investigate the true economic impact or permanence of the press
recommendations to ascertain whether or not a 'self-fulfilling prophecy' effect exists.
Additionally, we explore whether following press recommendations enables abnormal profits.
We compute the average cumulative abnormal return (ACAR) to analyze the aggregate effect
of such published information in the days prior to publication to determine whether any
'leakage' occurs.
The test statistics assume that abnormal returns during the estimation period are independently
distributed and that the distribution of the test statistics is standard normal.
[Figure: abnormal returns around the event day (x-axis: Event Day)]
The significant AR of .477% on day +3 does not warrant a profitable short-term trading
strategy based on the published recommendation: as mentioned above, a round trip of buying
and selling in US stock markets involves transaction costs of about 1.7%. However, investors
are able to make a short-term abnormal return by buying the recommended stocks before the
recommendations are published, a trading strategy that requires “insider information.” Who
are these potentially profitable traders? They may be clients of the brokerage houses that
make the stock recommendations, people at the journal, or people at the printing agency. The
Z-value of 1.517 for the ACAR of 3.336% on day +4 is significant at the 12 percent level.
From day 0 to day +4 the ACAR is 1.215% (3.336% (day +4) – 2.121% (day 0)). A strategy
of buying the recommended stocks on day 0, after the recommendation is published, does not
yield a reliable abnormal return net of transaction costs of 1.7%.
To see whether investors can earn abnormal returns by following the newspaper
recommendations, we compute the cumulative abnormal returns (CAR). The results are
shown in Figure 3. From day -10 to day -1, the period before the event day, the cumulative
abnormal returns reach 2.10%. From the event day to day +10, the cumulative abnormal
returns are 2.02% (4.12% on day +10 minus 2.10% on day -1). We derive two conclusions
from Figure 3. First, the average transaction costs in US stock markets are 0.85% for a single
trade and 1.70% for a round trip, so investors earn an
insignificant abnormal return of 0.32% (2.02% – 1.70%) by buying the recommended stocks
on day 0. Second, investors are able to make abnormal returns if they know the stock
recommendation prior to its publication. If an investor buys the recommended stock on day
-9 and holds it to day +10, the cumulative abnormal returns yield a 2.13% net profit after
transaction costs (4.12% – 1.70% – 0.196%).
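The arithmetic behind the first conclusion, using the figures quoted above, is simply:

```python
# Net-of-cost arithmetic for the day-0 buy-and-hold conclusion, using the
# figures quoted in the text (all values in percent).
car_day_minus1 = 2.10    # CAR accumulated from day -10 through day -1
car_day_plus10 = 4.12    # CAR accumulated through day +10
round_trip_cost = 1.70   # average round-trip transaction cost

# Buying at publication (day 0) and holding to day +10:
post_event_car = car_day_plus10 - car_day_minus1
net_after_costs = post_event_car - round_trip_cost
print(round(post_event_car, 2), round(net_after_costs, 2))  # 2.02 0.32
```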
IV. CONCLUSIONS
REFERENCES
Barber, Brad M., R. Lehavy, M. McNichols, and B. Trueman, 2001, Security analyst
recommendations and stock returns, Journal of Finance 56, 533-563.
Bjerring, J. H., J. Lakonishok and T. Vermaelen, 1983, Stock prices and financial analysts
recommendations, Journal of Finance 38, 187-204.
Bollerslev, T., R. Chou and K. Kroner, 1992, ARCH modeling in finance, Journal of
Econometrics 52, 5-59.
Brown, S. J. and J. B. Warner, 1980, Measuring security price performance, Journal of
Financial Economics 8, 205-258.
Dimson, E. and P. Marsh, 1986, Event study methodologies and the size effect: the case of
UK press recommendations, Journal of Financial Economics 17, 113-142.
Dimson, E. and P. Marsh, 1984, An analysis of brokers' and analysts' unpublished forecasts
of UK stock returns.
Jegadeesh, Narasimhan, Joonghyuk Kim, Susan D. Krische, and Charles M. C. Lee, 2004,
Analyzing the analysts: When do recommendations add value? Journal of Finance 59,
1083-1124.
Kadan, Ohad, Leonardo Madureira, Rong Wang, and Tzachi Zach, 2005, Conflicts of Interest
and Stock Recommendations - The Effects of the Global Settlement and Related
Regulations, November 2005, Working paper.
Lawrence, Martin, Qian Sun and Francis Cai, 1996, Press recommendations and abnormal
returns on the Stock Exchange of Singapore, The Journal of International Finance
4(2), 49-163.
Liu, P., S. D. Smith and A. A. Syed, 1990, Stock price reactions to the Wall Street Journal's
securities recommendations, Journal of Financial and Quantitative Analysis 25, 399-
410.
INVESTOR RATIONALITY IN PORTFOLIO DECISION MAKING:
THE BEHAVIORAL FINANCE STORY
ABSTRACT
I. INTRODUCTION
The end of the dot-com era of the late nineties and the continuing anxiety over stock
market performance have had a sobering effect on investors and necessitated a serious
revisiting of the rules of investing. Following the catastrophic events surrounding the bursting
of the speculative bubble in March 2000, new attempts are being made to explain the
behavior of financial markets, one of the foremost of which is in the area of behavioral
finance.
Interest in behavioral finance research has been fueled by the inability of the
traditional finance framework to explain many empirical patterns, including stock market
bubbles in Japan in the late eighties and the U.S., and post-announcement earnings drifts. Most
modern textbooks in finance and investing appear to be silent on the influence of behavioral
finance on financial markets. As Olsen (1998) notes, behavioral finance recognizes the
paradigms of traditional finance such as rational behavior and profit maximization in the
aggregate, but asserts that these models are incomplete, since they do not fully consider
individual behavior. Specifically, behavioral finance “seeks to understand and predict
systematic market implications of psychological and economic principles for the
improvement of financial decision making” (ibid.). Thus, the insight of how psychology
affects financial decisions, corporations, and the financial markets is finding greater currency
in mainstream finance.
Financial economists are increasingly coming to believe that the study of psychology
and other social sciences can shed considerable light on the unpredictable and erratic nature
of human behavior, and by extension, challenge the prevailing paradigm of efficiency of
financial markets, as well as explain stock market anomalies, market bubbles, and crashes.
Recognition of human biases and accompanying irrationality warrants greater investigation
so as not to repeat the mistakes of the past. As Jolliffe (2005) explains, “for investors who
bought technology funds during the internet boom, only to see their value halve when the
bubble burst, studying behavioral finance, the analysis of irrational investor behavior, could
pay big dividends.” Despite the authority conferred on the field in the awarding of the 2002
Nobel Prize in Economic Sciences to noted behavioral economist Daniel Kahneman,
behavioral finance is in a relatively incipient stage as a field of rigorous inquiry. Behavioral
finance uses models in which some agents are not fully rational, either because of individual
preferences or mistaken beliefs; it thus encompasses research that drops the traditional
assumptions of expected utility maximization with rational investors in efficient markets in
traditional finance. As Ritter (2003) explains, the twin cornerstones of behavioral finance are
cognitive psychology (how people think) and the limits to arbitrage (when markets will be
inefficient).
This study attempts to discuss the central tenets of behavioral finance and uncover its
impact upon investment decision-making at the individual level. It includes a discussion of
various psychological biases that result in suboptimal investment strategies and concludes
with a discussion of strategies for recognition and avoidance of these biases in portfolio
decision-making and individual retirement planning.
While many research studies have indeed shown that it is hard to 'beat the market', the
assumption of pervasive market efficiency has been muddied by recent events including the
internet stock bubble and the post-Enron reaction to the accounting and business practices of
a large number of US quoted firms. The burgeoning interest in behavioral finance and its
growing body of research question the impact of individual and crowd psychology on
decision-making in financial markets. Under the paradigm of traditional financial economics,
decision-makers are rational and utility maximizing. In contrast, cognitive psychology
suggests that human decision processes are subject to several cognitive illusions, those
caused by heuristic decision-making processes and those arising from the adoption of 'mental
frames', the most salient of which are discussed below.
Representativeness: People tend to judge likelihoods by resemblance to the recent past. If
markets are fully rational, recent trends in share price should not affect expectations of a
share's future price, yet people put too much weight on recent experience. For example, when
equity returns have been high for many years (such as 1982-2000 in the U.S. and Western
Europe), many people begin to believe that high equity returns are “normal.”
Representativeness is poor protection against the laws of chance.
Overconfidence: People are overconfident about their abilities. The classic behavioral
characteristic of "overconfidence" leads many investors to believe they can consistently select
the best investment, manager and/or sector. Entrepreneurs are especially likely to be
overconfident. Overconfidence manifests itself in a number of ways. One example is too little
diversification, because of a tendency to invest too much in what one is familiar with. Thus,
people invest in local companies, even though it is bad from a diversification viewpoint
because their real estate (the house they own) is tied to the company’s fortunes, and they
already have significant human capital invested in the firm. Research shows that men tend to
be more overconfident than women. This manifests itself in many ways, including trading
behavior. Barber and Odean (2001) analyzed the trading activities of people with discount
brokerage accounts. They found that the more people traded, the worse they did, on average;
men traded more, and did worse, than women investors.
Anchoring: Anchoring arises when a value scale is fixed or anchored by recent observations.
An example would include a case where a share has recently suffered a substantial fall in
price. An investor may be tempted to evaluate the 'worth' of the share by reference to the old
trading range. Consider, for example, a company whose stock is trading at $10 a share. The
company then announces a 300% earnings increase, but its stock price rises only to, say, $12
a share. The small rise occurs because investors are “anchored” to the $10 price. They believe
that the earnings increase is temporary when, in fact, the company will probably maintain its
new earnings level.
Loss Aversion: Loss aversion is based on the idea that the mental penalty associated with a
given loss is greater than the mental reward from a gain of the same size. If investors are loss
averse, they may be reluctant to realize losses and may even take increasing risks to escape
from a losing position. This provides a viable explanation for 'averaging down' investment
tactics, whereby investors increase their exposure to a falling stock, in an attempt to recoup
prior losses. Shefrin (2001) terms this phenomenon “escalation bias”.
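Loss aversion is commonly formalized with the prospect-theory value function of Kahneman and Tversky; a minimal sketch, using the often-cited Tversky-Kahneman (1992) parameter estimates (curvature 0.88, loss-aversion coefficient 2.25) purely for illustration:

```python
# Prospect-theory value function (Kahneman and Tversky, 1979). The
# curvature ALPHA = 0.88 and loss-aversion coefficient LAM = 2.25 are the
# Tversky-Kahneman (1992) estimates, assumed here purely for illustration.
ALPHA, LAM = 0.88, 2.25

def value(x: float) -> float:
    """Mental value of a gain (x > 0) or loss (x < 0) relative to a
    reference point; losses are weighted LAM times as heavily."""
    return x ** ALPHA if x >= 0 else -LAM * ((-x) ** ALPHA)

# The penalty from a $100 loss outweighs the reward from a $100 gain:
print(round(value(100.0), 1), round(value(-100.0), 1))
```

Under these parameters the mental penalty of a loss is 2.25 times the reward of an equal-sized gain, which is the asymmetry that makes realizing losses so painful.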
Mental Accounting: Investors may, for example, borrow at high interest rates to finance a consumer
item, while simultaneously saving at lower interest rates for a child's college fund. The use of
mental accounts could be partly explained as a self-control device. As investors have
imperfect self-control, investors may separate their financial resources into capital and
'available for expenditure' pools, in an effort to control their urge to overconsume. Investors
tend to treat each element of their investment portfolio separately, possibly forgoing the
benefits of portfolio diversification. This can discourage an investor from selling a losing
investment, and possibly forgoing an alternative investment opportunity, because its 'account'
is showing a loss.
If such patterns exist, there may be scope for investors to exploit the resulting pricing
anomalies to capture superior, risk-adjusted returns. Proponents of EMH, in fact, argue that
smart money will exploit such anomalies and drive prices to their fundamental values. Other
research, however, shows that rational investor trading is unable to completely offset the
actions of irrational investors. This, as pointed out by Miller (1977), is largely due to the
inability of smart money to engage in short sales when the bulk of shares are held by
irrational investors. Using data on the interest cost of borrowing and lending shares in the
1920s and 1930s, Jones and Lamont (2001) show that shares that were more expensive to
short tended to be highly priced and had lower subsequent returns on average as predicted by
Miller's theory.
The preceding discussion has reviewed human behavioral biases and the manner in
which they impair sound decision-making and hurt investor pocketbooks. Strategies that
would be most beneficial to individual investor decision-making, at their core, require self-
awareness and discipline. Specifically, investors can immunize themselves against these
biases by adopting the following strategies:
Understanding biases: Recognition of biases in oneself and others can be the first step in
avoiding them.
Quantifying investment criteria: Quantifying investment goals prevents one from acting on
rumors, emotion, and other detrimental biases. The criteria for investing must first meet
quantitative benchmarks and can be supplemented by qualitative information such as the
firm’s recognition as a producer of quality products.
Diversifying: The principle of diversification was reinforced when Enron collapsed and $3
million portfolios evaporated in value. Diversification across different industries and across
different investment vehicles
(stocks, bonds, real estate, precious metals) would limit investment in one’s employer’s
stock. This is desirable because all of one’s human capital is already invested in the employer-firm.
Controlling one’s investment environment: This entails checking one’s stocks once per
month, trading just once per month on the same day each month, and reviewing the portfolio
annually to verify that investments are meeting desired strategies.
Understanding that earning the market rate of return, or even slightly underperforming it, is
not catastrophic to wealth: The strategies for earning abnormal profits usually exacerbate
cognitive biases and ultimately contribute to lower returns. Portfolio strategies based on
indexing, which inhibit the deleterious effects of biases and wring the emotion out of
investing, are therefore deemed the most successful.
VI. CONCLUSION
The extent of research in the field of behavioral finance has grown noticeably in the
past decade. The field merges concepts from financial economics, psychology and sociology
in an attempt to construct a more detailed model of human behavior in financial markets.
Currently, no unified theory of behavioral finance exists. Shefrin and Statman (1994) began
work in this direction, but so far, most emphasis in the literature has been on identifying
behavioral decision-making attributes that are likely to have systematic effects on financial
market behavior. While behavioral factors undoubtedly play a role in the decision-making
processes of investors, they do not quash all the predictions of efficient market theory; they
offer plausible explanations of financial markets which would otherwise be categorized as
anomalous. The current state of research from the efficient market and behavioral
perspectives therefore suggests that an inclusive and diverse approach in the choice of
theoretical explanations of the behavior of financial markets will be the pragmatic response to
the inconclusive results on either side of the debate. While, on the one hand, not many
investors are profiting from market anomalies, many will agree that the stock market
bubble burst of 2000 is better explained by hubris and irrational exuberance grounded in
behavioral finance than by the efficient markets theory. This research benefits individual
investors the most as it seeks to create awareness of the various human biases and the costs
they impose on their portfolios, and advocates voluntary detachment from the “emotion”
inherent in investing.
REFERENCES
Barber, Brad, and Terry Odean, 2001, Boys will be boys: Gender, overconfidence, and
common stock investment, Quarterly Journal of Economics, v. 116, 261-292.
Jolliffe, Alexander, 2005, Following the herd could cost you dear: Behavioral Finance, The
Financial Times (London Edition), January 29, c5.
Jones, C M, and O. A. Lamont, 2001, Short-Sale Constraints and Stock Returns, NBER
Working Paper No 8494.
Kahneman, Daniel, and Amos Tversky, 1979, Prospect theory: An analysis of decision
making under risk, Econometrica, 47(2), 263-291.
Miller, Edward M., 1977, Risk, Uncertainty and Divergence of Opinion, Journal of Finance,
32(4), 1151-1168.
Olsen, Robert A., 1998, Behavioral finance and its implications for stock-price volatility,
Financial Analysts Journal, 54(2), March-April, 10-18.
Ritter, Jay, 2003, Behavioral Finance, Pacific-Basin Finance Journal, Vol. 11, No. 4,
September, 429-437.
AN ANALYSIS OF THE MOVEMENT OF FINANCIAL INDUSTRY INDEXES ON
THE STOCK EXCHANGE OF THAILAND
ABSTRACT
This paper provides an analysis of the movement of Financials industry indexes: Banking,
Securities, and Insurance sector indexes on the Stock Exchange of Thailand from January 3,
1995 through December 30, 2004. The results of the Durbin-Watson Test Statistic indicate
that the movement of the SET index and of the sub-sector indexes of the Financials industry
was random. The GARCH-M results show a positive relationship between the variances of
the sub-sector indexes of the Financials industry and the SET index. Furthermore, any shock
to the SET will affect the sub-sectors, since the persistence of stock market volatility is high.
The results of the Granger causality test indicate two-way causal relationships between the
Securities and Insurance indexes, as well as between the Insurance and Banking indexes, but
only a one-way relationship from the Banking index to the Securities index.
I. INTRODUCTION
This study presents the results of a research investigation into the movement of
Thailand’s Financials industry sector indexes in the context of the Stock Exchange of
Thailand (SET) index, which is a market capitalization weighted price index that compares
the current market value of all listed common shares with their base market value. The SET
Index calculation is continuously adjusted for new listings, delisting and capitalization
changes in order to eliminate effects other than price movement from the index (SET, 2004).
In addition, this study analyzed the relationships between the variances of the sub-sector
indexes of the Financials industry and the SET index. Finally, this study determined Granger
causality among Banking, Securities, and Insurance sector indexes. The sector indexes are
calculated from the prices of common shares in each sector.
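A market-capitalization-weighted price index of the kind described above can be sketched as follows; the constituent prices, share counts, and base level of 100 are assumptions for illustration, not the SET's actual figures:

```python
# A capitalization-weighted price index: current market value of the
# constituents relative to their base-period market value, scaled to an
# assumed base level of 100.
def cap_weighted_index(prices, shares, base_prices, base_level=100.0):
    current_value = sum(p * s for p, s in zip(prices, shares))
    base_value = sum(p * s for p, s in zip(base_prices, shares))
    return base_level * current_value / base_value

# Two hypothetical stocks: one rises 10%, one falls 5% from the base.
level = cap_weighted_index(prices=[11.0, 19.0], shares=[1000, 500],
                           base_prices=[10.0, 20.0])
print(level)  # 102.5
```

Adjusting for new listings, delistings, and capitalization changes amounts to recomputing the base market value so that only price movement changes the index level.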
This study used the daily index prices of the SET and of the three Financials industry
sectors: the Banking sector, Securities sector, and Insurance sector. These secondary data were obtained
from the SET Library, covering the ten year period from January 3, 1995 through December
30, 2004. The movement of the SET index and the Financials industry indexes: Banking
sector index, Securities sector index, and Insurance sector index was analyzed by using the
Durbin-Watson Test Statistic to determine autocorrelation.
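The Durbin-Watson statistic itself is straightforward to compute; a sketch on simulated series, where values near 2 indicate no first-order autocorrelation (i.e. randomness) and values near 0 indicate strong positive autocorrelation:

```python
import numpy as np

def durbin_watson(e):
    """DW statistic: sum of squared successive differences of a series
    divided by its sum of squares. Values near 2 suggest no first-order
    autocorrelation; values near 0, strong positive autocorrelation."""
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(2)
noise = rng.normal(size=2000)    # uncorrelated series -> DW near 2
walk = np.cumsum(noise)          # random-walk level series -> DW near 0

print(round(durbin_watson(noise), 2), round(durbin_watson(walk), 4))
```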
The relationships between the variances of the sub-sector indexes of the Financials
industry and the SET index were analyzed with the Generalized Autoregressive Conditional
Heteroscedasticity in Mean (GARCH-M) model.
The Granger causality test (Granger, 1969, 1988) was used to determine Granger causality
among the Financials sector indexes: the Banking sector index, Securities sector index, and
Insurance sector index.
Also, as stated above, the SET as a proxy for the information set has a direct impact on the
risk. As a result, the information set at time (t − 1) becomes very important to the investor,
therefore the SET is introduced in the variance equation as an exogenous regressor, with one
period lag to account for the information at (t − 1) :
hi,t = w + αi e²t−1 + βi hi,t−1 + sett−1      (5)
The sum of α and β represents the rate at which volatility clustering persists through
time. If the sum of α and β equals one, current SET volatility persists indefinitely in
conditioning the future variance; as the sum approaches unity, the persistence of stock market
volatility is greater.
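The variance recursion in equation (5) and the persistence measure α + β can be sketched as follows; all parameter values are assumed purely for illustration:

```python
import numpy as np

# Variance recursion of equation (5):
#   h_t = w + alpha * e_{t-1}^2 + beta * h_{t-1} + set_{t-1}
# All parameter values below are assumed purely for illustration.
w, alpha, beta = 1e-6, 0.10, 0.85

rng = np.random.default_rng(3)
T = 500
e = rng.normal(0, 0.01, T)                 # return shocks
set_lag = np.abs(rng.normal(0, 1e-6, T))   # exogenous SET regressor, lagged

h = np.empty(T)
h[0] = w / (1 - alpha - beta)              # start at the unconditional level
for t in range(1, T):
    h[t] = w + alpha * e[t - 1] ** 2 + beta * h[t - 1] + set_lag[t - 1]

# alpha + beta measures how persistently volatility clusters: the closer
# the sum is to one, the longer a shock conditions future variance.
print(round(alpha + beta, 2))  # 0.95
```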
(3.3) Granger causality test:
This study used the Granger causality test to examine causal linkages among sectors in the
Financials industry. There are three sectors in the Financials industry: Banking sector,
Securities sector, and Insurance sector. This study examined whether any sector index
Granger-caused any other sector index.
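A Granger causality test of this kind reduces to an F-test comparing a restricted autoregression of one index against the same model augmented by lags of the other; a one-lag sketch on simulated data, where the series and coefficients are hypothetical:

```python
import numpy as np

def granger_f(y, x):
    """F statistic for 'x does not Granger-cause y' with one lag:
    compare an AR(1) of y against the same model augmented by x[t-1]."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    Y, ylag, xlag = y[1:], y[:-1], x[:-1]
    n = len(Y)

    def ssr(X):  # sum of squared OLS residuals
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ beta
        return r @ r

    ones = np.ones(n)
    ssr_r = ssr(np.column_stack([ones, ylag]))        # restricted
    ssr_u = ssr(np.column_stack([ones, ylag, xlag]))  # unrestricted
    return (ssr_r - ssr_u) / (ssr_u / (n - 3))        # one restriction

# Simulated indexes in which x leads y, so causality runs x -> y only.
rng = np.random.default_rng(4)
T = 1000
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.3 * y[t - 1] + 0.5 * x[t - 1] + 0.1 * rng.normal()

print(granger_f(y, x) > granger_f(x, y))  # True: one-way causality
```

A large F (small p-value) rejects the null that the lagged series adds no predictive content, which is how the hypotheses in Table 3 are evaluated.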
(4.1.1) Random Walk Theory: The theory of random walks (Fama, 1995) implies that “a
series of stock price changes has no memory—the past history of the series cannot be used to
predict the future in any meaningful way. The future path of the price level of a security is no
more predictable than the path of a series of cumulated random numbers.” Therefore, the
SET index and the three sub-sector indexes of the Financials industry behaved as random
walk markets for the time period from January 3, 1995 to December 30, 2004.
The findings from this study mirror the results of previous studies on the random walk
theory, such as Dyer (1976), Hasan (2004), and Van Horne and Parker (1967).
(4.1.2) Efficient Market Hypothesis: This study also tested the efficient market hypothesis.
Since only price data was taken into account, only the “weak-form” of market efficiency
could be tested. The results in this study confirmed the “weak-form” of the efficient market
hypothesis, which states that past stock price movements are fully reflected in the current stock
price. This implies that technical analysis cannot be used to predict future stock prices. Previous
stock price information is unrelated to future stock prices, making it impossible to predict
future stock prices from historical price movements. Therefore, past stock price movement
within the SET index and the three sub-sector indexes of Financials industry was fully
reflected in future stock prices for the time period from January 3, 1995 to December 30,
2004.
The findings from this study mirror the results of previous studies on the weak-form
efficient market hypothesis, such as Al-Loughani and Chappell (1997).
(4.1.3) Chaos Theory: Furthermore, the findings of unpredictable behavior within the SET
index and the three sub-sector indexes of the Financials industry imply conformance to chaos
theory, which states that it is impossible to predict future outcomes in such systems.
(4.2) GARCH-M Results
Table 3. Summary of Granger Causality Test Results of the Causal Linkages Among
Sectors in the Financials Industry for the Ten-Year Period

Null Hypothesis                                     P-Value     Reject Null Hypothesis
Banking does not Granger cause Securities           0.02640     Yes
Securities does not Granger cause Banking           0.23672     No
Securities does not Granger cause Insurance         5.60E-10    Yes
Insurance does not Granger cause Securities         0.00759     Yes
Insurance does not Granger cause Banking            0.04144     Yes
Banking does not Granger cause Insurance            2.60E-10    Yes
Table 3 presents a summary of the Granger causality test results of the causal
linkages among the three sub-sectors in the Financials industry: the Banking, Securities, and
Insurance sectors on the Stock Exchange of Thailand for the ten-year period from
January 3, 1995 to December 30, 2004. A P-value of less than 0.05 means that the
corresponding null hypothesis is rejected. Therefore, all but one of the null hypotheses are
rejected.
The results show that Granger causality runs both ways between Securities sector
index and Insurance sector index, as well as Insurance sector index and Banking sector index,
but only one way from Banking sector index to Securities sector index for the time period
covered in this study. It is not clear why the relationship between Banking sector index and
Securities sector index is one-way, but a possible explanation relates to the economic crisis in
Southeast Asia. During this crisis, which started in July 1997, a large number of the
companies that made up the Securities sector failed.
V. CONCLUSION
In conclusion, the findings of the Durbin-Watson Test Statistic confirm that the
movement of the SET index and the sub-sector indexes of the Financials industry is random.
The findings of GARCH-M confirm that there is a positive relationship between the
variances of the sub-sector indexes of the Financials industry and the SET index. The
findings of the Granger causality test confirm that there are two-way causality relationships
between the Securities sector index and the Insurance sector index, as well as between the
Insurance sector index and the Banking sector index, but only a one-way relationship from
the Banking sector index to the Securities sector index. Why this relationship is one-way is
not clear, but it might be related
to the failure of more than half of Thailand’s non-bank financial institutions during the
economic crisis in Southeast Asia. Other researchers may be interested in investigating this
area further.
REFERENCES
Al-Loughani, Nabeel and Chappell, David. “On the Validity of the Weak-Form Efficient
Markets Hypothesis Applied to the London Stock Exchange.” Applied Financial Economics,
7, 1997, 173-176.
Bildik, Recep and Elekdag, Selim. “Effects of Price Limits on Volatility Evidence
from the Istanbul Stock Exchange.” Emerging Markets Finance and Trade, 40, (1),
2004, 5-34.
Dyer, James C. “Random Walks in Australian Share prices: A Question of Efficient Capital
Markets.” Australian Economic Papers , 15, (27), 1976, 186-200.
Engle, Robert F., Lilien, David M. and Robins, Russell P. “Estimating Time Varying Risk
Premia in the Term Structure: the ARCH-M model.” Econometrica, 55, 1987, 391-407.
Fama, Eugene F. “Random Walks in Stock Market Prices.” Financial Analysts Journal, 1995,
75-80.
Granger, Clive W. J. “Investigating Causal Relations by econometric models and cross-
spectral methods.” Econometrica, 37, 1969, 424-438.
Granger, Clive W.J. “Some Recent Developments in a Concept of Causality.” Journal of
Econometrics, 39, 1988, 199-211.
Hasan, Mohammad S. “On the Validity of the Random Walk Hypothesis applied to the
Dhaka Stock Exchange.” International Journal of Theoretical & Applied Finance, 7, (8),
2004, 1069-1086.
Rahman, Hamid and Yung Kenneth. “Atlantic and Pacific Stock Markets Correlation and
Volatility Transmission.” Global Finance Journal, 5, 1994, 103-119.
SET. The Stock Exchange of Thailand: Sector Information. Thailand Stock Exchange
Library, 2004.
THE SHORT SQUEEZE AT YEAR-END
ABSTRACT
In this paper, I attempt to assess whether the information imparted through the NYSE
and NASDAQ short interest news releases creates a trading rule. Much work has been done
on the impact that short interest releases have on the price discovery process. Specific to
this paper, however, is the issue of a short squeeze, which has appeared in the public press of late.
It has been argued that the longer the coverage ratio, the more likely shorters are to be
tempted to cover. If there is a slight increase in prices through demand pressure, shorters
quickly cover their positions, causing a larger increase than otherwise, with prices eventually
falling back. Using a sample of shorted securities ranked by short interest coverage ratio, I find
that there does not appear to be a consistent trading strategy.
I. INTRODUCTION
There are several potential reasons to short a security. Speculators infer that the
security is overpriced and its price will soon fall. Hedgers may sell short to lock in future
prices or to take an arbitrage position in combination with other securities. Further, investors
may take a short position in a security they already hold long (shorting against the box) to
defer capital gains taxes, to name a few reasons. Most pertinent to this study,
however, is the issue that outstanding short positions eventually need to be covered. A
recent article in the Wall Street Journal presents a trading rule in anticipation of short
covering (using the short interest ratio) through November and December (Zuckerman,
November 17, 2005, pg. C1). It argues that by creating a portfolio consisting of the top five
short positions (by coverage ratio), you would have done better than the S&P 500 over each
of the past four years.
Previous literature on US markets has examined short selling and its impact on
individual returns, with mixed results. Several studies examine the relationship between
short interest and price discovery: some find a positive relationship (see, for
example, Brent, Morse and Stice (1990), Senchack and Starks (1993), and Figlewski
(1981), amongst others), and others find no evidence of a relationship (see, for example,
Woolridge and Dickinson (1994), and Vu and Caster (1987), amongst others).
Specific to this study, McDonald and Baron (1973) find that the short interest ratio
provides neither bullish nor bearish predictive power for future stock movements. Asquith et
al. (2004) find that the short interest ratio provides a signal only when using an equally-
weighted (not value-weighted) portfolio to evaluate excess returns. The short interest ratio is
calculated as the present cumulated short position relative to average daily trading
volume, thus presenting a ‘days-to-cover’ ratio. The higher the ratio, the longer it would
take the outstanding shorters to cover their positions.
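As a concrete illustration, the days-to-cover computation can be sketched in a few lines; the share counts below are invented, and the conventional denominator is average daily trading volume:

```python
# Days-to-cover (short interest coverage) ratio: the outstanding short
# position divided by average daily trading volume, i.e. how many typical
# trading days shorters would need to cover. Figures are illustrative.
def days_to_cover(short_interest_shares: float, avg_daily_volume: float) -> float:
    if avg_daily_volume <= 0:
        raise ValueError("average daily volume must be positive")
    return short_interest_shares / avg_daily_volume

# Example: 12 million shares sold short, 1.5 million shares traded per day.
print(days_to_cover(12_000_000, 1_500_000))  # 8.0 days to cover
```

A higher value means shorters would need more days of typical volume to unwind their positions, which is why a long coverage ratio is associated with squeeze risk.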
This paper is organized as follows. Section II describes the data and methodology. In
Section III, the stock price reactions to short interest ratios are discussed. Finally, the
conclusions are summarized in Section IV.
320
II. DATA AND METHODOLOGY
Sample Design
The sample consists of all NYSE, AMEX and NASD companies whose short interest
ratio positions are disclosed in the WSJ from November 2002 through February 2003, and
from November 2003 through December 2003. The disclosure typically contains the top 10
firms with the longest coverage ratio in days. The sample further contains data on a control
group, namely, the next 10 firms organized by coverage ratio. This group is not categorized
as such in print. Since it is expected that most investors would typically focus on the
categorized list, there may be a difference in price impact for the control group. The WSJ
reports short positions on a monthly basis, determined from the required submissions of
trading houses detailing their short transactions through the prior month. Specifically,
member firms must convey all outstanding short positions of their clients to the NYSE and
NASDAQ based on settlement by the 15th of every month. Since it is settlement, the data
includes trades up to 3 business days prior. If the 15th is a holiday, then the reporting date is
the prior business day. The NYSE imposes a delivery deadline of 2 business days following
the 15th, and then disseminates the news to the public 2 business days following that. The
WSJ publishes the short data electronically on that date, after the markets close, and it
appears in the following day’s print edition. The process for NASD stocks is identical save
for the release to the public, which occurs 5 business days after the delivery deadline, also
after markets close. Thus, information contained in the press release is cumulative in nature
and typically close to two weeks old.
Methodology
The estimation period consists of daily stock price returns and equally-weighted index
returns from October 1st through to February 28th for both 2002 and 2003. Cumulated stock
returns are compared to index return values, both before and after the New Year. Descriptive
statistics are reported in Table 1. Excess returns for each firm are reported in Table 2.
Portfolios are created at the time of the newspaper release, and held until the end of the
holding period(s). In the case of October 2002, three portfolios are formed on October 25th,
and subsequently held until the end of October, November and December, respectively.
Table I below lists the number of companies used in the creation of each portfolio per month.
For example, December 2002 for NASDAQ lists 16 stocks that complete the top 10 portfolio,
and 16 in the control sample portfolio.
Table I: Number of companies per portfolio

                2002                      2003
         Oct  Nov  Dec  Jan  Feb    Oct  Nov  Dec
NASDAQ    16   17   16   15   17     16   18   18
NYSE      18   17   19   20   18     18   20   17
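The sample design, the published top 10 by coverage ratio plus the next 10 as a control group, can be sketched as follows; the tickers and ratios are hypothetical:

```python
# Split a short-interest snapshot into the published "top 10" portfolio and
# the next-10 control group, ranked by days-to-cover. Tickers and ratios
# below are hypothetical, not from the paper's sample.
def form_portfolios(snapshot, top_n=10):
    ranked = sorted(snapshot, key=lambda pair: pair[1], reverse=True)
    top = [ticker for ticker, _ in ranked[:top_n]]
    control = [ticker for ticker, _ in ranked[top_n:2 * top_n]]
    return top, control

# Toy example with top_n=2 for brevity.
snapshot = [("AAA", 12.4), ("BBB", 9.1), ("CCC", 15.0), ("DDD", 7.2), ("EEE", 10.8)]
top, control = form_portfolios(snapshot, top_n=2)
print(top)      # ['CCC', 'AAA']
print(control)  # ['EEE', 'BBB']
```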
III. RESULTS
NASD top 10 results suggest that a portfolio created on October
25th 2002 would have outperformed the EW index if held through to the end of November.
Results for 2003 show that you would have outperformed the EW index through December.
Portfolios created based on November releases would not have yielded positive performance
in 2002, but would have outperformed in 2003 if you held through November. Using the
control sample, results are strongly positive for October 2002 portfolios, yet not for October
2003 portfolios. November 2002 portfolios again yield positive excess returns, whereas
November 2003 portfolios do not. NYSE top 10 October portfolio results are highly positive
in 2002, and negative in 2003. And, finally, all NYSE control portfolios (save for the
November 2003) are negative.
Table II presents results on the equally weighted composite portfolio as well as the NASD
and NYSE top 10 and control portfolios. Each portfolio’s return is calculated through a
number of holding periods. For example, October 2002 has three portfolio returns; the first
presents an excess return from Oct. 25th through Oct. 31st, the second from Oct. 25th through
Nov. 29th, and the third from Oct. 25th through Dec. 31st.
Equally Weighted NYSE/AMEX/NASDAQ Composite Index
(columns: 2002 Oct Nov Dec Jan Feb; 2003 Oct Nov Dec)
First holding period:   0.028218  0.026808  0.00357  0.006921  0.008929  0.031286  0.01189  0.018404
Second holding period:  0.147153  -0.00533  -0.00955  0.071142  0.052609
Third holding period:   0.115015  0.111861
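The excess returns in Table II are, in essence, a cumulated equally weighted portfolio return minus the cumulated index return over the same holding period; a minimal sketch with invented daily returns:

```python
# Cumulate daily simple returns over a holding period, then take the
# equally weighted portfolio's cumulated return minus the index's.
# The daily return figures below are invented for illustration.
def cumulate(daily_returns):
    total = 1.0
    for r in daily_returns:
        total *= 1.0 + r
    return total - 1.0

def excess_return(per_stock_daily_returns, index_daily_returns):
    portfolio = sum(cumulate(r) for r in per_stock_daily_returns)
    portfolio /= len(per_stock_daily_returns)  # equal weighting across stocks
    return portfolio - cumulate(index_daily_returns)

stocks = [[0.01, -0.005, 0.02], [0.00, 0.015, -0.01]]
index = [0.004, 0.002, 0.001]
print(round(excess_return(stocks, index), 6))
```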
IV. CONCLUSIONS
It has been argued that short positions are relatively more risky toward year end as the
likelihood of a short squeeze increases for those that have more days to cover. This paper
shows that the results, when based on an equally weighted portfolio, are more consistently
positive for NASD listed stocks than for NYSE, yet the results are not unequivocally
consistent through the two years of this study. It appears as though a trading rule does not
exist when examining the top 10 short stocks in an equally weighted portfolio relative to an
equally weighted index.
REFERENCES
Aitken, M., A. Frino, M. McCorry, and P. Swan, 1998. Short Sales are Almost
Instantaneously Bad News: Evidence from the Australian Stock Exchange, Journal of
Finance, 53, 2205-2223.
Asquith, P., P. A. Pathak, J. R. Ritter, 2004. Short Interest and Stock Returns, Working
Paper, Harvard University.
Asquith, P. and L. Meulbroek, 1996. An Empirical Investigation of Short Interest, Working
Paper, Harvard University.
Brent, A., D. Morse, and E.K. Stice, 1990. Short Interest: Explanations and Tests, Journal of
Financial and Quantitative Analysis, 25, 273-289.
Choie, K., and S.J. Hwang, 1994. Profitability of Short-Selling and Exploitability of Short
Information, Journal of Portfolio Management, 20, 33-38.
Dechow, P. M., A. P. Hutton, L. Meulbroek, R. G. Sloan, 2001. Short-sellers, fundamental
analysis, and stock returns, Journal of Financial Economics, 61, 77-106.
Desai, H., K. Ramesh, S. R. Thiagarajan, B. V. Balachandran, 2002. An Investigation of the
Informational Role of Short Interest in the Nasdaq Market, Journal of Finance, 57,
2263-2287.
Diamond, D.W., and R.E. Verrecchia, 1987. Constraints on Short-Selling and Asset Price
Adjustments to Private Information. Journal of Financial Economics, 18, 277-311.
Elfakhani, S., 1997. Short Sellers Aren’t Always Right, Canadian Investment Review, Winter,
9-14.
Figlewski, S., 1981. The Informational Effects of Restrictions on Short Sales: Some Empirical
Evidence. Journal of Financial and Quantitative Analysis, 4, 463-476.
McDonald, J.G. and D.C. Baron, 1973. Risk and Return on Short Positions in Common Stocks.
Journal of Finance, March, 97-107.
Rubinstein, M., 2004. Great Moments in Financial Economics: III. Short-Sales and Stock
Prices. Journal of Investment Management, 2, 16-31.
Senchack, A.J., and L.T. Starks, 1993. Short-Sale Restrictions and Market Reaction to Short-
Interest Announcements, Journal of Financial and Quantitative Analysis, 28, 177-194.
Vu, J.D., and P. Caster, 1987. Why All the Interest in Short Interest? Financial Analysts
Journal, 43, 76-79.
Woolridge, J.R. and A. Dickinson, 1994. Short Selling and Common Stock Prices. Financial
Analysts Journal, 20-28.
Zuckerman, G., 2005. Now Showing, Again: ‘Get Shorty’, Wall Street Journal, November
17, p. C1.
CHAPTER 11
FACILITATING GROUP PROJECTS IN A WEB DESIGN COURSE
ABSTRACT
This paper discusses assigning group projects to students taking a web design
course. It illustrates the plan of a particular web design course at the Eberly College of
Business, Indiana University of Pennsylvania. The paper begins by distinguishing between
two methods that fall within the category of group learning. It then shifts focus to describe
some of the advantages of adopting group projects and the challenges of
incorporating such projects in college courses. The paper then focuses on a web design course
that incorporates group projects and shows that the design of the course addresses some of
the challenges of introducing group projects into college courses.
I. INTRODUCTION
Group projects are described here as tasks assigned to a group of two or more
students who work together to complete the responsibilities in the project. Various
benefits can be gained from assigning group projects to students. However, assigning group
projects and then evaluating them has met with some challenges that need to be addressed
before including them in the course requirements.
Two paradigms are noted regarding group work in college courses: cooperative learning
and team-based learning. Smart and Csapo (2003) note the difference between the two:
“Cooperative learning can be characterized by three things: (1) Using assigned roles
within groups; (2) having the teacher monitor the groups to see how they are handling
the contents and how well the groups are working; and (3) spending time after the
small-group exercise to process the small-group activity. Team-based learning differs
in that it relies on the teams themselves to [assess] individual and group performance and to
improve performance as necessary (p. 317)”.
This paper focuses on group projects as an activity within college courses and as a
tool that can be implemented in both of the paradigms mentioned here. In other words, the term
“group projects” in this paper refers to a tool used to facilitate the learning
process within either approach: team-based or cooperative learning.
Assigning group projects to students in courses offers several advantages. Smart and
Csapo (2003) identified four benefits of having students work in groups:
“Enhancing communication and decision making, increasing productivity with higher
level of involvement, commitment and motivation, improving processes and
distributing workload” (p. 316).
Increasing Productivity
A study compared individual work performance versus group work performance.
The study found that the performance of every group that participated exceeded
the performance of the best individual participant (West and Hollingsworth,
2004). This finding underscores the point that productivity increases as groups work together.
This section explains some of these challenges or difficulties that face teachers
who attempt to assign group projects in their courses.
Forming the Group
The first challenge in assigning group projects is the method the teacher
follows to form the groups. The teacher here is faced with two challenges: first, he/she
needs to sell the idea of group forming to the class (Cohen et al., 2004). Second, the teacher
needs to establish a procedure or mechanism for selecting and forming each group.
Selling the students on the advantages of team work is important to the
success of the work being done. The students, after all, are faced with the extra work of
communicating with their peers and of coordinating their work with their
group members. Forming the groups can be done by leaving it to the students to self-select, or
it can be completed through selection by the teacher, a process intended to increase diversity
in each group. Research has shown that groups formed by self-selection are least effective,
while groups selected by teachers are most effective (Smart and Csapo, 2003).
The formation of the groups starts at the beginning of the semester, when the teacher
surveys the students about their backgrounds and preferences. Each group works together until
the end of the semester; when it comes time to work on the final project, the students can
then form new groups.
The role of each student in the group changes from one project to another. The
students rotate the responsibility of project coordinator, with a different
student assigned as group coordinator for each project. The project coordinator is
responsible for collecting the individual work of the students, submitting it to the teacher,
presenting it in class, and providing the project completion form. This last form contains the
names of the students and a description of the tasks completed by each of them.
Textbook Selection
One of the textbooks selected for this course helps with assigning various group
projects. The textbook (Evans, 2004) is divided into nine tutorials and contains four case
projects at the end of each tutorial. Each tutorial describes steps that culminate in a
complete web site for a particular company. The case projects are applications
of the concepts learned in the tutorial; each case represents a web site for a different
company. The students work individually on the web sites described in the tutorial and work
in separate groups on each of the case projects listed at the end of the tutorial.
Software Selection
Microsoft FrontPage 2003 is one of the software packages used in this course. Two
features in FrontPage help with coordinating the groups’ work and with monitoring
the performance of group members: the Tasks view and source control. In the Tasks view,
individual students record tasks and describe the duties completed within the project. With
source control, students can “check out” individual pages to work on them. Pages that are
checked out remain in the custody of the student working on them until they are checked in.
During a check-out, other students cannot work on that particular file.
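The exclusive check-out behavior described above can be modeled as a simple lock table; this sketch is illustrative only and does not reflect FrontPage's actual implementation:

```python
# Toy model of source-control "check-out": a page is held by one student at a
# time; others cannot edit it until it is checked back in. Illustrative only.
class PageLocks:
    def __init__(self):
        self._holders = {}  # page name -> student currently holding it

    def check_out(self, page, student):
        if page in self._holders:
            return False  # page is already checked out by someone
        self._holders[page] = student
        return True

    def check_in(self, page, student):
        if self._holders.get(page) != student:
            return False  # only the current holder may check the page in
        del self._holders[page]
        return True

locks = PageLocks()
print(locks.check_out("index.htm", "student_a"))  # True
print(locks.check_out("index.htm", "student_b"))  # False: page is held
print(locks.check_in("index.htm", "student_a"))   # True: page released
print(locks.check_out("index.htm", "student_b"))  # True
```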
VI. CONCLUSION
This paper discussed group projects and their inclusion as a tool for student learning
and evaluation in a web design course. It started by explaining the difference between team-
based learning and cooperative learning. It then explained the advantages of, and the
challenges that face, adopting group projects in general. It then concentrated on a web design
course at Indiana University of Pennsylvania and showed that the design of the course
addressed the challenges of group projects. However, the result of this design, its outcome,
and the level of satisfaction among the students are still unknown. The author plans to
conduct a survey at the completion of the semester, asking the students about their
perspectives on the group projects and soliciting suggestions on how to improve the process.
Because the course was not yet complete at the time of publishing this paper, the survey
and its results can be the subject of a future paper.
REFERENCES
Cohen, Elizabeth G., Celeste M. Brody, & Mara Sapon-Shevin (2004). Teaching Cooperative
Learning: The Challenge for Teacher Education. Albany: State University of New York
Press.
Evans, Jessica (2003). New Perspectives: Microsoft FrontPage 2003 Comprehensive. Boston:
Course Technology.
Holmes, Monica C., Nancy Csapo & Fergle D’Aubeterre (2004). “Assessment: How to Get
Feedback to the Students.” Issues in Information Systems, Volume V, 502-508.
Kovac, Paul J., Gary A. Davis, Donald J. Caputo & John C. Turchek (2005). “Identifying
Competencies for the IT Workforce: A Quantitative Study.” Issues in Information Systems,
Volume VI, 339-345.
Michaelsen L. K., A. B. Knight, and L. D. Fink (2003). Team Based Learning: A
Transformative Use of Small Groups. Westport, CT: Praeger.
PREDICTING INTERNET USE WITH THE TECHNOLOGY ACCEPTANCE
MODEL AND THE THEORY OF PLANNED BEHAVIOR
ABSTRACT
Two popular technology acceptance models were used to predict behavioral intentions
and self-reports of Internet usage among college students in India. Like many countries in the
developing world, India’s population has the potential to gain economic and educational
benefits from increased involvement with the World Wide Web. Questionnaires containing
previously developed and validated scales were used to collect the data. Findings supported
both models. This suggests that predictors of technology acceptance developed for Western
samples may also apply in developing areas.
I. INTRODUCTION
The Technology Acceptance Model (TAM) states that behavioral intention to use a technology derives from two beliefs:
perceived usefulness, defined as the expectation that the technology will enhance one's job
performance, and perceived ease of use, defined as the belief that using the technology will
be free of effort (Venkatesh & Davis 1996, Venkatesh 1999).
Literature Review. TAM and the Theory of Planned Behavior (TPB) have revealed similar
predictive efficacy for technology usage criteria: spreadsheet use (Mathieson, 1991) and
visits to a computer center (Taylor & Todd, 1995). Both studies involved college student samples. Mahmood, Hall, and Swanberg
(2001) conducted a meta-analysis of 57 studies that identified factors related to information
technology use. Of all the variables analyzed across the studies, the TAM components of
perceived usefulness and ease of use exhibited the largest effect sizes on technology use.
Attitude had a medium effect. TAM has support as a predictor of students’ use of the Internet and
web sites (Anandarajan, Simmers, & Igbaria, 2000; Lederer, Maupin, Sena, & Zhuang, 2000;
Lin & Lu, 2000; Moon & Kim, 2001; Selim, 2003). In a partial test of the theory of planned
behavior, George (2002) reported that attitude was related to intention to use the Internet for
purchasing products. Furthermore, intention was linked to actual purchasing behavior. The
evidence indicates that the TAM is a powerful predictor of users' technology acceptance.
Although fewer studies have investigated the TPB in the technology usage context, it also
appears respectable as an explanatory framework. However, national culture may influence
the models’ effectiveness in predicting technology acceptance. A non-significant relationship
between perceived usefulness and microcomputer usage in Nigeria was attributed to
cultural factors (Anandarajan, Igbaria, & Anakwe, 2002). This finding was explained in light
of the abstractive versus associative character of cultures (Kedia & Bhagat, 1991).
Abstractive cultures employ linear thinking that uses a rational cause-effect paradigm to
create perceptions. This type of culture characterizes North America and Europe. Associative
cultures in Africa and Asia may not use a logical basis for linking events. Anandarajan et al.
reasoned that individuals in an associative culture might not connect perceptions of a
computer's usefulness with usage behavior. However, these authors did find that ease of use
and social pressure related as expected to usage behavior.
Shih and Venkatesh (2003) examined home computer use across the U.S., Sweden,
and India. Attitude and normative belief structures predicted (a) rate of use and (b) variety of
home computer uses for all three countries. More specifically, for the Indian sample, an
attitudinal belief concerning utilitarian outcomes of PC use was related to both usage criteria.
A control belief, difficulty of use, was negatively related to the usage measures. These
findings appear to provide some basis for the notion that perceived usefulness and ease of use
play a role in home computer use in India. Still, relatively little evidence pertains to
influences on technology or Internet acceptance among Indian users.
Present Study. This research explored the TAM and TPB in relation to Internet use
intentions and self-reported usage with a sample of college students in India. The models
specifically tested are diagrammed in Figure 1. Differences between the TAM and TPB are as
follows: the attitude and social influence (subjective norm) components are unique to the
TPB. In a less individualistic culture, such as India is purported to be, social factors may be
an important influence on behavior (Hofstede, 1991; Shih & Venkatesh, 2003). Furthermore,
while TAM states that behavior is affected by behavioral intention, TPB specifies that
perceived behavioral control and intention both impact actual behavior.
Figure 1. [Path model: Perceived Usefulness and Perceived Ease of Use lead to Intention to
Use, which leads to Internet Usage; Attitude and Perceived Behavioral Control also feed
Intention to Use, with Perceived Behavioral Control linked directly to Internet Usage.]
II. METHOD
Sample and Procedure. Two hundred sixty-nine college students from two university
campuses in northwestern India took part in the study. Twenty-four of the students reported
that they were not Internet users. They were excluded from the study's analyses, resulting in a
sample of 245. The participating colleges and the numbers of students from each were (a)
engineering (67), (b) rural management (78), (c) arts and sciences (100). The rural
management program was in a geographically separate location from the other two colleges,
which shared a campus. There were 142 men and 103 women in the sample. Average age was
20 years. Ninety-six percent of the sample was under age 25. Seventy percent were
undergraduates and 30% were in a master's program. Average length of Internet experience
was 22 months. Data was collected via questionnaires that were administered to students
during classes. Participation was voluntary and confidential. English was the medium of
instruction at the institutions involved in the data collection; however, the questionnaire
administrators were fluent in Hindi as well as English so that they could effectively answer
any questions that might arise. Language did not appear to be a problem for the respondents.
Measures. Validated scales from previous research were used to measure the variables of
interest. Self-reported Internet usage was operationalized with the one-item scale developed
by Davis (1989: 329), modified so that the focus was on the Internet. Six-position categorical
boxes were labeled “Don't use at all,” “Use less than once each week,” “Use several times a
week,” “Use about once each day,” and “Use several times each day.” The constructs of
subjective norm, perceived usefulness, perceived ease of use, and behavioral intention were
measured with scales from Venkatesh and Davis (2000). These scales have been validated
and high reliability reported for each. Behavioral intention and subjective norm were both
two-item measures. Four-item scales were employed for perceived usefulness and ease of
use. The word “Internet” was substituted for “system” in the scale items.
Perceived behavioral control and attitude were assessed with measures reported by
Taylor and Todd (1995). Again, the items were modified for Internet usage. Perceived
behavioral control was a three-item scale concerning respondents’ perceived ability and
control over using the Internet. A “don’t know” option was provided in the response
scales for subjective norm, perceived usefulness, perceived ease of use, perceived behavioral
control, and behavioral intention. The purpose of this was to reduce random responding and
guessing on the part of respondents. Attitude was measured on a general four-item affective
scale. The items asked respondents whether using the Internet was bad/good, foolish/wise,
whether they disliked/liked using the Internet, and whether it was unpleasant/pleasant.
III. RESULTS
Scale reliabilities are as follows for each of the scales: intention, .81; perceived
usefulness, .77; perceived ease of use, .83; subjective norm, .78; perceived behavioral
control, .80; attitude, .83. All were close to or above .80, an acceptable level consistent with
previous findings.
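Reliabilities of this kind are typically Cronbach's alpha; a minimal computation is sketched below with invented item scores (the study's own data are not reproduced):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
# Each inner list is one item's scores across respondents; scores are invented.
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    k = len(items)                      # number of items in the scale
    n = len(items[0])                   # number of respondents
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))

items = [[4, 5, 3, 5, 4], [4, 4, 3, 5, 5], [5, 4, 2, 5, 4]]
print(round(cronbach_alpha(items), 2))  # 0.86
```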
TAM Results. Multiple regressions were used to investigate relationships among the
variables in the model. Results are presented in Table I. Both perceived usefulness and ease
of use were statistically significant predictors of intention to use the Internet. Furthermore,
they explained 35% of the variance in usage intention. Significant correlations were evident
for behavioral intention and the Internet usage measure (r = .24, p < .001) as well as between
ease of use and perceived usefulness (r = .49, p < .001).
Table I: Multiple Regression Results for the TAM Analysis (Dependent Variable =
Behavioral Intention to Use the Internet)

Variable                   Beta      R     R2        F
Perceived Ease of Use      .29***
Perceived Usefulness       .39***   .59   .35   64.53***
***p < .001
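The standardized (beta) coefficients and R2 reported in Table I come from an ordinary least-squares regression with two predictors; a pure-Python sketch using invented data:

```python
# Standardized (beta) regression weights and R^2 for a two-predictor OLS,
# computed from the correlation normal equations. Data below are invented.
def zscore(xs):
    n = len(xs)
    m = sum(xs) / n
    sd = (sum((x - m) ** 2 for x in xs) / (n - 1)) ** 0.5
    return [(x - m) / sd for x in xs]

def two_predictor_betas(y, x1, x2):
    zy, z1, z2 = zscore(y), zscore(x1), zscore(x2)
    n = len(y)
    corr = lambda a, b: sum(ai * bi for ai, bi in zip(a, b)) / (n - 1)
    r1y, r2y, r12 = corr(z1, zy), corr(z2, zy), corr(z1, z2)
    det = 1.0 - r12 ** 2
    b1 = (r1y - r12 * r2y) / det        # standardized beta for predictor 1
    b2 = (r2y - r12 * r1y) / det        # standardized beta for predictor 2
    r_squared = b1 * r1y + b2 * r2y
    return b1, b2, r_squared

# y is an exact linear combination of x1 and x2, so R^2 comes out 1.0.
x1, x2 = [1, 2, 3, 4, 5], [2, 1, 4, 3, 5]
y = [2 * a + 3 * b for a, b in zip(x1, x2)]
b1, b2, r2 = two_predictor_betas(y, x1, x2)
print(round(r2, 6))  # 1.0
```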
TPB Results. Regression results displayed in Table II suggest that all three of the model's
predictor variables, subjective norm, attitude, and perceived behavioral control, were
significantly and positively related to behavioral intention to use the Internet. The
independent variables explained 31% of the variance in the criterion.
Table II: Multiple Regression Results for the TPB Analysis (Dependent Variable =
Behavioral Intention to Use the Internet)

Variable                       Beta      R     R2        F
Attitude                       .13*
Perceived Behavioral Control   .40***
Subjective Norm                .22***   .56   .31   38.10***
*p < .05  **p < .01  ***p < .001
Regression results concerning prediction of the Internet usage measure suggested a
positive relationship for perceived behavioral control, but the relationship for behavioral
intention was non-significant (see Table III). Eleven percent of the variance in the usage
behavior measure was explained.
Table III: Multiple Regression Results for the TPB Analysis (Dependent Variable =
Frequency of Internet Use)

Variable                       Beta      R     R2        F
Behavioral Intention           .06
Perceived Behavioral Control   .30***   .33   .11   15.96**
*p < .05  **p < .01  ***p < .001
IV. CONCLUSION
TAM and TPB were supported as predictors of intentions to use the Internet. Smaller
but statistically significant percentages of the variance in self-reported usage were also
explained. Behavioral intention, however, did not predict Internet use, a finding inconsistent
with TPB. Perceived
behavioral control had a stronger impact on Internet use. The addition of a computer self-
efficacy (Fenech, 1998) construct to TAM has been proposed to strengthen the model's
prediction of Internet usage behavior. The present finding concerning the impact of control
on usage appears to provide support for the incorporation of some type of self-efficacy
component to Internet use models.
Contrary to the findings of Davis et al. (1989) and Mathieson (1991), the present
research found that subjective norm was related to intention. This may derive from the
differences in the national culture of the data collection contexts. Alternatively, Taylor and
Todd (1995) suggested that subjective norm is more important for inexperienced users. The
average level of Internet experience for the present sample was 22 months, considerably less
than has been reported for U.S. college students (Jones, 2002). Future research might focus
on separating the effects of national culture and experience as influences on Internet usage.
The present findings were also contrary to the notion that a relationship between perceived
usefulness and usage criteria may not exist in Asian (associative) cultures (Anandarajan et al.,
2002). Perceived usefulness and behavioral intention were significantly related in the present
case, consistent with the results of Shih and Venkatesh (2003).
The relationships found in the present study were not as strong as in some previous
reports (Mathieson, 1991; Taylor & Todd, 1995). A potential explanation for this difference
concerns Internet access. Much TAM research has been conducted in settings where
technology use was readily available to students or even mandatory for employees. The
university campuses on which the present data was collected were observed to have less in
the way of computer hardware and fewer Internet connections than many universities in the
U.S. These limitations on access may interfere with intentions to use the Internet and also
with the intention-usage relationship. From a practical perspective, the findings suggest that
educational administrators might focus on creating perceptions among students that the
Internet is easy to use and useful, as well as fostering positive attitudes toward Internet use,
creating social expectations regarding usage, and improving students' sense of their ability to
use the Internet. Future research might directly compare TAM and TPB across Western and
Indian samples of users. The present study’s supportive results suggest that the models could
provide specific avenues for encouraging the user acceptance needed to fully participate in a
global information society.
REFERENCES
GLOBALIZATION AND ITS IMPACT ON AFRICA’S TRADE
ABSTRACT
I. INTRODUCTION
poorest African countries (O’Shea, 2005). Africa’s income gap, relative to the advanced
countries, has widened and per capita incomes in a number of countries have actually
dropped (Calamitsis, 2001). These income gaps have created imbalanced development and
poverty in Africa, which also affected the global capital flows into the continent (Gondwe,
2001). Consequently, globalization needs reform in order to create a better set of trade
policies. The current global trade policies are grossly unfair to African countries and
others in the developing world (Stiglitz, 2003). Critics of globalization argued that
globalization paralyzed Africa by turning the continent into a cluster of wagon economies
whose engines are in the developed countries. To use an analogy, a donkey and an elephant
cannot be yoked together to pull a plow, for they are not of the same size or strength. Yet this
is what globalization has done to Africa. Due to differences in weight and size, the weaker
side is struggling to keep pace, while the stronger one reaps the benefits of globalization
disproportionately (Mao, 2003).
Africa has been integrated into the global economy as an exporter of primary
commodities and importer of manufactured products. Between 1960 and 1969, Africa's average share of total world exports was 5.3 percent and of total world imports 5.0 percent. These figures dropped to 2.3 percent and 2.2 percent, respectively, between 1990 and 1998. The decline in Africa's share of world exports is attributed to restrictions on free market policies, slow growth of per capita income, high transportation costs, and the continent's distance from major markets. Africa has also failed to attract the capital flows it needs because of negative perceptions of the continent's economic and political activities, its poor infrastructure, inadequate legal framework, and lack of contract enforcement (Ajayi, 2001; The Economist, 1994; Financial Times, 1994). For
example, more than 75 percent of African countries had trade regimes classified as
“restrictive” by the IMF in 1990. Now only 14 percent of African countries’ trade regimes
are classified as restrictive, while 43 percent are classified as open. Yet, on average, Africa’s
trade policies remain more protectionist than those of other countries including its major
trading partners and competitors (Sharer, 2001; Ajayi, 2001). In contrast, 61 percent of
countries outside Africa have trade regimes classified as open. All industrial countries
maintain open trade regimes. Africa’s current average tariff of about 19 percent is still higher
than the average of 12 percent for the rest of the world (Sharer, 2001). The 2005 Annual
Report by the World Bank Group also finds that African nations impose the most regulatory
obstacles on entrepreneurs and have been the slowest reformers in 2004. For example, an
entrepreneur in Mozambique must undergo 14 separate procedures taking 153 days to register
a new business. In Sierra Leone, if all business taxes were paid, they would eat up 164 percent of a company's gross profits. In Burundi, it takes 55 signatures and 124 days from
the time imported goods arrive in ports until they reach the factory gate.
IV. IMPACT ON ECONOMIC AND POLITICAL DEVELOPMENT
According to the Economic Report on Africa (ERA) 2004, one of the principal
reasons for the holding back of Africa's economic performance has been the continuation of
military conflicts. Political crises have had a significant impact on the social and economic conditions of neighboring countries on the continent. African policy makers are aware of the
fact that substantial improvements in the economic and social situation of their populations
are contingent upon the maintenance of peace. Without peace, little or nothing can be
achieved. The ERA 2004 further noted that recent empirical research has shown how political
instability adversely affects human development as well as gross domestic product (GDP)
and export growth in Africa. Any improvement in Africa's economic and human development
will be constrained until all the political actors implicated (politicians, civil society, foreign
governments, and international organizations) make a concerted effort to resolve these
conflicts. For example, the persistence of a number of lower intensity conflicts (Uganda or
Sudan) has continued to handicap progress in the social, economic, and political fields. A healthy business climate enhances foreign direct investment, which in turn boosts economic growth in African countries (Harms and
Ursprung, 2002).
Although many trade and other regional cooperation agreements existed on paper,
there was a lack of political will, or of physical infrastructure, to make them work.
Nevertheless, regional integration could be an effective vehicle for integrating Africa into the
global economy. Much had to be done to create the conditions for reducing poverty. With the
support of regional and international organizations, African countries will be able to meet the
challenges of increasing growth, reducing poverty, and thus lay the foundation for political,
economic, and social stability (Daouas, 2001). Regional economic integration is also a
necessary element for ensuring Africa’s active integration into globalization (Daouas, 2001;
Gondwe, 2001). By introducing more open policies in its own markets, Africa has the potential to create new business opportunities by expanding integrated production across the continent in agriculture, industry, commerce, finance, and social services. In doing
so, regional economic integration and cooperation, trade and investment, economic
efficiency, and growth will be enhanced (Sharer, 2001). The ERA 2004 further indicates that
Africa needs to make a concerted effort in reforming its own economies through a large
diversification of its productive structure if progress is to be made. Africa also clearly needs
to adopt more proactive policies in order to promote the integration of the continent into the
global economy.
According to the ERA 2004, African exports have been handicapped by industrial
country policies such as tariff escalation, tariff peaks and agricultural protectionism. At the
same time, the report noted that improvement is required in internal conditions, if the
continent is to improve its position in the international economy. The ERA 2004 further
noted that weak infrastructure, poor trade facilitation services, and the lack of physical and human capital pose major impediments to export sector development. Despite insufficient
progress towards fulfilling the Millennium Development Goals (MDGs), and the persistence
of political, social and economic problems in the continent, Africa has been making progress
since the lost decades of the 1980s and 1990s. In 2003, Africa was the second fastest growing
region in the developing world, behind Eastern and Southern Asia. Higher oil prices and
production, rising commodity prices, increased foreign direct investments, better
macroeconomic management, backed up by good weather conditions, supported this high
growth. As a result, real GDP grew at 3.6 per cent in 2003 compared to 3.2 per cent in 2002,
with North Africa putting in a strong performance (of 4.7 per cent). West and Central Africa
also exhibited respectable growth rates above 3.5 per cent. East and Southern Africa, in
contrast, registered paltry growth of 2.5 per cent (see Figure 1.1).
Figure 1.1: North Africa tops sub-regional economic performance in 2003
According to ERA 2004, however, some African economies experienced negative growth
rates. When compared with the growth figures for 2001 and 2002, it becomes clear that there
has been a slight decline in aggregate economic performance for Sub-Saharan Africa (SSA),
from 3.5 per cent in 2002 to only 2.9 per cent in 2003 (see Figure 1.2).
Figure 1.2: Rates of economic growth, North and Sub-Saharan Africa, 2001-3
According to the ERA 2004, the real per capita growth rates for North Africa and
SSA in 2003 are approximately 2.7 per cent and 1.7 per cent respectively, rates which are
inadequate to achieve the MDGs for poverty reduction. The recent establishment of a new
Commission for Africa, launched by British Prime Minister Tony Blair in March 2004, represents an important acknowledgement of the need to address the problem of Africa's underperformance.
VI. CONCLUSION
This paper examined the trade of African countries in a broad spectrum within the
context of globalization. Globalization can promote both growth and decline. Whatever its impact on the continent, studies indicate that Africa cannot advance by isolating itself from the process. Therefore, Africa needs to adopt more proactive policies in order to promote the integration of the continent into the global economy. Among other things, Africa needs to strengthen its position in the areas of energy policy, trade facilitation, and competitiveness. Another principal reason for the holding back of Africa's economic
performance has been the continuation of military conflicts. The political instability in
several spots of Africa has had a significant impact on the social and economic conditions of
neighboring countries. African policy makers are aware of the fact that substantial
improvements in the economic and social situation of their populations are contingent upon
the maintenance of peace. Without peace, little or nothing can be achieved.
REFERENCES
UNIVERSITY EDUCATION, PERFORMANCE STANDARDS
AND THE REALITIES OF A GLOBAL MARKETPLACE
ABSTRACT
Americans are living in uncertain times as new global forces act on the economy.
Labor markets are experiencing difficult transitions brought about by intense global
competition. This paper examines certain practices in university classrooms which may be
putting our long-term economic health in jeopardy. One such practice is a cultural focus on
“feeling good about oneself” at the expense of “producing measurable results”. Building on
lessons from capitalism, the author makes a case for challenging students with higher
standards. A fundamental reality exists in a market-driven economy: businesses do not pay
for what they do not need. Universities must realistically educate students for a global
economy; the public would benefit by insisting on performance from its investment dollars in
higher education.
I. INTRODUCTION
As a culture, are Americans indulging themselves today in ways that will hurt us in
the future? Increasingly, thoughtful people are raising the question – about values, standards,
and education – and growing concerned at what they discover. The competitiveness of
America is the subject of heated debates. For example, Colvin (2005) made the case that
“Can America compete?” is not the right question in light of pressing global realities; the
more relevant question is “Can Americans compete?” His conclusion is that we’re not
building human capital the way we used to. He argues that our “greatest challenge will be
changing a culture that neither values education nor sacrifices the present for the future as
much as it used to – or as much as our (global) competitors do” (pg 82). John Doerr, one of Silicon Valley’s most influential venture capitalists, calls “education the largest and most screwed-up part of the American economy” (pg 82).
This is especially troubling when developing nations are eyeing the American
standard of living with envy, and national boundaries of competitive advantages are
dissolving. Innovative ideas circle the globe at breakneck speed, relentlessly pursued by
huge amounts of investment capital eagerly searching for opportunities, regardless of national
origin.
It is time to get serious about what we need our system of higher education to do.
We’re looking at millions of people around the globe getting better every day at what has
made us the world’s leading economy. Is the competitiveness of the American workforce in
jeopardy? As a business school professor, it is hard to ignore certain patterns of behavior in
the classroom. As taxpayers, we should be asking hard questions of our educational dollars,
because these dollars must fund critical national priorities and investments – viz. adequate
skills to compete successfully in a global economy, where “globally competitive” has
propelled itself into the national lexicon with frightening force and intensity. A compelling
argument can even be made that the direction our economic future takes is as critical to
society as homeland security.
Recent books have essentially acknowledged that learning makes up a small part of
contemporary college life. Nathan concluded that students don’t read what they’re assigned,
so it’s important to assign them less reading (2005). Seaman examined general student life at
twelve elite institutions, noting the rise in drinking and the advent of grade inflation in the
past few decades (2005). He also tackled the decline in academic rigor. Although his was an
unrepresentative group, these twelve elite institutions do set the tone for much thinking about
higher education, and cannot be ignored. Douthat (2005) argued that high school students
compete furiously to get into Ivy League universities, but are seldom stretched when they
arrive. To be objective, in its 2005 annual Survey of Higher Education, The Economist
concluded America’s system of higher education is the best in the world, precisely because
there is no system (e.g. no central planning at the federal level and a market-oriented
approach).
Yet, massive changes are threatening the traditional (elite) university for four reasons:
“massification” or the democratization of higher education, which is quickly spreading to the
developing world (e.g. China doubled its student population in the late 1990s, and India is
trying to follow suit); the rise of the knowledge economy; globalization; and competition
(Wooldridge, 2005, pg 3-4). The problem for policymakers is “how to create a system of
higher education that balances the twin demands of excellence and mass access, that makes
room for global elite universities while also catering for large numbers of average students,
that exploits the opportunities provided by new technology while also recognizing that
education requires a human touch” (pg 4).
This paper focuses on these “large numbers of average students”, since most
American universities (~3500) do not fall into The Economist’s 100 global elite. Exactly
what is the purpose of our system of higher education that is aimed squarely at this specific
market segment? What are realistic expectations to have of these students? Is society getting
its money’s worth?
V. HISTORICAL PERSPECTIVE
Almost from its inception, America has placed a high value on education. As one of
America’s earliest educators, Thomas Jefferson described the purpose of the university:
It cannot be but that each generation succeeding to the knowledge
acquired by all those who preceded it, adding it to their own acquisitions and discoveries, and
handing the mass down for successive and constant accumulation, must advance the
knowledge and well-being of mankind.
Benefiting the republic was one reason Jefferson founded a public university. He argued that
a proper education would advance “the prosperity, the power, and the happiness of a nation”
(Schaefer, 2004).
The psychologist Jean Piaget had a similar perspective:
“The principal goal of education in the schools should be creating men and women who are capable of doing new things, not simply repeating what other generations have done; men and women who are creative, inventive, and discoverers, who can be critical and verify, and not accept, everything offered.”
Yet from where I sit in the classroom of “large numbers of average students”, I see a
large group of students whose primary focus is obtaining a piece of paper. I see a move
toward “standardization” and “templates” that undermine creativity. Pragmatically, template
jobs are the ones most at risk in a global marketplace. “Jobs that will pay well in the future,”
says Coy, “will be ones that are hard to reduce to a recipe. These attractive jobs. . . require
flexibility, creativity, and lifelong learning” (2004, pg 50). Textbook knowledge alone is
inadequate in a global marketplace because workers in other countries read the same
textbooks. Nor are new ideas and innovations sufficient without a motivated, educated
workforce to efficiently and effectively exploit them. An education based on cultivating
skills and developing perspective, on informed judgment and continued learning, leads to
jobs that cannot be easily outsourced.
What I see too often are students whose primary focus is a grade, and a piece of paper
that says they’re educated. Hutchinson’s concern is that “. . . near automatic graduation of
college students who do little besides pay tuition, attend class, and act responsibly has diluted
the college-educated workforce. The baccalaureate degree is supposed to distinguish its
recipients as interested, literate, and able to analyze and problem-solve. As this guarantee is
eroded, so is the confidence of employers in the quality of the college-educated” (2005, pg
23).
I also see grade distributions that reflect pressures unrelated to actual student
performance – given by faculty who are being rated by their “customers”. One could argue
that one of the most misleading myths of American higher education is that education is a
business, students are customers, and educators must do whatever makes them happy or lose
them to the competition. Students are not customers, at least not in the conventional sense of
the word. They do not know what they need in terms of skills and knowledge.
Grades have all but ceased to communicate honest assessments of student performance. Recently a distinguished Harvard professor developed a solution for the
“game” of grades. He assigns two grades: one for the “official transcript” and the other
(known only to the student) to reflect the student’s actual performance (Wall Street Journal,
2003). A recent study found that nearly half the course grades given out on the Princeton
campus are ‘A-’ or above; university grades have come to operate under a sort of Gresham’s
Law, where easy A’s drive out honest B’s and C’s (Wall Street Journal, 2004).
I also see a troubling (student) culture moving toward a system where a large group of
people demand compensation for their intentions rather than their results, or worse, for their
subjective evaluations of their own performance. For example, I’m constantly amazed by the
number of mature students who argue they should receive an ‘A’ because they are ‘A’ students, a rather circular argument. Logic suggests an individual should earn
been revised.
One of the harshest realities of global markets centers on costs, and particularly labor
costs. In fact, labor accounts for about 2/3 of the cost of making/selling products, making
greater labor productivity tantamount to survival in today’s economy (Cooper, 2004). In an
environment characterized by business pressures, organizations cannot afford to over-reward
employees, at least not for long. According to Colvin (2005), American workers are
enormously more expensive than their peers almost anywhere in the world except in Western
Europe (which has its own social and economic problems). The downward pressure on U.S.
wages cannot be ignored, and in fact, Mandel notes that since 2000, the college wage
premium has shrunk (2005).
VIII. CAPITALISM AND PERFORMANCE
Perhaps capitalism can provide some useful direction. As the most creative and
dynamic of all economic systems, it has the ability to provide long-term global competitive
advantages and high standards of living. Three centuries of technology breakthroughs, in
fact, are the root of today’s abundance in the developed world, and those with a technological
edge (America, Japan, and Western Europe) still have the highest standard of living (Colvin,
2005).
But capitalism is also very demanding, and potentially brutal: it makes the most
specific demands of its citizens; it requires, at times, government to intervene to moderate its
injustices; it can be extremely harsh while competition sorts out market inefficiencies in an
altogether detached way. In free markets, employment and prosperity are the responsibility
of private enterprise. And to thrive, private enterprise demands performance. . . independent
of race, gender, IQ, or pedigree. Joseph Schumpeter’s creative destruction has allowed
American markets – capital, products, labor – to adjust to change faster than virtually any
other country, and that has been crucial to our prosperity. But our schools are no longer
keeping up.
Yet the fundamental business reality remains: businesses do not pay for what they do not need. And businesses do not need a workforce that is accustomed to, or expects, “above average” rewards for “average” contributions. It would appear that some of the
“lessons” being communicated in American universities are not the ones that are needed in
today’s job market.
REFERENCES
A. BOOKS:
Douthat, R. G. Privilege: Harvard and the Education of the Ruling Class. NY: Hyperion,
2005.
Nathan, R. My Freshman Year: What a Professor Learned by Becoming a Student. Cornell
University Press, 2005.
Salerno, S. Sham: How the Self-Help Movement Made America Helpless. NY: Crown,
2005.
Seaman, B. Binge: What Your College Student Won’t Tell You. NY: John Wiley, 2005.
B. JOURNAL ARTICLES:
Colvin, Geoffrey. “Can Americans Compete?” Fortune, July 25, 2005, 70-82.
Cooper, James C. “The Price of Efficiency.” Business Week, March 22, 2004, 8-42.
Coy, Peter. “The Future of Work.” Business Week, March 22, 2004, 50-52.
Einhorn, B. and J. Carey. “A New Lab Partner for the U.S.?” Business Week, Aug 22, 2005,
116-117.
Engardio, Pete. “A New World Economy.” Business Week, August 22/29, 2005, 52-58.
Hutchinson, Alvin. Letter to the editor, Business Week, October 3, 2005, 23.
“Low Marks for High Marks” Wall Street Journal, September 5, 2003, W15.
Mandel, Michael J. “College: The Payoff Shrinks.” Business Week, September 12, 2005,
48.
Mandel, M. J. “Productivity: Who Wins, Who Loses.” Business Week, Mar 22, 2004, 44-46.
“Oh Say Can You ‘C’?” Wall Street Journal, April 4, 2004, W15.
Schaefer, Naomi. “A Little Learning.” Wall Street Journal, April 23, 2004, W11.
Wooldridge, Adrian. “The Brains Business: A Survey of Higher Education.” The
Economist, September 10, 2005, 3-22.
INTERNET-BASED MARKETING COMMUNICATION AND THE
PERFORMANCE OF SAUDI BUSINESSES
ABSTRACT
The objective of this study is to investigate the relationship between the usage of the Internet by Saudi businesses as a marketing communication tool and some measures of
organizational performance. Results showed that businesses which utilize the Internet to
communicate and promote their products reported higher levels of sales, profits, image, and
customer satisfaction.
I. INTRODUCTION
Although Saudi businesses play a major role in the economic development of Saudi Arabia, technological advances have not been adopted on a large scale to help them successfully communicate and market their offerings. Saudi businesses have to recognize the growing popularity and importance of the Internet as a marketing communication tool.
The importance of the Internet stems from the fact that many people all over the world spend
more time searching it for marketing information. The usage of the Internet presents an
alternative channel to overcome some of the strategic challenges that face Saudi Arabia,
including the global access of Saudi products. The purpose of this study is to gain more
knowledge about the usage and impact of the Internet on the performance of Saudi
businesses.
Goldstein (2004) argued that the usage of the Internet has gone beyond selling products and services to include providing the detailed information needed to satisfy customers at any place and at any time of day, anywhere in the world. Additionally, Goldstein (2004) stated that organizations go beyond economic motives to politically advance the democratic principle by supplying customers with the detailed information that concerns them. According to Hill
and Jones (2004) these companies become responsive to customers to gain a sustainable
competitive advantage.
Kotler (2003) argues that people form beliefs, attitudes, and impressions through action and reflection, and that these in turn influence their purchasing behaviors. The impression held by individuals affects their behavioral decisions. Some scholars have applied this concept in the retail field: Hansen and Deutscher (1977) and Kunkel and Berry (1968) explored the influence of business image on consumer behavior and store selection, and identified many facets that enhance that image. For the Internet marketer, the website is his or her store, and it has a large effect on the behavior of consumers.
Hypothesis #4: Saudi businesses that use the Internet for marketing communication have a
greater perceived image than Saudi businesses that do not use it.
III. METHODOLOGY
The researcher used a questionnaire to collect information relevant to the usage of Internet technology by different Saudi businesses to communicate and market their products and services. The questionnaire consists of three parts. Part one contains questions about usage, the availability of an interactive website, and the features of that website, used to compare users and non-users on the performance measures. Part two consists
of questions about marketing managers’ perceived increase in sales, profits, perceived
satisfaction of customers, and perceived firm’s image. Part three includes questions about the
type of business, sales volume, profits, mode of business, size of business, and economic
sectors. Questionnaires were sent to 450 businesses listed by the Chamber of Commerce and
Industry in the Eastern Province in Saudi Arabia. A total of 149 surveys were returned, for a
response rate of about 33 percent. Only six surveys were not successfully completed.
Therefore, 143 questionnaires with complete data were used in the analysis of this study.
IV. MEASURES
This study investigates the difference in performance between Saudi businesses that
use the Internet to communicate their products and service with their customers and
businesses that do not apply it. The usage of the Internet is an independent variable while
performance measures are dependent variables. Performance measures include the
following: increase in sales, increase in profits, firm’s image, and customers' satisfaction
about the business as being responsive to their needs. Saudi businesses that use the Internet
for marketing were asked to respond to the statements that evaluate their customers' feelings
about their offerings on the Internet. Also, Saudi businesses that do not have access to the
Internet were asked to respond to the statements measuring the satisfaction of their customers
in the absence of Internet to market their offerings. Respondents were asked to report the
agreement and disagreement with the image statements on a five-point Likert scale.
V. STATISTICAL TESTS AND RESULTS
The researcher used SPSS to examine the four hypotheses in this study. Descriptive
statistics in Table 1 show the sectors, number of businesses, and their percentages in the
sample. The internal consistency of the measures was high (Cronbach's alpha = 0.87).
Sector                       Number   Percent
Health                          23      16%
Services                        27      19%
Retailing and wholesaling       23      16%
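The reported reliability (Cronbach's alpha = 0.87) summarizes how consistently the Likert items measure the same underlying construct. A minimal, standard-library sketch of the computation, using illustrative responses rather than the study's actual data:

```python
import statistics as st

def cronbach_alpha(responses):
    """Cronbach's alpha for a list of respondent rows, each a list of
    Likert item scores: (k/(k-1)) * (1 - sum(item variances) /
    variance(total scores))."""
    k = len(responses[0])                 # number of scale items
    items = list(zip(*responses))         # one tuple of scores per item
    item_var_sum = sum(st.variance(item) for item in items)
    total_var = st.variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)
```

Perfectly consistent items yield an alpha of 1.0, while unrelated items drive it toward 0; a value of 0.87, as reported here, indicates high internal consistency among the questionnaire items.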
Table 3: Performance Comparison of Saudi Businesses in the Presence and Absence of Internet-Based Marketing Communication

Performance Measures        df      SS        MS      F Value
Sales Increase
  Between Groups             1     3.55     .8911      1.78*
  Within Groups            142    27.67     .3802
  Total                    143    31.22
Profits Increase
  Between Groups             1     2.24     .6344      2.23*
  Within Groups            142    24.13     .3317
  Total                    143    26.37
Customer's Satisfaction
  Between Groups             1     2.45     .7127      2.03*
  Within Groups            142    26.18     .3911
  Total                    143    28.63
Firm's Image
  Between Groups             1     3.12    1.0236      1.97*
  Within Groups            142    27.04     .5634
  Total                    143    30.16

*p < .05
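The F values in Table 3 come from one-way ANOVAs comparing businesses that use the Internet with those that do not. With only two groups, the statistic reduces to the ratio of the between-groups mean square to the within-groups mean square. A standard-library sketch with hypothetical Likert scores (not the study's raw data):

```python
import statistics as st

def one_way_anova_f(group_a, group_b):
    """F statistic for a one-way ANOVA with two groups, as in Table 3
    (between-groups df = 1; within-groups df = n_a + n_b - 2)."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = st.mean(group_a), st.mean(group_b)
    grand_mean = st.mean(list(group_a) + list(group_b))
    # Between-groups sum of squares (one degree of freedom for two groups)
    ss_between = n_a * (mean_a - grand_mean) ** 2 + n_b * (mean_b - grand_mean) ** 2
    # Within-groups sum of squares, pooled across both groups
    ss_within = (sum((x - mean_a) ** 2 for x in group_a)
                 + sum((x - mean_b) ** 2 for x in group_b))
    return ss_between / (ss_within / (n_a + n_b - 2))
```

For instance, `one_way_anova_f([5, 5, 4, 5], [2, 3, 2, 2])` evaluates to 50.0, while two identical groups give 0.0; the obtained F is then compared against the critical value for (1, within-groups) degrees of freedom.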
Saudi businesses that used the Internet to communicate their products and services
reported a higher level of profits (mean = 4.98, sd = .68) than businesses that did not have
access to it (mean = 2.47, sd = 1.09). The F value shows a significant relationship between the usage of the Internet for marketing communication and the increase in profits (F = 2.23, p < .05). Hence, the results support hypothesis 2.
Results in Table 3 show that there is a significant difference (F value = 1.97, P< .05)
in firm's image between businesses that use the Internet for marketing communication (mean
= 4.65, sd = .59) and businesses that do not utilize it (mean = 2.31, sd = 1.21). Saudi businesses that use the Internet to communicate with major stakeholders reported a stronger image than businesses that do not use it to market themselves. Hence, hypothesis 4 is supported in this
study.
VI. CONCLUSION
The Internet provides a powerful platform for businesses with home pages to
communicate their products and services. It provides a labor-efficient and cost-effective way
of distributing information to millions of potential clients in the global markets. Customers
can establish online communication with Saudi businesses via their home pages.
The findings of this study support strong and significant relationships between the
usage of the Internet in marketing communication and each of the performance measures of
Saudi businesses. The findings support the need for Internet communication activities to be
integrated in the overall marketing communications mix. Furthermore, many scholars have
predicted the decline of the traditional marketing function (Holbrook and Hulbert, 2002). This decline calls for innovative approaches to marketing communication. One may argue that the Internet is a weak medium for displaying the advertising of offerings; however, its strengths as a form of digital word of mouth are now very clear and coming to the forefront.
The researcher recommends that Saudi businesses use the Internet effectively and
efficiently to market their products and/or services domestically and globally in order to
grow. Early adopters of the Internet have greater advantages over late adopters. Some
businesses use the Internet to promote themselves in geographic areas that are difficult to
reach through traditional physical distribution.
The findings show that the image of the firm can influence its profitability and sales.
The usage of the Internet increases the image of the firm which in turn increases its
performance measures such as increase in both sales and profits. This implies that
management of Saudi businesses should consider the usage of the Internet as a contemporary
marketing distribution tool. The findings show that only 1 percent of surveyed businesses
used the Internet interactively while the 81 percent used it for e-mail communication between
management and employees. This indicates that management should use it widely for many
purposes. Since the Internet is becoming a global advertising medium, businesses using it are
always potentially addressing international prospects. This might lead to increased
international competition (Wymbs, 2000). Therefore, Saudi businesses must be prepared to
compete with foreign businesses in Saudi Arabia and global markets. In summary, the
findings of this study may encourage Saudi businesses to utilize the Internet to communicate
and market their products.
REFERENCES
Drennan, Judy & McColl-Kennedy, Janet R. (2003), "The Relationship Between Internet Use
And Perceived Performance In Retail And Professional Service Firms", Journal of
Services Marketing, 17(3): 295-311.
Goldstein, Gary B. (Summer 2004), "A Strategic Response to Media Metamorphoses", Public
Relations Quarterly, 49(2): 19-23.
Hansen, R.A., Deutscher, T. (1977), "An Empirical Investigation Of Attribute Importance In
Retail Store Selection", Journal of Retailing, 53(4): 59-72.
Hill, Charles W. & Jones, Gareth R. (2004), "Strategic Management: An Integrated
Approach", (6th ed.), Boston: Houghton Mifflin Company.
Holbrook, M.B., Hulbert, J.M. (2002), "Elegy On The Death Of Marketing, Never Send To
Know Why We Have Come To Bury Marketing But Ask What You Can Do For Your
Country Churchyard", European Journal of Marketing, 36(5/6): 706-32.
Kotler, P. (2003), Marketing Management, 11th (International) ed., Pearson Education Inc.,
Upper Saddle River, NJ.
Kunkel, J.H., Berry, L.L. (1968), "A Behavioral Conception of Retail Image", Journal of
Marketing, 32: 21-7.
Murphy, David Patrick, (March 1999), "Advances in MWD And Formation Evaluation For
1999", World Oil.
Sawhney, Mohanbir, Zabin, Jeff (Fall 2002), "Managing and measuring relational equity in
the network economy", Academy of Marketing Science Journal, 30(4): 313-343.
Shen, Fuyuan (Fall 2002), "Banner Advertisement Pricing, Measurement, And Pretesting
Practices: Perspectives From Interactive Agencies", Journal of Advertising, 31(3): 59-
68.
Wymbs, C. (2000), "How E-Commerce Is Transforming And Internationalizing Service
Industries", Journal of Services Marketing, 14(6): 463-78.
Acknowledgement
The researcher gratefully acknowledges the full support received from King Fahd
University of Petroleum and Minerals in Saudi Arabia in accomplishing this study.
IDENTITY THEFT: WHAT SHOULD I DO IF IT HAPPENS?
ABSTRACT
The Internet Crime Complaint Center has been collecting and compiling information
on cyber crime for the past four years. The most current report (2004) shows an
increase of 66.6% over 2003. The total dollar loss was $68.14 million, with a median
loss of $219.56 per case. There were 207,449 cases submitted to the Fraud Center,
with untold numbers going unreported. Three percent of the losses exceeded $5,000.00.
Approximately 6,000 of these cases were for identity theft. However, Rob McKenna, the
attorney general for Washington state, estimates that identity theft costs U.S.
consumers $53 billion a year.
I. INTRODUCTION
The word phishing originated sometime around 1996 as a result of hackers stealing
accounts and passwords from America Online. The analogy with the sport of fishing is
easily made. The hackers were using e-mail lures to “hook” or “fish” for passwords and
financial data from the “sea” of Internet users.
It is difficult to tell how many identity thefts are a direct result of phishing; however,
the Federal Trade Commission is responsible for collecting information on identity theft.
The latest information, released in July, 2004, indicates that 10 million Americans were
victims in 2003, and over 27 million since 1998 (FTC, 2004). Recent data indicates that
57 million were targeted in 2004 (Wehner 2005). Most often the stolen information is
used to commit credit card fraud. However, 20% of all victims reported that their
information was used to commit more than one type of fraud. It is not unusual for
criminals to use someone else’s stolen identity to give to the police when apprehended for
a crime. About one half of identity theft victims have no idea how their personal
information was stolen. The number of identity theft cases is increasing rapidly. The top
states for identity theft reports are Nevada, Arizona, and California (FTC 2004).
California has more than 100 cases reported every day. Sadly, local police are generally
ill-trained to deal with the problem. According to the FTC, only about 30% of police
departments even take written complaints. Finally, the Social Security Administration
reports that 80% of the instances of misuse of social security information are related to
identity theft (FTC, 2004). According to the Washington state attorney general, identity
theft and the resulting financial fraud are the fastest-growing crimes in the United States.
These crimes cost consumers over $53 billion a year and are difficult to resolve or even
investigate (Government Technology, 2005).
II. THREAT FROM CELL PHONE USERS
Until recently, identity theft attempts, spam, and viruses have only been a problem for
Internet users. However, spammers and hackers have recently realized that there are new
opportunities with cell phones, given the use of digital technology. The recent technology
enhancements to cell phones have made them increasingly vulnerable. Many cell phone users
now have the option to access the Internet from their phone. This presents a whole new area
of concern and potential vulnerability for cell phone users. The same risks exist when it
comes to the Internet whether you are accessing it through a computer or a cell phone. As
such, cell phone users are just as vulnerable to identity theft attempts, spam, and associated
viruses as people who use a computer.
• From January to March 2005, over 1,000 vulnerabilities were found in cell phones, a
6% increase from the previous year.
• The use of adware, which is short for advertising software, makes spreading spam and
viruses incredibly easy. Between 2004 and 2005, McAfee reported a 20% increase in
threats using malware. Malware is malicious software such as viruses or Trojan
horses.
• In 2003, cell phone users were sent fewer than 10 million unsolicited e-mails. That
number is expected to rise to 500 million this year.
• Cell phone users accessing the Internet should protect themselves with anti-virus
protection, install the latest patches, and employ spam filters just like when using a
computer (Kay, 2004).
While it may be impossible to avoid the receipt of fraudulent e-mail, there are some
handy procedures to follow:
• If you receive e-mail that tells you, with little or no notice, that an account of yours
will be closed unless you confirm your billing information, do not click the link in the
e-mail or reply to it. The best policy is to contact the company via telephone or a
website address that you know to be correct.
• Avoid e-mailing personal and financial information. Before submitting financial
information through a website, make sure the “lock” icon is on the browser’s status
bar. It tells you that your information is secure during transmission.
• Review credit card and bank account statements as soon as you receive them to look
for unauthorized charges. If your statement is late by more than a couple of days, call
your credit card company or bank to confirm your billing address and account
balances.
• Report suspicious activity to the FTC. Forward the actual spam to uce@ftc.gov. If you
believe that you have been scammed, file your complaint at www.ftc.gov, and then go
to the FTC's Identity Theft website at www.ftc.gov/idtheft to learn how to minimize
your risk of damage from identity theft (Cox, 2004).
• Look for misspellings and bad grammar in the e-mail. While an occasional mistake
can slip by any organization, more than one is a tip-off to be concerned.
• If the e-mail refers you to a website, check the URL closely. It is easy to disguise a
link to a website. Examine the @ symbol in a URL: most browsers will ignore all
characters preceding the @ symbol in the URL address for a site. So the web address
http://www.wellknownrespectedcompany.com@gonephishing.com may look like a
page for the well-known, respected company, but it actually takes the unsuspecting user
to gonephishing.com. The longer the URL, the easier it is to hide the actual
destination address. Another way to conceal the true address is to substitute similar-
looking characters, so that paypal.com could be (and has been) spoofed as pay-
pal.com. Also, a zero can be substituted for the letter o in the URL (Kay, 2004).
• Use anti-virus software. Sometimes phishing e-mails contain software that can harm
your computer or track your activities on the Internet. A firewall helps make you
invisible on the Internet and blocks all communications from unauthorized sources
(FTC, 2004).
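The @-symbol deception described in the bullet above can be demonstrated with a short script. This is only an illustrative sketch in Python, using the standard library's urllib.parse; the domain names are the hypothetical examples from the text, not real sites:

```python
from urllib.parse import urlsplit

# In a URL, everything before an unencoded "@" in the authority section is
# treated as userinfo (a username), not as the destination host.
deceptive = "http://www.wellknownrespectedcompany.com@gonephishing.com/login"

parts = urlsplit(deceptive)
print(parts.username)  # 'www.wellknownrespectedcompany.com' -- merely a fake "username"
print(parts.hostname)  # 'gonephishing.com' -- the actual destination

# A simple heuristic check: flag any link whose authority section contains
# userinfo, since legitimate consumer sites rarely embed credentials in links.
def looks_suspicious(url: str) -> bool:
    return urlsplit(url).username is not None

print(looks_suspicious(deceptive))                       # True
print(looks_suspicious("http://www.example.com/login"))  # False
```

A mail filter or proxy could apply the same check automatically to warn users before they follow such a disguised link.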
Gartner, a technology consulting firm, warns that phishing could slow the expansion
of the Internet. "The rise in phishing attacks is threatening consumer confidence as never
before," Gartner reported. Based on a Gartner survey of 5,000 Internet users, an
estimated 11 million people have clicked on a link from a phishing e-mail, and an
estimated 1.8 million people have given their credit card number or billing address to a
fake website. The company estimates that another million people have been taken in
without knowing it.
Most phishing thefts go unsolved, with the banks usually absorbing the cost. Banks
consider it cheaper to absorb the losses rather than spend money to try to stop phishing,
if it can be stopped at all. Gartner estimates that banks and credit card companies in the
United States lost about $2.4 billion to phishing. A single group of 53 phishing thieves
arrested in Brazil stole approximately $30 million (Wehner, 2005).
IV. WHAT TO DO
V. CONCLUSION
Sadly, there is no central data base that captures all reports of identity theft (FTC
2004). Clearly, we need one. Identity theft is a difficult crime to investigate and even more
difficult to prosecute. Victims often do not realize that they have been defrauded until weeks
later, when unauthorized charges begin showing up on their credit card statements or
withdrawals appear on their bank statements. Then there is the difficulty of finding out who
sent the e-mail. Given
the way that the Internet works, senders could be anywhere in the world and can easily
disguise their identity.
Laws have done little to stop identity theft. California has passed a law, Civil Code
1798.85-1798.86, which provides some guidelines for displaying and transmitting
information. Illinois and Washington state have recently passed laws on identity theft, but
it remains to be seen whether they will be effective. In the short run, education seems to be the
best action to expose identity theft. It is in the best interest of banks and e-tailers to get a
message of vigilance out about identity theft and educate the public.
REFERENCES
Arkansas Democrat Gazette. (July 19, 2004). "Phishing Expeditions Hazardous for PC
Users", 1D.
Computers for Seniors, Inc. (April 5, 2004). "Phishing." Retrieved (May 25, 2004) from
http://www.cfscapecod.com/.
Cox, Mike, Michigan Department of Attorney General. (March, 2004). "Fraudulent
Emails: Thieves Intend to Steal Your Personal Information." Retrieved (April 10,
2004) from http://www.michigan.gov/ag/.
Frank, Mari J. (2005). "Identity Theft: Prevention and Survival." Retrieved (October 5,
2005) from http://www.identitytheft.org/.
FTC. (July 21, 2003). "Identity Thief Goes 'Phishing' for Consumers' Credit
Information." Retrieved (July 26, 2004) from http://www.ftc.gov/opa/2003.
Government Technology. (2005). "New Laws Aim to Protect Personal Data in Washington
State." Retrieved (January 10, 2006) from http://www.govtech.net/magazine/story.php?id=95594.
IC3. (May 10, 2005). "New Threat to Cell Phone Users." Retrieved (November 1, 2005) from
http://www.ic3.gov/media/2005/050510-1.htm.
Kay, Russell. (January 19, 2004). "Phishing", Computerworld, Vol. 38, No. 3, pages 44-45.
Swartz, John. (April 27, 2005). "Cell Phones Now Rich Targets for Viruses, Spam, Scams."
Retrieved from http://www.usatoday.com/tech/news/2005-04-27-cell-phones-usat_x.htm.
Wehner, Ross. (January 3, 2005). "Groups Unite to Battle Phishing", Arkansas Democrat
Gazette, 1D.
GOVERNMENT REGULATION OF THE OATH OF HIPPOCRATES:
HOW FAR CAN THE GOVERNMENT GO?
ABSTRACT
The United States Department of Health and Human Services has issued regulations
mandating that all physicians provide oral interpretation and written translation services to
limited English proficient persons free of charge and without reimbursement. The regulation
forbids physicians from relying on friends, family, and coworkers of the limited English
proficient person to provide informal translation services. This article discusses the
regulation and whether it exceeds the government agency's authority under Title VI
of the Civil Rights Act, forces physicians to speak in violation of the First Amendment,
places an undue burden on the patient-physician relationship long governed by the Oath of
Hippocrates, and was improperly imposed by the government without the benefit of public
notice and hearing.
I. INTRODUCTION
Title VI of the Civil Rights Act prohibits discrimination on the basis of race, color, or
national origin “under any program or activity receiving federal financial assistance.” Most
physicians receive federal funds from the Medicare and Medicaid programs. President Clinton
in August 2000 issued Executive Order 13166 that directed all federal agencies to improve
access to all federal programs for Limited English Proficient (LEP) persons. On August 8,
2003, the Bush Department of Health and Human Services (HHS) issued a language rule that
provided that all physicians who receive any federal funds are required to provide oral
interpretation and written translation services to LEP persons free of charge and without
reimbursement. The rule specifically forbids physicians to rely on the LEP person’s family,
friends or other informal interpreters. The rule obligates physicians to provide a specific level
of translation services regardless of whether such services are actually used. The rule is
burdensome because it requires hiring someone to be on standby and immediately available,
even for unexpected patients. Physicians not in compliance are subject to the loss of federal
funds, to investigation for violation of Title VI, and to likely prosecution by the Department of
Justice. Notably, the rule was issued without the traditional public notice and hearing
usually provided for government regulations. Clearly, the rule changes the way physicians
practice medicine and increases the likelihood of exposure to civil rights investigations and
lawsuits. For over 2000 years physicians have been guided by the Oath of Hippocrates. The
oath says: “I will follow that system of regimen which, according to my ability and judgment,
I consider for the benefit of my patients, and abstain from whatever is deleterious and
mischievous.” We now turn to a discussion of the effect of the language rule on the Oath and
the physician’s practice of medicine.
II. DISCUSSION
The first question is whether Title VI grants the Agency authority to establish a
language rule for physicians. The law is well settled that a government agency may not
exceed the authority granted it by a statute, Campbell v. Galeno Chemical Company (1930).
Title VI was intended to prevent discrimination against individuals on the basis of race, color
or national origin. Clearly, Title VI prohibits national origin discrimination. Physicians who
intentionally discriminated against patients from Poland or Mexico because they were from
Poland or Mexico would certainly fall within the prohibitions of the statute. But can language
alone be considered a proxy for national origin discrimination? Title VI is silent on the
question of language. There are no reported cases that establish that language alone can be
used to create a suspect class under Title VI. The U.S. Supreme Court has decided that
Chinese students who did not speak English were entitled to equal protection opportunities,
Lau v. Nichols (1974). But the court said that the appropriate remedy might be either to give
instruction in Chinese or to teach the students English. It did not find necessary, nor did it
order, any sort of translation services. In Guadalupe v. Tempe School District (1978), the 9th Circuit
said that the district was not obligated to provide bilingual instruction to Mexican-American
and Yaqui Indian children. Remedial instruction in English was deemed sufficient. Finally, in
Alexander v. Sandoval (2001), the Supreme Court ruled that Title VI only prohibits
intentional discrimination. Here, the reach of the attempted Agency language regulation is
not limited to intentional discrimination. The regulation allows an LEP person to file a
discrimination complaint based solely on a claim that a physician does not provide free oral
or written translation services without regard to whether there are other reasonable
alternatives available. There is no requirement to claim that the physician was intentionally
singling out the person because of his language problems. Finally, if Congress intended Title
VI to include language as a proxy for national origin it had the opportunity to include such
language in the statute or amend the statute to so reflect. It has done neither. There is a
compelling argument that HHS exceeded the Congressional mandate of Title VI when it
issued the language rule for physicians.
The second question is whether language is really a proxy for national origin
discrimination under Title VI. Title VI simply prohibits intentional discrimination based on
color, race or national origin. We know of no court sanctioned finding that language is a
proxy for national origin. In Toure v. United States (1994), a native of Togo living in the
United States claimed that a U.S. Government Agency had to communicate with him in
French. The Second Circuit disagreed saying that providing notice in a person’s preferred
language was an unreasonable burden. And in a case directly on point, Soberal-Perez v.
Heckler (1983), ironically a case against HHS, the court rejected claims that the failure of the
HHS to provide notices in Spanish constituted national origin discrimination. The court said,
“Language by itself does not identify members of a suspect class.” The government’s
apparent position that language is a proxy for national origin discrimination finds no
compelling support in the case law. Maybe HHS’s lawyers ought to read their own cases.
Next, is the language rule in violation of a physician’s First Amendment rights? The
HHS language rule seeks to compel physicians to speak in a way mandated by the
government. It surely regulates the manner and the content of the way they speak. A fellow
named James Madison once said that the only thing worse than restricting a person’s freedom
of speech was compelling him to say things he didn’t believe. Madison’s statement is on
point here. The government cannot compel one to pay for a message contrary to his beliefs,
Pacific Gas and Electric Company v. Public Utilities Commission of California (1986). In
that case the State tried to force the utility to place ads opposing a proposed nuclear power
plant in the customer’s monthly bills. Clearly, the content of the speech between the
physician and his patient is medical advice and information just like the ads. This is not the
sort of information that the government may generally regulate. Here, the government
language rule regulates the mode of communication between patient and physician. Again,
the government may not regulate the choice of language used to convey a particular message,
Cohen v. California (1971). Also see Meyer v. Nebraska (1923) holding that a state may not
coerce the speaking of a particular language. One troubling effect of the regulation is that it
prohibits physicians and patients from speaking through people they both might prefer the
most, like family members or close friends. There is a considerable likelihood that the
language regulation is contrary to the First Amendment because any government effort to
suppress speech is presumed unconstitutional (Tribe, 1988).
Finally, what about the fact that HHS issued the language regulation without first
giving the public an opportunity to comment about the rule? Generally, the Administrative
Procedures Act (APA) mandates a notice and comment period when the government issues
binding rules. HHS says that the language rule does “not constitute a regulation subject to the
rulemaking requirements of the Administrative Procedures Act.” It says that the rule only acts
to clarify the existing legal requirements to accommodate LEP persons under Title VI. But,
alas, Title VI and its implementing regulations contain no such discernable requirement and
do not discuss language at all. Despite the failure of statutory language or caselaw to show
that language is a proxy for national origin, HHS claims it has a long standing tradition of
protecting LEP persons against national origin discrimination. The problem, however, created
by HHS’s position is that it reflects that HHS has a settled policy of treating language as a
proxy for national origin. If that is so, a settled rule, as in this case, issued by a government
agency, requires a notice and comment procedure under the APA. There was none. In
Appalachian Power Company v. EPA (2000), the EPA issued what it called “guidance”
concerning monitoring of emissions from power plants. The guidance was binding on the
power company and was issued without notice and a comment period. The Court found that
the mislabeled guidance was binding and that the APA required notice and a public comment
period. Here, the language rule is clearly settled Agency policy and cannot be considered
final without giving the public and affected physicians an opportunity to comment. The
language rule requires physicians to comply under penalty of loss of federal funds and/or
prosecution. In Appalachian Power (2000), the court said that if an Agency acts as if a
document issued at headquarters is controlling in the field, and if it treats the document in the
same manner as a legislative act it is a final rule. That is exactly what we have here. The HHS
language rule is final. It is not enforceable because the notice and comment requirements of
the APA were not followed. So where do we go from here?
We are blessed that the very questions raised here are awaiting a decision of a panel
of the 9th Circuit Court of Appeals. The case is Colwell v. Department of Health and Human
Services (2005). The case originated in the Southern District of California. Much to the
surprise and chagrin of physicians everywhere, the District Court judge granted summary
judgment to HHS, saying that the plaintiffs had failed to state a claim. We might say that the
lower court decision was also amazing to a lot of interested observers. We now turn to what
we think ought to be the conclusion of the matter.
III. CONCLUSION
Appellate courts like to decide cases on the narrowest possible grounds. The facts of
this case raise the issues of whether Title VI gives the Agency authority to issue language
regulations, whether language can ever be a proxy for national origin discrimination, whether
the regulations implicate physicians' First Amendment right to communicate with their
patients, and whether the regulations were issued in accordance with the Administrative
Procedures Act. We contend, for the reasons discussed above, that the court ought to decide
that Title VI was never intended to give HHS the authority to issue language regulations, that
language is not a proxy for national origin discrimination, that the regulation runs afoul of the
First Amendment by regulating protected speech, and that the regulation is invalid for failure
to have a notice and public comment period required for a final rule. Sadly, there is a way out
for the appellate court without reaching the first three important issues. The court could
simply find that the regulations are not enforceable because of the agency's failure to follow
the APA notice and comment provisions. In such a decision, the court would simply enjoin
enforcement of the language regulation until HHS takes the public comments required by the
APA and reconsiders the LEP language regulation. If, after reconsideration, the Agency again
publishes the language regulations, the parties will likely be back in court litigating the first
three issues. The best solution for all concerned is to leave the regulation of the practice of
medicine to the Oath of Hippocrates. These sorts of government regulations are unduly
burdensome on physicians who are already swamped dealing with insurance companies and
Medicare and Medicaid rules. What physician who has LEP patients would not make
arrangements to communicate with them? There are numerous economic and medical incentives to do so. It
is not unusual to see, for example, notices in business locations that Spanish is spoken here or
that there are menus available in Spanish. HHS ought to let the market handle the language
question.
REFERENCES
WHAT ARE THE BENEFITS, CHALLENGES, AND MOTIVATIONAL
ISSUES OF ACADEMIC TEAMS?
ABSTRACT
Academic teams are a vital part of the modern college classroom. Students working
in teams is one of the more common approaches to cooperative learning. Professors and
students should understand the different aspects of academic teams before they are used in
the classroom. They must be aware of the benefits of academic teams, the challenges of
academic teams, and techniques to motivate team members. Working on a successful team
not only helps students academically, it also helps to prepare them for the highly team-
oriented workforce.
I. INTRODUCTION
Author C. S. Lewis once wrote, “Two heads are better than one, not because either is
infallible, but because they are unlikely to go wrong in the same direction.” This quote
simply illustrates an important benefit of cooperative learning. Research has “shown that
together, students are able to achieve and learn more than any student is able to individually”
(Pfaff and Huddleston, 2003, page 37). Academic teams, a form of cooperative learning, are
becoming an important learning tool in many college classrooms. As with any type of team,
academic teams are faced with different challenges. It is important for professors, and team
members alike, to understand these challenges in order to promote team success. However,
the many benefits of academic teams make them a crucial part of a quality higher education.
One of the most commonly cited definitions of a team is by Katzenbach and Smith
(1993): "A team is a small number of people with complementary skills who are committed
to a common purpose, performance goals, and approach for which they are mutually
accountable.” Proponents of team-based learning suggest that if work teams result in higher
productivity in the work force, then the same relationship should also hold in the educational
process (Bacon, 2005). The basis of the cooperative learning theory is that students working
in teams are able to apply and incorporate information in more complex ways than students
working individually.
Benefits of academic teams can fall into two categories: benefits in the classroom and
benefits to the individual. Benefits in the classroom include the following.
Accomplish Projects an Individual Cannot Do: Many desirable class projects are too large
or too complex for one individual to complete alone.
Brainstorm More Solution Options: Different individuals looking at the same problem will
find different solutions. A team can review ideas and put together a final solution which
incorporates the best individual ideas.
Detect Flaws in Solutions: A team looking at different proposed solutions might also find
pitfalls that an individual might miss. The final solution is that much stronger.
Build a Classroom Culture: Members of effective teams can form personal bonds, which are
good for the individual and classroom morale. Also, students on teams may form bonds which
extend beyond the classroom.
Increase Student Participation: In academic teams, students working together are actively
engaged in learning instead of passively listening to a teacher lecture. In a teacher-centered
class, the teacher speaks about 80% of the time. Thus it is estimated that in this typical
classroom, with 30 students in a class (fewer than in many classrooms), each student
speaks for less than 30 seconds in each one-hour class period (Lie, 2004).
Promote Diversity: Cooperative groups may include students of varied racial or ethnic
backgrounds. Because students are actively involved in studying classroom issues and
communicating with each other on a regular basis they are provided with opportunities to
appreciate differences. As students are exposed to methods and ideas that come from a
diverse team, they learn different ways of approaching a problem.
Individualize Instruction: “With cooperative learning groups, there is the potential for
students to receive individual assistance from teachers and from their peers. Help from peers
increases learning both for the students being helped as well as for those giving the help”
(Lie, 2004). In classrooms that emphasize the lecture method, teachers cannot always stop to
help students that are having trouble keeping up with the class.
Decrease Anxiety: Many students do not speak out in a traditional classroom setting
because they fear appearing foolish. “In contrast, there is less anxiety connected with
speaking in the smaller group” (Lie, 2004). When students work in teams, the product comes
from the team rather than from the individual. Therefore, the focus is removed from any one
student and the entire team becomes responsible. Cooperative groups provide a safe
environment for students to communicate ideas without the fear of criticism.
IV. CHALLENGES OF ACADEMIC TEAMS
“Although the positive outcomes of teamwork are numerous, there are problems
related to work in group contexts. Some problems in team-oriented work result from the
actions of the students themselves” (Pfaff and Huddleston, 2003, page 38). It is important for
college professors and students to be aware of these challenges, in order to prepare for them.
Some of the challenges that academic teams can be faced with are social loafing, scheduling
conflicts, grading method, team member conflicts, and task perception.
Social Loafing: One common problem with academic teams occurs when a team member
does not contribute equally in the group. “[Research results] suggest that as the size of the
team increases, social loafing (i.e., the tendency of certain team members to free ride on the
efforts of others) is more likely” (Deeter-Schmelz, Kennedy, and Ramsey, 2002, page 115).
“[Social loafing] can be addressed by changing team membership or by increasing individual
accountability” (Pfaff and Huddleston, 2003, page 38). Providing an opportunity for students
to evaluate each other’s contributions may also decrease the occurrence of team members not
contributing to the group. To avoid any fear of social consequences linked to grading each
other, the team members’ evaluations should remain confidential.
Scheduling Conflicts: Many college students have jobs and family responsibilities that
make coordinating times for teamwork difficult. One way to alleviate some of the problems
involved with time coordination is for the teacher to provide class time for teams to work
together (Koppenhaver & Shrader, 2003). Encouraging or requiring the students to use e-mail
and/or chat rooms can provide another means for students to easily communicate with each
other outside of class.
Grading Method: Students are very anxious about being in a situation where their
individual grade depends on the performance of other individuals. A professor’s grading
method for individuals and teams can also be problematic. Due to the nature of team grading,
a student may feel that they have lost control of their academic destiny. “Students frequently
complain, and express dissatisfaction, when their personal grade is heavily weighted by team
output as opposed to individual accomplishment” (Boughton, 2000). “Relying solely on
instructor-based evaluation can also cause problems; not including a peer evaluation in the
grade may adversely affect student attitudes toward teamwork” (Pfaff and Huddleston, 2003,
page 38). Project preparation by the instructor is important so the students know that grading
for a team project consists of more than just a team grade and is a balanced assessment.
Team Member Conflicts: Putting together groups of individuals with different backgrounds
and ideas will inevitably lead to disagreements and conflict. Many times, conflict occurs
when professors have control over assigning team membership. Students need to find ways
to handle this kind of challenge. One solution is to let students take charge of team
management. The team members choose how to resolve the problem. The professor abides
by the team decision as long as documentation of the offense is presented. Another way a
professor can alleviate this problem is to provide team building and team maintenance
instruction within the structure of the assignment. Remind the students that some
disagreement is normal and conflict is a part of the team building process. Without it, teams
may not be able to examine all points of view and synthesize information.
Task Perception: If the students perceive the group assignment as "busy work", their attitude
will be poor and the quality of the experience will be minimal, if any.
364
One way to avoid this problem is to allow students the flexibility to choose their own topic
for study. The teams will likely choose something that is relevant and meaningful to them.
This helps avoid the idea of only “busy work.” Another solution is for the professor to
emphasize the importance of teamwork in real-world situations. Discuss the importance
corporate recruiters place on being able to work in teams as a hiring criterion in cooperative
and job recruitments and interviews. When students realize the personal benefits of
teamwork, the problem of poor attitudes toward group work should decrease dramatically.
V. MOTIVATION TECHNIQUES
Unfortunately, there is no single recipe for motivating students. The good news is that, by
their very nature, academic teams can easily incorporate self-motivating strategies. Many of
the benefits of academic teams are related to intrinsic motivation. In a study comparing
cooperative learning to whole-group instruction, “the most consistent results of [the] study
related to student motivation, all aspects of which were more positive during cooperative
learning” (Peterson and Miller, 2004, page 131).
Setting the Stage: The first step is to create a class climate that encourages cooperation.
Communicate clear expectations about team projects on the first day. Explain why teams are
used and how students can benefit from them. Provide a non-threatening, hands-on
introduction to teamwork that students can easily accomplish. Instead of just telling students
teamwork can be fun, demonstrate it by letting them develop team logos and team names. An
important step in team building is for the team members to become acquainted, so some time
at the very beginning should be devoted to the team socialization process. Short
brainstorming exercises can help students get used to the team process and understand the
benefits of comparing different points of view (Nelson, 2004).
Assigning Membership: A key to a motivated and successful team is how the members are
assigned. Three options are available: 1) students pick their own partners, 2) partners are
randomly assigned, or 3) partners are strategically matched by the instructor. When students
form their own teams, the potential for cliques to form within a team greatly increases,
potentially excluding some team members. Oakley et al. (2004) list several pitfalls of letting
students select their own groups. First, students of similar abilities tend to congregate: strong
with strong, weak with weak. This limits interaction, preventing weaker students from
learning how stronger students approach problems and robbing the stronger students of the
educational value of peer teaching. A second pitfall is that groups will likely form around
pre-existing friendships. This decreases exposure to different ideas, and such groups are more
likely to encourage and cover for inappropriate behaviors like non-participation and free
riding.
A third option, strategic assignment by the instructor, allows student assets to be balanced
across teams. Student assets typically include such things as work experience, previous
relevant course work, skills, and perspectives from other cultures. Balancing the strengths
and weaknesses of members can help ensure that the groups function well and that no group
has a distinct advantage over another.
Project Selection: Students will be much more committed to a team project that has value for
them and that they can see as meeting their needs, either long term or short term (Stewart &
Powell, 2004). Students need to feel that the project is significant, valuable, and worthy of
their efforts. To increase motivation and sustain learning, team assignments should be
designed to address these kinds of needs and interests. Students are more engaged in
activities when they can build on prior knowledge and draw clear connections between what
they are learning and the world around them. Once a topic for the team project is agreed
upon, the team needs concrete tasks to accomplish and specific goals to meet in order to be
motivated to work together.
Give Control: In environments where others rigidly prescribe students' tasks and activities,
levels of responsibility and commitment often wane. It is important to relinquish some
control to the students: for example, give teams a choice between projects, let them develop
their own topics, or let them pick their own leader. Even small opportunities for choice, such
as deciding during class time whether to work in the classroom or in another location, give
students a greater sense of autonomy. Giving students control does not mean giving up
control. Some aspects of the project that students will not like are nevertheless essential.
Fortunately, research suggests that students feel some ownership of a decision if they agree
with it, so getting students to accept the reasons some aspects of a team project are not
negotiable is a worthwhile endeavor (Ashraf, 2004).
Feedback: Used correctly, feedback is a very powerful tool to motivate students to participate
and learn. Give students constructive feedback on a regular basis. Teams need some
indication of how well they are doing and how to improve; it is more motivating to have
direction than to wonder about potential problems. Although it is important to evaluate the
progress of the team project, critiques need to be presented with tact. Feedback in the form
of grades should include peer evaluations, which give students a greater sense of control over
their group evaluation.
VI. CONCLUSION
Teamwork skills are also highly valued in the business world. To remain innovative and
competitive, businesses are looking for employees who can work and learn effectively in
teams.
REFERENCES
Ashraf, Mohammad. “A Critical Look at the Use of Group Projects as a Pedagogical Tool,”
Journal of Education for Business, (March/April), 2004, 213-216.
Bacon, Donald R. “The Effect of Group Projects on Content-Related Learning,” Journal of
Management Education, 29(2), 2005, 248-267.
Deeter-Schmelz, D.R., Kennedy, K.N., & Ramsey, R.P. “Enriching Our Understanding of
Student Team Effectiveness,” Journal of Marketing Education, 24(2), 2002, 114-124.
Katzenbach, J.R. & Smith, D.K. The Wisdom of Teams: Creating the High-Performance
Organization. Boston: Harvard Business School Press, 1993.
Koppenhaver, G. D. & Shrader, C. B. “Structuring the Classroom for Performance:
Cooperative Learning with Instructor-Assigned Teams,” Decision Sciences Journal of
Innovative Education, 1(1), 2003, 1-21.
Lie, A. Cooperative Learning: Changing Paradigms of College Teaching. Retrieved March
29, 2005, from http://faculty.petra.ac.id/anitalie/LTM/cooperative_learning.htm
Nelson, Bob. “Motivating People Is the Right Thing to Do,” Corporate Meetings &
Incentives, 23(11), 2004, 59-62.
Oakley, B., Felder, R. M., Brent, R., & Elhajj, I. “Turning Student Groups into Effective
Teams,” Journal of Student Centered Learning, 2(1), 2004, 8-33.
Peterson, S.E. & Miller, J.A. “Comparing the Quality of Students' Experiences During
Cooperative Learning and Large-Group Instruction,” The Journal of Educational
Research, 97(3), 2004, 123-133.
Pfaff, E. & Huddleston, P. “Does It Matter if I Hate Teamwork? What Impacts Student
Attitudes toward Teamwork?” Journal of Marketing Education, 25(1), 2003, 37-45.
Stewart, B. & Powell, S. “Team Building & Team Working,” Team Performance
Management, 10(1), 2004, 35-39.
CHAPTER 12
HEALTH COMMUNICATION
AND
PUBLIC POLICY
HEALTH, CULTURE, COMMUNICATION: PERCEIVED INFORMATION
GAPS/NEEDS OF FEMALE MINORITY PATIENTS & THEIR DOCTORS
ABSTRACT
I. INTRODUCTION
The need for effective communication between patients and their doctors has been studied
extensively (McGee & Cegala, 1998; Ong et al., 1995). However, as the demographics of the
United States change, it is imperative to reexamine the traditional interpersonal
communication style between doctors and their patients, especially female minority patients.
A 2001 Census Bureau report indicates that the Hispanic/Latino population in the United
States, at 37 million, now exceeds the African American population of 36.3 million. A
majority of the Hispanics are of Mexican origin, including Mexican immigrants who are
increasingly settling in Nevada. In spite of these demographic changes, there is little evidence
that the health care industry makes any effort to ascertain the medical information needs of
this population or of other multicultural patients in order to serve them effectively.
Studies of the treatment of women and ethnic minorities (Corea, 1985; Kreps &
Kunimoto, 1994) indicate a need for gender-sensitive and culture-sensitive approaches to
healthcare communication. Scully (1980) notes that poor and minority women are used as
teaching tools in hospitals to enable medical students to exercise the skills they will
subsequently use on affluent patients. Additionally, communication between women and their
doctors is usually marked by a significant imbalance (Scully, 1980; Raymond, 1982). The
inequity is partially explained by the long-standing belief among some that physicians are the
only experts on health and that their authority therefore should not be questioned. In a pivotal
study of obstetrician-gynecologists, Scully (1980) argues that medical education prepares
doctors to gain the patient's trust and confidence in order to manipulate her into doing what
the physician wants her to do.
This tradition, reinforced in medical school training, treats qualities like empathy and cultural
and gender sensitivity as inconsequential to the making of a doctor. In fact, studies indicate
that although communication skills are taught in some medical schools, they are not a
standard part of the curriculum; offerings range from a four-hour workshop to a five-day
course in communication (Chase, 1998), and are often dismissed by the students as
extraneous to their medical training (Frederickson and Bull, 1992). Even when doctors have
acquired communication skills, research shows that they frequently do not meet patients'
information needs because they assume the patients do not need the medical information
(DiMatteo, Reiter & Gambone, 1994; Williams, et al., 1995). On the other hand, studies also
reveal that patients seek little or no information from physicians, although most contend they
want information about their illness to enable them to make decisions about their health
(Beisecker, 1990). In essence, doctors and patients view effective communication differently
(Sanchez-Menagey & Stadler, 1994).
However, given that the quality of communication between physicians and their patients
profoundly affects the quality of patient health care (Roter & Hall, 1992), developing
effective rapport with multicultural or minority patients is important for successful health
care delivery to that population (Kreps and Kunimoto, 1994).
A modified version of Sense-Making theory and method was used for data gathering and
analysis. Dervin (1976) developed the “sense-making” approach through a series of extensive
studies, conducted since 1975, on human information needs and uses. Dervin (1989) defined
“sense-making” as a coherent set of concepts and methods used to study how people make
sense of their worlds, especially how they construct their information needs and put
information to use. Sense-making theory rests on the assumption that humans seek and/or use
information when they find themselves in circumstances that hinder their movement in time
and space (Dervin & Clark, 1987). Sense-making rests on the Situation-Gaps-Uses model
(Dervin, 1989). Situation is the time-space framework in which individuals make sense of
their situations or needs. Gaps are the information needs or concerns of respondents as they
move through time-space. Uses/Helps indicate questions, answers, and other information that
help individuals “bridge the gap” created by their needs, as well as the uses to which the
newly acquired information is put. Dervin noted that these three parts of the sense-making
model allow for inter-subjectivity of responses, or the creation of shared meaning.
Sample Selection: Participants were selected through purposive sampling, a non-probability
sampling procedure. Babbie (1986) described this kind of sampling as one based on the
researcher's familiarity with the population and judgments about the purpose of the study.
Patton (1990) affirmed that purposive sampling allows researchers to choose "information-
rich cases that would clarify the research questions" (p. 173). In this case, although the
doctors and patients do not represent all of their peers, they constitute a subset of the
population under study and therefore qualify as an appropriate sample. The doctors were
selected from directories of healthcare providers, while the female minority participants were
identified and selected through nontraditional means.
Physicians: To ensure an adequate response from doctors who served the female population
in Reno/Sparks, 126 OB/GYN, internal medicine, and general practitioners were surveyed.
Participants were selected from the OB/GYN listings in the Hometown Health and Blue
Cross Blue Shield directories, cross-referenced with the yellow pages for current address
information and for doctors who may not have been listed in those two health directories. A
total of 15 responses were received, for a 12 percent response rate; of these, 13 respondents
participated, for a 10 percent overall response rate. Given this negligible response rate, the
physician data were discarded.
Female Minority Patients: Because traditional recruitment methods were not successful,
participants were recruited through nontraditional means, ranging from beauty salons, ethnic
restaurants, and businesses to word of mouth. A total of 20 female minority patients
participated in the study over a period of five months in 2004. The women's ages ranged from
20 to 74, with an average age of 42. Twelve were Hispanic, four were African American, two
were Vietnamese, and two were Pacific Islanders. Twelve of the women were married, while
the remaining were single mothers, widows, or divorcees. Eight of the women were college
graduates, while the rest were high school dropouts, high school graduates, had some college,
or were attending community college. Respondents were asked, in two focus group
interviews, to describe, based on their experiences, the context of their communication with a
physician, their medical information needs, and whether the information sought helped or
hindered their ability to make decisions about their health.
Micro-Moment Time-Line Interview: Of the 20 focus group participants, five whose
responses were similar were selected for a modified Micro-Moment Time-Line interview, a
sense-making interview technique that provides detailed and meaningful responses to
research questions. For these interviews, the researcher re-described the situations that the
respondents had alluded to during the group sessions. The participants were asked to choose
the questions or concerns they considered pertinent and to identify physician responses that
helped or hindered their ability to make decisions about their medical condition. These
individual interviews lasted between 60 and 100 minutes and were conducted at restaurants
in the Reno/Sparks area.
Individual Interviews: A modified version of the Micro-Moment Time-Line interview
technique was used because it allowed the researcher to obtain more than superficial
information, given the nature of the topic. Dervin & Clark (1987) wrote that although Time-
Line interviews are longer than regular interviews, their advantages are threefold: they enable
both parties to establish rapport (and consequently trust), provide a forum for clarification of
questions and responses, and enable the researcher to observe nuances that would not be
possible under other interviewing circumstances. These individual interviews focused on the
themes of three situations or experiences that emerged from the focus group discussion, for
which contextual information was sought. The situations dealt with the context of
communication experiences, medical information needs, helps and hindrances, and relational
experiences.
V. RESULTS
All 20 participants in the two focus groups and the individual interviewees freely discussed
the areas of focus: context of communication experience, medical information needs, helps
and hindrances, and relationship experiences with their physicians. Nine themes emerged
from the focus groups and individual interviews: 1) the need for physicians to be patient,
friendly, and sensitive; 2) the need to spend more than 15 minutes with patients; 3) the need
for better communication between primary care physicians and specialists, or between
primary care physicians when patients switched doctors; 4) the need for physicians to be
culturally and class sensitive so as to communicate successfully with these patients; 5) the
need for physicians to participate in patient education; 6) the need for male doctors to be
more sensitive to female medical concerns; 7) patients cherished long association with
physicians; 8) patients found female doctors more sensitive to female medical concerns
regardless of race, ethnicity, or social status; and 9) not all minority doctors were empathetic
to minority women.
Using sense-making method and theory in this study provided rich and diverse answers. It
also highlighted Oakley's (1986) assertion that in order to successfully interview women, the
interviewer sometimes becomes a psychoanalyst or a “friend.” This was evident in all the
focus group and individual interviews. Participants noted that the focus group sessions gave
them an opportunity to talk about issues they had never discussed with anyone outside their
families. The sessions also made them realize that they were not the only ones who had
experienced ineffective communication with physicians. The individual interviewees recalled
and recounted painful experiences about their encounters in hospitals that they had not felt
free to share in the focus group sessions. Participants said the sessions helped them learn
more about themselves and learn from the experiences of other participants. They shared
information and swapped empowering ideas on how to prepare themselves for the next visit
to the hospital or doctor's office.
VII. CONCLUSION
Although this study is exploratory and used a small sample in a small urban center, it
provides some basis for an expanded examination of the topic. It also illuminates an area of
doctor-patient communication that has not been studied extensively: the female minority
patient's perspective. The preliminary data reveal the necessity of understanding the lived
experiences of female minority patients, an under-served group that is growing, as the recent
census figures indicate. A larger study would include a wider diversity of ethnic minority
women in large urban areas. Above all, the study reveals that perceived gaps in
communication exist between female minority patients and their doctors. The women assert
that probing questions are not asked sensitively, that medical jargon confuses them, and that
hospital environments are as intimidating as the doctors. The study also reveals that the
perceived ineffectiveness of communication between female minority patients and their
doctors should be addressed. The areas of concern should be used as a springboard for
developing customized intercultural communication skills training as part of the medical
school curriculum. Female minority patients should likewise be taught skills that would
enable them to communicate effectively in a clinical setting.
Suggestions for Health Care Policy Decisions
Physicians and other healthcare providers should view minority female patients in light of
their multiple roles as mothers, wives, and employees, coupled with language, gender, and
cultural burdens. Thus it is suggested that gynecological and emotional questions be couched
in non-threatening language. To bridge the gap in communication with their doctors, female
minority patients need to be educated on how to communicate with them; for example, by
making a list of questions to ask the physician and by mirroring the physician's responses to
ensure clear understanding. Medical centers should print simple pamphlets in various
languages on “how to talk to your doctor about your illness” for their non-English-speaking
patients.
This exploratory study clearly indicates that female minority patients have different
views on what constitutes effective communication. The study is not intended to criticize
doctors or female minority patients; rather, it highlights the gap in communication between
these two parties. The study also suggests that if this gap is to be bridged, physicians should
develop effective communication skills that incorporate cultural and gender sensitivities.
These communication skills should be incorporated into the medical school curriculum, so
that future doctors will recognize the importance of communicating effectively with their
patients regardless of their background. Female minority patients on the other hand must
themselves or participate in community-based programs where they can be taught
communication skills that would enable them to actively seek information, respond to
medical questions, and fully participate in their own wellness. Finally, health care policy
planners would do well to recognize the need to bridge the communication gap between
female minority patients and health care providers to ensure equitable delivery of health care.
REFERENCES
Babbie, Earl. The Practice of Social Research, 4th ed. Belmont, CA: Wadsworth, 1986.
Beisecker, Analee E. “Patient Power in Doctor-Patient Communication: What Do We
Know?” Health Communication, 2, 1990, 105-122.
Chase, Marilyn. “HMOs Send Doctors to School to Polish Manners,” The Wall Street
Journal, Health Journal, April 13, 1998.
Corea, Gena. The Hidden Malpractice: How American Medicine Mistreats Women. New
York: Harper, 1985.
Dervin, Brenda. “Strategies for Dealing with Human Information Needs: Information or
Communication?” Journal of Broadcasting, 20, 1976, 324-333.
Dervin, Brenda. “Audience as Listener and Learner, Teacher and Confidante: The Sense-
Making Approach.” In Ronald E. Rice and Charles K. Atkin, eds., Public
Communication Campaigns, 2nd ed., Newbury Park, CA: Sage, 1989, 67-86.
Dervin, Brenda, & Clark, K. ASQ: Asking Significant Questions. Alternative Tools for
Information Needs and Accountability Assessments by Librarians. Sacramento:
California State Libraries, 1987.
This work was supported in part by a grant from the University of Nevada, Reno, Junior
Faculty Research Grant Fund; this support does not imply endorsement by the University of
the research conclusions. Thanks to Cindy Petersen, MA, for help with data collection and
analysis.
THE FRAME-CHANGING STRATEGY IN SARS COVERAGE:
TESTING A TWO-DIMENSIONAL MODEL
ABSTRACT
This study examines the changing pattern of SARS coverage in the New York Times.
Based on a two-dimensional model, the study revealed that during the life span of the SARS
epidemic (March 2003 to January 2004), the newspaper employed a frame-changing strategy
on both the time and space dimensions to keep the event salient in the news. An
overwhelming majority of the stories employed the core frames, which were the frames that
originally registered the event on the news agenda. It was also found that during the 11-
month period, the newspaper shifted its focus on the core frame combinations, which further
supported the role of the frame-changing strategy in the coverage of a long-lasting event.
I. INTRODUCTION
The SARS epidemic was a tragedy that alerted the whole world to the “vulnerability of
global health systems” (Chang, Salmon, Lee, Choi, and Zeldes, 2004).
Eventually claiming more than 800 lives throughout the world (WHO, 2003, August 15), the
disease remained largely ignored for almost four months until international news media
reported on it extensively following a SARS warning by the WHO in March 2003 (WHO,
2003, March 12).
Because of the limited human knowledge about the disease and about how the coronavirus,
later identified as the cause of SARS, was transmitted, the news media closely watched the
development of the epidemic during the following months. This study examines the process
by which the SARS outbreak was portrayed in a major U.S. newspaper.
Media framing has been known as the process in which the media select and package
ongoing events and issues (Entman, 1993; Iyengar, 1991; Ryan and Sim, 1990; Schon and
Rein, 1994). Through selection and emphasis of certain aspects of an occurrence or an issue,
media actors (mostly journalists) are able to define a problem (Entman, 1993) within selected
context, thus creating a “constructed reality” (Turk and Franklin, 1987, p. 30). Studies on
agenda-setting, especially second-level agenda-setting, have repeatedly suggested that the
mediated version of reality may considerably affect people’s understanding of social reality
(e.g., McCombs and Reynolds, 2002). However, frames have often been treated as static
features of news content. Very little is known about the process of framing, especially when
involving an event with a long life span. In a recent attempt to address framing as a dynamic
process, Chyi and McCombs (2004) proposed a two-dimensional model, which takes into
account media focus on both the space and time dimensions. In this model, the space
dimension consists of five levels: individual, community, regional, societal, and international.
The time dimension consists of three levels: past, present, and future. Based on their analysis
of the coverage of the Columbine School shootings in the New York Times, they found that
different frames were employed on the space and time dimensions as the event developed. On
the space dimension, the media focus gradually shifted from the individual level to the
societal level. On the time dimension, a considerable amount of coverage focused on past
frames at the beginning, while the use of future frames increased during the second half of
the event’s life span. According to Chyi and McCombs, this frame-changing strategy was
used by the media to secure the salience of the event over time. They also found that the core
frames (the combination of community and present frames), which reflected the essential
features that initially made the Columbine School shootings a salient news event, appeared in
fewer than one quarter of the total number of stories on this topic. The majority of the stories
employed extended frames such as societal plus present.
III. METHOD
This study employed the method of content analysis. Using the Lexis-Nexis database,
a key word search for “SARS” or “Severe Acute Respiratory Syndrome” was conducted in
the New York Times between March 2003 and January 2004. A total of 1,098 stories were
retrieved from the database. A sample of 140 stories was constructed using the systematic
sampling method. The coding instrument was adapted from the scales of Chyi and McCombs
(2004). The coding unit was the story: each story was coded individually for the month of
publication, the space dimension, and the time dimension. The space dimension
includes five categories. A story was coded in the individual category if it emphasized
individual SARS patients, reactions from their families or friends, or contextual information
about individual SARS cases or probable cases without referring to a larger area related to the
cases. A story was in the community category if it emphasized the SARS condition in a local
area, such as a large hospital, a town, or a small city. A story was in the regional category if it
emphasized the situation in a large metropolitan area such as Toronto or Beijing, a special
region such as Hong Kong, a province/state, or another large area within the borders of a
certain country. The national category was a slight variation on Chyi and McCombs's (2004)
societal level. A story was coded in the national category if it emphasized the situation within
a specific country. Finally, a story was in the international category if it emphasized the
situation across national borders, e.g., SARS research led by the WHO.
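The systematic sampling step described above, drawing 140 of the 1,098 stories, can be sketched as follows; the sampling interval and random start are assumptions, since the study does not report them:

```python
import random

def systematic_sample(population_size, sample_size, seed=None):
    """Systematic sampling: take every k-th item after a random
    start, where k is the (assumed) integer sampling interval."""
    k = population_size // sample_size          # 1098 // 140 = 7
    rng = random.Random(seed)
    start = rng.randrange(k)                    # random start in the first interval
    indices = list(range(start, population_size, k))
    return indices[:sample_size]                # trim to the target sample size

sample = systematic_sample(1098, 140, seed=1)
print(len(sample))  # 140
```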
The time dimension includes three categories: past, present, and future. A story was coded in
the past category if it mainly provided historical background or traced relevant events in
history, such as influenza and AIDS. A story was in the present category if it focused on
updates of the ongoing SARS epidemic. A story was in the future category if it made
predictions about further developments of the situation, proposed actions to be taken, or
estimated future impacts of the event, including preventive procedures and proposals for
future medical research. A graduate student and the researcher participated in an inter-coder
reliability check; Holsti's inter-coder reliability for all the variables fell within the highly
satisfactory range of .87 to 1.00 (Holsti, 1969). The graduate student then coded all 140
stories in the sample.
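Holsti's coefficient used in the reliability check is simply 2M / (N1 + N2), where M is the number of coding decisions on which the two coders agree and N1 and N2 are the numbers of decisions each coder made. A minimal sketch (the example codes are hypothetical, not data from the study):

```python
def holsti_reliability(coder1, coder2):
    """Holsti's inter-coder reliability: 2M / (N1 + N2). When both
    coders rate the same items, this reduces to percent agreement."""
    assert len(coder1) == len(coder2)
    agreements = sum(a == b for a, b in zip(coder1, coder2))
    return 2 * agreements / (len(coder1) + len(coder2))

# Hypothetical space-dimension codes for ten stories:
c1 = ["intl", "natl", "natl", "reg", "intl", "intl", "natl", "reg", "intl", "natl"]
c2 = ["intl", "natl", "reg",  "reg", "intl", "intl", "natl", "reg", "intl", "natl"]
print(holsti_reliability(c1, c2))  # 0.9
```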
The first research question asks how the SARS stories were distributed over time. A total of
1,098 articles about the SARS epidemic were published over the 11 months under analysis.
After picking up the topic on March 16, 2003, the newspaper devoted intensive attention to
the event during April and May 2003, averaging more than 10 articles about the outbreak
daily. However, the amount of coverage suddenly dropped to about four stories per day in
June. After July 2003, when the WHO announced that the outbreak had been contained
worldwide (WHO, July 5, 2003), the media focus gradually shifted away from the epidemic.
The amount of coverage reached its lowest point in November 2003, with a total of 19
stories published that month. However, more stories about SARS appeared around the turn
of 2004, partly because of widespread concern about a possible resurgence of the disease
during the winter months.
Research question two asks how the framing of the SARS epidemic changed over time on
the space dimension. The data show that international frames dominated the space
dimension, alone accounting for nearly half (42.9%) of the total. About one in three stories
(36.4%) employed a national frame, and another 13.6 percent of the articles adopted a
regional frame. Very few stories used a frame at the individual (5%) or community (2.1%)
level.
A closer examination of the three dominant space frames revealed a changing pattern
over time (Figure I). While international and national frames apparently led the coverage of
the SARS epidemic during the 11-month period, the whole framing process featured a zigzag
path at all three leading levels.

[Figure I: Percentage of stories employing regional, national, and international frames, by
month, March 2003 to January 2004.]

International frames were the most frequently adopted frames
(57%) when the event first entered the media agenda in March 2003. But the percentage of
international frames continuously decreased until it reached an all-time low point in July,
accounting for only 30 percent of the total. There was a dramatic increase in the use of
international frames in August and September 2003, indicating that the media focus shifted
back to discussion in a broader global context. The percentage of international frames
suddenly dropped during October and November 2003, but regained momentum in January
2004, reaching another peak of 75 percent of the total. The overall pattern of international
and national frames suggested that the two types of frames complemented each other.
Although not a leading frame at the very beginning, national frames gradually gained salience
between March and July 2003. The percentage of national frames suddenly decreased from
60 percent to slightly over 10 percent in August, and then gradually increased (with some fall
in between) to reach a peak in December, after which national frames suddenly disappeared
altogether. The ebb and flow reflected the shift of the newspaper’s focus away from and back to
discussion of the SARS outbreak within a national context. An examination of the change in
the number of space frames suggests that regional frames appeared most often in stories
published during April and May 2003. During the remaining months, very few articles
focused on parts of a certain country.
Research question three asks how the framing of the
SARS stories changed on the time dimension over time. Data analysis showed that only a
negligible proportion (1.4%) of the stories employed past frames. Figure II displays the
distribution of present and future frames. Present frames were the most dominant frames
during the whole life span of the SARS epidemic. During the first few months of the outbreak
and in September and December 2003, an overwhelming majority of the articles employed
present frames. Due to the limited use of past frames over time, the use of future and
present frames was complementary. When media attention shifted slightly away from
current updates on the SARS situation, predictions and propositions about future preventive
procedures gained some prominence.
[Figure II: Distribution of present and future time frames by month, March 2003 to January 2004 (percentage of stories).]
Because SARS coverage in the New York Times tailed off after May 2003, a
distribution of the number of stories on the time dimension may be more revealing. As
illustrated in Figure III, the number of stories employing present frames reached a peak
between April and May 2003, and gradually decreased during the remaining months. The
number of stories using future frames remained low during the whole process, with slight
fluctuations in April and July 2003.
Figure III: Change of Time Frames in the Number of Stories (N=138)

[Chart: number of stories using present vs. future frames by month, March 2003 to January 2004.]
Research question four involves the relationship between frame use on the space and
time dimensions. Chi-square results suggest that at all three frequently used levels of space
frames (regional, national, and international), present frames were always the most
prominent on the time dimension. No significant association between frame use on the space
and time dimensions was identified (χ² = .375, p > .05). The core frames, which were the
“international plus present” combination, alone appeared in nearly half (44%) of all the
SARS stories. The most frequently adopted extended frames were the “national plus present”
combination, accounting for 38 percent of the SARS frames. While it is true that SARS
cases were reported in a number of countries throughout the world and the United States was
not heavily infected during the outbreak, within a SARS-infected country the outbreak can be
considered more a national occurrence than a global disease (Zeng, 2006). National frames
may be essential as well when a foreign correspondent reports within a SARS-hit country,
e.g., a New York Times correspondent reporting from Beijing. Therefore, the national frame
can be considered as a secondary core frame in the U.S. media. The combinations of
“international or national plus present” frames appeared in four out of every five SARS stories.
The heavy use of these two combinations suggested that the newspaper took into
consideration the nature of the SARS outbreak. The shift of focus between these two
combinations indicated that even when focusing on the nature of the event itself, the
newspaper employed a frame-changing strategy to maintain the salience of the event on the
news agenda.
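The independence test reported above can be sketched in pure Python. The contingency-table counts below are purely illustrative placeholders, since the paper reports only the test statistic (χ² = .375, p > .05) and not the underlying table; the three-by-two layout crosses the frequently used space frames with the two dominant time frames.

```python
def chi_square(observed):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            # expected count under the independence hypothesis
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (obs - expected) ** 2 / expected
    return stat

# Illustrative counts only: rows are space frames (regional, national,
# international), columns are time frames (present, future).
observed = [
    [15, 4],
    [40, 11],
    [48, 14],
]
print(f"chi-square = {chi_square(observed):.3f}")
```

A statistic near zero, as in the paper, indicates that the observed cell counts sit close to the values expected if the space and time dimensions were framed independently.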
V. CONCLUSIONS
patterns can be very complicated. Further research is needed to understand the factors that
influence such changes and how such changes may influence public perception.
REFERENCES
Chyi, Hsiang Iris, and McCombs, Maxwell. “Media Salience and the Process of Framing:
Coverage of the Columbine School Shootings.” Journalism and Mass Communication
Quarterly, 81 (1), 2004, 22-35.
Entman, Robert M. “Framing: Toward Clarification of a Fractured Paradigm.” Journal of
Communication, 43, 1993, 51-58.
Holsti, Ole R. Content Analysis for the Social Sciences and Humanities. Reading, Mass.:
Addison-Wesley Pub. Co., 1969.
Iyengar, Shanto. Is Anyone Responsible? How Television Frames Political Issues. Chicago:
University of Chicago Press, 1991.
NAVIGATING ILLNESS BY NAVIGATING THE NET:
INFORMATION SEEKING ABOUT SEXUALLY TRANSMITTED INFECTIONS
ABSTRACT
With changing norms about health care interactions, patients are expected to be active
in their quest for "good" health and their management of disease. The Internet has quickly
become a dominant source of health information with online bulletin board communities
affording people a place to gather both information and support. By sampling messages
(N=500) from two different online bulletin boards, the authors examine information seeking
techniques used by women with sexually transmitted infections. Such bulletin boards allow
those facing stigmatizing illnesses to manage health-related information in a variety of ways.
I. INTRODUCTION
Berger and Bradac (1982) identified three information gathering techniques used
during interpersonal interactions with a "stranger": (1) passive strategies (e.g., observing the
stranger); (2) active strategies (e.g., asking a third party questions about the stranger); and (3)
interactive strategies (e.g., asking the stranger questions). These information seeking
techniques have also been investigated within the context of illness events, since "people with
acute or chronic illnesses often seek information to understand their diagnosis, to decide on
treatments, and to predict their prognosis" (Brashers et al., 2002, p. 258). For example,
Brashers et al. (2000) concluded that persons living with HIV/AIDS may use all three of
these techniques because of the chronic and complex nature of the illness.
information (pp. 70-71). Active techniques involved purposely “eliciting information from
multiple sources” and “monitoring for updated information” (p. 71), and interactive
information-seeking included “self-experimentation with illness and treatments” (p. 71).
Because of the stigma and uncertainty surrounding STIs (e.g., HPV, human
papillomavirus, and HSV, herpes simplex virus), the person infected may need to rely on
passive, active, and interactive information gathering techniques. For example, when dealing
with a stigmatizing illness, some might prefer passive techniques. As Parrott (1995)
contends, “persons who seek information but hesitate to disclose to others their need for
information may use impersonal press or broadcast sources to reduce their uncertainty more
often than they use interpersonal sources” (p. 178).
For the purpose of this paper, these information seeking techniques must be
reconceptualized to "fit" within a framework of online information seeking. Drawing on
previous work (e.g., Brashers et al., 2000; Sharf, 1997), the passive method is comparable to
online "lurking" in that the information seeker is simply observing within an "information-
rich" community. The active approach can describe intentionally eliciting information about
the illness (e.g., asking questions within the online bulletin board community). Finally, the
interactive information-seeking approach was used to describe “real-time" interactions,
allowing for free-flowing communication directly exploring illness-related issues (e.g., online
chatting; instant messaging).
IV. METHODS
Internet bulletin boards display "all messages that have been posted on it and their
respective replies” (Robinson, 2001, p. 707). The two boards selected for this study are
thriving communities: the HPV board, in existence since 1998, contained over 25,000
current and archived messages; the HSV board, beginning in 1999, contained over 12,000.
The boards required no passwords to read the postings, rendering them accessible to anyone
who stumbled across or actively searched for such websites. As illustrated by Robinson
(2001) in the Exemption Decision Model for Unsolicited Narrative Data from the Internet,
the first author applied for and received exempt status from the Institutional Review Board,
since users posting "to a freely accessible asynchronous board expect that persons unknown
to them may read, share, and comment on their postings” (p. 711). Yet, safeguards were still
used to protect users' anonymity: (1) unique identifiers were omitted, such as personal
names; (2) participant message identifiers are used rather than usernames (e.g., post 13 out of
250 HPV posts is referred to as HPV P13); and (3) message board urls (i.e., website
addresses) are not referenced in this paper.
After receiving IRB approval, 500 messages were downloaded from the two boards,
beginning with the most recent message posted and moving toward the earliest messages.
Given that the HPV site had approximately 25,000 messages, 250 (10%) were sampled to
enable persistent observation (Lincoln & Guba, 1985); 250 messages were also sampled from
the HSV site. Message selection was based on several criteria so as to avoid the temptation to
select the most provocative messages (e.g., only messages posted by users directly or indirectly
identifying themselves as females were selected for inclusion).
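The sampling procedure described above (start from the most recent post, work backward, keep only messages meeting the inclusion criteria, stop at the quota) might be sketched as follows. The `Message` record and the `author_is_female` flag are hypothetical stand-ins for the authors' manual screening of posts.

```python
from dataclasses import dataclass

@dataclass
class Message:
    posted_at: int          # posting time, e.g., a Unix timestamp
    author_is_female: bool  # stand-in for the study's inclusion criterion
    text: str

def sample_messages(board, quota):
    """Take the newest messages first, keeping only those that satisfy
    the inclusion criterion, until the quota is reached."""
    newest_first = sorted(board, key=lambda m: m.posted_at, reverse=True)
    return [m for m in newest_first if m.author_is_female][:quota]
```

With a quota of 250 per board, this mirrors the 10 percent sample drawn from the roughly 25,000 archived HPV messages.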
Given that this project was observational in nature, a qualitative content analysis
approach was used in data analysis (Berg, 2004), an approach that draws on grounded theory
methods (Strauss & Corbin, 1998). All 500 sample posts were combined into a single
message transcript, totaling 63 pages, with HPV posts and HSV posts separated into different
sections to allow comparisons within and between categories (Strauss & Corbin, 1998).
V. RESULTS
I have been reading posts from this club for almost a year. I just joined the club last
week. I was afraid to talk to anyone about HPV or warts. [HPV P4]
Second, there was also evidence that some participants desired a more interactive
approach. They too sought information, but they wanted to do so by “chatting.” Because the
HPV board members also had access to a board-associated “chat room,” there were
occasional requests made for other site users to “chat." “I must always sign on [to the chat
room] when no one else is on. I would love to chat, its kinda hard to be single with hpv, I
never know when the right time is to tell someone and should I even date someone that
doesn't have it?” [HPV P70]
Eight posters from the HPV board solicited partners for private chat sessions.
Because the HSV site did not have the chat option, no such requests were made; however,
there was still interest in finding resources. For example, one HSV board member asked:
Can anyone recommend specific sites with good, clear information on just what H [herpes] is
and how it is transmitted and how to possibly prevent transmission? [….] I am a bit new to
"surfing" and I haven't been in a chat room yet (although I'd like to) [HSV P79]
experiential information. Several HSV posters, for instance, wanted to understand herpes
outbreaks beyond just their symptoms: some sought information about outbreak-triggers,
such as foods or psychological stressors; others wanted outbreak cues so as to be able to predict
and understand their illness.
Yesterday I started having a weird feeling in my right thigh and the best way I can
describe it is as very sensitive to the touch although I cannot see any blisters there.
Can someone tell me is this is also considered an outbreak? or if someone else has
experienced this too? [HSV P55].
Especially for women who deemed their health care providers as untrustworthy or
inaccessible, these boards offered them a place to actively secure second opinions and
“substitute” medical advice. For example, once prescribed a treatment, some used the board
to solicit others’ opinions: “I just got prescribed Aldara today. I was just wondering what
other people's experience with it has been” [HPV P132]. Still another user asked, “I was
wondering if you had to be anesthetized…when you had your LEEP?? [Loop Electrosurgical
Excision Procedure] My doctor wants me to be” [HPV P156]. In essence, some women used
the Internet bulletin boards to actively seek information about whether their health care
providers were doing "what is best for me” [HPV P211].
VI. CONCLUSION
This research paper examined women's use of online bulletin boards in their passive,
active, and interactive information seeking efforts about STIs. Unfortunately, two techniques
(passive and interactive) could not be adequately addressed by this project. Unless the posters
explicitly admitted to “lurking” or “chatting” in order to gain information, these techniques
could not be fully analyzed. Yet, what did emerge was that these bulletin board communities
can provide invaluable forums for women actively seeking to manage their STI experience.
Underscoring previous research (e.g., Sharf, 1997), the results herein suggested two
primary benefits to using online communication: (1) it can provide a relatively “risk free”
environment where health-related issues can be explored; and (2) the participants can gain
support and information from people in similar circumstances. Internet sites have managed to
combine a “face-saving” impersonal medium with an interpersonal one. That is, an
individual with a stigmatizing health condition can still connect with others, asking questions
and seeking opinions, but by “hiding” her face behind a computer screen and her identity
behind a username, she may decrease the risk of embarrassment or rejection. Thus, the
anonymity promised by computer-mediated communication can encourage the use of all
three information-gathering techniques.
As the patient's role in health care continues to change (Parrott & Condit, 1996), there
is recognition that communication about health-related matters is not just taking place in
doctors’ offices, a point highlighted by these research findings. Yet, the active information-
seeking patient may face one complication in particular: information management is
becoming complicated by the "saturation of the media environment," making it "difficult for
individuals to avoid information about some health topics" (Brashers et al., 2002, p.265).
Essentially, then, patients now have to sort through the information that their providers give
them, as well as the information they secure through the Internet.
This study's findings help underscore the fact that the Internet can be a tool for
empowerment, especially for those facing a stigmatizing illness. Online bulletin boards can
provide a place for diverse people to meet and help fulfill diverse information needs. Board
members can minimize their interpersonal risks (e.g., embarrassment) by simply asking for
information online. Essentially, then, by navigating the deep and wide waters of the Internet,
these women may be better prepared to navigate their own illness experiences.
REFERENCES
Berg, B. Qualitative Research Methods for the Social Sciences. Boston, MA: Allyn and
Bacon, 2004.
Berger, C.R., & Bradac, J.J. Language and Social Knowledge: Uncertainty in Interpersonal
Relations. London: Edward Arnold Publishers, 1982.
Brashers, D.E., Neidig, J.L., Haas, S.M., Dobbs, L.K., Cardillo, L.W., and Russell, J.A.
“Communication in the Management of Uncertainty: The Case of Persons Living with
HIV or AIDS.” Communication Monographs, 67 (1), 2000, 63-84.
Brashers, D.E., Goldsmith, D.J., and Hsieh, E. "Information Seeking and Avoiding
in Health Contexts." Human Communication Research, (14), 2002, 258-271.
Leonardo, C., & Chrisler, J.C. "Women and Sexually Transmitted Diseases." Women &
Health, 18 (4), 1992, 1-15.
Lincoln, Y. S., & Guba, E. G. Naturalistic Inquiry. Beverly Hills, CA: Sage
Publications, 1985.
Mishel, M.H. "Reconceptualization of the Uncertainty in Illness Theory." IMAGE: Journal
of Nursing Scholarship, 22 (4), 1990, 256-262.
Palefsky, J. & Handley, J. What Your Doctor May Not Tell You about HPV and
abnormal Pap Smears. New York: Warner Books, 2002.
Parrott, R. "Topic-Centered and Person-Centered 'Sensitive Subjects': Recognizing and
Managing Barriers to Disclosure about Health." In L.K. Fuller & L. McPherson
Shilling (Eds.). Communicating about Communicable Diseases (pp. 177-190).
Amherst, MA: HRD Press, 1995.
Parrott, R.L., & Condit, C.M. "Introduction: Priorities and Agendas in
Communicating about Women’s Reproductive Health." In R.L. Parrott & C.M.
Condit (Eds.). Evaluating Women’s Health Messages (pp.1-11). Thousand Oaks,
CA: Sage Publications, 1996.
Robinson, K.M. "Unsolicited Narratives from the Internet: A Rich Source of Qualitative
Data." Qualitative Health Research, 11 (5), 2001, 706-714.
Sharf, B. "Communicating Breast Cancer On-line: Support and Empowerment on the
Internet." Women & Health, 26 (1), 1997, 65-84.
Strauss, A., & Corbin, J. Basics of Qualitative Research: Techniques and Procedures for
Developing Grounded Theory. Thousand Oaks, CA: Sage, 1998.
Wallis, L.A. "Preface." In D. Foley & E. Nechas (Eds.). Women’s Encyclopedia of Health &
Emotional Healing (pp. xxi-xxii). Emmaus, PA: Rodale Press, 1993.
COMMUNICATION AS CAUSE & CURE: SOURCES OF ANXIETY FOR
INTERNATIONAL MEDICAL GRADUATES IN RURAL APPALACHIA
ABSTRACT
I. INTRODUCTION
In a recent editorial, Bruijnzeels and Visser (2005) wrote, "Due to the increased
migration from the sixties in the last century up until now, health care in the 'modern Western
world' is more and more confronted with the consequences of a multiethnic society" (p. 151).
Along with the opportunities associated with a multiethnic society, there are also
complications. In particular, cross-cultural interactions can bring about anxiety and
apprehension (Gudykunst, 2003), especially when interactants are poorly trained or lack
culturally-specific communication skills.
medical students (Blumenthall, 2005). Additionally, there is confusion about how to best
meet the cross-cultural challenges associated with the increased diversification of the U.S.
medical community. To help international health care providers adjust, communication
interventions are being proposed, including cultural immersion (Davis, 2003) and cultural
sensitivity training (Majumdar et al., 1999). Clearly, effective communication is key to
successful cross-cultural adaptation, but it can also be a source of problems.
IV. METHODS
IMGs (N=12) participated in interviews lasting one to one and a half hours at one of three rural
southeastern Appalachian clinics. Participation was voluntary and informed consent was
obtained from each medical resident interviewed. The sample included female (n=7) and
male (n=5) interns from seven different cultures, including: Caribbean (n=1), Iran (n=2),
India (n=4), Colombia (n=1), Denmark (n=1), Pakistan (n=2), and Peru (n=1). To preserve
confidentiality, participants are identified by a number (e.g., P8 for participant 8 out of 12).
A qualitative content analysis approach was used in analyzing the manifest and latent
elements in the interview content (Berg, 2004). As advised by grounded theory
methodologists (Strauss & Corbin, 1998), the twelve interview audio recordings were first
transcribed (167 pages of text) then coded. Initial open and axial coding efforts were done
independently by the first and second authors. Additional axial coding was done jointly to:
resolve differences; compare within and between categories; and select exemplar quotes to
illustrate emergent themes (Strauss & Corbin, 1998). To preserve the integrity of the data,
IMG quotes presented below were only edited for clarity or conciseness. If material was
removed or added by the authors, this is indicated with brackets.
EX: "A lot of the things were very new for me [….]"
V. RESULTS
I am a very social person. I always was. [….] you don’t have that social interaction
that much compared to what you would have in India. It is so crowded and you have
so many people talking to you. [….] You are pushing a hundred people right where
you are walking. Around here when you are walking, there is nobody else. So that
social isolation has a barrier (P8).
One resident acknowledged that being by himself was hard "especially in a small place" like a
rural Appalachian town. He added, "What do you do whenever you are done with all the
movies? [….] Not much, read, watch TV." He added that he would visit others' houses when
invited, "but that is not frequently" (P9).
Aggravating social isolation is cultural and racial isolation. "There is only one family
I know in [town] that is from my country," according to one woman from Pakistan. She
added that such isolation can lead to depression, thereby affecting an IMG's "ability to
perform" as a doctor (P7). For a few other IMGs who had already been living in the United
States, cultural isolation in their new Appalachian community was still anxiety-provoking:
"It was not that difficult in New York because the culture is such a mixed culture [….] So it’s
easier to blend in." This IMG revealed that it is a "big challenge" not interacting with
relatives or "anybody of [a] similar culture or background. […] It’s hard from time to time; I
feel like it has taken me a year to […] feel settled here." (P1)
"how are my testes?" [the patient asked]. I say, "what is that?" [….] "No, no you
mean, 'what are my test - what is the report?'" [….] “you want to know about your
test, not testes, because you are a female.” (P4)
Whether their patients are making broad cultural references to "Jenny Craig" (P11) or
regional references about "mamaw" and "papaw" (i.e., grandmother and grandfather; P10),
these expressions can affect the flow of the doctor-patient interaction. Language barriers
sometimes extended into the IMGs' interactions with colleagues. Culturally specific
expressions like "Oh my God" (P11) and humor can increase an IMG's sense of social
isolation. One woman admitted, for instance, that "it’s hard for me to joke in English"
because it is not her primary language (P7). Moreover, organizationally specific language,
such as "night floats" (P5) or "sharps" (P12) may not translate, leaving them concerned that
their new colleagues think of them as "lazy or stupid" when the IMGs are just confused (P5).
Yet most insisted that with time, patience, experience and concentration, they eventually
adapt to linguistic changes.
I am not very comfortable sitting and chatting with them [her supervisors]…but I am
much better than I was before. We have a lot of hierarchy in India [….] but here I see
people are very friendly and very casual (P3)
I’ve heard the more you interact, the more it is easier for you to know how to interact
with the patients. [….] Otherwise if you are just in your community, you cannot
interact with other people. (P3).
Positive communication during their adjustments also seemed important for some. As one
resident said, "I remember my husband used to say, 'Now give yourself six months (to adjust
to the new culture) [….] 'It happens and it happened to me too, you know'" (P11). Such
reinforcing messages, whether from colleagues, supervisors, or relatives, may be critical to
remind IMGs that cultural adaptation is possible.
VI. CONCLUSION
REFERENCES
Babrow, A.S., Hines, S.C., & Kasch, C.R. "Managing Uncertainty in Illness Explanation: An
Application of Problematic Integration Theory. In B. Whaley (Ed.). Explaining
Illness: Research, Theory, and Strategies. (pp. 41-68). Mahwah, NJ: Lawrence
Erlbaum Associates, 2000.
Berg, B. Qualitative Research Methods for the Social Sciences. Boston, MA: Allyn and
Bacon, 2004.
Blumenthall, D. "New Steam from an Old Cauldron: The Physician Supply Debate." The
New England Journal of Medicine, (350)17, 2005, 1810-1818.
Bruijnzeels, M., & Visser, A. "Editorial--Intercultural Doctor-Patient Relational Outcomes:
Need More to be Studied." Patient Education and Counseling, (57), 2005, 151-152.
Davis, C.R. "Helping International Nurses Adjust." Nursing, (33), 6, 2003, 86-87.
Gudykunst, W.B. Bridging Differences: Effective Intergroup Communication.
Thousand Oaks: Sage, 2003.
Johnson, P., Lindsey, A.E., & Zakahi, W.R. "Anglo American, Hispanic American, Chilean,
Mexican and Spanish Perceptions of Competent Communication in Initial
Interaction." Communication Research Reports, (18), 1, 2001, 36-43.
INTEGRATED SOCIAL MARKETING & VISUAL MESSAGES OF BREAST
CANCER INFORMATION TO AFRICAN AMERICAN WOMEN
ABSTRACT
“Images have meanings, and those meanings are not fixed. The ways in which
images are made, used and viewed all have an effect on their meanings” (Nelson, 2005, p.
2). This paper explores the use of visual breast cancer message symbols and how these
are communicating to the African American woman. This research uses the AIDA
marketing model within the social semiotic context to explore how African American
women interpret the visual signs commonly used to promote information and awareness
about breast cancer. To capture the attention of the intended audience, marketers must
create Integrated Social Marketing Communication campaigns that first have visual
meaning to that audience. This paper concludes that the AIDA model is a useful
template for conducting social semiotic analysis to achieve the goal of producing culturally
targeted materials that promote awareness and understanding of health information.
I. INTRODUCTION
Two significant articles have reviewed the development of the pink ribbon campaign
and how its development has led to the destigmatization of breast cancer in the media.
The first article reviewed the use of the pink ribbon campaign, finding that it has created
an aura where “the ribbon, pink, round, feminine and innocent, is an advertisement for
both grassroots and corporate activism and the philanthropist as ideal citizen”(King, 2004,
p. 488). The whole month of October is now given over to publicizing breast cancer and
the ubiquitous pink ribbon is in magazines, on out of home advertising, television
programs, and as noted in Fran Visco speech to the National Breast Cancer Coalition (May
22, 2005), even the U.S. Congress got involved and passed a bill to light up the St. Louis
Arch in pink for breast cancer awareness month.
The second article investigating the pink ribbon campaign is by Milden (2005), who
discusses how far public openness about breast cancer has come, from the late 1970s when
women kept breast cancer a secret, to today when there are races and corporate sponsors
publicizing breast cancer survivor events. She evaluates two Race for the Cure events and
offers opinions---both hers, as a woman who has had breast cancer, and others---regarding
this transformation from a private disease to a public one. Fundamental to this
transformation is the use of the pink ribbon campaign. During her evaluation she notes
that for the most part the events are filled with European-American women with very few
African-American women present.
African American (AA) women have a lower incidence of breast cancer but
significantly higher mortality rates than European American (EA) women. Marbella and
Layde (2001) examined breast cancer data by age from 1981 to 1996, finding that AA
women in all age groups consistently had higher mortality rates
than EA women. Coward concluded that AA women need to develop “the belief that one
has choices and can take action” (2005, p.265) or self-agency, and to do this there needs to
be interpersonal, nontraditional interventions to empower and encourage AA women.
II. SEMIOTICS
Anderson proffers that semiosis “occurs for the physicalist at the moment of neural
response, for the perceptionist at the formation of a perception, for the constructionist at
the moment the sign is ideologically positioned, and for the actionalist as the moment of
becoming in action” (1996, p. 62). He contends that for the physicalist and perceptionist,
semiosis is “strongly dependent on the individual as the acting unit” (p. 62), while the
constructionist and actionalist “see semiosis as a collective process in which the individual
participates but cannot achieve on his or her own” (p. 62). This paper treats semiotics as a
research method; thus, semiosis takes place with the interpreter---the individuals, those in
their interpersonal network, and their environment. As noted by Watts, “Umberto
Eco says people bring different codes to a given message and therefore interpret it in
different ways. Those codes have been assimilated and accepted as a result of social class,
education level, political ideology and historical experience” (2004, p. 387).
In his work he interprets a cover of an annual report to analyze the visuals, which he
contends must “speak to the viewer as loudly as the written word…[because] If the visuals
do not impart meaning at the attention stage the viewer will not even read the text” (p.
393). The process proposed by Watts will be used in this analysis because if the AA
women are not attracted to the visuals, they will not receive the message. However,
instead of the interpreter being this author, the AA women will interpret the meanings they
find. This research will focus in particular on how attention might lead to action---in this
case, reading the material presented.
This semiotic analysis will first analyze the denotative meanings identified by the
respondents for the images presented; these meanings are then explored for the
connotative meanings that come from the culture of the interpretant---the AA women. This
research uses depth interviews to obtain the opinions of the AA women. The process of
depth interviews was chosen because of the sensitivity of the topic and the ability of this
method to probe for meaning. Respondents ranged in age from early 20’s to early 50’s
and were from a sample demographically representative in range of education and income
of AA women from a large Northcoast DMA.
During the process of the depth interviews the AA women were asked to participate in
a “word game” where they were to state the first thing that came to their mind after they
were shown an object, heard a word, viewed a photograph or pamphlet. The respondents
were shown a pink ribbon pin and asked to give the researcher their first impression. The
overwhelming majority of the respondents did not recognize pink as associated with breast
cancer---in fact, none of them did. While the women did think it had something to do with
cancer, they more often thought it had something to do with the AIDS campaign. Typical of
the responses were:
1. “I know some people have pins that’s for cancer or aids or and I’m not sure what
the pink is for. I’m not familiar, that’s like my first impression. And, that’s a beautiful
pin. That’s my second impression”.
2. “When you think about it, just that the ribbon has been copied by so many people
at this point. That everybody has a ribbon in different colors and now everybody has a
band in a different color.”
The women were also shown a series of ten pamphlets that had been produced by
various government and nonprofit medical agencies. The pamphlets all aimed to help
women understand breast health; however, they had different designs and differing amounts
of information, and seemed to the researcher to be targeted to various audiences.
The first four were provided for this research by a county governmental agency. One was
picked up at a physician’s office and the remaining five were obtained from a women’s
clinic that focused on women’s cancer and were specifically given out either individually
or to churches. This report will focus on two of the pamphlets distributed by the local
county governmental agency. These pamphlets are the most similar of the materials
investigated in both design and content, thus reducing potential conflict with reliability
and validity.
The first pamphlet (P1) examined was 5” x 8”, printed on recycled paper, with a green cover depicting a cartoon of a woman doing a breast self-exam in a mirror. The cover typeface was a modified cartoon style, part in smaller black type (which read “What every woman should know”) and part in larger white type (which read “about breast health”). All illustrations in the pamphlet were drawn in thin line art, and it contained more information than the other pamphlets; however, the women were not attracted to this 16-page pamphlet, as typical responses suggest:
1. “…I don’t know if it’s interesting. It almost looks like it’s a little cartoon so that it
is going to attract people who might not read a lot. It almost look kid like to me…”
2. “I think they are just lines that are kind of colored in but it doesn’t relate to one
thing or another…it’s also the blandness….i think they are more likely to be thrown out
and not paid attention to. Where as the one that is more colorful have real people or the
drawings that distinguish and look like real people it’s more likely like someone will look
through”.
The second pamphlet (P2) was the same size as the first (5” x 8”) but was in color, printed on slick glossy paper, with a cover photograph of an EA woman viewed from the back but obviously doing a breast self-exam. This is also a 16-page pamphlet, with a mixture of photography and realistically drawn line art, and it is the one pamphlet with the pink ribbon, which appears on the back cover. The cover also has a headline, “Women”, approximately 1.25 inches in height and in a gold color, with “and Breast Health” in smaller white type. There are also six other phrases in smaller type, each in a different color. Additionally, there is an index on the inside front cover, which made one woman respond about P1, when comparing the two, that “even a small booklet could look like it has chapters”. Comparing P1 and P2, another woman said she would pick up P2 if both were lying on a table “because I don’t like the texture of pamphlet #1, I don’t like the papery feeling, and it’s more appealing to the eye to pick up than #1”. The typical response to P2 was:
“I think this is more knowledgeable than the first one you had. It [has] more colors,
more pulling you…Any type of thing that has color, it’s an eye catcher that one [P1] is
more dull, this one is more eye catching it’s telling you more things…I would pick this up
before I would pick up the other one…it’s saying more for 40 and older though. It’s giving
you at least an age group”
[note: there was nothing specifying age on P2, just the photograph of the woman]
As noted by Watt “the visual must speak to the viewer as loudly as the written word.
If the visuals do not impart meaning at the attention stage the viewer will not even read the
text” (2004, p. 393). One respondent eloquently validated this when describing P1 and P2:
“Once it (P2) got my attention then it made me want to know about it. Compared to
the other one, that would be the one I’d read first…The other pamphlet (P1) would have
been the last thing I would have looked at”.
III. CONCLUSION
This paper has used the AIDA marketing model within a social semiotic context to explore how AA women interpret the visual signs commonly used to promote information and awareness about breast cancer. First, the research above discussed the denotative meanings of the image of the pink ribbon. This research contradicted findings reported by Frisby (2002), whose respondents could freely remember the pink ribbon campaigns even though less than 2% of the AA women in her study understood a single risk factor associated with breast cancer. In the present study the respondents did not specifically identify the pink ribbon with breast cancer, associating it more frequently with AIDS. Obviously, further research is required to determine the reasons for these differences. If, as proposed by King (2004) and Milden (2005), the pink ribbon campaign is being confounded by all the other ribbon campaigns, then perhaps different symbols should be investigated.
Next, this paper evaluated the connotative meanings that come from the culture of the respondent. There are layers of meaning within the connotative meanings: at the first level, conception of self (what did they think; were they attracted to it); at the second level, evaluation of the type of pamphlet (was it of interest and was it targeted to them); third, was it credible; fourth, would others read it. In exploring the women’s reactions to the pamphlets, color, photography, and glossy paper attracted their attention. The women found P2 more credible and said they would read it, while P1’s recycled paper, which has a less expensive feel, and its cartoonish line art detracted from the attention the women would have given it. Even after spending time with P1 during the research, the women concluded that this pamphlet had less information than P2 when in fact it had more.
This paper has explored the use of visual breast cancer message symbols and how these communicate to AA women. Further, it has explored the AA women’s semiosis of two pamphlets typically distributed by governmental agencies, finding that visual graphics, use of color, and the finish of the paper determine the attention given the pamphlet, and thus the action that will be taken. This research has shown that the AIDA marketing model is a useful template for conducting a social semiotic analysis of culturally targeted materials, specifically with AA women. To capture the attention of the intended audience, marketers must create Integrated Social Marketing Communication campaigns that first have visual meaning to that audience. This inclusion of the audience in social semiotics has given rise to three important principles that are significant in the communication process: [1] “All people see the world through signs; [2] The meaning of signs is created by people and does not exist separately from them and the life of their social/cultural community; [3] Semiotics systems provide people with a variety of resources for making meaning” (Harrison, 2003, p. 48). These three principles are important when choosing a sign to communicate with an audience. Social marketers must understand how the audience perceives the signs and symbols, as well as the text, when developing the communication message. Signs and symbols have different connotations within different ethnic and cultural groups. Messages related to breast cancer are a case in point: Day (2003) questions whether the pink ribbon campaign is reaching the women most at risk from the disease.
REFERENCES
DTCA: HEALTH COMMUNICATION OR CAPITALISTIC PERSUASION
ABSTRACT
I. INTRODUCTION
Direct-to-consumer advertising (DTCA) has faced many opponents as well as supporters since it first became a large industry, with billions of dollars spent each year since the early 1990s to produce it. In recent years prescription drug manufacturers have been changing how prescription drugs are marketed through DTCA. Before the 1980s, prescription drugs were marketed only toward physicians and other healthcare providers. Since then, a firestorm has raged over the intention of DTCA. According to research in the U.S., DTCA has a major impact on public awareness of prescription drugs. The pharmaceutical industry spends more than $2 billion annually to promote consumer desire for products and to increase market share (Rosenthal et al., 2002). According to surveys, over 90 percent of the public report seeing prescription drug advertisements (Frank et al., 2002). Also, an estimated 8.5 million consumers annually have requested and received prescriptions from their physicians in response to DTCA (Heinrich, 2002). Pharmaceutical companies have profited heavily from the use of DTCA (Frank et al., 2002). The aim of this project is to examine the literature on DTCA, focusing specifically on its benefits and costs. The stakeholders surrounding this issue will also be examined, and future directions and limitations will be discussed.
These large numbers have sparked investigation into the advantages and disadvantages of the practice and their effects on the stakeholders involved.
It has also been suggested that DTCA facilitates communication between physician and patient about embarrassing or stigmatized topics such as sexually transmitted disease or erectile dysfunction (Lewis, 2003). Researchers also maintain that DTCA provides consumers with information about treatment options and other important health information that they would otherwise not receive (Alperstein and Peyrot, 1993). For example, DTCA increases awareness and treatment of diseases such as diabetes and hypertension, disseminating important information that many people would not receive otherwise (Holmer, 1999). Researchers also claim that DTCA gives consumers information about alternatives to current treatments that may be ineffective or problematic (Gold, 2003). It is argued that, without direct-to-consumer advertising, people may not be aware that a medication exists for their condition or that a disease is treatable.
Third, supporters of DTCA believe that although a great deal of money is spent on the advertising of pharmaceuticals, it actually lowers healthcare costs (Holmer, 2002). Holmer (2002) argues that since DTCA facilitates physician-patient communication, more tests will be requested and performed. These early detection measures will ultimately decrease healthcare costs for the individual, because the earlier a disease is diagnosed, the less invasive and less expensive its treatment will be (2002). DTCA may also decrease insurance co-pays without decreasing the quality of care one receives (Schommer et al., 2004). For instance, if DTCA prompts early detection, physicians will be able to treat more patients successfully, which benefits the patient as well: early detection is known to increase the likelihood of survival or recovery once a condition is found (Holmer, 2002). This could be an important tool for patients as well as those who treat them.
IV. DISADVANTAGES OF DTCA
Third, Gilbody et al. (2005) state that DTCA is often misleading and that many advertisements spend little time discussing harmful side effects. Researchers are concerned that consumers are subjected to the proverbial hard sell from drug companies (Watson, 2002). In other words, people are drawn in by the companies’ advertising techniques and away from the possible problems with the product. DTCA may also mislead consumers into believing that they are suffering from a condition that can only be treated by these pharmaceuticals, even if they are asymptomatic (Gold, 2003). Fourth, DTCA increases the cost of doctor and hospital visits because of requests for unnecessary tests and drugs (Woloshin et al., 2001). Also, doctors report that they prescribe the most popular drugs to their patients (Gold, 2003) and feel pressure to prescribe specific brands (Lewis, 2003). This too can drive up costs and cause many medical problems. Opponents state that DTCA actually increases the cost of the doctor’s visit because of patients’ requests for unnecessary tests (Woloshin et al., 2001; Lewis, 2003). They also state that DTCA benefits only the pharmaceutical companies and has no value to the health consumer (Lewis, 2003).
V. DISCUSSION
First, this discussion focuses on what the literature reveals about these effects, including some important areas of research. The next section discusses the limitations of the literature reviewed as well as methodological challenges surrounding DTCA. The discussion concludes with future directions for researchers in the field. Through analysis of the research, certain conclusions can be drawn. First, current research in this field is divided among the experts: there is not enough conclusive evidence one way or the other regarding the benefits and harms of DTCA. Next, it is important to study both the short- and long-term effects of DTCA, as both matter to healthcare and the health of the public. The analysis also surfaced many areas of interest that DTCA researchers should address. Research regarding DTCA is divided among the experts and researchers interested in the field. As shown earlier, many researchers believe the opposite of others regarding DTCA (Frank et al., 2002; Gold, 2003; Hansen et al., 2002). Researchers are on the fence regarding important issues, including DTC advertisements’ effects on the doctor-patient relationship and communication (Gilbody et al., 2005; Gold, 2003; Murray et al., 2004) as well as their effects on the overall costs of healthcare (Lewis, 2003).
Both the short- and long-term effects of DTCA are important to future research on this topic. Short-term effects are especially important at this time because of the newness of the practice: DTCA has only been around since 1997 (Schommer et al., 2004), and its importance is likely to continue to grow. Studying short-term effects could help us monitor DTCA and make predictions about the long-term effects of the advertisements on important issues such as doctor-patient communication and overall healthcare costs. It is also important to study the short-term effects of exposure on health communication as a whole, especially between the patient and healthcare provider. This research matters because of the amount of money involved in the practice of targeting prescription drugs to consumers. As stated earlier, expenditures for 2003 rose to $3.27 billion in the United States (Schommer et al., 2004). This is a large and very lucrative industry; persuasive findings could change the face of the practice forever.
There are also certain limitations of the research that must be addressed, along with problems in the methodology of DTCA studies. Almost all of the literature examined here was based on data collected from self-report surveys targeted at either healthcare providers or healthcare consumers (Bell et al., 1999; Gold, 2003; Kopp and Sheffet, 1997; Lewis, 2003; Watson, 2002; Weissman et al., 2004). Self-report surveys are very helpful for eliciting the opinions and perceptions of participants; however, they can prove problematic, as they are not the most valid or reliable way to collect data. Subjects are sometimes dishonest on surveys for a number of reasons: out of fear of not fitting in with the rest of the participants; because they try to guess what the researcher is looking for and answer according to what they think the researcher wants to find; to deliberately fool the researcher; or simply because they rush through the survey in order to finish. For these reasons, self-report surveys can be problematic for researchers.
In most DTCA research, the sample is not fully representative of the population. The populations in the studies that make up this literature review were mostly Caucasian (Gilbody et al., 2005; Weissman et al., 2004); other ethnicities are not proportionately represented among survey respondents. Looking mostly at Caucasian populations gives a skewed picture of patients’ perceptions of DTCA, because not all patients are Caucasian; likewise, not all physicians are Caucasian. It is important to see how participants from other ethnic backgrounds view DTCA as well. Moreover, most DTCA itself is not diverse: most advertisements are tailored to a Caucasian audience, although recently a number of drugs have begun advertising to other ethnic groups. Diflucan and Valtrex are among the top drugs that are beginning to add members of other ethnic groups to their advertisements, and male enhancement drugs as well as birth control are adding diversity to their ads.
It is important to realize that DTCA never acts alone. Growing research shows that other factors are involved in DTCA’s effects on the doctor-patient relationship, on prescribing habits and use, and on increases in healthcare costs (Lewis, 2003; Holmer, 2003; Schommer, 2004). For instance, the patient needs to be proactive about his or her healthcare, including information-seeking, and needs to have insurance or a regular doctor. Controlling for these variables in DTCA research is important; however research is conducted, it is important at least to keep these variables in mind. Another important variable to consider is the advertised drug itself. Characteristics of the drug worth focusing on include what the drug treats, seasonal variations in the drug, what kind of illness it treats, and for how long (Hansen et al., 2005). Other important aspects include exposure to the ad, processing and communication effects, target audience action, sales and market share, and the profit being made from the drug (2005).
VI. CONCLUSION
Through the analysis of the literature, some future directions for research can be drawn. First, when conducting DTCA research, self-report can be helpful; however, supplementing it with other methods could prove fruitful. An experimental element is useful when studying the effects of DTCA and may improve the validity of the self-report survey. For example, pairing self-report with observation of doctor-patient communication could tell the researcher more than the self-report survey alone, helping to increase reliability and validity. Proponents argue that the benefits of DTCA far outweigh what the opposition sees as costs. DTCA, which is currently allowed only in the U.S. and New Zealand, is spreading, and many other countries are considering allowing the pharmaceutical industry to begin DTCA (Lewis, 2003). Lobbyists in Canada have also made a case for the implementation of DTCA, and the country may soon allow companies to begin the practice. The imminent spread of DTCA is a further reason for more research. DTCA plays a major role in healthcare today and should be addressed. No matter the outcome, billions of dollars and many lives are at stake in this debate, and it is the duty of the researcher to search for what is in the public’s best interest. With the increase in healthcare costs and the increase in prescription use, DTCA has become a larger force in the fight for the health and well-being of the public.
REFERENCES
Bell, R. A., Kravitz, R. L., and Wilkes, M. S. “Direct-to-consumer prescription drug advertising, 1989-1998.” Journal of Family Practice, 49, 1995, 329-335.
Frank, R. G., Berndt, E. R., Donohue, J. M., Epstein, A. M., and Rosenthal, M. B. Trends in Direct-to-Consumer Advertising of Prescription Drugs. Menlo Park: Kaiser Family Foundation, 2002.
Gold, J. L. “Paternalistic or Protective? Freedom of Expression and Direct-to-Consumer Drug Advertising Policy in Canada.” Health Law Review, 11(2), 2003, 30-38.
Hansen, R. and Droege, M. “Methodological challenges surrounding direct-to-consumer advertising research: the measurement conundrum.” Research in Social and Administrative Pharmacy, 1, 2005, 331-347.
Health People 2010. (2005). Objectives. Accessed on September 30, 2005. Available at: http://www.healthpeople.gov/Document/HTML/volume1/11HealthCom.htm.
INTEGRATIVE THEORY & COLLECTIVE EFFICACY: PREDICTORS OF
INTENT TO PARTICIPATE IN A NONVIOLENCE CAMPAIGN
ABSTRACT
This study explored crucial predictors within the integrative theory and applied them
to an actual communication campaign for preventing violence on campus. By exploring an
additional predictor of behavior change, collective-efficacy, this study examined how the
predictor leads to individuals’ intentions to perform a particular behavior. Results showed that college students’ attitude and perceived norms significantly predicted intentions to attend the campaign programs, with attitude having the more salient influence. However, self-efficacy did not significantly predict intentions. Perceived collective-efficacy failed to add significant unique variance in intentions, even though by itself it predicted intentions.
The integrated theoretical model (Fishbein, 2000; Fishbein et al., 2002) explains how
a given behavior occurs. With necessary skills and no environmental constraints, an
individual’s strong intention to perform any given behavior is most likely to result in the
behavioral outcome. In turn, strong intention is predicted by the three primary variables (i.e.,
attitude, perceived norm, and self-efficacy). Furthermore, each of these three variables is
generated by three corresponding beliefs: behavioral, normative, and efficacy beliefs.
Emphasis in this model is placed on developing appropriate types of prevention programs or campaigns based on the strength of each variable. If health-related attitudinal variables are most
salient in individual perception, for instance, campaigners must center on attitude-related
messages. They should attempt to augment individual intention, which is a significantly
powerful predictor of a behavior, by selecting appropriate messages based on this theoretical
background. This study proposes that collective efficacy should play a role in predicting a
behavior in a group needing collective actions and voices. Collective efficacy, defined by
Bandura (1995), means “people’s beliefs in their joint capabilities to forge divergent self-
interests into a shared agenda, to enlist supporters and resources for collective action, to
devise effective strategies and to execute them successfully, and to withstand forcible
opposition and discouraging setback” (p. 33). Bandura (1995) has highlighted the increasing
importance of collective efficacy in our society, where individuals are more mutually
dependent on one another than ever.
The strength of collective efficacy lies in its ability to predict the behavior change of
an individual who perceives a greater likelihood of “sharing” efforts with others, in addition
to individual effort. Perceived collective efficacy may be better exerted among each member
within their own social network group. Campo et al. (2003) argued that college students are
affected more by their own social network members, rather than “perceiving the typical
college student as being within their social network” (p. 486). This study addresses the
following research questions and hypothesis:
RQ1: To what degree do the three major determinants (attitude, perceived norm, and self-
efficacy) predict individual intentions to attend campaign programs for nonviolence
on campus?
H1: College students’ perceived collective-efficacy will positively predict their intention
to attend campaign programs for nonviolence on campus.
RQ2: If collective-efficacy positively predicts the intention, to what degree does collective-efficacy increase its unique variance in intentions beyond the three major determinants?
II. METHODS
Participants and campaign programs for nonviolence on campus. Participants in this study were 198 students: 135 enrolled in one general education class and 63 enrolled in four mass communication research laboratory sections at a Midwestern public university. This study used the university’s actual campaign for nonviolence because its goal is to reduce such problems as discrimination, harassment, violence, and other abuses of power on campus. Research participants were shown lists of the campaign events and responded to questions based on the following measures.
Measurement. The questionnaire was designed to measure three variables grounded on the
integrated theory as well as perceived collective-efficacy. This study proposed predictors and
dependent variables based on Ajzen’s article (2002).
Exposure to campaigns for nonviolence on campus. This variable was measured with one open-ended question asking how many times respondents had attended the campus campaign programs during their college life. The programs were listed in the questionnaire to help respondents recognize them.
Perceived susceptibility. Perceived susceptibility was measured with one item using a 1 to 7 Likert scale: “How likely do I perceive myself to be a victim of discrimination, harassment, violence or other abuses of power on our campus?”
Attitude. Attitude was measured with three items using a 1 to 7 semantic-differential scale.
Cronbach’s coefficient alpha was .875. The mean score for the three item scale was 4.190,
SD = 1.086.
Perceived norm. Respondents were asked to use a 7-point Likert scale to report their opinions
about whether 1) most people who are important to me; 2) people whose opinions I value;
and 3) most of the students in the college would value attendance at the programs of the
campaign for nonviolence. Reliability for this scale was low (α = .425). The mean score
for the three item scale was 3.119, SD = 1.076.
Self-efficacy. Self-efficacy was measured by asking respondents to use a 7-point Likert scale
to report whether attending the programs was easy, up to them, and possible. Reliability for
this scale was also low (α = .485). The mean score for the three item scale was 4.119, SD =
1.076.
Collective-efficacy. A collective-efficacy scale for intention to attend the campaign programs was created using three items with a 1 to 7 Likert scale. The three items asked whether the respondent, as a member of groups on campus, could attend the events in the future; could create a positive group environment through attendance; and would attend because, as a group, the members stick together. Cronbach’s coefficient alpha was .833. The mean score for the three-item scale was 4.315, SD = .968.
Intention to attend the campaign. Intention was assessed by three items asking respondents to
use a 7-point Likert scale to report whether attending the programs was expected of them,
whether they will make an effort to attend, and whether they will definitely attend.
Cronbach’s coefficient alpha was .768. The mean score for the three item scale was 2.719,
SD = 1.267.
Demographic items. Respondents were asked to provide their sex, college level, age, and
ethnicity.
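The scale reliabilities reported above follow the standard Cronbach's coefficient alpha formula. As a minimal illustrative sketch, assuming hypothetical 7-point Likert responses (the study's raw item-level data are not reproduced here), alpha for a three-item scale can be computed as:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses to a three-item, 7-point Likert scale
# (illustrative only; not the study's data).
responses = np.array([
    [4, 5, 4],
    [2, 3, 2],
    [6, 6, 7],
    [3, 3, 4],
    [5, 4, 5],
], dtype=float)

alpha = cronbach_alpha(responses)  # items move together, so alpha is high
```

Alphas in the .768 to .875 range, as reported for attitude, collective-efficacy, and intention, indicate good internal consistency, whereas values such as .425 (perceived norm) and .485 (self-efficacy) flag weak scales, consistent with the authors' later call for more reliable measures.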
III. RESULTS
Descriptive analyses
Of the 198 respondents, males composed 32.3% (n = 64) and females 67.2% (n = 133); .5% (n = 1) did not indicate sex. The respondents’ average age was 20.21 years (SD = 1.81). Grade levels were distributed among freshmen (25.8%), sophomores (22.2%), juniors (28.8%), seniors (22.2%), and graduate students (.5%); one respondent (.5%) did not indicate grade level. Their ethnicity was mostly White (88.4%). The average frequency of attending the campaign programs for preventing violence on campus during college life was .38 times (SD = .93, min = 0, max = 6). Finally, the mean of the respondents’ perceived susceptibility was 2.68 (SD = 1.63) on a 1 (extremely unlikely) to 7 (extremely likely) Likert scale. The Cronbach’s alpha of perceived norm (from .425 to .453) and of self-efficacy (from .485 to .661) each increased when the second item was deleted. However, this study used the original scales because the improved reliabilities did not significantly affect the p-values of the variables.
Predictors of intentions to attend campaign programs for nonviolence on campus. A multiple regression analysis was conducted to test the three predictors within the
integrated theory, excluding the actual behavior variable. The first multiple regression analysis (Table 1) showed that attitude (β = .403, p < .001) and perceived norm (β = .355, p < .001) were significantly associated with intentions, with attitude having the greater influence. Self-efficacy, however, was not a significant predictor of intentions. These variables accounted for 40.3% of the total variance in intentions.
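A regression of the form reported in Table 1 (standardized betas with an overall R²) can be sketched as follows. This is an illustrative sketch using simulated stand-in data, not the study's data set; the generating coefficients, means, and variable names are assumptions chosen only to mirror the pattern of results described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 198  # same sample size as the study

# Simulated stand-ins for the three predictors (means/SDs echo the
# reported scales; these are NOT the study's data).
attitude = rng.normal(4.19, 1.09, n)
norm = rng.normal(3.12, 1.08, n)
self_eff = rng.normal(4.12, 1.08, n)
intention = 0.40 * attitude + 0.35 * norm + rng.normal(0, 1, n)

def standardize(x: np.ndarray) -> np.ndarray:
    return (x - x.mean()) / x.std(ddof=1)

# OLS on standardized variables yields standardized betas directly
# (no intercept is needed once every variable is mean-centered).
X = np.column_stack([standardize(v) for v in (attitude, norm, self_eff)])
y = standardize(intention)
betas, *_ = np.linalg.lstsq(X, y, rcond=None)

# R^2 is the squared correlation between fitted and observed scores
r_squared = np.corrcoef(X @ betas, y)[0, 1] ** 2
```

A hierarchical version of this regression, entering collective-efficacy in a second step, is how its unique variance (the ΔR² discussed in RQ2 and the conclusion) would be tested.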
IV. CONCLUSION
The findings of this study suggest the importance of the attitude and perceived norm variables for designing campaigns to prevent violence on campus. Because college students’ intentions to attend the campaigns were under attitudinal and normative control, campaign designers should reinforce, modify, or change target audiences’ attitudes and norms with strategic messages. The integrated theory posits that the attitudinal variable is predicted by behavioral beliefs or outcome evaluations, so campaign messages need to include positive outcomes of action. If college students believe their participation in campaign programs will produce actual benefits, they will have a better attitude toward the campaign. Normative messages surrounding violence prevention should also be addressed in campaigns, since perceived norm plays a crucial role in an individual’s behavior change. When college students perceive that their significant others (e.g., family or friends) expect them to be involved in violence prevention campaigns, they will be more likely to intend to do so; if students also observe their significant others experiencing violence, they will be more likely to take an interest in campaign programs. Campaigners need to target not only each college student but also his or her influencers by using normative messages.
This study empirically revealed that perceived collective-efficacy did not significantly add unique variance in intentions to attend campaigns. This result may stem from the correlations between collective-efficacy and the other variables. However, collective-efficacy may still operate in individuals’ intentions, given the significant positive association between collective-efficacy and intentions. A college’s collective sense of efficacy can form a positive atmosphere in which to share and resolve problems related to its interests. A further implication is that campaigners working on college community issues can use strategic messages that invoke collective actions and voices.
Future studies should examine this model with other data sets and more reliable scales. For example, the reliabilities of the measured constructs need to be stronger, and it would be beneficial if actual behavior were measured. Participants should also be randomly selected from all college members, including faculty, staff, and students, to examine how likely college members are to intend to participate in campaigns for preventing violence on campus. Moreover, collective-efficacy may be further tested in groups of interest with collective actions and voices; it still needs to be examined in individuals or communities who perceive a greater likelihood of sharing efforts with others.
REFERENCES
Ajzen, Icek. “The theory of planned behavior.” Organizational Behavior and Human
Decision Processes, 50, 1991, 179-211.
Ajzen, Icek. “Constructing a TPB questionnaire: Conceptual and methodological considerations.” 2002. Retrieved February 1, 2005, from http://www.people.umass.edu/ajzen/pdf/tpb.measurement.pdf
Bandura, Albert. “Self-efficacy: Toward a unifying theory of behavioral change.”
Psychological Review, 84, 1977, 191-215.
Bandura, Albert. “Self-efficacy mechanism in physiological activation and health-promoting
behavior.” In J. Madden IV, ed., Neurobiology of Learning, Emotion and Affect. New
York: Raven, 1991, 229-269.
Bandura, Albert. “Exercise of personal and collective-efficacy.” In A. Bandura, ed., Self-
efficacy in Changing Societies. New York: Cambridge University Press, 1995, 1-45.
Campo, Shelly, Brossard, Dominique, Frazer, M. S., Marchell, Timothy, Lewis, Deborah,
and Talbot, Janis. “Are social norms campaigns really magic bullets? Assessing the
effects of students’ misperceptions on drinking behavior.” Health Communication,
15(4), 2003, 481-497.
Fishbein, Martin. “The role of theory in HIV prevention.” AIDS Care, 12(3), 2000, 273-278.
CHAPTER 13
JUSTICE OR EFFICIENCY: ABOUT ECONOMIC ANALYSIS OF LAW
ABSTRACT
For a long time, economics has been accepted as the most powerful non-legal tool for
analyzing a wide variety of legal rules. It has been claimed that legal rules must be designed
to maximize economic efficiency, roughly speaking, to make the economic pie as large as
possible. This study, however, argues that legal rules should take into consideration not only
economic principles but also the political, historical, ideological, ethical, and cultural
foundations of the society.
I. INTRODUCTION
It has been widely accepted that economics is among the non-legal factors that
influence the law the most. For a long time, it was believed that the law had a guiding role in
economic relations; the relationship between law and economics was used to describe legal
rules and to explain the behavior of explicit economic markets. Starting in the 1960s,
however, that relationship took a very different shape. Until about 1960, economic analysis
of the law consisted largely of the economic analysis of antitrust laws. The records in
antitrust cases provided rich information about business practices, which helped economists
discover the economic implications of such practices. Although these discoveries carried
implications for the legal system, they were merely attempts to explain the behavior of
explicit economic markets (Posner, 1998, pp. 25-28). Basically, the traditional analysis of
law and economics appraised the effects of legal rules on the economic system. Since the
1960s, the application of economics has extended beyond antitrust law to other fields of law:
to the common law; to family law; to civil, criminal, and administrative procedure; and so
on. The contributions of Coase (1960), Calabresi (1970), and Posner (1972) carried economic
analysis into areas of law not limited to the explanation of economic market behavior. The
new (modern) law and economics approach focused mainly on an “efficiency concept”
initiated, among others, by Posner (1972). According to the efficiency approach, legal rules
either are, or ought to be, designed to maximize economic efficiency, which means
maximization of social willingness-to-pay. In this approach, legal rules are treated as
efficiency-generating instruments. The purpose of this study is threefold. First, it attempts to
enrich understanding of the Economic Analysis of Law in the context of allocative
efficiency. Second, it offers a critical view of the efficiency-oriented approach from a legal
perspective. Finally, it offers some general conclusions regarding the application of law and
economics analysis to legal and social practices, and considers to what extent this approach
has implications for the European legal system.
II. HISTORY
The modern approach to law and economics first appeared with Ronald Coase’s
article on social cost and Guido Calabresi’s articles on torts and liability. Over time, the
approach was carried into a broader context, applying economic analysis to other fields of
law; tort law in particular has been greatly affected by economic models. Through Posner’s
influence, the modern approach came to be evaluated as a general theory of law as well as a
conceptual tool for the improvement of legal practice. Coase’s theorem offered a framework
for analyzing the effect of the initial assignment of property rights on allocative efficiency.
According to the Coase theorem, as long as transaction costs are zero or negligible and
property rights are well defined, parties to a dispute achieve the same efficient outcome
(allocative efficiency) regardless of the initial assignment of legal entitlements. In short,
Coase argued that, regardless of what the law says about who is liable, economic efficiency
dictates the outcome that provides the maximum value to the parties involved in the
negotiations. However, as Coase noted, in the presence of positive transaction costs the
initial assignment of legal entitlements (property rights) does affect allocative efficiency, and
the law directly affects economic activity: "It would therefore seem desirable that the courts
should understand the economic consequences of their decisions and should, insofar as this
is possible without creating too much uncertainty about the legal position itself, take these
consequences into account when making their decision". Therefore, legal issues, as Coase
argued, could primarily be solved by choosing social arrangements so as to minimize social
costs, roughly defined as transaction costs plus allocative inefficiency. Even though Coase
recognized the existence of positive transaction costs, his analysis was based mainly on the
assumption of negligible transaction costs, an assumption that has been extensively criticized
in the law and economics literature.
One of the important studies in the development of law and economics came from
Calabresi (1961), who helped establish the foundations of the approach. As mentioned
above, the framework established by Coase claimed that allocative efficiency was invariant
to the initial assignment of property rights when transaction costs were zero. As Calabresi’s
study acknowledged, transaction costs are in reality not zero, which has serious implications
for reaching the efficient outcome (allocative efficiency). According to Calabresi, allocative
efficiency is obtained by assigning responsibility to the party that can avoid similar future
injuries at the least cost. Under the rule of the "cheapest cost avoider", the outcome is
efficient if the "duty of care" is set at a level at which social costs are minimized (Diamond,
1975). To better understand the implication of this approach for the legal system, consider a
real-life example: a passenger hired a taxi cab and was injured by someone throwing a stone.
That person was not caught, and the passenger sued the taxi driver for compensation. The
question we pose is: is it more efficient to hold the taxi driver liable for the injury?
According to the modern approach of law and economics, regardless of the initial allocation
of rights, the taxi driver should be liable, since the taxi driver is in a much better position
than the passenger to purchase insurance against unforeseeable accidents. Applying this
efficiency rule therefore minimizes social costs, and society as a whole saves resources (see
Stringham, 2001, p. 44 for more examples and detailed discussion).
According to the modern approach of law and economics, legal norms and decisions
must be evaluated in terms of economic efficiency. The main objective of law, on this view,
is the achievement of efficiency. The basic assertion is this: the rules regarding the allocation
of resources that every society uses in its activities are determined by the market economy.
Since market prices, determined by market forces, shape individual behavior and
preferences, the task of protecting the ‘effectiveness’ of these market rules has been left to
the legal system. As long as the price mechanism is functioning, however, the lawmaker’s
intervention in the formation of prices cannot be seen as justified in terms of effectiveness.
The lawmaker must only fulfill its duty of providing the institutional conditions that ensure
economically efficient exchange relations. On this view, the legal system can have no worth
apart from economics and the needs of economics, and the most important tool available to
guide the jurist is therefore ‘economic analysis.’
On this approach, the legal system should attempt to allocate resources in either a
Pareto-optimal or a Kaldor-Hicks efficient manner (the latter also called ‘weak Pareto
optimality’). Pareto optimality is a measure of efficiency. One social situation is said to be
Pareto superior to another if it makes at least one person better off without making anyone
else worse off. The Pareto principle is thus used to legitimize legal rules: if a legal rule
leaves at least one person better off and no one worse off, the rule should be implemented;
otherwise, it should be abandoned. If an outcome is not Pareto efficient, then some individual
can be made better off without anyone being made worse off. It is commonly accepted that
such inefficient outcomes are to be avoided, and so Pareto efficiency is an important criterion
for legal and economic systems in evaluating outcomes. Pareto optimality has been criticized
for two main reasons. First, the result produced by the Pareto principle depends entirely on
the choice of the initial allocation, so different results can be reached depending on that
initial allocation. Second, the criterion allows only ordinal evaluation of preferences and
provides no mechanism to induce decision makers to reveal cardinal preferences such as
satisfaction or happiness. Given the level of criticism the Pareto criterion has received, a less
restrictive criterion, Kaldor-Hicks, was adopted to evaluate legal rules. According to the
Kaldor-Hicks criterion, one outcome is superior to another if those who are made better off
(gainers) could theoretically compensate all those who are made worse off (losers) for their
losses and still be better off themselves. Under the Kaldor-Hicks criterion, whether a change
counts as an improvement does not depend on whether compensation actually takes place.
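The contrast between the two criteria reduces to simple arithmetic over individuals' monetary gains and losses. The following sketch (with hypothetical payoff numbers) illustrates the two tests:

```python
def is_pareto_improvement(changes):
    """Pareto: no one is made worse off, and at least one person gains."""
    return all(c >= 0 for c in changes) and any(c > 0 for c in changes)

def is_kaldor_hicks_improvement(changes):
    """Kaldor-Hicks: total gains exceed total losses, so the gainers could
    hypothetically compensate the losers, whether or not they actually do."""
    return sum(changes) > 0

# Hypothetical monetary effects of a rule change on three people:
changes = [100, -40, 0]
print(is_pareto_improvement(changes))        # False: one person loses
print(is_kaldor_hicks_improvement(changes))  # True: net gain of 60
```

Every Pareto improvement is also a Kaldor-Hicks improvement, but not conversely, which is why Kaldor-Hicks is described as the less restrictive criterion.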
The economic analysis of law sets out to make laws and judicial decisions more
effective, and to that end it posits goals such as the economic efficiency of legal decisions
and arrangements. This is one of the objectives of laws and judicial decisions in every
society. To reach this objective, it is of prime importance to know the individual or national
economic benefits of the resolutions brought in by laws and judicial decisions. However, the
cost of obtaining this knowledge may be prohibitive for individuals as well as governments.
For example, it is reported that the Federal Republic of Germany allocated 78 million Marks
(DM) in 1985 to determine the relationship between forest losses and damages and air
pollution. This and similar examples demonstrate that assessing the efficiency of existing
and future legal rules or decisions is difficult, time-consuming, and expensive (Kübler, 1990,
p. 695).
hypothesis that everybody can be aware of the results of their behavior. This approach,
however, loses sight of the fact that most behaviors cannot be entirely purified of outside
factors and the effects of third persons. Additionally, it displays a rigidly conservative
attitude in accepting a change in a given social situation as efficient only when the situation
becomes worse for nobody: anything new (or different) would be deemed unacceptable if
even one person falls into a worse situation than before. When the whole society and the
effects of changes within it are taken into account, this understanding of economic efficiency
reveals a theoretical defect that cannot be corrected even by applying the concept of justice,
or equity based on a concrete case (aequitas), to legal disputes. Therefore Pareto optimality,
favoring the status quo and based on rigid individualism, is a criterion that can produce
unjust results from the point of view of the law (Behrens, 1994, p. 42; Canaris, 1993, pp.
388-391). Under the Kaldor-Hicks criterion, by contrast, compensation is not actually paid
as it is in the Pareto case; the compensation of losers is only hypothetical and need not
actually take place. Under Kaldor-Hicks, an outcome is efficient if the gainers could
compensate the losers, whether or not they actually do so. Put simply, the Kaldor-Hicks
criterion requires a comparison of the monetary gains of one group against the monetary
losses of the other; as long as the gainers’ gains exceed the losers’ losses as measured in
dollars, the move is deemed efficient. For example, if I am willing to pay a million dollars to
have women drivers outlawed, and the willingness of women drivers to pay in order to be
allowed to drive is less than a million dollars, the efficient outcome under Kaldor-Hicks
would be the prohibition of women from the traffic system. Therefore, since the criterion
simply selects the option that maximizes the sum of total payoffs (wealth), using the
Kaldor-Hicks criterion to evaluate legal rules would be completely wrong.
That a “human being is a social being” is a valid statement; if that is the case, a life
that begins in a community may be described with differing characteristics and qualities
depending on the age and the community, such as the social human being (homo societas),
the urban human being (homo urbanus), the human being with weaknesses (homo faber), and
the rational human being (homo rationalis). If the urban human being is considered as a
thinking, feeling, behaving, in short a living human being, a reality appears. These
typifications can of course be debated, and the word “urban,” for example, requires a
cautious approach; yet such characteristics are almost always used for typifying, for finding
collective similarities, and thus for distinguishing some human beings from others. With
such characteristics and qualities, a human being is primarily and expressly a living,
contemporary being, not an abstract type, and does not always fall in line with every abstract
typecasting (Binswanger, 1961). In this context, what is essential for a human being is not
possessing (even if only in appearance) and thus taking money or monetary values as the
measure of everything; what is essential, before everything else, is “coming into being,”
“existence,” and furthermore “coexistence.” If that is the case, then in the final analysis the
“economic” in homo oeconomicus is condemned (and should be condemned) to remain a
prototype, and it does not seem acceptable to treat this quality as one that determines all of a
human being’s characteristics, rather than as a quality human beings acquire only in certain
relationships they enter into.
V. CONCLUSION
In conclusion, the relationship between law and economics should always be taken
into consideration. This relationship, however, is not one in which either side dominates or
suppresses the other; it is better described as a model of mutual influence. One may precede
or influence the other at a given place and time, while at a different place, time, and
circumstance the contrary may hold. Economics has made a substantial contribution to our
understanding of the law, but the law has also contributed to our understanding of economics.
Courts routinely deal with the reality of such economic abstractions as property and contracts.
The study of law thus gives economists an opportunity to improve their understanding of
some of the concepts underlying economic theory.
REFERENCES
Behrens, P.: “Utilitarische Ethik und ökonomische Analyse des Rechts”, Die ethischen
Grundlagen des Privatrechts, Wien/New York 1994, pp. 35-51.
Binswanger, H.: “Über den homo actualis”, Die Rechtsordnung im technische Zeitalter,
Festschrift der Rechts- und Staatswissenschaftlichen Fakultät der Universität Zürich
zum Zentenarium des Schweizerischen Juristenverein (1861-1961), Zürich 1961, pp.
167-192.
Calabresi, G.: “Some Thoughts on Risk Distribution and the Law of Torts”, Yale Law
Journal, 1961, Vol. 70, pp. 499-553.
Canaris, C. W.: “Funktion, Struktur und Falsifikation juristischer Theorien”, Juristen Zeitung,
1993, pp. 377-391.
Coase, R. H.: “The Problem of Social Cost”, Journal of Law & Economics, 1960, Vol. 3,
pp. 1-44.
Diamond, P.: “On the Assignment of Liability: The Uniform Case”, The Bell Journal of
Economics, 1975, Vol. 6, No. 2, pp. 487-516.
Fezer, K. H.: “Aspekte einer Rechtskritik an der economic analysis of law und am property
rights approach”, Juristen Zeitung, 1986, pp. 817-824.
Forstmoser, P./Schluep, W.: Einführung in das Recht. Einführung in die Rechtwissenschaft,
Bd. I, 2. Aufl., Bern 1998.
Horn, N.:“Zur ökonomischen Rationalität des Privatrechts.-Die privatrechtstheoretische
Verwertbarkeit der «Economic Analysis of Law»”, Archiv für civilistische Praxis,
1976, 307-333.
Kuçuradi, İ.: “Etik İlkeler ve Hukuk”, Hukuk Felsefesi ve Sosyolojisi Arkivi, 2003, C. 8,
pp. 5-11.
WHAT DOES IT TAKE TO SUCCEED AS A HUMAN RESOURCES
PROFESSIONAL? A REVIEW OF U.S. HR PROGRAMS
ABSTRACT
The purpose of this paper is to examine U.S. university programs in human resource
management, with the goal of evaluating the consistency of program offerings. The program
review revealed that inconsistency rather than consistency is the rule in such programs.
Suggestions for enhancing the degree to which university programs contribute to successful
HR careers are offered.
I. INTRODUCTION
In 1990, Schuler pointed out that changes in the business environment require HR
departments to move beyond their traditional support roles to become members of the
management team. Schuler contends that to gain credibility as part of the team, HR
professionals must see human resource issues as business issues and help line managers solve
problems through effective HR management (Schuler, 1990). Since the publication of
Schuler's article, a number of conceptual and empirical studies have been produced, echoing
his concerns and attempting to describe the need for change and the skills required of HR
professionals in their new role as team members. A detailed review of this literature will
soon be published in the Journal of Business and Leadership (Sincoff and Owen, in press).
Although Heneman (1999) argues persuasively for a need to focus on the "supply side
characteristics of effective human resource professional" (p. 97), Sincoff and I found in our
review that no consistent set of required knowledge and skills for successful HR practitioners
emerges from this literature. Scholars and practitioners vary in the extent to which they
recommend emphasis on traditional HR competencies (e.g., Brockbank, Ulrich, & Beatty,
1999), a subset of traditional HR competencies (e.g., Van Eynde & Tucker, 1997; Way,
2002), or a mix of HR fundamentals, leadership skills, and business fundamentals (e.g.,
Barber, 1999; Giannantonio & Hurley, 2002; Hansen, 2002). Kaufmann (1996) offers a
number of suggestions for improving HR university education, including augmenting basic
HR coursework with accounting, operations, and finance, an emphasis on communication
skills, and the use of case studies to demonstrate the HR role in managerial decision making.
He also suggests that programs that wish to grow must have an effective marketing program
or demonstrate superior ability to place graduates in good jobs.
The Human Resources Certification Institute (HRCI), affiliated with the Society for
Human Resource Management, has for years conducted research on key areas of professional
knowledge required of HR professionals, and offers two levels of certification (Foreman &
Cohen, 1999). However, certification is not required by most employers hiring HR
practitioners, indicating that the HRCI definition of professional knowledge is not widely
accepted in the HR profession. Furthermore, several authors point to a gap between what is
taught in academic programs and what is desired by business (e.g., Johnson & King, 1999;
Langbert, 2000).
This last point suggested a way to further knowledge in the field about current
perceptions of what knowledge and skills are necessary for effective HR professionals: to
review requirements in university HR educational programs. To ascertain the extent to which
academic programs emphasized the perspectives found in the literature, I reviewed the 2005
content of graduate and undergraduate university HR programs across the U.S., categorizing
the programs in terms of source (liberal arts or business) and emphasis (HR major or
concentration), and compared the required coursework for the programs. What emerges is a
bewildering variety of offerings with no clear reason for the variety.
The first part of the program review focused on establishing the types of programs
offered. Programs were categorized according to whether they are housed in a college of
business (56) or some other college (34), undergraduate programs offered (53), and graduate
programs offered (77). All undergraduate programs are offered in a college of business,
whereas some graduate programs in HR are offered by some other college, such as education
or a school of labor relations.
The second part of the review focused on program content. At the undergraduate
level, only one class is included as a requirement in all programs (i.e., both major and
concentration programs): a survey course on HR. No other course topics are consistently
required of students in HR undergraduate programs. Other courses most frequently required
in the HR major programs are labor relations, compensation, employment law, and staffing,
but the programs vary greatly in the number of required courses versus the number of
electives students can choose to meet the requirements of the major. At one end of the
continuum, HR majors at California State University-Long Beach have one required course,
an HR survey class, and select three additional courses from a list of eight HR functional area
courses, which means that the program content varies from student to student, depending on
the electives chosen. At the other end of the continuum, HR majors at Wright State
University in Ohio have five required courses (HR survey, labor relations, employment law,
motivation and development, and directed research) with one elective chosen by the student
from a list of eight courses (but only one of these courses is an HR functional area course).
Florida State University's HR program represents the middle of the continuum; HR majors
have four required classes (HR survey, staffing, labor relations, and current issues), and
choose four electives from a list of seven HR functional area courses.
For the concentration programs, students are typically allowed to choose three or four
courses from a list that varies in the degree to which the topic reflected HR functional areas
such as recruitment, selection, performance appraisal, and employment law, with the HR
survey course as the only required course. For example, Cleveland State University offers a
four-course HR track, in which the HR survey course is required and students choose three
additional classes from a list of seven HR functional area courses.
At the graduate level, MBA programs with an HR concentration or track reflect a bit
more consistency because of the limited number of HR graduate courses offered. A
concentration typically consists of three or four elective courses. An HR survey course is
required by all programs as one of these, but the other courses reflect a variety of topics including
some HR functional areas (e.g., compensation) as well as topics that are more typical of the
field of organizational behavior (e.g., organizational development, leadership, and
motivation). At Fairleigh Dickinson University, for example, four courses comprise the HR
concentration: employment law, strategic HR management (an HR survey course with a
strategic emphasis), performance appraisal, and managing change.
IV. CONCLUSION
Pick up any human resource management textbook for a survey course in HR, and
you'll find an almost identical list of topics in the table of contents. You might think this
indicates a clearly defined common body of knowledge for the field, but the content of
university HR programs suggests otherwise. The original question was, What does it take to
succeed as a professional in HRM? The inconsistency among the current program offerings
suggests that as a group, universities do not have the answer.
study that is less than 10 years in duration (Barber, 1999). In addition, although the HR field
grew out of labor economics with a strong emphasis on labor relations (i.e., a liberal arts
perspective), it has been gradually absorbed into the field of business, where it tends to be
regarded more as a pragmatic concern like accounting, but without the respect that
accounting receives for its contributions to the bottom line. Perhaps the resistance to creating
a common body of knowledge and requiring a certification in the field to practice HR
contributes to that lack of respect. It is worth noting that not one of the academic programs
reviewed requires a course in HR for students who are not majoring or concentrating in HR, a
state of affairs that does nothing to support the idea of HR professionals as members of the
management team rather than as support personnel.
An important issue to address in any program is strategic HR. As Schuler (1990)
states, HR professionals must be prepared to demonstrate how HR activities make line
managers more effective. Instead, in most companies HR is seen as a stumbling block, a
series of hoops to jump through, a department that makes its information hard to access rather
than as a central contributor to organizational success.
HR can be successful as a partner at the strategic table only if the value of its
contributions is recognized and respected. Value will exist only if those contributions can be
defined in terms specific to corporate performance. Until the field of HR comes together to
define a common body of knowledge and skills appropriate for HR practitioners, the holy
grail of strategic input will remain elusive, and academic programs will continue to
flounder in a sea of inconsistency.
REFERENCES
MINIMIZING THE NEGATIVE IMPACT OF TELECOMMUTING ON
EMPLOYEES
ABSTRACT
I. INTRODUCTION
Numerous benefits and disadvantages of telecommuting for both the individual and
the organization have been identified (Apgar, 1999; Cascio, 2000; Dimitrova, 2003; Mann
and Holdsworth, 2003). The benefits to the employee include (1) opportunities to remain in
the workforce despite relocating, becoming ill, or taking on family care roles; (2) more time
for home and family; (3) reduced commuting time and expense; (4) greater job autonomy;
(5) fewer disturbances while working; (6) more flexible working hours; and (7) the ability to
work from remote locations. Among the disadvantages to the employee are (1) social
isolation; (2) fewer opportunities for development or promotion; (3) the perception that
telecommuters are not valued by their managers; (4) limited face-to-face communication
with colleagues; (5) the repetitive nature of many work tasks performed by telecommuters;
and (6) lower job security (Di Martino and Wirth, 1990; Dimitrova, 2003; Mann and
Holdsworth, 2003). Many of these
employees feel a sense of separation from their traditional work environment and, to some
extent, their social environment. The isolation from the workplace also leads to career
stagnation since the telecommuter no longer has easy access to formal and informal
information networks (Mann and Holdsworth, 2003).
Among the reasons organizations have chosen to let people work from other locations
are (1) reduced expenses for the organization, including real estate and building expenses;
(2) the possibility of increased productivity; (3) improved customer service; (4) employee
retention; and (5) environmental benefits (Cascio, 2000). Even though the
organization may experience benefits from this work arrangement, managing out-of-sight
employees is new to many managers and some managers have difficulty supervising this type
of employee arrangement.
Since telecommuters may feel a sense of separation from their traditional work
environment, organizations must explore ways to bridge the transition. Managers need to be
prepared to provide the support and linkages to the organization’s formal and informal
informational and social networks that the employees may need to develop and maintain
work relationships. Developing procedures and processes that provide opportunities for
telecommuters to remain connected with formal and informal information networks will help
minimize their sense of separation from these organization-related networks.
social isolation, decreased opportunities for development or promotion, and the perception
that telecommuters are not valued by their managers, limited face-to-face communication
with colleagues, repetitive nature of work tasks, lower job security, and career stagnation.
The following are suggestions that could be used to retain and further integrate the
telecommuter in the culture:
1. Develop plans and processes to ensure formal and informal communication opportunities
occur between various organizational constituencies, including the manager, and the
telecommuter’s peers.
2. Provide opportunities for the telecommuter to have reliable access to job postings and
development opportunities through both formal and informal communication channels.
3. Review the organization’s selection criteria for telecommuting assignments. Employees
who initiate the telecommuting work arrangement to balance work-family demands may be
more willing to accept the isolation as a trade off.
4. Establish a culture that embraces diversity. Baruch (2001) noted that organizational
cultures that embrace diversity do not just cater to the needs of women, minorities, and the
disabled. Instead, as Baruch notes, diversity is the management of different needs and
different modes of work, and telecommuting forms a significant role in enabling effective
management of diversity in both aspects—groups with special needs, as well as people who
can get the best output utilizing a variety of operational models.
5. Because of the repetitive nature of the work of some telecommuters, consider reviewing
the Hackman and Oldham job characteristics model of work motivation (Hackman and
Oldham, 1980), which proposes conditions under which individuals become internally
motivated to perform effectively on their jobs. Changes in the structure of assignments to
provide increased skill variety, task identity, task significance, and feedback could increase
the telecommuter's satisfaction.
6. Develop performance standards and goals that are appropriate for the tasks being
performed by the telecommuter. Establish clear objectives and goals and develop specific
measures of assessment that minimize ambiguity in the minds of remote workers concerning
what is expected of them, how it will be measured, and where they stand at any given point in
time. Managers should communicate to the telecommuting employee all aspects of these
performance standards.
7. Provide telecommuters with opportunities to work from satellite offices. Satellite offices
are sometimes available in suburban locations nearer the homes of employees and offer
employees increased opportunities to develop work relationships and minimize social
isolation, and opportunities to minimize career stagnation, while minimizing the employee’s
commuting time.
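The Hackman and Oldham model referenced in suggestion 5 combines five core job dimensions into a single Motivating Potential Score (MPS). A minimal sketch of that calculation follows, using hypothetical 1-7 ratings (as in the Job Diagnostic Survey) for a telecommuter's assignment:

```python
def motivating_potential_score(skill_variety, task_identity,
                               task_significance, autonomy, feedback):
    """Hackman & Oldham (1980): the three 'meaningfulness' dimensions are
    averaged, then multiplied by autonomy and feedback, so a near-zero
    rating on autonomy or feedback drags the whole score down."""
    meaningfulness = (skill_variety + task_identity + task_significance) / 3
    return meaningfulness * autonomy * feedback

# Hypothetical ratings for a repetitive telecommuting assignment:
before = motivating_potential_score(2, 3, 2, 5, 3)
# After redesigning the assignment for more variety, identity, and feedback:
after = motivating_potential_score(5, 6, 5, 5, 6)
print(before, after)
```

Under these illustrative numbers the redesigned job scores far higher, which is the model's point: because the dimensions multiply rather than add, improving skill variety, task identity, task significance, and feedback together can raise motivating potential substantially.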
V. CONCLUSION
REFERENCES
Apgar, M. “The Alternative Workplace: Changing Where and How People Work.”
Harvard Business Review, 76(3), 1999, 121-139.
Baruch, Y. “Teleworking and Quality of Life.” In P. J. Jackson & J. H. van der Wielen
(eds.), Teleworking: International Perspectives—From Telecommuting to the Virtual
Organization. London: Routledge, 2001.
Cascio, W. F. “Managing a Virtual Workplace,” Academy of Management Exectuive, 14(3),
2000, 81-90.
Caudron S. “Workers’ Ideas for Improving Alternative Work Situations,” Workforce, 77(12),
1998, 42-49.
Di Martino, V. and Wirth, L. “Telework: A New Way of Working and Living,” International
Labour Review, 129(5), 1990, 529-554.
Dimitrova, D. “Controlling Teleworkers: Supervision and Flexibility Revisited,” New
Technology, Work and Employment, 18(3), 2003, 181-195.
Dunham, K. J. “Telecommuters’ Lamet”, The Wall Street Journal, 236(85), 2000, B1, B18.
Hackman, J. R. and Oldham, G. R.. Work Redesign. Reading, Mass.: Addison-Wesley,
1980.
Igbaria M., and Guamaraes, T. “Exploring Differences in Employee Turnover Intentions and
Its Determinants Among Telecommuters and Non-Telecommuters,“ Journal of
Management Information Systems, 16(1), 1999. 147-164.
Mann, S. and Holdsworth, L. “The Psychological Impact of Teleworking: Stress, Emotions
and Health, New Technology, Work and Employment, 18(3), 2003, 196-211.
Stanworth, C. “Telework and the Information Age,” New Technology, Work and
Employment, 13(1), 1998, 51-62.
IMPLICATIONS OF THE FAIRPAY OVERTIME INITIATIVE
TO HUMAN RESOURCE MANAGEMENT
ABSTRACT
Few labor issues are as polarizing as overtime rights. After years of study, discussion,
public debate, and comment, the Department of Labor introduced sweeping changes to the
Fair Labor Standards Act of 1938 (FLSA). Under the rubric of the FairPay Overtime
Initiative (FPOI), the federal law addressing overtime went into effect on August 23, 2004.
The FPOI clarifies employee rights to overtime pay for human resource managers and
protects employers from costly lawsuits. This paper explains the initiative
with implications for employees and employers. Key changes in the FLSA are highlighted
and new exemption tests are detailed.
I. INTRODUCTION
The FLSA of 1938 requires that most employees in the United States be paid at least
the federal minimum wage for all hours worked and receive overtime pay at one and one-half
times the regular rate for all hours worked over 40 hours in a workweek. Defined within the
act are certain types of employees who are exempt from both minimum wage and overtime
pay, i.e. workers employed as bona fide executive, administrative, professional, outside
sales, or computer employees. These exempt categories are cumulatively referred to as
the white collar exemption. To qualify for such exemptions the job description and/or
employment contract must meet certain salary and job duties tests (FLSA, 1938). The past
thirty years have seen these tests become outdated, resulting in debate over the need to either
pay overtime to exempt employees or to redefine exemption status (Khorsandi & Kleiner,
2001).
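The baseline pay rule just described reduces to a short calculation. A sketch for a nonexempt employee (the function name and the example rate are illustrative; the 40-hour threshold and one-and-one-half multiplier are from the Act):

```python
def weekly_gross_pay(hours_worked: float, regular_rate: float) -> float:
    """Gross weekly pay for a nonexempt employee under the FLSA baseline.

    Hours up to 40 are paid at the regular rate; every hour beyond 40
    is paid at one and one-half times the regular rate.
    """
    base_hours = min(hours_worked, 40.0)
    overtime_hours = max(hours_worked - 40.0, 0.0)
    return base_hours * regular_rate + overtime_hours * regular_rate * 1.5

# 48 hours at a $10.00 regular rate: 40 x 10 + 8 x 15 = 520
print(weekly_gross_pay(48, 10.00))  # 520.0
```

The exemption tests discussed below determine which employees fall outside this rule entirely.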
On April 24, 2004 the Wage and Hour Division of the United States Department of
Labor (DOL) responded to these decades-old exemption descriptions with new regulations
relating to white collar exemptions of the FLSA called the FPOI. The purpose of the new
FLSA regulations was to modernize, update, and clarify the criteria for these exemptions and
to eliminate legal problems that the prior regulations caused. This article presents a
discussion of the rationale behind the new regulations, an explanation of the rules developed
by DOL, and concluding comments regarding the implications and benefits of such
regulations for employees and employers.
II. REASONS FOR INCREASED LITIGATION
Every president since Jimmy Carter has tried unsuccessfully to simplify federal
overtime pay rules which are contained in the FLSA. The climate changed dramatically in the
late 1990s primarily due to the increase in employee lawsuits brought under the Act against
employers. Employees claimed they were being denied overtime benefits provided under the
Act and were winning multi-million dollar judgments against their employers for non-
compliance with the regulations (Becker, 2004; Crawford, 2004). The number of class-action
suits based upon the provisions of the FLSA climbed from 31 in 1997 to 102 in 2003, more
than a threefold increase ("Judicial Business", 2004). Such increased litigation is
estimated to cost the economy more than $2 billion annually (National Association of
Convenience Stores, 2004).
Increases in wage and hour lawsuits can be attributed to the desire of employers to cut
costs and increase productivity. Competitive pressures have forced companies across most
industries to cut jobs and revamp their work force deployment, blurring the lines between
employees authorized to receive overtime pay and those who are exempt. Because certain
employees did not have to be paid overtime and could work unlimited hours without
receiving any additional compensation, organizations began to increasingly classify
employees as exempt under the FLSA when, in fact and by law, the employees should have
been classified as nonexempt. In response to such organizational behavior, increasing
numbers of managerial, administrative, sales, and temporary employees began filing high-
visibility class-action lawsuits against employers for unpaid overtime.
Another driving force that contributed to the increase in lawsuits was that the FLSA
regulations provided for significant attorneys’ fees in addition to the damages arising out of a
misclassification or non-classification of employee(s). In many cases, the courts applied
provisions allowing for double damages. Plaintiffs are now entitled to liquidated damages in
an amount equal to the unpaid overtime on their FLSA claim (29 U.S.C. 216.b, 2004). The
FLSA originally made such damages mandatory (Overnight Motor Transportation Co. v.
Missel, 1942). However, the Portal-to-Portal Act (1947), made doubling discretionary rather
than mandatory, by permitting a court to withhold liquidated damages in an action to recover
unpaid minimum wages. Nevertheless, there is still a “strong presumption in favor of
doubling” (Walton v. United Consumers Club, Inc., 1986). It appears then that double
damages are the norm; single damages the exception. The potential for attorneys’ fees being
awarded in addition to damages, actual and double, thus attracted many attorneys to the
FLSA litigation arena.
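The exposure described above is simple arithmetic: liquidated damages equal to the unpaid overtime double the award, and attorneys' fees come on top. A sketch with hypothetical dollar figures:

```python
def flsa_exposure(unpaid_overtime: float, attorneys_fees: float,
                  liquidated: bool = True) -> float:
    """Employer's total exposure on an FLSA overtime claim.

    When liquidated damages are awarded (the norm, given the strong
    presumption in favor of doubling), they equal the unpaid overtime,
    so the damages portion doubles; attorneys' fees are added on top.
    All dollar amounts passed in are hypothetical.
    """
    multiplier = 2 if liquidated else 1
    return unpaid_overtime * multiplier + attorneys_fees

# $20,000 in unpaid overtime plus $15,000 in fees:
print(flsa_exposure(20_000, 15_000))                    # doubled award
print(flsa_exposure(20_000, 15_000, liquidated=False))  # doubling withheld
```

The gap between the two results illustrates why the fee-plus-doubling structure attracted plaintiffs' attorneys to FLSA claims.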
To qualify for exempt status (i.e., exempt from paying overtime), employees
generally must meet certain tests regarding their salary and job duties. More specifically, the
DOL has outlined three tests in the FPOI which an employee must meet in order to qualify
under any of the available white collar exemptions to the FLSA requirements (FairPay:
DOL's, 2004). Under the regulations these tests, when correctly
applied, determine which positions are eligible for exemption from overtime pay and which
are not.
The first test is the salary-basis test. To be exempt from overtime pay, employees
must be paid a pre-determined fixed salary (not an hourly wage) that is not generally subject
to reduction due to variations in the quality or quantity of work performed. Salary is defined
as including only the guaranteed portion of an employee’s pay, not benefits, bonuses,
incentive payments, commissions, or other inducements. This definition of salary has long
been the standard rule under federal overtime law and has not been changed with the new
initiative. Also, the employee must be paid the full salary for any week in which he or she
performs work, and the employee need not be paid for any work week when no work is
performed. Furthermore, rates cannot be prorated for employees who work less than 40 hours
per week.
The second test is the salary-level test. To be exempt from overtime, the new rules
require that employees earn a minimum salary of $455 a week, or $23,660 a year. This is
nearly triple the prior minimum salary of $155 a week, or $8,060 a year. Examples of employees
most likely to be affected include fast-food managers, office managers, and some retail floor
supervisors. Additionally, the new regulations provide for a new white collar
classification referred to as highly compensated employees. These white collar employees
who earn more than $100,000 a year are generally exempt from overtime pay under the new
law (29 C.F.R.541.602, Part 825, 2004).
The third and last required qualification is called the duties test. This test represents a
major change to the Act and incorporates the most significant revision to the final FLSA
regulations. The focus of the duties tests for exemption classification is based upon the
employee’s primary duty. Primary duty means the principal, main, major, or most important
duty that the employee performs. Factors to consider when determining the primary duty of
an employee include, but are not limited to: 1) the relative importance of the major or most
important duty as compared with other types of duties; 2) the employee’s relative freedom
from direct supervision; 3) the relationship between the employee’s salary and the wages paid
to other employees for performance of similar work; and 4) the amount of time spent
performing the major or most important duty (DOL, FLSA Overtime Security, n.d.).
All employment positions are presumed to be entitled to overtime pay unless the
duties tests and the salary tests indicate that the position falls within one of the five job
classifications identified in the Act as being exempt from overtime pay. To identify whether
or not a white collar employee is exempt, that employee must fall within one of the following
defined classifications: 1) executive (including sub-classifications of manager and business
owner), 2) administrative, 3) professional (including learned and creative sub-classifications),
4) computer, and 5) outside sales personnel.
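The three tests and five classifications described above amount to a decision procedure. A rough sketch follows; the names and structure are illustrative, the special rules for hourly computer employees ($27.63/hour) and highly compensated employees (over $100,000) are omitted, and the real duties test calls for legal judgment that no string match can capture:

```python
# The five exempt white collar classifications named in the Act.
EXEMPT_CATEGORIES = {"executive", "administrative", "professional",
                     "computer", "outside_sales"}

def may_be_exempt(paid_on_salary_basis: bool, weekly_salary: float,
                  primary_duty_category: str) -> bool:
    """Rough screen combining the three FPOI tests.

    A position is presumed nonexempt; it passes this screen only if
    the salary-basis, salary-level ($455/week), and duties tests all
    hold. This is an illustrative simplification, not legal advice.
    """
    if not paid_on_salary_basis:          # salary-basis test
        return False
    if weekly_salary < 455:               # salary-level test
        return False
    return primary_duty_category in EXEMPT_CATEGORIES   # duties test

print(may_be_exempt(True, 500, "executive"))        # True
print(may_be_exempt(True, 400, "executive"))        # False
print(may_be_exempt(False, 900, "administrative"))  # False
```

Note the presumption of nonexempt status: the function returns False unless every test passes, mirroring the Act's default that all positions are entitled to overtime pay.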
To qualify for the executive employee exemption, each of the following four
conditions must be met: 1) the employee must be compensated on a salary basis at a rate not
less than $455 per week ($23,660 per year); 2) the employee’s primary duty must be
managing the enterprise, or managing a customarily recognized department or subdivision of
the enterprise; 3) the employee must customarily and regularly direct the work of at least two
or more other full-time employees or their equivalent; and 4) the employee must have the
authority to hire or fire other employees, or the employee’s suggestions and
recommendations as to the hiring, firing, advancement, promotion, or any other change of
status of other employees must be given particular weight (DOL, Fact Sheet #17B, 2004).
An exempt administrative employee is one “whose primary duty is the performance
of office or non-manual work directly related to the management or general business
operations of the employer or the employer’s customers… and whose primary duty includes
the exercise of discretion and independent judgment with respect to matters of significance”
(29 C.F.R 541, 2004, p. 22137). The regulatory criteria that define this category include the
following: 1) the employee must be compensated on a salary or fee basis at a rate not less than
$455 per week; 2) the employee’s primary duty must be the performance of office or non-
manual work directly related to the management or general business operations of the
employer or the employer’s customers; and 3) the employee’s primary duty includes the
exercise of discretion and independent judgment with respect to matters of significance
(DOL, Fact Sheet # 17C, 2004).
The new regulations provide a narrow interpretation for the specific classification of
outside sales employees. To qualify for the outside sales exemption, the employee
must meet the following qualifications: 1) the employee’s primary duty must be making sales
as narrowly defined within the Act, or soliciting purchase or service contracts or rental-type
contracts for the use of facilities for which a consideration will be paid by the client or
customer; and 2) the employee must customarily and regularly be engaged away from the
employer’s place or places of business (DOL, Fact Sheet # 17F, 2004).
The new regulations contain a separate subpart for the computer professional
exemption. To qualify for the computer employee exemption, the following conditions must
apply: 1) the employee must be compensated either on a salary or fee basis at a rate not less
than $455 per week or, if compensated on an hourly basis, at a rate not less than $27.63 an
hour; 2) the employee must be employed as a computer systems analyst, computer
programmer, software engineer, or other similarly skilled employee in the computer field
performing computer-related duties (DOL, Fact Sheet # 17E, 2004).
V. CONCLUSION
An update of the overtime pay regulations contained in the FLSA is long overdue and
the DOL’s FPOI is a reasonable solution for correcting the deficiencies of the prior
FLSA regulations. The FPOI is definitive in its attempt to clarify
and simplify the FLSA, eliminating highly litigated problem areas. In theory, the new
regulations modernize the FLSA standards; represent a substantial improvement over past
rules; and satisfy the debate over paying exempt employees for overtime by redefining
exempt status and duties tests. After comparing the prior regulations with the new ones, it is
our contention that the update benefits both employees and employers, making it easier for
employees to know their rights, for employers to understand their obligations, and for the
DOL to enforce the FLSA aggressively (Boehner, 2004).
Knowledgeable and informed employees are the first line of defense against dishonest
employers who seek to evade the requirements of the FLSA. Any new regulations should
enable employees to more easily recognize when they are owed overtime pay and reduce
investigation and enforcement costs when violations occur (Kersey, 2004). The prior
regulations are unnecessarily complicated, outdated, and do not benefit employees.
Employers will also benefit from clearer rules because they avoid the risk of legal
confusion and costly litigation. Any new regulations should permit disciplinary deductions
for violations of workplace misconduct rules, provided the deduction is pursuant to a
uniformly applied, written, disciplinary policy. It is foreseeable that the number of FLSA
lawsuits brought against employers will continue to increase unless abated by new and
modernized regulations which are more definitive of the exempt classifications under the
FLSA.
REFERENCES
“Battle Engaged Over New OT Rules”. (2004, August 23). CNN Money. Retrieved
September 8, 2004 from http://money.cnn.com/2004/08/23/news/economy/overtime/
Becker, C. “A Good Job for Everyone”. Legal Times. Retrieved March 3, 2005 from
http://www.aflcio.org.
Boehner, J. (2004, April 20). “Chairman of the House Education & the Workforce
Committee Comments”. Retrieved September 8, 2004 from
http://edworkforce.house.gov/issues/108th/workforce/flsa/factsheet042004.htm
Crawford, K. “OT Pay: Winners and Losers.” CNN Money. Retrieved September 3, 2005
from http://money.cnn.com/2004/08/05/news/economy/overtime/index.htm/
DOL, “Fact Sheet #17A: Exemption for Executive, Administrative, Professional, Computer
& Outside Sales Employees Under the Fair Labor Standards Act (FLSA).” Retrieved
September 24, 2004 from
http://www.dol.gov/esa/regs/compliance/whd/fairpay/fs17a_overview.htm
DOL, “Fact Sheet #17B: Exemption for Executive Employees Under the Fair Labor
Standards Act (FLSA).” Retrieved October 15, 2004 from
http://www.dol.gov/esa/regs/compliance/whd/fairpay/fs17b_executive.htm
DOL, “Fact Sheet #17C: Exemption for Administrative Employees Under the Fair Labor
Standards Act (FLSA).” Retrieved February 17, 2005 from
http://www.dol.gov/esa/regs/compliance/whd/fairpay/fs17c_administrative.htm
DOL, “Fact Sheet #17E: Exemption for Employees in Computer-Related Occupations Under
the Fair Labor Standards Act (FLSA).” Retrieved February 9, 2005 from
http://www.dol.gov/esa/regs/compliance/whd/fairpay/fs17e_computer.htm
DOL, “Fact Sheet #17F: Exemption for Outside Sales Employees Under the Fair Labor
Standards Act (FLSA).” Retrieved February 15, 2005 from
http://www.dol.gov/esa/regs/compliance/whd/fairpay/fs17f_outsidesales.htm
DOL, “FLSA Overtime Security Advisor.” Retrieved February 18, 2005 from
http://www.dol.gov/elaws/esa/flsa/overtime/glossary.htm?wd=primary_duty
THE IMPACT OF KNOWLEDGE MANAGEMENT CONCEPTS
ON MODERN HRM BEHAVIOR
ABSTRACT
I. INTRODUCTION
Defining KM is problematic, and definitions vary from person to person and with context
and use. Simply stated, KM is the practice of capturing, preserving, developing, sharing and
using an organisation’s knowledge assets (Bukowitz & Williams, 1999). These knowledge
assets may include databases, documents, policies, procedures (explicit knowledge) and
previously un-captured expertise and experience in individual employees (tacit
knowledge/human intellectual capital). By efficiently managing its knowledge assets, an
organisation can create new capabilities, achieve superior performance, encourage innovation
and enhance customer service, now or in the future (Srikantaiah & Koenig, 2000; Nonaka et
al., 2000; Malhotra, 2003). It is clear from current research that KM is part of a continuous
business improvement process; it relates to the way an organisation works and is therefore
about organisational development. Thus KM is about people, individual growth/learning, and
the organisational processes and infrastructure which facilitate corporate learning (TFPL,
1999; Argyris & Schon, 1978). Moreover, it is now recognised that mastery of KM requires
a skilful blend of people and business processes (Hildreth et al., 1999).
organisations, in combining the capabilities of technology and employees to meet the
organisation’s strategic objectives (Nonaka & Takeuchi, 1995). Moreover, this is largely due
to a reliance on a ‘technology push’ approach to KM, which is not conducive to achieving the
necessary culture and context required to promote organisational learning. Instead,
Damodaran and Olphert (2000) assert that organisations must take a socio-technical
approach, which has as its main objective the management and sharing of knowledge to
support the achievement of organisational goals. This approach maintains that while tools can
certainly facilitate the implementation of the knowledge process, they must be taken in
context and implemented as part of the overall effort to leverage organisational knowledge
through integration with the business strategy, HRM systems (which encompasses HR
planning, recruitment and selection, appraisal, training and development, incentive systems,
employee relations and incorporates specific company practices, procedures and formal
policies), culture, current technologies and other processes (Harrison & Kessels, 2004).
promoting, recognising and rewarding people who model sharing behaviour as well as those
who adopt best practices. Thus, it is effective to design approaches that reward collective
improvement as well as individual contribution of time, talent and expertise. This highlights
the importance of supporting the shift from individual learning to collective learning for
organisational benefit, and the need to encourage people to take responsibility for
voluntarily sharing and leveraging knowledge. Evidently, if organisational learning is to
develop, then not only must suitable learning and change management processes be
instituted in the organisation, but there must also be leadership to
support the KM initiative. Hansen et al (1999) maintain that only strong leadership can
provide the direction a company needs to choose, implement and overcome resistance to a
new KM strategy. Without this guidance, there is a danger that semi-autonomous teams will
overdevelop their working cultures and procedures for creating knowledge in isolation to
address dilemmas rather than collaborating to address longer-term strategic business needs
(Siemieniuch & Sinclair, 1999). It is up to leaders to encourage collaboration across boundaries
of structure, time and function. KM programmes have also been found to benefit from senior
management support, since strong support from executives is critical for transformation-oriented
knowledge projects, but less necessary in efforts to use knowledge for improving
individual functions or processes. They can help to set the tone for a knowledge-oriented
culture by communicating to the organisation that KM and organisational learning are
critical to its success, providing resources for infrastructure, and clarifying what types of
knowledge are most important to the company (Davenport et al., 1997). As such, it is
clear that leadership is necessary for improving individual functions or processes while senior
management support is crucial for transformation/change. More importantly, both are
significant for employee ‘buy-in’ and KM success and must work concurrently.
IV. CONCLUSION
Despite employee awareness and belief in the strategic importance of KM, acceptance
of the EIM system within the organisation was slow. Given the low usage and poor press of
the system in enabling KM, it is possible to suggest that low KM success is inevitable.
However, a variety of factors can influence this outcome. Indeed, the data have
led to the identification of multiple interconnected themes explaining these findings. These
findings have a profound bearing on HRM. For instance, the ‘communication strategy’
must be considered to ensure employee buy-in and support (KPMG, 2000), but responses
indicated a lack of ‘informed communication protocols and guidelines’. For example, there
seemed to be little consistent communication about the purpose of the KM initiative, fuelling
doubt and mistrust in terms of the company’s intentions. There was also lack of system
guidelines and policies, which were found to have negative implications for employee views
regarding the system. Additionally, questions were raised as to whether everyone had
received the same literature and whether the company’s KM objectives were clear and
consistent with all other related policies, as one person stated, “there is pressure to sell
services to others in the company group yet the stated aim of the system is to make know-
how widely and freely available”. Communication is seen as a key facilitator for KM (Boone,
2001).
there was a “need to educate staff regarding when it is most appropriate to use the system”.
The findings emphasise that the right type of training has the potential to improve user
attitudes and encourage a better understanding of the company objectives. For example,
taking a practical approach to training might be beneficial, “successful usage has been
achieved by spending half a day with individuals one by one” (Siemieniuch & Sinclair, 1999).
As such, these findings suggest that training is necessary to keep company and user skills
aligned to KM needs, more specifically, training has a powerful influence on KM.
Continuous professional development is considered to be essential to professional and
knowledge workers (Robertson and Hammersley, 2000). In addition, responses suggested that
the company would benefit from a training needs analysis. Hansen et al. (1999) argue that
codification and personalisation strategies require that organisations hire different kinds of
people and train them differently. Parallel to these findings, employees felt that there might
be more support for KM if it were perceived to fit in with the company ‘reward scheme’,
rewarding individual achievement, co-operation and sharing. As one employee stated, there is a “need
to institutionalise KM and make it part of staff assessments”. Therefore, it is essential for
companies to recognise that reward schemes must incorporate a ‘learn from errors’/positive
motivation approach (Schein, 1993). Moreover development programmes that allow for the
assessment, feedback, coaching and development of people at all organisational levels are
needed, that guide individuals to develop and prepare themselves for changes in their work.
Reward systems indicate what the organisation values and shape individual behaviour.
Traditional reward systems reward those who produce rather than share (Zarraga & Bonache,
2003). Rewarding knowledge sharing is, therefore, about lowering the cost of sharing or
increasing the benefit associated with that type of behaviour. Therefore, it can be argued that
group incentives, promotion schemes that encourage individuals to be more collaborative and
360-degree appraisal systems can create an appropriate climate for the transfer and creation
of knowledge. Similarly, these measures, when applied systematically, can lead to smoother
socialisation, internalisation and externalisation of knowledge within the organisation.
that they have not been treated with trust, and if they feel that work related commitments
have not been kept, then employees are less willing to share knowledge at work (Patch et al,
2000; Damodaran & Olphert, 2000). The findings also suggested the need to decentralise
authority where possible and to create more ownership, to achieve the necessary culture for
KM success. The findings also pointed toward a strong sense of ‘management commitment’
and ‘leadership’, as important factors for knowledge transformation. Additionally, the need to
achieve attitude change from the management team was highlighted as influential in
exploiting KM. However, one main problem was that the EIM system was not being used by
all managers but by their secretaries instead. Evidently, a key behavioural change is required
at high levels in the organisational hierarchy. Individuals need change from the current to the
desired behaviour for KM success (Damodaran & Olphert, 2000), and this needs to start with
senior management on a practical level. Studies (Despres & Hiltrop, 1995; Horwitz et al.,
2003) have found that knowledge workers tend to have a high need for autonomy, a significant
drive for achievement, a stronger identity and affiliation with a profession than with a company,
and a greater sense of self-direction. These characteristics make them likely to resist the
command-and-control imposition of views, rules and structure. Furthermore, findings suggested that
‘leadership skills’ were necessary to support the KM initiative, respondents stated, “we need
leadership on a practical level, someone to help us develop skills to support knowledge
sharing”. For example, employee responses further indicated that the system was used most
successfully when the work groups had a strong need for EIM and most importantly when
departmental heads demonstrated commitments to the system. The employees expected that
team leaders would provide, “coaching on the benefits of sharing knowledge”, training on
how to use the system, and practical support to assist them in adapting to new ways of
working, i.e. team working and virtual collaboration (McDermott, 1999; Hansen et al., 1999).
Despite this expectation, responses highlighted a lack of leadership, drive and influence.
Using a case example, the paper establishes how HRM practices can instil a knowledge
vision and encourage knowledge sharing in an organisation. Furthermore,
appropriate HRM procedures and HRM systems are fundamental to EIM/KM success.
REFERENCES
EMPLOYEE PERFORMANCE EVALUATIONS
PUBLIC VS. PRIVATE SECTOR
ABSTRACT
This study is designed to support the alternative hypothesis that employee evaluation
systems currently used in the public and private sectors contain biases and do not adequately
reflect the true performance of the employee. To do this, the study will attempt to reject the
null hypothesis that employee evaluation systems currently in use in the public sector
adequately reflect the performance of an employee. By correlating employee appraisals with their
perceived “best” and “worst” supervision and their perception of appraisal system accuracy,
this study will be able to assess the validity of the appraisal system in use.
I. INTRODUCTION
2000). In reality, objective performance is verifiable performance. This definition provides
the basis of this study. The dictionary defines objective as uninfluenced by emotion and
based on observable phenomena. For human beings, remaining uninfluenced by emotion is
difficult; most human decisions are greatly influenced by it. How, then, can we set emotion
aside and become objective?
Alternative Hypothesis. Employee evaluation systems currently used in the public and most
private sectors contain biases and do not adequately reflect the true performance of the
employee.
There are four commonly held myths regarding employee evaluations: that the system
provides feedback, that it is aimed at improving performance, that it standardizes appraisals
to make them objective, and that it protects employers from lawsuits for wrongful
termination (AT&T, 1997). In fact, a study by AT&T in 1997 disproved these myths in their
entirety, arguing that they represent an idealized view of appraisal systems that only a very
few organizations achieve. The study also
concluded that poor performers account for about 10% of the workforce. Therefore, it is
unrealistic to bias appraisal systems high and expect greater achievements by the workforce.
solely within the employee’s control by both the employee and their supervisor (Bernardin,
1989). Both of these examples demonstrate the willingness of the evaluation system to be
influenced by other values. Unfortunately, this value system does not take into account its
effect of diminishing the reliability of the performance evaluation as a strategic decision-aiding tool.
A growing number of studies now indicate that the relationships that develop between
supervisors and their employees form the basis for differential treatment (Duarte, 1993).
Supervisors develop high quality relationships with a few employees but not others.
Supervisors then often offer preferential treatment and benefits to these high-quality-relationship
employees on the premise that they assist the supervisor in reaching their goals
and commitments.
Implicit stress theory leads individuals to make assumptions about one trait based on
another trait (Rotondo, 1995). Organization behavior research has revealed the presence of
implicit stress theory in the employee appraisal process. This suggests that the rater makes
assumptions about the level of stress the employee is subject to and judges the outcome based
on that stress. In short, an employee with an assumed high stress job may be rated more
leniently by their superior, while an employee with an assumed low stress job may have a
more critical superior. In reality, the job description should not contribute significantly to the
overall behavior outcome. An employee meeting and exceeding the requirements of their job
description should not be further appraised on the difficulty or stressfulness of that job. The
underlying assumption is that the employee’s compensation package adequately reflects the
level of effort required and further preferential arrangements in the appraisal are unnecessary
(Rotondo, 1995). A recent study showed that employees generally reported higher job
satisfaction when rated highly on their evaluations, but that job satisfaction fell steadily after
a four-year period (Blau, 1999). This provides strong evidence that inflated evaluations do not
have long-lasting organizational effects. While new employees need to be encouraged to
make decisions, take some chances, and help the organization compete, continued
overemphasis of performance capability leads to an undervaluation of the appraisal system.
Thus this factor needs to be taken into account, diminishing the case for an inflated evaluation.
The next section describes the methodology used in this study. The design will be
outlined and the selection of subjects will be addressed. The variables will be defined, the
data collection process will be outlined, and the section will conclude with the procedures
and data analysis.
V. METHODOLOGY
The population used in this study was drawn from the public sector, specifically from
the Department of Defense. In particular, this study examined knowledge and management
workers. Candidates chosen from this population were in the middle of their careers as
knowledge workers, with experience levels ranging between 10 and 20 years. Samples were
drawn from approximately 110 engineers and 50 program managers. A minimum 30 percent
response rate was required to obtain statistical significance for both groups.
VI. RESULTS
Forty-two surveys were collected, just short of the required sample of fifty. The
survey data consisted of two populations: a GS-13 pool with 31 respondents and a GS-14
pool with 11 respondents. Due to time limitations, the deadline for data collection was fixed
and additional solicitations were not possible. The collected sample was sufficient to proceed
with analysis, although further work must be done to statistically validate the results of this
relatively small sample.
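As a quick check on the sampling arithmetic, the following sketch uses the approximate population figures given in the methodology; the exact counts are assumptions, and the stated target of roughly fifty responses is consistent with a 30 percent rate:

```python
# Check of the response-rate arithmetic: with roughly 110 engineers and 50
# program managers, a 30 percent response rate implies about 48-50 surveys.
engineers = 110
program_managers = 50
population = engineers + program_managers
min_rate = 0.30

required = population * min_rate
print(f"population = {population}, minimum responses = {required:.0f}")

collected = 42
print(f"collected {collected}: meets minimum? {collected >= required}")
```

With these figures, the 42 collected surveys fall just short of the minimum, matching the text.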
Question 2 of the survey assessed the employee's perception of the evaluation system.
The results showed an average of 2.84 for GS-13s and 2.73 for GS-14s on a 5-point scale, as
shown in Attachment 3. The standard deviations were 1.25 and 0.79, respectively, indicating
general consensus on this question. Thus, most employees, regardless of grade, perceive the
appraisal system as falling between acceptable and a modest dislike. However, the GS-14
respondents show a population where the Gaussian distribution is slightly shifted to the right,
and none of the GS-14 respondents liked the evaluation system "very much." This may be due
to the smaller sample collected in the GS-14 population, or it may indicate that as employees'
grades increase, they search for a more meaningful evaluation system. For the purposes of
this study, most respondents believe the system is adequate to accomplish the evaluation of
the employees.
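The summary statistics above (means and standard deviations on the 5-point scale) can be reproduced from raw Likert responses with a short script; the ratings below are hypothetical illustrations, not the study's actual survey data:

```python
from statistics import mean, stdev

def summarize(ratings):
    """Return (mean, sample standard deviation), rounded, for 1-5 Likert ratings."""
    return round(mean(ratings), 2), round(stdev(ratings), 2)

# Hypothetical responses on the 5-point scale
# (1 = dislike very much, 5 = like very much)
gs13 = [1, 2, 2, 3, 3, 3, 4, 4, 5, 1]
gs14 = [2, 3, 3, 3, 2, 3, 4]

print("GS-13:", summarize(gs13))
print("GS-14:", summarize(gs14))
```

A smaller standard deviation, as reported for the GS-14 group, means the responses cluster more tightly around the mean.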
VII. CONCLUSIONS
This study did not disprove the null hypothesis; rather, the data supported it. Of a
sample of 165 knowledge workers at the mid-career level, 30 percent responded to the survey
within a three-week period. The small sample was nonetheless sufficient to perform the data
analysis needed to evaluate the hypothesis and to draw some organizational conclusions.
Most employees, regardless of grade, perceived the appraisal system as falling between
acceptable and a modest dislike. The GS-14 population had higher appraisal ratings with a
tighter standard deviation than the GS-13 population; higher-capability employees received
promotions. Higher-grade employees demand more appraisal quality. Both populations
perceive the appraisal system to be accurate. The data collected showed that GS-14
employees attributed accuracy to the appraisal when their rating was high, and lower
accuracy when their rating was lower. This was not the case with the larger GS-13
population, which viewed rating accuracy independently of the achieved rating. The dyadic
supervisor-employee relationship had a more significant impact at the lower grade than at the
higher grade. Among the GS-13s, a "bad" job was attributed, in the employee's perception, to
having a "bad" supervisor; the GS-14s were not critical of their supervision. Finally, since
employees acknowledge a low appraisal, and since employees as a population find the
appraisal system to be accurate based on the analysis of question 5, it can be concluded that
the appraisal system in place at this organization is accurate. This study should be expanded,
however, to include additional geographic locations within the same organization and within
different organizations.
REFERENCES
Bernardin, J. (1989). "Increasing the accuracy of performance measurement: A proposed
solution to erroneous attributions." Human Resource Planning, 12(3), 239-251.
Blau, G. (1999). "Testing the longitudinal impact of work variables and performance
appraisal satisfaction on subsequent overall job satisfaction." Human Relations, 52(8),
1099-1113.
Bolino, M., & Turnley, W. (2003). "Counternormative impression management, likeability,
and performance ratings: The use of intimidation in an organizational setting." Journal
of Organizational Behavior, 24(2), 237-250.
Duarte, N. T., Goodson, J. R., & Klich, N. R. (1993). "How do I like thee? Let me appraise
the ways." Journal of Organizational Behavior, 14(3), 239-249.
Ferris, G., Judge, T., Rowland, K., & Fitzgibbons, D. (1994). "Subordinate influence and the
performance evaluation process: A test of a model." Organizational Behavior and
Human Decision Processes, 58(1), 101-136.
TO REPORT OR NOT REPORT: A MATTER OF GENDER AND NATIONALITY
ABSTRACT
This paper examines the influence of nationality and gender on perceptions of the
effectiveness of sexual harassment reporting behaviors. Although gender did not
impact whether the respondents viewed reporting harassment behaviors as effective, the
influence of nationality was strongly supported. Specifically, we found that the US subjects
were more likely than Thai subjects to view victims’ decisions to report sexual harassment
behaviors as more effective in stopping these unwanted advances than other types of
responses. The practical implications and limitations of this study are also discussed.
I. INTRODUCTION
One of the more perplexing, unanswered questions in the sexual harassment literature
is why more people are not reporting harassment incidents (Loy and Stewart, 1984; U.S.
Merit Systems Protection Board, 1995). Sexual harassment is a serious and costly problem
that occurs in many countries (Fitzgerald, Drasgow et al., 1997; Maatman, 2000; Ismail and
Lee, 2005). However, since there is little agreement as to what actions truly constitute sexual
harassment (O'Connor et al., 2004), victims tend to respond to each harassment incident
differently.
Victim responses are important in the sexual harassment process because they
significantly alter the situation by either stopping or facilitating more harassment behaviors
(Dan, Pinsof et al., 1995). Failure to promptly respond to harassment usually causes the
victim to lose credibility (Seagrave, 1994), as coworkers begin to doubt whether the
harassment ever occurred at all. Ignoring the harasser may be interpreted as the victim
accepting or even welcoming further harassment in the future (Perry et al., 2004). In addition,
this inaction may complicate the legal process and negatively impact the victim's case
(Fitzgerald et al., 1995). Since "failure to report" is one of the most intriguing problems of
the sexual harassment phenomenon, the current authors chose to investigate this particular
issue.
The purpose of this paper is to explore the demographic factors that influence a
victim’s perception of the decision to report sexual harassment behavior. Specifically, does a
victim's gender or nationality determine whether he or she considers reporting sexual harassment
behaviors an effective response? The remainder of this paper will review sexual harassment
responses and the determinants of those responses before developing and empirically testing
hypotheses. The results of this investigation are examined and the practical implications and
limitations of this study are discussed.
II. CATEGORIES OF RESPONSES
The most common victim responses to sexual harassment are indirect (Seagrave,
1994). These responses include ignoring or avoiding the harasser (Loy and Stewart, 1984).
Even though more direct approaches such as communicating with the harasser or making
complaints are more effective at ending or minimizing harassment, they tend to be used far
less frequently (Benson and Thomson, 1982; Gruber, 1989; Gruber and Smith, 1995;
Gutek and Koss, 1993). Dealing more assertively with sexual harassment (e.g., filing a
lawsuit) may result in both positive and negative consequences. On the negative side,
victims may have to cope with negative emotions, such as retraumatization and feelings of
powerlessness (Fitzgerald et al., 1995; Lenhart and Shrier, 1996; Stambaugh, 1997). On the
positive side, victims may receive compensation, settlement, personal growth, confidence,
and feelings of self-worth if they effectively confront the harassers (e.g., the victims win the
lawsuit) (Stambaugh, 1997). As a result, victims should set realistic goals and carefully
consider both the costs and benefits before they decide how to best respond to sexual
harassment (Lenhart and Shrier, 1996).
The broadest category of responses to sexual harassment involves the victim's decision
as to whether or not to report the harassment behaviors. Most victims choose not to report the
behaviors and tend to use responses that are informal and passive. Only a small number of
victims take formal actions and report the behavior (Loy and Stewart, 1984; U.S. Merit
Systems Protection Board, 1995).
III. DETERMINANTS OF RESPONSES
The victims' decision of how to respond to sexual harassment depends on many
factors. The person has to first determine that the behaviors constitute sexual harassment
before making the decision on how to react. Sexual coercion or quid pro quo sexual
harassment behaviors are more severe (U.S. Equal Employment Opportunity Commission,
1992; Paetzold and O'Leary-Kelly, 1996) and the most identifiable and agreed upon forms of
sexual harassment (Sheffey and Tindale, 1992; Williams and Cyr, 1992; Gutek and
O'Conner, 1995).
The severity of sexual harassment behaviors has been shown to be the strongest
predictor of responses, with the more severe behaviors leading to the more direct responses
(Gutek and Koss, 1993; Gruber and Smith, 1995). On the other hand, a hostile environment
or gender harassment behaviors are more ambiguous and the perceptions of sexual
harassment under these conditions may be influenced by the victim’s gender (Baird et al.,
1995; Stockdale et al., 1995; Foulis and McCabe, 1997; Hendrix et al., 1998). Research
strongly indicates that females are more likely to report sexual harassment than males (Baker
et al., 1990; Perry et al., 2004; Jackson and Newman, 2004). On the other hand, male victims
may perceive the behavior as flattering and respond by consciously denying that the behavior
is sexual harassment (Thacker, 1996).
National culture may also play a role in influencing the responses to sexual
harassment behaviors. According to Hofstede (1980, 1991), people from
different cultures differ in dimensions such as individualism vs. collectivism, masculinity vs.
femininity, power distance, uncertainty avoidance, and long-term vs. short-term orientation.
People from collectivistic cultures, for example, may feel that seeking support and not
reporting the behavior are more effective responses, while people from individualistic
cultures tend to prefer a more
direct approach. Another example of this cultural impact may be found in the value
dimension of power distance. High power distance cultures tend to respect and fear
authority and thereby may not perceive reporting sexual harassment behaviors as an effective
response. Low power distance cultures promote equality, even in unequal authority
relationships, and thereby may empower sexual harassment victims to perceive reporting
harassment behaviors as an effective response.
IV. HYPOTHESES
The current authors propose that when sexual harassment behaviors are severe and
easily identifiable, females are more likely than males to believe that reporting the behaviors
is an effective strategy. Also, individualistic, low power distance cultures are more likely to
identify reporting as an effective strategy than collectivistic cultures possessing a high power
distance. Thus, the two major hypotheses that form the structure of
this study are as follows:
H1: When compared to men, women will rate reporting severe sexual harassment
behaviors as more effective.
H2: When compared to participants from Thailand, U.S. participants will rate
reporting severe sexual harassment behaviors as more effective.
V. METHODS
A total of 228 business students at a southern university in the U.S. and 260 business
students from an English-speaking university in Bangkok, Thailand completed questionnaires
that included their perceptions of vignettes depicting sexual harassment behaviors and
rankings of effective responses to such behaviors. Of the Thai students, 107 (41.2%) were
male, while 140 U.S. students (61.4%) were male. Over 95% of both groups were under 25
years old.
VI. RESULTS
The first hypothesis contends that women rank reporting as a more effective way to
respond to sexual harassment behaviors when compared to men. The results from the
ANOVA showed no significant differences between the rankings of females and males
(F(1,407) = 1.273, p = .260). As a result, hypothesis 1 is not supported.
The second hypothesis suggests that U.S. respondents will rank reporting as a more
effective response to sexual harassment behavior when compared to Thai respondents. The
results affirm that U.S. respondents rank reporting as a significantly more effective response
(F(1,407) = 6.075, p = .014).
We also found a significant interaction effect between gender and nationality (F(1,407)
= 7.978, p = .005) which was not previously hypothesized. A plot of estimated marginal
means by gender and nationality shows that U.S. females ranked reporting as the most
effective among all groups, followed by Thai and U.S. males, who were almost in agreement.
Thai females ranked reporting as the least effective among all groups (see Figure 1).
Figure 1. Estimated Marginal Means by Gender and Nationality (y-axis: estimated marginal
means, 2.80-4.00; x-axis: gender, male vs. female; separate lines for Thai and American
respondents).
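The pattern in the figure is essentially the four cell means of the effectiveness rankings. A minimal sketch of how such gender-by-nationality cell means are tabulated, using made-up rankings rather than the study's data:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (gender, nationality, effectiveness ranking) records
records = [
    ("female", "american", 4), ("female", "american", 4),
    ("male",   "american", 3), ("male",   "american", 3),
    ("female", "thai",     3), ("female", "thai",     2),
    ("male",   "thai",     3), ("male",   "thai",     3),
]

# Group rankings by (gender, nationality) cell, then average each cell
cells = defaultdict(list)
for gender, nationality, ranking in records:
    cells[(gender, nationality)].append(ranking)

cell_means = {cell: mean(vals) for cell, vals in cells.items()}
for (gender, nationality), m in sorted(cell_means.items()):
    print(f"{nationality:8s} {gender:6s} mean ranking = {m:.2f}")
```

In this hypothetical data, U.S. females rank reporting highest and Thai females lowest, mirroring the shape of the interaction reported above.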
As indicated by the results discussed above, the current authors found that nationality
makes a difference in whether a person thinks reporting sexual harassment behavior is
effective. Males and females with different nationalities tend to have different opinions
regarding the effectiveness of these harassment reports. In order for businesses to operate
smoothly in today’s global environment, sexual harassment must be prevented or resolved
effectively. However, a universal sexual harassment policy thought to be useful in the U.S.
may not work in other countries. Companies should be sensitive to the local culture and seek
to incorporate the differences in their policies and training. It may also be beneficial to
create multiple and friendly reporting channels suitable for both males and females, while
encouraging the reporting of sexual harassment incidents.
This study has a few limitations, including its use of self-reported data from student
participants in only two countries. We hope that it will serve as a basis for future
investigation in work settings in many countries and with other demographic factors, thereby
contributing to a better understanding of, and efforts to mitigate, the numerous adverse
effects of sexual harassment.
REFERENCES
CHAPTER 14
INSTRUCTIONAL/PEDAGOGICAL ISSUES
CHANGING THE MEDIA IN THE MIDDLE EAST: LEBANON IMPROVES
JOURNALISM & MASS COMMUNICATION EDUCATION
ABSTRACT
This paper assesses higher education in journalism and mass communication programs
in Lebanon. Its main objective is to determine whether or not students (future communication
professionals) are receiving adequate education and training before entering the job market.
The findings suggest that the study of journalism and mass communication in Lebanon is
advancing, but it still has a long way to go to reach the standards of well-recognized
programs in the U.S.
I. INTRODUCTION
The media industry has witnessed many changes in recent years. More than ever
before, communication professionals have to acquire certain skills to survive a world
characterized by stiff competition and new technologies. The Arab world cannot simply
watch the wave of new developments in media education and training; it has to accommodate
these changes. Lebanon is selected for this study for various reasons. First, the country has more
media than any other Arab state. Second, although Lebanon suffered from a civil war (1975-
1991), Israel’s occupation of the security zone in South Lebanon (1978-2000) and political
assassinations (1977-2005), the country is still considered by many people in the Middle East
as the lone shining star of personal liberty and freedom of speech. Third, relative to its size,
Lebanon has more journalism and mass communication higher education programs than
anywhere else in the Arab world. Fourth, Lebanon adopts a liberal policy in commercial,
financial and fiscal activities. Such a policy has given the country a distinctive role in the
region where Beirut has regained its reputation as a major center of publishing, banking, and
trade.
II. METHODOLOGY
This study is based on: (1) online research of journalism and communication
programs at all universities in Lebanon that offer degrees or courses in journalism and mass
communication; (2) in-depth interviews through long distance phone calls and/or email
exchanges (from July 25 to September 10, 2005) with four full-time university professors, a
director of media training center, and a news editor; and (3) professional knowledge and
experience of the author who served as a news reporter for over five years in the Middle East,
completed his undergraduate study in public relations and advertising in Lebanon and his
graduate work in journalism and mass communication in the U.S., and has taught for 19
years at five U.S. universities. Two of the four Lebanese professors hold Ph.D. degrees in
mass communication from the U.S. A third professor has an MA degree in journalism from
France, and the other holds an MBA degree from the American University of Beirut.
Furthermore, two of the four professors had professional experience in news reporting before
they started teaching at their respective universities. One professor had expertise in media
planning and currently manages an integrated marketing communication firm. The fourth
professor had no professional experience in news media. Each of the four professors is
currently teaching different courses ranging from news reporting to news editing, media
management, advertising, public relations, marketing communication, etc. Two of the four
professors work at the Lebanese University (LU) and the others work at Notre Dame University (NDU).
LU is a public university and has the largest student body of all universities in Lebanon.
About 1500 students are enrolled annually in its College of Information and Documentation.
NDU, on the other hand, has the highest enrollment of communication majors in a private
Lebanese university (460 students). The director of the media training center holds a Ph.D.
from France. He taught journalism courses in Lebanon for over 15 years. The news editor
holds an MA degree. He is also a part-time instructor at a local university and hosts a political
program at a major television station.
III. FINDINGS
Notre Dame University is another Lebanese school that integrates theory and practice.
However, NDU, unlike many other Lebanese educational institutions, requires students to do
an internship at a local or regional media outlet. Its Department of Mass Communication
encompasses three sequences: advertising and marketing, journalism and public relations, and
radio and television. The university has also a graduate program in media studies that
includes advertising, journalism, and electronic media. A few universities offer courses rather
than degrees in journalism and mass communication. For instance, the American University
of Beirut lists a public relations course in the marketing concentration of the Business School.
AUB also provides journalistic and other forms of specialized writing courses in the English
Department. Interestingly, Haigazian University has an advertising and communication
major, much like American universities. However, this program is part of the Business
Administration and Economics School. It requires an internship and focuses on business with
three communication courses – introduction to advertising, consumer communication, and
integrated marketing. On the other hand, the Holy Spirit University of Kaslik and St. Joseph
University of Beirut offer degrees in radio and television. The former has a program in
graphic arts and advertising which requires some mass communication courses. This program
is part of the Fine Arts Department and concentrates on art techniques such as drawing,
painting, etc. No university in Lebanon has in-house media outlets. Thus, students have no
opportunity to run a radio or television station, produce newspapers, and manage a public
relations or advertising firm.
Academic and Professional Qualifications of Faculty Members
The two professors from the Lebanese University severely criticized the hiring
process at their institution. While many faculty members have Ph.D.s, some are imposed on
the college for political or confessional considerations. Thus, nepotism plays a significant
role. The two representatives from Notre Dame University asserted that all faculty members
at their department have appropriate degrees with technical skills and training in their
respective areas such as radio, television and journalism. NDU also relies heavily on part-
timers who come from the media industry in Lebanon to teach a variety of courses such as
video production, newspaper production, newsgathering, media planning, public relations
techniques and other courses.
Relationships Between Universities and Media Organizations
The professors at the Lebanese University expressed different opinions about the
relationships between their university and media institutions. One claimed that no
relationships exist because the university does not allow full-time faculty members to work
elsewhere. Thus, a professor who teaches news writing cannot work as a reporter or editor at
a newspaper. The other professor pointed out that many administrators and faculty members
have tried hard to establish good relations with the media, and, to some extent, they have
succeeded. He also said many alumni are now news reporters or editors and they keep in
touch with their former teachers. The two professors from NDU admitted that no official
relationships with media organizations exist although the university has made serious
attempts to establish a partnership with the Lebanese Broadcast Corporation (LBC). Several
NDU students, however, do their training (internship) at LBC and some of them end up
working for the station. NDU has a solid relationship with the leading Lebanese newspaper
An-Nahar because Mr. Walid Abboud, a part-time instructor at the university, is a long-time
editor at An-Nahar. He takes senior students in journalism under his wing and helps them do
journalistic work at the newspaper.
Media Educational Research
All four interviewees believe that media educational research is almost non-existent.
The official government’s Center for Studies and Research, in collaboration with the
Lebanese University and other universities, has embarked on some research that relates to
media. The United Nations Educational, Scientific and Cultural Organization (UNESCO) has
organized conferences and workshops in an attempt to enhance journalists’ capabilities and
foster freedom of expression/press and exchange of information. In May 2005, the UNESCO
conducted a two-day workshop titled “Media Ethics and Freedom of Expression.” The
workshop, which was carried out in coordination with LU and Alumni of the Lebanese
University’s College of Information and Documentation, included training sessions on
newsgathering and reporting. One NDU professor said that the situation is improving a bit
but only because world organizations and other countries are getting involved. He added,
“What they call in America ‘publish or perish’ is used here as a stick but rarely is a college
professor kicked out of a university because he or she didn’t publish.” Two professors
suggested the following to entice more research productivity among faculty: (1) use the
“stick” and not just swing it in someone’s face; (2) be serious about the need for genuine,
useful, and relevant research; (3) emphasize quality and not quantity; (4) state rewards
clearly; (5) use financial as well as academic rewards; (6) tell prospective researchers that it
is in their best interest to do research; (7) associate research with noble causes; and (8)
organize workshops and seminars on “how research is done.”
Required Changes to Meet New Challenges
The four full-time professors claimed that there has been a rapid growth in
communication programs throughout the country. They also noted that the majority of
universities are applying Western models to deal with worldwide developments. However,
they expressed a wide range of views about the necessary steps to deal with changes in the
media industry. One NDU representative asserted that the Arab world first and foremost
needs a new wave of managers, administrators, opinion leaders, rising citizens, new sets of
values and professional practices in all disciplines including the journalism profession and
other communication-related areas. He added, “We are a long way from making strides in
that direction. Yet, we cannot and must not remain stagnant or complacent in spite of the
difficulties.” He also admitted that “all changes will remain cosmetic if we don't make our
future journalists and communicators apply what they learn on the ground through a number
of well defined and carefully designed courses that sternly require a combination of theory
and practice.” The other NDU professor suggested organizing workshops and inviting more
media professionals and visiting professors from overseas. On the other hand, the two
professors from the Lebanese University said their institution has to work on two fronts to
meet new challenges: (1) evaluating the current programs in an attempt to apply new ones,
and (2) holding conferences with Arab and international organizations to reach agreements on
training programs and education exchanges.
Media Training Opportunities
Three of the four professors strongly believe that training and development of media
professionals in Lebanon lag behind international standards. The following are the reasons:
(1) lack of financial resources at the disposal of newspapers, magazines and television
stations, many of which claim financial losses; (2) resistance of some journalists to change;
(3) the government's lack of support; (4) people's negative perceptions of journalists in the
Middle East; (5) many news reporters do not know how to use new technology; (6) some
owners of news organizations have no background in communication and do not recognize
the significance of training; and (7) the turnover rate among reporters is high because of their
dissatisfaction with the status quo. The other professor asserted that the training of media
professionals is fine “because many of our graduates have won national and international
awards for their work in several fields.” He added, “The only difference is our media
professionals, compared to others in developed countries, have less access to information
sources and lower margin of freedom.”
While the number of universities that offer programs in journalism and mass
communication is growing, there is only one training center. Established in 1995, An-Nahar
Training Center offers training to one of four groups: An-Nahar’s employees, Lebanese
journalists, Arab journalists, and beginning journalists. The training sessions range from 3 to
14 days. Many sessions are conducted in coordination with delegates from American and
European organizations. In general, student training focuses on practical aspects of
newsgathering, reporting, and editing. The center also enhances students’ journalistic skills
and exposes trainees to new developments in the field. A common issue that the center
frequently faces is the trainees’ academic background. Dr. John Karam, Director of the
Center, noted that most educational programs in the Arab world emphasize theory over
practice. The gap is very wide between what students learn and the skills they need to acquire
in order to practice contemporary journalism. In some countries, students are taught outdated
theories. In other countries, students lack essential journalistic writing skills. His colleague
Mr. Walid Abboud shared the same view. Four years ago, Mr. Edmond Saab, executive
editor of An-Nahar and long-time communication practitioner, pointed out that he often
recruited reporters based on their qualifications and talents and not their degrees in
journalism and communication (Saab, 2002). He mentioned that some of his trainees became
presidential spokespersons, editors-in-chief or holders of other prestigious positions in
Middle Eastern media. Saab, like many of his American counterparts, deplored the poor
writing skills of journalism students.
IV. CONCLUSION
REFERENCES
THE PROTEAN CAREER MODULE: APPLIED MANAGEMENT AND FINANCE
EXERCISES FOR ASPIRING PROFESSIONALS
ABSTRACT
Hall (2004) describes protean careers as more relational, self-directed, self-defined
and cross-functional than traditional careers. This career module describes three weeks of
exercises that engage students in protean career planning by applying basic concepts from
several disciplines. Based on career and team concepts from management, students
collectively conduct career interviews and deliver presentations. They also utilize finance
concepts to describe how they would save and invest. The target audience includes
undergraduate students who want to develop life skills in career selection and financial
literacy.
The basis for the career interviews is the realistic job preview concept taken from the
organizational socialization model (Nelson, 1987). Realistic job previews involve
organizational representatives sharing both positive and negative information about the job or
organization with potential employees (Nelson, 1987; Phillips, 1998). It is advisable that
students engage in realistic job previews prior to joining organizations so that they have
pragmatic expectations of their jobs, organizations and managers. Research shows that honest
and comprehensive realistic job previews lead employees to develop greater organizational
commitment and satisfaction and a lower likelihood of turnover (Phillips, 1998).
Goals of the career interview activities are to learn about the job search, interview and
selection process. The skill objectives are to practice working in teams, networking and oral
communication. In the first week, students read the assigned career interview materials
(DeSouza & Alleyne, 2002; Hayes, 2001; Holland, 1985; Super, 1980). These readings help
clarify many of the realistic job preview issues. As a result of reading and discussing these
articles, students can ascertain the level of alignment between their desired jobs,
organizational cultures and their personal needs before finalizing their career choices.
During week one, students also form career interview presentation groups based on
similarity of career interests. Groups determine ground rules and create project plans to
maximize their performance. They work together to locate and interview two entry level
professionals and two graduating senior students. Students use the career interview questions
with entry level professionals to discover job responsibilities, needed skills and personal
goals or values (see Table I). They also have to link the professionals’ comments to theories.
For example, based on Holland’s (1985) typology, students might address how well the
interviewees’ personality types fit with the interviewees’ career choices. The other career
interviews involve graduating students who have accepted job offers or are in the final stages
of the interview process (see Table II).
The group context of the professional and student interviews is an example of
relational learning. This type of learning is often associated with protean career development
because it enhances information sharing and accessibility that can sometimes be difficult
without long term organizational membership (Hall, 2004). The entry level professionals
must currently have positions that the students would be eligible to hold immediately upon
graduation. None of the interviewees can be relatives or family friends; the rationale for
prohibiting familiar contacts is to push students to broaden their professional networks.
Broader networks become especially beneficial when the employment economy is not at its
strongest (DeSouza & Alleyne, 2002).
Week three focuses on career interview presentations. Each team has 20-25 minutes
to communicate responses to the career interview questions and to incorporate career
experiences and personal reflections. Career experiences integrate creativity into the career
presentations to make them interesting and informative. Past career experiences include a
courtroom etiquette skit from a lawyer career team and an own-your-own-hair-salon
infomercial by a beauty career team. Students also reflect on how their career choices align
with their top three values and how their careers fit their definitions of success. These
reflections are consistent with a self-directed protean career focus because students build their
self-awareness by refining their career choices, values and definitions of success.
Presentations are evaluated on the articulation of responses from the career interview
questions and personal reflections, eye contact, clarity and intonation of speech, audience
interaction and creativity.
The second week of the career module covers personal finance. The objectives are
for students to learn how to save and invest their money. Given the inhibitions that
sometimes accompany the topic of money, students individually complete the finance
activities. As a way to kick off this component, the instructor presents and discusses
information with the class on savings and investment statistics. For example, based on a
Northwestern Mutual Generation 2001 study, just eight percent of college students feel very
knowledgeable about financial planning (Gillin, 2002). While people are living longer,
medical costs are escalating and Social Security funding is growing more questionable, fifty-
six percent of Americans are not preparing for retirement in a way that will maintain their
current standard of living in their senior years (America Saves, 2000). The Bureau of
Economic Analysis reports the September 2005 average personal savings rate for Americans
as negative four-tenths of a percent (Commerce Department, 2005); in other words, outlays outpace income.
Statistics from the 2001 Survey of Consumer Finances reveal that only about 63% of Whites
and 48% of racial minorities save; in addition, the median net worth of racial minorities is
about one-seventh that of their White counterparts (Aizcorbe, Kennickell, & Moore,
2003). Evidence suggests this is partially due to lower rates of stock ownership among minorities
(Aizcorbe, Kennickell, & Moore, 2003). The lack of adequate savings and financial
knowledge prevents average Americans, particularly racial minorities, from quickly paying
off debts and fully taking advantage of investment opportunities. Vehicles such as the
personal finance component of the career module become important tools to expose people
to stocks or mutual funds as viable investment options.
After the instructor shares financial literacy information with the class, students read
and discuss financial literacy readings (Brown, 2001a, 2001b; Linton, 2002;
McKinney, 2001; Mintzer & Mintzer, 1999; Social Security On-Line, 2005a, 2005b).
These materials explain why short and long term investments are
important and the status of social security. In addition, students use this content to define
basic concepts such as risk tolerance, stocks, certificates of deposits and diversification. The
present protean career landscape suggests a need to have short term reserves to cover
emergencies and long term funds to maintain one's standard of living in retirement (Brown,
2001a, 2001b; Hall, 2004; McKinney, 2001).
4. Go to Salary.com or a similar salary website to estimate your starting salary. Subtract the
associated federal and state taxes. Set aside 10% of your salary after taxes for long term
(LT) savings (i.e., retirement). Specifically identify at least two stocks or mutual funds
to invest your savings. What are the year-to-date, three year and five year rates of return?
What is the name of the financial institution where you would put your money? How
does your choice of LT instruments fit your risk tolerance?
5. Based on monthly expenses and starting salary in questions 3-4, do you have enough
money to set aside the recommended amounts for ST and LT savings? Show a
calculation to support your conclusion. If your calculation is positive, do nothing. If it is
negative, specifically explain how you would adjust your revenues and/or expenses to
become positive.
6. How did you incorporate feedback from trusted others (e.g., a parent, teacher, relative, or
working professional) to enhance your ST/LT investments? Note contact information.
After reading the finance materials, students answer questions on the topics of social
security, financial instruments, risk tolerance, short term funds, and long term retirement
savings (see Table III). Students specifically identify their official retirement age and
solvency issues associated with social security. After estimating monthly expenses, they
identify the financial instruments they would use for short term savings, the associated
interest rates, and the names of the financial institutions. Brown (2001a) recommends that
single adults keep short term or rainy day funds consisting of three to six months of expenses
in case of unexpected situations (e.g., downsizings, family deaths). Students only have to set
aside two months because the career module timeframe only covers their first year as
working professionals. As it relates to long term investments, students estimate their starting
salaries then follow Linton’s (2002) suggestion to set aside 10% of their revenue to
adequately fund their retirement. Next, students determine if they will have enough money to
pay their regular expenses and also fund investments at the recommended levels.
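The budget check in question 5 is simple arithmetic: after-tax income minus regular expenses minus the recommended ST and LT set-asides. A minimal sketch of that first-year calculation, using hypothetical salary, tax, and expense figures (none taken from the module itself):

```python
def first_year_surplus(gross_salary, tax_rate, monthly_expenses):
    """Check whether a first-year budget covers regular expenses plus
    the recommended savings: 10% of after-tax pay for long term (LT)
    investments, per Linton's (2002) rule, and a two-month short term
    (ST) reserve, per the career module's one-year timeframe."""
    net = gross_salary * (1 - tax_rate)   # after-tax income
    lt_savings = 0.10 * net               # 10% LT set-aside
    st_reserve = 2 * monthly_expenses     # two months of expenses
    expenses = 12 * monthly_expenses      # regular annual spending
    return net - expenses - lt_savings - st_reserve

# Hypothetical numbers: $48,000 gross, 25% combined tax, $2,500/month.
surplus = first_year_surplus(48000, 0.25, 2500)
print(surplus)  # negative => adjust revenues and/or expenses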
III. CONCLUSION
Pfeffer and Fong (2002) reference the need for business schools to be more
interdisciplinary in their teaching approach to enhance the connection between business
education and workplace practice. The career module implements their suggestion by
creating protean career exercises that integrate self-awareness, relational learning and cross-
functional knowledge. Students raise their self-awareness by reflecting on how well their
proposed entry level positions align with their skills and values. They also engage in
relational learning in the career interview presentation teams. In addition, students use
knowledge of management and finance to complete the career interviews and personal
finance activities. Students entering the work world will face multi-disciplinary problems,
work in teams and need self-awareness to best direct their own careers. It is important that
they begin to develop or refine these capacities sooner rather than later.
REFERENCES
Aizcorbe, Ana M., Kennickell, Arthur B. and Moore, Kevin B. Recent Changes in
U.S. Family Finances: Results from the 1998 and 2001 Survey of Consumer
Finances. Retrieved November 28, 2005, from
http://www.federalreserve.gov/pubs/bulletin/2003/0103lead.pdf, 2003.
America Saves. Most Behind in Retirement Saving. Retrieved November 28, 2005,
from http://www.americasaves.org/back_page/retirement.cfm/, 2000.
Brown, Carolyn M. “The Power of One.” Black Enterprise, 32(3), 2001a, 93-98.
Brown, Carolyn M. “Solo Parenting.” Black Enterprise, 32(3), 2001b, 101-106.
Department of Commerce Bureau of Economic Analysis. Personal Income and
Outlays: September 2005. Retrieved November 28, 2005, from
http://www.bea.gov/bea/newsrel/pinewsrelease.htm, 2005.
DeSouza, Winifred and Alleyne, Sonia. “Getting a Foothold on Your Career.”
Black Enterprise, 32(7), 2002, 105-110.
Gillin, Eric. Generation Y Flunks Finance 101. Retrieved November 28, 2005, from
http://www.thestreet.com/markets/ericgillin/10007059.html/, 2002.
Hall, Douglas T. “A Protean Career: A Quarter-Century Journey.”
Journal of Vocational Behavior, 65, 2004, 1-13.
Hayes, Cassandra. “Choosing the Right Path.” Black Enterprise, 31(9), 2001, 108-113.
Holland, John L. Making Vocational Choices: A Theory of Vocational Personalities and
Work Environments. Englewood Cliffs, NJ: Prentice-Hall, 1985.
Linton, Clifton. A Decade-by-Decade Guide to Retirement Planning. Retrieved
November 28, 2005, from http://www.401khelpcenter.com/mpower/feature_110702a.html,
2002.
McKinney, Jeffrey. “For Richer or Poorer?” Black Enterprise, 32(3), 2001, 109-113.
Mintzer, Rich and Mintzer, Kathi. The Everything Money Book. Holbrook, MA:
Adams Media Corporation, 1999.
Nelson, Debra L. “Organizational Socialization: A Stress Perspective.” Journal of
Occupational Behavior, 8, 1987, 311-324.
Pfeffer, Jeffrey and Fong, Christina T. “The End of Business Schools? Less Success Than
Meets the Eye.” Academy of Management Learning and Education, 1(1), 2002, 78-
95.
Phillips, Jean M. “Effects of Realistic Job Previews on Multiple Organization Outcomes: A
Meta-Analysis.” Academy of Management Journal, 41(6), 1998, 673-690.
Social Security On-Line. Frequently Asked Questions about Social Security’s
Future. Retrieved November 28, 2005, from http://www.socialsecurity.gov/qa.htm, 2005a.
Social Security On-Line. Social Security Full Retirement and Reductions by Age. Retrieved
November 28, 2005, from http://www.ssa.gov/retirechartred.htm, 2005b.
Super, Donald. “A Lifespan, Lifespace Approach to Career Development.” Journal of
Vocational Behavior, 16, 1980, 282-298.
TEACHING APPROACHES AND SELF-EFFICACY OUTCOMES IN
AN UNDERGRADUATE RESEARCH METHODS COURSE
ABSTRACT
I. INTRODUCTION
For many years our department has required research methods as a core curriculum
course for our undergraduate majors. The rationale behind this requirement is multifaceted.
First, faculty believe an important learning outcome for our undergraduates is the ability to
critically examine claims about communication. Second, faculty believe training in research
skills prepares students to construct more credible claims (see Winn, 1995). Third, research
requires discipline, which in a course designed to engage students in the research process can
become a potent learning outcome. Despite faculty belief in the importance of having a
research methods course in the core curriculum, students tend to carry relatively high
anxiety about the subject matter and task requirements in such a course. Regardless of how
the course is framed, faculty observe characteristic student fear associated with steps in the
research process, particularly with units on statistics. Therefore, an important but implied
learning objective for an undergraduate research methods course is the reduction of fear, or
the increase in comfort level, for performing specific research tasks. The purpose of this
study was to investigate the relationship of teaching outcomes to objectives and techniques in
an undergraduate research methods course.
students were presented with problems requiring critical thought and creativity to solve. This
pragmatic problem-based approach could achieve the overall course goal of training students
to construct more credible claims. A problem-based approach could focus on the design of a
project or proposal which requires students to both make choices for particular research
methods and to justify those choices (McBurney, 1995). Additionally, a problem-based
approach could result in a range of transferable skills which may be relevant in the workplace
(Spronken-Smith, 2005). However, as Winn (1995) pointed out, requiring a completed project
may be impractical for a variety of reasons including the amount of time required by faculty
in administering projects for larger classes. Winn (1995) and Clark (1990) suggested
requiring students to complete a research proposal which does not require data analysis on
individual student projects. Similar to those suggestions for structuring research methods
courses given above, the research methods course in the current study has structured
assignments as subsections of a research proposal. Each assignment is evaluated
independently from other assignments. However, each assignment builds upon the tasks
accomplished in the previous assignment.
The overall goal for teaching any course is to increase understanding and skill in applying
course related material to real life situations. This goal is especially important in the
undergraduate research course where, as noted above, students come into the
course with considerable trepidation about the course requirements. Ultimately, students
should achieve some sense of accomplishment as well as mastering course material in order
to realize a level of self-efficacy necessary to overcome research anxiety. To determine if this
goal has been achieved, the following research questions are proposed:
RQ1 Is there a measurable change in undergraduate students’ perceptions of their own
abilities to critically examine and conduct research?
RQ2 Does having prior experience conducting research make a difference in comfort level
with research tasks for students enrolled in an undergraduate research methods
course?
III. METHODOLOGY
sufficiently high (α = .86). For the post-test condition, the reliability of the instrument was
very high (α = .90).
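The α values reported here are Cronbach's alpha, computable directly from a respondents-by-items score matrix. A small illustration with made-up data (not the study's instrument):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals),
    using sample variances (ddof = 1)."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(scores[0])  # number of items
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Three respondents answering two perfectly consistent items:
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # -> 1.0
```

Perfectly consistent items yield α = 1.0; real instruments like the one here (α = .86 to .90) fall below that ceiling.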
IV. RESULTS
When averaging the scores for the pre-test dependent variables and comparing them
to the average scores for the post-test dependent variables, results indicated a measurable
and positive change in undergraduate students’ perceptions of their abilities to critically
examine and conduct research. Specifically, students reported more comfort in accomplishing
all ten research tasks at the end of the semester (M = 3.83, SD = .73) compared to the
beginning of the semester (M = 3.11, SD = .70), t(51) = -6.27, p < .01, ω² = .42. Correlation
between the pre- and post- conditions was both moderate and significant (r = .33, p = .017, N
= 52).
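The pre/post comparison above is a paired-samples t test: the t statistic is the mean of the per-student difference scores divided by their standard error. A minimal version on toy data (not the study's 52 responses):

```python
from math import sqrt

def paired_t(pre, post):
    """Paired-samples t statistic on difference scores (post - pre).
    Returns (t, degrees of freedom)."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    return mean_d / (sqrt(var_d) / sqrt(n)), n - 1

# Toy comfort ratings for four students:
t, df = paired_t(pre=[3, 2, 4, 3], post=[4, 3, 5, 5])
print(round(t, 2), df)  # -> 5.0 3
```

The study reports t(51) = -6.27 because it subtracted in the opposite direction; only the sign differs.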
Table I
Central Tendencies For Comfort Level With Research Tasks
Pre-condition Post-condition
Task M SD M SD
3. Choosing search tools 3.04 .99 3.94 .92
4. Evaluating sources 3.31 1.02 3.90 1.18
5. Citing Internet sources 3.38 1.03 4.02 .98
6. Organizing the literature review 3.19 1.10 4.04 1.03
7. Developing hypotheses 2.98 1.11 3.88 .98
8. Developing methods 2.85 1.04 3.69 .90
9. Analyzing statistics 2.54 1.06 3.25 1.01
Table II
Summary Of Within-Subjects Repeated Measures Analysis For
Comfort Level With Research Tasks Pre- Versus Post-Condition
Task df F η² p Wilks’ Λ
3. Choosing search tools 1,50 23.58 .32 .000 .68
4. Evaluating sources 1,50 10.68 .18 .002 .82
5. Citing Internet sources 1,50 15.76 .24 .000 .76
6. Organizing the literature review 1,50 20.83 .29 .000 .71
7. Developing hypotheses 1,50 27.05 .35 .000 .65
8. Developing methods 1,50 19.94 .28 .000 .72
9. Analyzing statistics 1,50 16.79 .25 .000 .75
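For a single within-subjects effect with one numerator degree of freedom, the effect-size columns follow directly from F: η² = F·df₁ / (F·df₁ + df₂) and Wilks' Λ = 1 − η², so the table rows can be checked against each other:

```python
def effect_size(F, df1, df2):
    """Partial eta squared and Wilks' lambda from a within-subjects
    F ratio; for a one-df effect, lambda = 1 - eta squared."""
    eta_sq = F * df1 / (F * df1 + df2)
    return eta_sq, 1 - eta_sq

# Task 3 from Table II: F(1, 50) = 23.58
eta_sq, wilks = effect_size(23.58, 1, 50)
print(round(eta_sq, 2), round(wilks, 2))  # -> 0.32 0.68
```

The computed values match the .32 and .68 reported for choosing search tools; the same identity reproduces the other rows.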
For the task of discussing findings, students’ comfort level appeared to increase
slightly from the beginning of the semester (M = 3.46, SD = 1.09) to the end of the semester
(M = 3.77, SD = .96), although no statistically significant difference was found either within-
subjects or between cohorts of students who had previously conducted research and those
who had not. Additionally, no interaction effects were found between having conducted
research previously and having attended a class in research methods. Several post-tests were
conducted to determine if other characteristics of students impacted the results. In particular,
the researcher was interested in determining if having prior experience participating as a
research subject made a difference in perceived level of comfort in performing research tasks
for students enrolled in an undergraduate research methods course. Both between-subjects
effects (F(1,50) = 4.05, p = .05, η² = .07) and within-subjects effects (F(1,50) = 9.85, p =
.003, η² = .07, Wilks’ Λ = .83) were uncovered for the task of selecting a topic (task 1). Table III
below shows the means and standard deviations for the pre- and post- conditions for task one
for students who had or had not previously participated as a research subject. No interaction
effects were uncovered between cohorts of students and condition for research tasks.
Additionally, no between-subjects effects were uncovered for research tasks 2 through 10.
Table III
Central Tendencies For Comfort Level With Selecting A Topic
Research Participant? M SD
Pre-condition
Yes 3.20 1.21
No 3.76 .90
Post-condition
Yes 3.89 1.05
No 4.29 .77
The researcher was also interested in determining if student classification made a
difference in research comfort levels for undergraduate students. No between-subjects or
interaction effects were uncovered, although a significant within-subjects effect was found
(F(1,50) = 31.04, p < .001, η² = .39, Wilks’ Λ = .61). Finally, the researcher was interested
in determining whether the study concentration of a student made a difference in perceived
comfort levels with research tasks. Results indicated that concentration made no difference in
perceived comfort levels with research tasks, although within study concentration categories
comfort level increased from the beginning of the semester to the end of the semester
(F(1,50) = 36.91, p < .001, η² = .43, Wilks’ Λ = .57). No interaction effects were found
between study concentration and perceived comfort level with research tasks.
V. CONCLUSION
REFERENCES
Clark, Ruth A. “Teaching Research Methods.” In J. A. Daly and Associates, eds., Teaching
Communication: Theory, Research, and Methods. Hillsdale, NJ: Lawrence Erlbaum,
1990, 181-191.
Dobratz, Marjorie C. “Putting the Pieces Together: Teaching Undergraduate Research from a
Theoretical Perspective.” Journal of Advanced Nursing, 41, (4), 2003, 383-392.
Irish, Donald P. “A Campus Poll: One Meaningful Culminating Class Project in Research
Methods.” Teaching Sociology, 15, (2), 1987, 200-202.
Maier, Scott R., and Patricia A. Curtin. “Self-efficacy Theory: A Prescriptive Model for
Teaching Research Methods.” Journalism and Mass Communication Educator, 59, (4)
2005, 352-364.
McBurney, Donald H. “The Problem Method of Teaching Research Methods.” Teaching of
Psychology, 22, (1), 1995, 36-38.
Scheel, Elizabeth D. “Using Active Learning Projects to Teach Research Skills Throughout
the Sociology Curriculum.” Sociological Practice, 4, (2), 2002, 145-170.
Spronken-Smith, Rachel. “Implementing a Problem-Based Learning Approach for Teaching
Research Methods in Geography.” Journal of Geography in Higher Education, 29, (2),
2005, 203-221.
TOP 10 LESSONS LEARNED FROM IMPLEMENTING ERP/E-BUSINESS
SYSTEMS IN ACADEMIC PROGRAMS
ABSTRACT
This article describes ten key principles which inform the selection and
implementation of ERP software in academic programs. Factors which lead to success in
industry mirror many of the success factors in academia. In addition, academia faces some
unique challenges that differ from industry and require special attention.
I. INTRODUCTION
Schools of Business around the country are grappling with how best to integrate and
exploit the capabilities of today’s enterprise e-business systems in their programs. The
debate seems to have moved from whether or not it makes sense to utilize this type of
software – the benefits and business cases are so compelling – to how to install, support,
maintain, integrate and fully leverage such software. Importantly, academia must
acknowledge that these are complex systems that commercial organizations have often taken
years and millions of dollars to get fully up and running. Increasingly, those involved in
industry-academic partnerships that focus on integrating enterprise systems into the curriculum
are finding the complexities and challenges facing academia very similar to those of commercial ERP
implementations. The following insights are based on the experience of two institutions that
have implemented ERP systems in their curriculum and an industry partner who has (1)
three years of reviewing these types of projects, (2) conversations with hundreds of faculty
and (3) a dozen actual implementations. These insights are coupled with current ERP
implementation issues.
To teach ERP successfully, the partnership between industry and the university must
be more than a simple software donation. The acquisition of software, typically via a
grant or donation from a sponsor, is only the first step. For the industry partner, simply handing or
“awarding” software to a school and expecting that the school will get it installed, running and
integrated into courses is unrealistic. Research and white papers on the implementation of
ERP systems in industry have shown that much more must be accomplished for success to be
attained (Bedell, Bohanan, Marler, Dulebohn, 2003; Bedell & Floyd, 2002). ERP benefits
accrued by industry are a direct result of effective preparation and implementation. And so it
is in academia. Academics must be prepared for these systems in order to use them
effectively. Astute designers of curriculum know that software is not the answer, but that
“appropriate use” is. Software assists in providing the means to illustrate concepts and to
enlighten through a hands-on, learn-by-doing technique. However, this is not enough: the
educator must provide the “why” and the “how come” and then integrate these three into a
realistic curriculum. Industry partners need to understand the faculty’s point of view and
acknowledge the level of effort required to be successful in the classroom (Floyd & Bedell,
2002). They must understand that the successful academic program will be one where a
relationship exists with the vendor, not a quick tax write-off and photo in the College paper.
Each partner must understand the life cycle of acquisition and implementation, the level of
effort required by the academic in each phase, and the level of support the vendor must
provide. Key phases include a technical set-up phase, a faculty-training phase, a
curriculum development and integration phase, and a ‘go live’ phase where the students use
the software.
If you do not have a faculty member who is willing to go the extra mile to
incorporate ERP into the curriculum, then you will not be successful. Similarly, if that
faculty member’s efforts are not supported by the College, failure is on the horizon.
Passion and perseverance are the key ingredients to move these types of academic
collaborations forward. Regardless of all the support a vendor may provide (as suggested in
item #1), there will be a burden that only the faculty can address. Finding technical resources
– staff, servers, labs – getting the Dean’s buy-in, collaborating with other departments,
revising and updating courses, and getting release time for training are some of the nuts and
bolts implementation details that just have to get done (Bedell, Floyd, Glasgow, & Ellis,
2006). These items represent the blocking and tackling of implementing complex systems.
At the same time, working with a single faculty member who is passionate but does not have
the buy-in of the College’s administration can leave a program open to being cut if the faculty
member moves. Given the level of commitment that implementing an enterprise system
takes, all parties need an understanding of expectations, commitment and timeline. It helps to
have this in writing! Launching this type of initiative can provide a university a leadership
role among peers (Bedell & Floyd, 2002). However, the institution’s leadership must see the
program as adding value to its goals and be prepared to make the commitment to bring it to
fruition.
If faculty members really do not know the application, then there is no way they
can effectively incorporate it into the curriculum. Like the old real estate adage about
location, if there is one aspect that cannot be stressed enough, it is training. From the industry
partner’s perspective, at the end of the day, the goal is for the school, faculty and students to
have a positive experience using the software and to walk away with a good feeling or
impression about the company. Well trained faculty who understand the features and
functionality of the systems will be better able to utilize the software in their courses, to
articulate its features and benefits, to place the software in an industry context,
and to serve as effective product evangelists. Faculty design courses to achieve specific
learning outcomes – it is not a simple matter to determine how to take a complex suite of
software tools, integrate them into class lab exercises and know that they are supporting the
desired learning outcomes. This would be a challenge for anyone, but someone who is not
versed in the capabilities of the software itself will have no chance of being successful.
V. LESSON 4: TECHNICAL SUPPORT
When a class is taught using technology as the delivery mechanism, make sure that
there are technical people around to keep things going. Like the training issues cited above,
ensuring that the software is installed correctly and fully operational is important for the
success of the educator and for the image of the software vendor. As noted in Lesson 1,
“appropriate use” of the ERP system does not entail dealing with systems that are not fully
functional. And as faculty rely on the technical environment for the success of their
classroom, systems which fail reflect poorly on the faculty involved. They will be perceived
as teaching a class where time is spent resolving technical problems rather than learning
about key issues. In addition, the software donated by the industry partner is perceived as
poor quality and thus one of the reasons for the donation (positive interactions by students
with the software) fades away. From the industry partner’s perspective, having an up front
“qualification” or assessment process helps. Installing, setting up, optimizing and maintaining
commercial, enterprise e-business and database systems requires certain expertise. Helping
prospective participants understand the technical requirements and assessing their ability to
utilize and support these systems is a necessary step. While a full-time, dedicated Systems
Analyst (SA) or Database Administrator (DBA) may not be necessary, it is imperative that
the school look at the technical requirements and develop a plan to provide support through a
specific technical support person.
Know the scope of the project that you wish to pursue so that you can get all the
players into place. If you intend to span organizational boundaries, then it will take more
effort! The similarity of issues between a commercial ERP implementation and faculty from
different departments and disciplines launching an ERP-focused academic program is
astounding. Who leads, what vendor to use, differences in opinion between departments,
resistance to change, the impact of technological innovation on current practices, the role of
technology in integrating inter-organizational processes – each has its counterpart in
academia (Bedell & Floyd, 2002). Issues like the impact on curriculum, resistance to
change, the impact on course sequences, resources, and the technology’s ability to break
down barriers need to be planned for. Accounting and HR may now want to work together
(Bedell, Canniff, Richtermeyer, & Weston, 2004). Operations Research may now want to
require new MIS courses as prerequisites so their students are familiar with software and data
that can be shared among courses. Such interactions will take time to work out. Technology,
egos and emotions can be a volatile mix. Like the debates in industry implementations –
vanilla or customize, accept the built in “best practices” or revise them to “how we have always
done things” – faculty will quickly recognize the implications of deploying ERP systems that
will be shared by departments. An astute institution will recognize the “change management”
implications and use the project to innovate and revitalize.
Know what it will cost you so that you can determine whether it has been worth it. Also,
develop some measurement methods to determine success. Both sides of most industry-
academic partnerships will have to justify the investments and show some ROI. The
academics will need to document the impact on learning and show the benefits to the students
and the institution – facilitated through the development of clear, concise learning goals and
objectives and assessment (Bedell, 2003b). From the industry partner’s perspective, metrics
such as the number of faculty trained, the number of courses utilizing the software, and the number
of students impacted will be important. Over time, recruiting and hiring statistics are a valuable way to
show the impact of the partnership. Other areas like linkages with commercial customers,
participation by faculty at conferences, white papers, etc. are all quantifiable items that
document the benefits.
If you build alliances with industry, you will have a place to put your graduates and
you will have folks who can come into the classroom and share their experiences. An
important aspect for today’s universities is relevance. In the world of ERP, building
relationships with the industry users of the ERP systems makes sense. These industry partners
can provide advice on curriculum issues while being actively engaged in supporting these
University initiatives through recruitment of its graduates, setting up internships, and
participating on advisory boards. In our environment, for example, a relationship was
developed with an organization that was a PeopleSoft customer. This relationship enabled
the class activities to be realistic and helped focus the classroom initiative
on those capabilities that are most often considered to be mission critical (Bedell, Floyd, et
al., 2006). For the sponsoring software vendor, having its customers participate in an
industry-academic partnership provides validation of its academic alliance program and
investment. Programs that “grow the pipeline” of IT talent are great, but initiatives that
address customers’ key issues – like sourcing IT resources for specific projects – are even more
valuable.
Implementing an ERP system is like a marriage, not a flirtation. To pursue this, one
needs to decide that this will be a long-term effort. Otherwise the return on investment will
be too low to participate in such a program. Sustaining any relationship – personal or
professional – takes ongoing effort. In a partnership focused on rapidly changing areas like
IT applications and education, agility and commitment will be key. Software versions
continually change, new applications come out, a school’s priorities can shift, faculty
move on, or the institution’s leadership changes. This ties back to the institution’s
commitment, but it also reflects that the relationship or project is never a “finished” product.
The true benefits to both parties come from ongoing collaboration and interaction – new
opportunities can develop, other faculty and/or departments may want to participate, and new
tools and resources will need to be integrated into courses. Many schools have indicated that
it takes 18 to 24 months to fully understand and exploit all of the features of an enterprise-class
system. Given the expense and effort that launching this type of initiative requires, it is
unlikely that any school or faculty member is going to invest the time if the project will only
impact a single course. There may be pilots, a phased rollout and incremental development,
but as long as there is an overall vision or goal the partners will recognize that each piece is
building toward an end goal. As assessments of commercial ERP implementations indicate,
“you are never done”; integrating enterprise e-Business applications requires a long-term,
sustained effort. From the academic side, it is important that faculty stay interested in the
product and continue to use it actively. A reward structure that rewards faculty for experimenting with
and developing ERP activities is one possibility. Product exercises need to change often
enough so that classroom activities continue to provide a bridge to reality. From an
organizational perspective, some concern exists as to how to sustain the long hours of work
that the project team inevitably puts in. The interest of project team members must be
maintained through the reward structure.
ERP can be implemented piecemeal (e.g., just install the general ledger app and AR,
or just install HR, or just install CRM), or it can be a suite of applications all installed at one
time. The material can also be installed for one class or for a collection of classes or for a
complete curriculum. However, it is best to plan ahead and take into account the level of
effort (see Lesson 1), the faculty involvement (see Lesson 2), and the administrative
involvement (see Lesson 4). Academia, like industry, may decide how best to implement ERP
applications into their curriculum. Lesson 5 suggests that one can think of ERP as breaking
down academic walls, of building bridges across departments through shared enterprise
applications, data, and concepts. To do so, the college must develop an implementation
strategy with a multi-faceted approach that encompasses both technology and curriculum
(Floyd &amp; Bedell, 2002).
Key thought: To be successful, the ERP system needs to provide a data repository rich
enough to support a broad assortment of reports and analyses. Otherwise the system is
focused purely on transaction activities rather than reporting and analysis activities (Floyd &
Bedell, 2002; Wyrick, Bedell, & Rahmani, 2004). The development of a dataset is an
important task that is rarely accomplished well. Such a dataset provides a key resource that
extends the usefulness of an ERP initiative from understanding the back office, operations
view of the organization to a managerial role where data is mined and trends and important
issues are explored. No easy solution to this goal has yet presented itself, yet its importance is
critical to a strong curriculum with a management focus.
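The dataset-construction task described above can be approached programmatically. The sketch below is not tied to any particular ERP product; it generates a small synthetic general-ledger file of the kind students could mine for trends, and the department and account names are purely illustrative assumptions.

```python
import csv
import random
from datetime import date, timedelta

random.seed(2006)  # reproducible classroom dataset

# Illustrative dimensions; a real course dataset would mirror the ERP's chart of accounts
DEPARTMENTS = ["Sales", "HR", "Finance", "Operations"]
ACCOUNTS = ["4000-Revenue", "6000-Payroll", "6100-Travel", "6200-Supplies"]

def generate_transactions(n, start=date(2006, 1, 1), days=365):
    """Build n general-ledger style rows spanning one fiscal year."""
    return [{
        "txn_id": i,
        "date": (start + timedelta(days=random.randrange(days))).isoformat(),
        "department": random.choice(DEPARTMENTS),
        "account": random.choice(ACCOUNTS),
        "amount": round(random.uniform(50.0, 5000.0), 2),
    } for i in range(1, n + 1)]

def write_csv(rows, path):
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)

rows = generate_transactions(500)
write_csv(rows, "gl_transactions.csv")

# A managerial (rather than back-office) view: total amount by department
by_dept = {}
for r in rows:
    by_dept[r["department"]] = by_dept.get(r["department"], 0.0) + r["amount"]
```

A generated file like this shifts classroom exercises from pure transaction entry toward the reporting and analysis activities the paragraph above calls for.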
XII. CONCLUSION
REFERENCES
Marler, J., Dulebohn, J. &amp; Bedell, M. “Using technology to teach human resource
management.” Invited presentation at the Sixty-Fifth National Academy of
Management Annual Meeting. Honolulu, HI, 2005.
PROFILES IN ELECTRONIC COMMERCE RESEARCH
ABSTRACT
1. INTRODUCTION
What exactly constitutes electronic commerce is an open question. There are even
diverging opinions on the term’s contraction (e.g., “e-Commerce,” “E-commerce,”
“eCommerce,” or simply, “EC”). Terms such as i-Commerce, iCommerce, Internet
Commerce, Internet Sales, Internet Selling, Web Commerce, Online Sales, Buying and
Selling Online, eTransactions, eSales, eTrade, cyber-market, and cyber-business have also
been used to describe essentially the same activity.
Some have equated electronic commerce with electronic data processing (EDP) in
business applications because “electronic” computers were being used for “commerce.”
Using this simplest definition, e-Commerce could be considered to have been conducted for
the past 50 years. A more restricted definition is also used (Turban, et al., 2004, page 3):
“Electronic commerce describes the process of buying, selling, transferring, or exchanging
products, information, or payments over computer networks, including the Internet.”
However, businesses have been using computer communication through networks for
at least 30 years through electronic data interchange (EDI), e.g. inter-bank transfers of funds
(Alter, 2002, page 184). Others equate e-Commerce with I-Commerce, i.e., commerce
occurring only on the Internet. However, businesses use a wide variety of software
technologies on the Internet with different communication protocols, including electronic mail
(SMTP/POP), file transfer (FTP), Internet relay chat (IRC), and of course, World Wide
Web pages (HTTP).
We believe that the vast majority of researchers and the general populace use the term
e-Commerce to refer to business activity over the Web. Based on this definition, electronic
commerce could be considered to have begun around 1995, roughly
the same time the Netscape Navigator 1.0 browser and Internet stock tracking indices first
appeared. Similarly, research on electronic commerce first began to appear in the mid- to
late-1990s and several new journals with “electronic commerce” in their titles began to be
published. For example, the first issue of the International Journal of Electronic Commerce
appeared in the fall of 1996. Thus, the field is only about a decade old and is continuing to
mature as an area of research. Although there have been several meta-studies of publications
in the field of Management Information Systems (e.g., Alavi & Carlson, 1992; Claver, et al.,
2000; Culnan, 1987) and sub-fields such as Group Support Systems (e.g., Pervan, 1998;
Wong, 2000), to our knowledge, none have been conducted to survey this first decade of
research in e-Commerce. Little is known, for example, about who the leading researchers and
institutions in the field are, or about what trends have occurred over the years.
This study is perhaps the first to make a systematic analysis of research in this field.
The results show that a large number of researchers at many institutions are conducting
research in e-Commerce, and to date, much of the research has been focused on overviews or
on particular applications. Of the empirical research in the area, almost half has been
conducted via surveys, indicating that more research using other methodologies may be
needed for a balanced perspective.
II. SURVEY
The number of articles in the four surveyed journals appears to have peaked in 2002,
roughly two years after the peak of the market in e-Commerce. This lag could be due to the
delays in the publication process, e.g., reviewing, revising, and placing the manuscript in
press. It is widely recognized that the time from initial submission of a
manuscript until the date an article finally appears in a leading academic journal is often
1.5 to 2 years, and in some cases even longer. According to this analysis, the number of
articles related to e-Commerce should be on the rise again, as it has now been over two years
since the nadir of the market cycle.
indicates that the author has been a primary contributor to the research or has fewer co-
authors, and we believe that the geometric mean is a more accurate representation of a
researcher’s contribution to the field. (In the related tables: AR = Arithmetic Rank, AM =
Arithmetic Mean, GR = Geometric Rank, and GM = Geometric Mean.)
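The difference between the two averages can be made concrete. The sketch below assumes, as is common in bibliometric studies (the paper's exact credit formula is not reproduced here), that each author of a k-author article receives 1/k credit; the sample co-author counts are hypothetical.

```python
import math

def arithmetic_mean(values):
    return sum(values) / len(values)

def geometric_mean(values):
    # nth root of the product, computed in log space to avoid underflow
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical author: two solo articles, one with 3 authors, one with 4
coauthor_counts = [1, 1, 3, 4]
credits = [1 / k for k in coauthor_counts]  # 1/k credit per k-author article

am = arithmetic_mean(credits)   # about 0.646
gm = geometric_mean(credits)    # about 0.537
# gm <= am always; the gap widens as credit is diluted across many co-authors,
# which is why a high geometric mean signals primary-contributor status
```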
Andrew B. Whinston Univ. of Texas, Austin 8 6 2
Eloise Coupey Virginia Tech. 8 3 3
Mahesh S. Raisinghani Univ. of Dallas 8 3 3
Ting-Peng Liang National Sun Yat-Sen Univ. 8 4 3
V. TYPES OF RESEARCH
Using the classification framework (Figure I) developed by Alavi and Carlson (1992),
we analyzed the type of e-Commerce research by research methodology. Of the 344 articles
studied, 309 (89.83%) were classified as academic research. Initially, we identified 23
categories but then narrowed the list down to 9 (Table IV). While articles on overviews have
declined over the years, the number of articles about mobile commerce is growing.
Topics 1998 1999 2000 2001 2002 2003 2004 Total % Total
Type of e-Commerce 3 1 17 1 4 26 7.56%
Overview of e-Commerce 6 17 26 14 6 3 72 20.93%
e-Commerce Issues 6 12 9 10 1 38 11.05%
e-Commerce Applications 4 12 17 20 11 64 18.60%
e-Commerce Economic 5 5 2 6 2 20 5.81%
e-Commerce Model 11 14 9 2 5 41 11.92%
e-Commerce CRM 1 11 12 3.49%
e-Commerce Markets 6 13 5 2 16 10 52 15.12%
Mobile Commerce 2 4 7 6 19 5.52%
Total 12 13 51 75 99 52 42 344 100.00%
The numbers of empirical and non-empirical papers were roughly equal. Surveys were the
most common research methodology, representing almost half of all empirical research.
Conceptual papers made up most of the non-empirical studies, and many papers dealt with
overviews or particular applications.
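The "% Total" column of Table IV can be reproduced directly from the row totals reported in the table, as the short check below shows:

```python
# Row totals from Table IV (344 articles overall)
topic_totals = {
    "Type of e-Commerce": 26,
    "Overview of e-Commerce": 72,
    "e-Commerce Issues": 38,
    "e-Commerce Applications": 64,
    "e-Commerce Economic": 20,
    "e-Commerce Model": 41,
    "e-Commerce CRM": 12,
    "e-Commerce Markets": 52,
    "Mobile Commerce": 19,
}

grand_total = sum(topic_totals.values())  # 344, matching the table's Total row
shares = {topic: round(100 * n / grand_total, 2)
          for topic, n in topic_totals.items()}
# e.g., shares["Overview of e-Commerce"] -> 20.93
```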
VI. CONCLUSION
At the close of the first decade of research in electronic commerce, this paper presents
a summary of what has been done. The field is still young, with many researchers at a large
number of universities involved in the area, and although there are clear leaders, none are
identified as being dominant forces. Results also show that although the number of articles
has declined from its peak in 2002, it can be expected to increase soon. Overview articles
are expected to continue declining, as e-Commerce is now well understood. Other emerging
topics within the field, however, such as mobile commerce, are expected to rise dramatically.
Finally, most research has been based upon surveys. While important, we believe a greater
focus on experiments, and especially field experiments, is needed to provide balance.
There are several limitations to this first analysis of the e-Commerce literature. For
example, some human error is bound to occur when counting author contributions and
classifying articles. Some articles may be only marginally relevant to a
topic, or might be relevant to multiple topics.
REFERENCES
Alavi, M., and Carlson, P. “A Review of MIS Research and Disciplinary Development.”
Journal of Management Information Systems, 8, 4, 1992, 45-62.
Alter, S. Information Systems: The Foundation of E-business. Prentice Hall, Upper Saddle
River, New Jersey, 2002.
Athey, S. and Plotnicki, J. “An Evaluation of Research Productivity in Academic IT.”
Communications of the Association for Information Systems, 3, 7, 2000, 1-20.
Bharati, P. and Tarasewich, P. “Global Perceptions of Journals Publishing E-commerce
Research.” Communications of the ACM, 45, 5, 2002, 21-26.
Claver, E., Gonzalez, R., and Llopis, J. “An Analysis of Research in Information Systems
(1981-1997).” Information &amp; Management, 37, 4, 2000, 181-195.
Culnan, M. “Mapping the Intellectual Structure of MIS, 1980-1985: A Co-citation Analysis.”
MIS Quarterly, 11, 3, 1987, 341-353.
Laudon, Kenneth C., and Carol G. Traver. E-Commerce. 2nd ed. Addison Wesley, 2004.
Pervan, G. “A Review of Research in Group Support Systems: Leaders, Approaches and
Directions.” Decision Support Systems, 23, 1998, 149-159.
Turban, E., King, D., Lee, J., and Viehland, D. Electronic Commerce 2004: A Managerial
Perspective. Pearson Prentice Hall, Upper Saddle River, New Jersey, 2004.
Wong, Z. “Group Support System Research in the 1990s: Leaders in Research Productivity.”
Journal of International Information Management, 9, 2, 2000, 53-61.
PROCESSES FOR THE CREATION OF PERFORMANCE SCRIPTS
ABSTRACT
This paper explains in detail and demonstrates the general processes of script creation
for training uses. While little research on script creation in training is available to
practitioners, the research that does exist gives credible support for script creation
applications. A body of research in cognition and cognitive processes treats script-creating
behavior tangentially; however, that body of research is highly technical and esoteric, and of
limited practical value to most university faculty and staff, trainers, facilitators, and HRD
specialists.
I. INTRODUCTION
In much of the performance-oriented training and education that takes place in universities,
businesses, and other organizations, we find considerable activity aimed at the creation,
refinement, and practice of scripts or other scenistic approaches for achieving goals and for
the continuous improvement of performance. The performance may be
individually or group-based. The creation and use of scripts is practiced frequently in
education aimed at professional work, for example, teacher-education, law, engineering,
management, social work, and nursing, although the effort is very seldom labeled script
creation.
To some, it might seem that script creation is a substitute term for behavior modeling
(see Bandura, 1977, 1986). However, the term script creation, as explained in this paper,
represents a process of activities and sequential learning that includes behavioral modeling as
one component or segment. Behavior modeling, as a feature of social learning theory,
stimulates observational and/or vicarious learning. As a form of training, it represents one
substantial part of script creation.
Many trainers and educators take the process of the creation of scripts or performance
guides for granted, and few attend to the critical dynamics of the process. Many of them do
not use the term, script, as the term may seem to convey ambiguous or, perhaps unflattering,
meanings. For example, a script suggests that we are training people to follow some pre-
determined rubric, outline, template, or job aid that prescribes performance. The purpose of
this work is to offer ideas and suggestions for HRD practitioners (trainers, facilitators) that
aim to promote optimal script-making processes that: (1) may have a very substantial
influence on individual learning; (2) contain ordered or sequenced intellectual tasks; and (3)
are at the core of several powerful educational and training designs. Some of those designs
are briefly reported in this work. As Leleu-Merviel, et al. (2002) assert, the use of script
approaches to guide the design of learning environments is becoming a more frequent
practice.
Knowing more about the process of script creation can help to ground instruction and
training design and can stimulate the facilitation of learning. Script creation, if facilitated
well, can also serve as a model for self-learning. That is, in the models expressed later in this
work, learners have a process available to them that they might use independently in other,
similar or related, situations to improve performance. As suggested by Ayas & Zeniuk
(2001), the project-based learning that script creation processes enable can assist in the
development of learning capabilities that extend into one's future. Additionally, social
cognition as expressed in the processes may enhance general social skills and accelerate peer
acceptance (Mostow, et al. 2002).
II. BACKGROUND
WHAT IS A SCRIPT?
According to Lord and Kernan (1987) scripts may serve a dual purpose: they help one
to interpret the behavior of others, and aid in generalizing behavior. Hence, they may guide
the planning and execution of familiar and/or repetitive tasks. In supervisory or managerial
work, for example, it is likely that relatively common scripts exist for a variety of activities
such as conducting informational meetings, coaching an employee, conducting performance
appraisal sessions, and the like.
The process explained here is a general process, one that may be adapted to a variety
of training and educational purposes. This process has been adapted to several, specific
training approaches and examples of these approaches are reviewed later in this work. The
script creation process consists of several steps or components that not only aid in script
creation but also in the refinement of scripts, and in continuous improvement of script-based
work.
1. A body of information, data, etc., is offered to the individual or group. This can take the
form of a scenario, a case situation, an issue or some known problem that is currently active.
Usually, this information is in written form. It may also be represented in video material or by
direct observation. This process is facilitated by one who has attained familiarity with the
script creation approach. Two examples of a body of information are: customer service
problems in a call center, and the effective leading of an ad hoc task force. For purposes of
this paper, we will correlate the steps of the script creation process with the issue of
effectively leading an ad hoc task force.
3. Discussion takes place in open forum. We seek to clarify what we know, what we do not
know and/or have questions about. This discussion has as one of its objectives the
commencement of recognition of information and performance gaps. At the least, we need to
attain consensus about the critical issues, assumptions and problems that the information
reveals to us. To assist in this work we use Brady's (1996, 2004) conceptual framework to
shape and crystallize our mental models of reality. That is, we want to address situational
issues using these five elements or screening tools:
Brady's framework helps to provide criteria for content selection and emphasis. It also
helps us to envision possible relationships among various aspects of reality. A fundamental
assumption is that participants already possess considerable knowledge and need to
reorganize it to make it more useful. Participants need to supplement knowledge with insights
and skills that will help explain more fully what they already know (Brady, 2004, p. 280).
Some consensus building takes place among participants.
4. Brainstorming potential interventions for treating the issues and problems takes place.
Once possible interventions are identified, tasks are parsed to the individual(s) or groups as
follows:
List - What is the specific behavior required to skillfully execute the application
of the intervention?
Access/Summarize - What research, authoritative information, etc., must be
reviewed and considered to successfully implement the intervention?
Reconcile - In order to put forth a tentative action plan for the intervention,
it is necessary to reconcile our behaviors list with the research
information in order to identify and recommend a more precise set of
behaviors.
In this aspect of the work we need to be very clear about the differences between
performance as behavior and performance as outcomes. Another way to state this is that
activities and tasks are important and need to be well defined and understood; and the results
or outcomes of the various activities must be defined, made clear and understood (Cardy,
2004). The foundation of the script takes shape.
5. Script Identity. At this point there is identification of the behavioral script elements
necessary to address the issues or problems heretofore examined. Consensual processes have
taken place to achieve this result. The result is temporary as validation processes still need to
occur. Per our example of the task force, we now have some reasonably clear notions of what
specific behaviors need to occur in order to commence effective leadership of the task force.
In essence, the first iteration of script creation is nearing completion.
6. Model the intervention. The modeling is a rehearsal of the behavioral script. Such rehearsal
requires the leadership of the facilitator. Participants actually act out (perform) for each other
the behaviors heretofore identified. In terms of experiential learning, this phase represents
active experimentation.
8. Repeat the modeling of the intervention. Rehearsal. The effectiveness criteria established
in the preceding step will inform this work. At this point sufficient preparation has been
achieved so as to try out or execute the script-as-intervention in our particular setting.
Lyons (2003, 2004) used script creation processes extensively in skill development
and performance improvement training and education. Script creation was housed within a
training design that applied skill charting activities. Skill charting is a tool that uses the
general script creation model explained in this work to help a group of employees or students
to focus very intensely on skilled, behavioral performances that attend a particular key result
area, such as customer satisfaction. In one study (Lyons, 2003), team leaders’ performance of
specific supervisory and leadership skills with team members improved through the use of
script creation processes in their training. In another study (Lyons, 2004), a senior
management team making use of script creation processes was able to alleviate a serious
employee turnover problem through the skillful creation of behavioral profiles of ideal work
associates.
Finally, in a recent study (Lyons, 2004) a training model was developed that made use
of hypothetical problem situations (cases, incidents) with script creation activities
superimposed on the case analysis work. The resulting approach was named Case-Based
Modeling and was used to improve the performance of team members in certain performance
areas such as skillfully managing meetings. The approach has broad applicability for training
in general supervision, management, and for higher education in business and management.
With adaptation, the approach could be used in many different situations and with many
different occupational groups.
V. CONCLUDING REMARKS
This examination of the script-creation process shows that: (1) the sequence of
events and the general dynamics of the process can be defined and, ultimately,
assessed; (2) the process, in general, provides guidance to HRD professionals in the use of a
variety of scenistic tools; (3) script-creation processes are grounded in meaningful theories
and concepts of adult learning and motivation; and (4) existing empirical research generally
supports the efficacy of script-creation processes as a stimulus for learning and change. All of
this has positive implications for HRD practice, particularly practice that involves training
and/or facilitation.
The few studies reported above give evidence that script creation processes are
effective tools for training and education. Because these methods heavily involve learners in
the construction and continual evaluation of information, and in applying that information to
their repertoire of possible responses to issues and problems, they are somewhat
labor-intensive. The general process offered in this paper responds very well to the
motivational needs of the adult learner.
There are additional outcomes that learners may experience. For example, it is not
unusual for participants to: experience growth in interpersonal skills; learn to rely on and
better appreciate the special skills and talents other team members bring to the tasks; gain a
greater understanding of trust and cooperation; and recognize they have much control and
autonomy with regard to their learning. Another such consequence may be the establishment
of reflection as an organizing process in group work (Vince, 2002). Clearly, reflection is a
fundamental feature in many steps of script-creation processes and the activity may be self-
reinforcing.
REFERENCES
Allen, S. J., and Blackston, A.R. "Training Pre-service Teachers in Collaborative Problem
Solving: An Investigation of the Impact on Teacher and Student Behavior Change in
Real-world Settings." School Psychology Quarterly, 18, (1), 2003, 22-51.
Ayas, K., and Zeniuk, N. "Project-based Learning: Building Communities of Reflective
Practitioners." Management Learning, 32, (1), 2001, 61-76.
Bandura, A. Social Learning Theory. Englewood Cliffs, N.J.: Prentice-Hall, 1977.
Bandura, A. Social Foundations of Thought and Action. Englewood Cliffs, N.J.:
Prentice-Hall, 1986.
Bandura, A. Self-Efficacy: The Exercise of Control. New York: W.H. Freeman &amp; Co., 1997.
Bickford, D.J., and Van Vleck, J. "Reflections on Artful Teaching." Journal of Management
Education, 24, (4), 1997, 448-472.
Brady, M. "Education for Life as It is Lived." Educational Forum, 60, (3), 1996, 249-255.
Brady, M. "Thinking Big: A Conceptual Framework for the Study of Everything." Phi Delta
Kappan, 86, (4), 2004, 276-281.
Bryman, A., Stephens, M., and Campo, C. "The Importance of Context: Qualitative Research
and the Study of Leadership." Leadership Quarterly, 7, 1996, 353-370.
Cardy, R.L. Performance Management: Concepts, Skills and Exercises. Armonk, N.Y.: M.E.
Sharpe, 2004.
Carver, R. "Theory for Practice: A Framework for Thinking About Experiential Education."
Journal of Experiential Education, 19, (1), 1996, 8-13.
* For the complete paper and references, please contact the author.
TEACHING OVERSEAS USING A COMPRESSED COURSE DELIVERY MODULE
ABSTRACT
The purpose of this paper is to present an alternative course delivery module that can
be used by universities to deliver courses to non-traditional students. Specifically, the paper
focuses on degree programs that are offered using a compressed delivery format. Qualitative
data from a major U.S. institution’s overseas educational pursuits is used to put forth the case.
This paper contributes to the academic literature by providing institutions of higher education
with information that would be useful when developing and/or reviewing course delivery
techniques at their respective universities.
I. INTRODUCTION
Troy University is a state institution of higher education with a main campus in Troy,
Alabama. However, the university’s programs outside Alabama have historically been much
larger than its in-state operations. Like other institutions such as the University of
Maryland (Rubin, 1997), Troy University offers educational opportunities to the nation’s
armed forces through programs in Europe and the Far East. The university’s outreach
programs fall under the jurisdiction of University College (UC), the department that oversees
programs delivered by Troy University outside the state of Alabama.
University College is large, serving over 20,000 students in more than 10
countries. University College consists of three domestic regions and has extensive
international operations as well. The international division consists of educational programs
delivered in Germany and throughout the Pacific Rim including the U.S. overseas territory of
Guam. University College also has a military contract with Pacific Air Forces Command
(PACAF) to provide graduate degree programs on military bases.
In the current dynamic environment universities have many course delivery options
(Kennen and Lopez, 2005; Yang, 2003; Parish, 1993). The purpose of this paper is to discuss
two alternative course delivery formats in the Pacific Region and to contrast them with the
traditional 15-week semester delivery system used at many academic institutions. In
particular the authors focus on the fact that the courses are offered in a compressed format.
The authors put forth their case with respect to two programs that existed during the 1998 to
2003 contract period: the Master of Science in Management (MSM) and the Master of Public
Administration (MPA). This paper contributes to the academic literature by identifying
course delivery modules and by discussing the advantages and disadvantages of teaching
graduate students using a compressed course schedule.
II. MASTER OF SCIENCE IN MANAGEMENT (MSM)
Prior to August 2003, Troy University offered the MSM program at military
installations in Japan (including Okinawa), South Korea, and Guam. The program required
students to successfully complete 10 courses covering a variety of business disciplines. The
program also required students to pass a comprehensive examination involving completion of
a corporate strategic audit in six hours.
Courses were offered in both a weekday and a weekend format during a term that
lasted eight weeks. Weekday courses met either on Mondays and Wednesdays or on
Tuesdays and Thursdays, from 6:00 to 9:00 p.m. Weekend courses were offered using a
format of three alternating weekends, with classes meeting on Saturdays and Sundays from
9:00 a.m. to 4:30 p.m. Regardless
of the format, the scheduled class meetings had to total 45 contact hours as stipulated in the
contractual relationship. The university ran six terms during a given academic year. Two
courses were offered each term, one in a weekday and one in a weekend format.
There are certain unique expectations required of both students and professors when
courses are delivered in this format. It is the instructor’s responsibility to prepare a detailed
course syllabus and make it available to the students at least two to three weeks before the start of
the term. The course syllabus needs to be more detailed than in a traditional format since
students need adequate time to prepare for the first class meeting. Unlike in a traditional
delivery module, students should not be “green” on the first meeting day of the course. The
PACAF contract stipulated that students write a 15- to 20-page term paper in each
course. This paper needs to be written over the course of the eight-week term. Students are
generally expected to come to the first class meeting ready to discuss the paper. Hence, the
syllabus should provide detailed guidelines regarding the paper.
The instructor needs to be available to the students ahead of the start of the term.
Students may contact the instructor with questions regarding the term paper or the assigned
reading material. The syllabus should provide all relevant contact information for the
instructor with a statement concerning the preferred method of communication. If the
instructor has travel plans prior to the start of the term, checking electronic mail and
responding promptly to student inquiries is imperative.
The students need to mentally prepare themselves for a course that is delivered in this
format. The students need to realize that such courses involve heavy and often intense
workloads within a relatively short period of time. Preparing for examinations is often a
challenge. For example, coverage of material may end on Monday and the exam may be on
Wednesday. In a traditional MSM program, classes typically only meet once a week thereby
giving students a whole week, with a weekend in the interim period, to prepare for
examinations. Students also need to make a commitment to read the assigned chapters and
supplemental material for the day prior to the start of class. This enhances the learning
process, regardless of the delivery approach, and in a compressed course format it is crucial
for student success. In such a format, students who fail to read ahead may find themselves
unable to keep up with the rest of the class.
Given this format, one might wonder about the impact of the format on the learning
and retention process. Course surveys conducted at the end of each course as well as exit
surveys conducted upon successful completion of the degree program addressed this issue.
Although the results of the surveys were mixed, by and large students did not feel that the
format compromised the value of their degree.
Each course within a term is six weeks in length, with a staggered start. The first course starts
at the beginning of the term, while the second course starts approximately two weeks into the
term. The student contact time, a total of 45 hours, is compressed into a nine-day period that
includes two weekends. A typical schedule for the contact time runs from 8:00 a.m. to 5:00
p.m. on Saturday and Sunday of the first weekend. Students have Monday off and then have
classes on Tuesday, Wednesday, and Thursday evenings from 6:00 to 9:00 p.m. Friday is an
off day followed by an 8:00 a.m. to 5:00 p.m. schedule on Saturday and Sunday of the second
weekend.
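The 45-hour total can be checked against the schedule; a minimal arithmetic sketch, assuming each weekend day is worked for the full nine-hour span and each evening for three hours:

```python
# Contact-hour tally for the nine-day schedule described above,
# assuming each weekend day spans 8:00 a.m.-5:00 p.m. (9 hours) and
# each weekday evening spans 6:00-9:00 p.m. (3 hours).
weekend_sessions = 4   # Saturday and Sunday of both weekends
evening_sessions = 3   # Tuesday, Wednesday, Thursday evenings

total_hours = weekend_sessions * 9 + evening_sessions * 3
print(total_hours)  # 45
```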
From a student perspective the format provides time on the front end to prepare for
the contact time, with the contact time occurring roughly in the middle of each course,
followed by time at the end to wrap up course requirements. During the preparation phase,
students are responsible for assigned readings and meeting other course requirements. The
wrap up period normally includes finalizing a research paper or taking a final examination.
The schedule provides the opportunity for students with demanding travel requirements to
compress their contact time into a nine-day period. This is invaluable for military members
with demanding training and operational requirements. It enables students to start and
complete courses that they might not otherwise be able to accomplish.
In terms of pedagogical approach, the compressed schedule might be described as a
mixed model. It allows for the full amount of contact time (45 hours) of a conventional
course while providing flexibility to the student in the preparation and wrap up phases (both
may be done in more of a distance learning format). From an instructor perspective the
format requires finding meaningful ways of helping students prepare for the contact time,
keeping students focused during the contact time, and being available to support student
needs during the wrap up phase. During the preparation phase the key concern is building
adequate incentives into the course so students actively prepare for the compressed contact
time period. The key during the nine-day contact time period is to keep students actively
engaged in the course material. Finally, during the wrap up period support must be provided
to help students complete research papers and other assignments, including exams from the
contact period.
The format is more conducive to some courses than others. Courses with independent,
or at least moderately independent, modules work best because the modules can be treated as
discrete units. Organization theory, public policy, and human resource management are
examples of compatible courses; their elements build only moderately upon one another, so a
student who misses a module can return to class with a relatively seamless transition. On the
other hand, courses whose modules have a strong dependency relationship are less conducive
to the format. Examples include research methods in public administration and economics for
public managers, where modules build directly upon one another and a missed module creates
a more difficult learning situation upon return to class.
Variety in delivery of course material is essential. Video cases, written cases, class exercises,
computer labs, and student presentations add variety to the learning experience.
An advantage of a 15-week schedule is that it offers time for adjustment and smooth
transitions from one learning module to another. With a compressed schedule there is little
time for adjustment once the contact time starts. The key is to prepare wisely: once the
contact time begins, the instructor must be fully prepared to deliver quality instruction in a
seamless manner. The corollary is that some adjustment is nonetheless inevitable; the
best-developed plans must often be changed, and the challenge is to make constructive and
expeditious adjustments in the little time available. This is particularly critical for the
weekend sessions. A
particular adjustment problem is the need to watch transitions. Some courses require more
emphasis on transition from one block of instruction to another block of instruction. If
students don’t successfully make the transition they will have difficulty with subsequent
blocks of instruction. Practical exercises are a good device for determining if the class, as a
whole, is ready to move on.
The compressed schedule has a key advantage over a 15-week schedule in that it
offers flexibility to the student. Face to face contact time with a faculty member is
compressed into a limited number of days; in the case of the MPA program, a nine-day
period. This provides flexibility to the student in meeting course requirements and the
demands of the workplace. It is especially critical for clientele groups with significant travel
and training demands. The compressed schedule offers the advantage of flexibility, as with
distance learning, but it also offers the full amount of faculty contact associated with a
traditional 15-week course. In a sense it embodies a key attribute of both delivery modes.
V. CONCLUSIONS
While the compressed schedule poses some unique pedagogical challenges, it
provides many advantages to the adult learner. For institutions with an interest in offering
outreach programs, it provides an alternative both to Internet-based distance education
(Krishnamoorthy, 2005), which emphasizes the value of student flexibility, and to traditional
approaches, which emphasize the value of face to face contact time. The compressed schedule
represents a mixed approach which maximizes face to face student/instructor contact time
while allowing a significant amount of flexibility to the student. The compressed schedule is
especially suited to the adult learner who values interaction with a faculty member but needs
schedule flexibility to meet personal and professional demands.
REFERENCES
Miller, M. and M-Y. Lu, "Serving Non-Traditional Students in E-Learning Environments:
Building Successful Communities in the Virtual Campus", Educational Media
International, March-June 2003, v40, i1, p163(7).
Parish-Cirasa, Anne, "Shaping Graduate Education's Future: Implications of
Demographic Shifts for the Twenty-First Century Demographic Trends and
Innovations in Graduate Education", paper presented at the Annual Conference of
the Canadian Society for the Study of Higher Education (Ottawa, Ontario,
Canada, June 10-12, 1993).
Rose, Mike, "Non-traditional students often excel in college and can offer many benefits
to their institutions", Chronicle of Higher Education, Oct. 11, 1989, v36, n6, pB1(2).
Rubin, Amy Magaro, "All around the world, U. of Maryland offers classes to U.S. military
personnel", Chronicle of Higher Education, March 21, 1997, v43, n28, pA55(2).
Trinkle, Dennis, "Distance education: a means to an end, no more, no less", Chronicle of
Higher Education, August 6, 1999, v45, i48, pA60(1).
Yang, Kaifeng and Marc Holzer, "Web-Based Distance Education in Public
Administration Programs", PA Times Education Supplement, October 2003.
CHAPTER 15
INTERNATIONAL BUSINESS
AND
MARKETING
ANTECEDENTS OF EGYPTIAN CONSUMERS’ GREEN PURCHASE BEHAVIOR:
A HIERARCHICAL MODEL
ABSTRACT
This study investigates the influence of various demographic and psychographic factors
on the green purchase behavior of Egyptian consumers. A survey was developed and
administered across Egypt to a large sample of 1,093 consumers. The findings from
the hierarchical multiple regression model confirm the influence of the consumers’ ecological
knowledge, concern, attitudes, altruism, and perceived effectiveness, among other socio-
demographic factors, on their intention to purchase green products. Results show that
skepticism towards environmental claims is negatively related to consumers’ intention to buy
green products. The study also discusses how the present findings may help policy makers
and marketers alike to fine-tune their environmental and marketing programs.
I. INTRODUCTION
Compared with what has been happening in the West, consumers in Egypt, as well as
in the wider context of the Arab world, are just at the stage of green awakening. This may
explain the fact that little is understood about consumers’ intentions to purchase
environmentally friendly products in this part of the world. Indeed, researchers agree that
very little research has been done concerning cross-cultural studies on environmental
attitudes or behavior of different ethnic, cultural, or religious groups (Klineberg, 1998;
Schultz and Zelezny, 1999). To remedy this void in the literature, this study attempts to look
at the influence of various demographic and psychographic factors on the green purchase
behavior of Egyptian consumers.
II. RESEARCH OBJECTIVE
The objective of this research is to identify those factors that influence the intention to
buy environmentally responsible products among Egyptian consumers. Determining the
factors that affect green purchase decisions is important for theoretical, policy, and
methodological reasons. Research on eco-orientation is important from a theoretical
standpoint because, even though environmental concerns are part of corporate social
responsibility and ethics frameworks, researchers have largely ignored eco-specific topics
related to consumer behavior, values, and culture. From a public policy standpoint, it is
important to know what motivates consumers to buy environmentally friendly products if a
pro-environmental change policy is to be successfully implemented. Finally, from a
methodological measurement standpoint, this research seeks to extend our knowledge about
environmentally friendly behaviors to the Arab world, where virtually no research has been
conducted in the realm of eco-orientation.
Drawing on research from North America, Australia, Asia, and Europe, the following
hypotheses are developed.
H1: Women are more likely to express their willingness to purchase green products compared
to men.
H2: Green purchase intention is negatively related to age, with older persons less likely to
purchase green products.
H3: Green purchase intention is positively related to educational level, with highly educated
persons more likely to purchase green products.
H4: Environmental knowledge is positively related to consumers’ intention to purchase green
products.
H5: Environmental concern is positively related to consumers’ intention to purchase green
products.
H6: Environmental attitudes are positively related to consumers’ intention to purchase green
products.
H7: Perceived consumer effectiveness is positively related to consumers’ intention to
purchase green products.
H8: Altruism is positively related to consumers’ intention to purchase green products.
H9: Skepticism towards environmental claims is negatively related to consumers’ intention to
purchase green products.
IV. METHODOLOGY
measured by way of multiple measures, as the general trend in measuring environmental
issues is to use several items instead of single-item questions (Gill et al., 1986). Each item
contains an explicit key expression representing the specific construct. Positive and negative
formulations of the items were presented to guarantee the content balance of the study.
All items are based on scales that have been previously validated.
V. RESULTS
Hierarchical regression analysis was used to test the research hypotheses. Table I
shows a summary of the results. As seen in Table I, when the three demographic variables
were entered into the regression equation in the first step, the coefficient of determination
(R²) was 0.112, indicating that 11.2 per cent of the variance in green purchase intention is
explained by these demographic variables. This result confirms
Balderjahn’s (1988) study, which found that demographic and socio-economic variables such
as education, income, and family size are only of limited value in explaining different degrees
of environmental attitudes. In a similar vein, Olli et al. (2001) found that socio-demographic
correlates explain only 10 % of environmental acts.
In the third step, perceived consumer effectiveness, altruism, and skepticism towards
environmental claims scales were entered. The decision to enter these three independent
variables is based on Ajzen and Fishbein’s (1980) theory that specific attitudes are better than
general attitudes as predictors of related behavior. When the three scales were entered, R²
increased from 45.8 per cent to 66.2 per cent, a change of 20.4 per cent, which is
significant (p < 0.001).
In the fourth and final step, the green purchase attitudes scale was entered in the
equation in order to gauge its impact as an independent predictor. In the final regression
equation (model 4), R² increased from 66.2 per cent to 76.4 per cent, a change of 10.2 per
cent, which is significant (p < 0.001). Thus, the final model
explains 76.4 per cent of the variation in consumers’ intention to purchase green products.
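The stepwise R² comparisons above follow the standard F-test for an R² increment between nested regression models. A minimal sketch with synthetic data (the predictors, coefficients, and generated values below are illustrative assumptions, not the study's data):

```python
import numpy as np
from scipy import stats

def r_squared(X, y):
    """R² from an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) ** 2).sum()

def r2_change_test(X_reduced, X_full, y):
    """F-test for the R² increment when predictors are added to a nested model."""
    n, k_r, k_f = len(y), X_reduced.shape[1], X_full.shape[1]
    r2_r, r2_f = r_squared(X_reduced, y), r_squared(X_full, y)
    f = ((r2_f - r2_r) / (k_f - k_r)) / ((1 - r2_f) / (n - k_f - 1))
    return r2_f - r2_r, f, stats.f.sf(f, k_f - k_r, n - k_f - 1)

# Synthetic illustration: demographics alone explain little, and adding
# three attitudinal scales raises R² by a significant increment.
rng = np.random.default_rng(0)
n = 1093                          # same sample size as the study
demog = rng.normal(size=(n, 3))   # step 1: age, sex, education
scales = rng.normal(size=(n, 3))  # later step: effectiveness, altruism, skepticism
y = 0.3 * demog[:, 0] + scales @ np.array([0.8, 0.5, -0.5]) + rng.normal(size=n)

delta, f, p = r2_change_test(demog, np.column_stack([demog, scales]), y)
print(f"R² change = {delta:.3f}, F = {f:.1f}, p = {p:.3g}")
```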
Table I. Hierarchical Regression Results

MODEL 1        Beta     t         Sig.    R²      R² change   F change   Sig. F change
(constant)              16.807    .000    .112    .112        45.780     .000
age            .118     1.819     .069
sex            -.758    -10.269   .000
education      -.660    -7.201    .000
altruism
skepticism
attitudes
From the final regression model, it can be observed that the standardized coefficient for
gender is negative (β = -0.208) and significant at the 0.001 level. Because gender was coded
as a dichotomous variable, 0 (female) and 1 (male), this coefficient implies that females
express greater intention to purchase green products. This result provides
support for the first hypothesis. The standardized coefficient for age is negative (β = - 0.621)
and significant at the 0.001 level. This result provides support for the second hypothesis.
Surprisingly, education level was not related to higher intention to buy green products (p =
0.998). This result fails to support the third hypothesis. We found perceived environmental
knowledge to be positively and significantly (at the 0.01 level) related to ecologically
favorable attitudes and behaviors (β = 0.060). This result supports the fourth hypothesis. The
standardized coefficient for environmental concern is positive (β = 0.345) and significant at
the 0.001 level, which supports the fifth hypothesis. We found environmental attitudes to be
positively and significantly (at the 0.01 level) related to green purchase intention (β = 0.640).
This result supports the sixth hypothesis. The standardized coefficient for perceived
consumer effectiveness is positive (β = 0.202) and significant at the 0.001 level. The strong
positive relationship found in this study between perceived consumer effectiveness and green
purchase intention provides strong support for the seventh hypothesis. The standardized
coefficient for altruism is positive (β = 0.135) and significant at the 0.001 level. This finding
supports the eighth hypothesis. The standardized coefficient for skepticism towards
environmental claims is negative (β = - 0.136) and significant at the 0.001 level. This result
strongly supports the ninth hypothesis.
REFERENCES
Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting behavior.
Englewood Cliffs, NJ: Prentice Hall.
Balderjahn, I. (1988). Personality variables and environmental attitudes as predictors of
ecologically responsible consumption patterns. Journal of Business Research, 17, 51-
56.
Banerjee, S., Gulas, C., & Iyer, E. (1995). Shades of green: A multidimensional analysis of
environmental advertising. Journal of Advertising, 24(2), 21-31.
Bhat, V. (1993). Green marketing begins with green design. Journal of Business & Industrial
Marketing, 8, 26-31.
Kapelianis, D., & Strachan, S. (1996). The price premium of an environmentally friendly
product. South African Journal of Business Management, 27(4), 89-95.
Klineberg, S. (1998). Environmental attitudes among Anglos, Blacks and Hispanics in Texas:
Has the concern gap disappeared? Race, Gender, and Class, 6, 70-82.
Mainieri, T., Barnett, E., Valdero, T., Unipan, J., & Oskamp, S. (1997). Green buying: The
influence of environmental concern on consumer behavior. Journal of Social
Psychology, 137, 189-204.
McDaniel, S., & Rylander, D. (1993). Strategic green marketing. Journal of Consumer
Marketing, 10(3), 4-10.
McEachern, M., & McClean, P. (2002). Organic purchasing motivations and attitudes: Are
they ethical? International Journal of Consumer Studies, 26, 85-92.
Menon, A., & Menon, A. (1997). Enviropreneurial marketing strategy: The emergence of
corporate environmentalism as market strategy. Journal of Marketing, 61, 51-67.
Olli, E., Grendstad, G., & Wollebaek, D. (2001). Correlates of environmental behaviors.
Environment and Behavior, 33, 181-208.
Polonsky, M., Carlson, L., Grove, S., & Kangun, N. (1997). International environmental
marketing claims: Real changes or simple posturing? International Marketing Review,
14, 218-232.
Schultz, P., & Zelezny, L. (1999). Values as predictors of environmental attitudes: Evidence
for consistency across 14 countries. Journal of Environmental Psychology, 19, 255-
265.
CULTURE-DRIVEN CONSUMER MARKET BOUNDARIES: AN APPROACH TO
INTERNATIONAL PRODUCT STRATEGY
ABSTRACT
As the global markets become intensely competitive, the need to redefine and
redesign market boundaries becomes critical. A frequently overlooked factor in defining and
redefining market boundaries is the dynamics of cultural values. Culture has long been
relegated as a contextual factor in market determination and not considered an active
determinant of market boundaries. This paper examines culture’s role as an active
determinant of market boundaries and segments and proposes culture–driven framework. It
offers an international product strategy that permits appropriate product classification and
positioning within these boundaries.
I. INTRODUCTION
As the global markets for goods and services become intensely competitive, the need
to examine and redefine market boundaries becomes critical for survival and growth.
Companies seek to break out of existing overcrowded segments or to create innovative
new market spaces to gain competitive advantage. A major influence that triggers
redefinition is trade competition (Day and Shocker, 1976). Cvar (1986) identifies periodic
redefinition of market boundaries and segments as a characteristic of successful competitors
in the global markets.
Since cultural values heavily influence whether consumers accept or reject products
or services, a culture-driven market classification will enable businesses to assess their
market opportunities from a unique and new perspective. Looking at the market through the
prism of cultural values and redrawing market boundaries using cultural parameters offers
marketers a potent new tool to address global consumer needs, design new products and
develop responsive product strategies.
We define a cultural market boundary as the potential size and extent of the market
that can be determined by the commonly shared cultural values of a group of people. These
values uniquely differentiate the group from others who do not share the same cultural value
patterns, and they primarily guide buying habits, motives, and other market-related
decision-making behavior. Four types of market boundaries can be conceptualized on the
basis of culture, and their characteristics identified and differentiated for designing consumer
product strategy: the Culturally Assimilative Market, the Culturally Exclusive/Blocked
Market, the Culturally Peripheral Market, and the Neo-Cultural Market.
Neo-Cultural Market Boundary
Culture is a dynamic phenomenon that evolves over time. Culture shifts can occur
in response to internal and external influences that affect social norms, values, and attitudes,
changing consumer behavior patterns, preferences, tastes, and needs. Raval and
Subramanian (2000) describe the nature of cultural shift:
“Culture shift may be subtle and or act as an undercurrent that may in longer run
erode, hamper, or completely wipe out the demand for goods and services. Culture shift is
frequently subterranean and invisible in nature and may elude detection by normal
environmental scanning techniques. It takes a long time for the subterranean changes to
become manifest and be openly visible”.
Swedish anthropologist Ulf Hannerz (1992) has put forward the concept of cultural
complexity. He has demonstrated how most contemporary societies are stratified
into “varied zones of meaning” in which multiple cultural communities are not only located
very near each other but, above all, penetrate each other. When this occurs, it is critical
for marketers to examine what market boundaries these changes create. Neo-cultural
market boundaries are created by the amalgam of values and norms that emerges when
multiple cultures and/or subcultures interact and influence each other, creating new value
patterns and lifestyles. Culture shift and cultural complexity thus become powerful drivers of
the emergence of neo-cultural market boundaries. The direction of this market depends on
whether the pendulum of value change swings toward convergence with the fundamental
and/or core values of the society or toward divergence from them, toward modernity or an
uncharted future course, as predicted by Hannerz. Culture shift occurs when prevailing
values tilt toward core or fundamental values. The concepts of culture shift and cultural
complexity offer marketers opportunities to redraw or develop new market boundaries.
Firms in the global market have to examine the nature and type of their products
to decide which market they fit in and how to position them in that market category.
Raval and Subramanian (2000) have provided a culture-driven framework for product
classification and positioning that is relevant to culture-driven market boundaries. They
classify culture-based products into culturally congruent, culturally blocked, culturally
obligatory, culturally peripheral, and culturally undesirable products. The framework
identifies cultural product attributes and relates them to appropriate positioning strategies.
same strategy as for mainstream products, except with additional emphasis on its ethnic
identity.
Neo-Cultural Market
Products conducive to the neo-cultural market fall into two categories: those that
respond to culture shift in markets and those that respond to cultural complexity. Culture can
shift toward a conservative mode or toward modernity; a conservative mode calls for the
consumption of conservative products. These trends are often exhibited in the clothing and
fashion markets, where demand fluctuates according to the values prevailing at a particular
time. Another useful product category comprises products that respond to the multiple values
created by cultural complexity; such products attempt to gain favor by representing multiple
value combinations. Product positioning in this market segment depends on the emergence of
multiple and complex values that may sometimes be unpredictable. Strategic market planning
may anticipate the emergence of such values and create a new marketing mix designed to
attract the emerging segments.
Impact of Culture on Elasticity of Demand
Economists have related the concept of elasticity mainly to price, income, and degree
of substitutability; little work has been done on culture as a variable for establishing
elasticity. Culture is critical for determining the degree of elasticity of products in a given
cultural market boundary, as shown in the table below.
Products suitable for the assimilative market may have variable degrees of elasticity
because they compete with mainstream products, and consumers are sensitive to variations
in their prices: at lower prices people may buy more, while at higher prices they have more
opportunity to substitute mainstream products. Products sold in culturally exclusive markets
have highly inelastic demand, since they are culturally imperative and there are few suppliers
in the market. Culturally blocked products, on the other hand, have totally inelastic demand,
since demand in that niche market is zero because the products are excluded. Culturally
peripheral products may have elastic demand depending on the marketing effort: demand is
elastic for the ethnic segments of the market and relatively inelastic for the mainstream
segments. The elasticity of demand for products sold in the neo-cultural market may vary
from totally elastic to totally inelastic, depending on their degree of popularity among
populations that grow up with multiple values and varying zones of meaning.
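The elasticity comparisons above can be illustrated with the textbook midpoint (arc) formula; the prices and quantities below are hypothetical, chosen only to contrast a culturally imperative product with an assimilative-market one:

```python
def arc_elasticity(q1, q2, p1, p2):
    """Midpoint (arc) price elasticity of demand."""
    pct_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_p = (p2 - p1) / ((p1 + p2) / 2)
    return pct_q / pct_p

def classify(e):
    """Label demand by the magnitude of its elasticity."""
    mag = abs(e)
    return "elastic" if mag > 1 else "inelastic" if mag < 1 else "unit elastic"

# Hypothetical figures: a culturally imperative product barely loses
# buyers when its price rises, while an assimilative-market product
# loses many buyers to mainstream substitutes.
exclusive = arc_elasticity(q1=100, q2=98, p1=10, p2=12)
assimilative = arc_elasticity(q1=100, q2=60, p1=10, p2=12)
print(classify(exclusive), classify(assimilative))  # inelastic elastic
```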
V. CONCLUSION
REFERENCES
Kluckhohn, F. and Strodtbeck, F. L., Variations in Value Orientations. Evanston:
Peterson, 1961.
Raval, Dinker and Subramanian, Bala, "Culture Shift Risk Analysis for Multinationals",
Journal of Global Competitiveness, 8(1), 2000, 382.
Raval, Dinker and Subramanian, Bala, "Culture Based Product Classification in Global
Marketing for Competitive Advantage", Journal of Global Competitiveness, 9(1), 2001,
419-428.
DOMAIN KNOWLEDGE SPECIFICITY AND JOINT NEW PRODUCT
DEVELOPMENT: MEDIATING EFFECT OF RELATIONAL CAPITAL
ABSTRACT
This study examines the relationships among a supplier's domain knowledge specificity,
relational capital, and joint new product development from the transaction cost analysis and
social exchange perspectives. Data collected from Taiwanese information hardware
manufacturers were used to conduct an empirical test. The results show that a supplier's
domain knowledge specificity positively influences relational capital and joint new product
development, and that relational capital also positively influences joint new product
development. In addition, relational capital partially mediates the relationship between a
supplier's domain knowledge specificity and joint new product development.
I. INTRODUCTION
are applied to understanding patterns and rules particular to a specific context. Expertise
deployment leads to increasingly effective issue diagnosis and problem solving based on
greater levels of familiarity and understanding of the nuances of a particular exchange. While
such domain-specific knowledge is very valuable in the context of a particular relationship,
investments made in creating the knowledge have less value in other relationships. Thus, the
first hypothesis is proposed:
Hypothesis 1: DKS is positively related to JNPD.
As firms work with each other trust is built among individual members of the
contracting firms because of the close personal ties that develop between them (Macaulay,
1963). Relational capital creates a mutual confidence that no party to an exchange will
exploit others’ vulnerabilities even if there is an opportunity to do so (Sabel, 1993). When
organizational resources are applied to understanding patterns and rules particular to a
specific context, this domain knowledge contributes to trust and closeness between
individuals of the firms, based on greater levels of familiarity with and understanding of the
nuances of a particular exchange. Therefore, this study hypothesized that there is a positive
relationship between domain knowledge specificity and relational capital.
Hypothesis 2: The level of relational capital is positively associated with the level of
supplier's domain knowledge specificity.
New product development (NPD) tasks involve intrinsic uncertainty about the
relationship between inputs and outputs and usually carry a high risk of failure (Cooper,
1997). NPD processes are plagued by many unforeseen disruptions and delays. If partners
trust each other, constructive dialogue and cooperative problem solving allow difficulties to
be worked out. A supplier's involvement in new product development depends on trust in the
supplier, and the supplier's trust in the customer is positively associated with the degree of
joint new product development with that customer (Walter, 2003). Consequently, this study
hypothesized that:
Hypothesis 3: The level of joint new product development is positively associated with the
level of relational capital.
Relational capital, as defined, rests upon close interaction at the personal level
between alliance partners. When an organization applies resources to understanding patterns
and rules particular to an exchange context, it can learn more about its partner. This
customization of knowledge leads to increasingly effective issue diagnosis and problem
solving, based on greater levels of familiarity with and understanding of the nuances of a
particular exchange, and in turn develops trust. Trust reduces fears of exploitation,
minimizes feelings of vulnerability (Boon & Holmes, 1991), and facilitates joint new product
development. Therefore, this study hypothesizes that the relationship between domain
knowledge specificity and joint new product development is partially mediated by relational
capital.
Hypothesis 4: The relationship between domain knowledge specificity and joint new product
development is partially mediated by relational capital.
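The partial-mediation logic behind Hypothesis 4 is conventionally checked by comparing three regressions: a total-effect model, a mediator model, and a model with both predictor and mediator. A minimal sketch on synthetic data; the path coefficients below are illustrative assumptions, not the study's estimates:

```python
import numpy as np

def std_betas(X, y):
    """Standardized OLS coefficients (intercept dropped)."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    X1 = np.column_stack([np.ones(len(ys)), Xs])
    return np.linalg.lstsq(X1, ys, rcond=None)[0][1:]

# Synthetic data in the spirit of the paper's three models:
# SKS -> RC -> JNPD, plus a direct SKS -> JNPD path.
rng = np.random.default_rng(1)
n = 144                                   # same sample size as the study
sks = rng.normal(size=n)                  # supplier's domain knowledge specificity
rc = 0.5 * sks + rng.normal(size=n)       # Model 2: SKS -> RC
jnpd = 0.4 * sks + 0.5 * rc + rng.normal(size=n)  # direct + mediated paths

total = std_betas(sks[:, None], jnpd)[0]                 # Model 1: SKS -> JNPD
a = std_betas(sks[:, None], rc)[0]                       # Model 2: SKS -> RC
direct, b = std_betas(np.column_stack([sks, rc]), jnpd)  # Model 3: both predictors

# Partial mediation: SKS keeps a positive direct effect, but it is
# smaller than its total effect once RC enters the equation.
print(total > direct > 0 and a > 0 and b > 0)
```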
III. METHOD
A. Measurement
Measures of the variables were first developed based on previous research, and each
indicator was measured on a 7-point scale from “strongly agree” to “strongly disagree”.
Supplier’s Domain Knowledge Specificity (SKS): Domain knowledge specificity is defined
as the degree to which critical areas of knowledge of a supplier firm are specific to the
requirements of a buyer (Subramani & Venkatraman, 2003). It is measured in terms of five
items that reflect the level of specialized intangible investment in developing an
understanding of the buyer's requirements and the distinct context of interaction. Items were
drawn from Nooteboom, Berger & Noorderhaven (1997) and Subramani & Venkatraman
(2003) to construct this scale. The Cronbach's α measure of reliability for this construct is
0.8525.
Relational Capital (RC): Relational capital is defined as mutual trust, respect, and
friendship that reside at the individual level between alliance partners. It is measured in terms
of 5 indicators that reflect the level of mutual trust, respect and friendship between partners.
Items were drawn from Kale, Singh and Perlmutter (2000) to construct this scale. The
Cronbach's α measure of reliability for this construct is 0.893.
Joint New Product Development (JNPD): Joint new product development is defined as the
extent to which supplier and buyer have jointly developed the product (Stump, Athaide &
Joshi, 2002). It is measured in terms of 5 indicators that reflect the level of new product co-
development relationship between supplier and the focal buyer. Items were drawn from
Stump, Athaide & Joshi (2002) to construct this scale. The Cronbach’s α measure of
reliability for this construct is 0.9069.
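The reliability figures reported above are Cronbach's α values. A minimal sketch of the computation on hypothetical 5-item responses (the simulated data are assumptions, not the study's responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_vars / total_var)

# Hypothetical 5-item scale answered by 200 respondents; items that
# track a common latent factor closely yield a high alpha.
rng = np.random.default_rng(2)
latent = rng.normal(size=(200, 1))
responses = latent + 0.4 * rng.normal(size=(200, 5))
alpha = cronbach_alpha(responses)
print(round(alpha, 2))
```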
B. Data collection
The sampling frame of this study comprises Taiwanese firms dedicated to
manufacturing information-industry hardware products, namely systems, peripherals, and
modular components. Multiple data sources were used in preparing the sample list. Six
hundred and seventy questionnaires were mailed and 144 were received, representing an
effective response rate of 21.56%. As suggested by Armstrong and Overton (1977),
assessment of nonresponse bias indicated no significant difference between early and late
respondents (P<0.01). This research chose managers in charge of OEM/ODM business as the
only informant. This approach is consistent with the general recommendation to use the most
knowledgeable informant (Huber & Power, 1985).
IV. ANALYSIS
Table I. Estimated Models

Independent       Model 1 (JNPD)     Model 2 (RC)      Model 3 (JNPD)
variables
SKS               0.607 (8.753)**    0.461 (5.940)**   0.340 (4.681)**
RC                                                     0.451 (6.209)**
F test            76.607**           35.284**          55.376**
R²                0.369              0.212             0.460
R² (adjusted)     0.364              0.206             0.452
Standardized coefficients; ** p < 0.01; ( ) t value.
V. CONCLUSION
Consistent with prior research, this study found support for a positive effect of
supplier's asset specificity on joint new product development (Heide & John, 1990;
Zaheer & Venkatraman, 1995; Joshi & Stump, 1999), and a positive effect of supplier's
asset specificity on relational capital (Nielson, 1998). Furthermore, adding relational
capital as a variable significantly increased the explained variance in JNPD (comparing
Model 1 with Model 3), supporting the argument that relational capital partially
mediates the effect of SKS on JNPD in the seller-buyer relationship.
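The three-model comparison in Table I follows the classic mediation logic: the total effect (Model 1), the effect on the mediator (Model 2), and the direct effect controlling for the mediator (Model 3). The logic can be sketched on simulated data (not the study's sample; effect sizes are illustrative):

```python
import numpy as np

def std_betas(X, y):
    """Standardized OLS coefficients via least squares on z-scored data."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta

rng = np.random.default_rng(1)
n = 144  # same size as the study's sample; data are simulated
sks = rng.normal(size=n)
rc = 0.5 * sks + rng.normal(scale=0.8, size=n)           # SKS drives RC
jnpd = 0.4 * sks + 0.4 * rc + rng.normal(scale=0.7, size=n)

total = std_betas(sks[:, None], jnpd)[0]                 # Model 1: SKS -> JNPD
a_path = std_betas(sks[:, None], rc)[0]                  # Model 2: SKS -> RC
direct = std_betas(np.column_stack([sks, rc]), jnpd)[0]  # Model 3: SKS given RC
# Partial mediation: the direct effect shrinks but remains significant.
```

The pattern in Table I (SKS coefficient falling from 0.607 in Model 1 to 0.340 in Model 3 while remaining significant) is exactly this partial-mediation signature.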
Causal arguments are made in this research, and yet the data offer only a cross-sectional test of these arguments. A
longitudinal methodology whereby the evolution of a supplier-buyer relationship is measured
would be the optimal design to support causal arguments. Finally, this study focused on
limited variables only. An avenue for future research would be to include a broader range of
contextual and situational factors that may influence the extent of joint new product
development.
REFERENCES
Armstrong, J. S., and Overton, T. S. "Estimating Nonresponse Bias in Mail Surveys." Journal
of Marketing Research, 14, 1977, 396-402.
Boon, S. D., and Holmes, J. G. "The Dynamics of Interpersonal Trust: Resolving Uncertainty
in the Face of Risk." In Hinde, R. A., and Groebel, J. (eds.), Cooperation and Prosocial
Behavior. Cambridge: Cambridge University Press, 1991, 190-211.
Cooper, R. G. "The Dimensions of Industrial New Product Success and Failure." Journal of
Marketing, 43, 1979, 93-103.
Dyer, J. H. "Specialized Supplier Networks as a Source of Competitive Advantage:
Evidence from the Auto Industry." Strategic Management Journal, 17, 1996, 271-291.
Dyer, J. H., and Singh, H. "The Relational View: Cooperative Strategy and Sources of
Interorganizational Competitive Advantage." Academy of Management Review, 23,
1998, 660-679.
Heide, J. B., and John, G. "Alliances in Industrial Purchasing: The Determinants of Joint
Action in Buyer-Supplier Relationships." Journal of Marketing Research, 27, 1990,
24-36.
Huber, G., and Power, D. "Retrospective Reports of Strategic-Level Managers." Strategic
Management Journal, 6, 1985, 174-180.
Joshi, A. W., and Stump, R. L. "The Contingent Effect of Specific Asset Investments on Joint
Action in Manufacturer-Supplier Relationships: An Empirical Test of the Moderating
Role of Reciprocal Asset Investments, Uncertainty, and Trust." Journal of the
Academy of Marketing Science, 27(3), 1999, 291-305.
Kale, P., Singh, H., and Perlmutter, H. "Learning and Protection of Proprietary Assets in
Strategic Alliances: Building Relational Capital." Strategic Management Journal, 21,
2000, 217-237.
Macaulay, S. "Non-Contractual Relations in Business: A Preliminary Study." American
Sociological Review, 28, 1963, 55-67.
Neter, J., Kutner, M. H., Nachtsheim, C. J., and Wasserman, W. Applied Linear Statistical
Models, 4th ed. Times Mirror Higher Education Group, Inc., 1996.
Nielson, C. C. "An Empirical Examination of the Role of 'Closeness' in Industrial Buyer-
Seller Relationships." European Journal of Marketing, 32(5/6), 1998, 441-463.
Nooteboom, B., Berger, H., and Noorderhaven, N. G. "Effects of Trust and Governance on
Relational Risk." Academy of Management Journal, 40(2), 1997, 308-338.
Sabel, C. "Studied Trust: Building New Forms of Cooperation in a Volatile Economy."
Human Relations, 46(9), 1993, 1133-1170.
Sioukas, A. V. "User Involvement for Effective Customization: An Empirical Study on Voice
Networks." IEEE Transactions on Engineering Management, 42(1), 1995, 39-49.
Stump, R. L., Athaide, G. A., and Joshi, A. W. "Managing Seller-Buyer New Product
Development Relationships for Customized Products: A Contingency Model Based
on Transaction Cost Analysis and Empirical Test." The Journal of Product Innovation
Management, 19, 2002, 439-454.
CUSTOMER SATISFACTION FOR TELECOMMUNICATION SERVICES: A
STUDY AMONG ASIA PACIFIC BUSINESS CUSTOMERS
ABSTRACT
I. INTRODUCTION
an intermediary that manages the flow of the value chain by matching buyers and sellers in
order to achieve specific objectives. Selecting a balance between risk and reward addresses
the ways buyers and sellers create and share value (Wayland and Cole, 1997). Many high-
tech products and services require face-to-face contact (Wayland and Cole, 1997).
According to Dijksterhuis et al. (1999), co-evolutionary effects can take place both within a
company and between companies, and therefore interacting with only certain customer
contacts is not enough. Moreover, most 'satisfaction' research has used U.S. subjects to
develop and test satisfaction theory (Spreng and Chiou, 2003); such measures of quality and
satisfaction may therefore be less applicable and less meaningful in other countries, leading
to less-than-optimal results. This is no less true for services than for products, and the
service sector is taking on increasing importance in the global economy, particularly in the
most advanced economies, such as those of the European Union, the USA, Canada, Japan,
and the Asia-Pacific region. Hence the rationale for this study: a study of customer
satisfaction in the business-to-business services arena in the Asia Pacific region.
This study was undertaken to improve understanding of the drivers of customer
satisfaction among telecommunication operators (the customers) in the Asia Pacific region.
The customers are mostly large telecommunication operators who provide services to
consumers such as mobile networks, fixed lines, internet and broadband access. The main
objective of this research is to increase understanding of how a vendor can successfully
improve and maintain customer satisfaction, leading to customer loyalty, in a
telecommunication market. The following research questions guide this study:
1) What perceptions do customers have of the performance of the products or services
provided by a telecommunication vendor, and how important is each attribute to them?
2) Is there a difference in how customers perceive the key processes in the different sub-
regions of the Asia Pacific?
Eighteen processes generic to a typical supply chain cycle of a telecom operator
were identified as the attributes for studying customer satisfaction. They range from
pre-sales to operations, extending into warranty and post-warranty maintenance of the
product or services sold and deployed to the customer. A questionnaire was developed
using the 18 attributes identified; two 10-point rating scales were provided for each
question, one for respondents to indicate their level of satisfaction with the attribute
and one for its perceived importance. Simple means were calculated, yielding a score for
both the importance and satisfaction levels of each attribute. These data were then
analyzed using the importance-performance analysis developed by Martilla and
James (1977). To answer the second question, the data were further sliced into the sub-
regions of China and the "rest of AP". The data were collected through a survey, conducted
via the web since the customers are widespread around the region. A total of 300+
questionnaires (the total number of customers of the telecom company in Asia Pacific)
were sent out, and 247 completed questionnaires were returned (a response rate of 66%).
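Martilla and James's importance-performance analysis places each attribute in one of four quadrants by comparing its mean importance and satisfaction scores against cut-off values (often the grand means). A sketch with hypothetical attribute means, not the study's figures:

```python
from statistics import mean

def ipa_quadrant(imp, sat, imp_cut, sat_cut):
    """Classify an attribute into one of the four Martilla-James quadrants."""
    if imp >= imp_cut:
        return "Concentrate here" if sat < sat_cut else "Keep up the good work"
    return "Low priority" if sat < sat_cut else "Possible overkill"

# Hypothetical mean (importance, satisfaction) scores on 10-point scales:
scores = {
    "Pricing": (9.2, 7.1),
    "Billing and Payment": (7.8, 8.4),
    "Product Quality": (9.0, 8.7),
}
imp_cut = mean(i for i, _ in scores.values())
sat_cut = mean(s for _, s in scores.values())
quadrants = {name: ipa_quadrant(i, s, imp_cut, sat_cut)
             for name, (i, s) in scores.items()}
```

With grand-mean cut-offs, an attribute with high importance but below-average satisfaction (e.g. the hypothetical Pricing entry) falls into the "concentrate here" quadrant.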
Overall, Importance was rated higher than Satisfaction on all attributes in the three groups,
except for the attribute "Billing and Payment". Comparing the importance and satisfaction
scores, the largest gap is for Pricing, where the importance score is high while the
satisfaction score is significantly lower than for the other attributes. This is followed by
the operational and product functions, that is (in order of magnitude) Technical Support,
Repair Services, Exchange Services, Services Quality and Product Quality. The attribute
with the least importance, and the only positive gap, is Billing and Payment. By absolute
satisfaction level, Product Quality, Installation and Sales/Relationship are the top three
most satisfying process areas; all are above 8.5 on a 10-point scale, closely followed by
Service Quality.
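The gaps described above can be computed directly from the attribute means as importance minus satisfaction; a negative gap, as for Billing and Payment, marks satisfaction exceeding importance. A sketch with hypothetical scores:

```python
# Hypothetical (importance, satisfaction) means on 10-point scales:
means = {
    "Pricing": (9.2, 7.1),
    "Technical Support": (8.9, 7.9),
    "Billing and Payment": (7.8, 8.4),
}
gaps = {name: round(imp - sat, 2) for name, (imp, sat) in means.items()}
ranked = sorted(gaps, key=gaps.get, reverse=True)  # largest shortfall first
```

Ranking attributes by gap size gives the same ordering of improvement priorities that the paper reads off the importance-performance plot.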
The four quadrants of the importance-performance matrix are illustrated above. The area to
focus on is High Satisfaction and High Importance, which serves as a "selling point" in the
company's marketing effort. Another quadrant that is equally important is the Low
Satisfaction but High Importance area; process areas falling in this quadrant are candidates
for improvement initiatives. When the data were plotted on the chart, all 18 process areas
fell into the High Satisfaction and High Importance area. This is likely because the
processes are mature, and therefore both customer demands and satisfaction levels tend to
be higher. For this reason the High-High quadrant was further segmented into sub-quadrants
to analyze the areas of strength and the areas to improve.
(Sub-quadrant labels: "Strength to Leverage"; "Area for Improvements")
As Figure 3 below shows, Billing and Payment clearly sits in the high-satisfaction, low-
importance area, denoting over-investment. The areas of strength for the company are
Product Quality, Installation Services, Sales and Relationship Management, and Services
Quality. Pricing is the one area that needs the most serious attention for improvement.
Figure 3: Importance-Satisfaction Analysis of the 18 Attributes
To understand whether the strength and improvement areas differ by sub-region, such as
between Greater China and the rest of the Asia Pacific, the data were sliced further into
China results and AP results, and the importance-satisfaction matrices were plotted and
analyzed as below.
In China (Figure 4), the over-investment area continues to be Billing and Payment, and the
area for improvement remains Pricing. A noted difference in China compared to the overall
results is that Program Management and Professional Services appear to be greater
strengths, along with all the other strengths identified earlier, i.e. Sales and Relationship
Management, Installation Services, Product Quality and Services Quality.
Importance-Satisfaction Analysis for Customers from Asia Pacific (AP) Region other
than China
Analyzing the AP region other than China (Figure 5), the satisfaction level is generally
lower than in China. While the over-investment and improvement areas are the same as for
China, i.e. Billing and Payment and Pricing respectively, Sales and Relationship
Management is rated markedly lower than in China, and Program Management and
Professional Services satisfaction is low relative to its importance rating.
IV. CONCLUSIONS
Results of the study indicate that customers in the AP region value price much more
than most other processes, and the vendor needs to make an extra effort to improve its
pricing strategies to win or maintain business in the Asia Pacific region. This is
understandable because, as an emerging market, AP used to be a high-margin region where
vendor financing was common: vendors provided financing to expand customers' networks
and the customers paid as they grew. The results further highlight that the processes
identified differ by sub-region. For example, customers in China value basic processes such
as Installation, whereas customers in the rest of Asia Pacific value enhancement processes
such as Professional Services. This reflects the fact that each market segment is unique due
to the culture of its people, government and politics; a global vendor, especially one from
the West, needs to be sensitive to these differences and approach each segment with unique
marketing strategies in order to capture and sustain business in this very diversified region.
Thus, this study can provide a platform for global marketers on how customer satisfaction
data can be used effectively to improve customer loyalty, and for new entrants on how to
customize the marketing strategies they offer in a new region.
REFERENCES
Dijksterhuis, M., Van den Bosch, F., and Volberda, H. "Where Do New Organization Forms
Come From? Management Logics as a Source of Coevolution." Organization Science,
10(5), 1999.
Kotler, Philip. Marketing Management: Analysis, Planning, Implementation, and Control,
9th ed. Prentice-Hall, New Jersey, USA, 1998.
Martilla, J. A., and James, J. C. "Importance-Performance Analysis." Journal of Marketing,
41(1), 1977, 77-79.
Mittal, Vikas, Kumar, Pankaj, and Tsiros, Michael. "Attribute-Level Performance,
Satisfaction, and Behavioral Intentions over Time: A Consumption-System Approach."
Journal of Marketing, 63(2), 1999, 88-101.
PORTRAYAL OF GENDER ROLES
IN INDIAN MAGAZINE ADVERTISEMENTS
ABSTRACT
In this study, the authors examine and discuss the portrayal of gender roles in English
language Indian magazine advertisements of five durable and non-durable products.
I. INTRODUCTION
II. METHODOLOGY
A significantly larger number of male models than female models is shown in Indian
magazine advertisements. Also, 42 (49%) of the 86 total ads had no female models in them
at all, while only 10 (12%) of the ads had no male models. These findings perhaps reinforce
the strong cultural bias towards men in Indian society. They also indicate that women are
seen as basically restricted to household activities rather than going out and acting as
breadwinners, while men are considered the providers of economic security. The setting of
the ads for all five products taken together was mainly outdoors (46 of the 86 ads, or 54%),
with only 6 ads (7%) showing a home or private residence. A store/restaurant or an
occupational setting was each seen in 10 (12%) of the ads, while 14 ads (16%) had settings
that were not clear.
Exhibit I
Characteristics Related to the Advertisements as a Whole

Setting               Number    Percent (%)
Outdoors              46        53.5
Other/Not Clear       14        16.3
Store/Restaurant      10        11.6
Occupational          10        11.6
Private Residence     6         7.0
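The percentage column in Exhibit I follows directly from the raw counts; a quick recomputation sketch:

```python
# Ad-setting counts from Exhibit I:
counts = {
    "Outdoors": 46,
    "Other/Not Clear": 14,
    "Store/Restaurant": 10,
    "Occupational": 10,
    "Private Residence": 6,
}
total = sum(counts.values())  # 86 ads in all
percents = {k: round(100 * v / total, 1) for k, v in counts.items()}
```

The recomputed shares match the exhibit (e.g. 46/86 rounds to 53.5%).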
(B) Exhibit II: Characteristics of the Individual Models Shown In The Advertisements
The majority of the models appeared to be between the ages of 15 and 30 years: 74
(56%) of male models and 38 (63%) of female models were in this age group. The marital
status of the majority of models was not clear from the ads. Although most of the models
were not shown as actually working, both the male models, 104 (79%), and the female
models, 40 (67%), appeared well dressed and were shown in situations suggesting that they
were well-to-do and gainfully employed. The male models, 78 (59%), and the female
models, 32 (53%), also appeared to be semi-professionals or holding mid-level business
positions. Almost none of the models were shown as spokespersons for the product being
advertised (88% of male models and 87% of female models were not), and almost none of
them were shown as either seeking or giving advice or help (96% of male models and 97%
of female models were not). These findings reflect the target audience and readers of Indian
magazines: educated middle- and upper-class urban Indians who are seen as being
successful in their careers. The products advertised are rather expensive (airlines,
cigarettes, cars, computers, and hotels), and only well-to-do Indians can afford them.
Although the traditional roles of Indian women are those of dutiful housewives and
mothers, in recent years, as urban women have become more educated, they have been
going outside the house to work (Blackwell, 2004; Reddy, 2003). The majority of the male
models, 72 (55%), appeared employed in jobs outside the home, and 36 (27%) were shown
in the role of a spouse or boyfriend. The majority of the female models, 30 (50%), were
shown in the role of a spouse or girlfriend. The majority of both male, 68 (52%), and
female, 38 (63%), models were also shown in a recreational mode. The models were shown
in non-physical activities in a majority of cases: 70 (53%) of male models and 34 (57%) of
female models. These results reinforce the importance given to marriage for both genders
in Indian society. Furthermore, for both males and females, free association with the
opposite sex and dating are not looked upon favorably. None of the male models and
almost none of the female models (only 3%) were shown as frustrated. The dress of almost
all the male models (97%) and most of the female models (73%) was western, while 27%
of the female models wore Indian clothes. The dress styles of almost all male models
(99%) and all female models (100%) were demure and non-provocative. These results are
not surprising. In urban areas most Indian men wear western attire (Culturegrams, 1998;
Tyabji, 1985). India was under British rule for 200 years, and the western influence,
particularly on men's attire, has persisted.
Exhibit II
Characteristics of Models in the Advertisements (figures in %)

Characteristics                    Male Models (N=132)    Female Models (N=60)
1 Age:
Under 15 years 4.5 13.3
15 years to 30 years 56.1 63.3
30 years to 50 years 37.9 23.3
Over 50 years 1.5 0
2 Marital Status:
Married 13.6 33.3
Not Married 7.6 20.0
Not Clear 78.8 46.7
3 Employment:
Shown In Work Situation 15.2 6.7
Non-Work Situation Shown But Appears 78.8 66.7
Employed
Appears Unemployed 6.1 26.6
4 Occupation:
Professional / High Level Executive 22.7 6.7
Entertainer / Professional Athlete 7.6 0
Semi Professional / Mid-Level Business 59.1 53.3
Non-Professional / White Collar 1.5 6.7
Other / Not Clear 9.1 33.3
9 Activity:
Physical – Sport 25.8 13.3
Non-Physical 53.0 56.7
Inactive 21.2 30.0
10 Frustration Shown:
Frustrated 0 3.3
Not Frustrated 100 93.3
Not Clear 0 3.3
11 Dress Type:
Western Dress 97.0 73.3
Indian Dress 1.5 26.7
Neither 1.5 0
12 Dress Style:
Seductive 1.5 0
Demure 98.5 100
13 Level Of Sexism
Provocative 1.5 0
Non-Provocative 98.5 100
Although a majority of urban Indian women wear typically Indian dress, the sari and
shalwar-kameez (Culturegrams, 1998; Tyabji, 1985), it is not uncommon to find some
females also wearing western-style clothing to their corporate jobs (Cox and Daniel, 2000).
These young urban women, like the men, seem to be influenced by western multinational
corporations successfully promoting western brands in the Indian markets. A comparative
study of gender role portrayals in United States and Indian magazine advertisements by
Griffin et al. (1994) found that some Indian magazine advertisements portrayed women in
their traditional roles, but that, because of the western influence in India, this trend was
changing and more advertisements were depicting them in career-oriented roles and as
outgoing and enjoying an active life. The modesty in dress styles of both males and females
in the Indian advertisements mirrors the cultural values of Indian society.
IV. CONCLUSION
In conclusion, the findings of the present study must be interpreted keeping in mind
its limitations in terms of sample size, the selection of product categories, and the English-
medium national magazines that target upper- and middle-class urban Indians.
Nevertheless, since advertising reflects the social values of a given society, the overall
findings suggest to international advertisers that, in developing their advertising campaigns
for urban Indian markets, it is important to reflect the traditional cultural values of Indian
society, where men can be shown as providers of economic means: successful in their
careers, educated, well-to-do, married, younger individuals. Although the typical role of the
Indian woman is that of a housewife, depending upon the product being advertised, women
can be shown as being employed as semi-professionals and in western clothes. For
example, in the cigarette advertisements where women were shown along with male
models, all the females wore western attire and were shown as young, educated, employed,
and having a good time in an outdoor setting. In most of the car advertisements where
women were shown with their male counterparts, they wore sarees and appeared to be
young, educated, well-to-do Indian housewives in non-working roles. Interestingly, women
were shown neither as sex objects nor in demeaning roles in the advertisements sampled in
the present study. Further research on different product categories and examination of other
magazines are needed to gain additional insights into the portrayal of gender roles in Indian
magazine advertisements.
REFERENCES
Pingree, S., R. Hawkins, M. Butler, and W. Paisley. “A Scale for Sexism,” Journal of
Communication, 24 (4), 1976, 193-200.
CHAPTER 16
LEADERSHIP
STUDENT LEADERSHIP AT THE LOCAL, NATIONAL AND GLOBAL LEVEL:
ENGAGING THE PUBLIC AND MAKING A DIFFERENCE
ABSTRACT
programs.” Prof. O’Hair highlighted the leadership of faculty and students in providing
real-time information and coverage of the convention via a specially designed web site
(www.ncapress.com).
Specifically, the objective of this descriptive analysis is to highlight how the Boston NCA
press and promotion effort evolved, and to outline its strategies for promotion. Initially, a
brief synopsis of past press and promotion efforts is provided. Following these descriptive
and detailed synopses, which include the operations and strategies employed at the NCA
Boston conference, are recommendations to enhance future conventions, not only of NCA
but of other similar organizations.
Past NCA press and promotion efforts included the national office working closely with the
local arrangements committee to send out press releases concerning noteworthy guest
speakers planning to speak at the annual conference, as well as seminar research and awards
to take place at the convention. Another traditional effort was to inform local high schools,
junior colleges, and colleges and universities that might not be aware of NCA's mission and
purpose about the convention, in an effort to attract publicity as well as potential new
members for the organization. An additional objective was to make experts, as well as NCA
personnel, available for any questions local media might have regarding the convention or
external events. For example, during political campaigns many NCA scholars and
practitioners are interviewed at the convention for their insights on pertinent issues, such as
candidate debates and current events. In addition, local politicians and academic, business
and government leaders frequently speak at the NCA conventions, which has generated
press regarding the annual event. At the conventions, news releases are often generated by
the national office, but local press and promotion teams have also sent out information,
especially during the course of the convention, in an effort to highlight winners of major
awards and other honors. Coverage of the convention for those in attendance, as well as
members who do not attend, has traditionally been provided in a special issue of Spectra,
the newsletter of NCA, published in January following the November convention.
The local events coordinator for press and promotion for the Boston Convention was
Dr. J. Gregory Payne, of Emerson College, who met several times during the summer with
the local NCA committee headed by local convention chairs – Drs. Anne Mattina of Stonehill
College and Sarah Weintraub of Regis College. Classes involved in the Emerson Press and
Promotion effort included Dr. Payne's Advocacy and Argument, which consisted primarily
of freshmen and sophomores, and upper-class members of a senior seminar course in
political communication, The Public Affairs Matrix. The new/old media technical abilities
of the student effort were enhanced by the fact that many of the Advocacy and Argument
students were journalism majors with at least an introductory journalism course, which
included the use of podcasting and other innovative means of communication and
promotion. Parts of each class were devoted to strategizing on potential approaches to the
NCA project. In an effort to utilize the talents of various departments at Emerson, the Press
and Promotion team included EmComm, Emerson's award-winning internal marketing
communication student organization, supervised by Prof. Douglas Quintal. The merging of
student and faculty talents enhanced the creative possibilities, but this new approach of
inter-departmental cooperation also created tensions, some positive and others less so.
Emerging issues of this new collaborative approach included who was directly in charge of
the operation, distinguishing and identifying the reporting and supervisory lines among the
students from the different departments involved, and the timeline expectations of the
project. From a learning perspective, such dynamics are more often than not characteristic
of creative talent in the workplace.
While the strategic effort by Emerson College students, faculty and staff for the 2005 NCA
convention was long in the making, the NCA Press & Promotions website was developed
almost overnight. Throughout the day preceding the convention, EmComm and
graduate student David Twomey worked collaboratively to establish the website. By the start
of the convention the next morning, a fully functional website was up and running with letters
from the chairs of the local arrangements committee and faculty at Emerson College, a
schedule of events at the convention, press releases, contact information and links to
background on Boston in general.
Emerson student William Luken took charge of digital photography at the convention
and through partnership with David Twomey, posted the pictures online each evening for the
enjoyment of all at the convention. The pictures included captions detailing the names of
those in the pictures and what events were featured in each. The website for this effort can be
viewed at www.ncapress.com.
As a result of the entire experience, the students gained real-world experience in promoting
a large event with cutting-edge technology that in turn fascinated and impressed those
attending the convention. The hands-on work of all the Emersonians involved left a truly
extraordinary, lasting impression of Emerson College on many professors and students
from other colleges across the nation.
Similar opportunities arose when UNICEF New England, recognizing Emerson students'
reputation in communications and marketing, invited them to plan, promote and co-sponsor
the first World AIDS Day observed on a large scale in Boston. This invitation was due to
Dr. Gregory
Payne being a board member of UNICEF New England, a relationship that provided the
opportunity for this learning experience.
Much of the team from the NCA convention was reassembled. EmComm was
charged with creating the marketable identity for the event and creation of print promotional
and informational materials for distribution. What was ultimately very different with this
particular event as opposed to the NCA convention was that students were also deeply
involved in the planning and organizational process of many events throughout the duration
of the day. Planning for these events began in October with the formation of a student
committee which worked in association with other student groups, such as student
government, comedy troupes and Model UN. Through this collaboration, led by
undergraduate student Michael Hawkes and graduate student David Twomey, events were
planned throughout much of the day focusing on UNICEF’s work with third world nations on
the AIDS epidemic, but also on the local impact of AIDS, specifically with younger people.
Students led by Dr. Payne, again with assistance from the Department of Journalism, used
flash audio recorders to record interviews with UNICEF staff who had visited African
nations torn by the horrible effects of AIDS on all ages in their communities. They also
interviewed students, faculty and staff who worked tirelessly to make the day's events a
success, and recorded candid interviews with Emerson students about their thoughts on
AIDS in general. As with the NCA convention, students returned to the college's computer
labs to upload and post their audio interviews to the event's website. Graduate student and
assistant to Professor Payne, David Twomey, established the Boston World AIDS Day
website. It includes the audio and podcasts of the interviews taped by students, photos taken
by student William Luken, blog posts by students college-wide on the day's events and
their personal thoughts on AIDS, a schedule of events for the day, press releases and
contact information.
The creative talents and brainstorming of ways to highlight the event produced some
noteworthy publicity. For instance, students obtained a proclamation from the state assembly
of the Commonwealth of Massachusetts, recognizing World AIDS Day as well as UNICEF
and Emerson College’s efforts to bring attention to the day. The event was a marked success
gaining the recognition of the Emerson College President, state and local officials, and
UNICEF staff nationally who were looking for ways to involve college students in such
projects.
IV. CONCLUSION
In summary, the two case studies above demonstrate that today’s world is one where
classrooms have no walls and where education can help make the difference in engaging
students and citizens to work together in the pursuit of the common good. It is the hope of
the authors that such opportunities will be further pursued in the effort to further the feeling
of community in a world that desperately needs a global commitment to public diplomacy
and a belief that each of us can make a difference.
REFERENCES
Hanlon, John. “Live from Boston… it’s NCAPress.com!” Spectra, National Communication
Association, Vol. 42, No. 1, 2006, 7.
Heifetz, Ronald. Leadership without Easy Answers. Harvard University Press, Cambridge,
MA. 1995.
Kenny, Maureen, and Lou Anna Simon. Learning to Serve: Promoting Civil Society Through
Service Learning. Kluwer Academic Press, Norwell, MA. 2002.
Pinto, Amanda. “College Observes World AIDS Day.” The Berkeley Beacon, Emerson
College, Vol. 59, Issue 13, 2005, 1 & 5.
Peltak, Jennifer. “National Communication Association, The.” Official website. Washington,
DC. January 2006.
<http://www.natcom.org>
Smitter, Roger. Personal interview. Executive Director, National Communication
Association. Boston, MA. 2 Dec 2005.
Twomey, David. “NCA Convention 2005 Press & Promotions.” Official website. Boston,
MA. January 2006.
<http://www.ncapress.com>
Twomey, David. “World AIDS Day Boston.” Official website. Boston, MA. January 2006.
<http://www.aidsdayboston.com>
Watson, Martha. Personal interview. President, National Communication Association.
Boston, MA. 20 Nov 2005.
Watkins, Jason. “2005 Convention Draws Nearly 5,500.” Spectra, National Communication
Association, Vol. 42, No. 1, 2006, 1 & 19.
Wren, J. Thomas. The Leader’s Companion: Insights on Leadership through the Ages. Basic
Books, New York, NY. 1995.
CHAPTER 17
MANAGEMENT OF DIVERSITY
ORGANIZATIONAL CULTURE AND CUSTOMER SATISFACTION: A PUBLIC
AND BUSINESS ADMINISTRATION PERSPECTIVE
ABSTRACT
This study examines the relationship between organizational culture and customer
satisfaction. Organizational culture influences the shape of an organization at all levels and
has a strong effect on the customer’s satisfaction rating. It is suggested that the relationship
between organizational culture and customer satisfaction is stronger when employees and
management share the cultural values of the organization.
I. INTRODUCTION
The members of an organization (Hofstede, 1980) bring with them their own personal
cultures that come from their families, their communities, their religions, any professional
associations to which they belong, and their nationalities. In an effort to understand
organizational culture, researchers have explored how various internal processes, such as
individual selection, socialization, and the characteristics of powerful members such as the
founders of the organizations, or group members, can influence the organizational values and
outcomes. It has also been suggested that increased employee empowerment increases
customer satisfaction. That is one of the reasons that many human resource managers now
place as much emphasis on identifying organizational culture as they do on mission and
vision.
II. OBJECTIVE
Many experts believe that developing a strong organizational culture is essential for
successful customer satisfaction. While the link between organizational culture and
organizational effectiveness is far from certain, there is no denying that each organization has
a unique social structure and that these social structures drive much of the individual behavior
observed in organizations (Frost, 1985). The aim of this paper is to examine the impact of
organizational culture on customer satisfaction in an organization. Factors that influence
employee behavior play an important role in determining how the organization will act, how
it will accomplish its goals, and how it will treat its customers (Figure I).
Figure I
Factors of Organizational Climate and Customer Satisfaction

Internal Factors: Leadership, Planning, Customer Focus, Information and Analysis, Human
Resource Focus, Mission Statement, Economic Conditions, Language, Training
Service Factors: Involvement, Reliability, Responsiveness, Assurance, Empathy
Performance Factors: Monetary Gain, Value Gain, Self Satisfaction, Acknowledgement
Even today, organizational culture is one key that points to the success of such
organizations as Southwest Airlines, Microsoft, and Oracle, and to the failure of leadership in
other organizations such as Hewlett-Packard and Walt Disney. It is widely recognized that
cultural difference is one of the most common reasons for failure in mergers (AOL and Time
Warner). In 1989, less than a decade after the term corporate culture (Kotter and Heskett,
1992) came into general use, Time, Inc. blocked a hostile bid by Paramount by arguing that
its culture would be destroyed or changed by the takeover to the detriment of its customers,
its shareholders, and society. The judge ruled in Time’s favor. A recent electronic search of
the topic on the Internet suggests that in the past five years alone, authors have published
53,500 articles and reports attempting to examine the effect of culture in the
organizational arena.
In “Employee Satisfaction + Customer Satisfaction + Sustained Profitability: Digital
Equipment Corporation’s Strategic Quality Efforts” by Betty Bailey and Robert Dandrade,
the authors discuss the experience and successes of companies that include Banc One, Taco
Bell and others. They stress that the correlation of putting employees and customers first
with profits has necessitated new ways of managing and measuring success.
In Corporate Culture and Performance by John P. Kotter and James L. Heskett, the authors
state that culture is more important to the bottom line performance than strategy, structure,
financial analysis and management systems. Their studies show the following:
1. Corporate culture can have a significant impact on a firm’s long-term economic
performance.
2. Corporate culture will probably be an even more important factor in determining the
success or failure of firms in the next decade.
3. Corporate culture that inhibits strong, long-term financial performance is not rare; it
develops easily, even in firms that are full of reasonable and intelligent people.
4. Although tough to change, corporate culture can be made more performance
enhancing.
In Organizational Culture by Peter J. Frost, Larry F. Moore, Meryl Reis Louis, Craig
C. Lundberg and Joanne Martin, the authors state that the structuring of an organization into
work roles in turn influences patterns of interaction found in the organization. Collective
understanding may, therefore, shade conceptions of self as well as opinions of others and, in
essence, provide an interpretive system that employees can use to make sense of ongoing
events and customer satisfaction. In “A Winning Culture Beats the Competition” (cover
story, Communication World, August-September 1998, v15 n7, p. 50), Jill Langendorff
Folan describes how a corporate culture can differentiate an organization from its
competitors. The author describes how a winning culture encourages trust, learning, growth,
and courage, and expects out-of-the-box thinking, creative problem solving, quality customer
service, and excellent performance on a daily basis. She states that
the only sustainable competitive advantage a company has is its people. Culture, leadership,
and commitment elements are critical to the success of an organization.
This study has one purpose: to examine the impact of organizational culture on
customer satisfaction in an organization. The study utilizes data gathered from two
organizations to assess how organizational culture impacts customer satisfaction. It was
predicted that organizational culture would differ significantly when the focal point of the
organization was customer focus. Moore and Kelly (1996) proposed that having a service-
oriented culture allows human service organizations to meet customer expectations in a way
that a more technically focused culture could not. The authors further claim that it is not
possible to understand service quality in a human service industry without viewing the
service from the client’s point of view. Additionally, providing employees with the
information and knowledge they need to immediately solve customer problems as opposed to
needing managerial intervention has further increased customer satisfaction levels (Benko,
2001).
IV. METHODOLOGY OF RESEARCH
The study presented in this research utilized an instrument to measure employee and
customer satisfaction. A survey was created to measure satisfaction in several areas,
including autonomy in resolving problems, a supportive work environment, cultural
diversity, training, courtesy, initial services, and overall experience. The survey instrument
was self-administered.
Denison’s culture of adaptability (Denison, 1984) measures an organization’s customer
focus, organizational learning, and ability to create innovation and change. Employee
empowerment, employee development, and training are indicative of what Denison refers to
as a high involvement culture. It has also been suggested that increased employee
empowerment increases customer satisfaction. Other research emphasized the importance of
employee training and development as tools to enhance customer satisfaction.
Procedure: Participants received the satisfaction survey from the management teams of their
various organizations. Responses were collected from employees across all levels of the
organization. Participants were instructed to place completed surveys in designated survey
boxes located throughout the organizations. Customer satisfaction surveys were provided to
the customers by the various staff providing the services. Customers were asked to complete
a survey soon after they received the service and to place the completed survey in the
designated survey boxes.
Participants: One hundred seventy-seven (177) employees and four hundred sixty-five (465)
customers received the surveys. Surveys were completed by fifty (50) employees at
Employer A’s location and twenty (20) employees at Employer B’s location, for a total of
seventy (70) employee responses from both locations. Surveys were completed by two
hundred (200) customers at Employer A’s location and fifteen (15) customers at Employer
B’s location, for a total of two hundred fifteen (215) customer responses. Overall, 100% of
the employees at Employer A’s location were satisfied with the organization, and 25% of the
employees at Employer B’s location were satisfied with the organization. 70.57% of the
customers at Employer A’s location were satisfied with the service received, and 13% of the
customers at Employer B’s location were satisfied with the service received. The research
reflects the data used to compile the results from both organizations.
V. DATA ANALYSIS
The data were analyzed based on the number of responses received from each
organization. Surveys were tallied and compared to create a chart reflecting satisfaction
levels. The satisfaction scores are reported as the percentage of customers who responded in
the various areas. An abbreviated summary of the results and the survey appear below
(Figures II & III).
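As an illustration of the tallying step described above, the sketch below recomputes a satisfaction score as the percentage of respondents who answered "satisfied." The study does not publish its raw tallies, so the counts here are back-solved from the reported figures (e.g., 25% of Employer B's twenty employee respondents), and the function name is our own.

```python
# Hypothetical sketch of the survey-tallying step: count "satisfied" responses
# and report satisfaction as a percentage of all completed surveys.
def satisfaction_rate(responses):
    """Return the percentage of respondents who answered 'satisfied'."""
    satisfied = sum(1 for r in responses if r == "satisfied")
    return round(100 * satisfied / len(responses), 2)

# Employer B's employee responses, back-solved from the reported 25%:
# 5 satisfied out of 20 completed surveys.
employer_b_employees = ["satisfied"] * 5 + ["unsatisfied"] * 15
print(satisfaction_rate(employer_b_employees))  # 25.0
```

The same function applied to Employer A's fifty employee surveys, all marked satisfied, would return the reported 100%.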
Figure II
SURVEY RESULTS: ABBREVIATED SUMMARY OF SURVEY RESULTS FOR EMPLOYERS A & B
[Bar chart of survey tallies omitted]

Figure III
ABBREVIATED EMPLOYEE SATISFACTION SURVEY RESULTS
“In order to measure staff satisfaction with the quality of management support and direction
being provided, the department is instituting this survey instrument to gather feedback.
Change occurs when validated data is presented to demonstrate the need for change. Place
the completed survey in designated boxes. Please be candid and frank with your responses to
the following questions or statements. Please circle the response that best addresses the
question or statement for you.”
VI. RESULTS
VII. CONCLUSION
REFERENCES
A. BOOKS:
Deal, T. E., and A. A. Kennedy. Corporate Cultures: The Rites and Rituals of Corporate Life.
Harmondsworth, England: Penguin Books, 1982.
Frost, Peter J., Larry F. Moore, Meryl Reis Louis, Craig C. Lundberg, and Joanne Martin.
Organizational Culture. Beverly Hills, CA: Sage Publications, 1985.
Hofstede, G. Culture’s Consequences: International Differences in Work-Related Values.
Beverly Hills, CA: Sage Publications, 1980.
Kotter, John P., and James L. Heskett. Corporate Culture and Performance. New York, NY:
The Free Press, 1992.
B. JOURNAL ARTICLES:
Bailey, Betty, and Robert Dandrade. “Employee Satisfaction + Customer Satisfaction +
Sustained Profitability: Digital Equipment Corporation’s Strategic Quality
Efforts.” Center for Quality of Management Journal, v4, n3, 1995.
Benko, L. B. “Getting the Royal Treatment.” Modern Healthcare, 39, 2001, 28-32.
Bliss, William G. “Why Is Corporate Culture Important?” Workforce, 78, 1999, 12, W8(2).
Denison, D. “Bringing Corporate Culture to the Bottom Line.” Organizational Dynamics,
13(2), 1984, 4-22.
Folan, Jill Langendorff. “A Winning Culture Beats the Competition (Corporate Culture).”
(Cover Story). Communication World, 15, n7, 1998, 50(2).
Moore, S. T., and M. J. Kelly. “Quality Now: Moving Human Service Organizations Toward
a Customer Orientation to Service Quality.” Social Work, 41, 1996, 33-40.
DIVERSITY IN THE WORKPLACE
ABSTRACT
This exploratory research analyzes workplace diversity today within two different
Fortune 500 companies and describes the support groups and diversity methods each uses.
The two selected organizations are not only Fortune 500 companies, but also two
of the largest and most profitable corporations in the United States. Each of these
organizations is capable of making a significant contribution with respect to changing the
way in which the business world acts given their leadership roles in each of their respective
market segments.
I. INTRODUCTION
Diversity is one of the most important attributes a company can maintain in today’s
business environment. Diversity is defined as, but not limited to, those observed and inferred
differences among employees of organizations which are rooted in their distinctive culture,
gender, age, geographical regions, ethnic group affiliations and related characteristics
(Rhea 2003).
II. PURPOSE
The purpose of this exploratory research was to provide information on the status of
diversity initiatives at two of the most successful companies (Exxon Mobil and Coca Cola) in
the United States and how they are committed to addressing diversity in the workplace. The
goal is to analyze their vision and goals pertaining to company diversity and examine what
they have accomplished as well as where they can improve. A review of these companies
illustrates how companies in general are viewing diversity and the importance of it, whether
they are succeeding in their goals of diversity, and future diversity management plans.
In 1999, Exxon and Mobil merged to become today’s Exxon Mobil, the world's
second-largest integrated oil company. Both Exxon and Mobil began their businesses in the
late 19th century when the petroleum industry was booming (From Kerosene to Gasoline
2005). The merger enhanced the ability of both companies to become global competitors.
Exxon Mobil is now a major competitor in the production of petrochemicals as well as oil
and gas exploration, production, supply, and transportation around the world (Williams
2005).
Views On Diversity: Lee R. Raymond, the CEO of Exxon Mobil, states that Exxon
Mobil’s diverse workforce is a key competitive advantage for a worldwide enterprise
(A Letter from the CEO, 1). Exxon Mobil understands that business is about people. Employees
are more productive working in an environment where everyone gets equal opportunities to
grow and excel. Exxon Mobil has built a diverse worldwide workforce by hiring employees
with a shared focus on attaining superior business results. Raymond points out that in this
increasingly global environment, Exxon Mobil will continue its efforts to find diversely
talented people worldwide to join its business.
Hiring Policies: Exxon Mobil relies primarily on college recruiting conducted
worldwide to hire people ranging from upper management to skilled laborers (Global Diversity –
Recruiting, 1). Campus recruiting and hiring programs have allowed Exxon Mobil to
develop an excellent workforce of diverse, intelligent employees who have significantly
helped build Exxon Mobil into one of the strongest petroleum companies in the world.
Truman Bell, the education program officer, stated that the company has a variety of
jobs open to anyone wanting to become an employee at Exxon Mobil. There are also many
organizations associated with Exxon Mobil (including the Society of Women Engineers, the
National Society of Black Engineers, the Society of Hispanic Professional Engineers, and the
American Indian Science and Engineering Society) that help support and train future
employees for Exxon Mobil in diversity and communication.
Awareness Training: Exxon Mobil uses formal training classes, informal group
sessions, newsletters, team-building activities, and brown-bag lunches with guest speakers as
diversity initiatives around the world (Career Development
2005). Through these programs and activities they have effectively increased employees’
awareness of diverse work environments as well as helped them to adjust to changing work
environments.
Impact On Business: Exxon Mobil's retail marketers worldwide tailor their products
and promotions to meet their customers' needs. Hiring local employees and sponsoring
events shows Exxon Mobil’s attentiveness to ethnic and cultural differences. This leads to
improved sales and stronger brand loyalty. In the United States, for example, Exxon and
Mobil stores promote Black History Month to celebrate the achievements and contributions
of African-Americans.
Accomplishments In Diversity: In 2003, Exxon Mobil hired more than 1,200 new
professional employees worldwide, 40% of these were women, and 64% were hired outside
of the United States. Notably, many of these new employees were hired from many of the
world's developing countries (Diversity/Career 2005).
On May 8, 1886, the Coca-Cola product was first produced. Dr. Pemberton, a local
pharmacist in Atlanta, Georgia, created the formula through a combination of syrup and
carbonated water. Prior to his death in 1888, Dr. Pemberton sold the remaining portions of
his business to Asa G. Candler. Mr. Candler bought the additional rights and acquired
complete control of the company. In 1891, Mr. Candler had purchased all of the shares of the
company for $2,300. Mr. Candler, John S. Candler, Frank Robinson, and two other
associates formed The Coca-Cola Company as a Georgia corporation. Two years later, in
1895, Mr. Candler presented the annual shareholders’ report, which announced that every
state and territory in the United States of America now enjoyed the refreshing Coca-Cola
beverage (Heritage 2005).
In 1919, the Candler group sold The Coca-Cola Company to Atlanta banker Ernest
Woodruff and an investor group for $25 million. Robert Winship Woodruff, Ernest
Woodruff’s son, was elected president four years later. Robert Woodruff led the company
for more than six decades, and the emphasis he placed on the product was the quality of the
beverage (Heritage 2005).
At this time, the Coca-Cola product was bottled and distributed internationally.
Through the early 1900s, there were bottling operations in Cuba, Panama, Canada, Puerto
Rico, the Philippines, Guam, and France. The Coca-Cola Company now operates in more
than 200 countries and produces nearly 400 brands of drinks. For more than 115 years, The
Coca-Cola Company has continued to produce drinks for individuals around the world
(Heritage 2005).
Views On Diversity: Since Coca-Cola products extend to more than 200 countries
speaking more than 100 languages, The Coca-Cola Company strives to be a special part of
people’s lives. The company feels that with this much market presence there must be some
form of responsibility, and the company has chosen to take a leadership role regarding
diversity. The company believes that individual differences make it stronger in the business
market. The company believes that diversity (whether the basis is race, gender, sexual
orientation, ideas, ways of living, cultures, or business practices) provides the creativity and
innovation essential to its economic well-being (Our Company: Diversity 2005).
Awareness And Training: The Coca-Cola Company values diversity and has many
programs and forums to help promote diversity in the workplace. The company believes that
the heart and soul of the enterprise has always been the people who work in the company.
Over the past century, Coca-Cola employees have led with success by living and working
with a consistent set of values. The Coca-Cola Company understands that the business world
is constantly changing, but respecting these values will continue to be essential to its long-
term success. As the company has expanded over the decades, it has benefited from the
various cultures and experiences of the societies that are a part of its business. The company
believes that its future success will depend on its ability to develop a company that is rich in
diversity of people, cultures, and ideas. The company is also determined to have a diverse
culture from top to bottom that benefits from the perspectives of individual workers (Our
Company: Diversity at Work 2005).
In addition to being aware of and understanding the need for diversity, the Coca-Cola
Company went a step further by forming a unique partnership with the American Institute for
Managing Diversity to help create the Diversity Leadership Academy (DLA). The academy
receives funds totaling $1.5 million through grant money contributed by The Coca-Cola
Company. Formed in 2001, the DLA is an innovative experiential learning program for
leaders from all sectors of our society, including individuals from education, government,
religion, non-profit, and for-profit groups. The program brings leaders together monthly over
a five-month period to learn the principles of diversity management, benefit from the
knowledge of others, and work collectively to address the diversity issues of the community
(Our Company: Diversity Community Support 2005).
Awards: The Coca-Cola Company has received numerous awards regarding the
diversity of the enterprise. Some of the most recent awards are listed below (Our Company:
Awards 2005):
• LATINA Style 50, LATINA Style Magazine (2005)
• 50 Best Companies for Minorities, Fortune (2004)
• Top 50 Companies for Minorities, Diversity Inc. (2004)
• Top 50 Companies for Diversity, Diversity Inc. (2003/2004)
• Corporate Commitment to Minority Business Entrepreneurs Award, Houston
Minority Business Council (2003)
• Corporate Commitment Award, Houston Minority Business Development Council
(2003)
V. CONCLUSIONS
In future years, a key focus of all businesses will be diversity in the workplace.
Diversity means that a company has a wide variety of certain attributes: age, race, ethnicity,
physical ability, gender, or sexual orientation. With many businesses deciding to go global, it
will be crucial for companies to hire a diverse staff that can help achieve business objectives
more efficiently. The biggest concern for top management is how to manage all these diverse
employees and still remain a successful business. These are many of the issues that Exxon
Mobil and Coca-Cola face every day.
After the merger in 1999, Exxon Mobil became the second-largest integrated oil
company. The company operates globally and recognized the need for diverse employees to
help with oil and gas exploration, production, supply, transportation, and petrochemical
production, as well as marketing around the world. Exxon Mobil understands that business is
all about people, and a diverse staff can help employees look at new challenges with a wide
range of ideas and perspectives while teaching each other at the same time. To show its
dedication to diversity, Exxon Mobil recruits new employees at over 100 colleges and
universities in the United States and more than 100 in other countries. By doing this, it is
able to keep its workforce diverse and highly qualified for the work that helps the company
succeed.
Coca-Cola also realized the importance of having a diverse company. The company
started in 1886 and has been a part of many people’s lives since then. Coca-Cola makes over
400 products and distributes them to over 200 countries speaking more than 100 different
languages. Since it deals with so many different cultures, it is crucial for Coca-Cola to have
a very diverse workforce that can take care of all of its customers’ needs and wants.
Employees must be able to tell the company the best way to market the products and the best
way to get the products to the customers. Coca-Cola must work together as a whole to reach
the goals it has set.
VI. RECOMMENDATIONS
These are only two of the many companies that feel having a diverse workplace gives
them a competitive advantage over other companies in the same line of work. Diversity is
essential for all businesses that plan on moving globally and should be an important area of
concern as we move into the future. Data from the United States Bureau of Labor Statistics
show that business trends are changing in favor of a more diverse workplace. The Hispanic
labor force will continue to grow at a steady pace, though not as fast as the Asian workforce,
for which projections indicate growth of over 45% in just a decade, with no signs of slowing
down (U.S. Department of Labor, 2005).
Another big part of diversity deals with gender. Top management was once almost
exclusively male, but women now make up nearly half of the workforce in the United States.
Expect business trends concerning gender diversity to change at a rapid pace as companies
realize its importance. In the past, many companies used terms such as affirmative action
and understanding different cultures to emphasize the importance of diversity. A term for
the future is diversity management, which deals with making sure that everyone understands
the cultural differences of all employees. Diversity management can be used advantageously
when looking to the future.
REFERENCES
CHAPTER 18
MANUFACTURING AND SERVICE
THE ROLE OF ELECTRONIC DATA INTERCHANGE IN
SUPPLY CHAIN MANAGEMENT
ABSTRACT
Ten years ago, even the most sophisticated retailers were just mastering EDI
(electronic data interchange) communication with vendors. Today, hyper-efficient supply
chains differentiate the world's leading retailers from the average retailers with merely super-
efficient supply chains. Regardless of the phase your business is in, an effective strategy for
measuring your supply chain effectiveness is a more complex proposition than ever.
I. INTRODUCTION
The Necessity and the Opportunity: While implementations of SCM analytics can be
difficult (determining the right metrics and gathering and cleaning data), the need for a
supply chain performance management system is growing. Outsourced operations and
strategic sourcing projects are accelerating the need for enterprise performance management
and continuous improvement. Through 2004, enterprises that implement interenterprise
metrics to measure the value of c-commerce initiatives will increase their ROI over a five-
year period (0.7 probability).
As supply chains change from linear to nonlinear customer delivery models for both
products and services, performance management is key to guaranteeing that each enterprise
can not only measure its own performance but also monitor and evaluate outsourced
dependent operations. In the face of this opportunity, many enterprises are rushing into
extended supply chain collaborative performance metrics without rationalizing the processes
of collaborative performance measurement. To be successful, enterprises must rethink
existing processes to design effective outward-facing processes and determine extended
supply chain metrics to enable differentiation. While future B2B relationships will use
common metrics to manage risk, determine entry, identify preferred trading partners, and
monitor and correct behavior, users today must design and develop their own metrics,
performance management solutions and processes. Because of the complexity of this
undertaking, progress on a common standard is not expected until 2003, with fewer than five
vendors expected to deliver solutions that measure multidimensional supply chain
effectiveness for multiparty processes (see Figure 1).
In planning for a supply chain analytics project, extra time should be allowed to identify,
harvest and cleanse data to ensure that KPIs have a meaningful tie to business objectives.
Industry consortia and benchmarking sources are useful in determining industry specific KPIs
and benchmarks.
2. Focus on KPI Definition — The Devil Is in the Detail: Common enterprise KPIs include
supplier performance, supply chain response time, forecast accuracy, inventory balances,
manufacturing cycle time and delivery performance. However, users find that even the three-
level SCOR Model (see Note 2) must be used with caution to ensure a consistent definition.
The SCOR Model is a general standard, which allows enterprises to define KPIs differently
(e.g., how is on-time delivery defined: the date the goods were shipped, the date of arrival, or
the time dropped in the customer’s trailer yard?). To be successful and to benchmark
accurately against the competition, trading partner communities must rationalize and define
metrics rather than allow participants to define their own "version of truth."
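To make the definitional pitfall concrete, the sketch below shows how the same set of orders yields very different on-time delivery KPIs depending on whether "on time" is judged by the ship date or the arrival date. The order data and function are illustrative assumptions, not drawn from the SCOR Model itself.

```python
from datetime import date

# Illustrative orders: each records a promised date plus ship and arrival dates.
orders = [
    {"promised": date(2006, 1, 10), "shipped": date(2006, 1, 9),  "arrived": date(2006, 1, 12)},
    {"promised": date(2006, 1, 15), "shipped": date(2006, 1, 15), "arrived": date(2006, 1, 15)},
    {"promised": date(2006, 1, 20), "shipped": date(2006, 1, 19), "arrived": date(2006, 1, 22)},
]

def on_time_rate(orders, milestone):
    """Percentage of orders whose chosen milestone met the promised date."""
    hits = sum(1 for o in orders if o[milestone] <= o["promised"])
    return round(100 * hits / len(orders), 1)

print(on_time_rate(orders, "shipped"))  # 100.0 under the ship-date definition
print(on_time_rate(orders, "arrived"))  # 33.3 under the arrival-date definition
```

Two trading partners reporting "on-time delivery" from this same data would report 100% and 33.3% respectively, which is exactly why the community must agree on one definition before benchmarking.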
3. Start and End With the Business Agreement: Successful performance management of
extended trading relationships starts with the business agreement. While many metrics are
"nice to have," evaluation of collaboration pilots supports the assertion that sustainable
performance measurement systems are closely tied to the agreement.
• What is the true cost of customer service by customer across the extended supply
chain?
• What are the true costs of outsourcing goods and services?
• How well is the extended supply chain performing and where does performance need
to be improved?
1. Additional modules offered by SCM suite vendors: SCM and SCE vendors that have
included supply chain analytic solutions as part of their offering.
2. Emerging SCM performance management specialists: Best-of-breed vendors that
have packaged supply chain performance management and industry specific KPIs for
heterogeneous application environments.
3. Solutions offered as an additional module by ERP expansionists: As an extension of
ERP, ERP vendors in the late 1990s marketed application suites that included
operational, executional and analytical capabilities. As these processes became more
collaborative, ERP II developed as a set of business practices optimizing enterprise
and interenterprise processes for collaboration, operational and financial processes.
4. New modules being developed by BI suites and platforms: Enterprise BI suites offer
multiple styles of BI functionality, including ad hoc query, reporting, charting,
multidimensional viewing and light analysis (such as trending). Enterprise BI suites
focus on scalability, usability and manageability. BI platforms are a more complete BI
offering, with a complete set of tools for the creation, deployment, support and
maintenance of BI applications. These are data-rich applications, with custom end-
user interfaces, organized around specific business problems, with targeted analyses
and models.
Selection should be made by a cross-functional team of IT and business professionals
focusing on the trade-offs of these four classes of solution, as illustrated in Figure 2.
Conclusions
Bottom Line: While enterprises should embrace supply chain performance initiatives, the
initial focus must be on enterprise performance management. Due to the complexity of
collaborative performance measurement, users must be conservative in project expectations
and resist market hype, as vendors are offering BI tools, but are struggling with product
delivery.
REFERENCES
THE EFFECT OF GENDER ON APOLOGY STRATEGIES
ABSTRACT
I. INTRODUCTION
This study is mainly concerned with potential gender effects in American university
students’ use of apology strategies. The researchers adopted the controversial and much-
criticized (Cameron, McAlister and O'Leary, 1989; Troemel-Plotz, 1991), yet partially
evidenced (Michaud and Warner, 1997; Basow and Rubenfeld, 2003), view of the two-
culture theory, which claims that men and women exist in different cultural worlds (as
opposed to the dominance theory which claims that men and women exist in the same
cultural world in which power and status are distributed unequally). Proponents of the two-
culture view claim that due to the striking differences between them, men and women belong
to different ‘communication cultures’ (Maltz and Borker, 1982; Tannen, 1990; Gray, 1992;
Schloff and Yudkin, 1993) or ‘speech communities’ (Wood, 2000; 2002).
The two-culture theory has mainly focused on gender differences in ‘troubles talk’,
intimacy, and emotion (Jefferson, 1988; Tannen, 1990). Bate and Bowker (1997: 166) claim
that "caring seems to be the principal category that differentiates one sex from the other".
Proponents of this theory claim that girls are taught that talk is the primary vehicle to
establish and maintain intimacy and connectedness (Maltz and Borker, 1982), while boys are
socialized to view talk as a mechanism for getting things done, accomplishing instrumental
tasks, conveying information, and maintaining status and autonomy (Wood and Inman,
1993).
interpreting, as well as different values, priorities, and agendas” (MacGeorge, Graves, Feng,
Gillihan, and Burleson; 2004:1)
The relationship between language and gender during childhood has been widely
addressed in the literature (Maltz and Borker, 1982; Huston, 1985; Tannen, 1990; Leaper,
1991, 1994; Swann, 1992; Maccoby, 1998; Wood, 2001), which suggests that girls are more
likely to use language to form and maintain connections with others, whereas boys are more
likely to use language to assert their independence, establish dominance, and achieve goals.
Tannen's (1990; 1994; 1995) research suggests that men and women have different modes of
communication and, thus, communication between them ought to be viewed as intercultural
communication. She (1990: 85) further argues that girls are socialized as children to believe
that "talk is the glue that holds relationships together," which is later reflected in their
perceptions of conversations as "negotiations for closeness in which people try to seek and
give confirmation and support, and to reach consensus." On the other hand, boys are taught to
maintain relationships through their activities, which later colors a man’s perceptions of
conversations as contests "in which he [is] either one-up or one-down." Along the same line,
Wood claims that “much of the misunderstanding that plagues communication between
women and men results from the fact that they are typically socialized in discrete speech
communities” (2000: 207).
To be able to draw conclusions which pertain to the major question of the research,
the researchers attempt first to identify apology strategies and cross-reference them with
those presented by Sugimoto (1997). More specifically, the study aims to answer the
following questions, of which the second is the central focus of the research:
1. What are the apology strategies used by American male and female
undergraduate students?
2. What are the potential differences in the use of apology strategies between
male and female respondents?
3. Do Sugimoto’s findings of American respondents’ use of apology strategies
hold true for this group of respondents?
In her study of Japanese and American apology strategies, Sugimoto (1997) put forth
the primary strategies, which include statement of remorse, accounts, description of damage,
and reparation; secondary strategies, which include compensation and the promise not to
repeat the offense; and seldom-used strategies, which include explicit assessment of
responsibility, contextualization, self-castigation, and gratitude. While remaining open to
other strategies, the present researchers use these as the basis of their data analysis.
III. METHODOLOGY
A short introduction to the study and instructions for answering the questions
accompanied the questionnaire.
One of the researchers personally visited classes and oversaw the data collection
process. She distributed the questionnaire, offered explanations and answered questions, and
collected the completed questionnaires in the course of one class session. The data were then
tallied to identify any potential differences which could be attributed to gender. To discover
the potential effect of gender, the researchers tallied the percentages of the apology strategies
used by male and female respondents.
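The tallying step described above can be sketched in a few lines of Python. The encoding below is hypothetical: the (gender, strategy) pair format and strategy labels are illustrative assumptions, not the study's actual coding scheme.

```python
from collections import Counter

def strategy_percentages(responses):
    """Tally apology-strategy percentages separately for each gender.

    `responses` is a list of (gender, strategy) pairs -- a simplified,
    hypothetical encoding of coded questionnaire data.
    """
    by_gender = {}
    for gender, strategy in responses:
        by_gender.setdefault(gender, Counter())[strategy] += 1
    percentages = {}
    for gender, counts in by_gender.items():
        total = sum(counts.values())
        # percentage of each strategy within that gender's responses
        percentages[gender] = {s: round(100 * n / total, 1)
                               for s, n in counts.items()}
    return percentages

data = [("F", "remorse"), ("F", "remorse"), ("F", "reparation"),
        ("M", "remorse"), ("M", "account")]
result = strategy_percentages(data)
```

Comparing the per-gender percentage tables produced this way is exactly the kind of comparison reported in the study's results.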
In order to find the apology strategies used by the sample, the researchers used two
types of tables: the first to clarify the method used by the student to show his/her remorse
(viz., statement of remorse), and the second to show other apology strategies employed in
each situation. The statement of remorse was manifested in different realizations including
one expression, two expressions, three expressions, one expression with one or more
intensifiers and two expressions with one or more intensifiers.
The researchers list the apology strategies used by the students, including those which
do not imply an apology. One such strategy, not addressed in previous research including
Sugimoto’s (1997), is that in which the wrongdoer exonerates him-/herself and, instead,
blames the victim for what had happened.
IV. CONCLUSIONS
Gender played an important role in the use of apology strategies which coincided with
reports in previous research. Male and female respondents differed in their use of apology
strategies. Female respondents' tendency to apologize more than their male counterparts was
reflected in their overt use of the statement of remorse. Female respondents also used more
manifestations of the statement of remorse than their male counterparts. Although both male
and female respondents used the same primary strategies of accounts, reparation,
compensation, and self-castigation, female respondents used them more than their male
counterparts. Furthermore, female respondents used slightly fewer non-apology strategies
than male respondents. To remedy the potential ‘intercultural’ misunderstanding between
men and women, proponents of the two-culture theory call on educators to develop programs
that foster ‘multicultural awareness’ of stylistically different, albeit functionally equivalent,
approaches to communication events (Wood, 1993).
REFERENCES
Basow, S.A. and Rubenfeld, K. (2003). "Troubles talk": Effects of gender and gender-typing.
Sex Roles: A Journal of Research, 48, 183-7.
Bate, B. and Bowker, J. (1997). Communication and the sexes (2nd ed.). Prospect Heights,
Illinois: Waveland Press.
Burleson, B. R. (1997). A different voice on different cultures: Illusion and reality in the
study of sex differences in personal relationships. Personal Relationships, 4, 229-41.
Cameron, D. (1992). Naming of parts: Gender, culture, and terms for the penis among
American college students. American Speech, 67, 367-82.
Cameron, D., McAlister, F., and O'Leary, K. (1989). Lakoff in context: The social and
linguistic functions of the tag questions. In J. Coates and D. Cameron (Eds.), Women
in their speech communications. London: Longman.
Goldsmith, D.J. and Fulfs, P.A. (1999). "You just don't have the evidence": An analysis of
claims and evidence in Deborah Tannen's You Just Don't Understand. In M.E. Roloff
(Ed.), Communication yearbook 22. Thousand Oaks, California: Sage.
Goodwin, M.H. (1980). Directive-response speech sequences in girls’ and boys’ task
activities. In S. McConnell-Ginet, R. Borker, and N. Furman (Eds.), Women and
language in literature and society. New York: Praeger.
Gray, J. (1992). Men are from Mars, women are from Venus. New York: Harper Collins.
Huston, A.C. (1985). The development of sex typing: Themes from recent research.
Developmental Review, 5, 1–17.
Jefferson, G. (1988). On the sequential organization of troubles-talk in ordinary conversation.
Social Problems, 35, 418-41.
Lakoff, R.T. (1975). Language and woman’s place. New York: Harper and Row.
Leaper, C. (1991). Influence and involvement in children’s discourse: Age, gender, and
partner effects. Child Development, 62, 797–811.
Leaper, C. (1994). Exploring the consequences of gender segregation on social relationships.
In C. Leaper (Ed.), Childhood gender segregation. San Francisco: Jossey-Bass.
Maccoby, E.E. (1998). The two sexes: Growing up apart, coming together. Cambridge,
Massachusetts: Belknap Press/Harvard University Press.
MacGeorge, E.L, Graves, A.R., Feng, B., Gillihan, S.J., and Burleson, B.R. (2004). The myth
of gender cultures: Similarities outweigh differences in men's and women's provision
of and responses to supportive communication. Sex Roles: A Journal of Research, 50,
143-75.
Maltz, D.N. and Borker, R.A. (1982). A cultural approach to male-female mis-
communication. In J.J. Gumperz (Ed.), Language and social identity. Cambridge:
Cambridge University Press.
Michaud, S.L. and Warner, R.M. (1997). Gender differences in self-reported response to
troubles talk. Sex Roles: A Journal of Research, 37, 527-40.
Porter, R. and Samovar, L. (1985). Approaching intercultural communication. In L. Samovar
and R. Porter (Eds.), Intercultural communication (4th ed). Belmont, California:
Wadsworth.
Schloff, L. and Yudkin, M. (1993). He and she talk: How to communicate with the opposite
sex. New York: Plume Books.
Sugimoto, N. (1997). A Japan-U.S. comparison of apology styles. Communication Research,
24, 4, 349-70.
Swann, J. (1992). Girls, boys, and language. New York: Blackwell.
Tannen, D. (1990). You just don't understand: Women and men in conversation. New York:
William Morrow.
Tannen, D. (1994). Talking from 9 to 5: How women’s and men’s conversational styles
affect who gets heard, who gets credit, and what gets done at work. New York:
William Morrow and Company, Inc.
Tannen, D. (1995). Gender and discourse. New York: Oxford University Press.
Thorne, B. (1993). Gender play: Girls and boys in school. New Brunswick, New Jersey:
Rutgers University Press.
Troemel-Plotz, S. (1991). Selling the apolitical. Reprinted in J. Coates (Ed.) (1998),
Language and gender: A reader. Oxford: Blackwell.
Vangelisti, A.L. (1997). Gender differences, similarities, and interdependencies: Some
problems with the different cultures perspective. Personal Relationships, 4, 243-53.
Wood, J.T. (1993). Engendered relations: Interaction, caring, power, and responsibility in
intimacy. In S. Duck (Ed.), Social context and relationships. Newbury Park,
California: Sage.
Wood, J.T. (1997). Clarifying the issues. Personal Relationships, 4, 221-8.
Wood, J.T. (2000). Relational communication (2nd ed.). Belmont, California: Wadsworth.
Wood, J.T. (2001). Gendered lives: Communication, gender, and culture. (4th ed.). Belmont,
California: Wadsworth.
Wood, J.T. (2002). A critical response to John Gray's Mars and Venus portrayals of men and
women. Southern Communication Journal, 67, 201-10.
CHAPTER 19
MARKETING
SALESPEOPLE’S PERSONAL VALUES:
THE CASE OF WESTERN PENNSYLVANIA
ABSTRACT
Shared personal values among an organization’s members are crucial for long-term
success and employee satisfaction. A key contribution of this study is the measurement of
the personal values of salespeople in Western Pennsylvania. Schwartz’s Value Inventory
was used for the measurement.
I. INTRODUCTION
II. METHODOLOGY
about the characteristics of the salespeople and their values. As shown in Table 1 above, the
sample consisted of 52.7 percent females and 47.3 percent males. The sample members were
highly educated (40.6 percent had college or post-college degrees), about 37.2 percent were
between 45 and 54 years of age, and about 35.6 percent earned incomes above $50,000. One
of the most striking characteristics of the sample was that 52.1 percent of the salespeople had
a tenure of between one and five years.
III. RESULTS
Schwartz’s Value Survey was administered to the salespeople, who were asked to
indicate on a five-point Likert scale the importance of each value as a guiding principle in
their lives. Table 2 shows the results for the fifty-seven statements concerning the importance
salespeople attach to these values.
The value data were analyzed using the factor analysis module in SPSS. The principal
components method was used for initial factor extraction, with the eigenvalue-greater-than-1
criterion, and the varimax method of rotation was applied. Sample size is one element that
can affect the adequacy of factor models.
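The extraction procedure described here (principal components on the item correlations, Kaiser's eigenvalue-greater-than-1 criterion, then varimax rotation) can be sketched with NumPy. This is a generic illustration of the technique, not a reproduction of the SPSS analysis or of the survey data:

```python
import numpy as np

def varimax(loadings, tol=1e-8, max_iter=100):
    """Varimax rotation of a factor-loading matrix (standard SVD algorithm)."""
    L = np.asarray(loadings, float)
    n, k = L.shape
    if k < 2:
        return L                               # nothing to rotate
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        LR = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (LR ** 3 - LR @ np.diag((LR ** 2).sum(axis=0)) / n))
        R = u @ vt
        new_var = s.sum()
        if new_var < var * (1 + tol):          # converged
            break
        var = new_var
    return L @ R

def extract_factors(X):
    """Principal components on the correlation matrix, keep eigenvalues > 1."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize the items
    corr = np.corrcoef(Z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]          # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > 1.0                       # Kaiser criterion
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
    return varimax(loadings), eigvals

# Illustrative data: 200 cases, two pairs of highly correlated items,
# which should yield exactly two retained factors.
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=200), rng.normal(size=200)
X = np.column_stack([f1, f1 + 0.1 * rng.normal(size=200),
                     f2, f2 + 0.1 * rng.normal(size=200)])
loadings, eigvals = extract_factors(X)
```

The percentage of variance explained, reported below as 77.37%, corresponds to the sum of the retained eigenvalues divided by the number of items.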
All the items were first factor analyzed. Rotated factor loadings were examined
assuming different numbers of factors for extraction. After deleting nine values (respect
for tradition, social recognition, wisdom, humble, healthy, preserving public image,
responsible, clean, and self-indulgent; values 18, 23, 26, 36, 42, 46, 52, 56, and 57), all the
salespeople’s responses could be incorporated into the analysis. This was carried out, and the
results showed considerable improvement over the previous attempt, as some meaningful
patterns emerged. Table 3 depicts the sorted rotated factor loadings for the items based on a
nine-factor extraction. The total figure of 77.37% represents the percentage of variance of all
48 items explained by the nine factors.
Nine different factors were found to act as value-based guides of salespeople’s lives.
These are:
The purpose of this study was to investigate the personal values that guide
salespeople’s lives. The data confirm the widespread presence of the 10 value types of
Schwartz’s Value Inventory. Future studies should combine information about both
salespeople and their managers. Sales managers should be asked to evaluate the
salespeople’s performance. Once data from both salespeople and their managers are
available, the causal relationship between salespeople’s values and their performance can be
investigated. Future research can focus on how organizational and individual factors affect
sales force commitment, satisfaction, and turnover in sales organizations, and on whether
differences in individual sales force values influence organizational commitment, job
satisfaction, and turnover.
REFERENCES
Apasu, Yao, Ichikawa, Shigeru, and Graham, John L., "Corporate Culture and Sales
Force Management in Japan and America", The Journal of Personal Selling &
Sales Management, Vol. 7, Iss. 3, Nov 1987, 51-63.
Gutman, J., "A Means-End Chain Model Based on Consumer Categorization
Processes", Journal of Marketing, 46, Spring 1982, 60-72.
Schwartz, Shalom H. and Sagie, Galit, "Value Consensus and Importance: A
Cross-National Study", Journal of Cross-Cultural Psychology, Vol. 31,
Iss. 4, Jul 2000, 465-497.
Schwartz, Shalom H., "Cultural Value Differences: Some Implications for Work",
Applied Psychology: An International Review, 48, 1999, 23-48.
Schwartz, Shalom H., "Value Priorities and Behavior: Applying a Theory of
Integrated Value Systems", in C. Seligman, J. M. Olson, and M. P. Zanna
(Eds.), The Psychology of Values: The Ontario Symposium, No. 8, Hillsdale,
NJ: Lawrence Erlbaum, 1996, 1-24.
Schwartz, Shalom H. and Huismans, Sipke, "Value Priorities and Religiosity in
Four Western Religions", Social Psychology Quarterly, Vol. 58, Iss. 2,
June 1995, 88-107.
Schwartz, S. H. and Sagiv, L., "Identifying Culture-Specifics in the Content and
Structure of Values", Journal of Cross-Cultural Psychology, 26, 1995,
92-116.
Schwartz, Shalom H., "Are There Universal Aspects in the Structure and Content
of Human Values?", Journal of Social Issues, 56, 1994, 19-45.
Schwartz, Shalom H., "Universals in the Content and Structure of Values:
Theoretical Advances and Empirical Tests in 20 Countries", Advances
in Experimental Social Psychology, 25, 1992, 1-65.
Schwartz, S. H., and Bilsky, W., "Towards a Psychological Structure of Human
Values", Journal of Personality and Social Psychology, 53, 1987, 550-562.
BEHAVIORAL AND ATTITUDINAL DIFFERENCES BETWEEN ONLINE
SHOPPERS AND NON-ONLINE SHOPPERS
ABSTRACT
The Internet is a growing market, and more research on shopping experiences is
needed. This study investigates and compares the attitudinal and behavioral characteristics
of online purchasers versus non-purchasers. Demographic characteristics of online
purchasers indicated that the majority of them are younger than 24, moderate or high risk
takers, single, college educated, and earn an average income of $35,000. On the other hand,
the majority of non-purchasers are younger than 35, low risk takers, married or divorced,
college educated, and earn an average income of $15,000.
I. INTRODUCTION
Consumers, as the decision makers for their own shopping needs and wants, are in a
position to make the best choices in terms of product types and brand names. The Internet
now offers a shopping environment emphasizing convenience, lower prices, quality, a large
variety of product choices, brand names, and a global shopping network. Online shoppers
enjoy a 24-hours-a-day, 7-days-a-week shopping environment and avoid unnecessary
driving to shopping malls. Online non-shoppers still believe that online shopping is a risky
decision and that it is inconvenient (Rowley, 2000; Warrington, Abgrab, and Caldwell,
2000; Bhatnagar, Misra and Rao, 2000).
Characteristics of Online Non-shoppers
Online non-shoppers are a group of consumers who enjoy shopping away from their
home. They do not mind driving far for shopping or spending time in cashier lines during
busy periods. In addition, they would like to see, touch, and inspect their choices instead of
looking at pictures on Web sites. They rely on their own experience or on retail salespersons
for information about pricing, product quality, and colors. Online non-shoppers are not
interested in technology or computers, and they have the highest fears of credit card theft
and monetary loss. They belong to the highly risk-averse group and do not trust online
ordering (Swinyard and Smith, 2003).
Research on online shopping behaviors has become a popular topic, and e-retailers
have become interested in determining who buys on the Internet, what they think about
online shopping, what they purchase online, how often they shop online, what their
experiences on the Internet are, where they use the Internet for their shopping, how much
time they spend on the Internet, and how perceived risk-taking behaviors differ between
online shoppers and non-shoppers. Therefore, this study attempts to investigate the
attitudinal and behavioral characteristics of online shoppers versus non-shoppers and
examines research questions such as the reasons behind online and non-online shopping,
satisfaction levels and product categories of online purchases, Internet usage among online
versus non-online shoppers, and the demographic profiles of online versus non-online
shoppers.
III. METHODOLOGY
The data for this study were gathered in the Central Pennsylvania region and analyzed
in SPSS using frequency statistics, stepwise regression, and Pearson correlation analysis.
The stepwise regression technique was used for its power to identify the significant
variables, namely the reasons/factors for online and non-online shopping. The Pearson
correlation was selected for its simplicity and its strength in serving the purpose of this
study. The Pearson correlation coefficient provides a measure of the degree of significant
linear association that exists between a dependent and an independent variable (Smith and
Albaum, 2005). In this study, dependent (Y) and independent (X) variables are applied to
identify the reasons/factors affecting online and non-online shopping.
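The Pearson test underlying this analysis can be sketched as follows. The data here are illustrative, not the study's survey responses; the t statistic with n - 2 degrees of freedom is the standard test of whether the correlation differs from zero:

```python
import math
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient and its t statistic (n - 2 df)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    n = len(x)
    t = r * math.sqrt((n - 2) / (1 - r * r))   # test statistic for H0: rho = 0
    return r, t

# Illustrative data: an attitude score (x) against a shopping-frequency score (y)
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
r, t = pearson_r(x, y)
```

The computed t is compared against the critical value at the chosen alpha (here, .05) to decide significance.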
IV. FINDINGS
Demographic characteristics of the online shoppers indicated that they are female
(54.0%) or male (46.0%); 18-24 years old (51.3%) or 25-35 years old (27.3%); single
(58.0%) or married (24.0%); without children (71.3%) or with one or more children (28.6%);
earning less than $15,000 (30.7%), $15,000-$30,000 (29.3%), or more than $31,000 (38.6%);
living in urban areas (17.3%), rural areas (29.3%), or suburbs (53.3%); having some college
(56.0%), a college degree (24.0%), or a high school education (12.7%); and working as
white-collar professionals (50.6%) or blue-collar workers (12.0%). In addition, the data
demonstrated that the majority of online shoppers own a PC (94.0%) and all have access to
the Internet (100.0%).
(31.0%), $15,000-$30,000 (40.5%), or higher than $31,000 (28.6%); live in urban areas
(23.8%), rural areas (33.3%), or suburbs (42.9%); have some college (47.6%), a college
degree (11.9%), or a high school education (35.7%); and are white-collar professionals
(40.4%) or blue-collar workers (23.8%). In addition, the data demonstrated that only 72.2%
of online non-shoppers own a PC, and 88.1% have access to the Internet.
Pearson correlation analysis of online shoppers, on the other hand, showed evidence
of a significant (α = .05) correlation between online shopping behaviors and the security of
the Web site, the level of risk, the low prices of online products, and the education level of
online shoppers. Respondents indicated that they buy toys, music and CDs, books, and
travel. Table 2 shows the factors affecting online shopping.
Table 2: Online vs. Non-Online Shopping Behaviors
V. CONCLUSION
The findings of this study are similar to previous research findings and share some of
the concerns regarding security and safety on the Internet, which need to be resolved.
Respondents stated that they would shop online for convenience and for prices lower than
offline. On the basis of these findings, online Internet marketers must focus on Internet
security and safety. Consequently, they should introduce new technologies that make the
Internet market a safer environment for every shopper. In addition, they should be
competitive with offline brick-and-mortar retailers by offering lower prices and a more
convenient and secure shopping environment in terms of service, speed of navigation,
ordering process, and delivery time and cost.
REFERENCES
AN EXPLORATORY MODEL FOR TURKISH HEALTH CARE CONSUMERS
ABSTRACT
I. INTRODUCTION
There are several challenges facing providers of preventative health care. One
challenge deals with getting the at-risk person to realize they are at risk. Generally speaking,
people tend to think they are immune to the consequences of health threats. They may ignore
warning labels and routinely indulge themselves in unhealthy activities thinking the resulting
negative outcomes only happen to “other” people. This sense of denial can be very
frustrating for preventative health care providers. A second challenge for providers is the
high failure rate of long-term compliance with the strict programs these at-risk patients must
incorporate into their lives. Patients with disease processes such as hypertension, ulcers, and
diabetes often fail to routinely adhere to their daily regimen, which, when not followed
strictly, may provide less than the desired results (Jayanti, 1997).
Due to the widespread crisis of escalating health care costs, efforts to educate the
public about health-promoting lifestyles, or wellness, have been praised as a partial solution
to this growing concern (Elias and Murphy, 1986; De Arrellano, 1990). One strategy many
organizations have taken to deal with rapidly escalating health insurance premiums has been
the development and implementation of health promotion or wellness programs designed to
improve employee health, in an attempt to decrease the costs associated with health-related
employee benefits (Busbin and Campbell 1990).
II. METHODOLOGY
Health knowledge measures the extent to which health care customers (in most cases,
patients) understand health problems and the ways to solve them. This construct is measured
by knowledge scales developed by Jayanti (1997) and Goud (1988). Respondents indicated
their level of knowledge with six questions concerning familiarity with health care issues
and provided answers on a five-point Likert-type scale (1 = strongly disagree, 5 = strongly
agree). The scale yielded a Cronbach’s alpha reliability of 0.80. A scale from the preventive
health care behavior literature, covering three subgroups of preventive behavior (diet, life
style, and preventive medicine), was used to measure preventive health care behavior.
Respondents indicated their agreement about each preventive behavior in keeping their good
health on a five-point scale (1 = strongly disagree, 5 = strongly agree). This scale yielded an
alpha of 0.71. Agreement with marketing efforts in terms of advertising was captured using
a three-item scale from the health care marketing literature (Kraft and Goodell, 1993; Goud,
1988). Respondents indicated their agreement with each statement on a five-point scale
(1 = strongly disagree, 5 = strongly agree). This scale reported a coefficient alpha of 0.76.
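Cronbach's alpha, used above to assess each scale's reliability, is straightforward to compute from a respondents-by-items score matrix. A minimal sketch of the standard formula (the score matrix below is illustrative, not the survey data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance of totals)
    """
    X = np.asarray(items, float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)          # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)      # variance of the scale totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative: four respondents answering a three-item five-point scale
scores = [[2, 3, 3],
          [4, 4, 3],
          [5, 4, 4],
          [3, 2, 2]]
alpha = cronbach_alpha(scores)
```

Values above the conventional 0.70 threshold, as with the 0.80, 0.71, and 0.76 reported here, are taken as evidence of internal consistency.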
The data used in this study were derived from a survey of 565 randomly selected
individuals over 21 years old in several Istanbul metropolitan areas. The data collection
instrument was a self-administered questionnaire consisting of closed-ended questions
carefully prepared with the assistance of faculty from the health management department at
Istanbul University. The questionnaire consisted of two sections, with questions on the
demographic characteristics of the study sample and on attitudes, behaviors, and knowledge
about health care issues. As part of a larger health-related consumer survey, data were
gathered by research assistants of the marketing department of Istanbul University. The
questionnaires were verified by calling back some of the respondents and by debriefing the
interviewers.
Confirmatory factor analysis was used to test the measurement of the seven latent
constructs (see Figure 1). After deleting seven statements (3, 5, 8, 9, 15, 16, and 18), all the
responses could be incorporated into the analysis. This was carried out, and the results
showed considerable improvement over the previous attempt, as some meaningful patterns
emerged. The final model contains 19 observed variables and seven underlying constructs.
Health knowledge, wellness orientation, and agreement with marketing efforts are first-order
factors. Preventive health care behavior is a second-order factor, composed of the three
dimensions of diet, life style, and preventive medicine. The internal consistency of each
construct in the model was assessed using measures of composite reliability, Cronbach's
alpha, and variance extracted. Table 1 presents these figures for the measurement model. All
constructs exceed the 0.7 recommendation for composite reliability and Cronbach's
coefficient alpha, providing evidence of internal consistency. Evidence of convergent validity
is provided when the coefficient of each indicator (variables with a V label in Figure 1) to its
construct is significant. The parameter estimate for every indicator in the tested measurement
model is significant. Table 1 also contains the overall results of the revised measurement
model. The chi-square value for the model is 162.58, based on 137 degrees of freedom, with
a probability of 0.078, indicating a good fit. The chi-square value is an absolute measure of
fit and can be sensitive to sample size and number of indicators in the model. The number of
indicators and constructs in this model is high, so fit indices other than the chi-square should
be examined. Incremental fit measures compare the fit of a tested model to a null baseline
model. EQS provides two incremental fit measures. The first, the Bentler-Bonett normed fit
index (NFI) is 0.979 for the model.
Parameter Estimates

Construct                          Indicator   Estimate   t-Value
Wellness Orientation               V20         0.557       6.312
                                   V21         0.536       6.682
                                   V22         0.740       9.106
                                   V23         0.688       8.889
Health Knowledge                   V14         0.572       8.100
                                   V17         0.633       8.701
                                   V19         0.800      12.196
Diet                               V1          0.479       7.847
                                   V2          0.602      10.042
                                   V4          0.464       9.406
Life Style                         V6          0.559       5.088
                                   V7          0.738       9.919
                                   V10         0.789       9.986
Preventive Medicine                V11         0.812       8.979
                                   V12         0.673       7.979
                                   V13         0.782       9.604
Agreement with Marketing Efforts   V24         0.651       7.209
                                   V25         0.726       8.517
                                   V26         0.649       7.232
[Figure 1: Path diagram of the measurement model, linking the observed indicators
(V1-V26) to the seven latent constructs, including diet, life style, preventive medicine,
wellness orientation, health knowledge, importance attributed to preventive health care, and
agreement with marketing efforts.]
The second, the Bentler-Bonett nonnormed fit index (NNFI), takes into account the degrees
of freedom in a model. This index is 0.986. Parsimonious fit measures are more precise than
absolute fit and incremental fit indices because they evaluate the fit of a model in relationship
to its degrees of freedom and sample size. The parsimonious fit index calculated in EQS is
the comparative fit index (CFI). The comparative fit index for the proposed measurement
model is 0.988. The various fit indices support the revised measurement model.
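The three incremental/comparative fit indices reported here are simple functions of the model and null-model chi-square values and degrees of freedom. A sketch using the standard formulas; the null-model values below are hypothetical, since the paper does not report them:

```python
def fit_indices(chi2_model, df_model, chi2_null, df_null):
    """NFI, NNFI (Tucker-Lewis), and CFI from model and null chi-squares."""
    # Normed fit index: proportional chi-square improvement over the null model
    nfi = (chi2_null - chi2_model) / chi2_null
    # Nonnormed fit index: same idea, penalizing model degrees of freedom
    nnfi = ((chi2_null / df_null) - (chi2_model / df_model)) \
        / ((chi2_null / df_null) - 1.0)
    # Comparative fit index: based on noncentrality (chi-square minus df)
    d_model = max(chi2_model - df_model, 0.0)
    d_null = max(chi2_null - df_null, 0.0)
    denom = max(d_model, d_null)
    cfi = 1.0 if denom == 0 else 1.0 - d_model / denom
    return nfi, nnfi, cfi

# Model chi-square and df are taken from the text (162.58 on 137 df);
# the null-model chi-square and df (1000.0 on 171 df) are made up here.
nfi, nnfi, cfi = fit_indices(162.58, 137, 1000.0, 171)
```

Because the null-model values are invented, these outputs will not match the paper's 0.979/0.986/0.988; the function only shows how such indices are derived.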
Estimation of the structural parameters is the second step in the structural equation
modeling technique (Anderson and Gerbing 1988). The overall fit of the model is good, with
a chi-square of 12.58, 8 degrees of freedom, p = 0.134, NFI = 0.989, NNFI = 0.991, and
CFI = 0.997. Preventive behavior is a second-order factor in the structural model, and the
three first-order factors (diet, life style, and preventive medicine) loading onto it were all
significant. The loadings are included in Figure
IV. CONCLUSIONS
REFERENCES
BENEFIT SEGMENTATION BY FACTOR ANALYSIS: AN EMPIRICAL STUDY
TARGETING THE SHAMPOO MARKET IN TURKEY
ABSTRACT
I. INTRODUCTION
Market segmentation is based on a simple idea: not everyone wants the same things.
This idea was first introduced by Wendell Smith in 1956 (Smith, 1956). The crucial
advantage offered by market segmentation is that it provides a structured way of presenting
the marketplace facing the company (Wilkie, 1994). In today’s competitive markets,
segmentation has become an extremely important strategy for all companies.
The concept of segmentation in marketing recognizes that consumers differ not only
in the price they will pay, but also in the wide range of benefits they expect from a product
or service and its method of delivery (Doyle, 1987). In this regard, buyers are divided
according to the different benefits they seek from the product. Behavioral segmentation is
important, since segmentation only has value if it is related to consumer-product
relationships (Arnold, Price and Zinkhan, 2004). Because different people seek different
benefits from the same product or service, it is possible for marketers to use benefit
segmentation (Russel, 1995). Marketing administrators are always challenged to identify the
single most important benefit of their product or service that will be most significant to the
consumer. Marketers try to explore what sorts of benefits are desired and to determine which
of these benefits lead to the greatest amount of customer approval. Benefits can take a
variety of forms. Some consumers derive benefits from the functions products perform,
while others derive social benefits, for instance from anonymous disposition behaviors such
as charitable giving to social service agencies (Arnold, Price and Zinkhan, 2004).
Market segmentation studies are also used to guide the repositioning of a product or
the addition of a new market segment. Nintendo, for example, has been very successful in
capturing a large share of the children’s market for its electronic games, but now seeks to
attract adult users (Schiffman and Kanuk, 1999).
The major power of benefit segmentation is that the benefits sought have a causal
relationship to future behavior. However, complexities can occur in deciding the exact
benefits to be highlighted and in becoming certain that customers’ stated motives are their
real motives. Failure to understand the benefits which consumers may be seeking can
prevent market success (Young et al., 1978). Keeping these limitations in mind, this
research has focused on applying benefit segmentation to the shampoo market in Turkey.
II. METHODOLOGY
The questioning included areas such as shampoo use, frequency of use, buying
criteria, brand loyalty and buying decision process for shampoo. Major results of the research
also included in-depth reasoning behind the features of shampoos.
The focus group interview revealed that there were seventeen features associated with
shampoo buying. They were price, brand name, fragrance, vitamins, naturalness, prevents
eye burn, prevents dandruff, softens hair, provides brightness, avoids hair loss, easy to foam,
easy to rinse, packaging, ergonomics, provides volume, avoids stickiness, and appropriate
for hair.
The focus group results were incorporated into the design of the questionnaire.
III. FINDINGS
Using a structured questionnaire, 240 customers were asked to rate the importance of
the shampoo features identified and to compare the performance of the many shampoos with
their “ideal shampoo.” In this way it was possible to see which quality characteristics are
most important for meeting or exceeding customers’ expectations.
In this case, considering mean value “Avoids Hair Lose” has the highest importance.
“Avoids Stickiness and Appropriate for Hair ” attributes were ranked the next priority.
According to the Table 1, “Packaging” had the lowest priority. To be sure whether or not all
attributes are important, one needs an exploratory factor analysis.
After deleting five attributes (Price, Brand Name, Prevention of Eye Burn, Packaging, and
Ergonomics), all the responses could be incorporated into the analysis. This was carried out.
The results showed considerable improvement over the previous attempt, as some meaningful
patterns emerged. Three factors were found: a "Manageability" factor (Factor 1), a
"Maintenance" factor (Factor 2) and a "Cleanliness" factor (Factor 3). The remaining
twelve items are shown in Table 2. The total figure of 68.37% represents the
percentage of variance of all 12 items explained by the three factors.
Attributes              Factor 1   Factor 2   Factor 3
Provides brightness       0.800
Provides volume           0.758
Softens hair              0.657
Fragrance of shampoo      0.572
Avoids stickiness         0.567
Prevents dandruff         0.548
Naturalness                          0.851
Vitamins                             0.747
Appropriate for hair                 0.703
Avoids hair loss                     0.632
Easy to foam                                    0.817
Easy to rinse                                   0.803
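The extraction step behind Table 2 can be illustrated with a short sketch. The following is a hedged example on synthetic data (the loading structure, the number of attributes, and all variable names are assumptions for demonstration; they are not the study's responses): it extracts principal components from the correlation matrix of the ratings, retains those with eigenvalues above 1.00, and reports the share of total variance they explain, the analogue of the 68.37% figure above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 240  # number of respondents, matching the survey sample

# Synthetic ratings: two latent benefit factors drive six observed attributes.
# (Illustrative only -- the study used 12 attributes and found three factors.)
f1 = rng.normal(size=n)
f2 = rng.normal(size=n)
X = np.column_stack([
    f1 + 0.3 * rng.normal(size=n),  # attributes loading on latent factor 1
    f1 + 0.3 * rng.normal(size=n),
    f1 + 0.3 * rng.normal(size=n),
    f2 + 0.3 * rng.normal(size=n),  # attributes loading on latent factor 2
    f2 + 0.3 * rng.normal(size=n),
    f2 + 0.3 * rng.normal(size=n),
])

R = np.corrcoef(X, rowvar=False)              # correlation matrix of attributes
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
retained = eigvals[eigvals > 1.0]             # keep factors with eigenvalue > 1
pct_variance = 100.0 * retained.sum() / eigvals.sum()
print(len(retained), round(pct_variance, 2))
```

With this simulated structure, exactly two components survive the eigenvalue cutoff and together account for most of the variance, mirroring how the study arrived at its three-factor solution.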
Factor analysis, a multivariate technique, links the six attributes in Factor
1, the four attributes in Factor 2 and the two attributes in Factor 3 in such a way that only
the unique contribution of each of the twelve attributes is considered for each factor. Thus,
using factor analysis avoids potential problems of multicollinearity. The Cronbach's alpha
measures of reliability for the three factors were 0.80 for Factor 1, 0.79 for Factor 2 and
0.74 for Factor 3. All three values are above the traditionally acceptable research threshold
of 0.70 (Raju, 1995).
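The reliabilities above follow the standard Cronbach's alpha formula, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the summed scale). A minimal sketch, with an illustrative function name and toy data rather than the study's responses:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of summed scale
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Perfectly consistent items (every respondent rates all items identically)
# yield the maximum alpha of 1.0
perfect = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
print(round(cronbach_alpha(perfect), 3))  # -> 1.0
```

Values of 0.80, 0.79 and 0.74 from such a computation would all clear the conventional 0.70 cutoff cited in the text.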
IV. CONCLUSION
Wills (1985) argues that a major condition for successful segmentation is that the
segmentation criteria must be appropriate to the purchase criteria of customers. It is
specifically the connecting of consumer groups through the benefits they seek that makes
benefit segmentation such a helpful and practical marketing technique. Seventeen customer
demands were obtained from the focus group study. These customer demands were grouped
under three factors, labeled the manageability factor, the maintenance factor and the
cleanliness factor. These three factors include only twelve of the seventeen attributes; five
attributes were eliminated because of their low relationship with the corresponding factors.
By examining their strengths, companies can pinpoint the benefit markets they are
most likely to appeal to. By noting and, if so desired, overcoming their weaknesses, they can
develop benefits to appeal to previously unreachable markets. By operating in this manner,
companies should be able to market more effectively and efficiently to one or more groups of
customers than is possible using more traditional methods of market segmentation (Minhas
and Jacobs, 1996).
REFERENCES
Arnould, Eric, Price, Linda and Zinkhan, George, Consumers, 2nd edition, McGraw-Hill/
Irwin, Boston, 2004.
Clarke, Darral G., "Johnson Wax", Harvard Business School Review, August 2, 1999.
Doyle, P., "Managing the marketing mix", in Baker, M. (Ed.), The Marketing Book,
Heinemann, London, Ch. 12, 1987.
Haley, R.I., "Benefit segmentation: a decision-oriented research tool", Journal of
Marketing, Vol. 32 No. 3, July 1968, pp. 30-35.
Haley, Russell, "Benefit Segmentation: A Decision-Oriented Research Tool", Marketing
Management, No. 1, Summer 1995, pp. 59-63.
Kotler, Philip, Marketing Management, 11th edition, Prentice Hall, Englewood Cliffs,
NJ, 2003.
Minhas, Raj Singh and Jacobs, Everett M., "Benefit segmentation by factor analysis: an
improved method of targeting customers for financial services", International
Journal of Bank Marketing, Vol. 14 No. 3, 1996.
Schiffman, Leon G. and Kanuk, Leslie Lazar, Consumer Behavior, 8th edition, Prentice
Hall, New Jersey, 2000.
Smith, W.R., "Product differentiation and market segmentation as alternative marketing
strategies", Journal of Marketing, Vol. 20 No. 3, 1956.
Wilkie, W., Consumer Behavior, 3rd edition, John Wiley & Sons, New York, NY, 1994.
Wills, G., "Dividing and conquering: strategies for segmentation", International Journal
of Bank Marketing, Vol. 3 No. 4, 1985, pp. 36-46.
Young, S., Ott, L. and Feigin, B., "Some practical considerations in market segmentation",
Journal of Marketing Research, Vol. 15 No. 3, August 1978, pp. 405-412.
CHAPTER 20
ORGANIZATIONAL BEHAVIOR
AND
ORGANIZATIONAL THEORY
IMPACT OF PERSONALITY FACTORS ON PERCEIVED IMPORTANCE OF
CAREER ATTRIBUTES
ABSTRACT
Beyond the vocational types that Holland’s theory ascribes to individuals and job
functions, there are values or attributes of a career that can impact an individual’s choice of
one career over another. In this study, preferences for certain attributes load together onto
factors that represent different measures of career attribute preference. For a sample of 458
individuals, these factors are shown to have mild, but significant, regression relationships to
personality factors as measured using the RightPath6 assessment. R2 values are in the .05 to
.15 range, but for each career attribute factor, significant regression models exist with either
the personality factors or subfactors. For some individuals, these relationships to personality
could have a significant impact on career choice.
I. INTRODUCTION
While studies on the relationship between personality and vocational choice have
been numerous in recent years (e.g. Nordvik, 1996; Pietrzak and Page, 2001; and Bozionelos,
2004), fewer studies (e.g. Halaby, 2003; Johnson, 2001; and Karl and Sutton, 1998) have
investigated career values (also called job values or career attributes), and fewer still have
looked at the impact of personality on these value or attribute measures (Nordvik, 1996). The
choice of one job over another will typically involve a number of factors. These include
classifications of the specific type of work to be done, an inventory of the skills required for
the job and those possessed by the applicant, and an assessment of the benefits that may
accrue to the applicant as a result of engaging in a specific type of work. These, often
intangible, benefits have critical importance, as they will likely contribute to the overall level
of fulfillment experienced by the employee, and are related directly to job satisfaction
(Locke, 1976). In this study we seek to confirm the existence of significant relationships
between personality and choice of career attributes using different scales for personality and
career attributes than those used by Nordvik, thus broadening the concept validity.
IV. METHODOLOGY
The sample under study in this investigation consisted of 458 individuals, including a
mix of working professionals and college students, both male and female. For each individual
in the study, two instruments were administered, the RightPath6 profile to assess personality
traits, and a simple Work Values Inventory. The assessment instruments were administered as
part of a series of career counseling seminars, delivered in corporate settings, or through the
Career Management offices on college campuses. The RightPath6 profile is a forced-response
version of the Career Direct Personality Inventory (CDPI), developed for career counseling
(Toth, Stokes, Garnett, Ellis & Noble, 1995). In this instrument, usually delivered online,
personality is measured on six scales, and further defined by sixteen subscales. The scales of
personality are Dominance, Extroversion, Compassion, Conscientiousness, Adventurousness
and Innovation. These, along with the subscales, are assessed by analysis of a subject's
preferential selection of self-descriptive adjectives from presented lists (RightPath
Resources, 2002). The Work Values Inventory consisted of a list of 8 career attributes. Each
subject was required to assign a rank order to the items, indicating the item’s perceived
importance to the subject in his/her choice of an ideal career. The 8 career attributes are listed
in Table I along with the mean and standard deviation of the rankings for each item. Due to
the forced response nature of this assessment, the scales are ipsative, i.e. a high score on one
item is only obtainable by awarding lower scores to other items. As such, the items tend to be
negatively correlated with each other, particularly for relatively small numbers of items; in
the extreme case of only two items the correlation would be -1.00. Nordvik (1996) presents a
clear discussion of this issue in relation to the scales of Schein’s career anchors.
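The two-item extreme noted above is easy to verify numerically. The following sketch uses hypothetical forced rankings, not the study's data: because each respondent must assign ranks 1 and 2, one item's rank fully determines the other's.

```python
import numpy as np

# Hypothetical forced ranking of two items by five respondents;
# each row contains the ranks 1 and 2 in some order
ranks = np.array([[1, 2],
                  [2, 1],
                  [1, 2],
                  [2, 1],
                  [1, 2]])
r = np.corrcoef(ranks[:, 0], ranks[:, 1])[0, 1]
print(r)  # the two ipsative scales are perfectly negatively correlated: -1.0
```

With more items the pairwise correlations are less extreme but still pushed negative by the same constraint, which is why the ipsative nature of the inventory confounds ordinary correlation analysis.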
TABLE I. LIST OF ATTRIBUTES IN THE WORK VALUES INVENTORY
Correlation analysis of the career attribute (CA) scores is confounded by their ipsative
nature as described above. Therefore, to assess any relationship among the items’ rank
ordered scores, principal component factor analysis was carried out on the 8 item scores.
Varimax rotation resulted in 4 factors, with eigenvalues above 1.00. These are shown in Table
II. Standardized regression scores were saved for these factors, and became the response
variables in the subsequent analysis of the relationships to personality. Based on the loaded
items, the factors were assigned intuitive descriptors as shown in Table II. These are
interpreted as follows:
CA factor 1: Success (pursuing leadership and achievement) versus Security
(pursuing job security and benefits). CA factor 2: Reward (pursuing high income) versus
Service (seeking to help others develop). CA factor 3: Intellectual Development (pursuing
same). CA factor 4: Self-oriented (pursuing own status on individual terms) versus
Organization-oriented (pursuing success as defined by organization).
TABLE II. ROTATED FACTOR MATRIX FOR CAREER ATTRIBUTES
The personality trait scales of the RightPath6 profiles were the intended potential
explanatory variables in our regression. To reduce effects of correlation between these
personality scales, they too were factor analyzed, resulting in 3 rotated factors with
eigenvalues above 1.00 as shown in Table III. Once again, standardized regression scores
were saved for use as the explanatory variables in the subsequent analysis. The descriptors,
shown in Table III, were intuitively developed based on the loaded factors and would apply
to a subject with high scores on the respective factors. Cronbach’s alphas for the first two
factors only (the third had only one loaded trait) were 0.828 and 0.713 respectively, showing
acceptable internal consistency.
TABLE III. ROTATED FACTOR MATRIX FOR PERSONALITY TRAITS
Four unique regression models were created, one for each of the factors of the career
attributes, using the standardized factor scores as the response variable in each case. For each
of the models, the factor scores for the three factors of personality traits were used as the
explanatory variables. The general form of the model is given by:
CAfactorn = α + (β1 × PTfactor1 ) + (β 2 × PTfactor2 ) + (β 3 × PTfactor3 ) ,
where “CA” stands for “career attribute”, and “PT” stands for “personality trait”, and n=1, 2,
3, or 4, for each of the four career attribute factors.
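As a small illustration of this model form (a sketch on synthetic standardized scores, not the study's data; the coefficient values 0.34 and 0.13 used in the simulation are arbitrary), fitting OLS after standardizing both the response and the explanatory factor scores drives the fitted constant to zero, which is why the constant terms reported in Table IV are 0.000.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 458  # matches the study's sample size

# Hypothetical standardized personality-trait factor scores
PT = rng.normal(size=(n, 3))
PT = (PT - PT.mean(axis=0)) / PT.std(axis=0)

# Hypothetical career-attribute factor score with a weak linear signal,
# standardized like the saved regression scores in the study
CA = 0.34 * PT[:, 0] + 0.13 * PT[:, 2] + rng.normal(size=n)
CA = (CA - CA.mean()) / CA.std()

# OLS fit: a constant plus three slope coefficients
design = np.column_stack([np.ones(n), PT])
coef, *_ = np.linalg.lstsq(design, CA, rcond=None)
alpha, betas = coef[0], coef[1:]
# alpha is numerically zero because every variable is mean-centered
print(round(alpha, 10))
```

The same mechanism explains the low R-squared values: a weak but genuine linear signal in standardized scores yields significant slope coefficients while leaving most of the response variance unexplained.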
Results of the regression analysis are shown in Table IV. Significant regression
models were found to exist for CA factors 1, 2 and 3, with ANOVA F-values of 21.785,
9.392 and 8.100 respectively, all with p-values significant below the 0.01 level. The
corresponding R2 values for these models were low, as expected, with 12.6%, 5.8% and 5.1%
respectively, of the variance in the CA factor attributable to the model. In all cases the
constant term was statistically 0.000, due to the fact that all the variables, response and
explanatory, are standardized. All the PT factor coefficients were significant below the 0.01
level.
TABLE IV. OLS REGRESSION COEFFICIENTS
                        Response Variable in the Model
Model Coefficients   CA factor 1   CA factor 2   CA factor 3   CA factor 4
Constant                0.000         0.000         0.000
PT factor 1             0.341*        0.184*
                       (7.765)       (4.040)
PT factor 2
PT factor 3                           0.130*        0.216*
                                     (2.845)       (4.734)
Model Parameters
ANOVA F value          21.758*       9.392*        8.100*       1.154†
R2                      0.126         0.058         0.051
Coefficients with p > 0.05 not shown (t-values in parentheses). * Coefficient significant below the 0.01 level. † ANOVA F-value not significant for this model.
CA factor 4 failed to yield a significant model with these factors as explanatory
variables. To further investigate possible personality relationships with this CA factor, a fifth
regression model (not listed in Table IV) was created with the six individual personality traits
as explanatory variables. This resulted in a significant model (ANOVA F-value = 4.492) with
three of the personality traits (Adventurousness, Compassion and Conscientiousness) yielding
coefficients that were negative and significant below the 0.05 level (the first and last of these
three were significant below the 0.01 level). The coefficient for a fourth trait, Extroversion,
was also negative, but was just barely insignificant at the 0.05 level (p=0.066).
VI. CONCLUSION
The list of 8 career attributes in the Work Values Inventory is somewhat smaller than
those used in some other reported studies. Johnson (2001) used an inventory of 14 items that
loaded onto 4 job value scales, Extrinsic, Intrinsic, Altruistic and Social. Marini (1996) used
the same 14 items, plus another 8 that loaded onto scales of Influence, Leisure and Security
and 1 additional subscale item that loaded onto the Intrinsic value scale. The 8 items in the
current study are descriptively related to 8 of the 14 subscale items common to the above
studies (Johnson, 2001; Marini, 1996), spanning all four of the scales in Marini’s work
(1996). Karl and Sutton (1998) and Halaby (2003) use shorter inventories of 10 and 9 items
respectively, of which 4 and 6 items respectively are descriptively related to items in the
Work Values Inventory used here.
Nordvik (1996) used Schein’s career anchor scales which consist of 9 items that
loaded onto four factors. Nordvik interprets these factors as being related to concern for: 1)
stimulation versus comfort, 2) skill development, 3) self-direction versus belongingness and
4) lifestyle versus helping others (Nordvik, 1996, pp. 268-269). These are conceptually very
similar to the interpretations of the 4 factor scales developed in the current study: 1) success
versus security (like Nordvik's factor 1), 2) reward versus service (like Nordvik's factor 4),
3) intellectual development (like Nordvik's factor 2) and 4) self-orientation versus
organization-orientation. The similarity of these two sets of career value (or attribute) scales adds
additional concept validity to the inventory used in this study. CA factor 1 (Success vs.
Security) was found to be positively related to PT factor 1 (Driven-Competitive). According
to the RightPath6 profile developers (RightPath Resources, 2002), the subscales aligned with
this PT factor (based on the loaded trait scales) would include “ambitious” and “daring”,
among others. These characteristics would appear to be associated with desire for success, in
results and positional advancement, as well as a risk tolerance that would not require high job
security.
CA factor 2 (Reward vs. Service) was found to be positively related to both PT factors
1 and 3 (Driven-Competitive and Innovative). A person with this combination of personality
traits would score low on the subscales of “sympathy”, “support” and “tolerance”, primarily
due to the negative loading of the Compassion scale onto the PT factor 1. In conjunction with
high scores on the subscales of “ambition” and “independence”, these would support the
pursuit of an individual’s own gain, without the desire, need or perhaps the innate ability to
identify with the plight of others and/or render assistance. CA factor 3 (Intellectual
Development) was found to be related to PT factor 3 (Innovative). This trait is made up of the
“imaginative” and “resourceful” subscales. These characteristics would suggest one who
pursues new knowledge and creative ways of applying it. The established relationships
among CA factors 1, 2 and 3 and the respective personality traits, thus are consistent with the
conceptual understanding of the personality traits themselves. CA factor 4 is intuitively
challenging, with positive loading of the desire for individual prominence (status and
reputation) and negative loading for desire for career progression. This counter-intuitive
inverse relationship seems resolvable by the given interpretation, which makes the distinction
between a self-oriented career trajectory, versus one that aligns with the organization’s
structures and norms.
The secondary regression analysis of CA factor 4, while not as robust for prediction
purposes due to the potential correlation issues among the explanatory variables, also has
some conceptual consistency. A high score on this factor would be associated with low scores
for adventurousness, conscientiousness and compassion, with low extroversion scores also
being related, though with less significance. The low extroversion scores would suggest an
inward focus that is consistent with the CA factor 4. The low compassion and
conscientiousness scores indicate “detachment” and “rejection of structure” respectively,
according to the RightPath6 developers (RightPath Resources, 2002). These could be
interpreted to support a “make it on my own” approach to career success. Note that the
negative coefficient for conscientiousness corresponds with the results of Nordvik (1996),
where a negative relationship was established between the Judgment trait (from MBTI) and
the “self-direction versus belongingness” career anchor factor. Judgment from the MBTI and
Conscientiousness on the Big Five inventory have been shown to be related (McCrae and
Costa, 1990). Further direct comparison to Nordvik's coefficients is not possible because the
primary models in this study use loaded PT factors, which have no direct counterpart in the
Nordvik study.
The fact that the primary model for CA factor 4, with the loaded PT factors as
explanatory variables, was not significant may be explained by the signs of the significant
coefficients in the secondary model. In the secondary model, Adventurousness and
Compassion both had negative coefficients, whereas in PT factor 1 these traits load with
opposite sign. The same holds for the Conscientiousness-Extroversion pair; they have the
same sign in the secondary model, but load with opposite sign onto PT factor 2. These
offsetting effects on the PT factors likely rendered them insignificant in the primary model.
REFERENCES
Bozionelos, Nikos. "The Relationship between Disposition and Career Success: A British
Study." Journal of Occupational and Organizational Psychology, 77, 2004, 403-420.
Halaby, Charles N. "Where Job Values Come From: Family and Schooling Background,
Cognitive Ability, and Gender." American Sociological Review, 68(2), 2003, 251-
278.
Johnson, Monica K. "Job Values in the Young Adult Transition: Change and Stability."
Social Psychology Quarterly, 64(4), 2001, 297-317.
Karl, Katherine A., and Sutton, Cynthia L. "Job Values in Today's Workforce: A
Comparison of Public and Private Sector Employees." Public Personnel
Management, 27(4), 1998, 515-527.
Locke, Edwin A. "The Nature and Causes of Job Satisfaction." In Dunnette, Marvin D. ed.,
Handbook of Industrial/Organizational Psychology. Chicago, IL: Rand McNally,
1976, 901-969.
Marini, Margaret M., Fan, Pi-Ling, Finley, Erica, and Beutel, Ann M. "Gender and Job
Values." Sociology of Education, 69(1), 1996, 49-65.
THE DETERMINANTS OF OWNERSHIP
IN SPANISH FRANCHISED CHAINS
ABSTRACT
I. INTRODUCTION
The existing literature has employed different theoretical perspectives to justify the
existence of franchising. Specifically, agency and resource-based theories have been applied
to explain why, in some cases, the franchisor chooses to invest directly in a new outlet of the
chain and, in others, he decides to franchise it. In this sense, many empirical studies have
established that franchised units are more efficient than franchisor-owned outlets. This
may be because franchising enables faster chain growth while reducing
monitoring costs, especially when units are geographically dispersed. Geographic
dispersion increases the difficulties and costs associated with controlling the managers
of franchisor-owned stores (Brickley, Dark & Weisbach, 1991; Jensen & Meckling, 1976;
Shane, 1996, 1998). More specifically, franchising increases unit performance through the
allocation of ownership and control rights to the same person, the franchisee, and this reduces
agency hazards. Another stream of the franchising literature, far from establishing the
superiority of either of these alternative governance forms, has highlighted that the presence
of both types of units in the same chain gives rise to relevant synergetic effects (Bradach &
Eccles, 1989; Lafontaine & Kaufmann, 1994; Pénard et al., 2002; Yin & Zajac, 2004).
Specifically, franchisor-owned stores are most useful for maintaining and developing brand
name quality and homogeneity, exploiting certain economies of scale. On the other hand,
franchisees are best at supplying the chain with new ideas and adaptations to local markets.
Therefore, the so-called "plural form" or "dual form" is an efficient solution to mitigate
asymmetric information, bounded rationality and incomplete contracting hazards. However,
many studies have found that as chains reach maturity, they open fewer franchised units and,
therefore, choose to grow, to a greater extent, through franchisor-owned establishments
(Oxenfeldt & Kelly, 1968-1969).
II. METHODOLOGY
Due to the non-existence of a ready-to-use database on franchising in Spain, the
data employed were collected, in collaboration with a working group at the University of
Oviedo (Spain), from the annual franchise guidebooks published for the period 1997-2003.
First, to represent the ownership evolution pattern for indigenous Spanish chains, the
proportion of franchised units was calculated as the quotient of the number of the chain's
franchised outlets in Spain and its total number of outlets in Spain. However, of the more
than 1,300 existing Spanish franchised chains, only about 500 fulfilled the necessary
condition of having started to franchise in 1997 or before and continuing to do so in 2003.
Moreover, it was only possible to collect sufficient data for 316 of these. Second, for the
ordinary least squares (OLS) regression, we introduced other key variables. This analysis
was conducted on data for the year 2003. After plotting the pattern of outlet ownership
evolution, we conducted an OLS regression in SPSS. The dependent variable (the proportion
of franchised outlets) is modeled as the natural log of the ratio of the percent franchised to the percent
company-owned. This transformation has been used in many other empirical studies (see, for
example, Shane, 1998 or Michael, 1996) as a more robust measure of the distribution for both
OLS and Tobit regressions. The independent variables employed are:
• SECTOR. This is equal to one when the chain is basically dedicated to the distribution of
products and equal to zero when its object is the commercialization of services.
• AGE. This variable represents the number of years since the franchisor opened the first
unit. It is quite evident that, in general, the longer this period is, the greater the proportion of
franchised units will be. This is not only due to the simple fact that time just goes by. AGE
has been used as a proxy for franchisor experience, brand name value or reputation and for
franchisor accumulated resources (González & López, 2003; Lafontaine, 1992). Given that
the number of franchisees willing to join the chain increases as chain perceived value does, a
positive relation over the proportion of franchised units is expected.
• YNOTF. It represents the number of years the chain initially remained without franchising
any outlets at all. We expect YNOTF to have a negative influence over the proportion of
franchised units because this period of time can reflect, in a certain manner, franchisor
difficulties to adequately design and develop the complete franchise package (González-Díaz
& López, 2003). Moreover, during that period the franchisor has installed a totally centralized
organizational form, and it can be difficult for him to let in quasi-independent
businessmen who will make their own decisions.
• INTERN. This independent variable will be equal to one when the chain has some sort of
international presence and equal to zero when it has outlets only in the domestic Spanish
market. If the franchisor has chosen to expand activities overseas, we should find a higher
proportion of franchised units because, in most cases, he will not have sufficient local
market knowledge to undertake unit opening by himself. Local franchisees will have much
better and complete information about demand conditions, governmental procedures, etc.
• SIZE. To measure the size of the chain we use the total number of outlets. Chain size has
been used as a proxy for geographical dispersion and, in this sense, for monitoring difficulties
(Agrawal & Lal, 1995; Brickley, Dark & Weisbach, 1991). Geographical distance to a new
outlet thus makes monitoring it difficult; if, however, the decision is to
franchise the new outlet, monitoring needs are reduced. From another point of view, chain
size has also been used as a proxy for brand name value. In this sense, greater chain size will
increase the number of potential consumers attracted and served (Lafontaine, 1992).
Therefore, chain size is likely to favor franchising.
• ININVEST. The initial investment is the amount, in euros, the franchisee must invest in his
outlet. However, we have not taken into account here the initial lump sum entry fee paid to
the franchisor; this amount is included in FIXED PAYM. Therefore, ININVEST reflects the
amount the franchisee must pay to adequately lay out and decorate his premises.
• TOTAL VAR. To calculate the value of this variable, we have added the percentages of
royalties and advertising fees for each chain. Royalty rates contribute to the alignment of
both parties' interests because both franchisee and franchisor will be interested in increasing sales.
High royalty payments would serve as a powerful incentive to franchisors to control or
monitor activities in order to increase brand name value but would reduce franchisee
motivation to be efficient.
• DUR. Longer contract duration contributes, from a transaction costs view, to reduce
advantages of hierarchy compared to those of the market. Transaction costs associated with this
last option will be reduced and, as an intermediate case, the same will occur in the case of
franchising. Besides, longer contractual duration also reduces agency costs for various
reasons (Shane, 1998). Therefore, we expect a positive relation over the dependent variable.
• SURFACE. This is the minimum surface area, in square meters, fixed by the franchisor for
the opening of a new unit of the chain. To some extent it can reflect the effort
required from the franchisee and, in this manner, reduce the number of potential franchisees
willing to join the chain. The negative relation between SURFACE and the dependent
variable can also be due to the fact that the franchisor is usually the owner of the larger
outlets located in big cities, while the franchisee is left with the smaller more disperse units.
• POPUL. We already made reference to the high probability that large units will be owned by
the franchisor. Smaller units located in little villages or towns (where only one outlet of the
chain usually exists) are more commonly franchised. Therefore, the minimum population
fixed by the franchisor to open a new store in a given location should have a negative
influence over the dependent variable.
• FIXED PAYM. It is the sum of the necessary initial lump sum entry fee and the present
value of fixed periodical payments.
We expect to find a negative influence over the proportion of franchised units. However, the
initial entry fee compensates the franchisor for selection and initial training costs, while the
remaining periodic payments are justified as remuneration for brand value and on-
going support (Lafontaine, 1992). For this reason we may also find a positive relation
between FIXED PAYM and the proportion of franchised units, given that franchisees may be
willing to make larger payments in exchange for greater support and intangible resource
transfer from the franchisor.
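The log-odds transformation of the dependent variable described in the methodology (the natural log of the percent franchised over the percent company-owned) can be sketched as follows; the function name is illustrative:

```python
import math

def log_odds_franchised(pct_franchised):
    """ln(percent franchised / percent company-owned) for one chain."""
    pct_company_owned = 100.0 - pct_franchised
    return math.log(pct_franchised / pct_company_owned)

# A 50/50 chain sits at exactly 0; mostly-franchised chains are positive
print(log_odds_franchised(50.0))            # -> 0.0
print(round(log_odds_franchised(80.0), 3))  # ln(4) = 1.386
```

Mapping the bounded proportion onto the whole real line in this way is what makes the transformed variable better behaved as an OLS (or Tobit) response than the raw percentage.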
IV. CONCLUSION
Because we expected to detect different ownership evolution patterns with respect to the
type of activity, chains were divided into two groups, namely, service and product chains.
Figure II shows that service chains, on average, present a higher proportion of franchised
outlets than product distribution chains. This is in line with results obtained by
Pénard et al. (2002), Lafontaine & Shaw (2001) and López et al. (2000), even though the
percentage of units franchised is slightly higher in our case (nearly 80% for service chains
and 70% for product chains).
Figure II: The evolution of the proportion of franchised units for Spanish service and product
distribution chains (1997-2003).
Next, Table I displays OLS regression results. The dependent variable measures the
proportion of franchised units of chains. Within the independent variables, SECTOR, AGE,
YNOTF, INTERN, SIZE, ININVEST and POPUL are found to have a significant influence
over the proportion of franchised outlets. All of the former present the expected signs.
Therefore, we can say that service chains choose to franchise a higher proportion of outlets.
This seems to confirm that when necessary local activities are of more relevance and more
labor-intensive, the given incentive system makes franchising the best choice. Second,
chains that have been in business for a longer time (AGE) tend to present larger proportions
of franchised establishments. Obviously, the more time that passes after the franchisor opens
the first outlet of the firm, the greater the value of the dependent variable. Moreover, this
effect can also be due to increased franchisor experience and brand name value as the chain
ages; this would surely have a positive influence on the number of franchisees willing
to join the chain. The variable YNOTF has a significant negative influence on the proportion
of franchised units. Therefore, as the number of initial years during which all outlets are
franchisor-owned and no franchised units are opened increases, franchisors seem to be
reluctant to let franchisees in. They get used to a centralized organizational form where all
decisions are made by central offices, and this situation reduces future franchising activity.
Another significant independent variable is the presence of chain outlets in foreign markets.
INTERN has a positive influence on the dependent variable: when the chain has outlets
abroad, it chooses to grow more intensively through franchised units. Geographical
dispersion and reduced franchisor knowledge of local markets reduce the proportion of
franchisor-owned units. SIZE has a significant positive relation with the percentage of
franchised outlets; larger chains exhibit a higher proportion of the latter, probably because
these chains are subject to greater geographical dispersion. The size of the initial investment
(ININVEST) reduces the proportion of franchised outlets. Franchisee risk aversion and
franchisees' inability to diversify their investment adequately seem to reduce the number of
potential franchisees willing to join the chain. We therefore find no empirical evidence that
franchising exists because of franchisors' resource constraints: it is the franchisor who
directly invests in the opening of new outlets when the necessary investment is higher.
[Table of standardized coefficients for the independent variables omitted.]
The last significant independent variable explaining the proportion of franchised units is
POPUL. It has a negative relation with the dependent variable, so we can say that the
franchisor tends to be the direct owner of units located in larger cities, while outlets
situated in smaller towns tend to be franchised.
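The reported coefficient signs can be illustrated with a minimal sketch. The data below are hypothetical, not the authors' sample, and the simple one-variable regression is only a stand-in for their multivariate model: it estimates the slope of the proportion of franchised outlets (the dependent variable) on chain age (AGE), which the paper finds to be positive.

```python
# Minimal sketch: simple OLS of the proportion of franchised outlets
# on chain AGE, using hypothetical data (not the authors' sample).

def ols_slope(x, y):
    """Closed-form slope of a simple linear regression y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Hypothetical chains: years since first outlet, proportion franchised
age = [2, 5, 8, 12, 15, 20, 25, 30]
pfran = [0.10, 0.25, 0.30, 0.45, 0.50, 0.60, 0.70, 0.75]

b = ols_slope(age, pfran)
print(f"estimated AGE coefficient: {b:.4f}")  # positive, as the paper reports
```

A negative slope on a variable such as YNOTF or ININVEST would be obtained the same way, with the sign of the estimate carrying the interpretation discussed above.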
The remaining variables included in the analysis –TOTAL VAR., DUR, SURFACE and
FIXED PAYM- do not help to explain, in a statistically significant manner, variations in the
proportion of franchised units. However, the signs displayed by the first three of these are as
expected. On the contrary, FIXED PAYM presents a positive relation; this seems to indicate
that larger periodical fixed payments do not reduce franchisee interest to join the chain.
Maybe, this is because they are willing to pay more in exchange for greater intangible
resource transfer and support form franchisor.
IN A GLOBAL ECONOMY, EFFECTIVELY MANAGED DIVERSITY
CAN BE A SOURCE OF COMPETITIVE ADVANTAGE
ABSTRACT
This paper briefly reviews the previous research findings to identify the advantages of
effectively managing diversity. Managing diversity means taking advantage of diversity’s
assets while minimizing the potential barriers, such as prejudices and biases. Diversity is
something that is not going away. Workplace diversity is urgently needed, as today's
competitive environment demands the deployment of a wide range of skills to meet the needs
of the marketplace. Effective 21st century diversity efforts require leadership that seeks to
empower individuals through diversity training. Successfully managing diversity is a
challenging process, but with a clear vision, careful planning, strong leadership, and a
willingness and commitment to change, organizations can develop a competitive advantage
as an employer and a producer of services to the people.
I. INTRODUCTION
“Diversity Management” has become the new business buzzword for the 21st century.
It has been known to enhance workforce and customer satisfaction, to improve
communication among members of the workforce, and to improve organizational
performance (Cox & Dreachslin, 1996). Everywhere you turn, research papers, case studies,
and statistical data appear touting the advantages of successfully managing a diverse
workforce. The 1998 summer issue of Public Personnel Management presented a diversity
symposium that included theories, case studies, and examples of diversity management that
support the vision that if managed well, diversity can help improve organizational
effectiveness. Broadly defined, the term diversity management refers to the systematic and
planned commitment of organizations to recruit, retain, reward, and promote a heterogeneous
mix of employees. Soni (2000) defines managing diversity as developing organizational
structures and processes that effectively utilize diversity, and creating an equitable and fair
work environment for employees of all racial/ethnic and gender groups (p. 396). She adds
that workforce diversity “refers to differences among people based on gender /ethnicity, age,
religion, physical or mental disability, sexual orientation, and socioeconomic class.”
Businesses today have come to realize the many benefits of a diverse approach, which
is facilitated by a diverse workforce. A recent survey by the Business Higher Education
Forum (BHEF), a collaboration between the American Council on Education (ACE) and the
National Alliance of Business (NAB), found that a majority of Americans believe that
workplace and educational diversity is important and even vital to the success of our future
economy. Managing diversity has become an asset for some companies and a liability for
others. The most successful companies will be those that recognize the power of diversity in
their workforces and in their product mix, and effectively create products and services that
appeal to their increasingly diverse customer bases. These companies know that diversity will
become even more important in the coming years, and that the leading companies will be
those best reflecting the increasingly diverse marketplaces they serve (New York Times,
2005).
Diversity is not simple, not easy to grasp, and not easy to manage. Managing diversity
is important because the differences that exist in the workforce can actually be a hindrance to
productivity if not managed properly. The successful management of diversity can be the
single most important issue executives must address as globalization becomes the order of the
day. Diversity and its implications for effective management have become increasingly
important over the decades (Duchatelet, 2001), and global trends indicate that managing
diversity has become a business imperative (Cox & Beale, 1997). Their opinion is that
managing diversity “consists of taking proactive steps to create and sustain an organizational
climate in which the potential for diversity related dynamics to hinder performance is
minimized and the potential for diversity to enhance performance is maximized”.
Organizations willing to make this type of environment a reality will profit from the many
benefits of diversity. Indeed, companies have gradually come to understand how diversity in
the workplace affects the management system and, thereby the performance of groups and of
the organization. The literature suggests that organizations that engage in proactive diversity
management strategies are more likely to experience positive organizational outcomes than
those that shun or ignore diversity. Svehla (1994) describes diversity management as a
strategically driven process whose emphasis is on building skills, creating policies that bring
out the best in every employee, and assessing marketing issues as a result of changing
workforce and customer demographics. The objective of managing a diverse workforce
should be to create an environment in which members with any possible diversity profile and
from any background are both able and willing to contribute their full potential towards
achieving their common vision. Organizational reality, however, indicates that diverse
workforces lack a common base; even words may have different meanings and interpretations
(Rijamampianina, 1999). R. Roosevelt Thomas, president of the American Institute for
Managing Diversity Inc., proposed another term, “managing diversity,” that presents a way
to maximize the contributions of each member of the workforce: “The goal of managing
diversity is to access the talents of 100 percent of the people in an organization.”
A diversity perspective provides the cognitive frames within which group members
interpret and act upon their experience of cultural identity differences in the group. The
diversity of management groups should be studied in light of relevant contextual factors
(Chatman, Polzer, Barsade, & Neale, 1998). Firm strategy (Richard, 2000) and strategy
process variables are particularly relevant to the study of management diversity, since
strategy formulation and implementation involve individuals at all levels and across all
functional areas of management (Burgelman, 1983). People’s perceptions of how their
cultural identity group memberships influenced their ability to work effectively and exert
influence in their work groups have changed. As one white woman attorney explained,
“Diversity means differences in terms of how you see the issues, who you can work with,
how effective you are, how much you understand what’s going on…there’s not a sense of
‘you’re just like me.’” Groups in organizations around the world are experiencing changes in
the cultural composition of their membership, and the trend is toward even more change as
countries continue to undergo changes in the cultural composition of their general
populations (Erez & Somech, 1996; Hambrick, Canney-Davison, Snell, & Snow, 1998;
Johnston, 1991; Wentling & Palma-Rivas, 2000).
The concept of cultural recomposition is an event in which individuals from diverse
cultures are added to or replace members of an existing group. Cultural recomposition may
occur in homogeneous groups, where existing members share the same culture with one
another but not with incoming members, or it may occur in heterogeneous groups, where
incoming members may share the same culture as some existing members but, for the most
part, incoming and existing members are culturally distinct. Diverse work groups are
competitive when characterized by a high degree of trust, risk taking, and psychological
safety; in such groups there are greater opportunities for competency-enhancing
cross-cultural learning (Argyris & Schon, 1978; Edmondson, 1999). Research suggests that a
work group's success
often hinges on members’ ability to engage differences in knowledge bases and perspectives
(Bailyn, 1993; Jehn, Northcraft, and Neale, 1999) and to embrace, experience, and manage
rather than avoid, disagreements that arise (Gruenfeld et al., 1996; Jehn, 1997). Moreover,
the trend toward teams in organizations is increasing (Milliken & Martins, 1996), and employees
are compelled to work together in a variety of ways. When the workplace is diverse, the
different talents and skills, interests, needs and backgrounds, as well as power and
opportunity differences can be harnessed to benefit all. However, this very diversity can also
hamper productivity and teamwork by manifesting as a lack of a common way of
thinking and acting. To maximize effectiveness, managers and team leaders must support the
group to co-create, develop and agree on a vision that transcends their individual differences.
When people work together towards a shared vision, they hold themselves responsible and
accountable, both as individual members of the group and as the group as a whole.
According to the teaching division of the American Psychological Association, the goals of
diversity content are to heighten sensitivity and awareness, broaden understanding of human
conditions, increase tolerance, enhance psychological mindedness, expose students to
personal perspectives, and increase students' political action (Simon et al., 1992: 92). In an article
documenting the team learning process and performance of four diverse groups in a high-tech
manufacturing company, Brooks (1994) also found that unless status differentials between
group members were consciously managed, at best lower status employees were left out of
the learning process and at worst, the teams were dysfunctional. Morrison & Milliken (2000)
hypothesize that demographic differences between top management and lower-level
employees decrease management’s ability to hear and respond to criticism from subordinates,
as well as subordinates’ willingness to voice such views. They suggest, in fact, that “the
negative effects of silence on organizational decision making and change process will be
intensified as the level of diversity increases.” In the integration and learning perspective,
cultural diversity is a potentially valuable resource that the organization can use, not only at
its margins, to gain entry into previously inaccessible niche markets, but at its core, to rethink
and reconfigure its primary tasks as well. Ely & Thomas (2001) found that the common
element among high performing diverse groups was the integration of that diversity. They
also conducted a qualitative study to examine whether different perspectives on the concept
of diversity impact the organization's performance, employee satisfaction, and employees'
identification with their social group. They found three underlying views of diversity:
integration and learning; access and legitimacy; and discrimination and fairness. Each of these
views governed how members of work groups created and responded to diversity. The idea
that each employee's diversity serves as a resource on which all members can draw, learning
from each other, reflects a central premise of the integration and learning perspective on
diversity: while there may be certain activities at certain times that are best performed by
particular people because of their cultural identities, the competitive advantage of a
multicultural workforce lies in the capacity of its members to learn from each other and
develop within each other a range of cultural competencies that they can all then bring to bear
on their work. Diversity initiatives focus primarily on dealing with these “Others,” who are
“strange, different, maladapted, unusual, requiring additional work, additional effort,
additional qualifications, and additional fit” (Cavanaugh, 1997).
The roles and responsibilities of a leader are always changing, but one thing remains
the same: behind every success there is a leader who is willing to embrace and conquer the
challenge. Leadership and diversity are inevitably connected when the target organization is
demographically and culturally diverse.
The meaning of effective leadership in a changing world is a question plaguing people in all
kinds of organizations, in all sectors and areas of the world. Fisher and Ellis (1990) said that
effective leaders exhibit flexible communicative behaviors and interpersonal relationships
according to the situation and the nature of the people they work with, the followers.
Increasing diversity among managers and employees is one of the most critical adaptive
challenges organizations are facing today. The relationship between the manager and
employee is crucial to the success of an organization. Diversity awareness and promotion is a
multifaceted phenomenon that could be positively affected by various leadership perspectives
and organizational strategies (Richards, 2000). Rather than positioning diversity solely within
the social and moral realm, 21st century diversity leaders must reflect bottom line issues of
profitability and enhanced productivity (Owens, 1997). As workplace diversity increases,
conflict among different cultures (rather than just between blacks and whites) will become
more complex and broad. Therefore, diversity initiatives must be designed to reflect the contemporary
reality of multiple cultures interacting simultaneously in organizations. American
management literature, both popular (e.g., Thomas, 1991; Morrison, 1992) and scholarly
(e.g., Jackson et al., 1992; Cox, 1993) is rife with advice that managers should increase
workforce diversity to enhance work group effectiveness. The functional multicultural
workplace provides wide possibilities for finding new ideas and practices and cultural
learning. Workforce diversity management has become one of the most pressing business
issues that managers must address. The growth in the U.S. labor force, now and for the
foreseeable future, will come largely from women, minorities, and immigrants, who will
constitute about 85 percent of new entrants in the workforce, according to the landmark
Hudson Institute study, Workforce 2000. The research results concern perceptions of ill
treatment in the workplace due to race, ethnicity, gender and age differences (Cox & Nkomo,
1991; Ohlott, Ruderman & McCauley, 1994; Talley-Ross, 1995; Grossman, 2000), and
documented concrete advancement ceilings for white women and women of color (Catalyst,
1999).
Grossman (2000) suggests that in spite of organizational efforts to manage diversity,
very little has changed in the experiences of culture, ethnicity, race, and gender groups. Often
employees who strive to effectively change and adapt work behaviors find that they must do
so in spite of peer reactions and organizational structures (Cox & Beale, 1997). This context
suggests that leadership in diversity management must realize that individual determination
and perseverance in working to promote a positive and proactive diversity posture are salient
in the change process. Organizational leaders implement diversity initiatives in efforts to
motivate and encourage each individual to work effectively with others so that organizational
goals are achieved (Davidson, 1999). The literature suggests that diversity can have an
important impact on organizational performance and perceptions of organizational
effectiveness (Wright, Ferris, Haler & Kroll, 1995; Gilbert & Ivancovich, 2000; Richards,
2000). Organizational leaders wishing to enhance employees’ abilities to embrace and
actively promote inclusionary practices must do more than merely describe behavioral
expectations for employees with respect to diversity.
V. CONCLUSION
Cox (1993) says that to develop and implement successful diversity management
programs, it is important to systematically identify and document key considerations that
must be taken into account by organizations attempting to enhance their diversity
management efforts. Organizations that have already recognized the value of a diverse
workforce and made a sincere effort to maximize its contributions have learned that
changing hiring policies will not, in and of itself, ensure success. Diversity is something that
is not going away. It must be dealt with wisely, carefully, and continuously. In a country
seeking competitive advantage in a global economy, the goal of managing diversity is to
develop our capacity to accept, incorporate and empower the diverse human talents of the
most diverse nation on earth (Roosevelt). This marketplace, whether in construction,
manufacturing, or the service sector, has become increasingly demanding as the metaphorical
performance bar is moved gradually, yet inexorably, upward. This continuous improvement
philosophy is enhanced by managing diversity in such a way as to derive the best
performance from a workforce that, with each passing year, is becoming less homogeneous
and more geographically dispersed. Diversity, if effectively managed, can be a source of
competitive advantage for the group or organization. Successfully managing diversity is a
challenging process, but with a clear vision, careful planning, strong leadership, and a
willingness and commitment to change, an organization can develop a competitive advantage
as an employer and a producer of services to the people. With increasing business
globalization and the many different cultures in the world, maintaining and managing
cultural differences becomes a challenge for managers and supervisors in the twenty-first
century. Because one's own culture plays an important role in the way one manages, one
must strive to learn not only about the different cultures that exist in the country where one
wants to do business, but also how to see one's own culture in an objective manner.
REFERENCES
Bailyn, L. (1993). Breaking the Mold: Women, Men, and Time in the New Corporate World.
New York: Free Press.
Cavanaugh, J. M. (1997). Corporating the Other? Managing the Politics of Workplace
Difference. In Managing the Organizational Melting Pot. Thousand Oaks, CA: Sage.
Forbes, L. H. (2002, October). Improving Quality Through Diversity: More Critical Now
Than Ever Before. Leadership & Management in Engineering, 2(4), 49.
Grossman, R. J. (2000, March). Race in the Workplace. HR Magazine, 41-45.
Hambrick, D. C., Cho, T., & Chen, M.-J. (1996). The Influence of Top Management Team
Heterogeneity on Firms' Competitive Moves. Administrative Science Quarterly, 41,
659-684.
OVERCOMING BUSINESS SCHOOL FACULTY DEMOTIVATION
ABSTRACT
Institutions of higher education face a “perfect storm” of public criticism due to the
convergence of three environmental trends: (1) the ever growing demand for college degrees
as professional career credentials since the U.S. continues to shift away from an
industrial/goods-based society towards a knowledge/service-based society, (2) the ever
increasing costs of providing near universal access to college education, and (3) the
problematic nature of student learning outcomes in light of the magnitude of the societal
investment involved (AACU National Panel, 2002; Barker, 2000; Seybolt, 2004). By any
measure, business school programs are booming at both the undergraduate and graduate
levels. Business programs have steadily increased in popularity, accounting for
approximately 25% of all undergraduate degrees in the U.S. (Clegg & Ross-Smith, 2003).
Similarly, Master of Business Administration (MBA) degrees have become legitimized as a
credential for managerial careers in large organizations, and now constitute over 23% of all
graduate degrees (Friga, Bettis & Sullivan, 2003). However, there is some evidence the
market has become saturated, and the industry is entering a shakeout phase. Student
applications for MBA programs are down, as is the overall potential applicant pool of
GMAT registrants, by 15 to 25 percent (Zupan, 2005). While some of this trend can be
explained by growing national and international competition for students among an ever
increasing number of business schools, analysts note an increasing skepticism about the value
of management education (Grey, 2004; Hayes & Abernathy, 1980; Pfeffer & Fong, 2002;
Porter & McKibbin, 1988). Given the escalating costs associated with most MBA programs,
non-traditional educational institutions, particularly corporate universities and consultants,
have successfully challenged the basic premise that hard, quantitative analysis skills require
an intensive, two-year program of study (Pfeffer & Fong, 2002). Business Week concludes,
“The drop in B-school enrollments may be signaling that people think they will receive better
training inside Corporate America than out” (Editorial, 2005, 112).
To its credit, the academy has developed an effective response to these emerging
trends and pressures. There is an increasing consensus among academic leaders such as the
Association to Advance Collegiate Schools of Business (AACSB), the Association of
American Colleges and Universities (AACU) and the Carnegie Foundation concerning
expectations and goals for business schools and Master of Business Administration (MBA)
programs. The AACSB codified these emerging trends through an international taskforce,
and through its accreditation standards. The AACSB’s Committee on Management
Education summarized the lifelong student learning outcomes of a quality business program,
particularly on the graduate level. The MBA should (a) prepare students with the knowledge
and skills they will need to make meaningful contributions in a wide variety of organizations
and activities (in both the private and public sectors); (b) instill an ethical foundation and
moral base so that MBA students are well-rounded and socially responsible contributors to
their communities and societies; and (c) provide lifelong advantages in personal wealth, self-
sufficiency, and entrepreneurial ability to create wealth (AACSB, 2005, 2003). This new
vision fundamentally expands and transforms the college curriculum across a variety of
dimensions. If these efforts are successful, professionalism as a college professor will be
redefined to include, at a minimum, the following major educational themes in teaching and
curricular development:
• Integrative Learning. Faculty and students will pursue extra-curricular activities, to
the point of collaborating on research and service activities.
• Learning Styles. Professors will adapt classroom pedagogy to address the needs of a
wide variety of student learning styles, embracing the emerging range of cultural
contexts found in increasingly diverse student populations.
• Interdisciplinarity. Professors will move away from presenting discrete knowledge
packets and skill sets towards making explicit linkages and connections between and
among academic specializations (Bisoux, 2005; Smith, 2004). The emphasis will be
on developing students’ abilities to systematically analyze and evaluate whatever
knowledge and experiences they later encounter in life and work.
• Assessment. Professors will be committed to systematically assessing how effective
their choice of content and method is in sparking student learning, and will take that
feedback and make continuous improvements. The course-based assessment
activities of single faculty will be expanded to program level and institutional
outcomes as well. These assessments can elicit collective reflection, guide the
reformulation of learning goals, promote sustained improvement and articulation of
the curriculum, and the design/selection of more appropriate assessments (AACSB,
2005; Smith, 2004).
Need for Affiliation. Faculty members with a high need for affiliation tend to prioritize
personal relationships and the development of communities to facilitate the social interactions
they find so rewarding. These faculty members often find that curricular reform is a natural
fit, since it advocates developing close relationships with students both inside the classroom
and in extracurricular learning communities as well.
Need for Power. Faculty members or administrators with a high need for power will
enthusiastically support reforms that extend their power and influence, especially when those
reforms impress external powerholders, such as legislatures who control resource allocation,
and accrediting associations who confer status and prestige through their approval or their
ranking of the business school. In the political arena, the trend is clear--resource allocation
will be increasingly linked with meaningful educational reforms that improve student
learning outcomes and with assessment efforts to document program effectiveness,
particularly when this results in improved b-school rankings (AACU National Panel, 2002;
Danko & Anderson, 2005). For faculty members with a high need for power who are not on
the administration track, power is associated with internal resource allocation, external
funding and grants, and leadership positions in academic and professional organizations. In
most universities and academic associations the path to prestige, status and resources is
linked with research, not teaching accomplishments (AACU National Panel, 2002; Ghoshal,
2005). To the extent that educational reforms do not add to this power base, or worse, drain
time and effort away from building further power and influence, such reforms are likely to
receive lackluster support from this group.
Need for Achievement. Many faculty have a high and dominant need for achievement, but
they are not a homogeneous group in defining what the nature of true academic achievement
is. While some define achievement as some combination of teaching and scholarship, others
define it almost exclusively in terms of research. A minority focus on service, which presents
motivational permutations so varied and complex they will be left for future analysis. The
true “teacher/scholars” are as dedicated to quality teaching and service as they are to
research, and will champion educational reforms advancing student-based learning outcomes
as a matter of academic integrity and professional pride (AACU National Panel, 2002;
Ghoshal, 2005). However, the same research notes that the weight of this burden is heavy.
Given the teaching and service loads typical of teaching institutions, most faculty feel
stretched to the limit even before assuming the additional duties of assessment, adding
interdisciplinary material, or systematically assessing learning outcomes. Those involved in
such efforts tend to burn out over time, and the reforms implode when their champions can
no longer shoulder the stresses inherent in their efforts (AACU National Panel, 2002;
O’Meara, Kaufman & Kuntz, 2003). In contrast, those who buy in to a more exclusive
research orientation are rewarded for it, as high output research faculty (AACU National
Panel, 2002; Barnett, 2005; Ghoshal, 2005):
• Promotion and tenure reviews usually prioritize research over teaching
accomplishments.
• Public recognition, status and institutional reputation are linked with research output.
• Gaining and maintaining accreditation by prestigious academic associations is often
linked with consistent research accomplishment.
• Resource allocation from state legislatures, private foundations and federal grant
sources is usually tied to research productivity.
Consequently such faculty are likely to be resistant to the educational reform agenda to the
degree it increases the amount of time devoted to teaching at the expense of the amount of
time left available for research. They are likely to be actively hostile if this agenda appears
to force them to invest large amounts of time in learning new teaching methodologies they
find unfamiliar, uncomfortable and unrewarding (Barnett, 2005; O’Meara, Kaufman &
Kuntz, 2003).
This paper argues that the perceived meaningfulness of educational reforms increases
to the degree that a reform addresses the interests and needs of multiple faculty groups.
Discretionary effort is likely to increase when a reform is perceived as a mutually beneficial
behavior which secures a common and worthwhile goal. These goals allow faculty to pursue
their individual interests (including research) while increasing the quality of the students’
educational experience and improving student learning outcomes for accreditors and funding
agencies. When an activity is linked with that level of perceived meaningfulness, it increases
the likelihood that (a) even an exhausted teacher/scholar will summon up the energy reserves
needed to engage in it, and (b) even a hesitant researcher will divert time to pursue it. In
general, high motivation reforms that meet the following criteria should be prioritized:
1. Prioritize student-centered learning activities that generate research and publication
opportunities for faculty. Instead of advancing reforms which threaten to compromise
research output, why not prioritize reforms which incorporate it? To advance publication
efforts, students can be integrated into field research data collection and analysis. “For our
new curriculum development model, we wanted our teaching and research to be inextricably
linked, and our faculty’s research used as a learning tool. In addition we linked our teaching
and research to resources off campus – including industry, publishers, professional institutes,
and employers” (Jayaratna, 2005).
2. Prioritize interdisciplinary efforts that generate research and publication
opportunities for faculty. The benefits of interdisciplinary learning experiences (in course
content, linked courses, interdisciplinary modules, etc.) are well documented, but so are the
costs. Unless carefully supported, such reforms negatively impact faculty quality of work life
– they simply require too much time and effort (Smith, 2004). Interdisciplinary efforts which
lead not only to classroom innovations, but to conference presentations and journal
publications as well, are more likely to be supported and implemented by a broader range of
faculty. The moral of the story: faculty who collaborate in research and publication are more
likely to teach together as well, and vice versa. For administrators, interdisciplinary activities
represent the kind of curricular innovations which command respect from students and
parents, from accrediting agencies, and from funding sources. This type of high motivation
reform can take the form of entire new programs engaging more than one department or
school. Such is the case for Central Connecticut State University's master's and certificate
programs delivered through OnlineCSU (CCSU, 2004). Under the scope of the university's
mission, and using a timely infusion of external financial support, the multidisciplinary
interests of faculty in the mathematical sciences and computer science departments resulted
in the creation of the first fully online Masters and Certificate Data Mining programs in the
world. The faculty need for mutual affiliation, the potential for prestige generated by the first
program of its kind in a growing field relevant to all, the possibilities of reaching a student
audience worldwide, and the interest of the university in creating programs responding to
present and prospective workforce needs, became critical in the establishment of the program.
Faculty appreciate the advantages this type of program offers in pursuing both research and
grant opportunities. For students, online delivery requires continuous attention to the
achievement of learning goals, with program learning objectives assessed through a capstone
experience.
3. Prioritize reforms that do not overburden faculty. Given the heavy teaching, research
and service loads of faculty at teaching institutions, reforms which add to that burden are
likely to be resisted, not embraced. Consequently, reforms should be prioritized when they
are coupled with implementation strategies which minimize their intrusion and demands on
faculty life. For example, the SCSU Faculty Technology Resource Lab (FTRL) sponsors a
variety of initiatives to support faculty utilization of technology with assessment, teaching
and research applications. Their largest initiative is called STARS (Student Technology
Assistant Representatives). This innovative program allows technologically literate graduate
students to mentor and assist faculty, staff and other students in the use of computer
technology (FTRL, 2004). After extensive training with various kinds of hardware and
software, STARS provide desktop support for faculty and staff, network administration, web
development, programming, instructional technology, and a faculty technology walk-in
service center. STARS come from a variety of majors, including business. They are
recruited as students who have learned how to integrate the latest computer technologies in
their field. Upon graduation they receive a certificate as a professional credential. STARS
are paid for their work, in keeping with the service-learning goal to "learn while working" in
the Office of Information Technology.
IV. CONCLUSION
Unless faculty feel motivated to embrace change, they can find innumerable ways to
resist it. Consequently, researchers warn that most curricular reforms attempted in large
universities have failed (AACU National Panel, 2002; Barker, 2000). The collapse of
well-meaning efforts usually results from a discouraging combination of inadequate resources and
perceived opportunity costs. Given resource scarcity, the perceived costs and risks of shifting
time and effort commitments away from other activities towards teaching are enormous.
However, high-motivation reforms appeal to everyone involved by addressing common and
complementary needs and interests.
REFERENCES
Bisoux, Tricia. “The Extreme MBA Makeover.” BizEd, 4(4), 2005, 527-533.
Editorial. “B-Schools for the 21st Century.” Business Week, April 15, 2005, 112.
Danko, James M. and Anderson, Bethanie L. “In Defense of the MBA.” BizEd, 5(1), 2005,
24-29.
Ghoshal, Sumantra. “Bad Management Theories Are Destroying Good Management
Practices.” Academy of Management Learning and Education, 4(1), 2005, 75-91.
McClelland, David. Motivational Trends in Society. New York: General Learning Press,
1971.
Mintzberg, Henry. Managers Not MBAs: A Hard Look at the Soft Practice of Managing and
Management Development. San Francisco: Berrett-Koehler, 2004.
CHAPTER 21
POLITICAL COMMUNICATION
AND
PUBLIC AFFAIRS
MEDIA FRAME: THE WAR IN IRAQ
ABSTRACT
This study analyzes the framing of the war in Iraq in the coverage of four principal
American newspapers over two and a half years. The researcher shows how the media
framing of the war changed over time and with the political climate of the moment. A
quantitative content analysis confirmed the hypothesis that American newspapers tend to
change their framing of the war in Iraq as the conflict develops: the longer the war lasted,
the weaker their support became. Regarding the exploratory question of which newspapers
supported the U.S. position in the war, the New York Times and USA Today were more
critical of the U.S. position than the San Francisco Chronicle and the Washington Post.
I. INTRODUCTION
Media significantly shape the way we view and understand present issues. By
focusing attention on selected issues while ignoring others, media framing influences public
opinion. This study analyzes the framing of the war in Iraq in the media coverage of four
principal American newspapers: the New York Times, the Washington Post, USA Today,
and the San Francisco Chronicle, and it shows how the media framing of the war changed
over time and with the political climate of the moment.
Framing News
Framing occurs when, in the course of describing an issue or event, the media’s
emphasis on a subset of potentially relevant considerations causes individuals to focus on
those considerations, rather than others, when constructing their opinions. Fuyuan Shen
(2004) says that media frames can have significant consequences on how audiences perceive
and understand issues and can alter public opinions on ambivalent and controversial issues.
Among studies about media framing, Adam Simon (2001) says that to fully understand how
framing works, it is important to know how human memory works. Memory associates
concepts to create ideas. Therefore, choosing the right concepts in a story can evoke the right
idea by association of those concepts. Matthew Nisbet, Dominique Brossard, and Adrianna
Kroepsch (2003) analyze framing and the role of journalists in constructing dramatic
stories. They support the idea that wartime is one of the most profitable times for the media
business, since there are a lot of stories that reflect human drama. On the other hand, Jack
Lule (2003) analyzed the metaphors used in the coverage of the war in Iraq. According to
Lule, metaphors frame the news and are used by media and politicians in the conception and
construction of war. During the Iraq war, media adopted the metaphors used by the Bush
administration. These metaphors provided a means to understand how the prelude to war was
framed and portrayed by news media that anticipated rather than debated the prospect of war.
Ray Eldon Hiebert (2003) analyzes how the Bush administration frames the war in Iraq
through public relations and propaganda strategies. He says that the biggest and most
important public relations innovation of the Iraq War was the embedding of about 600
journalists with the troops doing the actual fighting.
International Framing
Through an experimental study, Paul Brewer, Joseph Graf and Lars Willnat (2003)
examined how media affect the standards by which people evaluate foreign countries. News
stories present a frame linking an issue to a foreign nation in a way that suggests a particular
implication shaping how audience members judge that nation. Ilija Tomanic Trivundza
(2004) analyzes how media shape our knowledge of the world. He indicates that in the
international media coverage of the Iraq war, the ideological framing depends primarily on
culturally specific patterns of self-identification with the nations or cultures involved in the
conflict. According to him, media frame nations based on antagonism (the good vs. the bad,
the inferior vs. the superior, etc.). Tamar Liebes (1992) compared the coverage of the Gulf
War and of the conflict in Israel by American and Israeli media, analyzing how the
media cover war differently depending on who is involved. Journalists must reconcile their
patriotic fervor, their instinctive loyalty to their own country, and their professional duty of
morale building with the standards that preside over their careers. After two years of analysis,
she concluded that the ideology of objectivity, neutrality, and balance is reserved for reporting
other people’s troubles rather than one’s own.
Wilhelm Haumann and Thomas Petersen (2004) studied German public opinion
towards the U.S. position in the war in Iraq. American and German media framed the news
of the war differently: while U.S. media concentrated on the actions of the governments
involved in the conflict and on news from the battlefield, German media focused on the
civilian population in Afghanistan.
III. HYPOTHESIS AND EXPLORATORY QUESTION
The hypothesis of this study was: American newspapers tend to change their framing
of the war in Iraq according to the development of the conflict; the longer the war lasts, the
weaker their support will be. The exploratory question of this study was: According to
their news frame, which newspapers support the U.S. position in the war in Iraq?
Through quantitative content analysis the researcher tested the hypothesis and
exploratory question. Using a research randomizer table, the researcher built 10 constructed
weeks in order to compare the newspapers’ framing. The unit of analysis was each title and
each lead of the selected stories. The independent variable was the newspaper’s name, and
the dependent variable was the nature of the framing of each story (favorable, unfavorable,
balanced or factual framing toward the U.S. position). The newspapers were selected on the
basis that they typify the population and because of their importance and influence in their
respective locations. The sample size of the study was 1,243 stories. A story was coded as
favorable if it reflected positively on the U.S. position in the Iraq war; highlighted the talents
of the U.S. military or of the Coalition of the Willing (a group of 38 nations acting
collectively, and often militarily, outside the jurisdiction of United Nations mandates and
administration); or associated them with positive characteristics or actions. A story was
coded as unfavorable if it reflected negatively on the U.S. position in the Iraq war; associated
the U.S. or the Coalition of the Willing with unethical, illegal, or immoral behavior;
suggested the U.S. or any of its coalition countries was a source of problems; or
associated them with a negative experience of failure. A story was coded as balanced if the
story provided nearly equal amounts of positive and negative information. A story was coded
as factual if the facts transmitted in the news information did not reflect any of the
characteristics of the favorable, unfavorable or balanced stories toward the U.S. position in
the war.
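The constructed-week procedure described above (drawing one random Monday, one random Tuesday, and so on from the whole study period, so that weekday publication cycles do not bias the sample) can be sketched as follows. The date range, number of weeks, and random seed below are illustrative assumptions, not the study's actual parameters.

```python
import random
from datetime import date, timedelta

def constructed_weeks(start, end, n_weeks, seed=0):
    """For each constructed week, draw one random Monday, one random
    Tuesday, ..., one random Sunday from the full study period, so that
    weekday-driven cycles in coverage do not bias the sample."""
    rng = random.Random(seed)
    days = [start + timedelta(days=i) for i in range((end - start).days + 1)]
    by_weekday = {wd: [d for d in days if d.weekday() == wd] for wd in range(7)}
    return [sorted(rng.choice(by_weekday[wd]) for wd in range(7))
            for _ in range(n_weeks)]

# Illustrative: ten constructed weeks over roughly two and a half years
weeks = constructed_weeks(date(2003, 3, 20), date(2005, 9, 28), n_weeks=10)
```

Each constructed week contains one issue per weekday; sampling stories from those dates approximates an unbiased cross-section of the whole period.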
IV. RESULTS
Since the beginning of the war, and after two and a half years of conflict, a chi-square
analysis revealed that of the 1,243 stories that covered the Iraq war in these four newspapers,
27.0% were favorable, 26.9% unfavorable, 8.7% balanced, and 37.4% factual towards the
U.S. position (Table 1). During the first constructed week, 36.1% of the stories were
favorable toward the U.S. position on the war; by the last constructed week, favorable stories
had fallen to 19.1%. In between, favorable coverage tended to decline month after month.
Unfavorable stories toward the U.S. position went from 24.6% during the first constructed
week to 33.0% during the last. Across the 10 constructed weeks the level of unfavorable
stories fluctuated constantly as well, but in general unfavorable coverage increased month
after month. Most news stories were factually framed (37.4%), and the share of factual
stories rose over the time frame while the share of balanced stories fell.
Table 1: Framing of Iraq War Stories Toward the U.S. Position
Period Favorable Unfavorable Balanced Factual Total
From 09/21/03 to 12/21/03 20.9% 29.7% 7.7% 41.8% 100.0%
From 12/22/03 to 03/22/04 24.1% 35.4% 7.6% 32.9% 100.0%
From 03/23/04 to 06/23/04 20.8% 29.6% 8.2% 41.5% 100.0%
From 06/24/04 to 09/24/04 34.5% 15.0% 3.5% 46.9% 100.0%
From 09/25/04 to 12/25/04 25.0% 27.1% 10.4% 37.5% 100.0%
From 12/26/04 to 03/26/05 12.5% 30.0% 11.3% 46.3% 100.0%
From 03/27/05 to 06/27/05 23.9% 23.9% 5.7% 46.6% 100.0%
From 06/28/05 to 09/28/05 19.1% 33.0% 6.4% 41.5% 100.0%
TOTAL 27.0% 26.9% 8.7% 37.4% 100.0%
Note. N = 1,243; chi-square = 64.93, df = 27; p < .01
The Washington Post had among the highest levels of favorable (39.7%) and
unfavorable (30.0%) coverage towards the U.S. position on the war (Table 2). The same
pattern appeared in the San Francisco Chronicle’s coverage (41.7% and 33.8%). In both
newspapers, however, stories tended to be framed more favorably than unfavorably. The
New York Times had among the highest proportions of factual coverage (55.6%), along with
USA Today (61.2%), and reported nearly equal amounts of favorable and unfavorable stories
(19.1% and 22.8%). USA Today, by contrast, showed a marked gap between favorable and
unfavorable stories: 28.7% were unfavorable and only 10.7% favorable, an
18.0-percentage-point difference.
V. CONCLUSION
Consistent with the studies in the literature, a large proportion (53.9%) of the Iraq
war coverage by American newspapers was favorably or unfavorably framed towards the
U.S. position in the conflict. The hypothesis was confirmed
since the newspapers changed their framing of the war in Iraq according to the development
of the conflict. During the first week of the conflict, the coverage of the war was more
favorable than unfavorable with 11.5 percentage points difference (36.1% vs. 24.6%). After
two and a half years, the unfavorable stories were 13.9 percentage points higher than the
favorable stories (33.0% vs. 19.1%). After the first six months of the conflict, favorable
news dropped dramatically from 36.1% to 20.9% and never regained its original level.
Coverage in the weeks with large amounts of favorable news mainly concerned issues such
as the end of Saddam Hussein’s regime, the conquest of Iraq, the killing of Saddam
Hussein’s sons, Saddam Hussein’s capture, the U.S. commitment to democracy, a free Iraqi
government, democratic elections, and Saddam Hussein’s trial. Coverage in the weeks with
large amounts of unfavorable news mainly concerned issues such as attacks against U.S.
troops, the unproven weapons of mass destruction, international disagreements (at the UN)
about the war, the mishandling of the Iraqi reconstruction, economic and oil issues,
international terrorist attacks related to the Iraq war, the mutilation of U.S. civilians in
Falluja, the photos from Abu Ghraib prison, and the American presidential elections,
among others. The exploratory question investigated which newspapers supported the U.S.
position on the war in Iraq according to their news frame. The Washington Post had a more
favorable framing of the war than an unfavorable one. The San Francisco Chronicle mainly
balanced the amount of favorable and unfavorable news, taking a more neutral position. The
New York Times and USA Today published more factually framed stories. The New York
Times maintained a fairly well-balanced framing of favorable and unfavorable stories (19.1%
and 22.8%), as well. The newspaper that most openly showed its disagreement with the
conflict and the decisions taken by the Bush Administration was USA Today.
REFERENCES
Brewer, Paul; Graf, Joseph; Willnat, Lars. “Priming or Framing: Media Influence on
Attitudes Toward Foreign Countries.” Gazette, 65(6), 2003, 493-508.
Entman, Robert. “Cascading Activation: Contesting the White House's Frame After 9/11.”
Political Communication, 20(4), 2003, 415-432.
Haumann, Wilhelm & Petersen, Thomas. “German Public Opinion on the Iraq Conflict: A
Passing Crisis with the U.S.A. or a Lasting Departure?” International Journal of Public
Opinion Research, 16(3), 2004, 311-330.
Hiebert, Ray. “Public Relations and Propaganda in Framing the Iraq War: A Preliminary
Review.” Public Relations Review, 29(3), 2003, 243-255.
WOMEN’S IMAGE AND ISSUES: A COMPARISON OF
ARAB AND AMERICAN NEWSPAPERS
ABSTRACT
This study examines the differences in reporting women’s issue news in Arab and
American newspapers. Results reveal the American press does not report on women’s issues
any more or less than the Arab press. Arab newspapers, however, have a more positive slant
on women while American newspapers run more negative stories about women. Results are
discussed in relation to the contrast between Arab and American law and social customs.
I. INTRODUCTION
Feminist and political leaders throughout the Arab region have long argued that one
of the most harmful exports of American media is an inaccurate image of Muslim women
(Darraj, 2002). Egypt’s first lady, Suzanne Mubarak, president of the Arab Women’s
Summit, called the image of Arab women “distorted and unfair” (quoted in Hanley, 2002).
Jabar Asfour of Egypt’s National Council for Women warned that there is a new urgency in
combating Western stereotypes of Muslim women saying “Many Westerners mistakenly
view Taliban women as quintessential Muslim women” (quoted in Hanley, 2002, p. 53). And
Fatima Zayed, wife of the late president of the United Arab Emirates, suggested that Islamic
nations face a hostile media campaign in the West that “often focuses on Muslim women’s
rights, implying they have none” (quoted in Chu & Radwan, 2004, p. 39).
Critics claim a more accurate image of Middle Eastern women can be found in media
produced in areas of the world with large Arab populations such as countries along the
Persian Gulf or in northern Africa (Abernethy & Franke, 1996; Al-Olayan & Karande, 2000).
Since the most effective way to learn about a culture is to live immersed in it (Love &
Powers, 2002), reading media publications originating in the Arab world provides a more
precise picture of Middle Eastern society than the western media exported around the world.
Challenges in gaining access to Arab communities due to travel restrictions, inadequate
language skills, or potentially threatening political environments have limited western
reporters from effectively reexamining these media stereotypes (Feghali, 1997) and, as a
result, most western ideas about women in Arab culture are “more fiction than fact” (Khatib,
1994, p. 58).
Although there are conflicting views about the degree of male oppression over women
in Arab society, scholars agree that there is a considerable difference in the status of women’s
rights in Arab countries and western countries (Darraj, 2002; Fernea, 2000; Fox, 2002;
Hymowitz, 2003; Ray & Korteweg, 1999; Sakr, 2002; Saliba, 2000). These differences in
status are a significant influencing factor in determining what is reported about women and
women’s issues in both Arab and American media (Sakr, 2002). The purpose of this paper is
to determine whether American and Arab newspapers are emphasizing news about women
that perpetuates international stereotypes. Specifically, this study examines (a) to what extent
international editions of American and Persian Gulf newspapers carry news about women and
issues relevant to women; and (b) to what extent the news, whether about Arab or non-Arab
women, is treated positively or negatively.
Arab women, therefore, have traditionally had fewer opportunities than American women
to participate in newsworthy activities (El-Ghannam, 2003; Ray & Korteweg, 1999). Their
interests
were limited to environments in which there was little or no interaction with men, other than
family members (Feghali, 1997). Educational opportunities were reserved for male children
while business and property rights were afforded to women only under the direction of a
husband, father, or brother. Child raising, household management, and involvement with
religious or charitable organizations were encouraged but closely monitored by other family
members (Fernea, 2000). Although some scholars suggest that this paradigm of men in the
public sphere controlling women in the private sphere (i.e., the home) is no longer valid
(Fernea, 2000), many admit that the so-called “cultural evolution of Arab society”
(Immerman & Mackey, 2003, p. 217) has not significantly permeated Arab life (Fox, 2002;
Hymowitz, 2003; Ray & Korteweg, 1999; Sakr, 2002; Saliba, 2000). Activist women’s
groups are pushing for change through both governmental and nongovernmental initiatives
(Sakr, 2002) but many media outlets in the Arab world fail to cover these events due to
government censorship or societal pressure (Darwiche, 2000).
IV. HYPOTHESES
Since proponents of women’s rights charge that coverage of women’s issues is lacking in
the Arab media, the following hypotheses are proposed:
H1 There should be a difference in the proportion of women’s news carried by
American and Arab newspapers.
H2 American newspapers will carry more hard news, business news, and editorials about
women and women’s issues than will Arab newspapers.
H3 American newspapers will carry more positive women’s news than will Arab
newspapers.
H4 American newspapers will carry more negative news about Arab women and their
issues than about American women’s issues.
H5 American newspapers will carry more negative news about Arab women than will
Arab newspapers.
V. METHOD
International editions of two American and three Arab newspapers were used for this
study. The newspapers examined were the International Herald Tribune (United States),
USA Today, the Saudi Gazette, the Gulf News (United Arab Emirates), and the Arab Times
(Kuwait). The American newspapers were selected based upon their availability throughout
the 22 Arab nations in the Middle East and northern Africa. The English-language Arab
newspapers represented the spectrum of freedom of expression in the region, ranging from
the most limited (Saudi Arabia) to more moderate (UAE) to the most open (Kuwait) (Rugh,
2004). All sections of the newspapers were coded except for classified advertising. Two
coders were trained to examine the data. The unit of analysis was the news story, which
included hard news, features, sports, business, editorials, and accompanying pictures. The
articles and pictures selected for analysis were judged to be about women or relevant to
women’s issues. Examples include Arab women’s groups asking the West for help with
women’s rights issues, Israeli attacks on Palestinian families, Chinese women protesting an
extension of the time required for marriage before they can apply for Taiwanese citizenship,
kidnappings in the Philippines, and an American woman biting off the tongue of a would-be
rapist. Coders examined articles published over a one-month period. Intercoder reliability for
all variables combined was 94.25. Relevant items were then rated according to whether the
slant on the news was positive, negative, or neutral. Positive article/pictures covered events
beneficial to women while negative articles/pictures portrayed women as victims or acting in
harmful ways to themselves or others. When the effect or image of women could not be
determined the slant of the news was considered neutral. Intercoder reliability was 91.05.
When ratings differed between coders, the differences were discussed until a consensus was
achieved.
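The paper reports intercoder reliability figures (94.25 and 91.05) without naming the formula; if these are percent-agreement scores for the two coders, the computation would look like the following hypothetical sketch (the slant codes below are invented for illustration, not taken from the study).

```python
def percent_agreement(coder_a, coder_b):
    """Simple intercoder reliability: 100 * matches / items,
    for two coders rating the same list of items."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Both coders must rate the same items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

# Hypothetical slant ratings for eight articles
a = ["positive", "negative", "neutral", "positive",
     "negative", "negative", "positive", "neutral"]
b = ["positive", "negative", "neutral", "positive",
     "positive", "negative", "positive", "neutral"]
agreement = percent_agreement(a, b)  # 7 of 8 codes match -> 87.5
```

In the study, remaining disagreements were then resolved by discussion; a chance-corrected statistic such as Cohen's kappa would give a stricter estimate by discounting agreement expected by chance.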
VI. RESULTS
To determine the news coverage of women’s issues, a total of 17,780 stories were
content analyzed in 152 issues of American and Arab newspapers. Women’s news was
covered in 2,040 stories: 10.8% of the American articles (N = 564) and 11.8% of the Arab
articles (N = 1,476) (see Table 1).
Table 1: Women’s Issue Coverage
________________________________________________________________
Newspaper Women’s Articles Total Articles Percentage
Herald Tribune 172 2616 6.6
USA Today 392 2576 15.2
Saudi Gazette 280 2532 11.2
Gulf News (UAE) 624 6000 10.4
Arab Times (Kuwait) 572 4056 14.1
_________________________________________________________________________
Hypothesis 1: The proportion of women’s issue news in American newspapers was 10.8%
whereas the proportion of women’s news in Arab newspapers was 11.8%. The
difference in proportions was not significant, X2(1, N = 2040) = 0.54, p > .05.
Hypothesis 2: The proportion of hard news, editorials, and business news about women’s
issues was 69.5% in American newspapers whereas the proportion of similar women’s news
in Arab papers was 69.3%. The difference in proportions was not significant, X2(1, N = 1416)
= 0.00, p > .05.
Hypothesis 3: The proportion of positive women’s news in American newspapers was 45.4%
while the proportion of positive women’s news in Arab papers was 59.9%. The difference in
proportions was significant, X2(1, N = 1140) = 10.30, p < .01. Post hoc analysis of editorials,
business, and hard news about women’s issues showed the proportion in the American media
was 39.8% and the proportion of similar stories in the Arab media was 56.5%. The difference
in proportions of these types of stories was also significant, X2(1, N = 660) = 10.23, p < .01.
Hypothesis 4: In American newspapers the proportion of negative news about Arab women’s
issues was 38.8% while the proportion of negative news about American women’s issues was
19.1%. The difference in the proportions was significant, X2(1, N = 122) = 4.72, p < .05.
Hypothesis 5: The proportion of negative news about Arab women’s issues in American
newspapers was 38.8% whereas the proportion of negative news about Arab women’s issues
in Arab newspapers was 15.0%. The difference in the proportions was significant, X2(1, N = 28)
= 5.17, p < .025.
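Each hypothesis above is tested with a 1-df chi-square comparison of proportions. A minimal Pearson chi-square statistic for a 2x2 contingency table (without continuity correction) can be sketched as follows; the counts used here are hypothetical illustrations, not the study's raw data.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]] with 1 degree of freedom (no continuity correction)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, observed_row in enumerate(table):
        for j, observed in enumerate(observed_row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Hypothetical counts: positive vs. non-positive stories, US vs. Arab press
stat = chi_square_2x2([[30, 70], [50, 50]])
```

The resulting statistic is then compared against the chi-square critical value for 1 df (3.84 at p = .05) to decide significance.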
VII. DISCUSSION
The intent of this paper was to examine differences in coverage of women’s news by
American and Arab newspapers. The results failed to support the expectation that newspapers
in America give women’s issues more coverage: the American press covers neither hard news
nor general women’s news any more or less than the Arab press. Arab newspapers, however,
have a more positive slant on women while American newspapers run more negative stories
about Arab women. In addition to providing entertainment and advertising, the role of the
Arab media is to convey news, provide opinions, and reinforce social norms and cultural
awareness (Rugh, 2004). In this way, Arab mass media performs the same basic functions as
media in America and throughout the world. Unlike the libertarian system of the American
press, however, Arab media outlets work under a more authoritarian system, supporting and
advancing the policies of the government (Siebert, 1953).
The results of this study indicate that Arab newspapers reflect an image of women
that is more similar to American journalism than the restrictive, sexist society commonly
associated with Arab culture. Perhaps this can be accounted for in differences between the
laws and social customs of many Arab states. Throughout the Middle East, laws require
equality between the sexes. These laws reflect the tradition of Islamic teachings that women
have the right to inherit property, own and operate businesses, and be educated (Darraj,
2002).
Although parts of the Qur’an, like other religious writings, have been interpreted to
support male oppression of women, Islamic teaching is supportive of a woman’s right to
free expression (Saliba, 2000). Due to the close relationship between government and religion
in most Arab nations, the legal systems also support equality between the sexes. The Arab
press, having developed under this traditional, authoritarian structure, reflects these
government policies and, therefore, provides women’s news significant coverage. Social
customs in Arab society, however, often do not reflect government laws supporting women’s
rights. Arab women may be prohibited from exercising their full ranges of choice in
employment and education, or even the option of expanding beyond their traditional role in
the home (Chu & Radwan, 2004). News stories about the plight of Arab women are
underreported or omitted entirely, especially in more traditional countries such as Saudi
Arabia, Kuwait, and the Sudan (Rugh, 2004). This is an area that significantly differentiates
Arab and American societies. The results of this study indicate American media will
highlight these “negative” stories while Arab newspapers will not.
VIII. CONCLUSIONS
Critics of American media claim that the press presents a negative stereotype of Arab
women and society. Critics of Arab media argue the press does not give full coverage of
important women’s issues and the result is an “Arab public that has to resort to foreign media
outlets…to get something resembling the full picture” (El-Affendi, 1993, p. 187). The results
of this study suggest, perhaps, both viewpoints are right. It is hoped that understanding the
differences in Arab and American newspapers will lead to greater insight into the issues that
affect women of both countries and ultimately improve cultural relations between Arab and
western societies.
REFERENCES
DOES CHARITY TRULY BEGIN AT HOME?
ABSTRACT
As a result of Hurricane Katrina, many agencies, both governmental and private, are
restructuring planning and business strategies. The failure in leadership has led to finger-
pointing, resignations, and a tremendous amount of media coverage. A call for charitable
donations to help the people of this region has reached unprecedented proportions. This
paper investigates the relationship between the epicenter of a natural disaster and the
motivation to respond to that tragedy. The authors conducted a survey in Texas, Louisiana,
and Ohio. The results suggest that the closer a person was to the event, the more information
they craved. The results also suggest that there is no correlation between celebrity
endorsement and the motivation to give. Many obstacles arose in conducting this study; a
discussion of those obstacles and an overview of charitable giving are included in the paper.
I. INTRODUCTION
Prior to the storm, New Orleans was a world-famous tourist destination, a city known
for its historic atmosphere. The 2000 U.S. census indicated that in excess of 1.3 million
people lived in the greater New Orleans area, with nearly a half million people in the city
alone. The aftermath, according to Wikipedia, left the city with fewer than 150,000 people
(http://en.wikipedia.org/wiki/New_Orleans).
It is no wonder that the devastation of New Orleans and the surrounding areas caused
by the hurricanes resulted in an outpouring of donations from across the nation and the
world. The media was saturated with messages asking for charitable assistance. Luminaries
ranging from musician Aaron Neville to former presidents Bill Clinton and George H.W.
Bush helped bolster the cause.
II. BACKGROUND
U.S. News and World Report (2003) indicated that some two-thirds of Americans
donate to charities. Individual American citizens in 2002 accounted for more than 80% of
the $241 billion donated to charity. Of those donations (an average of $2,499 per person),
35% went to religious institutions, while 13% went to education. With over 1.6 million non-
profits in the U.S. taking contributions under normal circumstances, many non-profit
executive directors have expressed concern that the extra demands of the Hurricane disasters
will affect overall donations. Some observers expect no drop in traditional charitable giving
despite the extra demands for contributions sparked by Hurricanes Katrina and Rita. The
charitable organization Star of Hope in Houston, TX reported in late December a "bleakly
empty warehouse." This warehouse usually serves approximately 1,000 homeless people
during the winter holiday season. Marilyn Fountain, Star of Hope spokeswoman, stated there
was a $1.4 million shortfall for December (Holiday Donations, 2005). Sandra Miniutti, a
spokeswoman for Charity Navigator, a non-profit group in New Jersey that evaluates
philanthropic groups, stated that last year charitable giving was about $250 billion and that
no reduction was expected. However, there is believed to be some "donor fatigue" that could
create problems for small charities and food banks. About half the donations for small
charities and food banks are made between Thanksgiving and New Year's Day, according to
Miniutti (Holiday Donations, 2005). This led the authors to ask the question: What are the
patterns of giving and what factors might determine the likelihood and magnitude of
donations? Philanthropic patterns have been studied and there is some data regarding who
gives what. Much of the research is demographically oriented, such as the following.
Economic status seems to be one factor. According to The Chronicle of Philanthropy (April,
2004), middle-range wealthy Americans, those with annual incomes of $200,000 to $10
million, tend to give less to charity than both people who are richer and those with less
income. Gender is another factor that affects the likelihood of giving to charitable causes.
Single women are more likely to be donors. They also give larger gifts than their male
counterparts "with comparable education and income levels," according to Patrick Rooney,
research director at the Center on Philanthropy at Indiana University (Giving 'til It Hurts,
2005). Age appears to be a factor that affects the size of donations. While charities are
seeing an increase in giving from younger people, Gardner (2005) reports that older people
tend to give larger gifts. The beneficiaries of minority gifts can also be divided by age.
According to The Chronicle of Philanthropy (Oct., 2004), those born after the passage of the
civil rights legislation of the mid-1960s tend to support charities and educational causes that
help people of all ethnicities. However, people who are older are more likely to support
causes that help members of their own minority group. Another factor that weighs in on the
magnitude of donations is the Internet. Not only does it increase the speed at which
donations are given, it also increases the size of those gifts. At the Salvation Army, average
mail-in gifts for Hurricane Katrina ranged from $20 to $50, about one quarter of the average
online gift of $185 (Gardner, 2005).
III. METHODOLOGY
impact of different communication channels. Respondents were also asked demographic
questions.
An individual's distance from the epicenter (New Orleans) was calculated in
hundreds of miles. Thus, respondents in northern Ohio (slightly over a thousand miles away)
received a distance of 10, while individuals in Baton Rouge (about 60 miles away) received a
1. Additional demographic information was requested but was not used in this research. The
data were explored with factor analysis, reliability tests, and then multiple regressions. In
total, approximately 250 survey instruments were distributed and 166 surveys were returned;
8 were rejected because of missing data. If the number of questions answered was less than
half the number of items for a section, the individual survey was rejected. In total, 158
usable questionnaires were returned. The usable response rate was 63%, which is consistent
with research designs of this nature.
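As a concrete illustration of the distance coding described above (not from the original study), the rule can be sketched as follows; the rounding convention is our assumption, since the paper gives only the two example cities.

```python
def distance_code(miles: float) -> int:
    """Code distance from the epicenter (New Orleans) in hundreds of miles.

    Assumed convention: round miles/100 to the nearest integer, with a
    minimum code of 1, e.g. northern Ohio (~1,000 mi) -> 10 and
    Baton Rouge (~60 mi) -> 1.
    """
    return max(1, round(miles / 100))
```

Under this reading, `distance_code(60)` yields 1 and `distance_code(1000)` yields 10, matching the two examples in the text.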
[Table: AGE — number of items (valid/missing), mean, median, mode, and the number of respondents in each age range bin: 17, 18-20, 21-23, 24-29, 30-35, 37-45]
As suggested by Doll and Torkzadeh (1989), validity was assessed using factor
analysis and correlation between the items. Factor analysis reduced the component items to
four factors explaining 63% of the variance. Reliability was measured by Cronbach's alpha.
Alpha scores above 0.70 are considered satisfactory, and those above 0.80 are considered
excellent (Nunnally, 1978). The first factor, dealing with "charitable giving," contained six
items explaining 33% of the variance. Factor one's six items returned an alpha of 0.815. An
additional factor dealt with "charitable motivations" and had a low alpha of .602, which
would normally be considered acceptable for exploratory purposes only. All items were
significant at the .01 level.
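The reliability measure used above can be sketched generically; the function below is the standard Cronbach's alpha formula, and any data fed to it here would be illustrative, not the study's actual responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_var = items.var(axis=0, ddof=1).sum()  # sum of per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_var / total_var)

# Perfectly consistent items yield alpha = 1.0; by Nunnally's (1978) rule of
# thumb, alpha above 0.70 is satisfactory and above 0.80 is excellent.
```

A scale whose items all move together produces a high alpha, while unrelated items drive alpha toward zero, which is why the .602 value above is treated as exploratory only.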
IV. CONCLUSION
This study empirically tested charitable giving in relation to recent natural disasters.
We were specifically interested in the August devastation of New Orleans, Louisiana
(considered a U.S. treasure) by Hurricane Katrina. To try to limit the halo effect, the survey
was conducted several months after the Katrina disaster. This research was not meant to
provide an overall relationship of philanthropy with natural disasters. It investigated the
attitudes of college students in relationship to various factors. The initial premise was that
there existed a relationship between an individual's distance from the epicenter of the
disaster and his or her charitable intentions. This was not supported. The only relationship
supported with the distance criterion concerned people's attitudes toward the level of news
coverage, and it was inverse: the closer people were to the event, the more they craved
information. The survey also explored the contribution of celebrity endorsements for the
purposes of giving. This study found no correlation between intent to give and individual
celebrity endorsements. The level of contribution was found to be significantly (p < .05)
related to a person's natural propensity to donate and the emotional impact of the event. A
person's emotional impact was found to be correlated to the level of news received from
cable news networks such as Fox, CNN, and MSNBC.
A person's willingness to volunteer was found to be significantly (p < .01) related to his or
her willingness to volunteer time in general and willingness to donate funds.
Part of the problem with measuring the amount of charitable intention, as well as the
amount of actual giving, is the multitude of natural and man-made disasters. While each
charitable cause has its own virtues, the average person has only a limited amount of time
and money. If a new disaster arises, the conventional contributor may not have the resources
to donate again. The number of charitable causes in today's society has led to an
increasingly competitive environment among charitable organizations. Everywhere one
looks, in every conceivable environment, some group or organization is asking for a
donation. This oversaturated industry is calling on ordinary people to support an
overwhelming number of causes. But how does one choose which charity to give to? As the
competition for donations increases, many groups use the media to show horror stories. This
leads to desensitization: the process of seeing so many shocking images that they no longer
have an effect on the intended recipient. The graphic images portrayed within the media
(both paid and unpaid) could very well corrupt the intended result. As images from the
world's disasters hit the general public, a sense of "I have seen that before" and "what makes
it different from any other disaster" could tend to lower donations.
Another factor that could play into donations is the perception of the disaster. With
the recent increase in weather-related disasters, the public may not be inclined to donate as
much time or money to areas that are prone to these conditions. It also did not help that
predictions (from private and government officials) of a New Orleans area hurricane-related
disaster had been expressed in biblical terms. These predictions have left some of the
general public with a feeling that the area deserves what it gets for not preparing ahead of
time. In addition, the failure of the U.S. government (local and national) in dealing with the
aftermath of the hurricane could also have affected donations. While criticism of the Federal
government is high, and deservedly so, there were also foul-ups at the local level, as the
mayor failed to execute the disaster plan. Charges were exchanged. Some said the mayor did
not activate the city's school buses for evacuation. Louisiana Governor Blanco claimed that
FEMA told them not to use the buses because they were not air conditioned and heat stroke
was feared. The governor also claimed that FEMA promised suitable buses that never
arrived. In addition to the bus issue, an Amtrak train that offered to carry hundreds of
passengers out of the city was declined by the city, and so the train left without passengers.
Failure to preposition food and water at the Superdome is also considered a city government
blunder. These sorts of failures left many Americans discouraged about preparation. Also,
many non-profit charitable organizations have argued that a disaster this big is a Federal
obligation. Unfortunately for Katrina and Rita victims, government priorities shift. With
issues like the "war on terror," the level of deficit spending, and the call for re-prioritization
of Katrina relief by Congresspersons and Senators from unaffected areas, Federal relief will
likely be inadequate. This series of events could have led to a "why should we donate"
attitude, especially if the government is not going to.
Publicity could also have been an issue. Because the New Orleans disaster was one
of the most devastating recent U.S. natural disasters, it received an enormous amount of
media coverage. This continual coverage increased awareness and may have contributed to
the amount of overall donations. The additional coverage, while intended to raise donations,
may have had the opposite effect. Many people give one-time contributions to charities.
Extending coverage will most likely not get people to give a second time. A byproduct of
continually publicizing the after-effects of the hurricane has also led to many allegations
(racism, misappropriation of funds, etc.) turning people off from donating to any cause
related to the area.
Many causes do not do well because they are perceived to be long term; the
consensus is that no matter how much money is thrown at the problem, it will never be fixed.
Other causes do not receive money because they are deemed unworthy: the participants
involved are faulted for doing nothing to correct the problem. Still others are not funded
because of mismanagement. All of these issues could be applied to the New Orleans
disaster, thus leading to lower donations. This study specifically looked at proximity to New
Orleans and donation patterns geographically. The areas surveyed were at varying
geographic distances, but only one was inland. Although the authors posited that geographic
proximity would be predictive of giving, it was not a determinant. While no real correlation
was found, it could be because the authors failed to take into account the coastal location of
the majority of the survey respondents. Most coastal region residents have experienced
windstorms and accompanying floods. These coastal respondents have the potential to
identify with others who live in coastal regions. From Burke (1950) to Strong and Cook
(1996), persuasion scholars have written about the power of identification. The images and
the language of the mediated messages about Katrina in the news may have allowed donors
and refugees to share a common experience concerning hurricanes and floods coming close
to their own homes. This level of identification may be a factor more important than the
region of the country from which one comes.
REFERENCES
Nunnally, J. C. Psychometric Theory (2nd Edition). New York: McGraw-Hill, 1978.
Strong, W.F. & Cook, J.A. Persuasion: Strategies for Social Influence (4th Edition).
Dubuque: Kendall-Hunt, 1996.
IN THE PROCESS OF DECOLONIZATION:
THE RE-CREATION OF CULTURAL IDENTITY IN TAIWAN
ABSTRACT
I. INTRODUCTION
The major argument of this paper is that people's cultural identity is developed from
daily rhetorical activities (e.g., songs); but, according to Gramsci's claims about hegemony,
people's attitudes, values, and beliefs are controlled, or at least influenced, by the state's
mechanisms. A government can purposely lead the construction of cultural identity in order
to develop a specific national identity or to achieve particular political goals. The main
purpose of this paper is to analyze how Taiwanese people have constructed their identities
and faced their history in the process of decolonization. By analyzing two Taiwanese songs,
the author of this essay will discuss what type of local consciousness is revealed in these
songs and how it reinforces the development of the Taiwanese identity. This paper will also
discuss how and why the newly-formed Taiwanese identity has replaced the Chinese identity,
and its influence afterwards. The discussion in this paper points out the circular activity of
diffusing hegemony. In short, when a government errs in utilizing the national mechanism,
people's attitudes, beliefs, values, and identities may exceed the government's expectations.
People's resistance to an old ideology may be recognized. However, there is always a new
ideology replacing the old one and predominating over the subordinate class or other minor
groups in a society. Therefore, the activity of diffusing hegemony keeps
repeating, even though the meaning and type of ideology might differ, and will never stop.
The power and influence of music has been discussed for centuries. In his book The
Republic, Plato (n.d.) indicated that "any musical innovation is full of danger to the whole
state, and ought to be prohibited" (page 135). In Chinese history, philosophers, scholars, and
politicians also discussed the relationship between music and politics. The ancient Chinese
believed that music was an instrument for an imperial government to educate people, to
influence social customs, and to reform abuses. For instance, one Confucian book stated that
"The music in the piping times of peace is harmonious and joyful, and demonstrates the
harmony of politics; the music in troubled times is sad and angry, and demonstrates the
turmoil of politics; the music in a subjugated nation is grieved, and demonstrates the people's
predicament. Music and politics have a direct relationship" (Li Ji Zhu Shu, 1993, page 663).
Thus, it can be seen that both Western and Chinese philosophers noticed the power of music
in a society and
treated music as a tool serving political purposes. This kind of idea can be linked to
Gramsci's concept of hegemony. According to Gitlin (1980), "Hegemony is a ruling class's
(or alliance's) domination of subordinate classes and groups through the elaboration and
penetration of ideology (ideas and assumptions) into their common sense and everyday
practice" (page 253).
When Taiwan was ruled under martial law, the ruling party (KMT) of the ROC set a
strict standard to regulate and prohibit publications and music. In one of Chiang Kai-shek's
articles (Chiang was the ROC president from 1947 to 1975), he revealed the necessity for the
government to "train national righteousness, encourage fighting spirit… concentrate on the
music and song in order to correct the decadent music and excessive song that waste"
(Chiang, 1953, page 73). The government therefore diffused a series of patriotic songs,
including Chinese folk songs, military songs, anti-Japanese songs, and anti-communism
songs.
All of the patriotic songs under martial law represented a sense of "Great China," the
memory of the Chinese mainland, or enmity toward communism. Also, all of them were
written and sung in Mandarin. On one hand, by diffusing these patriotic songs, the KMT
government efficiently controlled the national ideology and created (or recreated) Taiwanese
people's identities. By admiring Chinese culture, history, and places, the government wanted
people to identify themselves as Chinese culturally and nationally. On the other hand,
however, this kind of patriotic song suppressed people's consciousness of being Taiwanese.
The word "Taiwan" was taboo and never appeared in any patriotic song during the period of
martial law.
In the late 1970s, when the government's control loosened, Taiwanese people finally
had a chance to express the anger that had been repressed for hundreds of years. The
resistance of Taiwanese people can be separated into two parts: politically, many people in
Taiwan organized a series of demonstrations against the authoritarian regime and for
Taiwan's independence; culturally, Taiwanese people's resistance against the government's
ideology of Chinese identity reinforced the awareness of Taiwanese local consciousness. In
other words, more and more Taiwanese have re-learned the history of Taiwan that was
purposely ignored by different colonizers, emphasized the education of dialects, and changed
their cultural identity from Chinese to Taiwanese. This kind of political and cultural change
is reflected in the results of the 2000 and 2004 presidential elections in Taiwan as well as in
modern Taiwanese literature and music. In this paper, I select two Taiwanese songs that are
usually sung on occasions of seeking Taiwan's independence in order to discuss the change
in people's cultural identity and the development of local consciousness in Taiwan.
IV. CONCLUSION
In this paper, "Mother's name is Taiwan" and "Do Not Annoy Taiwan" are chosen
to analyze the re-creation of cultural identity in Taiwan. These two songs share some
characteristics. First, both of them encourage Taiwanese people to recognize that their
homeland is Taiwan rather than anywhere else. The first song indicates directly that
mother's (the motherland's) name is Taiwan, and the second song asks people to love
Taiwan. "Mother's name is Taiwan" starts by describing Taiwan's geographical features;
"Do Not Annoy Taiwan" also mentions Taiwan's geography, but it pays more attention to
Taiwan's plentiful produce. Taiwan's other name is Formosa, which means "beautiful
island." Taiwan is famous for its abundance of landforms. This little island is surrounded by
sea and consists of mountains, hills, plateaus, and coasts. All types of climate can be found
in Taiwan; hence, its produce is ample. Both Taiwanese songs point to this characteristic
and imply that Taiwanese people should be proud of their land. In addition, they both
represent a painful memory of being colonized. Historically, under Qing dynasty rule,
Taiwan was a borderland of China and received little notice from the imperial government;
under Japanese rule, Taiwan was a colony of Japan and lacked independence; and under
KMT government rule, Taiwan was a base for the ruling party to defeat the Chinese
Communist Party and to recover the Chinese mainland. The word "Taiwan" and its meaning
have been prohibited, ignored, or even distorted. These two Taiwanese songs describe this
situation: "Mother's name is Taiwan" uses the words "dumb person" and "no voice" to
describe people's silence when they were colonized, and then it raises several questions
asking Taiwanese people why they would have negative feelings about recognizing that they
are Taiwanese and that their motherland is Taiwan. "Do Not Annoy Taiwan" describes a
situation in which being Taiwanese was a source of shame when Taiwan was colonized and
consequently asks people to abandon the old ideology.
In terms of cultural identity, both songs present positive features of Taiwanese
culture and adopt historical elements to reinforce Taiwanese people's local consciousness.
In "Mother's name is Taiwan," the lyrics describe Taiwanese culture with direct, positive
words, such as intuitive truth, justice, and spring. Both songs reveal a fortitudinous
Taiwanese spirit. "Do Not Annoy Taiwan" presents it directly; at the end of the song, it
encourages people to face a difficult life bravely and insists that Taiwanese people are no
worse than others. By contrast, the lyrics of "Mother's name is Taiwan" use "the sweet
potato son" to represent Taiwanese people. The sweet potato is a plant native to Taiwan and
is easy to grow. From the 1940s to the 1950s, many Taiwanese were poor and had no money
to buy rice and food. They therefore dug sweet potatoes from their farms and ate them as a
staple food every day. In the late 1970s, when Taiwanese organized a series of political
activities to resist the authorities, they started to call themselves "sweet potato sons." In
"Mother's name is Taiwan," the phrase "sweet potato sons" helps the song achieve at least
two goals: first, it reinforces people's local consciousness; and second, it recalls that period
of poverty and consequently asks people not to repeat the same mistakes. "Do Not Annoy
Taiwan" also contains historical elements to emphasize people's cultural identity and local
consciousness. However, this song chooses a positive description; the purpose of
mentioning "ancestor" and "generation" is to tell Taiwanese people that their families moved
to Taiwan hundreds of years ago and that their children and descendants will continue to live
on this land. This kind of depiction asks people to review their family histories and not to
forget their identities as Taiwanese.
Indeed, these Taiwanese songs reflect Taiwanese people's desire to construct their
own cultural identity in the process of decolonization in Taiwan. By constructing a
Taiwanese cultural identity, Taiwanese people also express their anger at and resistance to
the KMT government and the other colonizers throughout Taiwanese history. The way
people develop a Taiwanese identity is to depict their love for the land of Taiwan. On one
hand, this kind of linkage can reinforce Taiwanese people's local consciousness; on the
other hand, the link between land and identity explains the failure of the old ideology that
the KMT government emphasized and diffused for over 40 years. Because of Taiwan's
geography and special historical experience, the relationship between Taiwan and the
Chinese mainland is actually not as strong as the KMT government thought. While the
authorities devoted great attention to changing people's Japanese identity, Taiwanese people
were in fact confused in the process of reconstructing a new cultural identity because they
lacked sympathy for, and common memories of, the Chinese mainland. Moreover, the more
the authoritarian government oppressed them, the stronger the desire Taiwanese people had
to develop a Taiwanese cultural identity and local consciousness.
However, this kind of development is not entirely positive and healthy. On one hand,
Taiwanese people finally have the chance to face Taiwanese history and language squarely
and to show their love for the land where they live and grew up. On the other hand, the
development of the Taiwanese identity and local consciousness has come to represent a
newly-formed ideology. The two songs chosen in this paper are written and sung in so-called
"Taiwanese." The meaning of "Taiwanese" here is actually a popular dialect called "Min
Nan." Taiwanese society consists of different sub-cultural groups. Ethnically, people in
Taiwan can be separated into aborigines and Han people, with Han people being the
majority. Han people, depending on the regions their ancestors originally came from and the
time they immigrated to Taiwan, can be separated into several different sub-cultural groups:
Holo, Hakka, and non-native groups. Among these sub-cultural groups of Han, Holo is the
majority (over 70%), and the dialect they speak is Min Nan. According to Shih (1997), the
broad definition of being Taiwanese carries the meaning of resident Taiwanese; in other
words, people who identify themselves as Taiwanese can be called Taiwanese. However,
the current Taiwanese identity, culturally, only reflects the image of the Holo group, and the
language "Taiwanese" is the dialect of this group. Even though this new Taiwanese identity
has successfully resisted the old ideology of the Chinese identity, it represents a new
ideology and predominates over other sub-cultural groups in Taiwan.
In short, from analyzing these two Taiwanese songs, one can see how Taiwanese
people express their resistance to previous colonizers and how they construct (or
reconstruct) the Taiwanese cultural identity. Because of Taiwan's history and the political
resistance, the construction of the Taiwanese cultural identity and the development of
Taiwanese people's local consciousness mutually influence each other in the process of
decolonization. The 2000 presidential election can be treated as a victory over the old
authoritarian regime and its hegemony. As the new Taiwanese identity replaces the old
Chinese identity and is accepted by more and more people in today's Taiwan, it actually
starts a new round of diffusing hegemony. In the process of colonization and decolonization,
the constructions of Taiwanese people's cultural identities were led by political forces and
served political purposes. When the old identity was overthrown by Taiwanese people's
resistance, it should have given the society a good opportunity to review the history people
were forced to forget and to encourage integration among different sub-cultural groups in
Taiwan. However, the image of the majority group continues the route of the old ideology
because it cannot avoid manipulation by politicians. This kind of political exploitation has
given the
newly-formed Taiwanese identity a hegemonic meaning. What people can expect is
resistance from other minor sub-cultural groups in Taiwan, and it has already begun. Today,
when Taiwanese people say loudly and proudly that they are Taiwanese, they should
carefully consider one question: what does "Taiwanese" mean?
REFERENCES
Bocock, Robert. Hegemony. New York: Tavistock Publications and Ellis Horwood Limited,
1986.
Boggs, Carl. Gramsci's Marxism. London: Pluto Press, 1976.
Chiang, Kai-shek. Min Sheng Zhu Yi Yu Le Liang Pian Bu Shu [The supplements for the
chapters of education and amusements in the doctrine of People's livelihood]. Taipei,
Taiwan: Central Wen Wu Publication, 1953.
Femia, Joseph. V. Gramsci’s Political Thought. New York: Oxford University Press, 1987.
Frith, Simon. "Music and Identity." In Hall, Stuart, and Paul du Gay, eds., Questions of
Cultural Identity. London: Sage, 1997, 108-126.
Gitlin, Todd. The Whole World is Watching: Mass Media in the Making and Unmaking of
the New Left. Berkeley, CA: University of California Press, 1980.
Gramsci, Antonio. Selections from the Prison Notebooks of Antonio Gramsci. Translated by
Hoare, Quintin., and Geoffrey N. Smith. New York: International Publishers, 1971.
Li Ji Zhu Shu [The explanations of Li Ji]. Taipei, Taiwan: Yi Wen Publications, 1993.
Makay, John J. “Psychotherapy as a Rhetoric for Secular Grace.” Central States Speech
Journal, 31, 1980, 184-196.
Plato. The Republic, Book IV. Translated by Jowett, Benjamin. New York: Modern Library,
n.d.
Shih, C. F. "Taiwan Di Zu Qun Zheng Zhi" [The politics of sub-cultural groups in Taiwan].
Jian Shou Lun Tan Zhuan Kan [The journal of professors forum], 4, 1997, 73-108.
CHAPTER 22
PUBLIC RELATIONS
AND
CORPORATE COMMUNICATIONS
NEWSPAPER ENDORSEMENTS AND ELECTION RESULT HEADLINES IN
THE 2004 U.S. PRESIDENTIAL ELECTION
ABSTRACT
I. INTRODUCTION
On election night of the 2004 United States presidential election, newspaper headline
writers were faced with an electoral count that left the final decision up in the air as many of
them went to press that night and into the next day. As in the 2000 United States presidential
election, the final decision about who would win the election, Republican incumbent
president George W. Bush of Texas or Democratic candidate Senator John Kerry of
Massachusetts, rested on the voters of one state, this time Ohio. In what was widely
regarded as another highly polarized election year, and amid the ever-present cries of liberal
media bias among some pundits and conservative politicians, research about presidential
endorsements and election result headlines seemed particularly appropriate. If a liberal
media trend were to have shown itself, it might have been manifested in election result
headlines among papers that endorsed Kerry when the outcome was unclear, leaning toward
the Democratic candidate winning, declaring Kerry the winner, or calling the election too
close to call. Conversely, a conservative media trend might have shown up in election result
headlines among papers that endorsed Bush, leaning toward a Bush win or declaring Bush
the winner.
often affected by popular support of candidates (Schaefer, 1997). Research shows that
community leaders and newspaper editorial endorsements influence local and state election
outcomes (Lariscy, Tinkham, Edwards and Jones, 2004; Fedler, Smith and Counts, 1985;
McLenegham, 1983).
Voter demographics are locally and nationally consistent. Independent voters have
higher education and family income levels and tend to base voting decisions more on
newspaper editorials and endorsements (Smith, 1985; Hurd and Singletary, 1984).
Endorsements from group-owned and independently-owned newspapers are also consistent
with statewide averages for local and national elections (Rystrom, 1987). However, voters
rely more heavily on newspapers in local rather than national elections (Fedler, Smith and
Counts, 1985).
Research on the 1948 presidential election concluded that newspaper endorsements
led to an increase in voter turnout but affected voting only to a small degree (Counts, 1989).
A comparison of the 1948 and 1960 presidential elections found that editorial endorsements
and election outcomes supported previous findings that newspaper endorsements have little
influence on voting (Counts, 1989). Examining the 1976, 1980, and 1984 presidential
elections, Busterna and Hansen (1990) found that newspaper endorsements heavily favored
Republican candidates. Independent newspapers were more likely than chain newspapers to
favor the Republican candidate (Gaziano, 1989). Emig (1991) concluded that past
newspaper endorsements were predictors for 1988 presidential endorsements.
In the 1988 Bush-Dukakis presidential election, newspapers appealed to liberal and
conservative positions that were not predicted by the newspapers' endorsements (Boeyink,
1992/1993). Newspaper candidate endorsements were independent of the coverage of
candidates (Dalton, Beck, Huckfeldt and Koetzle, 1998). Candidate coverage was more
favorable early in the campaign for Democratic candidates, while Republicans received
more favorable coverage late in the campaign (Stempel and Windhauser, 1989).
Fedler, Counts and Stephens (1982) concluded that in the 1980 presidential election
voters were more likely to vote for the candidate endorsed by the newspaper published in the
city in which they lived. In that election, St. Dizier (1985) determined that endorsements had
a strong effect on voters' decisions, which tended to remain consistent throughout the
election. There was a shift of newspaper party endorsements from Republican Party to
Democratic Party candidates during the 1964, 1992, and 1996 presidential elections.
Devereaux (1999) concluded that the shift was due to two factors: first, organizational and
business support was apparent in the media; second, Democratic Party candidates developed
close relationships with media decision makers.
In the 2004 election, Kerry won the newspaper endorsement race, with 213
newspapers endorsing him for a total of 20,882,889 in daily circulation, while 205 endorsed
Bush for a total of 15,743,799 in daily circulation (Mitchell, 2004). Bush won the actual
election when final vote totals were tallied in Ohio.
III. HYPOTHESES
H1: Newspapers that endorsed Kerry will more frequently use election result
headlines leaning toward Kerry as the winner or declaring Kerry the winner of the 2004 U.S.
presidential election than will newspapers that endorsed Bush.
H2: Newspapers that endorsed Bush will more frequently use election result headlines
leaning toward Bush as the winner or declaring Bush the winner of the 2004 U.S. presidential
election than will newspapers that endorsed Kerry.
H3: Newspapers that endorsed Kerry will more frequently use election result
headlines concluding the result is too close to call in the 2004 U.S. presidential election than
will newspapers that endorsed Bush.
H4: Newspapers published in the northeastern U.S. region will more frequently use
election result headlines favoring Kerry rather than Bush.
H5: Newspapers published in the western U.S. region will more frequently use
election result headlines favoring Kerry rather than Bush.
H6: Newspapers published in the southeast U.S. region will more frequently use
election result headlines favoring Bush rather than Kerry.
H7: Newspapers published in the mid-western U.S. region will more frequently use
election result headlines favoring Bush rather than Kerry.
IV. METHODOLOGY
Researchers conducted a content analysis, using the LexisNexis database, of U.S.
newspaper articles published Nov. 3, 2004, the day after the 2004 United States presidential
election. The unit of analysis was any headline published concerning the pending outcome of
the 2004 presidential election. Independent variables were which candidate was endorsed
(Bush or Kerry) and the region in which the paper was published (Northeast, Southeast, West
or Mid-West). The dependent variable was headline outcome (leaning toward Bush win,
Bush wins, leaning toward Kerry win, Kerry wins, too close to call/winner uncertain). A
total of 36 metro daily newspapers from across the nation were included in the study; 19
newspapers endorsed Bush, and 17 endorsed Kerry. Two coders achieved 100 percent
agreement in an intercoder reliability test on all variables except headline outcome, on which
they achieved 90 percent agreement, after two rounds.
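Percent agreement of the kind reported above can be computed directly. The sketch below uses hypothetical coder labels for ten headlines, not the study's actual data:

```python
def percent_agreement(coder_a, coder_b):
    """Share of units on which two coders assigned the same category."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical headline-outcome codes (B = leaning Bush, BW = Bush wins,
# TC = too close to call, K = leaning Kerry) for ten headlines.
coder_1 = ["B", "B", "TC", "BW", "K", "B", "TC", "BW", "B", "TC"]
coder_2 = ["B", "B", "TC", "BW", "K", "B", "B",  "BW", "B", "TC"]

# The coders disagree on one of ten headlines, i.e. 90 percent agreement.
print(f"{percent_agreement(coder_1, coder_2):.0%}")  # 90%
```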
V. RESULTS
Table I Candidate Endorsed By Headline Tone
Candidate Leaning Bush Too Close Leaning
Endorsed to Bush Wins To Call To Kerry
Bush 26/ 60.5% 6/ 14.0% 8/ 18.6% 3/ 7.0%
Kerry 11/ 35.5% 10/ 32.3% 10/ 32.3% 0/ 0%
Totals 37/ 50% 16/ 21.6% 18/ 24.3% 3/ 4.1%
Note: N=74; Chi-Square = 8.58; df = 3; p < .05. No newspapers declared Kerry the winner.
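The chi-square statistic reported in Table I can be re-derived from the cell counts as a consistency check. The sketch below uses the counts implied by the table's row and column totals; it is pure Python, with no library dependencies:

```python
def chi_square(observed):
    """Pearson chi-square statistic for a two-way contingency table."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n  # under independence
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Headline tone by endorsed candidate (leaning Bush, Bush wins,
# too close to call, leaning Kerry), consistent with Table I's totals.
table = [
    [26, 6, 8, 3],    # papers that endorsed Bush (n = 43)
    [11, 10, 10, 0],  # papers that endorsed Kerry (n = 31)
]
df = (len(table) - 1) * (len(table[0]) - 1)
print(f"chi-square = {chi_square(table):.2f}, df = {df}")  # 8.58, df = 3
```

The result matches the reported Chi-Square of 8.58 with 3 degrees of freedom, which exceeds the .05 critical value of 7.815.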
H4, H5, H6 and H7 were designed to test regional influences on headline tone.
Results are seen in Table II. Categories were collapsed so that headlines favoring Bush
included headlines leaning toward Bush as the winner and headlines declaring Bush the
winner. Headlines favoring Kerry included headlines leaning toward Kerry as the winner and
headlines concluding the outcome of the election was too close to call.
H4 was not supported. Newspapers published in the northeast U.S. region did not
more frequently use election result headlines favoring Kerry rather than Bush, but just the
opposite. Sixty percent of the headlines favored Bush, while 40 percent favored Kerry. H5
was not supported. Newspapers published in the western U.S. region did not more frequently
favor Kerry. They favored Bush by 72.2 percent. H6 was supported at less than the .05
level. Newspapers published in the southeastern U.S. region favored Bush (60 percent) over
Kerry (40 percent). H7 was also supported at less than the .05 level. Bush received
overwhelmingly favorable headlines in newspapers published in the mid-western U.S. region,
where 100 percent of the headlines favored him.
Overall 71.6 percent of all headlines analyzed favored Bush, the incumbent president
and Republican candidate, while 28.4 percent favored Kerry, the Democratic candidate.
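The collapsing rule described above reproduces the overall split from the Table I column totals:

```python
# Column totals from Table I: leaning Bush, Bush wins, too close, leaning Kerry.
lean_bush, bush_wins, too_close, lean_kerry = 37, 16, 18, 3
n = lean_bush + bush_wins + too_close + lean_kerry  # 74 headlines

favor_bush = lean_bush + bush_wins       # headlines favoring Bush
favor_kerry = lean_kerry + too_close     # favoring Kerry, incl. too close to call
print(f"favoring Bush:  {favor_bush / n:.1%}")   # 71.6%
print(f"favoring Kerry: {favor_kerry / n:.1%}")  # 28.4%
```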
VI. CONCLUSION
Results pointed toward a conservative trend more so than a liberal trend. Newspapers
that endorsed Kerry did not more frequently use headlines that leaned toward him as the
winner; no newspapers declared Kerry the winner. On the other hand, newspapers that
endorsed Bush did more frequently lean toward him as the winner, but they did not more
frequently declare Bush the winner. Newspapers that endorsed Kerry more frequently
declared Bush the winner than did newspapers that endorsed Bush. This does not indicate a
liberal trend.
The only suggestion of any possible liberal trend was that newspapers endorsing
Kerry more frequently concluded the outcome was too close to call than did newspapers
endorsing Bush. This is weak support for Kerry, since these newspapers did not declare
Kerry the winner or lean toward him.
However, Bush won the headline race with ease: 50 percent of the headlines in
the sample leaned toward him as the winner and 21.6 percent declared him the winner.
Only 4.1 percent leaned toward Kerry as the winner and zero declared Kerry the winner. It
might be assumed that headlines characterizing the election as too close to call (24.3 percent)
could have been seen as favoring Kerry, but even including this weak support, only 28.4
percent of the headlines favored Kerry.
At a time when the outcome of the 2004 presidential election was inconclusive,
newspaper headlines appeared to trend toward the conservative candidate. An alternative
explanation might have been that headline writers may have made their headline decisions
based on their own conclusions about the probable election outcome. Some afternoon
newspapers, especially those published in the Pacific time zone, may have had more of an
indication about the outcome of the election before they went to press than did morning
newspapers.
REFERENCES
CHAPTER 23
QUALITY, PRODUCTIVITY AND MANUFACTURING
THE EFFECT OF AMBIGUOUS UNDERSTANDING OF PROBLEM
AND INSTRUCTIONS ON SERVICE QUALITY AND PRODUCTIVITY
ABSTRACT
Many services require the service provider and the service consumer to work together
to create the service product. As co-producers, the provider must understand the consumer's
problem, and the consumer must understand the procedures and instructions of service
production, in order to cooperate fully in producing the service product. This article proposes
to examine the effects on service quality and productivity of the provider's ambiguous
understanding of the consumer's problem and the consumer's ambiguous understanding of
the provider's procedures and instructions.
I. INTRODUCTION
Leonard L. Berry (1980) defines a service as "an act or performance offered by one
party to another. Although the process may be tied to a physical product, the performance is
transitory, often intangible in nature, and does not normally result in ownership of any of the
factors of production." Lovelock and Wirtz (2004) modify the definition: "a service is an
economic activity that creates value and provides benefits for customers at specific times and
places by bringing about a desired change in, or on behalf of, the recipient of the service."
Basically the above definitions differentiate products from services. Products provide benefit
to the consumers by endowing them with ownership of devices or physical objects whereas
the services provide benefits through action or performance.
Manufacturing processes and marketing concepts were developed in the past with the
manufacturing sector in mind. Therefore, those processes and concepts cannot simply be
transferred from the manufacturing sector to the service sector. Lovelock and Wirtz (2004)
list nine basic differences between products and services, cautioning against generalizing
these differences to all services. One of these differences, that customers may be involved
in the production process, is central to this paper.
Therefore, to produce a service product that solves the consumer's problem, the
service provider must understand the consumer's problem, and the consumer must understand
the provider's instructions, so that both can cooperate fully in the production. An ambiguous
understanding by either party will result in a substandard service product that does not
completely solve the consumer's problem. There is almost no research in this area; the gap
requires immediate attention, hence this paper.
Some researchers have studied service quality and productivity in the past.
Mary Jo Bitner, Bernard H. Booms, and Mary Stanfield Tetreault (1990) studied critical
incidents in airline, hotel and restaurant businesses, focusing on the experience of the service
consumers. Following this, Susan M. Keaveney (1995) studied critical incidents that led
service consumers to switch to competitors. In 1994 Mary Jo Bitner, Bernard H. Booms, and
Lois A. Mohr studied critical service incidents, this time from the employee's, or service
provider's, point of view. Jagdip Singh (2000) studied service productivity and quality;
Singh's work, too, focused only on the service provider.
In their study of critical incidents in the airline, hotel and restaurant businesses,
Bitner, Booms, and Tetreault (1990) asked a sample of customers to recall a service
encounter that was particularly satisfying or dissatisfying. The respondents were asked:
(1) When did the incident happen? (2) What specific circumstances led up to the situation?
(3) Exactly what did the employee say or do? (4) What resulted that made the consumer feel
the interaction was satisfying or dissatisfying? A total of 699 incidents were recorded, half
satisfying and half dissatisfying. The incidents were then categorized into three groups: (1)
employee response to service failures, (2) employee response to requests for customized
service, and (3) unprompted and unsolicited employee actions. The results show that
customers were about twice as likely to be dissatisfied when a service failed, about twice as
likely to be satisfied when employees met the consumer's need in a customized way, and
evenly split between satisfaction and dissatisfaction when unprompted or unsolicited
employee actions occurred.
When a service provider satisfactorily resolves a negative critical incident then it has
a great potential for enhancing consumer loyalty. By the same token, if not resolved, the
consumer loyalty will vanish and the customer will switch to the competitors. Susan M.
Keaveney (1995) studied 838 critical incidents that led customers to switch to competitors.
The reasons ascribed for switching were: (1) core service failures 44%, (2) dissatisfactory
encounters 34%, (3) unfair pricing 30%, (4) inconvenience 21%, and (5) poor response to
service failures 17%; the percentages exceed 100 because some switching decisions involved
more than one reason. Many respondents described a decision to switch to competitors as
resulting from interrelated incidents, such as a service failure followed by an unsatisfactory
response to resolving the problem.
V. CRITICAL SERVICE ENCOUNTERS – EMPLOYEES' POINT OF VIEW
Mary Jo Bitner, Bernard H. Booms, and Lois A. Mohr (1994) studied the service
providers’ viewpoint of the critical service encounters. In service settings, satisfaction is
often influenced by interactions between the service provider and the service consumer. In
this study, 774 critical service incidents reported by hotel, restaurant, and airline employees
were analyzed.
The research found that many service providers do have a true customer orientation and do
identify with and understand customer needs. They have a respect for customers and a desire
to deliver excellent service. According to the employees, the inability to provide excellent
service is caused by inadequate or poorly designed systems, poor or nonexistent recovery
strategies, or lack of knowledge. Further, the study found that customers can be the
source of their own dissatisfaction through inappropriate behavior or being unreasonably
demanding.
Jagdip Singh (2000) sought answers to the following questions: (1) What
mechanisms govern productivity and quality for service providers? (2) Does the tension of
competing demands from consumers and management have dysfunctional consequences?
(3) What resources help counter these dysfunctional effects? Singh found that service
providers’ productivity was unaffected by burnout tendencies but negatively impacted by
conflict between resources and demands and by role ambiguity relative to customers. He
believed that employees seek to maintain their productivity, even in the face of burnout,
because the relevant indicators are visible and relate to pay and job retention. By contrast,
the quality of service, which is less quantifiable and less visible, is likely to be damaged
directly as burned-out employees deal with service consumers. Singh found an unexpected negative
correlation between organizational commitments and service quality, indicating that service
providers who are more committed to the organization may be less committed to service
consumers, and vice versa. Providing greater task control and boss support helps to shield
service providers from role stress, burnout, and thoughts of quitting, while also enhancing
positive attitudes.
As shown above, some studies have looked at service encounters through the lens of
the service consumer and others through the lens of the service provider. But no study has so
far examined service encounters simultaneously from the viewpoints of both the service
consumer and the service provider. Further, no study has examined the effects of ambiguous
versus unambiguous understanding between the service consumer and the service provider.
Hence, in this paper we will study the effect of such understanding on service quality and
productivity.
The purpose of this study is to examine (1) the effect of the service provider's
unambiguous or ambiguous understanding of the consumer's problem and (2) the effect of
the service consumer's unambiguous or ambiguous understanding of the provider's
instructions and guidelines.
VIII. PROPOSITIONS
Data Collection: Data will be collected using the critical incident technique (CIT), a
systematic method for recording events and behaviors that are observed to lead to success or
failure on a specific task (Ronan and Latham, 1974). Here, success will be indicated by the
production of a quality service product that solves the consumer's problem, and failure by a
substandard service product that does not. Using the CIT, data will be collected through
open-ended questions and the responses will be content analyzed. In the first study,
respondents will be asked to report specific events from their time at the institution within
the past two to four years of their course work. In the second study, clients of a law firm will
be studied. As the respondents are asked about specific events rather than generalities,
interpretations, or conclusions, this procedure meets criteria established by Ericsson and
Simon (1980) for providing valuable, reliable information about cognitive processes.
Researchers have found that this method yields reliable results in determining success or
failure of the task in question (Ronan and Latham, 1974; Flanagan, 1954; White and Locke,
1981).
Figure I. Typology of Service Product Outcomes
STUDY 1:
The students of a southeastern university will be the service consumers,
and the teachers will be the service providers. The instructions to the students/teachers being
interviewed will be as follows: Think of a time when you or a fellow student/teacher had a
particularly satisfying (dissatisfying) interaction with a teacher/student at your school. Then
they will be asked the following questions:
1. When did the incident happen?
2. What specific circumstances led up to this situation?
3. Have you understood the need of the consumer/the instructions of the provider?
4. Exactly what did you or your fellow student/teacher say or do?
5. What resulted that made you feel the interaction was satisfying (dissatisfying)?
6. What should you or your fellow student/teacher have said or done?
STUDY 2:
The clients of a Midwestern law firm will be the service consumers and the attorneys
will be the service providers. The instructions to the clients/attorneys being interviewed will
be as follows: Think of a time when you or a fellow client/attorney had a particularly
satisfying (dissatisfying) interaction with an attorney/client at your law firm. Then they will
be asked the following questions:
1. When did the incident happen?
2. What specific circumstances led up to this situation?
3. Have you understood the need of the consumer/the instructions of the provider?
4. Exactly what did you or your fellow attorney/client say or do?
5. What resulted that made you feel the interaction was satisfying (dissatisfying)?
6. What should you or your fellow attorney/client have said or done?
IX. CONCLUSION
The incidents reported by service consumers will be classified under three major
groups of provider behaviors that account for all satisfactory and dissatisfactory incidents: (1)
provider response to service delivery system failure, (2) provider response to consumer needs
and requests, and (3) unprompted and unsolicited provider actions. The incidents reported by
service providers will be classified as follows: satisfactory: (1) extra work done, (2) model
behavior; dissatisfactory: (1) misbehavior, (2) verbal and/or physical abuse, (3) breaking
company/institutional policies, and (4) uncooperativeness. A comparison of provider and
consumer responses would provide us with the type of incident outcome. The outcomes will
be used to test the validity of the typology provided earlier.
REFERENCES
Berry, Leonard L. (1980), "Service Marketing is Different," Business (May – June).
Bitner, Mary Jo, Bernard H. Booms, and Lois A. Mohr (1994), "Critical Service Encounters:
The Employee's View," Journal of Marketing, 58 (October), 95 – 106.
Ericsson, K. Anders and Herbert A. Simon (1980), "Verbal Reports as Data," Psychological
Review, 87 (May), 215 – 250.
Lovelock, Christopher and Jochen Wirtz (2004), Services Marketing: People, Technology,
Strategy, 5th ed., Pearson Prentice Hall, Upper Saddle River, NJ.
Singh, Jagdip (2000), "Performance Productivity and Quality of Frontline Employees in
Service Organizations," Journal of Marketing, 64 (April), 15 – 24.
IT PROJECT MANAGEMENT AND SOFTWARE
EVALUATION AND QUALITY
ABSTRACT
Project management is an essential tool for migrating a project through the cradle-to-
grave process. Projects sometimes fail due to bad system design, or suffer manufacturing
defects due to bad process design or poor quality checks and inspection. A major factor in
project success, however, is time management.
I. INTRODUCTION
New and improved system and technology projects involve many elements which can
be very complex in nature. Technology is always changing and requires constant research to
stay ahead of the competition. Most often new technology requires moving into unfamiliar
territory. Compatibility is a huge issue, as most new systems projects must integrate with
existing systems and sometimes incompatible technologies. Sometimes it may be simpler to
drop the old system altogether and install a totally new one. Since technology changes so
rapidly, most projects have to deal with change mid-project, raising the question of whether
to stay with the original plan, change direction and incorporate the new technology, or scrap
the project and start over. A detailed analysis later in the paper explains how knowing the
technology can lead to a quicker decision. Affected business units,
information systems, and management all have to be accommodated for when implementing
a new system. IT projects are like investing in high-risk growth stocks: it is usually feast or
famine. IS projects can either yield a huge ROI or result in losses in the millions of dollars.
Technology projects are highly technology dependent and have a cumulative effect on other
projects. Employee understanding of the new technology is a very important issue, as some
employees may be set in their ways and reluctant to change, which can quickly slow down a
project. Systems projects eliminate slow, manual tasks.
They allow for quick data transfer, easier accessibility to data for qualified individuals, and
moving scattered data into one centralized place via data warehousing. Some of the trends in
systems are software package tools that help automate jobs and eliminate manual work.
Now that there is so much competition out there, tools can be made to help validate which
vendor to use when purchasing equipment or deciding who to outsource manufacturing to.
Software tools can help pull out the relevant data needed to make decisions as well as
monitor productivity. Tools can help investment reps dig up relevant mutual fund, stock, or
bond info. The challenge of the project is to design, test, debug, then redesign, retest, and
debug again, in a constant loop, until the prototype is finished from an engineering
standpoint. Throughout the process, market research, return on investment, and cost analysis
work must also be done.
This paper will explore failures and successes of projects, the process of cradle to grave
project completion, and the need for top management participation in strategy formulation.
II. STATISTICS
Project Management skills are very important in the workplace, more than most
people care to realize. The old way of doing projects involved only single, stand-alone
projects with minimal benefits and limited resources. Project-specific profits, as opposed to
company-wide profits, were the main emphasis. Companies in earlier days did not realize
that a project cannot be built in a day and simply released, only to fail.
The new and improved way of thinking as far as Project Management goes is to have more
integrated projects with shared resources. The expectations and benefits have been raised to
a higher level as well as the connection to processes. The business environment wants to get
the most out of all available resources, which means not only buying new resources but also
recognizing what is already in-house and getting the best use out of it. The most effective
project management provides greater benefits to the business because the purpose and scope
of the project are clearly defined to deliver tangible business benefits.
IV. THE PERTINENT QUESTIONS FOR IT PROJECT MANAGEMENT
Speed, service, cost and quality always seem to come up in conversations about how a
systems process can be improved using information technology. Reengineering is the
analysis, simplification, and redesign of a process. If companies choose not to implement
reengineering when new system planning is on the horizon, they most likely intend to keep
the existing system in place and modify it rather than drastically change it. Reengineering
reorganizes workflows, combining steps to cut repetitive, manual, paper-intensive tasks and move
toward automation. This is a very important tool in project management since time is a very
critical component of success. Time spent on more research oriented tasks is better than too
much time spent on busy work tasks that can be handled by automation. This also requires
much more than tweaking a lot of existing procedures. The reengineering process requires a
new vision of how the process is to be organized. An important step in reengineering is for
management to understand and measure the performance of existing processes as a baseline.
In other words, existing standards have to be understood and enforced in order for change to
take place.
Management has to make sure they fully understand the existing process from cover
to cover, including its positives and negatives, so the same mistakes will not be made twice.
By understanding the positives, those concepts can be carried over to the new
system. Another key to success is to not think of only what the company wants to achieve,
but think of what the customer wants to achieve. An important concept some companies miss
is the fact that they don’t factor in the customer when making the decisions on the next wave
of systems to implement. Customers are the ones spending the money to keep the company
afloat, so without keeping up with customer trends, companies cannot survive. In the case of
the bank SkandiaBanken, its CEO Goran Lenkel believes that the customer, not the top
executives, leads the company and its initiatives. The customer
may not know how to manipulate the technology to get the systems working, but they sure do
know what they want to do with the technology. Before any reengineering gets done, the
customer has to be heavily involved with the brainstorming stages via customer surveys and
constant follow-up and feedback with the existing customers.
The process can be improved or redesigned, but the paybacks and the risks have to be
carefully weighed. Studies have shown that about 70% of reengineering projects fail to
deliver the intended benefits. Common reasons include fear, anxiety and resistance. There will be instances where
people believe, “if it ain’t broke don’t fix it”. Some people don’t like change or getting out
of that “comfort zone” with the existing system. Some people feel that their job security is
threatened by the new implementation, so they will fight to keep the old systems in place.
Another issue is the amount of training that would have to take place to
get up to speed on the new system as well as general growing pains to any type of change that
takes place. Customer concerns are very important, however, employee concerns also have to
be correctly taken into account. All employees should be brought up to speed on what is
going on so their concerns can accurately be taken under consideration. If the employees
don’t like it or feel uncomfortable with it, they won’t support it which could slow down the
release of the project. Redesigning systems is an ongoing challenge, but if companies keep
doing things the same way, how will growth and continued success happen? Putting band-
aids on the sore spots does not solve the whole problem. This is why ROI has to be measured
accurately before the project even starts, to ease the transition into the project.
consideration, the system will add a possible 20% chance the product will not produce a
positive ROI. Depending on unique circumstances, many clients of this system will have
different tolerances on how much risk they are willing to take with the project, so a project
may have a 60% ROI with a 20% risk for negative return, green light goes to those who are
willing to take the 20% risk, red light to those who are not. The project manager has the
tough task here to ensure that the IT and finance groups are working together when
evaluating ROI. These two groups have totally different responsibilities; however, when it
comes to ROI for a project, they need to be firing on all cylinders. It also means
having an understanding of what the responsibilities of both functions are so decisions can be
made more clearly. If both sides are on the same page, the company will move twice as fast,
if not, project will be at a standstill.
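The green-light/red-light rule described above can be sketched as a simple threshold check. The figures, the `risk_tolerance` and `roi_hurdle` parameters, and the function name below are illustrative assumptions, not a model prescribed by the paper:

```python
def go_decision(projected_roi, downside_risk, risk_tolerance, roi_hurdle):
    """Green-light a project only if its projected ROI clears the hurdle
    and its chance of a negative return is within the client's tolerance."""
    return projected_roi >= roi_hurdle and downside_risk <= risk_tolerance

# A project with a 60% projected ROI and a 20% chance of negative return:
project = {"projected_roi": 0.60, "downside_risk": 0.20}

# A client willing to accept up to 20% downside risk gets a green light...
print(go_decision(**project, risk_tolerance=0.20, roi_hurdle=0.30))  # True
# ...while a more cautious client (10% tolerance) gets a red light.
print(go_decision(**project, risk_tolerance=0.10, roi_hurdle=0.30))  # False
```

The same projected ROI thus yields different decisions for clients with different risk tolerances, which is the point of the paragraph above.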
VII. CONCLUSION
A Project Office is a useful mechanism for helping achieve a measurable ROI as
business issues multiply and resources remain limited. The purpose is to evaluate, measure,
and essentially enforce the performance and interaction of IT processes across a company’s
business units. Key players and their roles for each project are known as well as who is
responsible for communicating and setting the policy regarding the project's performance.
It communicates to the heads of the business units involved as well as senior management
about ongoing project performance. A project office should have a specialist and an analyst.
The specialist is the person with the skills to oversee those that are charged with delivering
the projects. The analyst gathers, compiles, and reports project performance to senior
management. They also have to take into account the other projects in construction
elsewhere in the organization since no one IT project in an organization stands alone from the
rest of the IT projects. The Office can definitely draw out a clear picture of how the project
falls in line with company strategy and objectives. Also actual vs. projected ROI comes into
play with the Office. It is one thing to compile a projected ROI at the beginning of a project;
however, measuring actual ROI when the project is complete is just as important. Actual
ROI can be used as a measuring stick for future and similar projects.
REFERENCE
IMPROVING PRODUCTIVITY WITH ENTERPRISE RESOURCE PLANNING
ABSTRACT
The term productivity has an inherent meaning for most people. Many consider
productivity to be a measure of efficiency. In organizations, however, productivity should be
viewed from both an efficiency and an effectiveness (performance) point of view. Focusing on
efficiency alone can be harmful to the organization’s long-term success and competitiveness.
Over the years, corporations have adopted new technology to integrate business activities in
order to achieve both effectiveness and efficiency in their operations. In recent years, many
firms have invested in enterprise resource planning in order to integrate all business activities
into a uniform system. The implementation of enterprise resource planning enables the firm
to reduce transaction costs of the business and improve its productivity and profitability.
I. INTRODUCTION
In today's global business environment, managers are increasingly under pressure to
improve the financial performance and the profitability of their companies. One method of improving
profitability is to focus on a strategy that improves productivity by reducing costs of business
activities in the firm. To reduce costs and to improve productivity many organizations
consider the adoption of new technology. This approach is based on the assumption that the
company is operating as productively as possible with the existing technology, and that
therefore one way to improve efficiency is to upgrade or change the current technology. In recent
years, one such technology adopted by many companies is the enterprise resource planning
(ERP).
A review of the literature suggests that ERP systems are used by small, medium and
large corporations as well as government agencies and nonprofit organizations. In recent
years a growing stream of research has focused on the competitive advantage of ERP and
stresses the importance of considering the organization's business models and core
competencies when making decisions for or against ERP implementation (Lengnick-Hall et
al., 2004; Davenport, 1998; Prahalad and Krishnan, 1999; Holland and Light, 1999).
To successfully implement an ERP system and to avoid failure, the firm must conduct
a careful preliminary analysis and develop a plan for ERP acquisition and implementation.
The most important success factors for ERP implementation include top management
support, effective project management, extensive user training, and viewing ERP as a
business solution. Factors such as inadequate technology planning, user involvement and
training, budget and schedule overruns, and availability of adequate skills are considered
reasons for ERP failures (Sumner, 2000; Umble and Umble, 2002; Wright and Wright, 2002).
ERP systems are very expensive to implement. The costs of an ERP system come in
various forms and include the software, hardware and network investments and often
consulting costs. These costs vary from one company to another and depend on the degree of
system integration and applications desired by the firm: the larger the company and the more
advanced and complicated the ERP system, the greater the cost.
The cost of the ERP package varies among vendors, type of package, and degree of
customization. Some companies require more customized versions of ERP systems than
others. The more customization a company desires from an ERP system, the greater the cost.
Changes in the operation environment, implementation costs, and integration costs are also
among the fixed costs of an ERP system.
Company time, during which other functions could be performed, is an opportunity
cost of ERP systems. Along with the amount of time that it takes to train employees and keep
them up-to-date with the newest innovations within their present system, it takes a lot of time
to determine which system to choose, implement the system and keep it running. It usually
takes at least a year to implement an ERP system. In some instances it can be shorter than
that, possibly in smaller companies, however that is extremely rare. In many cases it takes
much longer than a year for full implementation of ERP, sometimes as much as 2 to 3 years
(Leitch, 2002).
ERP systems can provide an organization with many benefits. These benefits should
outweigh the costs of the system, provided the correct system is chosen for the organization
and implemented properly. In the long run these systems can save millions of dollars,
improve the quality of information, and increase workers’ productivity by reducing the time
needed to do a job. ERP systems can virtually eliminate the redundancies that arise from the
outdated, separate systems that may be present in each department of an organization.
One benefit of ERP systems is that information has to be entered into the system only
once. Various employees can access data simultaneously in ERP systems, whereas in
outdated, separate legacy systems such shared access was rarely possible. Because ERP
systems integrate departments, personnel from the finance department and from the Human
Resources department, for example, can both obtain information about the same customer.
A successful ERP system will provide real-time and up-to-date information to all of a
company’s decision makers, from executives to front-line employees. Customer service is
improved by the rapid release of information. Labor, production and inventory costs are
reduced by having timely information.
Unfortunately, determining the benefits of an ERP system is much harder than
determining its costs. Although a company cannot estimate the cost of a system precisely,
measuring the benefits is more difficult still. Often companies cannot determine the
monetary benefits and cost savings that an ERP solution offers until several years after its
implementation.
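Because costs are front-loaded while benefits accrue gradually, a simple payback calculation can make this trade-off concrete. The sketch below is illustrative only; the figures are invented for the example and are not drawn from any case discussed in this paper.

```python
# Illustrative ERP payback calculation with hypothetical figures.
# Benefits often ramp up over several years after go-live, which is
# why payback may not occur until well after implementation.

def payback_year(upfront_cost, annual_net_benefits):
    """Return the first year in which cumulative net benefits cover the
    upfront cost, or None if they never do within the given horizon."""
    cumulative = 0.0
    for year, benefit in enumerate(annual_net_benefits, start=1):
        cumulative += benefit
        if cumulative >= upfront_cost:
            return year
    return None

# Example: a $5M implementation whose net benefits ramp up after go-live.
years = payback_year(5_000_000, [500_000, 1_500_000, 2_000_000, 2_500_000])
# cumulative: 0.5M, 2.0M, 4.0M, 6.5M -> payback in year 4
```

A ramping benefit stream like this reflects the paper’s observation that monetary benefits may not be measurable until several years after implementation.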
Strategic evaluation requires the firm to take a careful look at its existing legacy
systems. Legacy systems encompass the existing business processes, organizational structure,
culture, and information technology. Thus, they determine the organizational change required
to successfully implement an ERP system. For example, if the existing legacy systems are
complex, with multiple technology platforms and a variety of procedures to manage business
processes, then the amount of technical and organizational change required is high. If the
organization already has common business processes and simple technical architecture,
change requirements should be low.
The organization’s readiness for change influences the ERP strategy to a great extent.
Based on that, different approaches can be adopted. For example, the company can
implement a skeleton version of a software package initially, and then gradually add extra
functionality once the system is operating and the users are familiar with it. The main
advantages of this approach are speed and simplicity. By adopting a skeleton approach,
implementation of an ERP system across multiple sites can be achieved in a much shorter
timeframe. A more ambitious approach is to implement a system with complete functionality
in a single effort. This approach requires less time for the entire system to be integrated;
however, it demands a higher level of expertise and stronger organizational commitment, and
the risk of failure is greater.
Apart from strategic issues, many tactical factors such as vendor selection,
outsourcing, hiring of consultants and personnel should be considered as well. Companies
should consider vendor selection by determining the appropriate functions that are desired of
the system first. If a vendor offers a solution that is close to what the company needs, that
vendor should be selected to reduce the need for customization. If a single vendor cannot
satisfy all of the company’s needs, a multi-vendor solution can be considered, though its
implementation is more complex.
Instead of buying, some companies can lease ERP modules or suites, renting only the
modules they need. Leased software usually is accessed over the Internet rather than installed
on the company’s hardware. With this option, the ERP vendor assumes responsibility for
maintaining and upgrading the system. However, this approach requires considerable trust,
since the leasing company places all of its data in the hands of the online software vendor.
Often companies hire outside consultants to assist them with ERP system selection
and implementation. Many consultants have considerable experience in specific industries or
comprehensive knowledge about certain software. Thus, they are often better able to
determine which ERP module will work best for a given company. In selecting consultants,
companies should look carefully at the consultant’s resume and inquire about that
consultant’s financial ties to the software vendor to ensure objectivity.
V. CONCLUSIONS
ERP systems offer significant benefits for today’s businesses; they are based
on a value-chain view of the business in which functional departments coordinate their work.
Implementation forces the firm to standardize and integrate its processes across the
enterprise and to eliminate or re-engineer redundant or non-value-added processes, which
leads in the long run to significant reductions in operational costs and increases in
productivity.
REFERENCES
Brickley, P. “Defunct outfit firm blames IT firm; in liquidation, firm alleges Andersen
Consulting added to its troubles.” Philadelphia Business Journal, 17, (40), 1998, 3-4.
Davenport, T. “Putting the enterprise into the enterprise system.” Harvard
Business Review, 76, (4), 1998, 121-131.
Ferrando, T. “ERP systems help with integration.” American City & County, 115, (11), 2000,
12.
Holland, C. P., and Light, B. “A critical success factors model for ERP Implementation.”
IEEE Software, 16, (3), 1999, 30-35.
Leitch, J. “The cost of cutting costs.” Contract Journal, March 6 2002, 8-10.
Lengnick-Hall, C., Lengnick-Hall, M. and Abdinnour-Helm, S. “The role of social and
intellectual capital in achieving competitive advantage through enterprise resource
planning (ERP) systems.” Journal of Engineering and Technology Management, 21,
(4), 2004, 307-330.
Prahalad, C.K. & Krishnan, M.S. “The new meaning of quality in the information
Age.” Harvard Business Review, 77, (5), 1999, 109-118.
Sumner, M. “Risk factors in enterprise-wide/ERP projects.” Journal of Information
Technology, 15, (4), 2000, 317-327.
Umble, E.J., and Umble, M.M. “Avoiding ERP implementation failure.” Industrial
Management, 44, (1), 2002, 25-33.
Wright, S. and Wright, A. M. “Information systems assurance for enterprise resource
planning system: implementation and unique risk considerations.” Journal of
Information Systems, 16, (supplemental), 2002, 99-113.
CHAPTER 24
SPIRITUALITY IN ORGANIZATIONS
REFLECTIONS ON ISLAM AND GLOBALIZATION IN
SUB SAHARA AFRICA
ABSTRACT
The present paper is one of several overviews intended to introduce a larger study
of economic and developmental problems in sub Sahara Africa. It is concerned with
how religion functions with respect to the integration of underdeveloped nations into the
global economy. In Sub Sahara Africa there are three major religious groups: native
religions, Islam, and Christianity. This paper shows how nations in the region where Islam is
strong can use that religion in a positive manner to strengthen global linkages. Beyond that,
the message seems to be that those concerned with global linkages must regard existing
religions as givens as they pursue their objectives.
I. INTRODUCTION
The recent meetings of the Group of Eight (G8) in Scotland focused some attention upon
one of the poorest regions of the world, Sub Sahara Africa. As a result of those meetings
Tony Blair, the Prime Minister of Great Britain, was partially successful in getting at least a
portion of the monies owed to G8 nations by governments in the region forgiven. Even this
partial success will be somewhat helpful to the governments concerned. Whether or not
it raises positive possibilities for more assistance for the region remains to be seen.
Since the breakup of European empires in the period following World War II, little
positive attention from what is sometimes characterized as the Western World has been
evident in the region. The result is that at this writing (2005) many of the nations of
the region are listed amongst the poorest in the world. Evidence of that can be seen in the
Human Development Report issued by the United Nations in 2005, in which no Sub Saharan
mainland nation was listed in the top 100 as measured by the Human Development Index
(134-142).
A closer look at the component factors in the index adds weight to the issue. In Sub
Sahara Africa life expectancy stood at 46.3 years as compared to 78.3 years in high income
nations. The literacy rate for those aged 15 and above in 2002 stood at 63.2 per cent in the
region. There are various reasons why the nations of the region are doing poorly compared to
other parts of the globe. European colonial powers left behind them former colonies ill-
equipped to govern themselves. Beyond that, in many cases European settlers retained the
best land and European business interests remained in control of valuable resource exports.
Beyond such matters, foreign business interests were slow to invest in facilities in nations
controlled by questionable regimes.
II. THE IMPORTANCE OF EXTERNAL LINKAGES
There is much that could be discussed concerning the poor conditions prevalent in Sub
Sahara Africa. The current discussion will be aimed at how a better linkage to globalization
and the global economy may improve economic prospects for the jurisdictions concerned and
some specifics on how that can be accomplished.
To begin with, the diversity of African cultures, belief systems, and languages makes
foreign linkages rather difficult. Beyond that, diversity amongst former colonial powers
regarding languages, governance measures, and imposed cultural and religious imperatives
seems to have been of questionable assistance. Indeed, the former colonial regimes
established geographical boundaries that separated tribal groupings, extended families, and
other functional groupings. Thus many current nations in the region appear to be based upon
artificial boundaries. Any prospective global business aspirations must contend with many
or all of these regional peculiarities.
Thus, from the point of view of individual nations in the region, global integration and/or
linkages must appear daunting. Indeed, in many cases natural developmental ambitions may
be thwarted by religious and cultural differences between residents. Sub Sahara Africa,
divided as it was by European powers, is having some difficulty in developing the national
identities required of functional nations. It is the position of the current investigators that
these national identities can be fostered to some extent through the development of sound
business climates and international business linkages in the nations concerned. Such
elements must be mastered without incursions into existing cultures and religious mores.
This may appear difficult if not impossible. Hopefully the current discussion will begin to
lay a foundation for how this task can be accomplished.
Africa has long been a place in which various religions have flourished. Today its
population espouses three religious traditions which enjoy the adherence of the bulk of the
population. The first of these traditions covers the native religions which predated foreign
influence and are still popular today. The second is Islam, the first foreign religious tradition
to gain a foothold in the region. The third is Christianity, embracing a number of subsets.
Of course these traditions are only the most visible and hardly imply the nonexistence of
other belief systems.
The present investigators are of the opinion that the successful linkage of the nations of
the region to the global economy should not represent the eclipse of religious and cultural
traditions. Indeed success can only come if local populations are encouraged to retain their
customs and beliefs. In general it should be possible to bring that about.
The Islamic faith has had a significant impact in the region under discussion here. For
example, in Somalia virtually the entire population is Muslim, and in Mali roughly 90 per
cent (Central Intelligence Agency, 2004). The same source identifies the Comoros as 98 per
cent Muslim and The Gambia as 90 per cent, while Djibouti and Senegal stand at roughly 94
per cent. In Sub Sahara Africa nearly 245 million people were identified as Muslim.
Clearly such numbers suggest that the Islamic faith is a force to be reckoned with in Sub
Sahara Africa. If the region’s material status is to be improved with respect to the global
economy, this can only be accomplished if Islam sustains a position of respect both
religiously and culturally.
One problem that most of Sub Sahara Africa shares with many nations that were
formerly classified as members of the Third World is the lack of employment opportunities
for young people. This problem causes unrest and violence in poor nations. The frustrations
of young populations can quickly translate into anarchy and even revolution. In today’s
world this issue has become very significant among young Islamic populations. Such
populations can become recruiting grounds for international terror groups.
The present authors see nothing in this which justifies labeling Islam as a violent or
anti-Western religion. Anyone modestly knowledgeable about Islam would dismiss such
labels as an oversimplification.
Any disenfranchised population of young adults is a fertile field for trouble. The youth
of Sub Sahara Africa are no exception. In jurisdictions where Islam is significant, solving
the problem of youth unemployment would go a long way toward defusing youth-related
unrest. A closer alignment with the global economy, for instance, should create employment
opportunities and help dry up the pools of disenfranchised youth.
The youth who are employed have a more hopeful future and are less susceptible to
international adventures of any sort. In the current context the question becomes whether or
not Islam can contribute to forming and supporting an economy largely dependent upon
globalization. The current authors assume that it can. Of course such a view requires an
answer to the question of how.
In an ideological paper David McCormack suggests that Sub Sahara Africa is facing an
infusion of recruits for radical Islam (2005). Of course disenfranchised young people in any
poor country represent a pool to be recruited for various questionable transnational causes,
not to mention domestic groups whether criminal or ideological.
The present authors are of the opinion that Islamic nations where that religion has
successfully established a closer relationship with global interests are in a strong position
vis-à-vis material betterment. This can be seen among various Middle Eastern nations
(McKee, Garner and McKee, 1999).
The general case for cooperation between Islam and the global economy has been
presented more recently (McKee, McKee and Garner, 2004, 2005). Specifically the coming
together of Islam and global interests was seen as benefiting from the actions of
multinational business service firms. Such firms are capable of facilitating and linking
national interests with those operating internationally.
groups are composed of foreign experts familiar with the machinations of the global economy
and foreign interests generally and local professionals familiar with both the peculiarities of
the local economy and Islamic traditions. Such a mix of service personnel should be able to
assist business with both international and domestic needs. In the practical world there seems
to be an overlapping of interests which should provide a successful foundation for
businesses, both foreign and domestic, with no need to compromise local beliefs and
traditions.
Within Islam there exist practices and traditions which should actually be conducive to
successful business transactions within a global system of markets.
Obvious among such practices are the well-known Islamic prohibition of interest and the
practice of almsgiving known as the zakat.
In the case of interest, it is known that there are financial institutions in Islamic nations
where interest charges are accepted. Such an accommodation is neither necessary nor well
thought of by religious Muslims, and it hardly seems advantageous if the nations concerned
are seeking a stable relationship with the global economy. In the eyes of the current authors
the prohibition of interest may be a positive element vis-à-vis the recruitment of modern
business for the global economy. Venture capitalists in search of positive investment
opportunities may be more easily recruited in the absence of interest opportunities. Certainly
investment for profit is an excellent hedge against inflation.
The zakat is one of the five pillars of Islam and certainly central in the Muslim belief
system. Under the zakat each Muslim is required to provide 2.5 per cent of his or her wealth
to the poor on an annual basis. There is little need to dwell upon how such persons are
impacted by inflation. Together with the prohibition of interest the zakat is a powerful
argument for Muslim participation in free market activities.
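The zakat obligation described above is straightforward arithmetic: 2.5 per cent of qualifying wealth, assessed annually. The sketch below is deliberately simplified for illustration; in practice the nisab (the minimum threshold of wealth before zakat is due) is defined in terms of gold or silver, and the rules for which assets qualify are more involved.

```python
# A minimal, simplified sketch of the annual zakat calculation:
# 2.5 per cent of wealth, owed only if wealth meets the nisab threshold.
# The nisab value is left as a caller-supplied parameter because real
# thresholds are set in terms of gold/silver, not a fixed currency amount.

ZAKAT_RATE = 0.025  # 2.5 per cent, one of the five pillars of Islam

def annual_zakat(wealth, nisab=0.0):
    """Return the zakat due on the given wealth for one year."""
    return wealth * ZAKAT_RATE if wealth >= nisab else 0.0
```

For example, a person holding 10,000 units of qualifying wealth would owe 250 units for the year.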
If such participation causes the domestic economies of Islamic nations to expand, the
creation of employment opportunities appears obvious. This, of course, should reduce the
cadres of unemployed youth referred to in some circles.
IV. CONCLUSION
Christianity, with its myriad subsets, may appear even more problematic vis-à-vis
globalization, since Christianity has been a frequently vocal critic of materialism.
Nonetheless, Christianity has long been known to be compatible with the free enterprise
system (Worland, 1967; Gutierrez, 1973). If religions such as Islam and Christianity are
given their due, they should be found to be quite compatible with globalization. Indeed,
international business interests should treat the regions where such belief systems prevail
with respect as potential sites for their operations. In the same vein, traditional religious
interests must be treated respectfully as accomplished facts by those seeking to do business
in Sub Sahara Africa.
REFERENCES
Buckley, Peter J., and Jeremy Clegg, (eds.), (1991) Multinational Enterprises in Less
Developed Countries. New York: St. Martins Press.
Central Intelligence Agency (2004), The World Factbook.
Gutierrez, Gustavo (1973), A Theology of Liberation. Maryknoll, New York: Orbis Books.
Kuran, Timur (2004), “Why the Middle East is Economically Underdeveloped: Historical
Mechanism of Institutional Stagnation”, Journal of Economic Perspectives,
Volume 18, Number 3, 71-90.
McCormack, David (2005), An African Vortex: Islamism in Sub-Saharan Africa. Washington
DC: Center for Security Policy, Occasional Paper Number 4.
McKee, David L., Don E. Garner, and Yosra A. McKee (1999), Accounting Services, The
Islamic Middle East and the Global Economy. Westport CT, Quorum Books.
McKee, David L., Yosra A. McKee and Don E. Garner (2002), “Multinational Consultants
as Contributors to Business Education and Economic Sophistication in Emerging
Markets”, in Alon, Ilan and John R. McIntyre (eds.), Business Education and
Emerging Market Economies, pp. 15-26. New York: Kluwer Academic Publishers.
KARMA-YOGA AND ITS IMPLICATIONS FOR
MANAGEMENT THOUGHT AND INSTITUTIONAL REFORM
ABSTRACT
I. INTRODUCTION
The Karma-Yoga philosophy is most strongly advocated in the Bhagavad Gita, one of
Hinduism’s principal scriptures, which unfolds as a discourse on the battlefield between
Arjuna, a warrior-prince, and Krishna, an incarnation of God (and Arjuna’s charioteer in the
impending battle). Arjuna, a great warrior but suddenly reluctant on the eve of battle, desires
to flee the battlefield and return to a life of exile and mendicancy in the forest. Krishna
counsels Arjuna that his reluctance results from cowardice, not compassion for relatives in
the opposing ranks, and that a Ksatriya pursues spiritual progress through acquitting his duty
in the proper fashion:
Looking at thine own Dharma, also, thou oughtest not to waver, for there is nothing higher
for a Ksatriya than a righteous war. Fortunate certainly are the Ksatriyas, O son of Prtha,
who are called to fight in such a battle that comes unsought as an open gate to heaven.
(Swarupananda 1996, pages 31-32)
The relative efficacy of an active or contemplative life in the pursuit of spiritual
progress has been a perennial subject of debate within Hindu society. The Bhagavad Gita
addresses the debate by asserting that work and activity are unavoidable in human life, and
that the spirit in which this work is carried out can either enmesh the individual by forming
new karma (good or bad), or liberate the individual by working out the effects of old karma
and avoiding the accumulation of new karmas. The principal elements involved in
transforming one’s karma into a form of yoga include non-expectation of rewards, emotional
detachment from context and consequences of the work, full concentration of mind on the
work itself, regarding the work and associated rewards as an offering to God, and seeing
agency in the Divine while regarding oneself as an instrument. According to the Gita, work
actuated by desire for rewards ultimately produces misery:
Work with desire is verily far inferior to that performed with mind undisturbed by
thoughts of results…The wise, possessed of this evenness of mind, abandoning the fruits of
their actions, freed for ever from the fetters of birth, go to that state which is beyond all evil.
(Swarupananda 1996, pages 49-51)
Despite the attitude towards rewards and attachment contained in Karma-Yoga, work
is not to be carried out in a shoddy, indifferent, haphazard manner. Work and associated
rewards are to be considered as offerings to God, and thus should be carried out with a
dedication bolstered by a detachment from self-consideration:
By endowing karma with a spiritual outlook Karma-yoga teaches the worker efficiency in
work. He is not to consider the kind of work as much as the spirit in which the work is done.
The humblest work done earnestly with the right attitude produces the greatest good. Having
no ulterior motive, a Karma-yogi can devote his whole attention to the work itself. He is
unconcerned with the pleasures and pains entailed in the work…A Karma-yogi does his work
neither mechanically nor in a mood of abstraction but with full attention and devotion
(Satprakashananda 1977, page 221).
Given that the core of Karma-yoga lies in detachment, how does it deal with the
ethics of work? Does it discriminate between good and bad works? Karma-yoga is not the
same as good works, if such works are performed in a spirit of attachment and a desire for
rewards. The result of such works is the production of good karmas and samskaras, a form of
payment that is eventually exhausted. However, the moral development that emerges from the
practice of good works, combined with the disillusionment that arises from attachment to
such works, may lead to the path of Karma-yoga. Though unattached, the Karma-yogi does
discriminate between works of divergent ethical qualities:
(the Karma-yogi) knows that certain acts are right and promote happiness, while certain acts
are wrong and bring about misery. He finds the cause of bondage in both. But he is not blind
to their relative worth. As he has to work, he must choose the right work. There is another
reason why a Karma-yogi cannot work indiscriminately. It is selfishness that impels a man to
wrong deeds. No one will harm others unselfishly. A Karma-yogi, having no selfish motive,
cannot do misdeeds. His very nature directs him to the right path (Satprakashananda 1977,
page 224).
the importance of Calvinist doctrine in promoting the ‘worldly asceticism’ that enabled
Western Europe to achieve the institutional breakthroughs resulting in modern capitalism.
Likewise, Karma-yoga philosophy would require a parallel sort of contemporary interpretation to
influence management thought and practices in the United States. Covey’s Effectiveness
movement has managed to provide Christianity-derived ideas that resonate with a broad
American public, while speaking the secular language of business and not requiring secular
audiences to accept religious doctrine. Similarly, Karma-yoga has been
interpreted in the modern age (by Vivekananda, for example) as being a heuristic,
experimental technique not requiring the acceptance of religious doctrine or self-conscious
religious avocation. However, the diffusion and influence of Karma-yoga based management
ideas would be facilitated by a potential audience for whom ideas from Indian and Eastern
philosophy resonate. Religious movements derived from eastern philosophies have a
significant presence in the United States, not least among the burgeoning immigrant
communities from Asia. Would the Karma-yoga philosophy hinder or facilitate the
integration of an organization? Complex organizations have been seen as having to
overcome basic problems of integration and cooperation, variously labeled as ‘collective
action’, ‘free-rider’, and ‘obedience to authority’ problems. The basic problem of integrating
complex organizations has been dealt with by solutions as varied as developing strong
organizational cultures and devising reward systems in accordance with principal agent
theory. The Karma-yogi would not be a self-interested agent, a free-rider, or a loyalist who
regards the organization as his very own. At the same time, there is a conservatism in
Karma-yoga, which encourages one to be tolerant of circumstances and not to be contentious
and rebellious. This conservatism of the Bhagavad Gita’s message has been criticized by
some Indian leftists as reinforcing the injustices of the caste system (Dirks 2001).
imperial system, more than public service in a democracy. In fact, many government
structures in India represent vestiges and legacies not only of the British Raj but also of the
Mughal Imperium that preceded it, amounting to over four centuries of imperial rule.
Governance suffers as a result of this chasm between government officials and the people.
Administrative reforms in India, as in other countries, surely would involve such steps as
opening up human resources and staffing practices beyond the monopoly of an elite
civil service cadre, but an even more fundamental task is to transform the ideological and
cultural foundations of government service. Swami Ranganathananda, a leading spokesman
of the Ramakrishna Mission, attempted to rethink the foundations of democratic
administration in India in his lectures to public administrators, collected and published
under the title Democratic Administration in the Light of Practical Vedanta (1996). The
Karma-yoga philosophy is likely to serve as a pillar in the ideological rethinking of public
service in India.
REFERENCES
CHAPTER 25
SPORT MARKETING
GOOD GAME, GOOD GAME: APPLYING SERVQUAL TO AND ASSESSING AN
NFL CONCESSION’S SERVICE QUALITY
ABSTRACT
Assessing quality has been vital for conventional organizations for years, and sport
organizations have now begun to focus on it as well. Rising ticket prices for fans,
skyrocketing team costs for owners, and increasing competition from other entertainment
entities make quality control central to many stakeholders. The most widely accepted
technique for assessing quality, SERVQUAL, is discussed and then applied to an NFL team’s
concession experience. The results provide initial averages of how fans rate the key
dimensions of service. Results are reported and conclusions and recommendations are drawn.
I. INTRODUCTION
Every year millions of fans flock to their favorite sporting events, and the way they
assess the quality of their game day experience is becoming increasingly important to venue
managers, fans, and concession vendors. As ticket prices continue to increase, so do fan
expectations. Apart from the event itself, which is outside the marketer’s control, the
concession experience is one of the most influential elements affecting the fan’s experience.
Customers expect first-class service and selection to match premium ticket prices. The
average fan in regular seats will spend nearly $20 at each NFL event on standard concessions
(Team Marketing Report, 2005). In short, it has become apparent that a regular hotdog is not
going to do the trick (Buzalka, 2000). Food vending is an enormous business within sport:
an estimated $9 billion is spent on foodservices at sporting events annually (King, 2004),
$2 billion of it from the NFL’s suite and club seating alone (Cameron, 2004), a relatively
new revenue stream. Suite holders are typically charged between $145 and $250 per person.
Moreover, the venue and concession service provider has only limited opportunities to
establish a relationship of high quality exchanges because of the nature of sport. “You only
have 10 games to make an impression on your guests,” says Hans Williamson, president of
the sports and entertainment group for Levy Restaurants (Cameron, 2004). Professional
sporting events are also becoming increasingly costly for owners as the expenses of the game
(e.g., player salaries, equipment, maintenance, and new venues) continue to escalate. For
sport managers, increasing the value of the game day experience is a primary concern and
critical for the organization's survival. For team marketing professionals, understanding the
variables that affect service quality perception are a key input into resource allocation and
strategic marketing decisions. Moreover, for the providers to whom service creation has been
outsourced, it is vital to continually improve the service because their business
customer (the team or venue) demands it and has the luxury of seeking contracts with other
providers if service quality isn’t good enough. The number of qualified vendors capable of
serving at major venues has intensified the competition for stadium and arena contracts. If a
vendor fails to satisfy the team’s fans with valued food experiences, then the team can readily
choose another food service provider. Good suppliers must provide outstanding service. In
this sense then, service quality is important to the fan as a valued part of the game day
experience, to the team as an important attribute of the total sport product sold to the fan, and
to the outsourced supplier as a business-to-business differentiation tool. This paper will
discuss service quality perception as it applies to an NFL team’s concession experienced by
fans. This will be done by using the RATER model of service quality. Following the
literature review of sport service quality assessment, we report the execution of an empirical
study where the dimensions of service are explores and assessed. The paper concludes with a
presentation of the results and discussion. Implications are suggested.
As in other service industries, in the sport industry it is not enough to produce
adequate service encounters; it is crucial for a company to hire, train, and motivate
employees to consistently provide quality service. To do that, a company must listen to
what exceptional service means to customers and incorporate that feedback into the
company’s vision and training programs. In a service setting, customer perceptions of
service quality are the measures used. The accepted way to measure customer perceptions is
to use the SERVQUAL model to identify and understand customer expectations.
SERVQUAL is a
service quality assessment tool that was created by Parasuraman, Zeithaml and Berry (1988)
to measure how customers perceive the quality of service being provided. It has been shown
that consumers tend to use the same basic criteria no matter what type of service is being
provided (Parasuraman, 1988). The original SERVQUAL model contains 22 questions that
measure the expectations consumers have about service quality and the perceptions of what is
actually delivered during their experience. These 22 questions are broken down into 5
dimensions, easily remembered by the acronym RATER, on which service quality is
assessed: Reliability, Assurance, Tangibles, Empathy, and Responsiveness. By using a Likert
scale ranging from “strongly disagree” to “strongly
agree”, the customers’ perceptions can be gauged by asking the service’s customers items
related to the five dimensions (Hudson, 2004). Each of these dimensions is discussed below
as it applies to sport.
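The dimension-level scoring described above amounts to averaging each respondent's Likert answers within each dimension. A minimal Python sketch of that idea follows; the item labels and responses are hypothetical, not the study's data:

```python
# Average Likert responses (1-5) into RATER dimension scores.
# item_dims assigns each survey item to one of the five dimensions.

def dimension_means(responses, item_dims):
    """responses: list of per-respondent answer lists; item_dims: dimension per item."""
    totals, counts = {}, {}
    for answers in responses:
        for dim, score in zip(item_dims, answers):
            totals[dim] = totals.get(dim, 0) + score
            counts[dim] = counts.get(dim, 0) + 1
    return {dim: totals[dim] / counts[dim] for dim in totals}

# Hypothetical example: two respondents, two Reliability items, one Tangibles item.
item_dims = ["Reliability", "Reliability", "Tangibles"]
responses = [[5, 5, 3], [4, 4, 1]]
print(dimension_means(responses, item_dims))  # {'Reliability': 4.5, 'Tangibles': 2.0}
```

The same grouping logic extends directly to the full 17-item, five-dimension instrument.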
Reliability is the service quality dimension that measures the ability to perform the
service dependably and accurately (Parasuraman et al., 1988). It has been called the most
important dimension. When an employee is trained for a specific job, the training includes
making sure that customer satisfaction is the top priority: the proper way to greet a customer,
how to provide helpful information, and how to accurately address questions the customer
may have (Czaplewski, 2002). For the purpose of rating reliability in the NFL venue
environment, questions were adapted for this dimension (see Figure I).
Assurance is the service quality dimension that
measures the knowledge and courtesy of employees and their ability to convey trust and
confidence (Parasuraman et al., 1988). Training employees to perform their job functions
should also include a skill set that empowers them to make the right decisions. This not only
shows that the company has faith in its employees, but also that they are an important part of
the organization. It also benefits the customer receiving the service, for example through
instant corrections when services rendered do not meet the customer's expectations.
The Tangibles dimension takes into consideration the appearance
of physical facilities, equipment, personnel, and communication materials. Training the
employee to handle the job correctly also plays a part in this dimension. It is important that
products and services are delivered in a way that appeals to the consumer, in a clean, well-lit,
and comfortable setting. Facilities should also be designed to appeal to the customer's senses.
A question might be "The employees were neat in appearance." Empathy is the service
quality dimension that measures the perceived caring, individualized attention the employees
provide to each customer. Service that goes above and beyond expected levels occurs when
an employee displays empathetic qualities. Empathy is difficult to instill in an employee
because of its intimate nature. It manifests itself in smiles, personal attention, and clear
communications. Finally, Responsiveness captures the willingness to help customers and
provide prompt service: customers feel a high quality service provider is able and eager to
give prompt and satisfactory service.
IV. METHODOLOGY
The study utilized a mall-intercept technique at an NFL team’s stadium during a 2005
regular season afternoon game (beginning at 1:00 EST). Twenty field researchers were
divided into five teams to cover the stadium systematically and approached attending fans at
random who had recently exited a concession stand within a research team’s assigned area.
Respondents from all areas of the stadium were solicited (upper concourse, club level, and
main concourse). Data were collected beginning three hours before kickoff (when
concession areas opened) until just after halftime, when the flow of fans visiting the
concession stands slowed. Respondents were approached by field researchers, invited to
participate in the survey, and offered a food coupon to promote participation. A total of 269
usable surveys were collected. Measures - The survey used to measure customer service
includes all 5 dimensions of the RATER model with approximately 3 to 5 questions per
dimension (totaling 17 items). Five-point Likert scales ranging from Strongly Disagree to
Strongly Agree were used, which allowed for the measurement of the difference between
customers' expectations and perceptions of the actual service they received (Brown,
Churchill, and Peter 1993). The single-page survey finished with several standard
demographic questions (see Figure I).
Figure I

Thank you for your help by completing this survey. All answers are confidential.
Circle the number that best reflects your concession stand experience.

Scale: 1 = Strongly Disagree ... 5 = Strongly Agree

The line moved quickly. (R1)
I received exactly what I ordered. (R1)
I received what I ordered quickly. (R1)
Staff appeared well trained to handle the job. (A2)
The staff greeted me in a friendly manner. (A2)
Staff recommended additional items to purchase. (A2)
The concession stand was clean. (T3)
The condiment area was clean. (T3)
The condiment area was well stocked. (T3)
The employees were neat in appearance. (T3)
The menu was easy to read. (T3)
The food presentation met my expectations. (T3)
The quality of the food met my expectations. (T3)
The drinks were worth the price. (T3)
The employee greeted me with a smile. (E4)
The staff seemed happy to provide service. (E4)
The staff seemed thankful for my patronage. (E4)
The staff displayed willingness to help. (R5)
The staff provided prompt service. (R5)
My overall concession stand experience was positive. (R5)
I will return to this concession again.
Questions were developed based on the original RATER model (Parasuraman et al. 1988)
and in cooperation with the host management team to capture issues crucial to their specific
business environment. It is not uncommon to adapt service quality assessment items to
accommodate specific industry needs, and doing so may even be necessary to collect more
pertinent information (Eastwood 2005).
V. RESULTS
Participants - The final sample comprised fewer females (22%) than males (78%). Ages
ranged from 13 to 81 years, although the majority (92.0%) fell between 18 and 69 years of
age (mean = 36.84; SD = 13.86). Most of the respondents were white (88.1%). The next
largest group represented was African-American (17%). The mean annual household income
reported by the solicited fans was $102,118; however, 70% of the respondents reported
making less than that, and the median annual household income was $85,000. Most
respondents reported higher education levels: fewer than 20% reported having less than
"some college" as the highest education level completed.
Fan behaviors were also collected. On average, fans reported visiting the concession
area 3.6 times during the game. The largest share of those visits (24%) occurred before the
game began. The respondents attended, or expected to attend, about six games (5.58) over
the season. Most (72.3%) planned to attend at least eight games.
Service Quality Assessment - The 17 items used to assess quality perceptions held by
fans were first organized into the five dimensions and tested for Cronbach reliability (see
Table I). Each scored .70 or above, indicating acceptable reliability. Next, the dimension
averages were computed.
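Cronbach's alpha for each dimension can be computed directly from the item scores. A minimal sketch follows, using toy data rather than the study's responses:

```python
from statistics import variance  # sample variance

def cronbach_alpha(scores):
    """scores: list of rows, one per respondent, each row holding k item answers.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(scores[0])
    item_vars = [variance(col) for col in zip(*scores)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Toy data: two perfectly consistent items yield alpha = 1.0.
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))
```

Values of .70 or above, as reported in Table I, are conventionally treated as acceptable internal consistency.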
Table I

Dimension        Cronbach α    Mean Quality Rating
Reliability        .7434             4.229
Assurance          .7496             3.697
Tangibles          .8574             4.149
Empathy            .8839             3.961
Responsiveness     .7800             4.089
As the results above indicate, the Reliability dimension was rated most positively by
the NFL team's fans, with a 4.2 on a scale of 1 to 5. In descending order of performance, the
remaining dimensions were Tangibles, Responsiveness, Empathy, and Assurance. According
to Berry et al. (2003), reliability is the most important dimension.
VI. CONCLUSION
REFERENCES
Black, B., (2000, August). The application of SERVQUAL in a district nursing service.
Retrieved March 15, 2005, from www.touchmedia.co.uk
Brown, T., Churchill, G., & Peter, J.P. (1993, Spring). Research note: Improving the
measurement of service quality. Journal of Retailing, Greenwich. Vol 69, Iss. 1; p.
127
Buzalka, M., (2000). Catering to the suite life. Food Management, Vol. 35, Iss. 7; pg. 54-58
Cameron, S., (2004). The Frill of it All. Amusement Business, Vol. 116, Iss. 26; pg. 14-15
Czaplewski, A.J., Olson, E., & Slater, S. (2002, Jan/Feb). Applying the RATER model for
service success. Marketing Management, Chicago, Vol. 11, Iss 1, pg.14-18
Dolezalek, H. (2004, July). Boot Camp Brewhaha, Training, Vol. 41, 7, pg. 17
Eastwood, D., Brooker, J., & Smith, J. (2005). Developing marketing strategies for green
grocers: An application of SERVQUAL. Agribusiness, Vol 21, Iss. 1; p. 81
“How to create a Service Quality Survey / How to build a Service Quality Survey.” Retrieved
March 15, 2005, from http://www.surveyz.com/howto
/how%20to%20build%20service%20quality%20surveys.html
Hudson, S. & Graham, A., (2004). The measurement of service quality in the tour operating
sector: a methodological comparison. Journal of Travel Research, Vol. 42, pg 305-
312
Keele, (1994). Keeping the Customer Satisfied. Health Manpower Management, Vol. 20, Iss.
4; p. 11
King, P., (2004). Home Park Advantage. Nation’s Restaurant News, Vol. 38, Iss. 13; p 17
Nitecki, D. SERVQUAL: measuring service quality in academic libraries. Retrieved March
15, 2005 from http://www.arl.org/newsltr/191.servqual.html
Parasuraman, A., Zeithaml, V., & Berry, L., (1988). SERVQUAL: A multiple-item scale for
measuring consumer perceptions of service quality. Journal of Retailing, Greenwich,
Vol. 64, Iss. 1; pg. 12-29
Parasuraman, A., & Zeithaml, V., (2003). Ten Lessons for Improving Service Quality. MSI
Reports Working Paper Series, No. 03-001, p 61-82
“The Rater Model”(2002). Monash University. Retrieved March 15, 2005 from
www.adm.monash.edu.au/cheq/support/rater_model.html
Team Marketing Report. Retrieved January 10, 2006 from
http://www.teammarketing.com/fci.cfm?page=fci_nfl_05.cfm
CONVERGENCE IN MISSISSIPPI: A SPATIAL APPROACH
ABSTRACT
This study analyzes the convergence process in Mississippi at the county level, from
both a descriptive and a general test perspective, applying a spatial statistics framework.
Mississippi makes an interesting case study for analyzing the income convergence process
because of several characteristics, such as its fairly large number of counties, its relatively
homogeneous economy, and its low income compared with the rest of the U.S. The study
finds evidence of low but significant spatial correlation, suggesting an almost pattern-free
spatial distribution of per capita income growth. It also finds significant evidence of β
convergence, albeit at a low speed (less than one percent).
I. INTRODUCTION
One of the most intriguing research topics, designated by some researchers as the
“regional scientist's art” (Plane 2003, p. 105), is permanent growth and change at the
regional level. Indeed, the question of whether inequalities between different regions
(countries and their subdivisions) tend to decrease over time, and whether the process is
endogenous, has always preoccupied economists. But although research in the economic
growth area is common, the econometric (and other) issues underlying the topic are still
highly debated. Thus some scholars plead for moving from general tests to “statistical
descriptions of what is happening coupled with a forecasting mechanism” (Carvalho and
Harvey 2002).
This study analyzes β convergence for real income within a U.S. state, namely
Mississippi, over the 1969 – 2001 period, combining both a descriptive and a general test
perspective. Mississippi has a mix of characteristics that makes such a study interesting. First,
the absence of trade barriers of any kind (including the less important interstate barriers)
allows for an absolute convergence approach. Second, the problem of different standards and
imperfect conversions amongst the data, which may lead to biases (Dowrick and Nguyen
1989, Dowrick and Quiggin 1997), is avoided; indeed, there should be little reason to
distinguish between conditional and absolute convergence in this case (Barro and
Sala-i-Martin 2004, Carvalho and Harvey 2002). Third, the low per capita income compared
with the U.S. as a whole also makes Mississippi an interesting case, since extremes are
known to behave unpredictably.
The study finds a relatively low, albeit significant, level of spatial correlation within
the area. Tests of the OLS versus the spatial error model as the best specification for a general
convergence test seem inconclusive, although they appear to favor the spatial model.
However, relatively strong support for absolute convergence is found, even if at a low speed
(less than one percent). The next section reintroduces the reader to some basic convergence
concepts and the most common model specifications, as well as their interpretations. After
presenting and commenting on the estimation results, the study ends with conclusions and
suggestions for further research.
It is said that absolute β convergence exists when poor economies tend to grow faster
than the rich ones while all possible factors that govern the phenomenon are endogenous
(Barro and Sala-i-Martin 2004). To model such a situation, one would assume that the
steady-state income ŷ* has a common value for all economies under study, and therefore the
growth rate depends only on the initial income ŷ(0), as suggested by Baumol (1986). Then, if
the coefficient of ŷ(0) is statistically significant, one may conclude that the sample exhibits
absolute convergence.
On the other hand, conditional convergence exists when there are other variables
influencing the speed of convergence, and these variables differ between economies, being
therefore area specific. Such variables may lead to different steady states ŷ*, and therefore
the growth rate for each economy would depend not only on the initial conditions but also on
these variables. Finally, σ convergence occurs when the dispersion of the real income (or
another measure of economic relevance) of a group of economies tends to decrease over
time. It can be demonstrated that β convergence is a necessary but not sufficient condition for
σ convergence (Barro and Sala-i-Martin 2004).
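σ convergence can be checked descriptively by tracking the cross-sectional dispersion of (log) income over time. A small sketch follows, with made-up log-income values rather than the Mississippi data:

```python
from statistics import stdev

def dispersion_by_year(panel):
    """panel: dict mapping year -> list of log per capita incomes across economies.

    Returns the cross-sectional standard deviation for each year; a falling
    series over time is descriptive evidence of sigma convergence.
    """
    return {year: stdev(incomes) for year, incomes in sorted(panel.items())}

# Hypothetical two-date panel of three economies.
panel = {1969: [9.0, 9.6, 10.2], 2001: [9.8, 10.0, 10.2]}
disp = dispersion_by_year(panel)
print(disp[1969] > disp[2001])  # True: dispersion fell, consistent with sigma convergence
```

Because β convergence does not guarantee σ convergence, this descriptive check complements, rather than replaces, the regression test below.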
Model specification. One of the simplest empirical specifications of a model that allows
testing for convergence was proposed by Baumol (1986) and is the starting point for many
contemporary studies. While the theory behind it may have been less formal (from both an
economic and an econometric point of view), it is simple and provides a robust study
framework. Moreover, it was later demonstrated that the model is in line with economic
theory. Ignoring the economy subscript (i), the model is:

    [ln(y_{t+T}) - ln(y_t)] / T = α - β* ln(y_t) + ε,   where y_t = Y_t / L_t    (2)
Here Y represents income, L labor, and T the time interval under analysis. The norm in the
literature is to compute the growth rate over the entire time period for which data are
available and annualize it, and to standardize income by dividing by population, number of
active workers, hours worked, or other indicators. Then, obtaining a statistically significant
β* is interpreted as evidence that areas with lower income at the beginning of the period
(time t) grew faster. Since the average growth rate depends only on the initial y, such
evidence would indicate absolute convergence. The underlying convergence speed is
obtained from the following formula:
    β = -ln(1 - β*·T) / T    (3)
Several previous studies (for various regions and time intervals) found the convergence
speed to be somewhere between 1.5 and 3.0 percent.
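Equation (2) and formula (3) together give a simple recipe: regress annualized growth on initial log income, then convert the estimated β* into an annual convergence speed. A minimal sketch with synthetic data follows; the values are illustrative only, not the study's estimates:

```python
import math

def ols_slope(x, y):
    """Least-squares slope of y on x; in equation (2) this estimates -beta*."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

def convergence_speed(beta_star, T):
    """Annual convergence speed implied by formula (3): beta = -ln(1 - beta*·T) / T."""
    return -math.log(1 - beta_star * T) / T

# Synthetic counties generated with growth = 0.11 - 0.015 * ln(y0),
# so the regression should recover beta* = 0.015.
ln_y0 = [8.6, 9.0, 9.4, 9.8]
growth = [0.11 - 0.015 * v for v in ln_y0]
beta_star = -ols_slope(ln_y0, growth)
print(convergence_speed(beta_star, T=32))  # roughly 0.02, i.e. about two percent a year
```

Note that for small β*·T the speed β is close to β* itself; the logarithmic correction matters as the horizon T grows.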
But even if the economy under scrutiny is homogeneous enough to be analyzed under an
absolute convergence assumption, an important effect may be introduced by possible spatial
dependence in the data. While spatial dependence has only relatively recently begun to play
an increasingly explicit role in economics and econometrics, several scholars have pleaded
for shifting the focus of research from treating areas of interest as “islands” to taking into
consideration the spatial dimension of the phenomena (Quah 1996). Consequently, the
classical convergence tests might need to be augmented to take into consideration possible
spatial dependencies. Such “spatial” models may be (but are not limited to) a spatial Durbin
or a spatial error model. The decision on the best specification relies as usual on theory and
econometric tests, and several recent papers describe such approaches (Anselin 2002).
Data. The data used in this study are compiled from the REIS system of the Bureau of
Economic Analysis (Bureau of Economic Analysis 2003), and consist of yearly realizations
of “Personal income” and “Population” for the 1969 – 2001 interval. The data and
methodology are described in the CD-ROM notes and on the web, and therefore need not be
discussed here. The “Personal consumer expenditure: Chain-type price index” was used as a
deflator for calculating real per capita income. The data are relatively well known, having
been used in several other studies (Boasson 2002, Higgins et al. 2003).

Figure 1. Growth scatter plot, 1969-2001.

The first step is to visually assess the strength of the relationship between per capita income
growth and initial per capita income. Figure 1 suggests a negative relationship between the
initial LRPI and real per capita income growth (LRPIGR), and therefore possible β
convergence.
A second step is to understand the patterns of spatial correlation in the data and
identify possible spatial clusters. Figure 2 maps the LRPIGR values, which helps one
visualize the counties with the highest and lowest growth, as well as possible spatial patterns
in the data. The county with the largest average annual growth (3.8%) is Madison, situated in
the middle of the state. It is interesting to observe that the counties where gambling became
an important part of the economy (gambling was established in the late 1990s in Tunica,
Coahoma and Bolivar, and the growth of the industry was astonishing) seem to have
benefited, since they are in the group of counties with relatively high growth. As expected,
the counties with the lowest growth are situated mostly in the Mississippi Delta.
The degree and significance of the global spatial correlation in the data is assessed
with the help of the Moran's I statistic. Mississippi's spatial neighborhood structure is
characterized by an average of 5.48 neighbors for each county (based on the queen
neighborhood definition). The computed Moran's I is .1768, with a p value of about .002
after 999 Monte Carlo randomizations. Corresponding to the above Moran statistic, the
standard deviation of the LISA statistics is .4551.

Figure 2. Spatial distribution of real income growth (LRPIGR).

Table 1 shows the locations that qualify as possible outliers in the spatial distribution of the
LISA statistics, as well as the associated p values. The outliers were established with the
two-times-standard-deviation rule. There are three such locations, of which only Scott
County has a negative Local Moran value (indicating negative spatial correlation). Naturally,
all outliers also appear as possible clusters in the analysis.
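Global Moran's I, together with the permutation-based p value used above, can be sketched in a few lines. The four-region chain and binary contiguity weights below are a toy example, not Mississippi's queen-neighborhood structure:

```python
import random

def morans_i(values, w):
    """Global Moran's I for values with an n x n spatial weight matrix w."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(w[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    s0 = sum(sum(row) for row in w)
    return (n / s0) * num / den

def permutation_p(values, w, reps=999, seed=0):
    """One-sided p value: share of random relabelings with I >= the observed I."""
    rng = random.Random(seed)
    observed = morans_i(values, w)
    hits = sum(morans_i(rng.sample(values, len(values)), w) >= observed
               for _ in range(reps))
    return (hits + 1) / (reps + 1)

# Toy example: four regions in a line, binary contiguity weights.
w = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
print(morans_i([1, 2, 3, 4], w))  # about 0.33: similar values sit next to each other
```

The study's .1768 with p ≈ .002 after 999 randomizations corresponds to exactly this kind of permutation test, applied to the county growth rates and the queen contiguity matrix.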
Figure 3 maps the clusters (as suggested by the LISA statistics, at a significance level
of .05, after 999 randomizations) based on the per capita income growth for Mississippi as
well as for the surrounding counties. It appears that, while the overall spatial correlation is
significant, there are relatively few clusters. Indeed, there are two high-high clusters (Leake
and Marshall counties, H-H in the legend), two high-low (Warren and Itawamba counties,
H-L in the legend), three low-high (Scott, Lauderdale, and Winston counties, L-H in the
legend), and one large low-low cluster (Pearl River, Hancock, Harrison, Jackson, George and
Stone counties, L-L in the legend). The latter is the clearest spatial cluster, situated in the
southern region of the state and composed of six low-growth counties (three of which are
adjacent to the Gulf of Mexico). As can be observed from the maps, the counties surrounding
the state border are maintained in the sample throughout the analysis. This approach assures
correct statistics for all calculations where a first-degree neighborhood matrix is involved.
Figure 3. Cluster map, real per capita income growth (legend: H-H, L-L, L-H, H-L).

Estimation. There are several examples of studies that looked at different regions and
time periods to assess the degree to which the classical convergence framework holds. They
employed different methodologies and their findings are often contradictory, but in the case
of the U.S. many studies reported convergence at a speed of around two percent (for a review
of such studies see Barro and Sala-i-Martin 2004). However, researchers found that
divergence may not be ruled out in certain cases, and suggested that the possibility of the
formation of “clubs” should also be considered (the term “clubs” was coined by Quah 1996,
who suggested that the distribution of growth patterns may be bimodal, or even multimodal).
Moreover, they also found that, for certain time periods at least, divergence may appear even
within countries, suggesting that relatively similar economies do not necessarily converge
(Evans and Karras 1996). Examples of regions exhibiting very weak or no support at all for
convergence are Austria (Hofer and Worgotter 1997) and Greece (Siriopoulos and Asteriou
1998).
Table 2 reveals the estimates for the “classical” OLS regression as well as for the
spatial model believed to fit the data best. The OLS results suggest an acceptable fit for this
type of model and data, while the maximum likelihood model brings no significant changes.
Moreover, although the diagnostic statistics for the OLS estimation suggest weak
heteroscedasticity (White test marginally significant with a p value of .0422) and the spatial
diagnostic tests suggest a spatial error model (the Moran's I p value is .0171 and the LM test
for the error model has a p value of .0330), the Akaike and Schwarz criteria are both slightly
lower for the ML model, and the LR test for spatial dependence is only marginally
significant (p value .0478). However, for the spatial model, the Breusch-Pagan test does not
indicate significant heteroscedasticity (p value .0563), suggesting a better specification.
Table 2. Estimation results.

                     OLS                        ML
Variable        Coefficient   t-stat.     Coefficient   t-stat.
Constant           0.1132     7.1436         0.1109     6.7094
LRPI 1969         -0.0098    -5.5894        -0.0095    -5.2165
Lambda                -          -           0.2654     2.0782
R2                 0.2066                    0.2434
F statistic       31.2414                       -
AIC             -1010.49                  -1014.54
SIC             -1004.88                  -1008.93

Note: As in (7), lambda stands for the coefficient of the lagged error.
In both cases β* is highly significant and has a fairly close value, which is taken as
evidence that real per capita income in Mississippi converges. In both cases the speed of
convergence is about .8 percent, much less than the two percent that part of the literature
suggests as a universal constant. More research is needed to understand why the convergence
speed is so low. This result is unexpected especially since, at the county level, many people
work in a different county than the one in which they live, a movement that would tend to
equalize income growth. A possible explanation is that
IV. CONCLUSION
This study investigates the convergence process at the county level for Mississippi,
for the 1969 – 2001 interval, from both a descriptive and a general test approach. It finds
indications of low but significant global spatial correlation, but a relatively low number of
spatial clusters, suggesting a spatially unorganized economy. Applying both a classical and a
spatial approach, the study finds significant evidence of real per capita income convergence
amongst the counties in Mississippi. The convergence speed of about .8 percent, however, is
lower than the two percent speed suggested by other authors as “standard”.
REFERENCES
Baumol W. J. “Productivity Growth, Convergence, and Welfare: What the Long-run Data
Show.” American Economic Review. 76, 1986, 1072-1085.
Boasson E. The Development and Dispersion of Industries at the County Scale in the United
States 1969 – 1996: An Integration of Geographic Information Systems (GIS),
Location Quotient and Spatial Statistics. Ph.D. dissertation, State University of New
York, Buffalo, 2002.
Dowrick S. and Nguyen D. “OECD Comparative Economic Growth 1950-85: Catch-up and
Convergence.” American Economic Review, 79, 1989, 1001-1020.
Dowrick S. and Quiggin J. “True Measures of GDP and Convergence.” American Economic
Review. 87, 1997, 41-64.
Evans P. and Karras G. “Do Economies Converge? Evidence from a Panel of U.S. States.”
Review of Economics and Statistics, 78, 1996, 384-388.
Hofer H. and Worgotter A. “Regional Income Convergence in Austria.” Regional Studies,
31, 1997, 1-12.
Plane D. A. “Perplexity, Complexity, Metroplexity, Microplexity: Perspectives for Future
Research on Regional Growth and Change.” Review of Regional Studies, 33, 2003,
104-120.
Quah D. T. “Twin Peaks: Growth and Convergence in Models of Distribution Dynamics.”
Economic Journal, 106, 1996, 1045-1055.
Rey S. J. and Montouri B. D. “US Regional Income Convergence: A Spatial Econometric
Perspective.” Regional Studies, 33 (2), 1999, 143 – 156.
Siriopoulos C. and Asteriou D. “Testing for Convergence across the Greek Regions.” Regional
Studies, 32 (6), 1998, 537-546.
CHAPTER 26
EXPLORING CRITICAL STRATEGIC MANAGEMENT
ABSTRACT
This article examines the tenets of Critical Strategic Management based on the latest
ideas and debates amongst critical strategic management scholars. The perspective subsumes
the wider cultural, political, and moral ramifications of the strategy process, reflecting the
reality of the wider social world, and argues against a technocratic approach preoccupied
with instrumental rationality that preserves the sectional interests of the elite managers who
run corporations only to improve corporate profitability. Critical Strategic Management resonates
with the paradigm of the power and political school perspective and draws on post-modern
and critical theories to question the neutrality of the strategy process embedded in an
organisational context. The perspective suggests a conception of emancipation to be
introduced in the content, process and context of strategising to improve corporate
performance for the betterment of wider society.
I. INTRODUCTION
Many of those engaged in Critical Management Studies, who have come to be known
as critical management scholars, myself included, feel impelled to do more than simply
communicate our critical thoughts to each other. For many of us, some form of practical
engagement as activists is an essential part of our identities as critical management scholars:
changing the management world as we find it and, in so doing, affirming ourselves as active
agents. This article is an attempt to move the new paradigm of critical
strategic management beyond the language of critique into practice and perspective. In other
words, it aims to contribute to the development of a critical understanding of strategic
management by examining its tenets and providing a clearer practical foundation for further
research in the field. The first section of this article examines the minefield of orthodox
strategic management in its current form. The second section examines the latest
contributions to the development in the strategy field by critical scholars who examine
strategic management from a broader critical perspective i.e. one that has significant cultural,
social and political ramifications within organizations and in the wider society. The third
section critiques these latest critical ideas, thinking and contributions to the strategy field.
The final section concludes by suggesting caution about some of the embedded problematic
premises, presumptions and presuppositions of the critical approach.
For better or worse strategic management has become an academic discipline in its own
right, with its own academic journals, conferences and common body of knowledge embedded
in more than fifty books under the title of ‘Strategic Management’. For the most part, the field
has evolved into a set of taken-for-granted assumptions that the development of strategy in
contemporary organizations is a relatively straightforward rational process, based upon the
simplification or dichotomy of management subject disciplines. There is also a positivist
assumption that there are prescriptive techniques able to determine an organization's strategic
direction and long-term performance, and to integrate the entire scope of decision-making
activity within an organization, by simply performing the five common and fundamental
technocratic tasks of environmental analysis, strategic formulation, strategic implementation and
strategic control. However, the procedural school of strategy (Mintzberg, 2004) provides a
skeptical critique of such an approach and argues that this simplification imperative traps the
subject in what Weber calls ‘technical rationality’ (Weber, 1978) and maintains a concern for
prescriptions, linearity and order, in other words, packaging management knowledge into
formalized bundles of information (Watson, 1996). Whittington
(2004) argues that technocratic strategic management is trapped in a positivist epistemological
strand of its own making. This author argues that such a simplified technocratic approach takes
for granted the historical and political conditions under which stakeholders’ priorities and
interests are determined and enacted. In other words, such technocratic perspective can easily
overlook the non-linearity of the broader issues of power and politics and the concern for ethics,
domination and managerial assumptions which may have profound impacts on the corporation’s
long-term performance and wider society in general. The dilemma posed by the technocratic
approach stems from the fact that the embedded positivist thought seems to deny uncertainty and
the complexity of contemporary organizations and their business environment. This author
contends that the reality of the business environment of the twenty-first century is much more
complex than the technocratic strategic management model played out in most Business
Schools’ management education programs, or by voluminous strategic management text books.
The world has changed and there is very little fresh reliable or comprehensive empirical
evidence to date that convincingly demonstrates a relationship between technocratic strategic
management and successful corporate performance (Eden and Ackermann 1998). There is also
very little fresh empirical evidence to suggest that technocratic strategic management contributes
to the development of a healthier, better and fairer industrialized society. According to one view,
the technocratic mode of strategy development has actually created a circle of sociological
problems attributed largely to the ‘taken for granted’ historical and political conditions under
which strategic decisions are determined and enacted by corporations (Whittington, 2004).
There is also empirical evidence to suggest that the problems are attributed to the grip of
industrial positivism and attempts made by corporations to rationalize the political and social
world. Levy, Alvesson and Willmott (2003) argue that the technocratic perspective of
strategizing in organizations is colored by a preoccupation with the instrumental values of
corporate performance and profitability at the expense of the wider political and sociological
problems of the twenty-first century.
It can be reasonably argued that there is a need for an alternative approach to strategic
management that is not only relevant but also desirable and reflects the changing needs of wider
society and seeks to explore strategizing as a discursive process, one that has significant social
and political ramifications both within the corporation and in wider society. The key point is to
have a critical approach that moves away from certainty, towards an appreciation for pluralism
and diversity, towards an acceptance of social and political ambiguity and the paradox of
complexity rather than rationality. In other words, strategic management should not be
confined to a managerial perspective that merely helps elite corporate managers improve
profitability; it should also help managers develop a richer conceptualization of the complexity
of the social and political world and prepare them to grasp the complicated value conflicts
found in contemporary organizations and in broader society. The next section explores the tenets
of critical strategic management based on the latest thinking and ideas of critical management
scholars.
III. THE TENETS OF CRITICAL STRATEGIC MANAGEMENT
Despite 30 years of academic teaching and research, the field of strategy still lacks
direction, respect, roles and contributions, and is replete with various competing fashions,
perspectives and directives (McKiernan and Carter, 2004). The relevance and desirability of the
strategy field to contemporary organizations and wider society has been widely challenged by
critical scholars over the last two decades (Knights and Morgan, 1991; Whittington, 1992; Bower
and Doz, 1997; Whipp, 1999; Levy, Alvesson and Willmott, 2003; Wilson and Jarzabkowski,
2004; Clegg, Carter and Kornberger, 2004; Knights and Mueller, 2004; Ezzamel and Willmott,
2004; Starkey and Tempest, 2004). They are calling for a more critical approach to revive the
subject field, one that is examined as discourse and practice and has significant political and
social ramifications within corporations and wider society. They challenge the relevance and
desirability of orthodox strategic management that is embedded with positivism and governed by
managerial ideology and values. A set of identifiable tenets or beliefs of Critical Strategic
Management can be extracted from their work. It appears that a Critical Strategic Management
perspective is anchored by:
• a wider strategic context that subsumes significant political and social ramifications
within corporations and in wider society. The context is expected to go well beyond
managerial efforts to harness social and political knowledge and commitment to reflect a
system of values that are democratically acceptable within the corporation and wider
society. In other words, strategy-as-practice is expected to go well beyond the business
context to include charities, non-profit organizations and semi-private organizations,
ranging from regional economic development to the development of the social and political
economy.
• a strategic content that subsumes liberal, cultural, social and political cognitive
discourses or ideology and does not just encapsulate positivist ideas where the values or
interests are being trapped in what Weber (1978) calls ‘instrumental rationality’. In other
words a strategic content is expected to include an awareness of corporate social capital
and network relationships that are less technocratic, i.e. less rigid, closed and exclusive,
and based on sound principles of hegemonic alignment of ideological, political and
economic issues.
• a strategic process that subsumes a cognitive frame of critical reflective learning that
questions the hidden positivist premises and presuppositions that are commonly
embedded as received knowledge and practice found in orthodox strategic management.
The critical reflective learning is expected to include opportunities to discuss what
Argyris (1996) calls ‘the undiscussable’, that is, asking questions that are usually not
asked. It is important to distinguish between apparently coherent sets of values, beliefs
and practices that are constructed and disseminated by strategists to explain and sustain
their legitimate position, and the assumptions that are concealed during practice. In other
words, the critical reflective learning is expected to include an opportunity to question
the neutrality of strategy, political knowledge and power in relation to different
stakeholders’ vested values and interests. The Habermasian concept of emancipation
(1972) is to be contained within such a process to foster pluralistic decision-making and
include stakeholders whose voices have been previously marginalized. In other words it
is expected to allow less privileged and powerless minorities to identify and contest
sources of inequality and unfair treatment, and question the implicit managerial
assumptions of the existing hierarchies and the strategic decision outcomes in relation to
the vested interests and values of the organization.
IV. CRITIQUES AND CONTRIBUTIONS OF CRITICAL STRATEGIC
MANAGEMENT
Critical Strategic Management resonates with the political and power strategy school
perspectives as described by Mintzberg, Ahlstrand and Lampel (1998). Its tenets draw upon
the premises of the power and political strategy school: for example, the strategic process is
shaped by power and politics, and strategizing is seen as the interplay of persuasion,
bargaining and sometimes confrontation in the form of political games. The engagement of
power and political games ensures that the strategic content embedded in the context is taken
seriously, and frames the organization as promoting its own welfare through maneuvering as
well as collective means to combat domination and exploitation by elitist
managers. The political process also helps to open up space for resistance by labour,
environmentalists, and other forces challenging practices or the status quo of corporations
and their elitist managers. One of the weaknesses of Critical Strategic Management is that it
overstates the interplay of power and politics and downplays the importance of leadership of
the entrepreneurial school perspective. By concentrating too much attention on the content
concerning divisiveness and politics, the strategic process may miss fruitful dialogue and
preclude or undermine a visionary leader’s ability to develop clear vision which may be
beneficial to wider society, even in some conflictive fashions. This author argues that the
significance of power and politics as played out in the critical strategic management
perspective may risk an impetuous, overconfident, dogmatic identification of dominant and
subordinate groups and their interests, which can preclude a critical reflection on wider social
goals and virtues. For example, while it is true to claim that political and power dimensions
can play a positive role in strategy development, particularly challenging the established and
legitimate form of domination, divisiveness and unethical practices of corporation elitist
managers as contended by critical scholars, it can also be a source of wastage and distortion
viewed from a wider societal perspective. This is highly prevalent in a situation when
corporations face severe pressure from environmental uncertainty, and when political
activists who have most power and inclination may cloud other critical issues to further their
own interests. This usually happens when ailing corporations are subjected to intense market
pressure and are unable to establish any clear direction or turnaround strategy: decision-making
tends to become a free-for-all, and powerful stakeholders
make claims to expertise, insight and authority that often reproduce or reinforce legitimate
organizational inequalities and unethical practices. Yet such significant issues are hardly
addressed in critical strategic management literature.
On the positive side, critical management scholars have introduced some useful postmodernist
and critical ideas that enrich the field, for example, concepts of domination,
coalition, emancipation, hegemony, exploitation and pluralism. These radical ideas hold out the
promise of revealing the taken-for-granted assumptions and ideologies embedded in the
discourse and practice of strategy and challenge its self-understanding as a politically neutral
tool to improve the organization’s long-term performance. It also highlights the need for
stakeholders to question the universality of managerial interests and bring to the surface latent
conflicts. It brings strategic content, process and context into closer scrutiny by fostering more
participative decision-making, from which the voices of the powerless were previously
excluded. Overall, the strategy development process is more likely to be of participative form,
negotiated through persuasion, bargaining or direct confrontation. Strategy that emerges from
such processes is more likely to embrace critical reflective learning. Finally, Critical Strategic
Management breaks new ground and frees us from the comfort provided by rational and linear
thinking and relocates us in the modern environment where history, politics, power and culture
are the driving forces of change for the betterment of wider society.
V. CONCLUSION
This article examines the tenets of critical strategic management and its underpinning
theoretical perspective based on the latest ideas and debates among critical scholars. The
approach is consonant with the power and political school perspective. It challenges the
positivist thoughts of orthodox strategic management and argues that strategic management
should embrace a critical examination and understanding of the cultural, social, political, moral
and ethical issues, the fundamentals upon which any contemporary organizational reality rests.
The approach encourages the development of a system that is less coloured by the narrow
interests of top management and their preoccupation with instrumental values, but views the
strategic management process as having significant social and political ramifications. The
approach encourages critical reflective learning that draws upon postmodernist and critical
theories to conceptualize strategic management as a discourse and discursive practice. It helps
us to understand the importance of politics and power in promoting strategic change for the
betterment of wider society. However, like the power and political school perspective, the
approach rests on some problematic premises, presumptions and presuppositions. It assumes
that politics is always dysfunctional, nothing more than a mechanism of control used by the
corporation to serve the profit-making interests of elite managers, and that the perspective can
identify dominant and subordinated groups and their strategic interests and intents in absolute
terms, in a way that promotes divisiveness and excludes wider societal goals and virtues. In
many ways, the paradigm of Critical Strategic Management complements
the power strategy school perspective by infusing discursive critical ideas to stimulate strategic
change that is blocked by legitimate corporate systems and procedures and to ensure that all
stakeholders’ issues are fully and democratically debated. However, further empirical research
is needed to examine how and to what extent such a critical approach might benefit
organizations and wider society.
REFERENCES
TOWARD AN UNDERSTANDING OF RELEVANT STRATEGIC
ORGANIZATIONS: A FUZZY LOGIC APPROACH
ABSTRACT
The purpose of this paper is to attempt a preliminary development of a new model that
integrates strategic organizational analysis and the fuzzy logic approach. We suggest that in
today’s business environment and according to the stage of the business life cycle that firms
are facing, managers must pay attention to how important it can be to focus on either
efficiency or effectiveness when designing their firm’s organizational structure. The
preliminary conclusion of the study is that in a competitive environment with a short business
life cycle the question is no longer how efficient the firm’s organization might be but how
relevant are the decisions made in terms of strategic alignment between strategic organization
and business strategy. In other words, how relevant is the organization to its competitive
environment and position.
I. INTRODUCTION
In today’s uncertain and global business environment, companies are facing several
organizational and structural challenges. To explore these new ways and approaches, we
suggest multi-field research crossing the fuzzy logic approach with that of organizational
development (OD). We believe that the findings of this multi-field research can help
managers respond to key questions such as: How should a company's organization be
managed in a global context? How should a new corporate organizational architecture be
designed or restructured? How can an organization be built that is strategically relevant to
the firm's business life cycle? The central question of our research proposal is: how can
firms' managers move from an efficient organization to a relevant strategic one? We make
the assumptions that
first, being an efficient organization is no longer enough today to compete in a business world
characterized by uncertainty, short business cycles, and a very dynamic competitive
environment that requires firms to be more flexible and adaptable. Second, at the early stage
of the business life cycle, decision-making about organizational structure might look for
effectiveness instead of efficiency. Third, at the later stage of the life cycle, say maturity and
decline, decision-making about organizational structure might look for efficiency instead of
effectiveness.
II. LITERATURE REVIEW
It was Adam Smith who built the premise of what we call today “organization
theory” when he demonstrated the greater efficiency that could be gained through the division and
specialization of labor (Stigler, 1957). This work laid the foundation for later organizational
and industrial theorists such as Max Weber and Frederick Taylor who advocated narrowing
the scope of workers’ jobs so that specialization could be developed and efficiency enhanced
(Wren, 1972). Another important contribution to organizational theory came from Chester
Barnard and what is known as the Human Relations School. This approach explored the role of groups
and social processes in organizations. The most notable work is the Hawthorne studies at
Western Electric by Roethlisberger and Dickson and works by Elton Mayo (Roethlisberger
and Dickson, 1939; Mayo, 1945). These studies questioned the rational, efficiency-oriented
scientific management views of work. The works of contingency theorists have a decidedly
rational overtone and have resulted in extensive investigations of organizational technology,
the external environment, goals, organizational size, and how these contextual factors are
related to organizational structure. Contingency theorists reject the one-best-way model of
firm’s organization proposed by earlier theorists (Donaldson, 2001). To finish this brief
review of organizational theory, we focus on two theories based upon industrial and
organizational economics, namely transaction cost economics and agency theory. Although
subtle differences distinguish these two approaches, their central focus is similar (Fama,
1980). Owners seek to maximize their return on investment by the most efficient use of the
organization. Agents, on the other hand, seek to minimize their efforts and maximize their
remuneration. We argue that these different approaches are more focused on the search for
operational efficiency than strategic relevance of the firm’s organization.
The term effectiveness is itself unclear. An organization may be more or less effective
in a variety of different ways. Is effectiveness simply the amount of profits earned? Or is it
the number of units produced or customers served? What about worker satisfaction? In
addition, what about definitions of effectiveness proposed by the organization’s stakeholders?
Are customers satisfied with the organization’s products or services? Is the broad community
satisfied with the manner in which the organization has conducted itself? Has the company
polluted the air and water? Has the company provided some value to the community? All
these questions help us to define the firm’s organizational effectiveness as the way managers
structure their companies so that they are able to satisfy not only shareholders but also
stakeholders’ interests even though these interests can be somewhat different. Concisely, we
define organizational effectiveness as “the right organization that takes into account not only
the need to create value for stockholders and customers but also to create wealth in an ethical
and societal manner for employees and the community as a whole.” On the other hand,
efficiency is defined as the way the firm uses its resources to maximize its outputs. In this,
the efficiency approach is more a legacy of the industrial engineering and time-motion
studies of Frederick W. Taylor.
The fuzzy logic literature offers a large variety of fuzzy inference systems. However, to
model the relevant strategic organization index (RSOI), we chose to use a fuzzy logic
controller (Zadeh, 1965, 1978, 1983; Mamdani, 1975), which is schematized as: crisp inputs
→ fuzzy inference system → crisp outputs.
Definition of linguistic variables:
A linguistic variable can be defined as a triplet (V, X, TV), where V is the name of the
variable; X is the reference set and TV is a collection of normalized fuzzy subsets of X. We
first define the linguistic variable business life cycle, which is specially defined in order to
model its four periods or stages.
X = [0,100], a set of real numbers between 0 and 100,
TV = {embryonic, growth, maturity, decline}; we visualize the four periods of the life
cycle on a scale between 0 and 100: between 0 and 10, we are in the embryonic period,
from 10 to 20 we pass from embryonic to growth, and then from 20 to 40 we go from
growth to maturity, and so forth.
The linguistic variables efficiency, effectiveness and relevant strategic organization index
are defined in the same way.
X = [0,100], set of real numbers between 0 and 100
TV = {very low, low, medium, high, very high}
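As an illustration, these linguistic variables can be sketched in Python with trapezoidal and triangular membership functions. The text fixes only the first life-cycle breakpoints (0-10 embryonic, 10-20 embryonic to growth, 20-40 growth to maturity); the later breakpoints and the evenly spaced five-label partition are our assumptions, chosen so that the worked example that follows (life cycle 50 reads as maturity at level 1; effectiveness 65 reads as medium 0.4 and high 0.6) is reproduced:

```python
def trap(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], plateau on [b, c], falls on [c, d]."""
    if b <= x <= c:
        return 1.0
    if a < x < b:
        return (x - a) / (b - a)
    if c < x < d:
        return (d - x) / (d - c)
    return 0.0

# Life-cycle stages on [0, 100]; breakpoints after 40 are assumptions.
LIFE_CYCLE = {
    "embryonic": (0, 0, 10, 20),
    "growth":    (10, 20, 20, 40),
    "maturity":  (20, 40, 60, 80),
    "decline":   (60, 80, 100, 100),
}

# Five evenly spaced triangular labels for efficiency, effectiveness and the
# RSOI (an assumption; setting b == c makes trap() a triangle).
FIVE_LABELS = {
    "very low":  (-25, 0, 0, 25),
    "low":       (0, 25, 25, 50),
    "medium":    (25, 50, 50, 75),
    "high":      (50, 75, 75, 100),
    "very high": (75, 100, 100, 125),
}

def fuzzify(x, labels):
    """Membership level of crisp input x for every label of a linguistic variable."""
    return {name: trap(x, *abcd) for name, abcd in labels.items()}

print(fuzzify(50, LIFE_CYCLE)["maturity"])   # 1.0
print(fuzzify(65, FIVE_LABELS)["medium"])    # 0.4
print(fuzzify(65, FIVE_LABELS)["high"])      # 0.6
```

With these assumed breakpoints, the fuzzified values match the membership levels used in the inference example below.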
Definition of the rules of the inference engine. The next step is to define the fuzzy rules for
the inference engine. For our model, we have defined four sets of rules. We should notice that
the expertise of the system depends on the quality of the rules. Therefore, building the model
requires the expertise of many persons who know the universe of the firm's strategic
environment and are aware of the firm’s strategic intent. Figure 1 shows the fuzzy values for
a life cycle at maturity stage.
Figure 1. Definition of the set of rules when the life cycle is at the maturity stage.
Inference process: The fuzzy inference process contains the following three steps:
(1) Application of rules:
Consider the example where x1 = 50, x2 = 75 and x3 = 65; the value of the variable life cycle
is maturity at level 1; the value of the variable efficiency is high at level 1 and the value of
the variable effectiveness is medium at level 0.4 and high at level 0.6.
The output variable is denoted $Y$; the result of applying rule $i$, called the consequent, is:

$\mu_{B'_i}(y) = \min\big(\min(\mu_{A_1}(x_1), \mu_{A_2}(x_2), \mu_{A_3}(x_3)),\ \mu_{B_i}(y)\big)$
Now, if we take an example applying two rules, the results can be as follows:
If efficiency is high and effectiveness is medium then the RSOI is low.
If efficiency is high and effectiveness is high then the RSOI is medium.
(2) Aggregation of consequents:
$\mu_{A(B'_1, \ldots, B'_n)}(y) = \max\big(\mu_{B'_1}(y), \mu_{B'_2}(y), \ldots, \mu_{B'_n}(y)\big)$
Therefore, we obtain a polygonal structure.
(3) Defuzzification:
By this means, we can calculate the value of the RSOI by the centroid method:

$y_0 = \dfrac{\int \mu_{A(B'_1, \ldots, B'_n)}(y)\, y\, dy}{\int \mu_{A(B'_1, \ldots, B'_n)}(y)\, dy}$
Figure 2. Crisp RSOI values when the life cycle value is 10 and efficiency and effectiveness
vary from 60 to 70.
We can infer that managers must decide to improve their effectiveness first and their
efficiency second if they want to improve their firm's RSOI value. Consequently, the
RSOI value goes from 46 to 54 and then to 61, values that can be read as medium.
The conclusion we can draw from this first simulation is that at the embryonic stage of the
life cycle, the relevant decision is to look for effectiveness instead of efficiency.
Simulation when the business life cycle is at the maturity stage
For input values of life cycle 50, efficiency 60 and effectiveness 60, the RSOI falls
to 32, i.e. lower than medium.
Figure 3. Crisp RSOI values when the life cycle value is 50 and efficiency and effectiveness
vary from 60 to 70.
In this case, managers should decide to improve efficiency and effectiveness simultaneously
if they want to improve their firm's RSOI value. If they do not, their firm's RSOI will
fall dramatically (e.g., when the values of efficiency and effectiveness are 50, the RSOI
falls to 8). Therefore, the conclusion we can draw here is that at the maturity stage of
the business life cycle, firms willing to maintain their competitive advantage must seek
efficiency rather than effectiveness.
VI. CONCLUSION
A general conclusion we can draw from this analysis is that, on the one hand, the earlier
the firm is in its business life cycle (the embryonic stage), the more important the role of
effectiveness becomes in setting up the firm's organization. On the other hand, the later the
firm is in the cycle (the decline stage), the more crucial the role of efficiency becomes in
adjusting the firm's organization to the competitive environment the firm is facing. We trust
that the Relevant Strategic Organization framework we are suggesting could be an interesting
and useful approach and tool to help managers align their firm's organizational structure with
their firm's competitive strategy. Further studies and applications are needed to determine
how consistent and relevant our framework is, and what it contributes not only to
organizational development theory but also to the business decision-making process.
REFERENCES
A. BOOKS
Andrews, Kenneth, R. The Concept of Corporate Strategy. New York: Richard D. Irwin, Inc.,
1980.
Cyert, Richard M. & James G. March. A Behavioral Theory of the Firm (2nd Ed.). Malden:
Blackwell Publishing, 1992.
Child, John. Organizational Structure, Environment, and Performance: The Role of Strategic
Choice, in Complex Organizations. Richard H. Hall (ed). Aldershot: Dartmouth
Publishing Company, 1972.
Donaldson, Lex. The Contingency Theory of Organizations. Thousand Oaks: Sage
Publications, 2001.
Miles, Raymond E. and Charles C. Snow. Organizational Strategy, Structure and Process.
New York: McGraw-Hill, 1978.
Stigler, G. (Ed.). Adam Smith: Selections From the Wealth of Nations. New York:
Appleton-Century-Crofts, 1957.
B. JOURNAL ARTICLES
Bojadziev, G. & Bojadziev, M. Fuzzy Logic for Business, Finance and Management. World
Scientific, 1997, 25-37.
Sharfman, Mark P. & James W. Dean Jr. Conceptualizing and Measuring the Organizational
Environment: A Multidimensional Approach. Journal of Management, 17, 1991,
681-700.
Zadeh, L. The role of fuzzy logic in the management of uncertainty in expert systems. Fuzzy
Sets and Systems, 11, 1983, 199-227.
TOTAL QUALITY MANAGEMENT ACCEPTANCE AND APPLICATIONS IN
MULTINATIONAL COMPANIES: AN EMPIRICAL EXAMINATION
ABSTRACT
I. INTRODUCTION
Without management involvement and commitment to TQM, a company can never
become successful at TQM. Deming (1972) argued that simply teaching people to use
statistical tools aimed at achieving quality improvement was insufficient. He stressed that
only management has the power to change a firm’s processes affecting the quality of its
products and services. Deming observed that "no permanent impact has ever been
accomplished in quality control without [the] understanding and nurture of top management."
TQM recognizes that the quality and cost of the products and services produced are
activity-related and that the product being manufactured consumes these activities at a given
rate. It recognizes that quality costs are not driven by volume or direct labor alone but by
activities such as product design, process engineering, storage, shipping, and other
related services (Componation & Farrington, 2000). Today, labor costs represent a
small percentage of production costs because of automation. However, the labor cost of
providing services to loyal customers is high. It soon became obvious to managers that if
their companies were allocating the costs improperly, they might be making strategic
decisions concerning activities related to product mix, pricing and marketing that were
inaccurate. Therefore, companies adopting TQM concern themselves with the continuous
improvement of the quality of the products and services they provide. The company will focus on the
efficiency and effectiveness of the entire process (Gitlow, Einspruch, Loredo, and
Percival, 1994).
Implementing TQM does not guarantee success and full realization of its benefits unless
everyone is involved in the process and a clear commitment from the top is granted. If the
implementation process is not properly carried out, and facilitation steps are not taken to
gain workforce acceptance and usage of TQM, then success becomes questionable (Coyle and
Alkhafaji, 2005).
II. METHODOLOGY
This study surveyed managers of about 100 companies located in the Gulf region (United
Arab Emirates and Oman). A questionnaire (see respondents and their industries, Table 1) was
designed to assess the experiences of those who have implemented TQM in their operations.
The research instrument consisted of both objective questions and subjective (open-ended)
questions. The questionnaire contained twenty objective questions of two types. One set of
objective questions attempted to address system issues and is summarized in Table 2.
The subjective questions were open-ended in nature and allowed for any suggestions
the respondent might have for others embarking on any TQM implementation efforts.
A preliminary analysis of the data is given in the section below. However, this paper
represents the first part of this research; the author intends to write the second part in the
next few months.
With regard to the most important reasons for implementing TQM (Table 3), about
45.5% of the respondents indicated that determining the quality of the product is crucial.
Another 30% indicated that competitiveness is behind the implementation of TQM. The
remaining 24.5% gave another reason as being most important.
About 40% of the respondents indicated that their TQM system and the
application of ISO 9000:2000 were explained to all employees. They indicated that
TQM has helped them in obtaining ISO certificates. About 64.5% of the respondents
hired outside trainers and consultants to educate their employees. In the open-ended questions
they indicated the role of the outsiders in the process. Those outside consultants were hired
to serve in different capacities, including the application of TQM statistical tools, assisting
management and employees in the change required, and providing other expertise. See Table
4.
In terms of the length of time for TQM implementation, 30% of respondents indicated
less than three years, 20% more than three years and less than five years, and 50% of
respondents indicated that the project was still in progress. The majority of respondents (56%)
indicated that TQM implementation cost was less than $100,000, while 24% of the sample
indicated the cost to be between $100,000 and $150,000, and 20% between $150,000 and $200,000. In
addition, 35.5% reported that TQM implementation was successful, and 21.5% indicated that
it was mildly successful. About 43% of the respondents indicated that it was a failure.
Three major benefits (Improve quality of product and services, improved management
information, and greater cost awareness) were stated as the most common benefits derived
from TQM. Improved strategic decision-making and eventual cost reduction were also given
as potential benefits. However, about 38% of those who responded indicated that there were no
obvious improvements or benefits. Respondents indicated that they encountered numerous
problems during the implementation of their TQM systems. Examples of these problems
include a problem with:
1. Gathering relevant data (27%)
2. Lack of cooperation from departments (19%)
3. Lack of employee awareness (16%)
4. Employees not well-informed about TQM implementation (24%)
5. Other problems (14%)
Forty managers, or about 31%, indicated that the process will enhance their
competitiveness in the global market. About 34% of the respondents indicated that it was too
soon to know if TQM implementation will result in reducing cost and increasing overall
profit. Only 38% indicated that TQM implementation resulted in an overall net benefit, while
about 37% reported no change. About 45% of the companies surveyed had more than 1,000
employees, while 34% had fewer than 500 employees, and 21% had fewer than 250 employees.
The questionnaire revealed that the respondents classified themselves into the following
categories (see Table 5).
Table 5. Respondents' positions within their companies.
Please indicate what level of management you consider yourself:
Number Percentage
Top-level management 15 11.3
Middle level management 42 31.6
Operating level management 29 21.8
Administrative department manager 16 12
Administrative department staff 24 18
Others 7 5.3
Total 133 100%
IV. CONCLUSION
An organization can be efficient when its people, processes, systems and structure are
effectively integrated. Based upon the findings, the following conclusions emerged.
1. Total quality management provides many advantages to companies that choose to
implement the concept.
2. A good number of managers indicated that TQM and ISO 9000:2000 are connected
and one will lead to the other.
3. The process of implementation is long term and requires outside help to assist
management and employees in the change required.
4. Still, about 40% of the companies adopting TQM are not successful. Further
research is needed to find out why this is so.
5. Overall, managers of various businesses differ regarding the extent to which TQM
implementation in their organizations will improve competitiveness and reduce costs.
6. A strategic approach to implementing TQM will improve its implementation.
REFERENCES
THE VALUE RELEVANCE OF HOSPITAL INTEGRATION STRATEGIES,
OWNERSHIP CONTROL CHARACTERISTICS AND DIVESTITURE DECISIONS
ABSTRACT
Recent research has been devoted to examining the types of organizational change
associated with hospital divestitures. The increased acceptance of divestiture as a strategy
may reflect recent patterns of consolidation in the health care field that require health systems
to cut back certain subsidiaries by removing assets that do not contribute to the core business
and organizational mission of the system. Using a sample of 362 system hospitals, an
examination of the effects of integration and ownership control on health system divestiture
decisions, and the interaction of these factors on hospital financial performance, was
conducted. Employing data from archival sources (American Hospital Association, Health
Care Financing Administration), discrete-time analysis with probit regression, a method
appropriate for analyzing longitudinal data with a dichotomous dependent variable but with
both dichotomous and continuous independent variables, was used to test three hypotheses.
Findings support the argument that hospital divestitures remove activities that generate
negative value, and that both integration activities and ownership control provide strong
incentives to improve operations following the divestiture.
I. INTRODUCTION
The introduction of the Medicare prospective payment system (PPS), and the resulting
increase in competition among health care providers during the early to mid-1990s,
compelled United States hospitals to engage in more affiliation and consolidation with other
healthcare organizations (Shortell 1999). Fundamentally, four options have existed for
hospital systems to improve the fit between the goals and actions of physicians, hospitals and
administrators (Rich 2000). Although they are presented here as distinct choices, they may
also be taken in combination in establishing a course to improve a hospital’s financial
performance and its contribution to the health care system. The choices are to: (1) improve
and refine management processes to reduce costs, maximize revenue, and selectively
consolidate hospitals and practices; (2) transform or close hospitals with significant economic
problems and then restructure the remaining hospitals within new entities which are
strategically aligned with the health system; (3) privatize or sell hospital assets and physician
contracts to a commercial physician practice management company (PPM) which then
establishes a strategic relationship with the parent hospital or health system; and (4) divest or
sell/transfer hospital assets to another hospital system.
Divestiture occurs when a business unit loses its value to the parent firm (Kaplan and
Weisbach 1992). A decrease in financial performance is a strong indicator of a decline in the
value of a business unit. In the hospital industry, factors other than divestiture may affect
financial performance. To identify such factors, this study draws from the literature on
contingency theory and interorganizational relations theory.
Impact of hospital ownership control on divestiture.
Contingency theory illustrates that an affiliated hospital's value to its health system is
determined by its ability to adapt to the uncertainty and instability of the environment. It is a
systems model based upon a framework of factors that have a generally important influence
on strategic choice and also have performance implications. Contingency theory emphasizes
the importance of ownership control in determining the fate of organizations (Ginsberg and
Venkatraman 1985). Hospitals possessing assets that are valuable and not easily accessible by
other hospital organizations achieve advantages over others in the system. A major reason
that hospitals join systems is to help secure needed resources and gain greater bargaining
power with purchasers and health plans. Through these actions, an individual hospital's
dependence on its environment is reduced, and thereby its prospects for survival and growth
increase (Lin and Wan 1999). Thus, the control of critical resources relative to nearby
hospitals may support a system's competitive position and thus reduce an affiliated hospital's
chance of divestiture. On the basis of these assumptions, it is hypothesized as follows:
Hypothesis 1: System-affiliated hospitals that possess more ownership control over assets
by their management are less likely to be divested by their parent health system.
Impact of hospital integration on divestiture.
Interorganizational theory suggests that firms integrate to compensate for an
incomplete market for resources, such as brand names, management expertise, or referrals. In
the case of hospital integration, both acquirers and targets may hold critical resources for
which markets are incomplete. Through integration, the acquirer might gain access to the
target’s resource of a close attachment to local patients and physicians; the target might gain
access to specialized technology, the quality reputation of the acquirer, and potentially
valuable contracts with managed care payers (Coddington, Fischer and Moore 2000).
Functional integration determines the long-term allocation of existing resources and the
development of new ones essential to assure the success of health systems (Oliver 1990). To
guarantee the efficient use of resources in meeting their own objectives and to add value,
health systems need to achieve functional integration (Shortell et al. 1996). Thus, the
following hypothesis is postulated:
Hypothesis 2: System-affiliated hospitals that are more integrated with their parent health
system are less likely to be divested by their parent health system.
Effects of hospital divestiture on financial performance.
The literature on divestiture in non-health care industries has highlighted the
importance of poor financial performance as a determinant of divestiture (Duhaime and Grant
1984). Health systems may consider divestiture of poor-performing hospitals as a way to
avoid further financial losses. A testable hypothesis can be stated as follows:
Hypothesis 3: Hospitals that are less likely to be divested by the system are more likely to
enhance a health system's financial performance.
III. METHODS
Data were obtained from the American Hospital Association (AHA) files and the Health Care
Financing Administration (HCFA) data files, which include the Cost Reports. The AHA Hospital
Guide, Part B and Annual Surveys of Hospitals files contain characteristics such as
ownership, services, and bed size. HCFA cost reports include hospital financial records and
case-mix data.
Measurement of variables.
The variables in this study were divided into four categories. The first category has
two exogenous constructs consisting of ownership control and integration strategies. The
second category is the endogenous construct, divestiture. The third variable, also an
endogenous construct, represents financial performance. The last group is a set of control
variables representing the common but significant hospital characteristics of size and
nonprofit ownership.
Ownership control was measured based on two independent variables: profit status and
system type. Profit status includes two dummy variables used to differentiate for-profit and
non-profit organizations. System type consists of two dummy variables used to distinguish
centralized and decentralized types of operations. Integration strategies were determined by
six independent variables representing three different dimensions of integration. Integration
based on service type was measured by the number of inpatient beds used, number of
outpatient visits made, and the number of physicians associated with the hospital. Integration
based on physician participation in the management of the hospital includes two dummy
variables used to differentiate open physician hospital organization and closed physician
hospital organization (Goes and Zahn 1995). Integration based on managed care contracting
was represented by the number of managed care contracts provided in the system. Hospital
Divestiture is the dependent variable defined as the transfer or sale of the assets of an
associated hospital from one system to another or as the termination of the hospital
relationship with the system whereby the hospital divested is converted to independent
(freestanding) status. In this study, a hospital was considered divested if the hospital name
was removed from the member list of one health system and appeared on the list of another
health system (AHA Hospital Guide). A hospital was also considered divested if the hospital
assumed freestanding status. Financial Performance was measured by cash flow from
operations, or the ratio of changes in working capital plus depreciation to total assets. This
measure is a more effective and timely indicator of profits earned from hospital cash-based
activities than financial measures based on profits alone (McCue 1991).
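As defined above, both the divestiture classification and the cash-flow measure are mechanical computations. The following sketch illustrates them; the list structures and field names are hypothetical, not the actual AHA or HCFA file formats:

```python
def was_divested(hospital, system_lists_t0, system_lists_t1):
    """Classify a hospital as divested if its name moves from one
    system's member list to another system's list, or if it appears
    on no system list at all (freestanding status)."""
    # Find which systems list the hospital in each period.
    before = {s for s, members in system_lists_t0.items() if hospital in members}
    after = {s for s, members in system_lists_t1.items() if hospital in members}
    if not before:
        return False  # was not system-affiliated to begin with
    transferred = bool(after) and not (before & after)
    freestanding = not after
    return transferred or freestanding

def cash_flow_performance(change_in_working_capital, depreciation, total_assets):
    """Cash flow from operations: ratio of changes in working capital
    plus depreciation to total assets."""
    return (change_in_working_capital + depreciation) / total_assets
```

For example, a hospital appearing on system A's member list in one year and on system B's list the next would be classified as divested, as would one that appears on no list at all.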
IV. RESULTS
Sixty-four system hospitals of the sample were divested during the study period.
There were significant differences between the for-profit and non-profit groups across
integration strategies and centralized ownership control. Non-profit hospitals were more
likely to utilize system integration strategies and centralized control, but for-profit hospitals
were more likely to divest, as indicated in Table 1. As non-profit hospitals clearly behaved
differently, the study analysis controls for the effects of non-profit ownership and hospital
size.
Table 1: Hospital Integration Strategies and Centralized Ownership Control
by Profit Status, 2003

Profit Status (N=362)                  Non-profit (N=179)   For-profit (N=183)
                                       Frequency     %      Frequency     %
Physician management integration           23        13%        13         7%
Managed care integration                  122        68%        87        48%
Centralized ownership control             136        76%        17         9%
Divestiture                                22        12%        42        23%
Hospital staff physician integration      105        59%        90        49%
Descriptive statistics and correlations for the study variables are presented in Table 2. None
of the correlations between independent variables exceeds .50, suggesting that
multicollinearity is of little concern.
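The .50 screening rule applied to the correlation matrix can be illustrated with a plain Pearson correlation; the variable names and data below are invented for illustration:

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def collinearity_flags(variables, threshold=0.50):
    """Return pairs of variable names whose |r| exceeds the threshold."""
    names = list(variables)
    flagged = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = pearson(variables[a], variables[b])
            if abs(r) > threshold:
                flagged.append((a, b, round(r, 2)))
    return flagged
```

Any pair returned by `collinearity_flags` would warrant closer inspection before entering both variables in the same regression.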
Table 2
Descriptive Statistics and Pearson Correlation Matrix of the Independent Variables, 2003

Variables                                 Mean     S.D.      2      3      4      5      6      7      8      9     10
Dependent Variable
1. Financial performance                7611.99  2764.64
Predictor Variables
2. Inpatient beds used/day               214.86   293.68     -
3. Outpatient visits/day                 144.15   195.51   0.45     -
4. Physician management integration       68.48    24.18   0.18   0.34     -
5. Managed care integration                9.93    12.38   0.20   0.01  -0.03     -
6. For-profit ownership                    0.23     0.26   0.01   0.07  -0.02  -0.01     -
7. Centralized ownership                   0.48     0.14  -0.10  -0.26  -0.10   0.03  -0.36     -
8. Divestiture                             0.17     0.33  -0.39  -0.28   0.19   0.14  -0.21   0.13     -
9. Hospital MDs on staff                  76.50    20.84  -0.12  -0.10  -0.07  -0.09   0.12  -0.02  -0.07     -
Control Variables
10. NFP ownership                          0.58     0.09   0.19   0.35   0.17  -0.03   0.15  -0.27  -0.10   0.07     -
11. Hospital size (beds available/day)   342.17     0.03  -0.07   0.26  -0.13   0.16  -0.18   0.07   0.16  -0.17  -0.43
Results indicate a high negative association between the hospital service integration
variables (inpatient and outpatient services) and divestiture, and a high positive association
between the hospital management integration variables (physician management and managed
care) and divestiture. Similarly, results indicate a high negative association between for-profit
ownership control and divestiture, and a high positive association between centralized
ownership control and divestiture. Table 3 presents the results of the probit regression model
used for testing the hypotheses. The results of the study show a significantly negative
relationship between divestiture and hospital financial performance.
Hypothesis 1 predicted that system-affiliated hospitals that possess more ownership control
over assets by their management were less likely to be divested by their parent health system.
This hypothesis was partially supported. For-profit health systems are less likely to divest
hospitals from the system but only when they maintain less centralized control over their
assets.
Hypothesis 2 predicted that system-affiliated hospitals that are more integrated with their
parent health system were less likely to be divested by their parent health system. This
hypothesis was partially supported. For-profit health systems are less likely to divest
hospitals that provide more inpatient services, managed care products, physician participation
in management and when they provide less complex (riskier) medical and surgical
treatments.
Hypothesis 3 predicted that hospitals that are less likely to be divested by the system are
more likely to enhance a health system's financial performance. This hypothesis was fully
supported.
Table 3 Results From Probit Regression Modeling: Analysis of the Effects of Divestiture
on Changes in Hospital Financial Performance (Cash Flow From Operations Per
Hospital Admission), 1997-2003
Hospital Financial Performance
                               Model 1           Model 2           Model 3
                               β       S.E.      β       S.E.      β       S.E.
Hospital Ownership Control
1. For-profit ownership       .118     0.22     .245     0.44*    .345     0.34**
2. Centralized ownership     -.157     0.13    -.142     0.18     .122     0.16*
9. Divestiture                                                    .059     0.51**
Control Variables
10. Nonprofit ownership      -.163     0.12    -.427     0.55*   -.264     0.38
11. Hospital size             .260     0.53     .355     0.77     .273     0.71
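The discrete-time approach named in the abstract implies expanding each hospital's record into one observation per year at risk of divestiture, with the event coded only in the divestiture year. This sketch uses hypothetical inputs (a mapping of hospital id to divestiture year, `None` if never divested), not the study's actual files:

```python
def person_period_records(hospitals, start_year=1997, end_year=2003):
    """Expand each hospital into one record per year at risk of
    divestiture; event=1 only in the divestiture year."""
    rows = []
    for hid, divest_year in hospitals.items():
        # A hospital is at risk up to its divestiture year, else to end_year.
        last = divest_year if divest_year is not None else end_year
        for year in range(start_year, last + 1):
            event = 1 if year == divest_year else 0
            rows.append({"hospital": hid, "year": year, "event": event})
    return rows
```

The resulting hospital-year records are what a discrete-time probit model would then be fit to, with the integration and ownership measures attached to each row.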
V. CONCLUSION
This study investigated the financial performance of system hospitals that divested
assets in order to produce gains. Hospitals that sell or transfer assets that cause negative
synergies should experience improved financial performance, but hospitals that divest assets
to raise capital or in response to economic declines may not see improved financial gains.
Results supported the notion that hospital divestitures improve the health system's operations,
perhaps by removing non-performing assets and improving services.
REFERENCES
Coddington, D., Fischer, E., and Moore, K. "Characteristics of Successful Health Care
Systems." Health Forum Journal 43(6) (2000): 40-46.
Conklin, M.S. "Thorough System Integration Results in Better Financial Performance."
Health Care Strategic Management 12(7) (1994): 16-22.
Duhaime, I.M. and J.H. Grant. "Factors Influencing Divestment Decision-Making: Evidence
from a Field Study." Strategic Management Journal 5(2) (1984): 301-18.
Ginsberg, A. and N. Venkatraman. "Contingency Perspectives of Organizational Strategy: A
Critical Review of the Empirical Research." Academy of Management Review 10(3)
(1985): 421-434.
MANAGING AND MEASURING INDUSTRY ANALYST RELATIONS
ABSTRACT
The first section of this paper examines case studies of agency-driven and in-house
managed AR programs from Europe and the U.S., benchmarking the most successful
strategies. From these and the direct testimony of analysts a checklist or best-practices metric
for analyst relations management is derived. The paper's second section extrapolates from
this model a template for measuring the results of analyst relations efforts. The template
includes tools and services recently available through a new breed of agencies and consultants
that specialize in evaluating analyst programs. It also, for the first time, broadens the tracking
metrics to include the wider range of publics now influenced by industry analysts.
I. INTRODUCTION
Over and above their obvious influence on customers, investors, and the media,
industry analysts provide business intelligence and strategic advice to manufacturers and
vendors. Such counsel, whether delivered for free in the give-and-take of pre-announcement
briefings, or for a contracted fee to marketing and product managers, often includes vital
positioning, timing and competitive insights. It may even include qualified customer leads.
Between product or service announcements, industry analysts can also prove a rich source of
industry knowledge and trending patterns that generate the material for client initiatives and
“soft news” PR campaigns. With many analyst firms, you can, for a supplementary fee, take
the relationship even further and enlist their senior people as event partners and podium-
sharers.
Brodeur for Nortel. Nortel had minimal visibility in the networking marketplace, and was ranked 16th
according to industry analysts covering their product category—well below chief competitors
Lucent and Cisco. Brodeur advised Nortel to make direct contact with report authors at key
firms well ahead of their deadlines, and to pre-brief and test messages with senior analysts
before announcements. These strategies resulted in better report coverage and a leap to 6th
place in their target audience’s collective opinion. (http://www.brodeur.com)
Weber Shandwick for Microsoft Pocket PC. At the third annual GSM World
Congress in 2001 in Cannes, Microsoft’s wireless division ranked fourth in share of media
and analyst voice, well behind powerhouses like Nokia. Weber took a two-pronged approach,
pre-briefing both analysts attending the 2002 show, and their more senior colleagues at home
in the U.K. The effort resulted in an improvement to second place in share of voice and a full
90% of the analyst coverage noting Microsoft’s importance and correctly positioning the
company’s strategy. Ben Wood, of Gartner Group, offered a typical encomium: “I arrived at
the 2002 event skeptical about Microsoft’s progress in the wireless space, and my view has
certainly changed.” (http://www.webershandwick.com)
Burson-Marsteller for Alcatel.
Alcatel had no analyst relations prior to 1998, and looked to Burson to boost its worldwide
awareness and brand equity among opinion leaders. The agency conducted a perception audit
among analysts and, from that, designed the format and content for Alcatel’s first annual
industry analyst conference. Burson also provided a target list of invitees for the event,
oversaw all preparations, and monitored 1-on-1s, breakfast sessions and exclusive site visits
for senior analysts. In addition, they launched a bi-annual analyst tour for Alcatel’s CTO and
a bi-weekly media watch for analyst opinions. Hundreds attended the conference and senior
Alcatel management responded enthusiastically. Based on the event, the audit findings and
the media watch quotes, Alcatel’s leadership has funded an annual analyst relations program
ever since. (http://www.bm.com)
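The share-of-voice measure cited in the Microsoft case is simply a vendor's fraction of total analyst and media mentions over a period; a sketch with invented mention counts:

```python
def share_of_voice(mentions):
    """Each vendor's share of total mentions, as percentages,
    ranked from highest to lowest."""
    total = sum(mentions.values())
    shares = {v: round(100 * n / total, 1) for v, n in mentions.items()}
    return sorted(shares.items(), key=lambda kv: -kv[1])
```

Tracking this ranking before and after a campaign is what lets an agency claim, for instance, a move from fourth to second place.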
Motorola. Motorola had no industry analyst relations program at the corporate or Internet-
division level before 1998. Financial analysts were treated like favored customers, while their
industry counterparts were neither pre-briefed, nor electronically informed, nor even invited
to networking product announcements. A newly hired communications director for the
Internet and Networking division replaced his existing agency with a more analyst-relations
savvy support arm, conducted an immediate analyst phone audit, trained spokespeople for
analyst 1-on-1s, toured the top firms on background, and instituted announcement pre-
briefings with all first-rank analyst houses. In addition he lobbied for a dedicated corporate-
level AR director. To drive home the need, he presented benchmarking evidence of strong
AR funding and resource allocation among Motorola’s chief competitors, and submitted to
top management a Kensington Group report ranking Motorola’s analyst relations program
27th of the 27 top companies in the IT and telecom industries. These efforts led to the
appointment of a Corporate Analyst Relations Director, annual corporate-level forums for IAs
in London and Chicago, a $200K AR program budget, and partially dedicated AR managers
in the more technical divisions of the company. By 2001, Motorola had moved up in
Kensington’s analyst relations rankings from 27th to 16th place (Mike Doheny interview,
12/20/04).
Oracle. This worldwide leader in software, having recently absorbed PeopleSoft for
an estimated $35 billion, boasts the best analyst relations program in the high-tech field.
Nothing since Digital Equipment Corporation’s late 80’s model (which enjoyed a then
industry record analyst budget of $31 million and a 13-member corporate team) comes close.
Today Oracle fields a corporate AR team of 15 to 20, reporting directly to the CCO and
dotted line to the CMO. They in turn are supported by an even larger number of product-
specific AR managers who report directly into the corporate group. Moreover, each corporate
manager owns a specific relationship with a top firm and the group as a whole owns
consulting contracts with all top tier analyst firms. Understanding the circle of analyst
influence, Corporate AR also interfaces directly with the heads of Sales, PR, IR and partner
programs. Year on year, Kensington Group’s tally of industry analyst opinion has given
Oracle top or near top ranking among all software companies.
What do analysts themselves say they want from vendors? At the strategic level,
former Ovum analyst Duncan Chapple reminds relationship managers to maintain a dialogue
with analysts in the market where the sale is being negotiated, since it is here that expert word
of mouth often makes or breaks a multi-million dollar deal (Chapple,
http://www.brodeur.com/insights). By the same token, Laurie McCabe, a senior analyst at
Summit Strategies, underscores the reach of IA influence beyond customers and press: be
aware, she advises, that they also affect how “prospects, partners, financial analysts and
competitors perceive a vendor and its standing in the market” (McCabe, quoted in
http://www.kensingtongroup.com). At the tactical level, IAs are equally forthcoming in their
recommendations. Analysts’ in-depth, fact-checking standards, says Kathy Quirk of Nucleus
Research, mean they want from a communication department “not just a link to a press
release, but to background materials, presentations, white papers, customer statements, and an
overview of what’s going on.” (http://www.prnews.blogspot.com) William Hopkins, CEO of
Knowledge Capital Group and a former analyst, warns against a glut of electronic updates,
however. Analysts want greater depth than reporters, but “they don’t want an endless stream
of information pushed out to them. They want to know what they need to know.”
(http://www.prnews.blogspot.com) Laurie McCabe seconds the point bluntly: “Don’t spam
industry analysts.” (http://www.kensingtongroup.com) She follows with a short list of
practical do’s-and-don’ts that includes calling analysts early (ahead of the press), sending
briefing materials in advance of meetings, and exploiting the analyst mindshare guaranteed in
your consulting contracts before you announce products. (http://www.kensingtongroup.com)
What should you ask for? A large multinational IT or telecom company will need a
dedicated, director-level corporate manager (analysts expect as much), and at least one full-
time analyst relations manager per division. Program funding, exclusive of salaries, should be
comparable to your media relations budget. In an SME or a large firm that wants to pilot
before it commits, it’s reasonable to propose a joint AR and PR charter for one manager per
division, program dollars that allow for regular outreach, and an annual industry analyst
conference event, as well as the tactical support of a PR agency with analyst relations
experience and expertise.
• Do your goals include driving sales, bolstering corporate valuation, increasing the
quality of market intelligence, and supporting PR and marketing campaigns?
• Do you have a dedicated corporate director to plan and manage the worldwide
program, advise top management, and offer a consistent, company-wide perspective to
senior analysts?
• Is each AR manager conversant with the detailed features and benefits of the products
and services he/she covers for the analysts? (IAs expect all contacts to be technically
savvy.)
• Have you or your agency trained designated marketing and technical spokespeople for
the more rigorous encounters they’ll have with analysts? (Press training is not
enough.)
• Do you brief all key firms under NDA one or two weeks ahead of press
announcements? Do you also invite them to the press event?
• Do you solicit endorsement quotes from top analysts to include in major releases?
• Do you practice inbound as well as outbound analyst relations to gain market
intelligence and test strategies and messages?
• Do you employ an analyst-savvy agency to support your efforts? Does the agency
have existing consulting contracts with analyst firms they can exploit for your benefit?
Do they have regional subs or affiliates with similar strengths?
• Between major announcements and IA conferences, do you update individual analysts
by phone and in-person? (They want to be the first to know new developments in
their space.)
• Are you also prepared to discuss competitive, pricing, business, product, and channel
strategies if asked?
• Do your CEO, CMO and CTO meet regularly with industry analysts?
• Have you enlisted analysts as event speakers, or commissioned them to write white
papers on your behalf?
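One way to operationalize a checklist like the one above is as a weighted self-assessment score; the item keys and weights in this sketch are purely illustrative, not a published metric:

```python
# Each item mirrors one checklist question; weights are illustrative.
AR_CHECKLIST = [
    ("dedicated_corporate_director", 2),
    ("technically_savvy_ar_managers", 2),
    ("spokespeople_trained_for_analysts", 1),
    ("nda_prebriefings_before_announcements", 2),
    ("endorsement_quotes_in_releases", 1),
    ("inbound_intelligence_gathering", 1),
    ("analyst_savvy_agency", 1),
    ("regular_between_announcement_updates", 1),
    ("csuite_meets_analysts", 2),
    ("analysts_as_speakers_or_authors", 1),
]

def ar_program_score(answers):
    """Score a program as the weighted share of 'yes' answers (0-100)."""
    total = sum(w for _, w in AR_CHECKLIST)
    earned = sum(w for key, w in AR_CHECKLIST if answers.get(key))
    return round(100 * earned / total)
```

Repeating the score annually gives a rough internal trend line to set beside external rankings such as Kensington's.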
In most cases, accurate evaluation of an AR program requires the right mix of internal
research and outsourced measurement expertise. I’ll conclude with a look at the best of those
external and internal resources and suggest an integrated approach to evaluating your analyst
relations success. Most full-service PR agencies can provide you with program assessments
via custom audits or secondary data from reports and press coverage. A variety of Internet
sites also offers generalized how-to-evaluate advice. More important, in the last decade or so,
a small but significant industry has grown up that focuses exclusively on analyzing the
analysts. They offer help not only to overwhelmed communication departments, but to CIOs
and investors as well. Some—Outsell, SageCircle, and the recently announced Tekrati—boil
down information on leading firms, individual analysts, upcoming events, and AR
management methods (http://www.tekrati.com). Others, chiefly Kensington Group and
Knowledge Capital Group, provide detailed evaluation tools. Kensington Group is the largest
and oldest, and the only one with a primary focus on measuring analyst response to the AR
programs of major corporations. Today they publish several reports annually: if you’re
among the top 25 companies in the hardware, software, networking, services or security
business, you’re reviewed and rated in Kensington. For a fee, you can buy the book and
discover what 80 to 100 key analysts think of your program and those of your competitors.
The categories include product positioning, strategies, access to staff, central contact point,
briefings, and forums. You’re also ranked for responsiveness, credibility, candor and
relationship “comfort level.” In addition, each company’s program is plotted against its
competitors numerically for quality of information content, information channels, attitude and
program concepts. There are even quantitative and qualitative sections that compare your
European and North American efforts (http://www.kensingtongroup.com). In an interview
with Norma LaRosa, the CEO of Kensington, I learned they have recently expanded their
industry-standard services to include packaged analysis for SMEs and customized reports on
product niches and the chief competitors in each.
REFERENCES
Chapple, Duncan. Why Do Analyst Relations in Every Region Where You Want
to Sell?, April 2002. (www.brodeur.com/insights).
Doheny, Mike. (Director of Global Industry Analyst Relations at Motorola.)
Marketing is Under More Pressure to Deliver Results. Fall 2004 Presentation
to Motorola senior management.
Doheny, Mike. Telephone interview with author, December 20, 2004.
Gartner, IT Service Strategy, Winter 1996 (www.gartner.com).
Forrester, CIO/MIS Receptiveness to the VoIP Trend, Spring 2000.
(www.forrester.com).
Larkin, Douglas. “Good Relations with Industry Analysts a Credible Benefit,”
The Washington Business Journal, May 2, 2003.
LaRosa, Norma. (CEO of Kensington Group) Telephone interview with author,
December 17, 2004.
McCabe, Laurie. Seven Ingredients for a Winning Analyst Relations Program.
2000 (www.kensingtongroup.com).
Paul, Laurie Gibbons. How to Analyse the Analysts, August 9, 2001.
(www.cio.com).
PR News (article not bylined). Analyst Newsletters Present Risky Investment
(www.prnews.blogspot.com)
Reynolds, Joshua. (Vice President and Director of US Analyst Relations)
Boston: Blanc and Otus Presentation, 2004.
Schatt, Stan. Securing the Campus Network, Forrester Research, Inc., September 2004.
Tekrati press release. (www.tekrati.com).
CHAPTER 27
A COMPARISON OF STUDENT PERCEPTIONS OF TEAMWORK IN THE
ACADEMIC AND WORKPLACE ENVIRONMENTS
ABSTRACT
This study compares student experiences of teamwork in both the workplace and
academic environments for critical factors that might contribute to the effectiveness of the
teamwork. The data for this study was collected over a two-week period at an educational
institution in Maryland. The data was analyzed using paired t-tests. The results suggest that
subjects tend to appreciate similarities and differences in team dynamics between the work
and academic environments. Implications of the results are discussed.
I. INTRODUCTION
A team may be defined as a group of individuals with a common objective or recognizable
goal to be attained, interacting with and influencing each other
(Katzenbach and Smith, 1993). Membership of teams may be voluntary or compulsory, and
they can make horrible decisions that alienate members (Hackman, 1990) but when they
function effectively, they exhibit a unitary behavior characterized by morale, cohesion,
confluence and synergy (Ingram and Desombre, 1999). Hackman (1990) defines team
effectiveness as the degree to which a group’s output meets requirements in terms of
quantity, quality and timeliness. Effective teams are generally characterized by regular
feedback to the team members, extensive collaboration and communication (Ancona and
Caldwell, 1992). Bateman et al. (2002) suggest that effective teams are characterized by
extensive team synergy, performance objectives, skilled members, efficient use of resources,
innovation and the provision of quality. Ettington et al. (2002), drawing from previous studies,
also suggest interdependence, group composition, group development, motivational job
design, organizational support and effective leadership as being critical to effective teams in
general. The above brief review identifies a number of critical factors that enable team
effectiveness and therefore provides a basis for comparing student perceptions of team
effectiveness in both the academic and workplace environments.
III. METHODOLOGY
A total of 103 questionnaires out of the 200 distributed were returned, reflecting a
response rate of 51.5%. However, of these, 13 had not had any employment in the last six
months and were thus excluded from the study. A further 18 were incomplete and so
excluded. Therefore 72 questionnaires were usable. There were 18 males (25%) and 54
females (75%). Two-thirds reported holding full time jobs. Positions held were mainly non-
managerial and almost all the respondents worked in the service industry. Over 50% of the
respondents had been working for their current employer for at least a year.
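The two exclusion steps described above (no employment in the last six months, then incomplete returns) amount to sequential filters; a sketch with hypothetical record fields:

```python
def usable_responses(returned):
    """Apply the study's two exclusion rules in order and report counts."""
    employed = [r for r in returned if r.get("employed_last_6_months")]
    complete = [r for r in employed if r.get("complete")]
    return {
        "returned": len(returned),
        "excluded_unemployed": len(returned) - len(employed),
        "excluded_incomplete": len(employed) - len(complete),
        "usable": len(complete),
    }
```

Applied to the counts reported here (103 returned, 13 without recent employment, 18 incomplete), the filters leave the 72 usable questionnaires.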
The data was analyzed using paired t-tests (see Table 1). Overall, the results suggest that
subjects tend to appreciate similarities in team dynamics between the work and academic
environments. Of the 24 mean comparisons, 11 were not statistically significant (p≥0.10), 6
were marginally significant (p<0.10), and 7 were statistically significant (p<0.05). The
statistically significant mean disparities occurred in the following instances: respondents were
more likely to recognize the potential of other members of their team at work than within an
academic environment (t=3.12, df = 72, p = 0.003); respondents were usually informed
when they did something that made another member’s job easier/harder at work than within
an academic environment (t=4.36, df = 72, p = 0.040); respondents were more likely to let
another team member know when he/she did something that made their job easier/harder at
work than within an academic environment (t=2.60, df = 72, p = 0.011); respondents were
more likely to experience effective leadership within the team at work than within an
academic environment (t=2.16, df = 72, p = 0.034); respondents were more likely to identify
clear work-related activity targets established for the team at work than within an academic
environment (t=2.08, df = 72, p = 0.041); respondents were more likely to feel that the team's
standards are monitored on a regular basis at work than within an academic environment
(t=2.93, df = 72, p = 0.005); and respondents were more likely to expect regular feedback on
the team's performance at work than within an academic environment (t=2.26, df = 72, p =
0.041). It is also interesting to note that in only 4 of the 24 comparisons was the mean
score for teamwork in the academic environment directionally greater than the mean
score for the workplace environment. Overall, there were no statistically significant differences between
males and females in their evaluation of the team dynamic in the work or academic
environment. However, female respondents were more willing to help finish the assigned
work of other members of the team in an academic setting compared to their male
counterparts (xw = 4.04, xm = 3.50, F=4.72, df=65, p=0.034), even though there was no
statistically significant difference between male and female respondents on this issue in the work
environment (xw=4.00, xm=3.69, F=1.74, df=65, p=0.191). Also, there were no statistically
significant differences between respondents holding part-time jobs and those holding full-time jobs.
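The paired-comparison procedure described above can be sketched in a few lines. The ratings below are invented for illustration only (they are not the study's data); the t statistic is computed from the paired differences in the standard way:

```python
# Illustrative sketch of a paired t-test, the procedure used above.
# The ratings are invented; they are NOT the study's data.
from math import sqrt
from statistics import mean, stdev

# Each respondent rates the same team-dynamics item twice:
# once for the workplace, once for the academic environment.
work     = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]
academic = [3, 4, 4, 3, 4, 3, 4, 4, 3, 3]

diffs = [w - a for w, a in zip(work, academic)]
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / sqrt(n))  # t = d-bar / (s_d / sqrt(n))
df = n - 1

print(f"t = {t_stat:.2f}, df = {df}")  # → t = 3.67, df = 9
```

A significant positive t here would indicate higher mean ratings for the workplace than for the academic environment, the direction found in most of the comparisons above.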
Applying content analysis to the narrative texts generated on group work in the
academic environment revealed a number of additional insights. Overall, respondents
understood the importance and purpose of working in teams, but they disliked engaging in an
exercise that depended on individual members' willingness to participate actively, and the
uncertainties that entails. The primary underlying reason for continued participation was the
need to attain a good grade. Though not always a horrible experience, group work was
considered a time-consuming and difficult task, necessarily shared to minimize the time and
effort required of members and also to ensure that everybody contributes to the task at hand.

Table 1: Paired T-Test Comparisons

Respondent expectations of instructor
input to the effectiveness of the group work process focused on a desire for active instructor
participation showing concern for student needs, especially regarding decisions on
submission deadlines, recognition of the reality of students with full-time jobs, and
enforcement of member participation in the group task. In-group operational challenges
identified included the lack of credible leadership within the group, minimal communication
between group members, and the difficulty of refining individual contributions into a coherent whole.
Other issues included the tendency of individuals to procrastinate or to seek maximum
benefit with as little input as possible.
VI. CONCLUSIONS
The data suggest that students perceive the dynamics of teamwork in the academic
environment as generally similar to those of the work environment. This confirms the
appropriateness of the intent underlying the use of group work within the academic
environment: a means of exposing students to the nature of teamwork in organizations.
However, respondents felt that the academic group project dynamic was less encouraging,
tended to suffer from less effective team leadership, and was less likely to recognize
individual effort and be monitored by superiors (i.e. faculty) compared to workplace teams.
Considering that teambuilding takes time and that most university group projects last no
longer than a semester, it is perhaps not surprising that there is less collaboration and
encouragement among students within the same group. The implication here is that
instructors need to find ways of extending student awareness of the interdependency in group
projects beyond the current short-term focus. Also, rather than wait until the end of the
semester to determine the success or failure of the group project, instructors can learn from
teamwork in the work environment to be more involved in the team dynamic. They should be
well informed about student attitudes to group work, support the development and
strengthening of leadership within the teams, encourage members to support one another,
and provide regular performance checks and feedback throughout the duration of the group
work. This suggests a greater level of intervention than is currently practiced. This finding is
consistent with efforts being made in the development of teaching aids, e.g., the Team
Learning Assistant at Boston University, which enhances instructor ability to monitor and
intervene in the team learning experience. Moreover, managers in the workplace
environment regularly utilize rewards and sanctions in managing individual and group effort.
As a result, individual members of the team are more willing to take up leadership roles to
increase the likelihood that their individual effort within the team will be recognized and
rewarded accordingly. This is unlike the nature of group projects in academic environments
where leadership qualities and individual success seem to be of less concern to members
especially where the team is on schedule with the tasks assigned or when the grading system
is focused on group effort as opposed to individual input within the group. There is a strong
case for creating team skills learning assignments where students are evaluated on both the
process of working as a member of a team as well as the collective final output of the group.
This dual form of evaluation is likely to increase co-operation among team members,
minimizing some of the challenges of group project assignments.
REFERENCES
Ancona, D. G. and Caldwell, D. F. (1992) Bridging the boundary: External activity and
performance in teams, Administrative Science Quarterly, Vol. 37, No. 4, pp. 634-665
Baker, D. F. and Campbell, C. M. (2005) When is there strength in numbers?: A study of
undergraduate task groups, College Teaching, Vol. 53, No. 1, pp. 14-18
Bateman, B., Wilson, F. C. and Bingham, D. (2002) Team effectiveness – Development of an
audit questionnaire, Journal of Management Development, Vol. 21, No. 3, pp. 212-
226
Ettington, D. R. and Camp, R. R. (2002) Facilitating transfer of skills between group projects
and work teams, Journal of Management Education, Vol. 26, No. 4, pp. 356-379
Hackman, J. R. (Ed.), (1990) Groups that work (and those that don’t), San Francisco, CA:
Jossey-Bass
Ingram, H. and Desombre, T. (1999) Teamwork: Comparing academic and practitioners’
perceptions, Team Performance Management, Vol. 5, No. 1, pp. 16-21
Katzenbach, J. R. and Smith, D. K. (1993) The discipline of teams, Harvard Business
Review, Vol. 71, pp. 111-120
King, P. E. and Behnke, R. R. (2005) Problems associated with evaluating student
performance in groups, College Teaching, Vol. 53, No. 2, pp. 57-61
Lawler, E. E. III (1998) Strategies for high performance organizations, San Francisco:
Jossey-Bass
Lizzio, A. and Wilson, K. (2005) Self-managed learning groups in higher education:
Students’ perceptions of process and outcomes, British Journal of Educational
Psychology, Vol. 75, No. 3, pp. 373-390
Tang, K. C. C. (1993) Spontaneous collaborative learning: A new dimension in student
learning experience?, Higher Education Research and Development, Vol. 12, pp. 115-
128
Teare, R., Ingram, H., Scheuing, E. and Armistead, C. (1997) Organizational teamworking
frameworks: Evidence from UK and USA-based firms, International Journal of
Service Industry Management, Vol. 8, No. 3, pp. 250-256
Ulloa, B. C. R. and Adams, S. G. (2004) Attitude toward teamwork and effective teaming,
Team Performance Management, Vol. 10, No. 7/8, pp. 145-151
AN EXAMINATION OF THE RELATIONSHIP AMONG SELF-MONITORING,
PROACTIVITY, AND STRATEGIC INTENTIONS FOR HANDLING CONFLICT
ABSTRACT
This study examines the relationship between the five modes of handling
organizational conflict (as measured by the Thomas-Kilmann Conflict Mode Instrument) and
the two personality factors of self-monitoring and proactivity. Participants in this study were
a mix of 157 undergraduate and graduate students from a large public university located in
the mid-Atlantic region of the United States. Results show that self-monitoring was not
significantly correlated with any of the five conflict handling strategies. Proactivity did show
a significant positive association with the competing and collaborating styles, and a
significant negative correlation with avoiding and accommodating styles. Implications for
future research are discussed.
I. INTRODUCTION
Thomas (1983) proposed dual dimensions that help in understanding the differences
among the five orientations. One dimension measures an individual’s desire to satisfy his or
her own needs, also referred to as the degree of assertiveness. The second dimension
indicates a person’s desire to satisfy the needs of the other party, also referred to as the degree
of cooperativeness.
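These two dimensions place the five conflict-handling modes on a simple grid. A minimal sketch of the conventional placement follows; the high/low/moderate labels are the standard textbook characterization of the modes, not scores produced by the instrument itself:

```python
# The five Thomas-Kilmann conflict-handling modes located on the two
# dimensions described above: (assertiveness, cooperativeness).
# "high"/"low"/"moderate" are the conventional textbook placements,
# not values reported by the TKI.
MODES = {
    "competing":     ("high", "low"),
    "collaborating": ("high", "high"),
    "compromising":  ("moderate", "moderate"),
    "avoiding":      ("low", "low"),
    "accommodating": ("low", "high"),
}

for mode, (assertiveness, cooperativeness) in MODES.items():
    print(f"{mode:14s} assertiveness={assertiveness}, cooperativeness={cooperativeness}")
```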
III. HYPOTHESES
High self-monitors tend to monitor and control the images they present to better fit
with their perception of the social climate, whereas low self-monitors tend to be true to
themselves, exhibiting more consistent behavior across various social contexts (Day,
Schleicher, Unkless, & Hiller, 2002). High self-monitors, with their chameleon-like response
to social context, can vary their behavioral response depending on the situation and the
potential outcomes. Accordingly, research has found that high self-monitors are more likely
to emerge in leadership roles (Day, Schleicher, Unkless, & Hiller, 2002), display
organizational citizenship behaviors (Blakely, Andrews, & Fuller, 2003), and be promoted
within the corporate hierarchy (Kilduff & Day, 1994). Other researchers have suggested that
high self-monitors might not represent the most appropriate leaders, given that high self-
monitors might not display the full portfolio of needed leadership skills (Bedeian & Day,
2004) or might put their own career success and self-preservation above the interests of the
organization (Callanan, 2003).
By its very nature, self-monitoring indicates the degree to which an individual is able
to adapt behavioral responses to meet situational demands. High self-monitors would likely
vary their response to a conflict episode with the situation, while low self-monitors would
likely respond in line with their primary style, given their proclivity to “reflect
their own inner attitudes, emotions, and dispositions” (Premeaux & Bedeian, 2003, p. 1542).
In either case, there would not be a clear-cut linkage between self-monitoring and any one
conflict handling strategy. Given this expectation, the first hypothesis to be tested is:
H1: Self-monitoring shows no overall association with any of the conflict handling styles.
Proactivity has received considerable research attention as a personality characteristic
that can influence individual behaviors. People with a proactive personality display
aggressive, action-oriented behaviors that allow them to be agents of change who can
transform an organization (Callanan, 2003). Given its desirability within various
organizational contexts (Seibert, Kraimer, & Crant, 2001), it would be of interest to know the
linkage, if any, between proactivity and the various strategic options for handling conflict.
Given the nature of the competing and collaborating styles, where concern for
oneself is manifest in the degree of assertiveness used in conflict situations, it could be
expected that proactivity would have a significant positive association with both styles.
Given this expectation, the second hypothesis to be tested is:
H2: Proactivity shows a significant positive association with the competing and
collaborating conflict handling styles, and a significant negative association with the avoiding
and accommodating styles.
IV. METHODOLOGY
Research Participants
Subjects for this study (N=157) were a mix of undergraduate and graduate business students
from a large state university located in the mid-Atlantic region of the United States.
Participation in the research was voluntary and was part of normal coursework and
instruction in various management courses. Students were not given incentives to participate
and all responses were anonymous. Further, subjects had not been exposed to coursework in
conflict management prior to participation in the research. Demographic information is
included in Table I.
Assessment Materials
A survey was used to collect data for the study. Participants completed the survey on
their own and at their own pace. Strategic intentions for handling conflict were measured
using the Thomas-Kilmann Conflict Mode Instrument (the TKI). The TKI is based on Blake and
Mouton’s (1964) conceptual model and reports scores for each of the five modes or styles.
The TKI is viewed as easy to administer and is relatively uncontaminated by social
desirability effects (Womack, 1988). The TKI has been used extensively both in research and
in training, and it is the most widely used instrument for determining conflict resolution style.
Table I shows the overall pattern for dominant conflict handling style as given by results
from the TKI.
V. DATA ANALYSIS
Correlations were calculated based upon the total self-monitoring and proactivity
scores for each participant along with scores in each of the five conflict handling styles.
Table II shows the mean scores, standard deviations, and Pearson correlations for all of the
main variables included in this research.
Table III summarizes the information on the five styles and includes the average proactivity
and self-monitoring scores for each of the dominant conflict handling modes.
Proactivity 41.61 (4.47) 39.31 (8.90) 39.76 (5.53) 37.29 (6.13) 37.28 (5.06)
Self-Monitoring 25.61 (3.12) 26.81 (3.62) 25.88 (3.60) 26.39 (3.73) 25.52 (3.18)
N 36 16 35 41 29
Note: Standard deviations are in parentheses.
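The correlational analysis described above can be illustrated with a short sketch. The Pearson coefficient is computed directly from its definition; the scores shown are invented for illustration and are not the study's data:

```python
# Sketch of the Pearson correlation used to relate personality scores to
# conflict-handling style scores. The data below are invented; they are
# NOT the study's data.
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

proactivity = [42, 38, 45, 36, 40, 44, 37, 41]  # total proactivity scores
competing   = [8, 5, 9, 4, 6, 9, 5, 7]          # competing-mode scores

print(f"r = {pearson_r(proactivity, competing):.2f}")  # → r = 0.99
```

A positive r of this kind, if significant, is the pattern Hypothesis 2 predicts for the competing and collaborating styles; near-zero coefficients across all five styles would match Hypothesis 1 for self-monitoring.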
VI. RESULTS
In line with Hypothesis 1, Table II shows that self-monitoring was not significantly
correlated with any of the five conflict handling styles. In support of Hypothesis 2, Table II
shows the proactivity variable with a significant positive correlation with the competing and
collaborating styles, and a significant negative correlation with the avoiding and
accommodating modes.
VII. CONCLUSION
Future research should continue to examine the extent to which personality influences
not only the strategic dispositions for responding to conflict, but also whether personality
influences or moderates the choice of conflict response when distinct contextual factors are
apparent in the conflict episode. For example, one possible stream of research could test
whether individuals who are relatively higher in self-monitoring, given their supposed ability
to read social cues and adjust their behaviors, are better able to choose an appropriate
response to a conflict episode regardless of their primary conflict handling strategy. In
addition, future research should assess whether the present findings would be different with
an older, more experienced sample. The participants in this study were relatively young and
with limited work experience, which might have an influence on the overall results.
REFERENCES
Antonioni, D. “Relationship between the Big Five personality factors and conflict
management styles.” The International Journal of Conflict Management., 9, 1998,
336-355.
Amason, A. C. “Distinguishing the effects of functional and dysfunctional conflict on
strategic decision making: Resolving a paradox for top management teams.”
Academy of Management Journal., 39, 1996, 123-148.
Bateman, T. S., and Crant J. M. “The proactive component of organizational behavior.”
Journal of Organizational Behavior., 14, 1993, 103-118.
Bedeian, A. G., and Day, D. V. “Can chameleons lead?” Leadership Quarterly., 15, 2004,
687-718.
Blake, R. R., and Mouton, J. S. The Managerial Grid. Gulf, Houston, TX., 1964
Blakely, G. L., Andrews, M. C., and Fuller, J. “Are chameleons good citizens? A
longitudinal study of the relationship between self-monitoring and organizational
citizenship behavior.” Journal of Business and Psychology., 18, 2003, 131-144.
Callanan, G. A. “What price career success?” Career Development International., 8, 2003,
126-133.
Day, D. V., Schleicher, D. J., Unckless, A. L., and Hiller, N. J. “Self-monitoring personality
at work: A meta-analytic investigation of construct validity.” Journal of Applied
Psychology., 87, 2002, 390-401.
Jameson, J. K. “Toward a comprehensive model for the assessment and management of
intraorganizational conflict: Developing the framework.” International Journal of
Conflict Management., 10, 1999, 268-294.
Kilduff, M., and Day, D. V. “Do chameleons get ahead? The effects of self-monitoring on
managerial careers.” Academy of Management Journal., 37, 1994, 1047-1061.
Kilmann, R. H., and Thomas, K. W. “Interpersonal conflict-handling behavior as reflections
of Jungian personality dimensions.” Psychological Reports., 37, 1975, 971-980.
Moberg, P. J. “Linking conflict strategy to the five-factor model: Theoretical and empirical
foundations.” International Journal of Conflict Management., 12, 2001, 47-68.
Premeaux, S. F., and Bedeian, A. G. “Breaking the silence: The moderating effects of self-
monitoring in speaking up in the workplace.” Journal of Management Studies., 40,
2003, 1537-1562.
Rahim, M. A. “A measure of styles of handling interpersonal conflict.” Academy of
Management Journal., 26, 1983, 368-376.
Rahim, M. A. “Toward a theory of managing organizational conflict.” International Journal
of Conflict Management., 13, 2002, 206-235.
Rahim, M. A., Magner, N. R., and Shapiro, D. L. “Do justice perceptions influence styles of
handling conflict with supervisors?: What justice perceptions precisely?” The
International Journal of Conflict Management., 11, 2000, 9-31.
Seibert, S. E., Crant, J. M., and Kraimer, M. L. “Proactive personality and career success.”
Journal of Applied Psychology., 84, 1999, 416-427.
Seibert S. E., Kraimer, M. L., and Crant, J. M. “What do proactive people do? A longitudinal
model linking proactive personality and career success.” Personnel Psychology., 54,
2001, 845–874.
Snyder, M. “Self-monitoring and expressive behavior.” Journal of Personality and Social
Psychology., 30, 1974, 526-537.
CHAPTER 28
STUDENT PAPERS
MARTHA STEWART: FROM LEONA HELMSLEY TO FOLK HEROINE
ABSTRACT
Martha Stewart, the queen of gracious living, is known as an American success story.
But in 2002, she was confronted with the biggest challenge of her career, an investigation of
her personal ImClone stock trading by the Justice Department and the Securities and Exchange
Commission. Martha maintained her innocence throughout, but was brought to trial early in
2004. The court dismissed the original accusation of insider trading from which other charges
stemmed, but a jury did find Martha guilty of misleading federal investigators and obstructing
an investigation. Although she appealed her conviction, Martha served a five-month prison
sentence. The company she founded survived the scandal and continues to thrive thanks to a
well-orchestrated public relations strategy.
I. INTRODUCTION
To many Americans, Martha Stewart is the epitome of gracious living. Many people
assume that she grew up in the type of home pictured in her books and magazine. The fact is
that Martha was born in Jersey City, New Jersey. Her parents, Martha and Edward
Kostyra, a schoolteacher and a pharmaceuticals salesman, were the heads of a close-knit
Polish-American family. From the age of three, she grew up in Nutley, New Jersey with four
brothers and sisters. Martha’s dad taught her gardening when she was only three; her mother
taught her cooking, baking, and sewing. Martha attended Barnard College in New York City,
working as a model to help pay expenses. At the end of her sophomore year, she married
Andrew Stewart, a law student. It was not until 1967 that Martha began a successful second
career as a stockbroker. When recession hit Wall Street in 1973, Martha left the brokerage.
She and her husband moved to Westport, Connecticut, where they restored the 1805
farmhouse seen in her television programs. (Cohen, 2002).
Company History
In 1976, Martha Stewart started a catering business with a friend from college, and
then she went out on her own. In ten short years, this basement-run business became a
million-dollar enterprise. The year 1982 saw the publication of the first of many of her now-signature lavishly
illustrated books, Entertaining, co-written with Elizabeth Hawes. The book was an instant
success, and Martha Stewart, fast becoming a one-woman industry, was soon producing
video tapes, dinner music CDs, television specials and dozens of books on hors d'oeuvres,
pies, weddings, Christmas, gardening and restoring old houses. (NCOE, 2003).
These ventures led to the magazine Martha Stewart Living. (Cohen, 2002). Her enterprises have grown into a large
conglomerate, Martha Stewart Living Omnimedia, Inc. (MSLO), with branches in publishing,
television, merchandising, and Internet/direct commerce, providing products in home,
cooking and entertaining, gardening, crafts, holidays, housekeeping, weddings, and child
care. (MSLO Overview, 2003).
Although not a warm public personality, Martha Stewart has shown
patience and good humor in the face of the inevitable criticism and satire common with
public figures in the mass media. But beginning in 2002, she was confronted with the biggest
challenge of her career, the ImClone scandal, an investigation of her personal stock trading
by the Justice Department and the Securities and Exchange Commission. Martha maintained her
innocence throughout, but she was brought to trial in the first months of 2004. The court
dismissed the original accusation of insider trading from which the other charges stemmed,
but in 2004, a jury found Martha guilty of misleading federal investigators, and obstructing
an investigation. Although she appealed her conviction, she served a five-month prison
sentence. The company she founded continues to thrive, and after her release, she resumed
her business career. Whatever crime she may or may not have committed, one cannot deny
the influence that Martha Stewart has had on how Americans eat, entertain, and decorate
their homes and gardens.
All of her publics, consumers, media, investors, and employees alike, were left to conjecture,
assume, and assign guilt.
Martha’s Internet campaign was not enough to keep her out of jail and ultimately, she
served five months in a minimum-security West Virginia prison, beginning in October 2004.
However, while Martha was resting in prison, her public relations team was not. Reports of
Martha losing a decorating contest in prison raised eyebrows. Known for regaling her
audiences with countless holiday decorating tips, Martha was unable to lead her team to
victory in a prison decorating contest. (USA Today, 2004). This type of story reads like
good public relations strategy. Her heretofore cool public persona has also been judiciously
cultivated into a much warmer one. The following quote from Martha is a good demonstration
of the result of the cultivation. “The experience of the last five months in Alderson, West
Virginia, has been life altering and life affirming. Someday, I hope to have the chance to talk
more about all that has happened, the extraordinary people I have met here, and all that I have
learned. I can tell you now that I feel very fortunate to have had a family who nurtured me,
the advantage of an excellent education, and the opportunity to pursue the American dream.
You can be sure I will never forget the friends I met here, all that they have done to help me
during these five months, their children, and the stories they have told me. Right now, as you
can imagine, I am thrilled to be returning to my more familiar life. My heart is filled with joy
at the prospect of the warm embraces of my family, friends, and colleagues. Certainly, there
is no place like home.” (Stewart, 2005). Not only is this a clear demonstration of the ‘new’
Martha, but she also alludes to the future where she will utilize her experience in prison, pays
homage to her family and to America, providing us a well-crafted message indeed.
IV. SOCIAL, CULTURAL, POLITICAL AND ECONOMIC PORTENTS
During the public awareness of the trial, MSLO suffered from the negative press,
putting the financial health of the company at risk. Less than one year after the story broke
publicly, MSLO reported its first-ever quarterly losses. (Ritchie, 2003). Clearly, investors
and consumers were losing faith in Martha. It is a tribute to the strength of the public
relations crisis campaign that the recovery of Martha’s company as well as the repair of her
somewhat tarnished image has been so successful. After losing 75 percent of its value during
the insider-trading scandal, the MSLO stock is now hovering around an all-time high.
Clearly, the initial reaction to the breaking scandal was a public relations nightmare.
Underestimating the power of a public figure can be fatal. After the initial misstep and
Martha’s clever employment of a public relations firm, the successful strategy employed
during this time demonstrates that the public relations firm thoroughly researched her
company and worked closely with her executive board, her legal team, and Martha to apply
the existing market research to this crisis. The website was effective in putting Martha’s case
before the public, for all her audiences, including consumers, investors, the general public
and media. Although the website did receive emails, it was not formatted for posting
comments of any kind. Utilizing a two-way symmetric model would have enabled the public
relations firm to measure the posts for audience information levels, attitudes, and public
image of Martha.
As the campaign continues to run, it is necessary for the public relations strategy to
keep measuring and monitoring how well the objectives are being met. At this time, what
matters is not whether the public thinks that Martha was innocent or guilty, but that faith in
her and her company is restored, emotionally and fiscally. An overview of media clips
should demonstrate a positive trend regarding her public image. The fact that her stock is on
the rise and remains strong indicates that investor confidence has been restored.
V. EVALUATION
Martha has demonstrated a greater respect for the media now than she did prior to her
scandal. Her portrayal of a warmer, more media-friendly persona to the press will go a long
way toward continuing to smooth her way in the media spotlight. As she has done during the
scandal, it is likely that she will continue to court strong relationships with her media.
Even though Martha has the reputation of a diva, she must inspire loyalty among her
staff. Not once during the scandal and trial did a media exposé occur because of a leak
among her staff. Each staff member was obviously very aware of the possible implications of
leaking information and chose not to disclose information. This loyalty communicated itself
well to the public, helping send a strong message of the staff’s belief in Martha’s innocence.
Martha chose to drag out her case and take it to court. The MSLO stock languished at
$10 per share (about half of its pre-scandal price). When Martha was convicted and sentenced
to prison, the stock price began improving. Investors breathed a sigh of relief to see
resolution to the whole issue. Because Martha is brand-identified, it was easy for the
investors to lose faith in her company and indeed, her company did suffer stock losses. But
the mark of a successful public relations campaign is ultimately ensuring the fiscal livelihood
of the company and clearly, as the stock has risen and holds steady and her company
continues to demonstrate a healthy fiscal viability, the campaign did its job successfully.
The restoration of the public’s faith in Martha is evidenced by the growing strength of
her company, the apparent humbleness of Martha during her prison stay, her subsequent
release, and her simple gratitude at being in her own home. Carefully orchestrated footage
showed Martha visiting a working-class home to watch the family cook their favorite dish
and then bringing them on her show to cook the dish in Martha’s own kitchen. This is a
flashback to Martha’s working-class roots, reminding America that she is a manifestation of
the American dream. Martha’s diva-like persona, at least for the time being, has been
shelved in favor of a warmer, caring, more personable Martha, continuing to ensure that
Martha as a positive, brand-name celebrity will thrive for years to come.
Using the two-way asymmetric model of public relations, the Martha Stewart
campaign worked to persuade all her publics of her innocence and to assure them of her
company’s vital fiscal health. As a public persona, the use of the model continued to assure
her publics that Martha as the brand-name image of the company would continue to be strong
and positive. After her initial couple of missteps, clearly a proactive approach to the scandal
was implemented. The media were not only courted, but given timely updates via her
website. When she was released from prison, a convenient flatbed truck was set up for the
photographers. Not only is she cooperating with the media, but she is also making the job
easy by being selectively accessible to them. She continues to promote her theme of
disclosure, trust, quality and ethics, emphasizing her innocence, in spite of her being
convicted and at the same time, reestablishes her company as a solid, trustworthy investment.
If her publics, investors or media have any question that Martha is back stronger than
ever, a quick reread of Martha’s quote shows a woman humbled, but not broken. Now that
Martha is out of jail, her public relations campaign continues full speed ahead. Her release
from prison, complete with a photo op in the snow outside her New England home, got wall-
to-wall coverage by all major media. She has a new television show and her company is
thriving. Everyone makes mistakes. The key is to admit them, ask for forgiveness, and
accept the consequences. Had Martha admitted her mistakes and asked for leniency, her story
might have had a different ending. As it is, her actions could well serve as a public relations
primer for what not to do in a crisis situation. Her categorical denials and defiance only
inflamed the media and investigators. Martha never apologized for what happened. Although
an apology is difficult for someone considered the epitome of domestic perfection, Martha
might have averted a lot of public scrutiny and mitigated her legal consequences with a
simple apology. We, as human beings, have a great capacity for forgiveness, but that is
balanced by our equal contempt for deception.
REFERENCES
A. BOOKS
Dezenhall, E. (1999). Nail 'em!: confronting high profile attacks on celebrities & businesses.
1st ed. Amherst, NY: Prometheus Books.
B. INTERNET ARTICLES
Cohen, D. (2002) Biography on Martha Stewart. Retrieved October 31, 2005 from
http://lala.essortment.com/biographyonmar_rino.html
Company Overview. (2003). Martha Stewart Living Omnimedia Company Overview.
Retrieved October 31, 2005 from http://www.corporate_ir.net/ireye/ir_site.html.
CourtTV. (2003) Martha Stewart indicted on nine counts stemming from insider-trading
scandal. Retrieved November 1, 2005 from
http://www.courttv.com/people/2003/0604/marthastewart_ap.html.
Dugan, K. (2004, July 16). The Martha Stewart crisis. Message posted to Global PR Blog
Week 1.0, archived at
http://www.globalprblogweek.com/archives/the_martha_stewart_c.php
National Commission on Entrepreneurship. (2003). Stories of Entrepreneurs. Retrieved
November 1, 2005 from http://www.noce.org/toolkit/stories_stewart.html.
Ritchie, A. (2003). Save Martha timeline. Retrieved October 31, 2005 from
http://www.savemartha.com/timeline.html
Report: Stewart loses decorating contest in prison. (2005). USA Today.
Retrieved November 1, 2005 from http://www.usatoday.com/money/2004-12-31-
stewart-loses_x.html.
Report: Stewart convicted on all charges. (March 5, 2004). CNNMoney.com. Retrieved
November 30, 2005 from
http://money.cnn.com/2004/03/05/news/companies/martha_verdict.
Stewart, M. (2005). News from Martha. Retrieved November 1, 2005 from
http://www.marthastewart.com/page.jhtml?type=learn-cat&id=cat20171.
SEC charges Martha Stewart, broker Peter Bacanovic with illegal insider trading. (2003).
Retrieved November 30, 2005 from http://www.sec.gov/news/press/2003-69.html.
CASE STUDY OF TOLL ROAD PROPOSAL FOR LOOP 1604
ABSTRACT
Named after former Bexar County Judge Charles W. Anderson, Loop 1604 was
originally built in the 1960s as a Farm-to-Market and State Loop road and is known today as
the “Death Loop” around the city of San Antonio. Because it was originally designed as a
two-lane rural state highway, its expansion to a four-lane freeway left a rather narrow
median. Speeding, the lack of barriers, and the high volume of traffic on such a narrow
stretch of freeway are a recipe for disaster in the event of an accident. The Texas Department of
Transportation (TxDOT) has acknowledged the problems along Loop 1604 and has
proposed the installation of a toll road as a long-term solution. This study proposes a public
relations plan to ensure passage of the toll road proposal.
I. INTRODUCTION
In 1917, the Texas Highway Department (THD) was established by the Texas
Legislature to administer federal funds for highway construction and maintenance. By the
mid-1970s, the Legislature merged the Texas Mass Transportation Commission with the
THD to form the State Department of Highways and Public Transportation (SDHPT).
Ultimately, the Texas Department of Transportation (TxDOT) was formed by combining the
SDHPT, the Department of Aviation and the Texas Motor Vehicle Commission in 1991
(www.dot.state.tx.us/insdtdot/geninfo.htm?pg=history, 2005). Through its mission to
provide safe, effective and efficient movement of people and goods, the Texas Department of
Transportation is able to ensure the social, political and economic needs of Texas citizens
(www.dot.state.tx.us/insdtdot/geninfo.htm, 2005). Socially, TxDOT is committed to
providing comfortable, safe, durable, cost-effective, environmentally sensitive and
aesthetically appealing transportation systems that work together. It also ensures a desirable
workplace for its employees, which creates a diverse team of all types of people and
professions. Politically, it promotes a higher quality of life through partnerships with the
citizens of Texas and all branches of government by being receptive, responsible and
cooperative. In addition to its social and political commitment, it uses efficient and
cost-effective work methods that encourage innovation and creativity, thereby maintaining its
economic responsibility to the state of Texas.
Since 1990, the northern arc of Loop 1604 has experienced substantial growth. New
subdivisions and apartments have increased the demand for schools, which in turn has
attracted businesses to the area. All of these factors have increased the volume of traffic.
Average Annual Daily Traffic (AADT) counts recorded growth well above 200% over the
entire route of Loop 1604 from 1990 to 2003. Of the three sections, the northern arc near
Bandera Road has seen the most growth, at 500%. The counts also show that the section
west of Highway 281 carries an average of 100,000 vehicles per day
(www.texhwyman.com/l1604.htm, 2005). As of 2005, accidents along the loop have
received substantial media attention, which may correlate with the 250% increase in fatal
accidents from 2003 to 2004 (www.sanantonio.gov/sapd/TrafStats.htm, 2005). The Texas
Department of Transportation recently responded to citizens’ concerns about their safety on
the loop by collaborating with the San Antonio Police Department to set up speed traps from
June 5 to July 6 while temporary concrete barriers were placed along the medians to help
prevent further fatalities.
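The growth figures cited above reduce to a simple percent-change calculation on the traffic counts. The sketch below (Python, using illustrative counts rather than actual TxDOT data) shows the arithmetic:

```python
def percent_growth(old_count, new_count):
    """Percent change from an earlier AADT count to a later one."""
    return (new_count - old_count) / old_count * 100

# Illustrative AADT counts (hypothetical, NOT actual TxDOT figures):
# a section growing from 15,000 to 90,000 vehicles per day
print(f"{percent_growth(15000, 90000):.0f}%")  # prints 500%
```

A section whose daily count grows sixfold over the period would thus show the 500% growth reported for the northern arc near Bandera Road.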
II. OBJECTIVES
To ensure passage of the toll road proposal, we have identified informational,
attitudinal and behavioral objectives for our key publics. The key publics are commuters
on Loop 1604, San Antonio residents, business establishments along Loop 1604, local
Chambers of Commerce, San Antonio Police Department (SAPD), City of San Antonio
elected officials and local media outlets.
Informational Objectives
• Educate 50% of Loop 1604 commuters, San Antonio residents, and business
establishments along the loop about the issues regarding the safety of the freeway
including traffic, the dangers of speeding, medians, barriers and toll roads within one year.
• Educate 100% of local Chambers of Commerce, SAPD, City of San Antonio Elected
Officials, and local media about the issues regarding the safety of the freeway
including traffic, the dangers of speeding, medians, barriers and toll roads within one year.
• Create awareness of 100% (of the 50% educated) regarding the benefits of the toll
road among Loop 1604 commuters, San Antonio residents, and business
establishments along the loop within one year.
• Create awareness of 100% regarding the benefits of the toll road among local
Chambers of Commerce, SAPD, City of San Antonio Elected Officials, and the local
media within one year.
• Inform 100% (of the 50% educated) of Loop 1604 commuters, San Antonio residents,
and business establishments along the loop of the cost involved with accepting the
proposed toll road (state funding, taxpayers’ dollars, and cost to commuters once the
toll road is in operation) within one year.
• Inform 100% of local Chambers of Commerce, SAPD, City of San Antonio Elected
Officials, and the local media of the cost involved with accepting the proposed toll
road (state funding, taxpayers’ dollars, and cost to commuters once the toll road is in
operation) within one year.
• Maximize exposure of the benefits of the toll road by 30% through local media
support within one year.
Attitudinal Objectives
• Convince 25% of Loop 1604 commuters to practice defensive driving techniques
within one year.
• Convince 50% of the local media outlets about the importance of covering safety
issues along Loop 1604 and the developments of the proposed toll road within one
year.
• Create favorable attitudes about the proposed toll road among 30% of Loop 1604
commuters, San Antonio residents, business establishments along the loop, local
Chambers of Commerce, SAPD, City of San Antonio Elected Officials, and the local
media within one year.
Behavioral Objectives
• Persuade 15% of Loop 1604 commuters to drive defensively within one year.
• Increase local media coverage about the importance of the safety issues along Loop
1604 and the developments of the proposed toll road by 25% within one year.
• Encourage at least 20% of San Antonio residents to get out and vote in the toll road
election.
• Encourage at least 50% of those who turn out to vote in favor of the proposed toll road.
• Have an attendance of at least 200 citizens at each of the three community forums.
III. PROGRAMMING
TxDOT should follow the two-way asymmetric public relations model to address the
issues regarding safety along Loop 1604. The model will help promote change in attitudes
and behaviors through honest feedback. The main theme for the campaign would be “Keeping
San Antonians Safe in Every Direction.” The messages accompanying the theme would be
less traffic, fewer accidents, fewer fatalities, faster access across town, and an overall safer
commute.
To kick off the campaign, TxDOT will conduct a news conference at the Drury Inn &
Suites on the access road along Loop 1604 between Highway 281 and Stone Oak Parkway.
The conference will provide local media and the community-at-large with initial and
complete access to details about the dangers of Loop 1604 and the proposed solution,
including the safety of the freeway, traffic, the dangers of speeding, medians and barriers. It
will also highlight the toll road proposal, including the benefits and costs involved (state
funding, taxpayers’ dollars, and cost to commuters once the toll road is in operation).
Attendees will also be able to view a 3-D
model of the proposed toll road. Three months after the campaign launch, TxDOT will have
community forums at three locations along Loop 1604 which will take place on the same day
and at the same time. The locations will include: Live Oak Civic Center, Alzafar Shrine
Temple and the University of Texas at San Antonio Convocation Center (1604 campus). Like
the news conference, the forums will provide complete access to details about the dangers of
Loop 1604 and the proposed solution, covering the safety of the freeway, traffic, the dangers
of speeding, medians and barriers, as well as the toll road proposal’s benefits and costs (state
funding, taxpayers’ dollars, and cost to commuters once the toll road is in operation). In
addition, the forums will give citizens an opportunity to ask questions, provide feedback and
raise any additional concerns, and attendees will again be able to view a 3-D model of the
proposed toll road.
An early voting party will be held at three designated voting sites (to be determined)
along Loop 1604 between Bandera Road and FM 78. Another early voting party will take
place at a central voting site downtown San Antonio. Light snacks and refreshments will be
served throughout the day at each of the events, courtesy of H-E-B. On the first day of voting,
Krispy Kreme will serve free doughnuts, coffee and orange juice at selected voting sites
throughout the city. Uncontrolled media for this proposal include media kits, press releases,
feature stories, interviews, news conference and photo opportunities. Controlled media for
this proposal include informational brochures, informational packets, information video,
flyers, billboards, public service announcements, community forums, website and
PowerPoint presentations. Source credibility for the campaign will come from Hope
Andrade, the Texas Commissioner of Transportation, and from Red McCombs, a
well-respected local businessman who will serve as the official spokesperson for the campaign. Both will
participate in the community forums, news conference and kick-off parties.
Salient information will include facts about the dangers of Loop 1604, and benefits
and costs involved with the proposed toll road. It will also highlight TxDOT’s genuine
commitment to ensure the safety of the travelers on its roads. Verbal and nonverbal cues will
be in a serious tone using key words such as safety, benefits, solution, life-saving, and
priceless solution. Two-way communication will take place at the news conference,
community forums and the voting sites. Opinion leaders will include the president of each
local Chamber of Commerce, City of San Antonio Police Chief, Mayor and Council
Members, Texas Commissioner of Transportation, Hope Andrade, and Red McCombs.
IV. EVALUATION
Impact Objectives
• Obtain a phone list of San Antonio households and conduct a phone survey to assess
whether residents received the brochures, are aware of the benefits and costs involved
with the toll road proposal, and what their attitudes are toward defensive driving and the
proposal.
• Ensure that a campaign representative conducts a face-to-face meeting with the
president of each local Chamber of Commerce, City of San Antonio Police Chief,
Mayor, Council Members, and local media representatives.
• Assess if the president of each local Chamber of Commerce, City of San Antonio Police
Chief, Mayor, Council Members, and local media representatives were made aware of
the benefits and costs involved with the proposal of the toll road by having each
representative fill out a Likert-type scale at the end of the face-to-face meeting.
• Obtain traffic accident and fatality statistics on Loop 1604 for the year before, during
and after the campaign to determine if there was a correlation between the message
about the importance of defensive driving techniques and the statistics.
• Obtain the number of San Antonio residents eligible to vote at the time of the election.
After the votes have been tabulated, find out the actual number of voters to determine if
the voter turnout objective was met.
• Obtain the number of San Antonio residents who voted in favor of the proposed toll
road.
• Count attendees at each of the community forums.
Output Objectives
• Media exposure will be determined by using media monitoring and clipping techniques
of the campaign coverage.
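The turnout and approval objectives above are evaluated as simple ratio checks against the tabulated election results. A minimal sketch (Python, with hypothetical counts chosen only for illustration, not real San Antonio figures):

```python
def objective_met(actual_ratio, target_ratio):
    """An impact objective is met when the observed ratio reaches its target."""
    return actual_ratio >= target_ratio

# Hypothetical election figures for illustration only
eligible_voters = 800000
votes_cast = 180000
votes_in_favor = 95000

turnout = votes_cast / eligible_voters   # behavioral objective: at least 20% turnout
approval = votes_in_favor / votes_cast   # objective: at least 50% of voters in favor

print(objective_met(turnout, 0.20), objective_met(approval, 0.50))  # prints True True
```

With these illustrative numbers, turnout is 22.5% and approval is about 52.8%, so both objectives would be judged met.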
V. CONCLUSION
Through the recommended plan of action, we hope that the community-at-large will not
only be aware of the dangers along Loop 1604 but will also be receptive and open-minded
toward the toll road proposal, which we feel is the first step to preventing unnecessary
accidents and fatalities. This solution will alleviate citizens’ concerns regarding their safety.
REFERENCES
San Antonio Police Department, Traffic Fatality Statistics. (2005). Retrieved July 31, 2005,
from
www.sanantonio.gov/sapd/TrafStats.htm
TxDOT History. (2005). Retrieved August 4, 2005, from
www.dot.state.tx.us/insdtdot/geninfo.htm?pg=history
TxDOT’s Mission & Vision. (2005). Retrieved August 4, 2005, from
www.dot.state.tx.us/insdtdot/geninfo.htm
San Antonio Area Freeway System, State Loop 1604. (2005). Retrieved July 31, 2005, from
www.texhwyman.com/l1604.htm
LESSONS OF OPTIMUM LEADERSHIP FROM SMALL-CITY MAYORS
ABSTRACT
The study of leadership principles and skills in undergraduate and graduate schools is
generally insufficient to prepare those with political career aspirations. The author
interviewed three Southern California mayors, examined their leadership styles, and linked
their political success with organizational effectiveness achieved through the use of specific
leadership models.
Despite their disparate pathways to office and the sizes of their cities, the interviewees
shared many similar leadership qualities. Each mayor worked hard to gain the trust and
respect of his or her electorate, was an excellent communicator, built strong relationships, and
strived for consensus. Their differences were mainly manifested in how they mixed
transformational and transactional leadership styles.
I. INTRODUCTION
Acclaimed physicist Albert Einstein once stated, “Politics is more difficult than
physics” (as cited in Reardon, 2005, page 2). While this statement may be true, political
leaders have been the strongest and most vital visionaries and revolutionaries throughout
history. From the nation’s representatives in Washington to city council members in the small
towns of America’s heartland, political leaders are the voice of the people; they are the
catalysts of change for a better society, who take great strides to ensure the progression of
their citizens’ rights and liberties. The purpose of this article is to take an in-depth look at
three political leaders in an attempt to gain insight into their styles and philosophies. In this
article, the results of interviews with three city mayors will be presented, along with an
analysis of their approaches to leadership.
In analyzing the leadership styles of three separate mayors, an attempt was made to
select subjects who were diverse—both in personalities and the demographics of the cities
they govern.
On March 16, 2005, the first subject to be interviewed was Dr. Brenda Ross, Mayor of
Laguna Woods, CA. What separates Laguna Woods from most cities is that the average age
of its citizens is 78. A new city, Laguna Woods received its approval for incorporation in
1998 and local voters ratified its proposal in 1999. Mayor Ross, who is 89 years of age, was
reelected in 2004, and her third term expires in 2008. In addition to her mayoral
responsibilities, Ross acts as commissioner on the California Commission on Aging and
serves on multiple other commissions, committees, and boards of directors.
On March 22, 2005, Mayor Randall Bressette of Laguna Hills, CA was interviewed at
the Laguna Hills City Hall. Reporting a 2003 population of 32,875, the City of Laguna Hills
is an affluent community with a low crime rate. A 23-year veteran of the Navy Reserve,
Mayor Bressette was first elected to the city council concurrent with the city’s incorporation
in 1991 and has served in a leadership capacity within the city ever since. In addition to his
mayoral duties, Bressette acts as the alternate representative to the El Toro Reuse Planning
Authority, a group that works to oppose the creation of an airport on the grounds of the
former El Toro Marine Base.
On March 24, 2005, Mayor Trish Kelley of Mission Viejo, CA was interviewed in her
office at city hall. A volunteer in the community since 1977, Mayor Kelley serves as the
city’s representative for the Orange County Fire Authority Board of Directors, and as the
alternate representative on the board of directors for the San Joaquin Hills Transportation
Corridor Agency and the Orange County Council of Governments General Assembly.
The leadership style of Mayor Brenda Ross is as multifaceted and diverse as her
captivating background. “I usually have a leadership position, not because I seek it, but
because I usually get asked to do it, and I do that kind of thing well,” she stated. “I think
being a team leader doesn’t mean that you sit back and let everybody do it, but I listen to
everybody first. I try first to understand and then be understood” (B. Ross, personal
communication, March 16, 2005).
When speaking of the relationship between a leader and a follower, Ross said that it’s
“One of mutual respect, one of equality. In other words, you’re not in a better position
because you’re a leader. You’re in a worse position because it’s your responsibility to get
everybody working together.” Her method of making decisions proved valuable when Ross
said “If somebody comes at me today and they want me to vote on something, I’d like to
sleep on it. It gives me time to think it through. I don’t like to jump to conclusions” (B. Ross,
personal communication, March 16, 2005).
When speaking of what she values in a leader and a follower, Ross stated “listening”
and added, “That’s the single most important (attribute) in either a follower or a leader,
because if you listen to the other person, you hear what he has to say and you can mull that
over in your mind.” Additionally, she said, “A leader needs to have some vision, has to have
some kind of strategic plan, whatever you want to call it. I don’t think you just get to be a
leader and run out in front of a group.” In explaining how she makes decisions, Ross stated “I
like to be sure that I’ve thought of all sides of it if I can. If I can’t, I like to call somebody that
I know will be in opposition, and hear what those worries are per se” (B. Ross, personal
communication, March 16, 2005).
Randall Bressette seems to vary in his perspective. When asked if he felt individuals
are born or trained to be leaders, he replied “I think there are people, who by nature of their
parents and their extended family and their surroundings, become leaders simply because
they are forced into it.” Then, he added “Most people though I think obtain their leadership
skills in their teenage years, as their parents are a very strong influence, mine were, and they
look at other people who they come to respect” (R. Bressette, personal communication,
March 22, 2005).
necessarily the same thing.” When talking about obstacles, he said, “I am rather
straightforward with people, and I don’t believe very much in political correctness as we
define it today. I believe in being polite, but I also believe in being very straightforward.”
When talking about what the relationship between a leader and a follower should be, he
stated “Teamwork. The leader and the follower can be interchangeable. It’s the person who
comes with the plan, who comes with the energy, who will generally turn out to be the
leader” (R. Bressette, personal communication, March 22, 2005).
When asked how he made decisions, Bressette replied “As a member of the city
council, my ultimate question is ‘What would my neighbors want me to do?’” Also,
“Whether it’s from an electorate or from a board of directors that’s simply by consensus, the
guy in charge has got to make a decision that he thinks is right for the group” (R. Bressette,
personal communication, March 22, 2005).
Speaking with Trish Kelley proved to be insightful because she comes from a much
different background. When asked how she would deal with a resister, she replied “I try to
just be honest no matter what.” She added “It really doesn’t happen too often. If I have
something that needs to be done and it can’t be done, then they just respectfully tell me that
this won’t work and here are the reasons why” (P. Kelley, personal communication, March
24, 2005).
When talking about the relationship between a leader and a follower, she answered,
“I’ve always believed that the best leader is someone who can complement and bring out the
strengths in the people that work for you or work with you.” Additionally, “I’ve always tried
to have a very positive outlook.” When talking about her values, she stated “I value integrity,
just to know that a person is honest, and trustworthy, and will be able to make decisions for
the right reasons.” She added “Also, an open mind, which that’s been one of the biggest
surprises being what I would call a normal person and working with a bunch of politicians”
(P. Kelley, personal communication, March 24, 2005).
Based on these thorough and in-depth interviews, the following leadership analysis
describes the styles and philosophies of each mayor and examines the factors shared by all.
First and foremost, Brenda Ross and Trish Kelley are both prime examples of
emergent leaders (Northouse, 2004), although they took different paths to being perceived as
the most influential members of their respective groups. As Ross stated, she is actively sought
out for leadership positions because of the respect and admiration she has gained from
subordinates throughout her career. Kelley, meanwhile, built a long-term reputation for
reliability through her consistent volunteer work, continually taking on the volunteer
leadership positions that others avoided. It is her emergence and empowerment by others that
resulted in her leadership position.
The three leaders differ in their situational approaches to leadership. Ross, through her
demeanor (and possibly her maturity as a leader), epitomizes high directive/high supportive
behavior (coaching). She has the capability to direct a team to institute a new city, yet, at the
same time, she puts considerable effort into her supportive behaviors.
She offers a nurturing, caring, and compassionate attitude. Meanwhile, Bressette exemplifies
a high directive/low supportive (directing) method. He takes pride in his undeviating
approach to leading his staff. Additionally, consistent with the path-goal approach, he gives
clear instructions on what is expected and by when. Kelley displays a low directive/high
supportive (supporting) method and seems to place particular emphasis on a supportive
environment (Northouse, 2004).
A facet that all three mayors share is their adherence to transformational leadership.
All three tend to raise their followers’ level of moral maturity, convert followers into leaders,
broaden and enlarge the interests of their followers, motivate and entice others to go beyond
their personal interest for the betterment of the organization, and address each follower’s
sense of self-worth (Northouse, 2004). All three realize that this is an effective
route and one that leads to success. Additionally, they convey positive emotions about the
future that include faith, trust, and confidence, which leads to better performance on the job
(Seligman, 2002).
Another aspect that all three mayors share is their focus on creating and maintaining
learning organizations. Senge (1990) defines a learning organization as a workplace
characterized by sharing, growth, and adapting. His notion is that a learning organization
doesn’t make the same mistake twice and that the barriers that get in the way are the absence
of time and a reactive versus proactive ideology within the culture of the organization.
Robbins (2005) adds that a learning organization is characterized by a shared vision that
everyone agrees on, people openly communicating without fear of criticism or punishment,
and people sublimating their personal self-interest and fragmented departmental interests to
work together to achieve the organization’s shared vision.
Through an analysis of the three leaders who were interviewed, it is evident that their
approaches differ to varying degrees. Yet, all are successful leaders in that they embrace
challenge with meaning and passion, while seizing the initiative with enthusiasm (Kouzes &
Posner, 2002). The next section will discuss the primary lessons that were gained through this
project.
Several lessons of leadership emerged over the course of this project. This section
focuses on the primary lessons that arose during the interviews.
First, all three leaders seem to have mastered what Cashman (1999) refers to as
purpose mastery: “focusing on how to make a difference” (page 65) by building upon one’s
strengths. The three mayors know their strengths, recognize their weaknesses, and work to
build on those strengths. An important lesson thus relates to Buckingham and Clifton (2001),
who in Now, Discover Your Strengths call for building a “strength-based organization” (page
40); seeing the mayors capitalize on their strengths in action bears this out. The primary
lesson is that triumphing over a preoccupation with one’s fears and weaknesses is a key
aspect of effective leadership.
The second lesson is the remarkable results that each mayor obtains by harnessing
his or her passion, vigor, and drive. For all three, that passion is helping others, serving their
communities, and making their cities better places to live, though each demonstrates it
differently. Kelley serves the community as a volunteer with the PTA, the Girl Scouts, her
church, and various other community activities. Her approach has driven her to excel
as the leader of her city. Ross has had a distinguished career of serving her country, teaching
at Boston University, serving on state commissions, and leading the incorporation of her city.
Today, she stands as an emergent leader of one of America’s most unique cities. Bressette
indulges his passion for instigating innovation and thinking outside of the box—in his
military service, business life, and political career. He strives to take on new endeavors and
lead others to achieve the best results. Through this passion, they have been able to live a life
in which they can achieve the intersection of personal greatness, leadership greatness, and
organizational greatness (Covey, 2004).
Finally, the third lesson from this project is that many of Covey’s notions in The
Seven Habits of Highly Effective People hold true. Each leader reinforces the model’s
characteristics: the belief that work must be meaningful, a focus on beliefs and behaviors that
provide peace and spirituality, and a focus on discipline.
Perhaps most important, each leader realizes the importance of relationships that provide
energy (Covey, 1989). Each is compelled to lead from the heart and accomplish dynamic
objectives.
VIII. CONCLUSION
After conducting interviews with Brenda Ross, Randall Bressette, and Trish Kelley, it
can be concluded that each leader demonstrates great care for his or her city. After gaining
insight into their styles and philosophies, it’s encouraging to see that there are genuine
leaders that exist who truly place the best interest of their followers ahead of their own
personal interests. Hopefully, such a trend will extend to other organizations throughout the
years ahead.
REFERENCES
Buckingham, Marcus, & Donald O. Clifton. Now, Discover Your Strengths. New York, NY:
Free Press, 2001.
Cashman, Kevin. Leadership From the Inside Out: Becoming a Leader for Life. Minneapolis,
MN: Executive Excellence Publishing, 1999.
Covey, Stephen R. The Seven Habits of Highly Effective People. New York, NY: Free Press,
1989.
Covey, Stephen R. The 8th Habit. New York, NY: Free Press, 2004.
Kouzes, James M., & Barry Z. Posner. The Leadership Challenge, 3rd ed. San Francisco, CA:
Jossey-Bass, 2002.
Northouse, Peter G. Leadership Theory and Practice. Thousand Oaks, CA: Sage Publications,
2004.
Reardon, Kathleen K. It’s All Politics: Winning in a World Where Hard Work and Talent
Aren’t Enough. New York, NY: Currency, 2005.
Robbins, Stephen P. Essentials of Organizational Behavior, 8th ed. Upper Saddle, NJ: Pearson
Education, 2005.
Seligman, Martin E.P. Authentic Happiness. New York, NY: Free Press, 2002.
Senge, Peter M. “The Leader’s New Work: Building Learning Organizations.” Sloan
Management Review, 32(1), 1990, 7-23.
CULTURAL ADAPTATION OF AUSTRIAN AND U.S.-AMERICAN WEBSITES:
A COMPARISON USING HOFSTEDE’S CULTURAL PATTERNS
ABSTRACT
The purpose of this paper is to explore, through a cross-cultural approach, how cultural
values are depicted on Austrian and US websites. Using Hofstede’s cultural dimensions and a
predefined conceptual framework, Austrian and US websites were qualitatively analyzed in
order to measure the prevalence of defined cultural attributes. The study suggests that as the
global market continues to expand, cultural customization of international websites is
becoming less of a choice and more of a necessity.
I. INTRODUCTION
With almost one sixth of the world population online, the Internet has become the
most important marketing medium to date (http://www.c-i-a.com/pr0904.htm). Consumers
can easily shop from the comfort of their homes, without having to depend on business
operating hours. Current customer awareness of global offerings is higher than ever before.
But a website is still a virtual representation of a shop or an office, and its visitors are still
people with individual values, norms and beliefs. These values define how one reacts to
symbols and sensory inputs, and they are ultimately the basis of one’s culture. In the same way
as shops differ from one culture to another, websites have to be culturally adapted to their
audience. Websites should be locally tailored for each specific country in order to make sure
the site communicates meaning properly and serves the needs of the visitor.
It is not surprising that many scholars have written about this topic and have tried to
provide a framework for cultural understanding. The starting point for this research is
Hofstede’s early work about culture, in which he (Hofstede, 1991, 5) identifies culture as "the
collective programming of the mind which distinguishes the members of one group or
category of people from another." Hofstede (2001) specifies five cultural dimensions that
allow for the classification of cultures: Power Distance, Uncertainty Avoidance,
Individualism, Masculinity and Long-/Short-Term Orientation. He also provides index
ratings that show how countries and regions score on these dimensions.
The crucial importance of intercultural competence and knowledge for businesses is
shown in an example by Mayo (1991). He examined exporters who entered foreign markets
and failed; the underlying reason for their failure was a lack of knowledge of country-specific
business practices. Business practices in the private sector are determined by
values, norms and beliefs. The cultural taxonomy that was developed by Hofstede is often
used as a starting point for further investigation. Rawwas (2001) connects this taxonomy to
ethic beliefs of consumers from the USA, Ireland, Austria, Egypt, Lebanon, Hong Kong,
Indonesia, and Australia. He concludes that marketing strategies need to be developed
bottom-up. The purpose is to communicate those values which are important to the consumer
and to that consumer’s native society.
Since customers differ from one country to the next, sellers must receive intercultural communication training (Bush & Ingram, 1996), and specific approaches should be taken to attract foreign customers (McDonald, 1994). Bush et al. (2001) state that the importance of cultural adaptation is understood, but not satisfactorily acted upon; the ideal marketer should have "empathy, world mindedness, low ethnocentrism, and attributional complexity" to excel in the global market. Hofstede's Individualism/Collectivism dimension was used by Litvin and Kar (2003) to explore the relationship between one's self-perception and the perceived image of a product. They concluded that "cultural differences have once again been shown to play a significant role in consumers' [...] attitudes."
Clearly, a great deal of work has established the significance of intercultural competence for businesses. In more detail, some researchers have studied how this knowledge is being integrated into electronic marketing. The two most important variables that must be tailored to the specific country, when considering an e-commerce website, are language and the infrastructure for payment and delivery (Bin, Chen & Sun, 2003).
Junglas and Watson (2004) found that infrastructure and market surroundings play an important role in website design, and they explain their findings using Hofstede's cultural dimensions. Further studies revealed that most corporations do in fact adapt their websites to the particular country (Singh, Kumar & Baack, 2005).
As many studies show, cultural adaptation of websites is not only very important but also necessary: a successful launch of an e-commerce website in a foreign country demands more than simply translating the content. Most studies in this field have focused on comparisons between websites from the US and European countries, including the United Kingdom, Germany, France, Ireland, Spain, and Greece (Darling & Taylor, 1996; Rawwas, 2001; Singh et al., 2005).
III. HYPOTHESIS
The intention of this study is to compare websites from both Austria and the United
States in order to find out if Hofstede’s cultural dimensions were used in site construction.
Further analysis ascertains the extent to which the dimensions were used and whether they were incorporated in conformity with Hofstede's findings.
of 70 (2003). Austria has an Individual/Collective index score of 55 (2003), which is
considered relatively neutral.
IV. ANALYSIS
Based on Hofstede's dimensions, two countries were selected that are as culturally different as possible: the United States and Austria. The selection was limited by the authors' language proficiency. The panel chosen consists of 60 websites, 30 from each country. The industries chosen were banking, insurance, business-to-business, ski resorts and football.
Methodology
Each website was tested for manifestations of Hofstede's cultural dimensions. To achieve this, a questionnaire with seven categories, developed and used by Singh, Zhao and Hu (2003) in an earlier study, was utilized. The categories were Collectivism, Individualism, Uncertainty Avoidance, Power Distance, Masculinity, Low-Context and High-Context. Each category was broken down into features which, by definition, account for the respective cultural category. The website analysis consisted of identifying particular features on a website and measuring the degree to which those features were implemented.
The degree of incorporation of those features was measured on a five-step scale with scores of 0, 1, 2, 3 and 4. The lowest score was given for the total absence of a feature; the highest score was awarded for a prominent depiction on the website's front page or for consistent appearance throughout the site. Scores of 1, 2 and 3 were awarded in a relative manner, based upon the degree to which the website incorporated a particular feature.
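The five-step scale described above might be sketched as follows. The function and argument names are our own, not taken from the original questionnaire, and the rule for intermediate scores is an interpretation of the relative judgment the raters applied.

```python
def score_feature(present: bool, on_front_page: bool = False,
                  consistent: bool = False, degree: int = 1) -> int:
    """Score one cultural feature on the paper's 0-4 scale.

    0   -> feature entirely absent from the website
    4   -> prominently depicted on the front page, or appears consistently
    1-3 -> relative degree of incorporation in between
    """
    if not present:
        return 0
    if on_front_page or consistent:
        return 4
    return max(1, min(3, degree))  # clamp intermediate judgments to 1..3

# A privacy statement buried on a subpage might rate a middling 2:
print(score_feature(present=True, degree=2))
```

In the study itself this judgment was made by human raters, so the `degree` argument stands in for the raters' relative assessment rather than any automated measurement.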
After scoring of the websites had been completed, the scores were then subjected to statistical analysis. ANOVA tests were applied to the entire data set, allowing each variable to be measured in order to determine whether the deviations in each dimension were statistically significant.
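With only two country groups, the one-way ANOVA applied here reduces to a simple comparison of between-group and within-group variance (its F statistic equals the square of the two-sample t statistic). The sketch below shows the computation; the scores are hypothetical 0-4 feature ratings, not the study's actual data.

```python
def one_way_anova_f(group_a, group_b):
    """F statistic for a one-way ANOVA with two groups (pure Python).

    F = (between-group mean square) / (within-group mean square);
    a large F means the group means differ by more than the scatter
    within each group would suggest.
    """
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    grand = (sum(group_a) + sum(group_b)) / (n_a + n_b)
    # Between-group and within-group sums of squares
    ss_between = n_a * (mean_a - grand) ** 2 + n_b * (mean_b - grand) ** 2
    ss_within = (sum((x - mean_a) ** 2 for x in group_a)
                 + sum((x - mean_b) ** 2 for x in group_b))
    df_between = 1               # k - 1 groups
    df_within = n_a + n_b - 2    # N - k observations
    return (ss_between / df_between) / (ss_within / df_within)

us_scores = [4, 3, 4, 2, 3, 4]       # hypothetical US ratings for one feature
austria_scores = [2, 1, 2, 3, 1, 2]  # hypothetical Austrian ratings
print(round(one_way_anova_f(us_scores, austria_scores), 2))
```

The significance levels reported below would then come from comparing each F value against the F distribution with the corresponding degrees of freedom (in practice via a statistics package).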
Results
While the results of the ANOVA test do not show a significant overall difference in the cultural dimensions of the US and Austrian websites, several subsets within these categories do appear to be significantly different, and the differences in these subsets parallel Hofstede's cultural analysis of both Austria and the US. The results of the ANOVA are tabulated below.
Individualism-Collectivism
Privacy: When analyzing the degree to which a website provided a privacy statement, US websites were significantly more likely to provide one than were Austrian websites (mean: US 3.16 and Austria 1.90; F = 18.080, Sig. = .000).
Table I: ANOVA test results. Columns show the mean score for US and Austrian websites and the significance (Sig.) level of the difference; * denotes a significant difference.

Category / Sub-dimension        US      Austria   Sig.
Collectivism
  Community Relations           2.26    2.24      .960
  Clubs                         0.77    1.10      .350
  News Letter                   2.03    2.24      .579
  Family Theme                  1.90    1.90      .984
  Symbols                       1.42    1.66      .435
  Loyalty Program               1.61    1.24      .198
  Links                         1.45    1.93      .162
Individualism
  Privacy                       3.16    1.90      .000*
  Independence                  1.84    1.76      .676
  Originality                   2.23    2.52      .232
  Personalization               1.06    1.28      .504
Uncertainty Avoidance
  Customer Service              3.06    3.07      .981
  Navigation                    3.19    3.34      .528
  Local Stores                  2.55    2.72      .542
  Local Terminology             1.45    1.48      .892
  Free Trial                    0.77    1.52      .028*
  Testimonial                   0.48    0.72      .309
  Toll Free #                   2.45    1.07      .000*
  Tradition                     1.58    2.28      .015*
Power Distance
  Hierarchy Information         1.74    1.69      .873
  Pictures of VIP               1.58    1.55      .936
  Awards                        1.71    1.97      .406
  Vision                        1.57    1.72      .555
  Pride of Ownership            2.03    2.52      .048*
  Titles                        1.84    1.69      .597
Masculinity
  Adventure Theme               1.52    2.07      .194
  Realism Theme                 2.68    2.90      .283
  Effectiveness                 2.87    2.90      .891
  Gender Roles                  0.77    1.31      .060*
Low Context
  Rank of Position              2.16    2.31      .554
  Hard Sell                     1.84    1.90      .822
  Comparatives                  1.13    0.86      .310
  Superlatives                  1.97    1.86      .695
  Terms                         2.58    2.14      .062
High Context
  Politeness                    0.68    1.10      .029*
  Soft Sell                     1.52    1.24      .312
  Images                        2.39    2.69      .228
Uncertainty Avoidance
Free Trial: On the sub-dimension Free Trial, Austrian websites were significantly
higher in their tendency to provide free trials (mean: US .77 and Austria 1.52; F = 5.069, Sig.
= .028).
Toll Free #: When analyzing the degree to which websites provided a toll free phone
number, US websites were significantly higher in their trend to provide toll free phone
numbers (mean: US 2.45 and Austria 1.07; F = 28.644, Sig. = .000).
Tradition: The degree to which the sub-dimension Tradition was shown on websites was significantly higher for Austria than for the US (mean: US 1.58 and Austria 2.28; F = 6.237, Sig. = .015).
Power Distance
Pride of Ownership: When analyzing the sub-dimension Pride of Ownership, Austrian websites showed a significantly higher tendency to display pride of ownership (mean: US 2.03 and Austria 2.52; F = 4.083, Sig. = .048).
Masculinity
Gender Roles: When analyzing the extent to which gender roles were portrayed on websites, it was found that Austrian websites displayed gender roles at a significantly higher rate than did US websites (mean: US .77 and Austria 1.31; F = 3.694, Sig. = .060).
High and Low Context
Politeness: Austrian websites used polite language more frequently than did US
websites (mean: US .68 and Austria 1.10; F = 5.013, Sig. = .029)
V. CONCLUSION
The United States is a very individualistic society. On the cultural dimension of Individualism, the US ranks higher than any other country that Hofstede (2003) analyzed. Personal freedom and achievement are underlying themes in most US cultural expressions, and a major component of personal freedom is privacy. It is no wonder, then, that US websites provide privacy statements much more often and more prominently than did Austrian websites. Comparatively speaking, Austria is much higher on Uncertainty Avoidance than is the United States, which means Austrians are more apt to get involved in situations that have been clearly defined. It therefore makes sense that Austrian websites scored higher on both the sub-dimension of Free Trials and the sub-dimension of Tradition. A free trial gives uncertain users the chance to experience an offering before purchase, thereby reducing uncertainty; incorporating tradition into web design reduces uncertainty by presenting products and services in familiar, traditional settings. On the cultural dimension of Masculinity, Austria (79) scores much higher than the United States (62), according to Hofstede (2003). This suggests that Austria can be described as a masculine culture, while the United States is only somewhat masculine. Austrian websites tend to exhibit distinct gender roles, most likely because a major way of displaying masculinity on a website is through the use of gender roles.
Implications
In observing websites from both of these countries, it is obvious that there is a clear distinction between U.S. and Austrian websites. However, although website designs in the two countries are clearly distinct, there seems to be a lack of conformity in website design within either country. While it is not necessarily beneficial to create uniformity in site structure, it may prove beneficial for companies to display their offerings in a way that local consumers find comfortable. There is evidence in this study that websites are becoming more sensitive and are adapting to the needs of their local customers. This can be seen in the differences between the United States and Austria on the cultural dimension subsets: the sub-dimensions Free Trial, Tradition, Pride of Ownership, Gender Roles and Politeness were all in conformity with Hofstede's rankings of Austria and the United States. This study suggests that as website construction evolves, this trend of cultural adaptation will continue.
When assessing the methodology of this study, two possible sources of error emerge. The first involves the sample population used. Five industries were selected for this study, based upon their availability on the World Wide Web. In retrospect, it seems that each of these industries displayed certain values which were intrinsic to that specific industry. These industry-specific values were incorporated into the evaluation process and could have skewed the statistical results. The second source of error involves the use of two evaluators, one native to the United States and one native to Germany. It is possible that, while evaluation techniques were the same, there were subtle measurement differences due to cultural bias. In conclusion, this study finds that while there appears to be a trend toward cultural adaptation in both Austria and the United States, there remains a deficit in offerings that intimately reflect the culture in which they are offered. Further research in industry-specific areas should help in generalizing cultural differences in websites, without the conflicts that arise when assessing multiple industries.
REFERENCES
Bin, Qiu, Shu-Jen Chen & Shao Qin Sun. "Cultural differences in e-commerce: A comparison between the U.S. and China". Journal of Global Information Management. 2003. Vol. 11, Iss. 2, pp. 48-55.
Bush, Victoria D. & Thomas Ingram. "Adapting to diverse customers: A training matrix for international marketers". Industrial Marketing Management. 1996. Vol. 25, Iss. 5, pp. 373-383.
Bush, Victoria D. et al. "Managing culturally diverse buyer-seller relationships: The role of intercultural disposition and adaptive selling in developing intercultural communication competence". Journal of the Academy of Marketing Science. 2001. Vol. 29, Iss. 4, pp. 391-404.
Darling, John R. & Raymond E. Taylor. "Changing attitudes of consumers towards the products and associated marketing practices of selected European countries versus the USA, 1975-95". European Business Review. 1996. Vol. 96, Iss. 3, pg. 13.
Hofstede, Geert. (1991). Cultures and Organizations: Software of the Mind. London: McGraw-Hill.
Hofstede, Geert. (2001). Culture's Consequences: Comparing Values, Behaviors, Institutions and Organizations across Nations. 2nd ed. Thousand Oaks, CA: Sage.
Hofstede, Geert. (2003). Geert Hofstede's Cultural Dimensions. Retrieved October 10, 2005, from http://www.geert-hofstede.com
Junglas, Iris A. & Richard T. Watson. "National Culture and Electronic Commerce". e-Service Journal. 2004. Vol. 3, Iss. 2, pp. 3-34.
Litvin, Stephen W. & Goh Hwai Kar. "Individualism/collectivism as a moderating factor to the self-image congruity concept". Journal of Vacation Marketing. 2003. Vol. 10, Iss. 1, pp. 23-32.
Mayo, Michael A. "Ethical Problems Encountered by U.S. Small Businesses in International Marketing". Journal of Small Business Management. 1991. pg. 51.
McDonald, William J. "Developing international direct marketing strategies with a consumer decision-making content analysis". Journal of Direct Marketing. 1994. Vol. 8, Iss. 4, pp. 18-27.
VALERO ENERGY CORPORATION AND RISING GAS PRICES
ABSTRACT
This public relations case study takes a closer look at Valero Energy Corporation and rising gas prices. Valero Energy Corporation, based in San Antonio, Texas, is the largest refiner in North America, refining about 3.3 million barrels a day at refineries located throughout the western hemisphere. The case study examines Valero's history and current situation. Many residents of San Antonio believe that it is Valero that is raising gas prices; the purpose of this case study is to inform the target publics of the truth behind Valero and the rising gas prices.
I. INTRODUCTION
Valero is concerned about its reputation due to a rise in negative publicity and wants to inform its publics about the company's values. Valero would like to improve and maintain local community morale and its positive reputation. The company would also like to inform the public about the price of gas and circulate the truth concerning the corporation's profitability during this time. Valero Energy Corporation was founded in 1980 as the corporate successor to LoVaca Gathering Company, a natural gas gathering subsidiary of the Coastal States Gas Corporation. Valero is a Fortune 500 company based in San Antonio with approximately 22,000 employees and assets valued at $33 billion (Valero Energy Corporation, 2005). The corporation is a premier refining and marketing business that leads in shareholder value growth through innovative, efficient upgrading of low-cost feedstocks into high-value, high-quality products. Valero also maintains an in-house public relations office: Mary Rose Brown, Senior Vice-President of Corporate Communication, works directly with Joanna Weidman, Director of Corporate Communication, along with other individuals who work hard to maintain Valero's relationship with the public and the media. At present, the public views Valero negatively, and consumer confidence in the company is diminishing. Valero is concerned about the potential loss of revenue and reputation.
The main reason for conducting the campaign is to reassure the community of
Valero’s intention to demonstrate the company’s commitment to creating, supporting and
maintaining high standards of excellence.
II. BACKGROUND
Valero is the largest independent refining corporation in North America. It has an extensive refining system with a throughput capacity of approximately 3.3 million barrels per day, compared to 900,000 barrels per day in 2001. As described on Valero's website, the
the company's geographically diverse refining network stretches from Canada to the U.S.
Gulf Coast and West Coast to the Caribbean. In combination with its interest in Valero L.P.,
Valero has 9,150 miles of pipeline, 94 terminal facilities and four crude oil storage facilities
(Valero Energy Corporation, 2005). Many of these terminal and oil storage facilities
complement Valero's refining and marketing assets in the U.S. Southwest and Mid-continent
regions. According to Valero’s website, as a marketing leader, Valero has approximately
4,700 retail sites branded as Valero, Diamond Shamrock, Ultramar, Beacon and Total. The
company markets on a retail and wholesale basis through a bulk and rack marketing network
in 42 U.S. states, Canada, Latin America and the Caribbean. Valero has long been
recognized throughout the industry as a leader in the production of premium, environmentally
clean products, such as reformulated gasoline (Valero Energy Corporation, 2005).
Valero has added 380,000 barrels per day (BPD) of refining capacity since 1997; this
is the equivalent of building at least three world-scale refineries. It has also announced the
addition of another 400,000 BPD of refining capacity over the next five years at a cost of $5
billion. Also, Valero has purchased many unreliable plants and invested in them so that they now run reliably at expanded rates. Because the company has been running all of its plants at maximum rates, there is little more that Valero can do to positively impact gasoline prices.
The current tight supply and demand picture is due to strong demand from a booming global
economy and the reduced volumes of refined products that can be produced from each barrel
of crude oil due to the cleaner gasoline specifications in the U.S. today. This should come as
no surprise to anyone. In fact, Valero pointed out these challenges to the U.S. House of
Representatives Health & Environment Subcommittee in sworn testimony four-and-a-half
years ago.
Before Hurricanes Katrina and Rita, U.S. refineries were operating at very high rates to keep up with market demand. Even so, inventories were already low due to strong demand from a booming global economy and the reduced volumes of refined products obtainable from each barrel of crude oil. Then the hurricanes dealt a devastating blow to the Gulf Coast's refining infrastructure, intensifying an already tight market by knocking out almost 30% of U.S. refining capacity following Hurricane Rita.
As a result, gasoline prices – which are based on a freely negotiated spot market –
dramatically increased. During the aftermath of these back-to-back hurricanes, Valero chose
not to pass along the full amount of these increases to consumers and its branded jobbers. In
fact, following Hurricane Rita, the company’s retail prices in some areas of the country were
$1 per gallon below its cost to replace the gallons Valero was selling in those markets. As a
result, Valero lost $27 million in branded wholesale business and only made $5 million from
its network of more than 1,000 U.S. retail stores during the third quarter (Weidman, 2005).
Valero helped contribute to the decline in gasoline prices by investing significant resources to
restart its impacted refineries in record time, which helped to ease the supply shortages. It
was a monumental effort to provide housing, food, supplies and additional workers from
other Valero refineries immediately after the storms. Despite personal losses, the employees
returned to work immediately and worked around the clock to restore power and repair and
restart the refineries. As a result of these considerable efforts, Valero’s refineries and retail
stores were back online more quickly than neighboring facilities -- providing much-needed
fuel to consumers during this difficult time (Weidman, 2005).
III. RESEARCH
IV. OBJECTIVES
The following are the impact and output objectives of the campaign for the next year.
Impact Objectives
Valero will set the following impact objectives for its employees:
1. To inform 100% of employees of Valero’s current position on the negative publicity
and its vulnerable reputation in three months.
2. To spread the word about improvements taking place at Valero by 80% within one
year.
3. To increase favorable opinions with employees by 60% within one year.
For Valero’s consumers, the objectives will be:
1. To inform 45% of local consumers of Valero’s current position on the negative
publicity and its vulnerable reputation in three months.
2. To spread the word about improvements taking place at Valero by 50% within one
year.
3. To increase favorable opinions with the local community of consumers by 45% within
one year.
Valero’s local community – non-consumers’ objectives will be:
1. To inform 40% of local non-consumers of Valero’s current position on the negative
publicity and its vulnerable reputation in three months.
2. To spread the word about improvements taking place at Valero by 50% within one
year.
3. To increase favorable opinions with the local community of non-consumers by 30%
within one year.
Valero’s neighboring communities’ objectives will be:
1. To inform 35% of neighboring communities of Valero’s current position on the
negative publicity and its vulnerable reputation in three months.
2. To spread the word about improvements currently taking place at Valero by 50%
within one year.
3. To increase favorable opinions among neighboring communities by 35% within one
year.
Finally, Valero’s media’s objectives will be:
1. To inform 75% of the media of Valero’s current position on this issue in three
months.
2. To spread the word about improvements currently taking place at Valero by 50%
within one year.
3. To increase favorable opinions with the media by 40% within one year.
Output Objectives
Valero will also set the following output objectives:
1. To send press releases to major local and neighboring community media outlets.
2. To establish a cohesive relationship with 50% of local media within one year.
3. To send out postcards to everyone in San Antonio with fast facts about Valero and the
increase in gas prices.
4. To gain media coverage with all local media sources within six months.
5. To gain favorable attitude among 35% of local media within one year.
Employees are an important part of the campaign. If they are not properly informed,
this may contribute to apathetic feelings about the organization. Thus, memos will be sent to
keep them updated and get them more involved.
Valero will use the public information model. The model emphasizes the use of
truthful messages to all concerned publics. Valero will implement a proactive program to
avoid any potential problem by making necessary adjustments in the organization to
overcome negative media attention. The campaign slogan will be, “The drive to take you
there & the energy to keep you here” and the theme will be, “The Energy to Inform”. In
addition, the main message will be, “This local energy company puts the interest of the locals
first.”
Special Events
Valero plans to arrange unique special events. At each special event, the media
should not only be contacted and invited, but also treated in a VIP style. As positive
relationships develop with local media outlets, Valero can use this clout to promote itself in
the future. First, Valero believes that a speech by Bill Greehey, the CEO of Valero Energy
Corporation, would be a great way to jump-start the campaign. Following Mr. Greehey’s
speech, Valero would create an event that would be set up like a mock refinery with people
stationed at numerous different locations to provide information and answers to questions. At
this event, Valero will give away free gas cards, finger foods and beverages. To add some
excitement, Valero thinks it would be appropriate to have go-kart races for children of all
ages. In this controlled event, the children who win each race would have their pick from
anything in the “mock” Valero gas station. This gas station would be set up like all Valero
gas stations with candy, gum, drinks, etc. Valero would also hold pumping gas relay races
for adults. The gas pumping would not use today's gas pumps, but the original manual pumps. The winner of the gas-pumping relay races would also have their choice of anything in the "mock" Valero gas station. Adults would also be able to win prizes such as Lotto tickets and additional gas cards.
Valero would also invite visitors on a tour of its buildings. In this event, visitors will
learn such useful information as how the company is run and the extent of the hard work put
forth by all Valero employees. Valero would also display the numerous awards the company
has obtained in its years of operation.
Media
Media coverage can enhance or damage an organization's image. In this case, Valero should use uncontrolled media to its advantage as often as possible. By sending news releases and media kits, local media will have the opportunity to cast Valero in a positive light through articles and feature stories. Valero plans to create an enduring, positive relationship with the local media throughout the year and to keep them informed about its activities. It is imperative that a letter be sent to the editor of S.A. Life of the Express-News to create a positive relationship. In addition, Valero would compose a feature story about the company's history, community work, statistics and the refinery process. Valero feels it is important to keep the community as up-to-date as possible; thus, the company will post its events in local print media community calendars one month, one week, and one day before each event.
Effective Communication
Valero will stress the use of effective communication with all target publics. Thus,
the use of credible and updated information, verbal and nonverbal cues, opinion leaders, two-
way communication and audience participation and feedback will be essential to the
campaign implementation.
VI. EVALUATION
Due to the large number of Valero employees and their diverse locations, the most
effective way to measure the awareness level of the employees is through an extensive survey
distributed to all of them via Valero’s Intranet. The survey will also include questions
regarding the current improvements at Valero and the employees’ opinions of the company.
A self-addressed survey will be distributed to consumers along with their gas card statements.
This survey will assess the consumers’ opinions of Valero and their knowledge of the
improvements.
Valero. Furthermore, an evaluation of the number of website hits before, during and after the
campaign will be tabulated.
Media coverage will be measured by counting the number of stories run about Valero and its events in local print and broadcast media. The content and size of the stories would also be evaluated.
VII. CONCLUSION
This campaign has both strengths and weaknesses. A major strength is that Valero is dealing with an active audience that is seeking out information on the issues at hand; gas prices affect everyone. A major weakness is that San Antonio is a large city, and it is difficult to reach every targeted public successfully. It is also difficult to change pre-existing attitudes. Depending on how the controversy develops, the directions taken by Valero could change during the implementation of the campaign. Valero should maintain a proactive role in keeping its employees and the community informed.
REFERENCES
Valero Energy Corporation, (2005). Retrieved Nov. 15, 2005, from Valero Energy
Corporation Web site: http://www.valero.com/.
Valero Energy Corporation, (2005). Retrieved Nov. 14, 2005, from Valero Energy Corporation Web site: http://www.valero.com/About+Valero.
Weidman, Joanna. Personal interview. 16 Nov 2005.