BUSINESS RESEARCH YEARBOOK
Volume XIII, 2006
Global Business Perspectives

Marjorie G. Adams
Abbass Alkhafaji
Editors

Publication of the International Academy of Business Disciplines

Cover Design by Tammy Senath
ISBN 1-889754-10-2

BUSINESS RESEARCH YEARBOOK

GLOBAL BUSINESS PERSPECTIVES

VOLUME XIII, 2006

Editors

Marjorie G. Adams
Morgan State University

Abbass F. Alkhafaji
Slippery Rock University

A Publication of the International Academy of Business Disciplines

IABD
Copyright 2006
by the
International Academy of Business Disciplines

International Graphics
10710 Tucker Street
Beltsville, MD 20705
(301) 595-5999 office

All rights reserved


Printed in the United States of America

Co-published by arrangement with
The International Academy of Business Disciplines

ISBN 1-889754-10-2
PREFACE

This volume contains an extensive summary of most of the papers presented during
the Thirteenth Annual Conference of the International Academy of Business Disciplines, held
in San Diego, California, April 6–9, 2006. This volume is part of the continuing efforts of
IABD to make current research findings and other contributions available to practitioners and
academics.

The International Academy of Business Disciplines (IABD) was established eighteen
years ago as a worldwide, non-profit organization to foster and promote education in all of
the functional and support disciplines of business. The objectives of IABD are to stimulate
learning and increase awareness of business problems and opportunities in the international
marketplace, and to bridge the gap between theory and practice. The IABD hopes to create an
environment in which learning, teaching, research, and the practice of management,
marketing, and other functional areas of business will be advanced. The main focus is on
unifying and extending knowledge in these areas to ultimately create integrating theory that
spans cultural boundaries. Membership in the IABD is open to scholars, practitioners, public
policy makers, and concerned citizens who are interested in advancing knowledge in the
various business disciplines and related fields.

The IABD has evolved into a strong global organization during the past eighteen
years, thanks to immense support provided by many dedicated individuals and institutions.
The objectives and far-reaching visions of the IABD have created interest and excitement
among people from all over the world.

The Academy is indebted to all those responsible for this year’s program, particularly
Ahmad Tootoonchi, Frostburg State University, who served as Program Chair, and to those
who served as active track chairs. Those individuals did an excellent job of coordinating the
review process and organizing the sessions. A special thanks also goes to the IABD officers
and Board of Directors for their continuing dedication to this conference.

Our appreciation also extends to the authors of the papers presented at the conference.
The high quality of papers submitted for presentation attests to the Academy’s growing
reputation, and provides the means for publishing this current volume.

The editors would like to extend their personal thanks to Dr. Otis Thomas, Dean of
the School of Business and Management, Morgan State University, for his support.

TABLE OF CONTENTS

Preface
Table of Contents

CHAPTER 1: INTRODUCTION....................................................................................... 1

CHAPTER 2: ACCOUNTING THEORY ........................................................................ 3

The 28% Capital Gains Tax - An Antique?


Annette Hebble, University of St. Thomas
Nancy Webster, University of St. Thomas .................................................................. 4

Does the Implementation of SFAS No. 131 Convey Useful Information?


Yousef Jahmani, Kentucky State University .............................................................. 8

Industry and Market's Effects on the Usefulness of Book Value


Wei Xu, William Paterson University
Lianzan Xu, William Paterson University ................................................................... 13

Examining Perceptions of Student Solution Strategies in an Introductory Accounting Course


Ira Bates, Florida A&M University
Joycelyn Finley-Hervey, Florida A&M University
Aretha Hill, Florida A&M University ......................................................................... 18

The Impact of Merit Pay on Research Outcomes for Accounting Professors


Annhenrie Campbell, California State University, Stanislaus
David H. Lindsay, California State University, Stanislaus
Kim B. Tan, California State University, Stanislaus ................................................... 22

An Analysis of Investment Performance And Malmquist Productivity Index For Life Insurers
In Taiwan
Shu-Hua Hsiao, Leader University
Yi-Feng Yang, Leader University
Grant G.L. Yang, Leader University ........................................................................... 27

Connecting ABI Acceptance Measures to Task Complexity, Ease of Use, User Involvement
and Training
Aretha Y. Hill, Florida A&M University
Ira W. Bates, Florida A&M University ....................................................................... 32

The IRS Cracks Down on Deductions for MBA Education Costs


Pamela A. Spikes, University of Central Arkansas
Patricia H. Mounce, University of Central Arkansas
Marcelo Eduardo, Mississippi College ........................................................................ 38

CHAPTER 3: ADVERTISING AND MARKETING COMMUNICATIONS ............... 44

The Effects of Ambient Scent on Perceived Time: Implications for Retail and Gaming
John E. Gault, West Chester University of Pennsylvania............................................ 45

The Relationship Between Age, Education, Gender, Marital Status and Ethics
Ziad Swaidan, University of Houston-Victoria
Peggy A. Cloninger, University of Houston-Victoria
Mihai Nica, Jackson State University.......................................................................... 52

A Content Analysis of an Attempt by Victoria’s Secret to Generate Brand Mentions through


Provocative Displays
John Mark King, East Tennessee State University
Monica Nastase, East Tennessee State University
Kelly Price, East Tennessee State University .............................................................. 58

Effectiveness of Emotional Advertising: A Review Paper on the State of the Art


Branko Cavarkapa, Eastern Connecticut State University
John T. Flynn, University of Connecticut................................................... 64

Pick a Flick: Moviegoers’ Use and Trust of Advertising and Uncontrolled Sources
Thomas Kim Hixson, University of Wisconsin-Whitewater....................................... 70

When Web Pages Influence Web Usability


Alex Wang, University of Connecticut........................................................................ 76

University Brand Identity: A Content Analysis of Four-Year U.S. Higher Education Web Site
Home Pages
Andy Lynch, American University of Sharjah ........................................... 82

Brand Knowledge, Brand Attitude, Purchases & Amount Willing To Pay For Self & Others:
Third-Person Perception & The Brand
Thomas J. Prinsen, The University of South Dakota.................................. 88

Congruency in Strategic Corporate Social Responsibility: Consumer Attitude Toward the


Company & Purchase Intention
Youjeong Kim, Pennsylvania State University
Charles A. Lubbers, University of South Dakota ....................................... 94

Internet Advertising and Its Reflection of American Cultural Values


Lin Zhuang, Louisiana State University
Xigen Li, Southern Illinois University Carbondale ...................................................100

CHAPTER 4: APPLIED MANAGEMENT SCIENCE AND DECISION
SUPPORT SYSTEMS ..............................................................106

Student Online Purchase Decision Making: An Analysis by Product Category


Carl J. Case, St. Bonaventure University
Darwin L. King, St. Bonaventure University ............................................................107

Analyzing Role of Operations Research Models in Banking


Dharam S. Rana, Jackson State University
SherRhonda R. Gibbs, Jackson State University .......................................................112

Decision Support Systems: An Investigation of Characteristics
Roger L. Hayen, Central Michigan University
Monica C. Holmes, Central Michigan University .....................................................117

Tourism Market Potential of Small Resource-Based Economies: The Case of Fiji Islands
Erdener Kaynak, The Pennsylvania State University at Harrisburg
Raghuvar D. Pathak, The University of the South Pacific .....................123

Who Says Decision-Making Is Rational: Implications for Responding to an Impending


Foreseeable Disaster
M. Shakil Rahman, Frostburg State University
Michael Monahan, Frostburg State University
Ahmad Tootoonchi, Frostburg State University........................................................129

CHAPTER 5: COMPUTER INFORMATION SYSTEMS..........................................135

An Alternative Approach for Developing and Managing Information Security Program


Muhammed A. Badamas, Morgan State University ..................................................136

Utilization of Information Resources for Strategic Management in a Global


Enterprise
Muhammed A Badamas, Morgan State University
Samuel A. Ejiaku, Morgan State University .............................................................142

Managing the Enterprise Network: Performance of Routing Versus Switching on a State of


the Art Switch
Mark B. Schmidt, St. Cloud State University
Mark D. Nordby, St. Cloud State University
Dennis C. Guster, St. Cloud State University............................................................147

An Empirical Investigation of Rootkit Awareness


Mark B. Schmidt, St. Cloud State University
Allen C. Johnston, University of Louisiana Monroe
Kirk P. Arnett, Mississippi State University..............................................................153

CHAPTER 6: E-BUSINESS ............................................................................................159

I’m With the Broadband: The Economic Impact of Broadband Internet Access on the Music
Industry
Matthew A. Gilbert, Clear Pixel Communications....................................................160

U.S. Attempts to Slow Global Expansion of Internet Retailing Meet Legal Resistance
Theodore R. Bolema, Central Michigan University ..................................................166

Segmenting Cell Phone Users by Gender, Perceptions, and Attitude toward Internet and
Wireless Promotions
Alex Wang, University of Connecticut
Adams Acar, University of Connecticut....................................................................172

E-Business Based SME Growth: Virtual Partnerships and Knowledge Equivalency
Zoe Dann, Liverpool John Moores University
Paul Otterson, Liverpool John Moores University
Keith Porter, Liverpool John Moores University ......................................................178

CHAPTER 7: ECONOMICS...........................................................................................185

Important Changes in the U.S. Financial System


Vincent G. Massaro, Long Island University ............................................................186

Accounting for Success in Sports Franchising


Ilan Alon, Rollins College
Keith L. Whittingham, Rollins College .....................................................................189

Do Chinese Investors Appreciate Market Power or Competitive Capacity


Aiwu Zhao, Kent State University
Jun Ma, Kent State University ...................................................................................195

Consumer Ethnocentrism and Evaluation of International Airlines


Edward R. Bruning, University of Manitoba
Annie Peng Cui, Kent State University
Andrew W. Hao, Kent State University.....................................................................201

The Valuation Abilities of the Price-Earnings-To-Growth Ratio and Its Association with
Executive Compensation
Essam Elshafie, University of Texas at Brownsville
Pervaiz Alam, Kent State University .........................................................................207

A Simple Nash Equilibrium from “A Beautiful Mind”


G. Glenn Baigent, Long Island University – C. W. Post ...........................................213

Economics of/and Love: An Analysis Into Dowry Pricing in East Africa


Waithaka N. Iraki, Kentucky State University .........................................................217

International Trade Growth and Changes in U.S. Manufacturing Concentration


David B. Yerger, Indiana University of Pennsylvania ..............................................222

Threshold Effects Between German Inflation and Productivity Growth


David B. Yerger, Indiana University of Pennsylvania
Donald G. Freeman, Sam Houston State University .................................................228

Trade and Growth Since the Nineties: The International Experience


Paramjit Nanda, Guru Nanak Dev University
P.S. Raikhy, Guru Nanak Dev University ..................................................234

Investor Relations Challenges Within the Life Sciences Category


Kerry Slaughter, Emerson College
James Rowean, Emerson College ..............................................................................241

Genetic Engineering, Biotechnology, and Indian Agriculture, IPR Issues in Focus
Prabir Bagchi, Sims, Ghaziabad ................................................................................246

Response of Building Costs to Unexpected Changes in Real Economic Activity and Risk
Bradley T. Ewing, Texas Tech University
Daan Liang, Texas Tech University
Mark A. Thompson, University Of Arkansas-Little Rock ........................................251

CHAPTER 8: ENTREPRENEURSHIP/SMALL BUSINESS .......................................256

An Analysis of Funding Sources for Entrepreneurship in the Biotechnology Industry


Sumaria Mohan-Neill, Roosevelt University, Chicago, IL
Michael Scholle, Biosciences Division, Argonne National Laboratory ....................257

The Impact of Team Design on Team Effectiveness


Lawrence E. Zeff, University of Detroit Mercy
Mary A. Higby, University of Detroit Mercy ...........................................263

Strategies in Starting Your Own Business


Omid Nodoushani, Southern Connecticut State University
Julie Brander, Gateway Community College
Patricia Nodoushani, University of Hartford .............................................................269

The Perils of Strategic Alliances: The Case of Performance Dimensions International, LLC
Robert A. Page, Jr., Southern Connecticut State University
Edward W. Tamson, Performance Dimensions International LLC
Edward H. Hernandez, California State University, Stanislaus
Alfred R. Petrosky, California State University, Stanislaus ......................................273

CHAPTER 9: ETHICAL AND SOCIAL ISSUES ..........................................................279

Intelligent Agents-Belief, Desire, and Intent Framework Using LORA: A Program-Independent
Approach
Fred Mills, Bowie State University
Jagannathan V. Iyengar, North Carolina Central University.....................................280

The Propensity for Military Service of the American Youth: An Application of Generalized
Exchange Theory
Ulysses J. Brown, III, Savannah State University
Dharam S. Rana, Jackson State University................................................................286

The Maryland Wal-Mart Bill: A New Look at Corporate Social Responsibility


Frank S. Turner, Morgan State University
Marjorie G. Adams, Morgan State University...........................................................292

Discrimination, Political Power, and the Real World


Reza Fadaei, National University ..............................................................................298

CHAPTER 10: FINANCE .................................................................................................302

The Short Term and Long Term Impact of the Stock Recommendations Published in
Barron’s
Francis Cai, William Paterson University
Wenhui Li, Baruch College, CUNY ............................................................303

Investor Rationality in Portfolio Decision Making: The Behavioral Finance Story


Sudhir Singh, Frostburg State University, Frostburg, Maryland ...............................309

An Analysis of The Movement Of Financial Industry Indexes on The Stock Exchange of


Thailand
Nittaya Wiboonprapat, Alliant International University
Mohamed Khalil, Alliant International University
Meenakshi Krishnamoorthy, Alliant International University ...................314

The Short Squeeze at Year-End


Howard Nemiroff, Long Island University – CW Post .............................................320

CHAPTER 11: GLOBAL ENVIRONMENT AND TRENDS........................................324

Predicting Internet Use: Technology Acceptance Facilitating Group Projects in a Web Design
Course
Azad I. Ali, Indiana University of Pennsylvania .......................................................325

Predicting Internet Use with the Technology Acceptance Model and the Theory of Planned
Behavior
Marcelline Fusilier, Northwestern State University of Louisiana
Subhash Durlabhji, Northwestern State University of Louisiana..............................330

Globalization and Its Impact on Africa’s Trade


Semere Haile, Grambling State University................................................................336

University Education, Performance Standards, and the Realities of a Global Marketplace


Melissa Northam, Troy University ............................................................................342

Internet-Based Marketing Communication and the Performance of Saudi Businesses


Abdulwahab S. Alkahtani, King Fahd University of Petroleum & Minerals ............348

Identity Theft: What Should I Do If It Happens?


Mark McMurtrey, University of Central Arkansas
Mike Moore, University of Central Arkansas
Lea Anne Smith, University of Central Arkansas .....................................................354

Government Regulation of the Oath of Hippocrates: How Far Can the Government Go?
Roy Whitehead, University of Central Arkansas
Kenneth Griffin, University of Central Arkansas
Phillip Balsmeier, Nicholls State University .............................................................358

What Are The Benefits, Challenges, And Motivational Issues Of Academic Teams?
Blaise J. Bergiel, Nicholls State University
Erich B. Bergiel, Mississippi State University ..........................................................362

CHAPTER 12: HEALTH COMMUNICATION AND PUBLIC POLICY ..................368

Health, Culture, Communication: Perceived Information Gaps/Needs of Female Minority


Patients & Their Doctors
Amiso M. George, Texas Christian University .............................................................. 369

The Frame-Changing Strategy in SARS Coverage: Testing a Two-Dimensional Model


Li Zeng, Arkansas State University ...........................................................................374

Navigating Illness by Navigating the Net: Seeking Information about Sexually Transmitted
Infections
Kelly A. Dorgan, East Tennessee State University
Linda E. Bambino, East Tennessee State University.................................................380

Communication as Cause and Cure: Sources of Anxiety for International Medical Graduates
in Rural Appalachia
Kelly A. Dorgan, East Tennessee State University
Linda E. Bambino, East Tennessee State University
Michael Floyd, East Tennessee State University.......................................................385

Integrated Social Marketing and Visual Messages of Breast Cancer Information to African
American Women
S. Diane McFarland, Ph.D., Buffalo State, SUNY.......................................390

DTCA: Health Communication or Capitalistic Persuasion


Amber Phillips, East Tennessee State University......................................................395

Integrative Theory and Collective Efficacy: Predictors of Intent to Participate in a


Nonviolence Campaign
Bumsub Jin, University of Florida
Charles A. Lubbers, University of South Dakota .....................................400

CHAPTER 13: HUMAN RESOURCE MANAGEMENT..............................................405

Justice or Efficiency: About Economic Analysis of Law


Nuri Erisgin, Ankara University, Turkey
Zulal S. Denaux, Valdosta State University
Özlem S. Erisgin, Ankara University, Turkey...........................................................406

What Does It Take To Succeed as a Human Resources Professional?


A Review of U.S. HR Programs
Crystal L. Owen, University of North Florida...........................................................411

Minimizing the Negative Impact of Telecommuting on Employees


Marian C. Crawford, University of Arkansas—Little Rock......................................416

Implications of the FairPay Overtime Initiative to Human Resource Management
C. W. Von Bergen, Southeastern Oklahoma State University
Patricia W. Pool, Southeastern Oklahoma State University
Kitty Campbell, Southeastern Oklahoma State University........................................420

The Impact of Knowledge Management Concepts on Modern HRM Behavior


U. Raut-Roy, Anglia Ruskin University, Cambridge, England.................................425

Employee Performance Evaluations: Public vs. Private Sector


Charles Chekwa, Troy University
Mmutakaego Chukwuanu, Allen University
Mike Sorial, Troy University.....................................................................................430

To Report or Not Report: A Matter of Gender and Nationality


Wanthanee Limpaphayom, Eastern Washington University at Bellevue
Paul A. Fadil, University of North Florida ................................................................436

CHAPTER 14: INSTRUCTIONAL/PEDAGOGICAL ISSUES....................................441

Changing the Media in the Middle East: Lebanon Improves Journalism and Mass
Communication Education
Ali Kanso, University of Texas at San Antonio ........................................................442

The Protean Career Module: Applied Management and Finance Exercises for Aspiring
Professionals
Angela J. Murphy, Florida A & M University...........................................................447

Teaching Approaches and Self-Efficacy Outcomes in an Undergraduate Research Methods


Course
H. Paul LeBlanc III, The University of Texas at San Antonio ..................................452

Top 10 Lessons Learned from Implementing ERP/E-Business Systems in Academic Programs


Michael Bedell, California State University – Bakersfield
Barry Floyd, California Polytechnic State University ...............................................458

Profiles in Electronic Commerce Research


Sang Hyun Kim, University of Mississippi
Milam Aiken, University of Mississippi
Mahesh B. Vanjani, Texas Southern University........................................................464

Processes for the Creation of Performance Scripts


Paul Lyons, Frostburg State University .....................................................................470

Teaching Overseas Using A Compressed Course Delivery Module


David R. Shetterly, Troy University
Anand Krishnamoorthy, Troy University ................................................476

CHAPTER 15: INTERNATIONAL BUSINESS AND MARKETING.........................482

Antecedents of Egyptian Consumers’ Green Purchase Behavior: A Hierarchical Model


Mohamed M. Mostafa, Gulf University for Science and Technology
Naser I. Abumostafa, Gulf University for Science and Technology .........................483

Culture-Driven Consumer Market Boundaries: An Approach to International Product


Strategy
Dinker Raval, Morgan State University
Bala Subramanian, Morgan State University.............................................................488

Domain Knowledge Specificity and Joint New Product Development: Mediating Effect of
Relational Capital
Pi-Chuan Sun, Tatung University
Yung Sung Wen, Chiang Kai-Shek International Airport Office..............................494

Customer Satisfaction for Telecommunication Services: A Study among Asia Pacific


Business Customers
Avvari V. Mohan, Cyberjaya Multimedia University, Malaysia...............................499

Portrayal of Gender Roles in Indian Magazine Advertisements


Durriya H. Z. Khairullah, Saint Bonaventure University
Zahid Y. Khairullah, Saint Bonaventure University..................................................505

CHAPTER 16: LEADERSHIP..........................................................................................511

Student Leadership at the Local, National and Global Level: Engaging the Public and Making
a Difference
J. Gregory Payne, Emerson College
David Twomey, Emerson College.............................................................................512

CHAPTER 17: MANAGEMENT OF DIVERSITY........................................................517

Organizational Culture and Customer Satisfaction: A Public and Business Administration


Perspective
Shelia R. Ward, Texas Southern University
Gbolahan S. Osho, Texas Southern University .........................................................518

Diversity in the Workplace


Carolyn Ashe, University of Houston-Downtown
Chynette Nealy, University of Houston-Downtown..................................................524

CHAPTER 18: MANUFACTURING AND SERVICE...................................................529

The Role of Electronic Data Interchange in Supply Chain Management


Mohammad Z. Bsat, Jackson State University
Astrid M. Beckers, University of Georgia .................................................................530

The Effect of Gender on Apology Strategies
Astrid M. Beckers, Jackson State University
Mohammad Z. Bsat, Jackson State University ..........................................................535

CHAPTER 19: MARKETING ..........................................................................................540

Salespeople’s Personal Values: The Case of Western Pennsylvania


Tijen Harcar, Penn State University
Mahmut Paksoy, Istanbul University.........................................................................541

Behavioral and Attitudinal Differences Between Online Shoppers vs. Non-Online Shoppers


Ugur Yucelt, Penn State-Harrisburg ..........................................................................547

An Exploratory Model for Turkish Health Care Consumers


Talha Harcar, Penn State-Beaver
Karen C. Barr, Penn State-Beaver
Tijen Harcar, Penn State-Beaver ...............................................................................552

Benefit Segmentation by Factor Analysis: An Empirical Study Targeting the Shampoo


Market in Turkey
Talha Harcar, Penn State-Beaver
Selim Zaim, Fatih University.....................................................................................557

CHAPTER 20: ORGANIZATIONAL BEHAVIOR AND ORGANIZATIONAL
THEORY ..................................................................................562

Impact of Personality Factors on Perceived Importance of Career Attributes


Keith L Whittingham, Rollins College ......................................................................563

The Determinants of Ownership in Spanish Franchised Chains


Rosa Mª Mariz-Pérez, University of A Coruna, Spain
Rafael Mª García Rodríguez, University of A Coruna, Spain
Mª Teresa García-Álvarez, University of A Coruna, Spain.......................................568

In a Global Economy, Effectively Managed Diversity Can Be a Source of Competitive


Advantage
Kayong L. Holston, Ottawa University .....................................................................573

Overcoming Business School Faculty Demotivation


Robert A. Page, Jr., Southern Connecticut State University
Ellen R. Beatty, Southern Connecticut State University ...........................................578

CHAPTER 21: POLITICAL COMMUNICATION & PUBLIC AFFAIRS.................583

Media Frame: The War in Iraq


María J. Pestalardo, East Tennessee State University ...............................................584

Women’s Image and Issues: A Comparison of Arab and American Newspapers


Don Love, American University of Sharjah ..............................................................589

Does Charity Truly Begin at Home?
Louis K. Falk, University of Texas at Brownsville
Hy Sockel, Youngstown State University
John A. Cook, University of Texas at Brownsville ...................................................594

In the Process of Decolonization: The Re-Creation of Cultural Identity in Taiwan


Pei-Ling Lee, Bowling Green State University .........................................................600

CHAPTER 22: PUBLIC RELATIONS/CORPORATE COMMUNICATIONS .........606

Newspaper Endorsements and Election Result Headlines in the 2004 U.S. Presidential
Election
John Mark King, East Tennessee State University
Adriane Dishner Flanary, East Tennessee State University ......................................607

CHAPTER 23: QUALITY, PRODUCTIVITY AND MANUFACTURING ................612

The Effect of Ambiguous Understanding of Problem and Instructions on Service Quality and
Productivity
Palaniappan Thiagarajan, Jackson State University
Yegammai Thiagarajan, Esq.
Sheila C. Porterfield, Jackson State University .........................................................613

IT Project Management and Software Evaluation and Quality


Jagan Iyengar, North Carolina Central University ....................................................619

Improving Productivity with Enterprise Resource Planning


Hooshang M. Beheshti, Radford University
Cyrus M. Beheshti, Deloitte & Touche .....................................................................624

CHAPTER 24: SPIRITUALITY IN ORGANIZATIONS..............................................629

Reflections on Islam and Globalization in Sub-Sahara Africa


David L. McKee, Kent State University
Yosra A. McKee, Kent State University
Don E. Garner, California State University, Stanislaus............................................630

Karma-Yoga and Its Implications for Management Thought and Institutional Reform
Rashmi Prasad, University of Alaska Anchorage
Irfan Ahmed, Sam Houston State University ......................................................................635

CHAPTER 25: SPORT MARKETING...........................................................................640

Good Game, Good Game: Applying SERVQUAL to Assess an NFL Concession's Service
Quality
Brian V. Larson, Widener University
Doug Seymour, Widener University..........................................................................641

Convergence in Mississippi: A Spatial Approach
Mihai Nica, Jackson State University
Ziad Swaidan, University of Houston-Victoria..........................................647

CHAPTER 26: STRATEGIC MANAGEMENT AND MARKETING.........................654

Exploring Critical Strategic Management


Kok Leong Choo, University of Wales Institute, Cardiff, UK .................655

Toward an Understanding of Relevant Strategic Organizations: A Fuzzy Logic Approach


Jean-Michel Quentier, ESCPAU School of Business, France...................................660

Total Quality Management Acceptance and Applications in Multinational Companies: An


Empirical Examination
Abbass Alkhafaji, Slippery Rock University
Nail Khanfar, Nova Southeastern University ............................................................666

The Value Relevance of Hospital Integration Strategies, Ownership Control Characteristics


and Divestiture Decisions
Richard P. Silkoff, Eastern Connecticut State University .........................................671

Managing and Measuring Industry Analyst Relations


A. Abbott Ikeler, Emerson College............................................................................677

CHAPTER 27: TEAMS AND TEAMWORK..................................................................683

A Comparison of Student Perceptions of Teamwork in the Academic and Workplace


Environments
Nathan K. Austin, Morgan State University
Felix Abeson, Coppin State University
Michael Callow, Morgan State University ................................................................684

An Examination of the Relationship Among Self-Monitoring, Proactivity, and Strategic


Intentions for Handling Conflict
Gerard A. Callanan, West Chester University
David F. Perri, West Chester University
Roberta L. Schini, West Chester University..............................................................690

CHAPTER 28: STUDENT PAPERS ................................................................................696

Martha Stewart: From Leona Helmsley to Folk Heroine


Paula Baldwin, University of Texas at San Antonio .................................................697

Case Study of Toll Road Proposal for Loop 1604


Sara V. Garcia, University of Texas at San Antonio
Jessica M. Perez, University of Texas at San Antonio ..............................................703

Lessons of Optimum Leadership from Small-City Mayors


Michael A. Moodian, Pepperdine University ............................................................708

Cultural Adaptation of Austrian and U.S.-American Websites: A Comparison Using
Hofstede’s Cultural Patterns
Wesley McMahon, California State University, Chico
Dominik Maurer, California State University, Chico ...............................................714

Valero Energy Corporation and Rising Gas Prices


Amber Stanush, University of Texas at San Antonio
Courtney Syfert, University of Texas at San Antonio ...............................................720

CHAPTER 1

INTRODUCTION

THE BUSINESS RESEARCH YEARBOOK
PUBLICATION PERSPECTIVES

Marjorie G. Adams, Morgan State University


Abbass F. Alkhafaji, Slippery Rock University

The International Academy of Business Disciplines (IABD) is hosting its 18th Annual
Conference in 2006. There are many challenges that we face in the new millennium, among
them the rapid evolution of technology, the globalization of the marketplace, green
marketing, and the increasing diversity of the global workforce. We as teachers and scholars
should strive to meet and overcome these challenges. We hope that IABD will continue
to provide an interactive forum to identify, discuss, and evaluate alternative solutions to many
of these challenges.

The IABD conferences continue to grow, drawing scholars from a variety of
institutions, including small and medium-sized schools from around the world. You may be
surprised to learn that there are no paid IABD staff members; it is truly a volunteer
organization, and IABD volunteers are highly committed to meeting its stated objectives. The
Academy provides a unique international, interdisciplinary forum for professionals and faculty
in business schools, communications programs, and other social science disciplines to discuss
common interests. The International Academy of Business Disciplines especially seeks to
bridge the gap between theory and practice, increasing public awareness of business problems
and opportunities in the international marketplace. Attendees include scholars, corporate
executives, and policy makers from many countries, with expertise in more than thirty
business-oriented fields. About 80% of the attendees at IABD conferences are from business schools.
IABD is unique in many respects, but it is particularly exciting that the participants willingly
embrace interdisciplinary studies. The quality of work continues to improve each year, with
many of the best papers being included in the Business Research Yearbook, the annual
publication of the International Academy of Business Disciplines (IABD).

Sometimes there is confusion as to how to classify the Yearbook’s scholarship when it
comes time for annual evaluations and tenure/promotion decisions. Although some perceive
the Business Research Yearbook as a proceedings publication, it is much more than that: as a
yearbook, it is organized to present the results of cutting-edge research. Unlike proceedings,
the Business Research Yearbook is a refereed publication with an ISBN and a Library of
Congress registration. It is also listed in Cabell’s Directory of Publishing Opportunities in
Management and is available for purchase by institutions and libraries.

The selection process leading to publication is rigorous. The overall acceptance rate for
submissions to the BRY is about 60%. All papers accepted for presentation at the IABD annual
conference, with the exception of special invited workshops, go through peer review using a
double-blind procedure typical of the better academic organizations. Based upon the
recommendations of the reviewers, the track chair may accept or reject a paper, or request
revisions. Once a paper is accepted for presentation, it is eligible to be considered for
publication in the Business Research Yearbook.

The Editors

CHAPTER 2

ACCOUNTING THEORY

THE 28% CAPITAL GAINS TAX - AN ANTIQUE?

Annette Hebble, University of St. Thomas


hebble@stthom.edu

Nancy Webster, University of St. Thomas


webstern@stthom.edu

ABSTRACT

Transactions involving collectibles are proliferating because of new venues for buying
and selling, such as the Internet. The tax rules categorize collectors as hobbyists, investors, or
dealers, but the criteria for these categories do not seem to take into account the ease with
which collectors can add to or dispose of items from a collection today. The 28 percent
long-term capital gains rate on collectibles and the limitations on losses and related expenses
are not favorable to collectors. In many instances, the collector is better off attempting to
engage in enough activity to be a dealer with a profit motive, rather than being taxed as an
investor on profitable dispositions.

I. COLLECTIBLES

With the ease of market access through internet auction sites, the treasure-from-trash
excitement aired weekly on Antiques Roadshow, and maturing baby boomers needing
investment income, these and other factors combine to make the collectibles market, once the
province of Larsonesque stamp enthusiasts, DAR ladies, and rich Sotheby’s patrons, a
booming one. In 2003, for example, average Americans spent over two billion dollars buying
and selling merchandise on eBay alone, with a large percentage of those items going to
hobbyists and collectors.
An example of the collectors’ boom was chronicled by the Wall Street Journal
(January 23, 2004), which outlined the art market’s recent 15-year rise using a cross-section
of 50 artists of varying styles and periods whose works had traded frequently in those years.
The artist whose works had appreciated the most was the photographer Cindy Sherman, with a
372% increase over a 10-year period. In comparison, the Dow Jones Industrial Average
(DJIA) grew 179% over the same period, according to the article, and the average gain for all
artists’ works in the survey was 102%. Although the average appreciation for paintings did
not equal that of the Dow, we must consider that the rise in the Dow over that period was
unprecedented and unlikely to be repeated. Although the Jobs and Growth Tax Relief
Reconciliation Act of 2003 reduced the rate on most long-term capital gains, it left collectibles
as 28% property. The gains of the last 15 years in the art market, one small sector of the
collectors’ field, make collecting a worthwhile area to consider for anyone who derives
pleasure from owning and living with material possessions.
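To illustrate the rate differential, the sketch below compares after-tax proceeds on a hypothetical $100,000 long-term gain at the 28% collectibles rate versus the 15% rate the 2003 Act set for most other long-term gains. This is a minimal illustration only: the rates are top statutory rates, the gain amount is hypothetical, and an actual taxpayer's liability depends on bracket and other factors.

```python
# After-tax proceeds on a long-term gain: 28% collectibles rate vs. the
# 15% rate on most other long-term capital gains (top statutory rates;
# the gain amount is hypothetical).

def after_tax_gain(gain, rate):
    """Gain remaining after long-term capital gains tax at `rate`."""
    return gain * (1 - rate)

gain = 100_000
collectible = after_tax_gain(gain, 0.28)
other = after_tax_gain(gain, 0.15)
print(f"collectible keeps ${collectible:,.0f}, other asset keeps ${other:,.0f}")
print(f"extra tax on the collectible: ${other - collectible:,.0f}")
```

On these assumptions, the collector of a painting keeps $13,000 less of the same gain than the holder of stock, which is the disadvantage the paper describes.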
Collectible objects fall into one of three categories, depending upon their economic
performance during a downturn. The most collectible items never, or infrequently, appear on
the market. Such items are rare and unique and their current owners have little desire to part
with them. The market for these items (old master paintings, 17th- and 18th-century furniture,
large, flawless diamonds) will decline little in a financial downturn and will inevitably
appreciate to higher levels than before. The second tier of collectibles, valuable but not the
ultimate, rare but not unique, includes many items that we see for sale in the papers or on the
Internet daily: fine paintings, porcelains, and other replaceable collectibles. The market for
these items is thick but has less price support, and knowledgeable buyers will wait until the
prices of these objects hit bottom before buying. The third level, illustrated aptly by the Wall
Street Journal article, is more faddish and frequently follows fashion, as the hapless buyer of a
painting by a modern artist, whose works had recently been acquired by glamorous people,
discovered. The market collapsed when the stars stopped buying, and his painting did not
command nearly the price he had anticipated. These items never really develop much of a
market; they have no price support, and when the price drops, there is usually no recovery
(Crumbley, 1981, 77-79).

Second- and third-level collectors gain expertise in their area and can make money by
occasional sales or trading. Becoming knowledgeable in a specialty and trading are now much
easier than in the past, given the plethora of publications, newspaper articles, museum events,
and internet information. Individuals engaging in such transactions, however, have limited
guidance with respect to the tax treatment of their collecting activities. The tax consequences
of investing in and trading collectibles are confusing and uncertain. The rules are subjective
and limit expenses and losses resulting from collecting activities unless the taxpayer can
demonstrate a profit motive. Neither Congress nor the Internal Revenue Service has
promulgated new rules that address the ease with which individuals can find buyers for an
array of collectibles over the Internet. It is timely to review the myriad tax rules that currently
exist.

II. TAX IMPLICATIONS

Section 408(m)(2) defines collectibles as any work of art, any rug or antique, any
metal or gem, any stamp or coin, any alcoholic beverage, or any other tangible personal
property specified by the Secretary and held more than one year. Certain types of coins are
specifically excluded. As an investor in collectibles, how do you manage your investment to
take the maximum deduction for your expenditures, pay the least amount of tax on your gains,
and be able to write off any losses you may incur? There are three possible classifications for
a collector: hobbyist, investor, or dealer. The hobbyist has no profit motive, whereas the
investor and dealer do. A loss or expense associated with a hobby is considered personal and
is therefore deductible only to the extent of hobby income, but if an individual is able to show
a profit motive for an activity, losses from the activity are fully deductible.

The determination of the presence or absence of a profit motive is a subjective
one. The courts have clarified that “a business will not be turned into a hobby merely because
the owner finds it pleasurable; suffering has never been a prerequisite to deductibility.
Success in business is largely obtained by pleasurable interest therein.” (Jackson v.
Commissioner, 1972). The tax law includes a presumptive rule in Section 183, which
provides that an activity is profit-seeking if it shows a profit in at least three of the
prior five years. If this profitability test is met, the activity is presumed to be a trade or
business rather than a personal hobby, and the burden of proof (proving the activity is
personal rather than a trade or business) shifts from the taxpayer to the IRS. If the presumption
rule is not met, the activity may still qualify as a business, but the taxpayer must show an
intent to engage in profit-seeking activity.
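The three-of-five-years presumption lends itself to a one-line check. The sketch below is a minimal illustration only: the function name and figures are hypothetical, and the real determination also weighs the facts and circumstances of the activity.

```python
# Section 183 presumption sketch: an activity is presumed profit-seeking
# if it shows a profit in at least three of the five most recent years.

def presumed_for_profit(profit_by_year, current_year, window=5, required=3):
    recent = [profit_by_year.get(current_year - i, 0) for i in range(window)]
    return sum(1 for p in recent if p > 0) >= required

# Hypothetical collector: profits in 2002, 2003, and 2005 -> presumption met.
history = {2001: -500, 2002: 1200, 2003: 300, 2004: -100, 2005: 450}
print(presumed_for_profit(history, 2005))  # True
```

Meeting the test does not make the classification automatic; it only shifts the burden of proof to the IRS, as described above.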
If the activity is deemed to be a hobby, the expenses must be deducted in the following
order according to Section 1.67-1T(a)(1)(iv):
• Amounts deductible under other sections without regard to the nature of the activity,
such as property taxes and home mortgage interest.

• Amounts deductible under other sections if the activity had been engaged in for profit,
but only if those amounts do not affect adjusted basis. Examples include maintenance,
utilities, and supplies.
• Amounts that affect adjusted basis and would be deductible under other sections if the
activity had been engaged in for profit. Examples include depreciation, amortization
and depletion.
These amounts are deductible from adjusted gross income (AGI) as miscellaneous itemized
deductions to the extent they exceed 2 percent of AGI. If the taxpayer uses the standard
deduction rather than itemizing, all hobby expenses are nondeductible, even though the
revenues from sales must still be reported elsewhere on the return.
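The ordering rules and the 2-percent floor can be illustrated numerically. The sketch below is a simplified model under stated assumptions: the taxpayer itemizes, the floor is applied to these deductions in isolation, and the function name, tiers, and amounts are hypothetical.

```python
# Simplified sketch of the hobby-loss ordering rules described above.
# Tier-1 amounts (e.g. property taxes) are deductible regardless of the
# activity; tier-2/3 amounts are allowed only up to remaining hobby
# income, and then only as itemized deductions subject to the
# 2%-of-AGI floor. Illustrative numbers only.

def hobby_deductions(hobby_income, tier1, tier2, tier3, agi):
    cap = max(hobby_income - tier1, 0)          # income left after tier 1
    allowed2 = min(tier2, cap)
    allowed3 = min(tier3, cap - allowed2)       # tier 3 comes last
    misc = allowed2 + allowed3
    after_floor = max(misc - 0.02 * agi, 0)     # 2%-of-AGI floor
    return tier1 + after_floor

# Hobby income $3,000; $500 property tax, $2,000 maintenance,
# $1,500 depreciation; AGI $50,000.
print(hobby_deductions(3000, 500, 2000, 1500, 50000))
```

In this hypothetical, the income limit cuts depreciation to $500 and the floor removes another $1,000, leaving a $2,000 total deduction despite $4,000 of expenses.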
There are certain measures you can take to establish the earnestness of your business endeavor
in an effort to avoid the hobby loss rules. Some of them include:
• Maintain a businesslike attitude in all transactions.
• Emphasize the profit potential of your collection or hobby rather than the pleasant or
recreational aspects of it.
• Keep any aspects of your collection or hobby that are personal, and not for investment
or profit, separate, with separate records, including separate checking accounts.
• All transactions must be recorded and the records preserved.
• Ask dealer contacts to make investment suggestions and note their suggestions.
• Do not attempt to deduct literature that emphasizes the hobby aspect of your activity.
• Do not use investment property for personal use (no collections bought as an
investment in the home).
• The greater your wealth, the harder it will be to prove that profit, rather than pleasure,
is the primary goal of your hobby or collection.
• If all else fails and you are classified as a hobbyist, sell enough of your collection each
year to pay for your maintenance expenses. (Crumbley, 1981, 18-19).

III. DISPOSITIONS AT A LOSS

It has been said that in order to make money in collectibles you need to buy like a
dealer (as close to wholesale as possible), think like a hobbyist (avoid fads and look for steady
increases in value over a period of years), and keep records like an investor (proper records
mean losing less to taxes) (Crumbley, 1981, 43). Should something go amiss, however, and
you incur a net loss, the tax law provides three categories under which losses can be deducted:
• The loss is incurred in a trade or business (applicable to dealers).
• The loss is incurred in a transaction entered into for profit (investor).
• The loss is not connected to a trade or business and arose from fire, storm, shipwreck,
casualty, or theft (dealer, investor, or hobbyist).

Let’s pretend you have made the mistake that anyone who has done much collecting
has made and bought a counterfeit item. If you paid $1,800 for the counterfeit item and find it
is worth $100, the loss is deducted when you sell it if you are a dealer or an investor. If you
are a collector, you may take a deduction as a theft loss in the year you made the bad purchase,
since there is no reasonable prospect of recovery. If you are a collector and sell it, the loss is
characterized as a capital loss, if it can be deducted at all. Capital losses are of limited benefit
unless the taxpayer has enough long-term or short-term capital gain to offset the loss.
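The treatment of the counterfeit loss by classification can be summarized in a small sketch. This is illustrative only: the capital-loss limitations just mentioned are not modeled, and the function name is hypothetical.

```python
# How the $1,700 loss on the counterfeit ($1,800 cost, $100 value) is
# treated under each classification, per the discussion above.

def loss_treatment(classification, cost=1800, value=100):
    loss = cost - value
    if classification in ("dealer", "investor"):
        return f"${loss} loss deducted when the item is sold"
    if classification == "hobbyist":
        return f"${loss} possibly deductible as a theft loss in the year of purchase"
    raise ValueError("unknown classification")

for c in ("dealer", "investor", "hobbyist"):
    print(f"{c}: {loss_treatment(c)}")
```

The dollar figures follow the example in the text; only the timing and character of the deduction change with the classification.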

If you have followed the criteria for a business and the IRS is still interested in your
records, note that the Supreme Court has given certain illustrations of conduct from which an
attempt to evade or defeat any tax can be inferred (IRS Manual 9.1.3.3.2.2.2):
• Keeping a double set of books.
• Making false entries or alterations of invoices or documents.
• Destroying books and records.
• Concealing assets or covering up sources of income.
• Handling one’s affairs so as to avoid making the records usual in transactions of the
kind.
• Any conduct the likely effect of which would be to mislead or conceal.

Violations of the types given above make the headlines every day, but many cases that
end up in court do not involve acts as blatantly fraudulent as the ones listed. The taxpayers
involved were, instead, mostly serious collectors who were making, or intending to make,
money on their collections and honestly believed they should receive a deduction for expenses
and losses associated with their collections.

IV. CASE LAW

The Wrightsmans incurred expenses in their collection activities for which they
wanted to claim a tax deduction. In support of their contention that their collecting activity
was for investment purposes, they relied upon a previous court case, George F. Tyler
(March 6, 1947). In 1947, Mr. Tyler won the ruling in his case that expenses and losses
associated with his stamp collection were deductible, since the collection was held for
investment purposes. The court found that Mr. Tyler exhibited scant knowledge of, or interest
in, stamps and that he did not interact with stamp hobbyists; although he got some pleasure
out of the stamps, his activities were undertaken primarily for profit. Mr. Tyler’s case is a
rare exception: in an overview of profit-versus-pleasure cases over a 51-year span, his is one
of the few in which the tax courts ruled in the taxpayer’s favor. The Wrightsmans lost their
case.

V. FINAL COMMENTS

Definitions are subjective and vague in this area. Existing case law was decided in an
era with limited outlets for disposing of collectibles at their fair value, so it really does not
address today’s approach to buying and selling the many kinds of items that may come under
the heading of collectibles. Today’s collector may also incur additional expenses in
connection with advertising, freight, and travel. How does this new approach to buying and
selling collectibles fit within the hobbyist, dealer, or investor classifications?

REFERENCES

Charles B. Wrightsman and Jayne Wrightsman v. The United States, U.S. Court of Claims,
No. 364-66, 428 F.2d 1316, July 15, 1970.
Crumbley, Larry, and Jerry Curtis. Donate Less to the IRS. The Vestal Press Ltd., 1981.
G.F. Tyler, 6 TCM 275, Dec. 15,671 (M).
Internal Revenue Manual. Attempt to Evade or Defeat Any Tax, Section 9.1.3.3.2.2.2.
Revision date: 2003-08-11.
Jackson v. Commissioner, 59 T.C. 312 (1972).
The Wall Street Journal, January 23, 2004.

DOES THE IMPLEMENTATION OF SFAS NO. 131
CONVEY USEFUL INFORMATION?

Yousef Jahmani, Kentucky State University


Yousef.Jahmani@Kysu.Edu

ABSTRACT

The purpose of this paper is to test the information content of Statement of Financial
Accounting Standards (SFAS) No. 131, “Disclosures about Segments of an Enterprise and
Related Information.” A random sample of one hundred and eleven companies from those
listed in the Business Week Global 1000 for the year 1997 was selected. Two statistical
techniques, a dummy variable and analysis of covariance, were utilized to test whether the
new standard conveys useful information. The results indicate that application of the new
standard does not convey statistically significant additional information that is useful.
Investors and other users either have already had access to the information disclosed under
SFAS No. 131, or managers can find ways to avoid the hidden costs of disclosing information
that may harm the company but benefit investors and competitors.

I. INTRODUCTION
The FASB issued Statement No. 14, “Financial Reporting for Segments of a Business
Enterprise,” in 1976. It requires listed companies to disclose segment information by both line
of business and geographic area in their annual reports. The absence of a precise definition of
a business segment, the lack of consideration of the internal organization of the company, and
the relatively high cost of providing such information led interested parties to express great
dissatisfaction with the statement. Many, including the Association for Investment
Management and Research (AIMR, 1993), complained that the definition of a segment is
imprecise and that there are many practical problems in applying it. The AIMR also
recommended that segment disclosure in the annual report be based on the internal
organization of the company.
The AICPA Special Committee on Financial Reporting (1994) provided similar
recommendations and asked standard-setting bodies to give the highest priority to this
issue. SFAS No. 14 was subsequently amended by SFAS No. 94, “Consolidation of All
Majority-Owned Subsidiaries,” to remove the special disclosure requirements for previously
unconsolidated subsidiaries, and later superseded by SFAS No. 131, “Disclosures about
Segments of an Enterprise and Related Information,” which retains the requirement to report
information about major customers. The new standard requires that a public business
enterprise report financial and descriptive information about its operating segments. SFAS
No. 131 implemented a management approach, focusing on the way in which management
organizes segments internally to make operating decisions and assess performance. The
objective of this approach is to harmonize internal and external reporting. The statement
became effective for fiscal years beginning after December 1997.
Recent studies by Herrmann and Thomas (2000) and Street et al. (2002) show that, with
the application of the new standard, companies are reporting a greater number of
line-of-business segments and more information about each segment, and that the consistency
of segment information with other parts of the annual report has improved. What is missing in
previous studies is an analysis of the behavior of the other side of the market, that is, the
demand side of information. The purpose of this study is to focus on market reaction to the
above standard.

II. LITERATURE REVIEW
Researchers on SFAS No. 131 have examined different aspects of its application in
order to evaluate its usefulness. Herrmann and Thomas (2000) surveyed the annual reports of a
sample of U.S. multisegment firms listed in the 1998 Fortune 500 to compare the segment
disclosures under SFAS No. 131 with those reported the previous year under SFAS No. 14.
They found that over two-thirds of the sample firms changed segment definitions upon
adoption of the new standard, and showed that the application of the management approach
has resulted in several improvements. First, the new standard has increased the number of
firms disclosing segment information. Second, companies are disclosing more items for each
segment.
Street et al. (2000) assessed the 1997 and 1998 annual reports of a sample of the
largest publicly traded U.S. companies to determine whether SFAS No. 131 adequately
addressed user concerns about segment disclosures and the extent to which the expected
benefits set forth in the new standard materialized. The findings suggest that, in general, the
new standard has improved business reporting. The improvements include an increase in the
number of reported segments and significantly greater consistency of segment information in
1998 compared with the year before.
In general, previous studies tested the magnitude of the information disclosed by
suppliers of information under the new standard, but they did not test the usefulness of such
information to users. The purpose of this paper is to test the usefulness of the application of
SFAS No. 131 to investors.

III. SAMPLE SELECTION


The Business Week Global 1000 was used to identify the U.S. companies. Business
Week ranks publicly held companies according to market capitalization, an indicator of size
from an investment perspective, and includes 480 U.S.-domiciled companies. Global 1000
companies are likely to have international operations and thus geographic segments. Hence, a
sample drawn from the U.S. Global 1000 companies allows for an examination of the impact
of SFAS No. 131 on both LOB-based reporting and geographic disclosures provided as
enterprise-wide data.
The annual reports for 1997 and 1998 for all U.S. Global 1000 companies were
requested. Excluded from the list were companies that were in the energy or finance
industries (due to SFAS No. 131’s liberal aggregation criteria for operating segments that
operate in similar environments), had no LOB or geographic segment disclosure in 1997,
adopted SFAS No. 131 in 1997, or were involved in a merger, major acquisition, spin-off, etc.
As indicated by Street et al. (2000), the last criterion ensures that differences (or lack
of differences) identified by the research are primarily a function of the new SFAS No. 131
guidelines rather than being driven by changes in the makeup of the companies’ operating
segments. Applying the above criteria, and dropping companies that were not listed on the
New York Stock Exchange for the period under consideration (a criterion necessary to obtain
the companies’ share prices), the author identified 111 companies.

IV. METHODOLOGY

The objective of this paper is to measure the structural change, if any, in beta due to
the segmental data disclosure. There are three ways to do this: the dummy-variable technique,
analysis of variance, and contingency-table analysis. With respect to the dummy variable, the
subject of this inquiry is the single-index market model shown in equation (1):

R_it = α_i + β_i R_mt + e_it                                  (1)

where R_it denotes the return for the ith firm in the tth week, R_mt represents the market
return for the tth week, α_i and β_i are the regression intercept and slope, and e_it is the
unsystematic risk. Equation (2) is a modified version of the single-index market model in
equation (1), formulated to test for a structural change in betas:

R_it = α_i1 + β_i1 R_mt + β_i2 (D_t R_mt) + u_it              (2)

The variable D_t in equation (2) is a binary variable that takes the value of one in the period
of segmental reporting disclosure and zero elsewhere. The coefficient β_i2 on the dummy
term (D_t R_mt) measures the differential effect of segmental reporting disclosure on beta,
β_i1, for the ith firm; over the non-disclosure period, equation (2) reduces to equation (1). If
the beta for the ith firm differs between the non-disclosure and disclosure periods, then β_i2
will be significantly different from zero. A t-test was used to test the significance of individual
coefficients, while an F-test was used to test the significance of the whole regression; the
F-test must therefore be significant in order to accept or reject a hypothesis. An alternative
test procedure is to use analysis of covariance (ANCOVA). Equations (1) and (2) and the
analysis of covariance will be used to test the following hypothesis:

HYPOTHESIS:
There is no difference in the firm's perceived risk before and after the implementation
of SFAS No. 131. That is
H_0: β_1 = β_2
where β_1 and β_2 represent the firm's perceived risk before and after the disclosure of
segmental data, respectively.
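Equation (2) is an ordinary least squares regression with an interaction term, so the t-test on β_i2 can be reproduced directly. The sketch below is a minimal illustration using simulated weekly returns rather than the study's NYSE sample; all names and parameter values are hypothetical.

```python
# Dummy-variable test of equation (2) on simulated data: regress the
# firm return on the market return and on (D_t * market return), then
# t-test the interaction coefficient beta_i2 for structural change.
import numpy as np

rng = np.random.default_rng(0)
n = 156                                      # three years of weekly returns
rm = rng.normal(0.002, 0.02, n)              # market return R_mt
d = (np.arange(n) >= n // 2).astype(float)   # D_t = 1 after disclosure
ri = 0.001 + 1.1 * rm + 0.0 * d * rm + rng.normal(0, 0.01, n)  # true beta_i2 = 0

X = np.column_stack([np.ones(n), rm, d * rm])         # [const, R_mt, D_t*R_mt]
coef, *_ = np.linalg.lstsq(X, ri, rcond=None)         # OLS estimates
resid = ri - X @ coef
s2 = resid @ resid / (n - X.shape[1])                 # residual variance
se = np.sqrt(s2 * np.linalg.inv(X.T @ X).diagonal())  # coefficient std errors
t_beta2 = coef[2] / se[2]
print(f"beta_i2 = {coef[2]:.4f}, t = {t_beta2:.2f}")  # compare |t| with 1.98
```

A |t| below the critical value of 1.98, as the paper reports for most sample firms, would indicate no significant shift in beta after disclosure.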

V. RESULTS

The decision rule is that if the coefficient of the dummy variable is significant at the .05 level
for the individual firms in this group, then the hypothesis that the application of SFAS
No. 131 conveys useful information will be accepted. A t-test is used to determine whether
beta changes significantly with LOB segmental reporting disclosure.
The results of the computed betas, together with the computed t-statistics, are reported
in Table I below. The mean of the dummy-variable coefficients is 0.00806, while the mean
t-statistic is -0.106, which is insignificant at the 95% level of confidence.

Table I
Descriptive Statistics: Coefficient and t-test

VARIABLE     N     MEAN      MEDIAN     TRMEAN     STDEV      SE MEAN
β2 Coef.    111    0.00806   -0.00132   -0.00191   0.10153    0.00964
t-test*     111   -0.106     -0.180     -0.185     1.149      0.109

VARIABLE    MINIMUM    MAXIMUM    Q1         Q3
Coef.       -0.09892   1.04300    -0.00614   0.00284
t-test      -2.160     7.230      -0.850     0.330

* Critical value is 1.980 at 155 degrees of freedom.

Table II shows the results of the F-test for the whole regression. The mean F-statistic is 10.57,
which indicates that the relationship between the independent variables and the dependent
variable is weak. When the regression is run without the dummy variable, the mean
F-statistic is significantly higher, at 32.78.

Table II
Descriptive Statistics: F-test for the Whole Regression

VARIABLE     N     MEAN     MEDIAN    TRMEAN    STDEV    SE MEAN
F-test*     111    10.57    2.50      8.72      14.17    1.34

VARIABLE    MINIMUM    MAXIMUM    Q1      Q3
F-test      0.01       76.90      0.62    17.66

* Critical value is 3.0 at 2 and 155 degrees of freedom.

When ANCOVA is applied, the results do not change. Table III shows the ANCOVA
results. The mean F-statistic is 0.7843, which is insignificant at the 95% level of confidence.

Table III
Descriptive Statistics: Analysis of Covariance

VARIABLE     N     MEAN      MEDIAN    TRMEAN    STDEV     SE MEAN
F-test*     111    0.7843    0.3400    0.6489    1.0312    0.0949

VARIABLE    MINIMUM    MAXIMUM    Q1       Q3
F-test      0.0000     5.2300     0.0700   1.0275

* Critical value is 3.0 at 2 and 155 degrees of freedom.

Based on the foregoing results, the hypothesis that there is no difference in the firm's
perceived risk before and after the implementation of SFAS No. 131 cannot be rejected.
These results are consistent with the logic of information. It is important to differentiate
between two kinds of impact that useful information can have. The first arises when the role
of information is confined to reducing the uncertainty surrounding a decision. In this case, the
decision is right, but the decision-maker is uncertain because information is insufficient. The
decision-maker first utilizes the quantitative information given in the annual reports and
information from other sources; the role of the new information (segmental reporting) is to
confirm the previous decision by reducing the spread of the probability distribution around
the mean. In this case, the information is new and useful, but there is no significant market
reaction to its release. Its impact can be measured only by asking investors and other users of
financial statements to assign a probability distribution to their expected returns.
The second kind of impact arises when the effect of the information released extends
to inducing a significant revision of the previous decision. It is this kind of information whose
effect can be captured indirectly by measuring the movement in share prices. Thus, in the
first case, even when the information released is useful, share prices are not expected to be
affected. The current results reflect these facts.
Moreover, there are situations in which information has no impact at all. These occur
when the information disclosed is perceived as useless or redundant. In this case, share prices
would not be expected to react to the release of such information.

VI. CONCLUSION
The results of this study indicate that the application of SFAS No. 131 does not convey
statistically significant additional information that is relevant to investors and other users of
financial statements. Probably investors and other users of financial statements either have
already had access to the information disclosed under this standard, or managers can find
ways to avoid the hidden costs of disclosing information that may harm the company but
benefit investors and competitors.

REFERENCES

Street, Donna L. and Nancy B. Nichols. "LOB and Geographical Segment Disclosures: An Analysis
        of the Impact of IAS 14 Revised." Journal of International Accounting, Auditing and
        Taxation, Vol. 11, Issue 2, 2002, pp. 91-123.
Street, Donna L., Nancy B. Nichols, and Sidney Gray. "Segment Disclosure under SFAS No. 131:
        Has Business Segment Reporting Improved?" Accounting Horizons, Vol. 14, No. 3,
        September 2000, pp. 259-285.
Chen, Peter F. and Guochang Zhang. "Heterogeneous Investment Opportunities in Multi-Segment
        Firms and the Incremental Value Relevance of Segment Accounting Data." The Accounting
        Review, Vol. 78, No. 2, 2003, pp. 397-428.

INDUSTRY AND MARKET’S EFFECTS ON THE
USEFULNESS OF BOOK VALUE

Wei Xu, William Paterson University


xuw@wpunj.edu

Lianzan Xu, William Paterson University


xul@wpunj.edu

ABSTRACT

This study explores industry and market effects on the usefulness of book value of equity
in the valuation of companies reporting negative earnings. Our evidence suggests that the
existence of an anomalous negative price-earnings relation is conditional on both the
economic environment and the industry group, and that the anomaly is consistently present
only for high-tech companies throughout our sample period. In addition, the usefulness of
book value of equity in eliminating the anomaly and improving the model's explanatory power
also varies across industry sectors.

I. INTRODUCTION

Collins et al argued in their 1999 paper that the anomalous negative relationship
between stock prices and negative earnings is induced by a misspecification of the simple
earnings capitalization model and can be eliminated by augmenting the model with book value
of equity. We doubt that this conclusion holds universally across industry groups and market
conditions. First, with the rapid growth of the high-tech industry, especially over the past
decade, high-tech companies account for a much larger share of loss-reporting firms, and
these companies invest heavily in intangible assets and R&D expenditures, which must be fully
expensed. For high-tech firms, then, earnings are less informative and book value can hardly
measure a firm's true wealth (Barron et al, 2002). Low-tech firms, by contrast, usually have
stable income and large long-term capital investments, so book value of equity is expected to
play a bigger role in their valuation. Second, after climbing to its historical peak of
5,048.62 in March 2000, the Nasdaq suffered a severe fall beginning in April 2000, the start
of the bear market period. In just one year the Nasdaq Composite Index dropped by more than
3,000 points, and the index did not touch bottom until the last quarter of 2002. Many
high-tech stocks fell as much as 90% during this crash. Technology stocks had evidently been
dramatically overvalued, which makes the valuation of high-tech stocks an even bigger puzzle.
Meanwhile, how the 2000 Nasdaq fall affected the value relevance of accounting information
provided by low-tech firms is another question of interest.

To further explore these industry and market effects, we investigated the anomalous
negative price-earnings relation and the usefulness of book value of equity for both
high-tech and low-tech firms in two distinct time periods: before and after the 2000 stock
market crash. Our evidence suggests that the existence of the anomalous negative
price-earnings relation is conditional on both the economic environment and the industry
group. We found that the anomaly persists in the high-tech industry group after the inclusion
of book value of equity, in both the pre- and post-crash periods. By contrast, book value of
equity plays a more significant role in valuing low-tech loss firms: including book value in
the simple earnings capitalization model not only eliminates the negative price-earnings
relation on average but also improves the model's explanatory power dramatically over our
10-year sample period.

The rest of this paper is organized as follows. Section II describes the sample
selection and data. Section III presents and analyzes the empirical evidence on the
price-earnings relation using the simple earnings capitalization model. Section IV reports
the results of adding book value to the valuation model. Section V concludes.

II. SAMPLE SELECTION

In this study, we used a recent sample period, 1995 to 2004, chosen to (1) provide a
period long enough for association studies and (2) ensure identical numbers of event years
before and after the 2000 Nasdaq crash for comparison. To carry out our tests, we created two
contrasting industry groups, high-tech and low-tech, using 3-digit SIC codes (see Table I
below). The industry grouping is based on, among other considerations, whether firms in the
industry are likely to have significant intangible assets, reported or unreported, consistent
with Francis and Schipper's 1999 study. For each industry group, we used all firm-year
observations during the 10-year sample period from the COMPUSTAT current and research
databases to construct our initial data set and eliminated observations, among other
criteria, for which: (1) December is not the fiscal year-end (to mitigate the temporal effect
of stock market fluctuation on stock price); (2) stock price, EPS, or book value per share is
missing or lies more than three standard deviations from its respective mean; or (3) the
total number of common shares outstanding decreases by more than one-third from the previous
year, which suggests a reverse stock split to maneuver EPS. Our selection process generated
10,785 usable observations in the high-tech sub-sample (55% loss firms) and 2,413 usable
observations in the low-tech sub-sample (30% loss firms).
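The three elimination criteria can be sketched as a simple data screen. The field names below are our own assumptions for illustration, not COMPUSTAT item codes, and the outlier check is shown for price only (the paper applies it to price, EPS, and book value per share alike):

```python
import statistics

def screen(observations):
    """Drop observations that fail any of the paper's three elimination criteria."""
    prices = [o["price"] for o in observations]
    mu, sd = statistics.mean(prices), statistics.stdev(prices)
    kept = []
    for o in observations:
        if o["fye_month"] != 12:                      # (1) fiscal year-end is not December
            continue
        if abs(o["price"] - mu) > 3 * sd:             # (2) value beyond three std. deviations
            continue
        if o["shares"] < (2 / 3) * o["shares_prev"]:  # (3) shares fell by more than one-third
            continue
        kept.append(o)
    return kept

obs = [
    {"price": 10.0, "fye_month": 12, "shares": 100, "shares_prev": 100},  # kept
    {"price": 11.0, "fye_month": 6,  "shares": 100, "shares_prev": 100},  # non-December FYE
    {"price": 12.0, "fye_month": 12, "shares": 50,  "shares_prev": 100},  # suspected reverse split
]
print(len(screen(obs)))  # prints 1
```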

Table I
High-Tech And Low-Tech Industry Groups
Industries 3-digit SIC codes
283, 357, 360, 361, 362, 363, 364, 365, 366, 367, 368,
High-tech Group
481, 737, 873
020, 160, 170, 202, 220, 240, 245, 260, 300, 307, 324,
Low-tech Group
331, 356, 371, 399, 401, 421, 440, 451, 541

III. PRICE-EARNINGS ANOMALY

We first replicated the price-earnings relation study in each of our industry groups by
using the simple earnings capitalization model:

Pt = α + β Et + εt (1)

where Pt is the firm's stock price three months after the end of fiscal year t plus its
dividend per share for year t (i.e., the cum-dividend price), and Et is bottom-line earnings
per share including discontinued operations, extraordinary items, and accounting changes.
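Model (1) can be estimated year by year with ordinary least squares. A minimal sketch on synthetic data follows; the sample here is simulated, not the paper's, and a negative slope is built in to mimic the loss-firm anomaly:

```python
import numpy as np

def simple_capitalization(prices, earnings):
    """OLS of P_t on E_t for model (1); returns (alpha, beta)."""
    X = np.column_stack([np.ones_like(earnings), earnings])
    coef, *_ = np.linalg.lstsq(X, prices, rcond=None)
    return coef[0], coef[1]

rng = np.random.default_rng(0)
eps = -rng.uniform(0.5, 3.0, 200)                # simulated negative EPS for loss firms
price = 5.0 - 1.2 * eps + rng.normal(0, 0.5, 200)
alpha, beta = simple_capitalization(price, eps)
# beta comes out negative, near the built-in slope of -1.2
```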
Our empirical results partially confirmed Jan and Ou's and Collins et al's research:
although for all firms combined the earnings coefficients are significantly positive in most
years (unreported), when the sample is divided into loss firms and profit firms, an anomalous
negative price-earnings relation exists for loss firms taken together (unreported). However,
as Table II below shows, the anomalous negative relation is present in high-tech loss firms
throughout the whole sample period, while for the low-tech sector it is present only in the
booming market period, the five years prior to the Nasdaq's fall.

Table II
Coefficient Estimates From Regressing Price On
Negative Earnings – Model (1)
Pt = α + β Et + εt

             High-Tech Firms            Low-Tech Firms
Year      β       t-value  Adj R2    β       t-value  Adj R2
1995 -1.905 -20.29** 0.472 -3.377 -7.53** 0.450
1996 -0.981 -12.16** 0.195 -2.901 -6.20** 0.358
1997 -0.915 -7.37** 0.068 -0.653 -1.58 0.020
1998 -0.788 -4.25** 0.024 -1.800 -1.83* 0.042
1999 -0.063 -0.59 -0.001 -0.308 -0.61 -0.010
2000 -0.017 -0.65 -0.001 -0.055 -0.15 -0.013
2001 -0.107 -8.9** 0.062 -0.031 -0.26 -0.008
2002 -0.016 -2.17* 0.004 0.101 0.8 -0.004
2003 -0.274 -3.39** 0.015 0.131 0.23 -0.017
2004 -1.378 -8.44** 0.089 0.062 0.37 -0.018
Pooled -0.644 -7.82** 0.091 -0.883 -1.27* 0.080
* & ** indicate the coefficient is significant at the <.10 level & <.01 level, respectively

Specifically, for high-tech firms taken together, the estimated coefficients on
earnings are significantly positive in 7 of the 10 sample years (unreported). When the groups
are examined separately, high-tech "profit" firms show a positive and homogeneous relation
between price and earnings (unreported), but for high-tech "loss" firms the estimated
coefficient on earnings is significantly negative in 8 of the 10 sample years, as it is for
the 10-year pooled data, supporting the existence of an anomalous negative price-earnings
relation. For low-tech "loss" firms, however, the results differ across sub-periods: the
earnings coefficient is negative in all 5 years prior to the Nasdaq fall and significant in 3
of those years, whereas in the bear market period the earnings coefficients are not
significantly different from zero. A homogeneous negative price-earnings relation is
therefore doubtful across distinct market conditions, even within the same industry group.
The results suggest that the existence of the negative price-earnings anomaly is conditional
on both industry and market conditions.

IV. USEFULNESS OF BOOK VALUE

To further examine the usefulness of book value of equity and its ability to eliminate
the price-earnings anomaly for loss firms, as argued in prior studies, we included BVt-1, the
beginning-of-year book value per share for year t, in the simple earnings capitalization
model. This second regression model is derived from Ohlson's 1995 model and the clean surplus
relation, as explained by Collins et al in the appendix to their 1999 work.

Pt = α + β Et + γ BVt-1 + εt (2)
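Estimating model (2) adds the lagged book value column and, because a regressor has been added, the comparison of fit should use the adjusted R2. A sketch on simulated data (illustrative numbers only, not the paper's sample):

```python
import numpy as np

def augmented_model(prices, earnings, book_values):
    """OLS of P_t on E_t and BV_{t-1}; returns (alpha, beta, gamma, adjusted R^2)."""
    X = np.column_stack([np.ones_like(earnings), earnings, book_values])
    coef, *_ = np.linalg.lstsq(X, prices, rcond=None)
    resid = prices - X @ coef
    ss_res = float(resid @ resid)
    ss_tot = float(((prices - prices.mean()) ** 2).sum())
    n, k = len(prices), 2  # two regressors besides the intercept
    adj_r2 = 1 - (ss_res / (n - k - 1)) / (ss_tot / (n - 1))
    return coef[0], coef[1], coef[2], adj_r2

rng = np.random.default_rng(1)
eps = -rng.uniform(0.5, 3.0, 200)                # simulated negative EPS
bv = rng.uniform(1.0, 10.0, 200)                 # beginning-of-year book value per share
price = 2.0 + 0.1 * eps + 0.8 * bv + rng.normal(0, 0.5, 200)
_, beta, gamma, adj_r2 = augmented_model(price, eps, bv)
# gamma comes out positive, near the built-in 0.8, with a high adjusted R^2
```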

Our test results, presented in Table III, reveal different valuation patterns across
industry sectors in bull and bear markets. In particular, for high-tech loss firms, and
inconsistent with Collins et al's argument, the inclusion of book value does not eliminate
the anomalous negative relationship between price and earnings: the estimated coefficient on
negative earnings remains negative and significant in 7 of the 10 years of our sample period.
Nor is the overall explanatory power of the valuation model improved as other researchers
have argued: the adjusted R2 increases by only about 1%, from 9% to 10%. However, our
findings offer some evidence that the usefulness of book value of equity improves in a
recessionary market relative to a flourishing one. Specifically, in the pre-2000 period the
estimated coefficient on book value is positive and significant in only 2 of the 5 years,
whereas in the post-crash period it is significantly positive in 4 years, suggesting a shift
of emphasis toward book value in valuing high-tech loss firms in a bear market. For low-tech
firms, however, the results are very different. First, the estimated coefficient on earnings
remains negative and significant in only 2 years, and although negative, the overall pooled
coefficient on earnings is not significantly different from zero with the addition of book
value, indicating elimination of the negative price-earnings anomaly.
Table III
Coefficient Estimates From Regressing Price On
Negative Earnings And Book Value – Model (2)
Pt = α + β Et + γ BVt-1 + εt

              High-Tech Firms                  Low-Tech Firms
Year      β          γ        Adj R2      β          γ        Adj R2
       (t-value)  (t-value)            (t-value)  (t-value)
1995 -1.820 0.123 0.483 -3.390 0.006 0.483
(-18.86)** (3.29)** (-7.50)** (0.52)
1996 -0.976 0.010 0.194 -2.551 0.595 0.447
(-12.01)** (0.53) (-5.71)** (3.41)**
1997 -0.897 0.083 0.071 -0.055 1.018 0.479
(-7.21)** (1.93)* (-0.18) (7.96)**
1998 -0.804 0.018 0.023 -1.224 0.867 0.415
(-4.26)** (0.46) (-1.58) (5.90)**
1999 -0.082 0.025 -0.000 -0.551 -0.101 0.080
(-0.75) (0.76) (-1.12) (-2.64)*
2000 -0.012 0.016 -0.000 0.088 0.191 0.013
(-0.45) (1.23) (0.23) (1.71)*
2001 -0.051 0.055 0.076 0.206 0.543 0.365
(-2.95)** (4.45)** (2.08)* (8.17)**
2002 0.017 0.148 0.070 0.115 0.302 0.213
(2.21)* (8.59)** (1.03) (5.29)**
2003 -0.410 0.272 0.052 0.098 0.067 -0.030
(-4.92)** (5.24)** (0.17) (0.53)
2004 -1.407 0.053 0.098 0.096 -0.013 -0.038
(-8.65)** (2.94)** (0.47) (-0.29)
Pooled -0.631 0.077 0.103 -0.68 0.373 0.284
(-5.43)** (2.82)* (-1.22) (3.56)**
* & ** indicate the coefficient is significant at the <.10 level & <.01 level, respectively

Meanwhile, the estimated coefficient on book value is significantly positive in 7 of the 10
years from 1995 to 2004, and these years are evenly distributed across the booming and
declining periods, with no significant change over time in investors' reliance on book value.
In addition, the adjusted R2 increases from 8% to 28.4% once book value is included in the
regression model. The results indicate that for low-tech loss firms, book value of equity is
useful information in the valuation process as an additional proxy for expected future
earnings.
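The comparison above rests on the adjusted R2, which penalizes the extra regressor, so a jump from 8% to 28.4% cannot be an artifact of simply adding a variable. The standard formula as a small helper (a generic textbook formula, not code from the study):

```python
def adjusted_r2(r2, n, k):
    """Adjusted R^2 for n observations and k regressors, excluding the intercept."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# With 100 observations, the penalty for a second regressor is small:
print(round(adjusted_r2(0.30, 100, 2), 3))  # prints 0.286
```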

V. CONCLUSION

This study explored industry and market effects on the usefulness of book value of
equity in valuing companies that report negative earnings. Our evidence from the high-tech
sector demonstrates that the anomalous negative relation persists even when book value is
included in the model, although the usefulness of book value improved marginally during the
market recession. By contrast, evidence from the low-tech sector suggests that the anomalous
price-earnings relation exists only in the flourishing pre-crash period; for the down period
there is no significant evidence of such a relation. In addition, for low-tech loss firms
book value of equity plays a significant role in valuation: including it in the valuation
specification not only eliminates the anomalous price-earnings relation but also improves the
model's explanatory power dramatically.

REFERENCES

Ball, R. and P. Brown. “An Empirical Evaluation of Accounting Income Numbers.” Journal
of Accounting Research, Autumn, 1968, 159-78
Barron, O., D. Byard, C. Kile, and E. Riedl. “High-technology Intangibles and Analysts'
Forecasts” Journal of Accounting Research, 40, 2002, 289-312.
Burgstahler, D., and I. Dichev. “Earnings, adaptation and equity value.” The Accounting
Review, 72, 1997, 187-215.
Collins, D., M Pincus, and H Xie. “Equity Valuation and Negative Earnings: The Role of
Book Value of Equity” The Accounting Review, 74, 1999, 29-61
Francis, J., and K. Schipper. “Have Financial Statements Lost Their Relevance?” Journal of
Accounting Research, 37, 1999, 319-352.
Jan, C., and J. Ou. “The Role of Negative Earnings in the Valuation of Equity Stocks.”
Working Paper, 1995, New York Univ. and Santa Clara Univ.
Jahnke, W. “Valuing New Economy Stocks.” Journal of Financial Planning, 13 (6), 2000, 46-
48.
Kothari, S. P. “Capital Markets Research in Accounting.” Journal of Accounting and
Economics, 31, 2001, 105-231.
Lev, B. and Ohlson, J. “Market Based Empirical Research in Accounting: a Review,
Interpretation, and Extensions.” Journal of Accounting Research, 20 (Supplement),
1982, 249-322.
Lev, B. “On the Usefulness of Earnings and Earnings Research: Lessons and Directions from
Two Decades of Empirical Research”, Journal of Accounting Research, 27
(Supplement), 1989, 153-192
Lev, B. and T. Sougiannis. “Capitalization, Amortization, and Value-Relevance of R&D.”
Journal of Accounting & Economics, 21, 1996, 107-138.
Ohlson, J. “Earnings, Book Values, and Dividends in Security Valuation.” Contemporary
Accounting Research , 11, 1995, 661-687.

EXAMINING PERCEPTIONS OF STUDENT SOLUTION STRATEGIES IN AN
INTRODUCTORY ACCOUNTING COURSE

Ira Bates, Florida A&M University


ira.bates@famu.edu

Joycelyn Finley-Hervey, Florida A&M University


joycelyn.finleyhervey@famu.edu

Aretha Hill, Florida A&M University


aretha.hill@famu.edu

ABSTRACT

The purpose of this research is to examine more closely, using protocol analysis, the
reasoning ability of students in an introductory financial accounting course. Prior research
has shown that accounting faculty increasingly believe that accounting students have poor
reasoning ability. This research finds that the shift in accounting professors' perceptions
is not warranted: the problem is not the students' poor reasoning ability but their lack of
prior training in how businesses view economic transactions.

I. INTRODUCTION

In 1985, Tanner and Carruth examined faculty perceptions regarding the "academic
preparation and motivation of their students." The assessment was conducted by distributing a
questionnaire with Likert-type attitudinal statements to measure accounting faculty
responses. In 1995, a decade later, Tanner, Totaro, and Wilson reexamined the issue by
sending the same questionnaire to another set of accounting faculty.

The responses from the new study were compared with those from the previous study to
determine whether any significant differences existed. One of the major findings of this
comparative study is that "while both faculty groups disagreed that accounting majors had
poor reasoning ability, the 1985 faculty respondents showed a significantly stronger level of
disagreement." This shift indicates growing skepticism about the reasoning ability of
accounting students.

That shift in perceptions over time is the foundation of this research. Essentially,
previous research revealed that accounting professors disagreed that accounting majors
displayed poor reasoning ability, yet over a span of ten years professors disagreed less
strongly. This perceptual shift increasingly attributes poor performance to the reasoning
ability of students. In this study, we ask: is the shift in accounting professors'
perceptions of students' reasoning ability warranted?

The purpose of this research is to examine more closely, using protocol analysis, the
reasoning ability of students in an introductory financial accounting course. Protocol
analysis was used because we wanted to compare the solution strategies of students with those
of accounting professors; the study therefore contributes by assessing both professors' and
students' solution strategies. Whereas Tanner et al (1985, 1995-96) utilized a quantitative
analysis of faculty perceptions regarding student skills, this research extends previous
studies by making a qualitative assessment of both faculty and student perceptions.

II. METHOD

This research was conducted at a large southwestern university. Two junior-level
business administration students, one male and one female, participated in the study. Since
the objective was to understand solution strategies, that is, what participants think about
how to solve problems, a qualitative methodology was used to assess those perceptions.
Protocol analysis was employed because, unlike a typical quantitative survey, it affords an
in-depth examination of participants' rationales.

The two students who participated had completed both an introductory financial
accounting course and an introductory managerial accounting course. They were instructed to
manually solve multiple-choice questions previously given on a summative introductory
accounting exam. These questions were selected because anecdotal evidence suggests the
underlying concepts are difficult for introductory accounting students; in particular, the
questions involve counterintuitive reasoning processes, logical inferences, and timing
sequences. The solution strategies students employ could help explain why these concepts seem
more difficult to comprehend. The students were instructed to verbalize their thoughts and
mental processes as they solved the questions. Their verbalizations were taped, transcribed,
and analyzed in a process matrix. The following section presents the process matrix as
follows: question statement; professor's solution strategy; students' solution strategies;
and analysis of the student protocols against the professor's ideal response.

III. RESULTS

An exam question (see Table I) asked the students to select the option that contains
one of the criteria for revenue recognition under the accrual method of accounting
(additional questions and responses are available from the authors upon request). In effect,
the question asked when an accounting entity should record revenue from an economic business
event or transaction under the accrual method. Of the 350 students who took this exam, 57.4%
(201) selected "A" (the correct response), 18% (63) selected "B", 17.7% (62) selected "C",
and 6.8% (24) selected "D". Of the students who missed the question, similar numbers picked
option "B" (63) and option "C" (62).

Table I: Summative Accounting Exam Problem

Which of the following is one of the criteria for revenue recognition under the accrual
method?
a. a measurable asset is received.
b. cash is collected.
c. an agreement is signed for a service.
d. revenues must exceed expenses.

The basic two-pronged rule of revenue recognition under the accrual method is that (1)
goods and/or services must be delivered to the customer (i.e., the revenue must be earned)
and (2) the monetary value of the goods and/or services must be known. Students who chose
option "B" may have had difficulty understanding the accrual basis of revenue recognition or
difficulty distinguishing it from the cash basis. Under the accrual basis of accounting,
revenue is recognized when it is earned, regardless of when cash is received; under the cash
basis, revenue is recognized when cash is received. A student who did not understand the
accrual basis, or could not differentiate the two bases, may have believed that revenue
should be recognized any time cash is received for services either already rendered or to be
rendered.

The students who selected option "C" may have assumed that a signed agreement would
both be enforceable and state a monetary value for the goods to be received or services to be
performed. Even under this assumption, only one of the two revenue recognition criteria would
be met, so the economic event could not be recognized as revenue. What options "B" and "C"
have in common is that both events may occur during the business transaction and neither
requires that the goods and/or services be delivered, which may account for the relatively
similar numbers selecting each. Because so few students picked option "D", that option is not
included in the analysis.
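The two-pronged rule discussed above can be written as a small predicate. The function and argument names below are our own illustration, not drawn from any accounting system:

```python
def accrual_revenue_recognized(earned: bool, value_known: bool) -> bool:
    """Accrual-basis revenue recognition requires BOTH criteria:
    (1) earned: the goods and/or services have been delivered, and
    (2) value_known: the monetary value of the exchange is measurable.
    Cash collection, by itself, satisfies neither criterion."""
    return earned and value_known

print(accrual_revenue_recognized(earned=False, value_known=True))   # option "C": False
print(accrual_revenue_recognized(earned=True, value_known=False))   # delivered, value unknown: False
print(accrual_revenue_recognized(earned=True, value_known=True))    # both criteria met: True
```

Options "B" and "C" both fail because neither implies delivery, which is exactly the distinction the students' protocols missed.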

Student protocol responses are shown in Table II. Both students recognized that the
key to answering this question was the differentiation between the accrual and cash method of
revenue recognition. However, both students failed to demonstrate an understanding that
revenue is recognized, under the accrual method, when it is earned.

Table II: Student Protocol Responses

Student 1 Student 2
C, I can’t even remember the difference C, Um, I need to know the difference
between the cash and accrual method, but between accrual vs cash so I know
I want to give it a shot. Um, a measurable that “B” is not an answer. Cash
asset is received, no. Cash is collected, collected could be for a prior year
no that is the cash method. An agreement agreement, and it wouldn’t be
is signed for service, yes. That would be counted as revenue. A measurable
revenue under the accrual method. asset is received that doesn’t
necessarily mean revenue, you could
have gone out and purchased a
building, that leaves “C” an
agreement is signed for services. I
worked for a construction company
and I know that you needed the
contracts signed and lined up in
order to start the work so you can get
paid.

The researchers inferred that students choosing option "C" believed a signed agreement
would both be enforceable and state a monetary value for the goods to be received or services
to be performed. The result of the protocol analysis is consistent with this inferred
test-item solution strategy.

IV. CONCLUSION

Prior research has shown that accounting faculty increasingly believe that accounting
students have poor reasoning ability. Our research does not confirm that students in an
introductory accounting course have poor reasoning ability. The analysis of the student
protocols reveals that errors in problem solutions arose because students did not understand
terminology that is counterintuitive, or the concepts underlying how business transactions
are viewed from an accounting perspective.

Therefore, this research finds that the shift in accounting professors' perceptions
that students have poor reasoning ability is not warranted. Prior research may have fallen
victim to the fundamental attribution error (Kelley, 1972; Miller & Lawson, 1989), the
tendency to underestimate external factors and overestimate the influence of internal or
personal traits. Essentially, this research suggests that the problem is not the students'
poor reasoning ability but their lack of prior training in how businesses view economic
transactions. Future research should examine this relationship.

REFERENCES

Kelley, H.H. "Attribution in Social Interaction," in E. Jones et al. (Eds.), Attribution:
        Perceiving the Causes of Behavior. Morristown, NJ: General Learning Press, 1972.
Miller, A.G. & Lawson, T. "The Effect of an Informational Option on the Fundamental
        Attribution Error." Personality and Social Psychology Bulletin, June 1989, 194-204.
Tanner, J.R. & Carruth, P.J. "Accounting Students: Do Their Professors Perceive Them as
        Being Adequately Prepared?" Journal of Education for Business, 61(2), 1985, 85-88.
Tanner, J.R., Totaro, M.W., & Wilson, T.E. "Accounting Educators' Perceptions of Accounting
        Students' Preparation and Skills: A 10-Year Update." Journal of Education for
        Business, 74(1), 1998, 16-17.

THE IMPACT OF MERIT PAY ON RESEARCH
OUTCOMES FOR ACCOUNTING PROFESSORS

Annhenrie Campbell, California State University, Stanislaus


acampbel@toto.csustan.edu

David H. Lindsay, California State University, Stanislaus


DLindsay@csustan.edu

Kim B. Tan, California State University, Stanislaus


KTan@csustan.edu

ABSTRACT

Merit pay intended to encourage better teaching, research, and service by professors is
controversial, but its effectiveness can be examined empirically. In this study, the
existence of a merit plan and the ACT scores of incoming freshmen were both strongly
associated with measurable research outcomes. Additional study is needed to test the
association of merit systems with the other dimensions of faculty performance, teaching and
service.

I. INTRODUCTION

An article in the Santa Rosa, California, Press Democrat, dated October 24, 2001,
suggests merit pay for professors is controversial:
“Faculty at Sonoma State University staged a protest of their own on Tuesday.
Frustrated by the administration's ‘corporate’ management style, professors held a daylong
teach-in at the Student Union and central quad to draw attention to a laundry list of
grievances. Speakers at the event railed against the merit system. The merit pay system,
established in 1995, is a major sticking point in stalled contract negotiations with professors.
The faculty association wants to scrap the system, which bases pay raises partially on reviews
by both faculty and administrators. ‘The system pits professors against one another and
rewards those who pander to administrators,’ said Rick Luttmann, the Sonoma State faculty
chair (sic) and math professor.”

Compensation practices vary widely across colleges and universities. Periodically, the
College and University Personnel Association (CUPA) surveys over 3,000 higher-education
institutions regarding their policies and methods for adjusting individual salaries. Methods
considered in the survey include: annual general wage adjustments, automatic
length-of-service adjustments, merit pay plans, lump-sum incentive payments, bonuses,
gainsharing, skill- and competency-based pay, team incentives, and plans combining
across-the-board raises with merit pay. The CUPA data published in 1999 indicate that merit
systems are used by 23.7 percent of responding institutions and that combined
across-the-board and merit plans are used by another 26.4 percent. Since the data are
aggregated, there is no way of knowing which individual schools use a merit-based or
partially merit-based pay system.

The rationale behind merit systems is to reward, and thus encourage, better performance
in the key areas of teaching, research, and service. Typically, teaching performance is
measured with student evaluations or outcomes-assessment tests such as the ETS Major Field
Tests, while research performance is most frequently measured by counting a professor's
publications. An acceptable service measure has remained elusive. Given the ongoing
controversy over the usefulness of merit pay plans, we ask whether the presence of a merit
system might be an institutional determinant of faculty research output.

II. LITERATURE REVIEW

Increasing restrictions on public funding and the desire of university administrators for
greater discretion over faculty salaries have led to a move away from traditional seniority-
based compensation systems (Grant, 1998). For a merit plan to be feasible, however, there
must be a clear link between individual effort and performance, and that performance must be
accurately measured (Heneman and Young, 1991). It has been vociferously argued that merit
pay schemes are just not practical in a university setting, because the performance of
individual faculty members is too difficult or specialized to measure objectively (Johnston,
1978).

In general, the purpose of merit pay is to provide an incentive or motivating force to
push a worker, whether a laborer, a government employee, or a college professor, to greater
productivity (Miller, 1979). Merit pay for teachers is hardly a new idea; it was first used
in England in the 19th century (Holmes, 1920).

A field study of public school deans showed that they do believe merit systems promote
better teaching and higher-quality research output (Taylor, Hunnicutt, and Keefe, 1991).
However, this study, like the faculty protests at Sonoma State University, evidences only
opinions. We suggest that, at least in the context of an accounting program, the question of
the value or effectiveness of merit pay can be addressed as an empirical issue.

Of the three areas of faculty productivity -- research, teaching and service -- this study
is intended to develop empirical evidence of the impact of merit pay systems on research
outputs. If merit pay systems have the desired impact of improving faculty performance in
measured areas, then schools with merit systems would be expected to exhibit stronger than
average faculty performance in this area.

III. HYPOTHESES

Research output is usually measured by counting the number of a faculty member's
publications. It has been argued that the several different ways of counting publications,
including counting only articles appearing in a subset of the most desirable journals, yield
similar outcomes when used as a proxy measure of research (Feldman, 1987).
The central question of this study is then stated as:

H1: Ceteris paribus, there is a statistically significant relationship between the research
output of the faculty of an accounting program and the presence or absence of a
faculty merit pay plan.

A second hypothesis must be addressed as well in order to consider a potentially
powerful confounding issue. It is reasonable to expect that some schools, perhaps due to
reputation, would attract academically gifted students. Such attractive schools would boast not
only a strong student body but also a strong faculty. Therefore, it is likely that the scholarly
output of faculties of such schools might be stronger. The freshman ACT score was used to
represent the quality of each institution's incoming student body and its relationship with a
measure of faculty research output was tested in a second hypothesis:

H2: Ceteris paribus, there is a statistically significant positive association between the
average ACT score of a school’s incoming freshmen and the research output of the
school’s accounting faculty.

IV. METHODOLOGY

The e-mail addresses of department chairs of 500 of the 800+ accounting programs in
the United States were identified using Hasselback’s Accounting Faculty Directory 2003-
2004. Each of the 500 chairs was e-mailed a survey using the CUPA taxonomy of methods
currently used to adjust individual salary rates. The chair’s response to this survey revealed
whether or not a merit plan was in place at that school.

Average ACT scores were obtained from Profiles of American Colleges, 2002
published by Barron’s. If the ACT score was not reported, the California State University
System’s Eligibility Index Table for California High School Graduates or Residents of
California was used to convert the SAT score into an ACT score.

The names of individuals comprising the accounting faculty of each school were
obtained from the Hasselback directory. The research output of each of these individuals was
obtained from the Economic Literature Database 2002 compiled by Jean Louis Heck. The
average number of publications per faculty member per school was then calculated.

A regression was then run. The dependent variable is the average number of
publications per faculty member of a school. In the regression, the two independent variables
are: an indicator variable assigned the value of 0 if the school does not have a merit program,
and a value of 1 if it does; and the school’s mean ACT score of incoming freshmen.
Therefore, the model to be tested is:

Average number of publications = b0 + b1ACT + b2Merit + є
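This indicator-variable model can be estimated with ordinary least squares. The following is a minimal sketch of the setup; the data below are invented purely to illustrate the design matrix with a merit dummy, and are not the study's data:

```python
import numpy as np

# Invented data for eight hypothetical schools (illustration only):
act   = np.array([20., 22., 24., 26., 21., 25., 23., 27.])   # mean freshman ACT
merit = np.array([0.,  1.,  0.,  1.,  0.,  1.,  1.,  0.])    # 1 = merit plan in place
pubs  = 0.5 * act + 2.0 * merit \
        + np.array([0.1, -0.2, 0.0, 0.3, -0.1, 0.2, 0.0, -0.3])  # avg publications

# Design matrix [1, ACT, Merit] and least-squares fit of
#   publications = b0 + b1*ACT + b2*Merit + error
X = np.column_stack([np.ones_like(act), act, merit])
(b0, b1, b2), *_ = np.linalg.lstsq(X, pubs, rcond=None)
print(f"b0 = {b0:.3f}, b1 (ACT) = {b1:.3f}, b2 (Merit) = {b2:.3f}")
```

A significant positive b2 would correspond to H1, and a significant positive b1 to H2.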

V. RESULTS AND CONCLUSION

Sixty-one of the 500 surveys (12%) were returned. Eleven of these were not usable,
leaving 50 usable surveys (10%). Only 4 types of faculty salary adjustments were reported.
Some schools used multiple methods. As seen in Table I, correlation coefficients show that
schools with merit programs tend not to offer ‘time in grade’ pay adjustments.

In the regression, the F is 6.642 and significant at the .004 level. Therefore, it is highly
unlikely that all of the regression coefficients are equal to zero. The adjusted R square is .267,
so about a quarter of the variance of the publications variable is explained by the variance of
the ACT and Merit variables. The estimated coefficient on the Merit variable is 1.946 and
significant at the .007 level. The estimated coefficient on the ACT variable is .464 and
significant at the .012 level. These results are consistent with both hypotheses.

Table I

Pearson Correlation Coefficients
Salary Adjustment Methods Used By Accounting Programs
N = 50

         COLA      STEPS     MERIT     BONUS
COLA     1.0
STEPS    .229      1.0
         (.109)
MERIT    -.272     -.402**   1.0
         (.056)    (.004)
BONUS    .160      -.089     -.079     1.0
         (.268)    (.538)    (.587)

Legend:
COLA = Annual General Wage Adjustment, used by 31 (62%) schools
STEPS = Automatic Length of Service Adjustment, used by 8 (16%) schools
MERIT = Merit Pay Plan, used by 34 (68%) schools
BONUS = Bonus Plan, used by 2 (4%) schools
** = Coefficient is significant at the .01 level

These are quite robust results indicating that there is a strong relationship between a
faculty's research output and the existence of a merit system as well as a strong relationship
between the quality of the student body and faculty research output. If the only purpose of
merit pay were to encourage additional research productivity, it would be easy to conclude
that such systems are effective.

The minutes of a faculty discussion posted on the Drew University website in 2005
show that some faculty regard merit pay as an incentive to encourage and focus their work
while others believe it is simply a means of “recognition” of work that would otherwise have
been accomplished. However faculty interpret their merit system, merit pay for faculty
remains a controversial means to encourage and/or reward faculty efforts and excellence in
multiple dimensions including teaching and service as well as in research.

The results developed here certainly suggest that those campuses more attractive to
higher performing students, as measured by ACT scores, also seem to attract more productive
faculty scholars, as measured by research output. In addition, campuses with a merit system in
place clearly have faculties with higher research outputs.

These simple tests could have been influenced by unidentified confounding factors.
More to the point, additional tests are needed to determine whether merit pay systems are also
associated with better outcomes for the other dimensions of rewarded faculty performance:
teaching and service. Both faculty and administrators need to continue to examine the design
and implementation of merit systems. Perhaps additional empirical work will make the
continued discussion less adversarial than it was at Sonoma State University in 2001.

REFERENCES

Feldman, K.A. “Research Productivity and Scholarly Accomplishment of College Teachers as
	Related to Their Instructional Effectiveness: A Review and Exploration.” Research in
	Higher Education, 26, (3), 1987, 227-298.
Grant, Hugh. “Academic Contests? Merit Pay in Canadian Universities.” Relations
	Industrielles, (Quebec), 53, (4), 1998, 647-667.
Heneman, Herbert G. and I.P. Young. “Assessment of a Merit Pay Program for School District
	Administrators.” Public Personnel Management, 20, (1), 1991, 35-48.
Holmes, E. G. A. In Quest of an Ideal. London: Cobden-Sanderson, 1920, 62.
Johnston, James J. “Merit Pay for College Faculty?” Advanced Management Journal, 43, (2),
	1978, 44.
Miller, Ernest C. “Pay for Performance.” Personnel, 56, (4), 1979, 4.
Taylor, Ruth L., Garland G. Hunnicutt, and J. M. Keefe. “Merit Pay in Academia: Historical
	Perspectives and Contemporary Perceptions.” Review of Public Personnel
	Administration, 11, (3), 1991, 51-65.

AN ANALYSIS OF INVESTMENT PERFORMANCE AND MALMQUIST
PRODUCTIVITY INDEX FOR LIFE INSURERS IN TAIWAN

Shu-Hua Hsiao, Leader University
shuhua@mail.leader.edu.tw

Yi-Feng Yang, Leader University
yifeng@mail.leader.edu.tw

Grant G.L. Yang, Leader University
grant@mail.leader.edu.tw

ABSTRACT

After Taiwan’s insurance market opened in 1987, the whole market structure changed. Facing
increasingly intense competition, life insurers in Taiwan have set a goal of higher efficiency in
investment performance and profitability. Notably, insurers may become insolvent when
investment performance is inefficient, because declining profit can lead to serious
interest spread loss. The main purpose of this study is to determine capital investment
efficiency based on DEA results and the Malmquist Productivity Index. A further goal is to
explore whether there are statistically significant differences among different groups. Results
show that some life insurers achieve efficient investment performance in overall
efficiency, scale efficiency, and/or pure technical efficiency.

I. INTRODUCTION

It is important to study the profitability and investment performance of life insurers,
because companies may become insolvent when failure leads to declining profit, and even to
serious interest spread loss. In Taiwan, the main source of a life insurer’s profit, financial
receipts, depends on investment performance. Premiums received only cover commissions
and business expenses, although they amount to about eighty percent of total income (Yen,
Sheu, & Cheng, 2001). Thus, whether investment performance is efficient or not is a key
factor in the overall performance of business management.

II. THE PURPOSE OF THIS STUDY

The purpose of this study is to adopt DEA, developed by Charnes, Cooper, and
Rhodes (1978), to measure the relative efficiency and investment performance of life insurers
in Taiwan. Further, the MPI, developed by Fare, Grosskopf, Lindgren, and Ross (1989),
shows the efficiency change of companies from 1998 to 2002. Based on the MPI, which
includes technical efficiency change, technological change, pure technical efficiency change,
scale efficiency change, and total factor productivity (TFP) change, life insurers can revise
their input and output factors. Thirdly, the investment performance of life insurers is
compared among the original domestic, new entrant, and foreign companies. Finally, the
results can inform strategies for raising competitive ability.

III. METHODOLOGY
The participants of this study, drawn from the annual reports of life insurers in
Taiwan, were classified into three groups: old domestic companies (eight), new domestic
entrants (nine), and foreign life insurers (nine). The analysis period covers the years 1998 to
2002. Kuo Hua Life Insurance Company was eliminated because of missing or incomplete
data in its annual financial report.

DEA is a non-parametric technique for measuring relative efficiency. There are two
models in DEA: the CCR model and the BCC model. Under Taiwan’s insurance laws, the
investment targets of life insurers include bank deposits, securities, real estate, loans to
policyholders, mortgages, loans, foreign investments, and authorized projects or public
investments. However, not every insurer invests in every category. Thus, the input variables
in this study are classified into four items: deposits, securities, loans, and other. The output
variable is financial receipts, which comprise interest income, gains on securities
investments, and gains on real estate investments.

DEA is limited to analyzing performance within a single year, but MPI extends the
analysis to productivity change across consecutive years. Following Fare, Grosskopf, Norris,
and Zhang (1994), MPI provides five indices: technical efficiency change (effch),
technological change (techch), pure technical efficiency change (pech), scale efficiency
change (sech), and total factor productivity change (tfpch).
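For readers unfamiliar with DEA, the input-oriented CCR model solves one small linear program per decision-making unit (DMU). Below is a minimal sketch of that formulation using a generic LP solver; the insurer data are invented for illustration and this is not the authors' implementation:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency score of DMU `o`.

    X: (n_dmus, n_inputs) input matrix; Y: (n_dmus, n_outputs) output matrix.
    Solves: min theta  s.t.  sum_j lam_j * x_ij <= theta * x_io  (each input i)
                             sum_j lam_j * y_rj >= y_ro          (each output r)
                             lam_j >= 0
    """
    n = X.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # decision vars: [theta, lam_1..lam_n]
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):                  # input constraints
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(Y.shape[1]):                  # output constraints (flipped to <=)
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return res.fun

# Three hypothetical insurers, four inputs (deposit, securities, loan, other),
# one output (financial receipts) -- numbers invented for illustration.
X = np.array([[10., 20.,  5.,  8.],
              [12., 25.,  6.,  9.],
              [20., 40., 10., 16.]])
Y = np.array([[30.], [30.], [60.]])
scores = [ccr_efficiency(X, Y, o) for o in range(len(X))]
```

A score of 1.0 marks an efficient insurer; scores below 1.0 indicate the proportional input reduction needed to reach the frontier. Adding a convexity constraint on the lambdas would give the BCC variant.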

IV. RESULTS OF THIS STUDY

Investment trends show that securities are the most important investment tool for
life insurers in Taiwan, followed by mortgage loans. Authorized projects or public
investment remains the smallest category each year. Deposits were an important
investment item before 1997, but the securities item increased sharply after 1997. Further,
foreign investment has also increased very quickly since 2000. Moreover, government and
treasury bonds account for the largest share of securities investment; corporate bonds and
benefit certificates appear less important than government and treasury bonds.

The Pearson correlation coefficients of life insurers are summarized in TABLE 1. The
correlation coefficients between input and output variables are all greater than 0.80. The
performance measurement for life insurers from 1998 to 2002 is shown in TABLE 2. Both
Nan Shan and Hontai Life have an overall efficiency and scale efficiency of 100%. Several
life insurers possess a pure technical efficiency of 100%: Cathay, Nan Shan, Manulife,
American, and Hontai. Only Shin Kong Life exhibits increasing returns to scale. The insurers
with constant returns to scale are Cathay, Nan Shan, Hontai, American, and Manulife.
Furthermore, TABLE 3 lists the efficient investment performance of life insurers for each
year. The DEA model can be expressed as follows:
Financial receipt = f (deposit in bank, securities, mortgage, other).

Table 1 Pearson Correlation Coefficients
Input Variables
Year Deposit Securities Loan Other
Output 1998 0.9815 0.9458 0.9829 0.9984
variable 1999 0.9776 0.8996 0.9832 0.9952
2000 0.9699 0.8770 0.9823 0.9897
2001 0.8928 0.8028 0.9573 0.9929
2002 0.8432 0.8842 0.9614 0.9540
Note: Output variable is the financial receipt of insurers

Table 2 Insurers With 100% Investment Performance
Items Company Names
Overall efficiency (CCR) Nan Shan, Hontai
Pure Technical efficiency (BCC) Cathay, Nan Shan, Hontai, American, Manulife
Scale efficiency Nan Shan, Hontai

Table 3 A Comparison Of Investment Performance


Items 1998 1999 2000 2001 2002
CCR 14 14 14 6, 14 6, 14, 22
BCC 4, 6, 7, 14, 17, 4, 6, 7, 14, 4, 6, 14, 17, 4, 6, 7, 14, 4, 6, 14,
22, 23, 25 22, 23, 25 21, 22, 23, 25 17, 22, 23 21, 22
Scale efficiency 14, 19 14, 19 14, 19 2, 6, 14 6, 14, 22
IRS 5 5 7 No 7, 24
CRS 4, 6, 7, 14, 17, 4, 6, 7, 14, 4, 6, 14, 17, 4, 6, 7, 8, 14, 4, 6, 14,
22, 23, 25 22, 23, 25 21, 22, 23, 25 17, 22, 23 21, 22
Note: This table lists insurers with 100% efficiency. IRS means increasing returns to scale; CRS means constant
returns to scale.

This study used DEA to calculate the optimal input for inefficient insurers during 1998-
2002. For example, the two least efficient insurers are shown in TABLE 4. Insurers 23 and
25 could raise their overall efficiency by up to 907% and 1210%, respectively, and insurers
16 and 1 could raise their pure technical efficiency by up to 591% and 177%, respectively.
Thus, it is necessary for them to revise their investments to raise their efficiency. TABLE 5
illustrates the means of the five MPI indices; effch exceeds one in every period, although it
fluctuates from year to year.

Table 4 The Optimal Input Of The Least Two Inefficient Insurers


Items Original Code Deposit Security Loan Others
Overall 9.93% 23 49,437 59,604 11,061 94,923
efficiency 7.63% 25 51,358 35,996 8,142 71,054
Pure Technical 19.9% 16 453,402 961,970 110,750 1,129,345
efficiency 19.39% 1 2,171,036 1,911,772 360,919 3,296,806
Note: Under BCC & CCR

Table 5 Means Of Malmquist Productivity Indices (MPI)
Year effch techch pech sech tfpch
1998-1999 1.29 0.769 1.178 1.095 0.993
1999-2000 1.226 0.573 0.967 1.268 0.702
2000-2001 1.725 0.742 1.292 1.335 1.280
2001-2002 1.133 1.284 1.055 1.074 1.455
Mean 1.344 0.842 1.123 1.193 1.108
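The five indices are linked by the standard Fare-Grosskopf-Norris-Zhang decomposition: effch = pech × sech and tfpch = effch × techch. The yearly rows of Table 5 (though not the column means, which are averaged separately) can be checked against these identities:

```python
# Values transcribed from the yearly rows of Table 5.
rows = {  # year: (effch, techch, pech, sech, tfpch)
    "1998-1999": (1.290, 0.769, 1.178, 1.095, 0.993),
    "1999-2000": (1.226, 0.573, 0.967, 1.268, 0.702),
    "2000-2001": (1.725, 0.742, 1.292, 1.335, 1.280),
    "2001-2002": (1.133, 1.284, 1.055, 1.074, 1.455),
}
for year, (effch, techch, pech, sech, tfpch) in rows.items():
    assert abs(effch - pech * sech) < 0.01, year     # effch = pech * sech
    assert abs(tfpch - effch * techch) < 0.01, year  # tfpch = effch * techch
```

Every row satisfies both identities to within rounding, confirming the internal consistency of the reported indices.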

V. RESEARCH QUESTION AND HYPOTHESES

This research adopts ANOVA and non-parametric methods to test the hypotheses. To
achieve these objectives, the following null hypotheses are tested:
Ho1: There is no significant difference in rank of CCR between domestic and foreign
life insurers for each year.
Ho2: There is no significant difference in rank of BCC between domestic and foreign
life insurers for each year.
Ho3: There is no significant difference in rank of CCR among the five years from
1998 to 2002.
Ho4: There is no significant difference in rank of BCC among the five years from
1998 to 2002.
Ho5: There is no significant difference in MPI among original domestic companies, new
entrant domestic companies, and foreign companies.

As shown in TABLE 6, the outcomes for hypotheses one and two indicate no
significant difference in rank of overall efficiency or pure technical efficiency between
domestic and foreign life insurers. Furthermore, TABLE 7 shows significant differences in
rank of overall efficiency for the periods 1998-1999, 2000-2001, and 2001-2002; for pure
technical efficiency (BCC), however, there are no significant differences across the period.
Finally, results for hypothesis five show no significant difference in MPI among the three
groups. The p-values of effch, techch, pech, sech, and tfpch were 0.780, 0.191, 0.200, 0.079,
and 0.776, respectively.
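The rank tests behind Ho1-Ho4 are the Mann-Whitney U test (independent groups) and the Wilcoxon signed-rank test (the same insurers in adjacent years). The sketch below uses invented efficiency scores purely to illustrate the two tests, not the study's data:

```python
import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

# Invented CCR efficiency scores (illustration only).
domestic = np.array([0.92, 0.88, 1.00, 0.75, 0.81, 0.95])
foreign  = np.array([0.90, 0.79, 1.00, 0.85, 0.77, 0.93])

# Ho1-style test: two independent groups compared with Mann-Whitney U
u_stat, p_between = mannwhitneyu(domestic, foreign, alternative="two-sided")

# Ho3-style test: the same insurers in two adjacent years, Wilcoxon signed-rank
year_t  = np.array([0.92, 0.88, 1.00, 0.75, 0.81, 0.95])
year_t1 = np.array([0.85, 0.80, 0.97, 0.70, 0.78, 0.90])
w_stat, p_within = wilcoxon(year_t, year_t1)
```

A p-value below 0.05 would reject the corresponding null hypothesis; the paired test is the appropriate choice across years because the same insurers appear in each period.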

Table 6 Outcomes Of Ho1 And Ho2


Mann-Whitney U 2002 2001 2000 1999 1998
CCR P-value 0.541 0.244 0.824 0.222 0.267
Decision making Don’t reject Ho
BCC P-value 0.781 0.779 0.248 0.197 0.43
Decision making Don’t reject Ho
Note: the significant level is 0.05, and the test period is from 1998 to 2002.

Table 7 Outcomes Of Ho3 And Ho4


Wilcoxon sign rank 1998-1999 1999-2000 2000-2001 2001-2002
CCR P-value 0.003 0.376 0.000 0.029
Decision making Reject Don’t reject Reject Reject
BCC P-value 0.248 0.376 0.070 0.122
Decision making Don’t reject Ho

VI. CONCLUSION

The financial solvency of life insurers in Taiwan has been worsening since 1997. For
example, Hong Fu Life Ins. Co. experienced a financial crisis in 1997 due to a deteriorating
investment environment, resulting in interest spread loss and poor capital investment
performance after the financial crisis in Southeast Asia. A more competitive climate has
formed because of four financial impacts: declining interest rates, liberalization and
internationalization, natural or man-made catastrophes, and the “fuzzy boundary” of the
industry. Given those impacts, life insurers must maintain their profitability and financial
solvency.

The total income of life insurers is generally classified into two parts: financial
receipts and premiums received. Financial receipts from investment are the main profit
source of life insurers. Premiums received account for about eighty percent of total income,
but they only cover commissions and business expenses (Yen, Sheu, & Cheng, 2001). Thus,
investment performance and efficiency are very important determinants of business
performance, and poor investment strategies may lead to business failure. In Taiwan, the
capital structure of life insurers’ investments from 1998 to 2002 showed that securities had
the largest proportion, 33.42%; mortgages, deposits, and loans possessed 14.85%, 13.1%,
and 18.97%, respectively, and the rest were all less than 10%. Obviously, security risk
assessment and management are extremely important, because higher returns carry higher
risk. Thus, it is proper to use DEA to evaluate overall efficiency, pure technical efficiency,
scale efficiency, and returns to scale, and MPI to express productivity changes.

REFERENCES

Charnes, A., W. W. Cooper, and E. Rhodes. “Measuring the Efficiency of Decision Making
	Units.” European Journal of Operational Research, 2, 1978, 429-444.
Fare, R., S. Grosskopf, B. Lindgren, and P. Ross. “Productivity Developments in Swedish
	Hospitals: A Malmquist Output Index Approach.” In Charnes, A., W. W. Cooper,
	A. Y. Lewin, and Seiford (eds.), DEA: Theory, Methodology and Applications, 1989.
Fare, R., S. Grosskopf, M. Norris, and Z. Zhang. “Productivity Growth, Technical Progress,
	and Efficiency Change in Industrialized Countries.” American Economic Review, 84,
	(1), 1994, 66-83.
Yen, L. J., H. J. Sheu, and C. L. Cheng. “Measuring the Investment Performance of the Postal
	Simple Life Insurance Department.” Insurance Monograph, 66, 2001, 48-69.

CONNECTING ABI ACCEPTANCE MEASURES TO TASK COMPLEXITY, EASE
OF USE, USER INVOLVEMENT AND TRAINING

Aretha Y. Hill, Florida A&M University
aretha.hill@famu.edu

Ira W. Bates, Florida A&M University
ira.bates@famu.edu

ABSTRACT

This study contends that activity-based costing (ABC) success ultimately depends on
user acceptance of activity-based information (ABI) in the early stages of ABC system
implementation. The results of this study reveal that the level of effort required to use ABI
has a significant influence on user acceptance of ABI. The findings also suggest that the
expected benefits of using ABI, use of ABI, and satisfaction with ABI are related to the
complexity of users’ task activities and their level of involvement in the ABC system design.

I. INTRODUCTION

In recent years, both practitioners and researchers have increasingly questioned
whether activity-based cost management (ABCM) is effective as a cost management strategy.
This skepticism is attributed to growing evidence that some organizations achieve
performance gains from their ABC systems while others do not (Shields, 1995; Gosselin,
1997). The aim of this study is to further understand the individual-level factors that may
influence the effective use of ABI. In this paper, user acceptance of ABI is considered the
foremost antecedent of ABC system success. The study explores task complexity, ease
of use, user involvement, and training as determinants of user acceptance of ABI. One-way
analysis of variance (ANOVA) and regression techniques are utilized to analyze data
collected from employees of a large telecommunications firm.

II. LITERATURE REVIEW AND HYPOTHESES DEVELOPMENT

Many ABC systems are implemented and to some extent used, but may not be
considered successful because no action is taken on the information provided (e.g.,
elimination of non-value-added activities) and decision-making performance does not
improve. Several studies provide evidence that an increasing number of ABC adopters
experience problems getting their employees to take actions based on activity-based
information (ABI) (Shields, 1995; Anderson and Young, 1999). It is probable that the
uniqueness of the information provided by the ABC system will affect the extent to which
users accept and process ABI, or instead believe that it is unreliable or irrelevant for
decision-making. We suggest that employees may reject ABI if they face complex job tasks,
believe that the information is difficult to use, are experiencing task overload, and/or did not
receive effective training on the selection and effective use of ABI. The research framework
is depicted in Figure I.

FIGURE I THEORETICAL FRAMEWORK OF ABI ACCEPTANCE
Contextual Factors (Task Complexity, Perceived Ease of Use) and Implementation Factors
(User Involvement, ABI Training) --> ABI Acceptance (Perceived Usefulness, ABI Use,
User Satisfaction)

Task Complexity. Task complexity refers to the analyzability of the tasks and the
extent to which the task can be performed by following well-defined procedures or steps (Kim
et al., 1998). We suggest that the use of ABI when task complexity is low may create
unfavorable user perceptions. When individuals are faced with more analyzable and routine
task activities, the use of detailed “standardized” ABC reports may be viewed as irrelevant or
redundant and interfere with their simple information needs. Likewise, the use of voluminous
broad scope yet detailed “standardized” and irrelevant ABI in highly uncertain complex
environments may result in task overload, unfavorable perceptions, and possibly sub-optimal
use of ABI and less satisfaction.
H1a: Perceived usefulness of ABI is negatively associated with task complexity.
H1b: ABI usage is negatively associated with task complexity.
H1c: Users’ satisfaction with ABI is negatively associated with task complexity.
Ease of Use. Ease of use refers to the degree to which an individual believes that
using a system and its output is effortless (Davis 1993). Self-efficacy theory suggests that
perceived ease of use is one of the basic determinants of information system use behaviors
and perceived usefulness (Davis 1993; Igbaria et al. 1997). It is anticipated that individuals
are more likely to use and have favorable opinions of ABI to the extent that the information is
fairly easy to use.
H2a: The perceived usefulness of ABI is positively associated with the ease of use.
H2b: ABI usage is positively associated with the ease of use.
H2c: Users’ satisfaction with ABI is positively associated with the ease of use.
User Involvement. Prior literature suggests that user involvement has a positive effect
on system success (i.e., ABC) by providing increased knowledge about the user groups and
may reduce unrealistic expectations and resistance to the change (Guimaraes et al., 1992,
McGowan 1997). As such, when employees participate in the development of the ABC
system, they are more likely to accept ABI.
H3a: The perceived usefulness of ABI is positively associated with user
involvement.
H3b: ABI usage is positively associated with their involvement in the system
design.
H3c: Users’ satisfaction with ABI is positively associated with their involvement.
ABI Training. The results of numerous management information system (MIS) and
ABC implementation studies suggest that there is significant and positive relationship
between the adequacy of user training received in using ABI and system success (Igbaria et
al., 1997).
H4a: The perceived usefulness of ABI is positively associated with the adequacy of
training received.
H4b: ABI usage is positively associated with the adequacy of training received.
H4c: Users’ satisfaction is positively associated with the adequacy of training
received.

In addition, the following “fit” hypotheses reflect the expectation that the level of task
complexity, as well as the adequacy of the ABI training received, will affect an individual’s
information needs, perceptions of information sources, intentions, and behaviors.
H5: Perceptions and acceptance of ABI will be more favorable for individuals
facing highly complex task activities than for individuals facing less
complex tasks.
H6: Users’ perceptions and acceptance of ABI will differ significantly across the
adequacy of the ABI training received.

III. METHODOLOGY

The Company is a major provider of telecommunications services. The ABC system is
used along with the traditional cost management system to provide “data to facilitate cost
management, strategic decision-making and profitability assessment while providing more
detailed data for specific areas (e.g., billing).” Users are able to simultaneously share, retrieve,
analyze and summarize ABI cost and profitability data in multiple dimensions and
perform “what if” analyses. A total of 70 of 169 potential ABI users completed the web
survey, a response rate of 41 percent. Approximately half (47.1%) of the respondents
reported that they were upper or middle management or upper-level supervisors;
approximately 34.3 percent are analysts and lower-level supervisors. The respondents were
primarily employed in finance and operations/product management, with an average work
experience of 18 years.
ABI acceptance is operationalized using three widely used surrogates of technology
acceptance: perceived usefulness (6 items), use (4 items), and user satisfaction (10 items).
The determinants of ABI acceptance were operationalized using modified versions of widely
accepted scales to measure the applicable construct: task complexity (3 items), ease of use (6
items), user involvement (5 items), and ABI training (5 items). The survey questions were
measured on a 5-point Likert type scale. (The measurement scales and sources are available
from the author upon request.) The score of each research variable is obtained by forming a
composite measurement scale using the average of all the survey items with a principal
component loading greater than 0.50. The factor loadings (greater than 0.50), Cronbach’s
alphas (greater than 0.60), composite reliabilities (greater than 0.80), and the correlations
and intercorrelations among the variables all exceed the suggested minimum levels,
providing evidence of the reliability, convergent validity and discriminant validity of each
measurement scale (Hair et al., 1998).
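The scale-construction step can be illustrated as follows. The responses below are invented, and the alpha computation is the standard formula rather than the authors' software:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

# Invented 5-point Likert responses of five respondents to a 3-item scale
responses = np.array([[4, 5, 4],
                      [2, 2, 3],
                      [5, 4, 5],
                      [3, 3, 3],
                      [1, 2, 1]])
alpha = cronbach_alpha(responses)      # high alpha: items move together
composite = responses.mean(axis=1)     # composite score = average of the items
```

An alpha above the study's 0.60 threshold would justify averaging the items into the composite score used as the research variable.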

IV. RESULTS

The descriptive statistics indicated that ABI users, on average, have somewhat neutral
opinions about the level of effort required to use ABI (mean = 2.95), had little involvement in
the design of the ABC system (mean = 1.66), rate the training received for using ABI as less
than adequate (mean = 2.30) and their tasks are characterized by a moderate degree of
complexity (mean = 2.63). The means for perceived usefulness (2.81), ABI use (2.19) and
user satisfaction (2.69) indicate that, overall, the sampled ABI users have somewhat neutral
opinions about the usefulness of ABI, do not extensively use ABI, and are generally less
than satisfied with ABI.
Hypotheses (H1-H4) Testing. To test the specified hypotheses (H1 –H4), a series of
multiple regression analysis were run using the three measures of ABI acceptance (perceived
usefulness, ABI use and user satisfaction). The results of the regression analysis presented in
Table I provide support for several determinants of ABI acceptance. The complexity of users’
job-related tasks is not significantly linked (although the coefficient is negative) to the
perceived usefulness of ABI (H1a; B = -0.20, p = 0.194). However, as hypothesized in H1b
and H1c, task complexity is significantly and negatively linked with reported ABI usage (B =
-0.20, p = 0.089) and user satisfaction with ABI (B = -0.28, p = 0.049). The results support the
hypotheses that the ease of using and comprehending ABI is positively and significantly
related to favorable perceptions regarding the usefulness of ABI (H2a; B = 0.60, p = 0.001);
use of ABI (H2b; B = 0.24, p = 0.054); and user satisfaction with ABI (H2c; B = 0.26, p =
0.095). Although the results do not support the hypothesis that user involvement in the ABC
system design will promote favorable perceptions regarding the usefulness of ABI (H3a; B =
0.14, p = 0.252), the results do provide support for the hypotheses that user involvement is
positively and significantly related to use of ABI (H3b; B = 0.28, p =0.003) and user
satisfaction with ABI (H3c; B = 0.20, p = 0.089). Finally, the results indicate that the
adequacy of ABI training received is not significantly linked to the three ABI acceptance
measures, therefore H4a, H4b and H4c are not supported.

TABLE I
REGRESSION ANALYSIS
X5 (ABI Acceptance) = αo + B1X1 (Task Complexity) + B2X2 (Perceived Ease of Use) + B3X3 (User Involvement) +
B4X4(ABI Training) + е
Variable (predicted sign)	Coefficient Estimate	t-value	p	Tolerance	VIF
Panel A: Perceived Usefulness
Intercept αo 1.81 2.67 0.009 - -
Task Complexity (H1a) B1 (-) -0.20 -1.31 0.194 0.79 1.26
Perceived Ease of Use (H2a) B2 (+) 0.61 3.55 0.001 0.68 1.46
User Involvement (H3a) B3(+) 0.14 1.16 0.252 0.86 1.16
User Training (H4a) B4 (+) -0.22 -1.63 0.108 0.73 1.38
R2 = 0.47, Adjusted R2 = 0.221, F[4, 65] = 4.614, p = 0.002
Panel B: ABI Use
Intercept αo 1.44 2.96 0.004 - -
Task Complexity (H1b) B1 (-) -0.19 -1.73 0.089 0.79 1.26
Perceived Ease of Use (H2b) B2 (+) 0.24 1.96 0.054 0.68 1.46
User Involvement (H3b) B3 (+) 0.28 3.10 0.003 0.86 1.16
User Training (H4b) B4 (+) 0.02 0.32 0.753 0.73 1.38
R2 = 0.48, Adjusted R2 = 0.235, F [4, 65] = 4.982, p = 0.001
Panel C: User Satisfaction
Intercept αo 2.00 3.28 0.002 - -
Task Complexity (H1c) B1 (-) -0.28 -2.01 0.049 0.79 1.26
Perceived Ease of Use (H2c) Β2 (+) 0.26 1.70 0.095 0.68 1.46
User Involvement (H3c) B3 (+) 0.20 1.73 0.089 0.86 1.16
User Training (H4c) B4 (+) 0.14 1.67 0.247 0.73 1.38
R2 = 0.47, Adjusted R2 = 0.224, F [4, 65] = 4.685, p = 0.002

Fit Hypotheses (H5 and H6). ABI users’ tasks, on average, are characterized by a
moderate degree of complexity. As predicted, the one-way analysis of variance results
presented in Table II reveal significant differences in users’ perceptions of ABI, involvement
in the ABC system design, and satisfaction with ABI, contrasted by whether they face highly
complex or low complex tasks and decision-making activities. ABI users facing less complex
tasks and decision-making activities (mean = 3.36) perceive ABI as being easier to use than
users facing more complex tasks and decision-making activities (mean = 2.60) (t = 4.08, p =
0.00). User involvement in the ABC implementation, in general, was significantly greater for
users facing highly complex tasks and decisions (mean = 1.92) than users facing less complex
tasks and decisions (mean = 1.38) (t = 2.22, p = 0.03). With regard to acceptance of ABI,
users facing less complex tasks perceived ABI as being more useful in performing task-related
activities (mean = 3.08) than users facing more complex tasks (mean = 2.62) (t = 1.77, p =
0.08). Also, on average, users facing less task complexity (mean = 3.09) are significantly
more satisfied with ABI (t = 2.43, p = 0.02) than users facing high task complexity (mean =
2.54). These results thereby provide partial support for H5.

TABLE II
ANALYSIS OF DIFFERENCE BETWEEN MEANS: COMPLEXITY AND TRAINING
Panel A: Analysis of Differences between Means: Task Complexity
High Task Low Task t
Variable Complexity Complexity Statistic Sig.
Perceived Ease of Use 2.60 3.36 4.08 0.00
User Involvement 1.92 1.38 2.22 0.03
User Training 2.09 2.45 1.37 0.18
Perceived Usefulness 2.62 3.08 1.77 0.08
ABI Use 1.17 2.38 1.13 0.27
User Satisfaction 2.54 3.09 2.43 0.02
Panel B: Analysis of Differences between Means: Adequacy of ABI Training
Not very Moderately Very
Adequate Adequate Adequate F Sig.
Perceived Ease of Use 2.41 2.81 3.55 14.68 0.00
Perceived Usefulness 3.05 2.49 3.15 3.34 0.04
ABI Use 2.17 2.06 2.57 3.50 0.04
User Satisfaction 2.33 2.57 3.27 6.69 0.00

ABI users, overall, rate the training received and their preparedness for using ABI as
less than adequate; however, the comparison of means (Table II) across the ABI training
adequacy categories indicates significant differences. All mean scores of the ease of use and
ABI acceptance measures are significantly greater for users who rate the ABI training received
as very adequate. As expected, users who receive more adequate training on selecting and
using relevant ABI are better prepared to use the information and have more favorable
perceptions regarding the ease of using and comprehending it.
It appears that there are significant differences in user perceptions regarding the
usefulness of ABI in performing job-related task activities across ABI training categories (F =
3.34, p = 0.04): not very adequate (mean = 3.05), moderately adequate (mean = 2.49), and
very adequate (mean = 3.15). With regard to reported ABI use (F = 3.50, p = 0.04), users who
receive more adequate training on selecting and using relevant ABI actually use the
information to a greater extent than users whose ABI training was less adequate: not very
adequate (mean = 2.17), moderately adequate (mean = 2.06), and very adequate (mean =
2.57). Compared to users receiving moderately adequate ABI training (mean = 2.57) or not
very adequate training (mean = 2.33), users receiving very adequate ABI training (mean =
3.27) on selecting and using relevant ABI are, overall, more satisfied with the information (F =
6.69, p = 0.00). The results do not indicate any statistically significant differences in the
adequacy of user training and ABI use based on task complexity, thus supporting H6.
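
The mean-difference tests in Table II (two-sample t-tests for the high/low complexity split in Panel A, one-way ANOVA across the three training-adequacy categories in Panel B) can be sketched as follows. The data here are simulated around hypothetical group means echoing the table, not the study's raw responses:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Panel A style: two-sample t-test on hypothetical ease-of-use scores,
# group means echoing Table II (low complexity 3.36 vs. high complexity 2.60)
low_complexity = rng.normal(3.36, 0.5, 35)
high_complexity = rng.normal(2.60, 0.5, 35)
t, p = stats.ttest_ind(low_complexity, high_complexity)

# Panel B style: one-way ANOVA across three training-adequacy groups
not_adequate = rng.normal(2.41, 0.5, 23)
moderately_adequate = rng.normal(2.81, 0.5, 24)
very_adequate = rng.normal(3.55, 0.5, 23)
f, p_f = stats.f_oneway(not_adequate, moderately_adequate, very_adequate)
```

With group differences of this size, both tests reject the null at conventional levels, paralleling the significant ease-of-use contrasts in the table.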

V. CONCLUSION

The study’s findings suggest that the extent to which individuals use and rely on ABI,
as well as their perceptions and level of overall satisfaction with the information, is largely
influenced by individual-level contextual factors such as task complexity, ease of use, and
user involvement. The relative associations of the contextual variables with the three ABI
acceptance measures (perceived usefulness, ABI use, and user satisfaction) indicate that
employees may use different standards to evaluate ABI and suggest that alternative measures
of ABC success are distinctly related to certain determinants (Foster and Swenson, 1997).
Ease of use is significantly associated with all three measures of ABI acceptance, whereas
task complexity and user involvement are significantly associated only with ABI use and user
satisfaction. User training is not significantly related to any of the three measures of ABI
acceptance.
The results revealed that, compared to users facing less complex tasks and decisions,
users who perform more complex tasks and decisions are more likely to believe that ABI has
no performance benefits, to forgo using ABI, to be the least satisfied, and to be the most
demanding with regard to the level of effort necessary to comprehend and use ABI. Consultants and
managers may be better able to identify specific situations most conducive to the application
of ABI and the extent to which certain types of ABI will be used most optimally. If properly
addressed, these concerns may facilitate early ABI acceptance.

REFERENCES

Anderson, S. W. and Young, S.M. “The Impact of Contextual and Process Factors on the
Evaluation of Activity-Based Costing Systems.” Accounting, Organizations and Society,
24, 1999, 525-559.
Davis, F.D. “User Acceptance of Information Technology: System Characteristics, User
Perceptions and Behavioral Impacts.” International Journal of Man-Machine Studies, 38,
1993, 475-487.
Foster, G., and Swenson, D.W. “Measuring the Success of Activity-Based Cost Management
and Its Determinants.” Journal of Management Accounting Research, 9, 1997, 107-139.
Gosselin, M. “The Effect of Strategy and Organizational Structure on the Adoption and
Implementation of Activity-Based Costing.” Accounting, Organizations and Society, 22,
1997, 105-122.
Guimaraes, T., Igbaria, M. and Lu, M. “The Determinants of DSS Success: An Integrated
Model.” Decision Sciences, 23, 1992, 409-430.
Hair, J. F., Anderson, R.E., Tatham, R.L., and Black, W.C. Multivariate Data Analysis with
Readings. Englewood Cliffs, NJ: Prentice-Hall, 1998.
Igbaria, M., Zinatelli, N., Cragg, P., and Cavaye, A.L. “Personal Computing Acceptance
Factors in Small Firms: A Structural Equation Model.” MIS Quarterly, 21, 1997, 279-305.
Kim, C., Suh, K., and Lee, J. “Utilization and User Satisfaction in End-User Computing: A
Task Contingent Model.” Information Resources Management Journal, 11, 1998, 11-24.
Klammer, T.P., et al. “Satisfaction with Activity-Based Cost Management Implementation.”
Journal of Management Accounting Research, 9, 1997, 217-237.
McGowan, A.S. “Perceived Benefits of ABC Implementation.” Accounting Horizons, 12,
1997, 31-50.
Shields, M. D. “An Empirical Analysis of Firms’ Implementation Experiences with Activity-
Based Costing.” Journal of Management Accounting Research, 7, 1995, 148-166.

THE IRS CRACKS DOWN ON DEDUCTIONS FOR MBA EDUCATION COSTS

Pamela A. Spikes, University of Central Arkansas
PamS@uca.edu

Patricia H. Mounce, University of Central Arkansas
Pmounce@uca.edu

Marcelo Eduardo, Mississippi College
Eduardo@mc.edu

ABSTRACT

Accounting professionals often choose to complete the MBA degree after they enter
the workforce. The number of MBA degrees awarded has grown from 35,000 in 1974 to
more than 120,000 in 2002. Congress allows various tax incentives to encourage education,
but many of these provisions limit the deductible amount as well as the type of expenses that
are deductible. MBA students typically utilize IRC (Internal Revenue Code of 1986) §
(Section) 162 and deduct their qualified educational expenses as an itemized deduction
subject to a two percent of AGI limitation. The purpose of this paper is to discuss the
provisions of IRC §162 and to summarize recent challenges by the IRS (Internal Revenue
Service) and the courts in deducting the costs of obtaining an MBA under IRC §162.

I. INTRODUCTION

Accounting professionals who choose careers outside public accounting often find that
completing an MBA degree offers an edge over pursuing the CPA certification or adds value
to their CPA designation because of the broad knowledge base gained through an MBA
curriculum. As the business environment has become more complex and challenging, the
number of MBAs awarded has risen. In 1974, 35,000 MBA degrees were awarded compared
to more than 90,000 in 1995 (Johnson 1997). The number has continued to rise with over
120,000 MBA degrees conferred in 2002 (Kim 2004).

The MBA has traditionally provided graduates with an edge in the job market in the
areas of salary and position and has enabled older executives to remain competitive with
younger colleagues who hold graduate degrees upon entering the work force. In the eight
years prior to 2004, average salaries for MBA graduates increased globally by 27 percent and,
in 2004, companies expected MBA salaries to rise by more than nine percent from 2003 to
over $82,000 (Quacquarelli and Saldanha 2004). Many corporations are willing to foot the
bill for MBA degrees for their middle- and upper-level managers. However, with executive
development budgets tightening, a growing number of companies will pick up only a portion
of the cost of post-graduate studies and some require employees to sign an agreement to stay
with the company for several years or repay the education costs (Johnson 1997).

The purpose of this paper is twofold: to discuss the provisions of IRC §162 and to
summarize recent challenges by the IRS and the courts in deducting the costs of obtaining an
MBA under IRC §162. Section II provides a summary of the provisions found in IRC §162.
Section III presents several IRS decisions and court rulings challenging the deductibility of
MBA educational costs under IRC §162. Section IV recaps tax incentives available for
taxpayers to consider as alternatives to the IRC §162 deduction. Section V presents brief
concluding remarks.

II. IRC §162 PROVISIONS

One of the oldest and most widely used provisions, IRC §162, allows an employee to
deduct expenses incurred for education as an ordinary and necessary business expense. In the
past, this provision has been a popular way for accountants and other executives to defray at
least a portion of the costs of their MBA education. However, the deduction is classified as an
itemized deduction subject to a two percent of adjusted gross income threshold. Although the
IRC §162 deduction is phased out for higher-income bracket taxpayers, taxpayers often
choose it over other provisions because the phase-out begins at a higher income level, thereby
providing a larger deduction. IRC §162 permits an employee to deduct expenses incurred for
education as an ordinary and necessary business expense if the expenses are incurred for
either (1) maintaining or improving existing skills in their present job or (2) meeting the
express requirements of the employer or those imposed by law necessary to retain their
employment status. Many taxpayers who take a deduction for expenses incurred in the pursuit
of an MBA degree do so with the intent of meeting the first criterion: improving existing
skills in their present job.

Treasury Regulation §1.162-5 provides some clarification on the non-deductibility of
certain education costs. Even though the education may maintain or improve skills required
by a taxpayer’s employment, education expenses under the above two categories are
specifically excluded as a deduction if:
• Those expenditures are incurred for meeting the minimum educational standards for an
existing job; or
• Those expenditures qualify the taxpayer for a new trade or business.
The first category eliminates the opportunity for those seeking an undergraduate degree in
accounting to deduct their educational costs under IRC §162. In addition, most states require
that CPA exam candidates have 150 college credit hours (not necessarily an MBA or Master
of Accountancy) before taking the CPA exam. Those students who choose to complete this
requirement before entering the work force also would not be able to deduct the cost of
completing this additional education. Also, the fact that a taxpayer is already performing
service in an employment status does not establish that he or she has met the minimum
educational requirement. The regulations state that the deduction for costs for education which
maintain or improve skills required by the individual in his employment includes academic
courses provided the expenditures are not within either category (meeting minimum
educational standards for existing job or qualifying taxpayer for a new trade or business) of
nondeductible expenditures.

Several disadvantages exist in deducting education expenditures under §162. First,
expenditures are classified as miscellaneous itemized deductions deductible to the extent of
the excess over two percent of adjusted gross income. However, accounting professionals
may have other miscellaneous itemized deductions (for example, professional dues and
subscriptions, conferences, etc.) that help meet the two percent limit. Second, the deduction
is available only to the taxpayer (not dependents). Third, miscellaneous itemized deductions
are subject to an overall limitation and are phased out for higher-income taxpayers. The
overall limitation is scheduled to be eliminated beginning in 2006 and will no longer exist
after 2009.
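
The two-percent floor works mechanically as follows; the dollar figures in the example are hypothetical, chosen only to show the computation:

```python
def misc_itemized_deduction(agi, education_costs, other_misc=0.0):
    """Deductible amount under the 2%-of-AGI floor on miscellaneous
    itemized deductions. Hypothetical figures; illustration only,
    not tax advice (the overall high-income limitation is omitted)."""
    total = education_costs + other_misc
    floor = 0.02 * agi
    return max(0.0, total - floor)

# Example: $80,000 AGI gives a $1,600 floor, so $6,000 of MBA tuition
# plus $500 of dues and subscriptions yields a $4,900 deduction
print(misc_itemized_deduction(80_000, 6_000, 500))  # prints 4900.0
```

The floor explains why bundling other miscellaneous deductions with education costs matters: $1,500 of costs against $100,000 of AGI produces no deduction at all.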

In spite of the above disadvantages, §162 has been a very popular method for
taxpayers to write off their education expenditures. First, although the deduction is subject to
the two percent floor, it is not limited to a particular dollar amount. Second, the deduction is
allowable for more than tuition, books and related expenses. Education expenses deducted
under §162 are considered to be ordinary, necessary business expenses and include not only
books and tuition, but also transportation to and from school and travel expenses such as
meals and lodging while away from home. In addition, a taxpayer can deduct under §162 any
excess qualifying costs not eligible for a deduction or credit under another provision of the
Code.
The rationale for deducting the cost of an MBA degree by most accounting and
business executives rests on two assumptions. First, an MBA would presumably improve the
skills of a business person or accountant. Second, since many older executives do not hold an
MBA degree, it is not viewed as a minimum requirement for a job. As discussed in the
following section, education costs deducted under §162 have recently been challenged by the
IRS and upheld by the U.S. Tax Court. A pattern has been developing that leads many to
believe that the IRS is sending the message that taxpayers already working in a business field
will no longer be able to deduct the cost of obtaining an MBA. Robert Willens, a tax and
accounting expert at Lehman Brothers Holdings, Inc. speculates that those claiming the MBA
deduction may be more susceptible to IRS audits and suggests that it is going to be virtually
impossible to take a deduction for MBA-related education expenses (Kim 2004).

III. PREVIOUS IRS DECISIONS AND COURT RULINGS

In a 1986 letter ruling, the IRS concluded that educational expenses incurred while
pursuing an MBA degree were not business expenses for purposes of the deduction provided
by §162 of the Code (Letter Ruling 8714064). In this instance, the taxpayer entered a two-
year MBA program as a full-time student after electing to resign his position as a machine
sales representative when he was unable to obtain a leave of absence. The IRS considered
several factors:
• the length of time the taxpayer was out of the work force while he incurred the
educational expenses;
• whether the period of study was a temporary suspension of employment; and
• whether the studies were pursued at the request or advice of the employer in order to
maintain or improve the skills required by the taxpayer’s position.
The IRS concluded that the taxpayer was not required or advised by the employer to obtain an
MBA degree in order to maintain or improve the skills required for the taxpayer’s position
and there was no indication that the taxpayer expected to resume employment with that
company at any definite time in the future. The IRS viewed the taxpayer’s period of studies
as an interruption of his employment rather than a temporary suspension of such employment
or trade or business. Therefore the costs were not “business expenses” as defined in §162.
In 2002, the IRS denied a deduction for education expenses in the pursuit of an MBA
degree by an employee in the telecommunications industry (TC Summary Opinion 2002-49).
The taxpayer quit his job to complete an MBA degree. The taxpayer argued that he needed to
improve his accounting, financial, and general business administration skills as his job
required him to negotiate complex contracts, and he argued that he did not pursue other
positions. The Tax Court disallowed the deduction. The Court reasoned that if the course of
study qualifies the taxpayer for a new trade or business, the expenditures are not deductible
even though the studies are required by the employer and the taxpayer does not intend to enter
a new field of endeavor, or the taxpayer’s duties are not significantly different after the
education from what they had been before the education.

In 2003 the Tax Court denied the deduction for an attorney’s education expenses in
pursuit of the MBA degree (TC Memo 2003-68). The taxpayer obtained an LLM in corporate
finance, a JD and an MBA, enrolling in each degree program immediately after completion of
the preceding one. He was advised to obtain a JD to increase his marketability in a
competitive environment. Subsequently he chose to extend his studies by one year in order to
obtain joint JD/MBA degrees. Although the taxpayer had worked as a summer associate for
law firms while a member of the state bar, the IRS contended that he did not establish that his
assignments and compensation were different from other full-time associates (not practicing
attorneys) and thus he had not established himself in his trade or business of practicing law.
The Court reasoned that it was necessary to break the education cycle and engage in a trade or
business before deducting education expenses.

In 2004, the Tax Court again denied a deduction for education expenses in the pursuit
of an MBA degree by a financial analyst working in the investment banking industry (TC
Summary Opinion 2004-107). An employee with a BA degree working as a financial analyst
quit her job in the investment banking industry and pursued an MBA full-time. She
concluded that it was impractical to continue working because of the long hours required of
financial analysts. To be promoted from “financial analyst” to “associate” the investment
banking industry required an MBA degree. Upon completion of her MBA degree, she did not
return to the investment banking industry, but instead took a position in the manufacturing
industry in a “General Management Program.” Acceptance into the program required an
MBA or equivalent. The taxpayer argued that the expenditures were incurred to maintain and
improve her skills and that the expenditures were required as a condition to the retention of an
existing employment relationship, status, or rate of compensation. She focused on the
similarities between her duties as a financial analyst and those of the associates at the
investment banking firms where she worked.

The IRS contended that the fact that an individual is already performing service in an
employment status does not establish that she has met the minimum educational requirements
for qualification in that employment. Although the duties of analyst and associate overlapped,
the analyst position was a subordinate temporary position lasting for a maximum of three
years. In its decision, the Court cited Income Tax Regs §1.162-5(b)(3) stating that the
expenses were not deductible even though the taxpayer did not intend to enter a new field of
endeavor, and even though the taxpayer’s duties were not significantly different after the
education from what they had been before the education. In each of these cases, the courts
were clear that the costs of pursuing an MBA degree were not deductible education expenses.
The income tax regulations, under Section 1.162-5(b)(3), disallow a deduction for education
expenses if the educational study qualifies the taxpayer for a new trade or business, even
though the studies are required by the employer, and the taxpayer does not intend to enter a
new field of endeavor, or even though the taxpayer’s duties are not significantly different after
the education from what they had been before the education.

IV. ALTERNATIVES TO THE §162 DEDUCTION

In addition to IRC §162, there are several other provisions for exclusions, deductions,
or credits that might be available while pursuing an MBA degree. Listed below is a brief
recap of these provisions:
• IRC §127 allows qualified employer-paid educational costs. However, the company
must have a separate written educational assistance plan, the assistance must be for the
exclusive benefit of its employees, and the exclusion is subject to a $5,250 annual
ceiling.
• For qualified employee-paid educational costs, an MBA-pursuing taxpayer may be
eligible for the lifetime learning credit under IRC §25A. This credit is phased out for
higher income taxpayers.
• An above-the-line deduction of up to $4,000 for qualified educational expenses is
available. The deduction is available for obtaining a basic skill, an improvement over
the §162 deduction discussed in the previous section. It is disallowed for taxpayers
who choose the IRC §25A credit for the tax year. The deduction is phased out for
higher-income taxpayers, but may be more attractive for many accounting executives
since the phase-out begins at a higher income level than for the IRC §25A credit.
However, the phase-out begins at a lower income level than for the §162 deduction.
• Congress has created an income exclusion associated with state college tuition
prepayment programs under IRC §529 to exclude the earnings of contributed funds if
they are used for qualified higher education costs.
• The interest on Series EE U.S. government savings bonds may be excluded from
income under IRC §135 if the proceeds are used to pay qualified higher education
expenses.
• IRC §530 allows a nondeductible contribution of $2,000 to an education IRA.
Earnings are excluded from taxable income and distributions to pay qualified higher
education costs are generally made tax-free.

Several restrictions apply to most of the above incentives. First, phase-outs of the credits
and deductions may reduce or eliminate the benefit for middle-income and high-income
taxpayers. Many accounting executives who pursue an MBA may not be able to take
advantage of the benefits because their salaries are above the phase-out levels. Second,
taxpayers cannot “double-dip.” That is, a deduction for an expenditure can be taken under
only one provision. Third, deductions created by the provisions of the Economic Growth and
Tax Relief Reconciliation Act of 2001 expire in 2010. Fourth, specific criteria must be met
for some provisions. For example, educational savings bonds must be issued after December
31, 1989 to individuals who are at least 24 years old at the time of the issuance. Finally, some
of the provisions are not available to taxpayers who are married, filing separately. For
example, the $4,000 above-the-line deduction and the lifetime learning credit are not available
to those who choose to file married, filing separately.
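
Most of the phase-outs noted above follow the same linear mechanics: a benefit is reduced ratably as modified AGI moves through a statutory range. The sketch below uses placeholder thresholds; the actual dollar ranges are set by the Code and adjusted over time, so this is illustration only, not tax advice:

```python
def phased_benefit(max_benefit, magi, phaseout_start, phaseout_end):
    """Linear phase-out of a tax benefit over a MAGI range.

    Thresholds here are placeholders, not the statutory figures;
    the Code sets (and periodically adjusts) the actual ranges.
    """
    if magi <= phaseout_start:
        return max_benefit          # full benefit below the range
    if magi >= phaseout_end:
        return 0.0                  # fully phased out above the range
    fraction = (phaseout_end - magi) / (phaseout_end - phaseout_start)
    return max_benefit * fraction

# Illustrative only: a $2,000 credit phased out over a $45k-$55k MAGI range
print(phased_benefit(2000, 50_000, 45_000, 55_000))  # prints 1000.0
```

This mechanic is why many accounting executives fall out of these provisions entirely: once MAGI exceeds the phase-out ceiling, the benefit is zero regardless of the expenses incurred.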

V. CONCLUSION

An MBA degree can be an important asset to accounting professionals. Today’s
business environment requires expertise in areas beyond accounting (Messmer
1998). The MBA can provide accounting professionals with the opportunity to develop and
enhance their skills in order to face the challenges of a global economy and to ensure that their
organizations meet their strategic visions and objectives (Financial Management 2002).
Congress has provided various tax provisions whereby the MBA-seeking accounting
professional can deduct educational expenses; however, many of these laws limit the
deductible amount as well as the type of expenses that are deductible. Therefore, MBA
students typically utilize §162 and deduct most of their education expenses as an itemized
deduction subject to the two percent of AGI limitation.

Although the IRS and Courts have denied the deduction for several business
professionals and an attorney, will accounting professionals who are already in the work force
be able to argue that an MBA improves their skills, but does not qualify them for a new trade
or business? For those who become CPAs upon completion of an MBA, will the IRS
challenge that the MBA has simply allowed them to meet the minimum educational standards
for an existing job? Will the newest rulings denying the deductibility of MBA expenses
change the landscape of our graduate business schools? Or will accounting executives see the
MBA degree as a benefit to their professional career that surpasses the additional costs
resulting from their possible inability to deduct their expenses? Only time will tell.

REFERENCES

Financial Management. 2002. New MBA Targets CIMA Members. April.
Internal Revenue Code of 1986.
Johnson, Terrence L. 1997. Gaining the Executive Edge. Black Enterprise. New York: May;
27(10): 103.
Kim, Jane J. 2004. M.B.A. Students May Lose Tax Break. The Wall Street Journal. August
17: D2.
Letter Ruling 8714064, January 8, 1986.
Messmer, Max. 1998. The Value of an MBA. Management Accounting. Montvale: October;
80(4): 10.
Quacquarelli, Nunzio and Monisha Saldanha. 2004. MBA Salaries Rise Worldwide.
Westchester County Business Journal. September 11.
T.C. Memo 2003-68 (Weyts).
T.C. Summary Opinion 2002-49, Roger Steven Lewis v. Commissioner.
T.C. Summary Opinion 2004-107 (McEuen).

CHAPTER 3

ADVERTISING AND MARKETING COMMUNICATIONS

THE EFFECTS OF AMBIENT SCENT ON PERCEIVED TIME:
IMPLICATIONS FOR RETAIL AND GAMING

John E. Gault, West Chester University of Pennsylvania
jgault@wcupa.edu

ABSTRACT

Olfaction has long been regarded as a very powerful and enduring sense, yet attempts
to harness the power of scent have proved rather elusive. While manipulating ambient scent
may influence consumptive behavior, consistently predicting the presence and magnitude of
effects has proved rather difficult. Casino operators accept the risk, and regularly use scent to
alter perceived time to induce gamblers to play longer. The scientific evidence to support this
use of scent is scant and mixed. The current study aims to reduce the ambiguity by
experimentally testing how ambient scent influences the enjoyment of, and willingness to
remain in, an environment. The eight treatment conditions cross pleasant/unpleasant
scents of low/high arousal in situations of low/high involvement. Results indicate significant
effects for scent pleasantness and subjects’ level of involvement. There was, however, no
significant difference for arousal.

I. INTRODUCTION

People respond emotionally to their surroundings (Machleit and Eroglu, 2000).
Retailers may, therefore, create environments which induce emotions and moods, and these
feelings can, in turn, influence consumptive responses (Barnes and Ward 1995). Empirical
research indicates scent-induced affective states may alter shopper perceptions of time spent
in the store (Kellaris and Kent, 1992), and overall shopper satisfaction (e.g., Yalch and
Spangenberg, 1993; Sherman et al., 1997). The effects, however, were not always as
predicted (Yalch and Spangenberg, 2000). To help reduce this uncertainty, the current study
examines the influence of a scent’s hedonic (pleasure-displeasure) and activation (arousing-
sleepy) dimensions on perceived time and overall satisfaction with the environment.
Mehrabian and Russell’s (1974) Stimulus-Organism-Response (S-O-R) model is used in this
study as it is often regarded as preeminent in environmental psychology (Machleit and Eroglu,
2000). The model has been successfully adopted for retail atmospherics research (e.g.,
Russell and Pratt 1980; Baker et al., 1992; Spangenberg et al., 1996). As illustrated in Figure
1, Mehrabian and Russell found environmental stimuli (S) may induce affective feeling states
(O), which in turn mediate approach-avoidance responses (R).

Environmental Stimuli (S)        Transient Affective        Approach/Avoid
(e.g., ambient scent)      -->   States (O)           -->   Responses (R)
                                 (pleasure, arousal,        (e.g., remain in or
                                 mood)                      leave environment)

Figure 1. The Basic S-O-R Model

Further research on involvement in attitude change (Petty, Cacioppo, and Schuman
1983) and situational effects (Belk 1974, 1975, 1984), suggests situational influences such as
task involvement may moderate the relationship between environmental stimuli and
consumptive responses. Purchase decision involvement (PDI) is therefore a potential
moderator of affective induction. The literature also suggests a scent’s activation (arousal)
property may interact with involvement to influence the intensity of any environmentally
induced affective state.

The Role of Involvement and Arousal


Cacioppo and Petty’s (1984) Elaboration Likelihood Model (ELM) suggests
environmental stimuli are more effective in inducing affective states in situations of low rather
than high involvement. Similarly, Baker (1987, 2002) found peripheral scent cues were
diminished or even nullified by competing cognitively processed stimuli. Mazursky and
Ganzach (1998) also indicated positive affective transfer was more likely in situations of low
involvement. The current study builds on this earlier research, and proposes that scent has
greater influence on perceptions of time and the environment in situations of low
involvement, and that arousal may intensify this relationship.

Arousal theory states that activation (arousal) reinforces other affective feeling states
(Gilligan and Bower 1984) and so may influence evaluations of an environment (Russell and
Pratt 1980) and perception of time spent in the environment (Kellaris and Kent, 1992).
Olfactory researchers Ludvigson and Rottman (1989) found scents low in activation induced a
relaxing effect. Arousal’s influence may be tested by comparing a pleasant/arousing scent to
a pleasant/relaxing one. ELM and arousal theory suggest increased positive responses to a
pleasant/arousing scent vs. the no experimental scent control in situations of low PDI. Effects
of a pleasant/relaxing scent should lie between those of a pleasant/arousing scent and the no
scent control. PDI will moderate the relationship between a scent’s pleasant/arousal qualities
and a shopper’s enjoyment of and willingness to remain in an environment as illustrated in
Figure 2.

II. HYPOTHESES

Evaluation of the Surrounding Environment


Eaton (1989) proposed that cues in the environment aid individuals in making
inferences about an environment’s contents. The mechanism by which this inference occurs
is described as affective transfer (Mehrabian and Russell 1974; Russell and Pratt 1980).
Image research suggests that the retail atmosphere is comprised of aural, visual, tactile, and
olfactory elements (Kotler 1973; Baker et al. 1994). Therefore, in accordance with research
on attitude change, the ELM, and arousal theory, the following hypotheses were developed:

H1a: Evaluations of the environment are higher in the presence of pleasant ambient
scents than in the no-experimental scent condition under low rather than high PDI.

H1b: Evaluations of the environment are higher in the presence of pleasant/arousing
ambient scent than pleasant/relaxing scent under conditions of low rather than high
PDI.

Conversely, the introduction of an unpleasant ambient scent is expected to lead to a more
negative evaluation of the surrounding environment under low PDI conditions:

H1c: Evaluations of the environment are lower in the presence of unpleasant ambient
scent than in the no-experimental scent (control) under low rather than high PDI.

FIGURE 2

[Figure 2 plots Purchase Decision Involvement (PDI, low to high) on the horizontal axis
against satisfaction with, and likeliness to remain in, the (scented) environment on the
vertical axis. Four lines appear, highest to lowest at low PDI: high pleasure/high arousal
scent; high pleasure/relaxing scent; no experimental scent (control); unpleasant scent.
The differences between scent conditions diminish as PDI increases.]

Figure 2. Moderating Effects of PDI on the Relationship between Scent’s Pleasant/Arousal
Dimensions and Shoppers’ Enjoyment of and Willingness to Remain in a (Scented)
Environment

Evaluation of Perceived Time

Spangenberg, Crowley, and Henderson (1996) found that exposure to pleasant ambient
scent caused shoppers to perceive spending less time shopping than had actually elapsed. The
current study re-tests this effect, and proposes that: 1) with unpleasant ambient scent,
perceived time is greater than actual time, and 2) perceived time is influenced by arousal and
involvement. Donovan and Rossiter (1982) found that store-induced arousal and excitement
increased time spent in the store and willingness to interact with store personnel. These
findings and ELM theory suggest:

H2a: Perceived time is less than actual time in the presence of pleasant ambient scent
when purchase decision involvement is low rather than high.

H2b: Perceived time falls further below actual time for pleasant/arousing scents than
for pleasant/relaxing scents when purchase decision involvement is low rather than
high.

Conversely, decreasing arousal in the presence of unpleasant ambient scent is expected to
increase perceived time as a percentage of actual time:

H2c: Perceived time is greater than actual time in the presence of unpleasant ambient
scent when purchase decision involvement is low rather than high.

III. METHODOLOGY

This section describes the methodology used to test the hypotheses, including the
research design, subjects, setting, scent stimuli, cover story, and statistical models.

Research Design & Statistical Models


A laboratory experiment was used to test for causality and to control for potential
atmospheric confounders such as heat, light, noise, and crowding. A 4 x 2 (scent x PDI)
between subjects factorial design was employed (Figure 3).

                     Pleasant/        Pleasant/Less-   Unpleasant/      No-Scent
Scent (between Ss)   Arousing Scent   Arousing Scent   Arousing Scent   Control
PDI (between Ss)     High     Low     High     Low     High     Low     High     Low
                     Cell 1   Cell 2  Cell 3   Cell 4  Cell 5   Cell 6  Cell 7   Cell 8

Figure 3. Factorial Design: (4 Scent conditions) by (2 Involvement levels)


Disguised as a shopper survey, the experiment permitted a controlled examination of the mediating
effects of affective states and the moderating influence of PDI. Each of the eight cells was
exposed to only one of the four levels of ambient scent and one of the two levels of PDI. The
moderating influence of involvement was represented as the interaction of PDI level and
Scent Condition in ANOVA and MANOVA analyses. Univariate comparisons were made for
individual response measures, and analyses of results for each subcategory of dependent
responses were reported as MANOVA overall F-tests to control for Type I error. MANOVA
is the appropriate statistical model since the experiment tests the effects of two independent
factors (ambient scent and PDI) on two dependent variables (evaluations of the environment
and perceived time). Additionally, should the dependent variables be correlated, MANOVA
is more powerful than employing several separate univariate tests.
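The MANOVA overall tests described above rest on Wilks’ lambda, computed from between- and within-groups sums-of-squares-and-cross-products (SSCP) matrices. A minimal sketch with simulated placeholder data (cell means loosely echo the reported evaluation and time results; nothing here is the study’s actual data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated placeholder data: 4 scent conditions, 2 dependent measures per
# subject (room evaluation; perceived time as % of actual). Cell means and
# sizes are illustrative assumptions, not the study's data.
groups = {
    "pleasant/arousing":  rng.normal([57, 74], [8, 10], size=(30, 2)),
    "pleasant/relaxing":  rng.normal([56, 80], [8, 10], size=(30, 2)),
    "unpleasant":         rng.normal([39, 110], [8, 10], size=(30, 2)),
    "no-scent control":   rng.normal([47, 95], [8, 10], size=(30, 2)),
}

all_data = np.vstack(list(groups.values()))
grand_mean = all_data.mean(axis=0)

# Between-groups (H) and within-groups (E) SSCP matrices
H = np.zeros((2, 2))
E = np.zeros((2, 2))
for g in groups.values():
    d = (g.mean(axis=0) - grand_mean).reshape(-1, 1)
    H += len(g) * (d @ d.T)
    centered = g - g.mean(axis=0)
    E += centered.T @ centered

# Wilks' lambda: smaller values indicate a stronger multivariate group effect
wilks = np.linalg.det(E) / np.linalg.det(H + E)
print(f"Wilks' lambda = {wilks:.3f}")
```

In practice the statistic is converted to an approximate F for significance testing; the sketch shows only the multivariate bookkeeping that distinguishes MANOVA from separate univariate ANOVAs.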

Subjects and Setting


Nearly 700 students of a mid-size northeastern public university participated. Smell
acuity decreases with age (Deems and Doty 1987) and so restricting the sample to young
adults should reduce the within-cell differences, and provide a stronger test of theory (Calder,
Phillips & Tybout, 1981). A modern on-campus focus group facility equipped with its own
HVAC system provided excellent atmospheric control. Audio- and video-taping capability
allowed for enhanced review of all treatment cells, and the facility greatly enhanced the
credibility of the shopper survey cover story. The site also provided excellent cover for the
presence of fragrance, with research facilitators simply mentioning that the carpet had just
been cleaned.

Cover Story and Shopping Instructions


The cover story and shopping instructions were varied to achieve low and high PDI:

High PDI Cover Story – A professor in a white lab coat told the students that a new type of
vending machine, which would accept debit cards in addition to cash, was about to be test
marketed on their campus, and that their responses would determine the product lines sold.

Low PDI Cover Story – A casually dressed undergraduate lab assistant told the
students a school supply company was interested in marketing its products to students in other
states, and would not be on their campus for another two years. No mention was made of the
new vending machine, and the students were asked not to spend too much time on their
responses.

Scent Selection
Scent recommendations were made by experts at a well-known smell institute housed
at a nearby major research university, and by senior project scientists at the world’s largest
producer of flavors and fragrances. Pre-testing quickly narrowed the selection to two pleasant
scents, clementine and vanilla, known for inducing arousal and relaxation, respectively.
Galbanum was selected as the unpleasant scent for its potent yet non-toxic noxious odor.

IV. ANALYSIS & RESULTS

Affective Transfer and Evaluation of the Environment


The transfer of positive/negative feelings induced by ambient scent to the evaluations
of the environment is summarized in Table I.

Table I. Results of Hypothesis Testing of Affective Transfer Effects

Hypothesized vs. Observed Shopper Responses (Room Evaluation; scale: 12 to 84)

  H1a (pleasant scents > no scent):
      Clementine 57.74 > 47.05**; Vanilla n.s. (p = .075);
      Pleasant scents combined 55.95 > 47.05**
  H1b (pleasant/arousing > pleasant/relaxing): n.s.
  H1c (unpleasant/arousing < no scent): Galbanum 39.08 < 47.05*

  * p < .05; ** p < .01; n.s. = not significant

Hypotheses H1a and H1c are supported, as the facility’s environmental evaluations
were significantly higher in the pleasant/arousing scent condition and significantly lower in
the unpleasant scent condition. Although the difference between pleasant/relaxing vanilla and
the no-scent control was not significant (p = .075), the combined mean of vanilla and
clementine (i.e., pleasant scents) was significantly higher than that of the control group at the
p = .01 level, so H1a is supported. H1b was, however, rejected, as subjects exposed to
pleasant/arousing clementine reported facility evaluation scores not significantly different
from those exposed to pleasant/relaxing vanilla.

Affective Transfer and Perceived Time


The main effect of scent on perceived shopping time was assessed by comparing self-
reported perceived time and actual elapsed time. As summarized in Table II, the effect of

scent pleasantness on perceived time was not significant for pleasant/relaxing vanilla (H2a) or
the pleasant/arousing clementine scent (H2b) vs. the no scent control. The only significant
difference was for the unpleasant/arousing galbanum scent condition (H2c).

Table II. Results of Hypothesis Testing: Main Effects of Scent Pleasantness on
Perceived Time

Hypothesized vs. Observed Shopper Responses
(PerByAct: perceived shopping time as % of actual)

  H2a (pleasant/relaxing scent < no scent): not significant
  H2b (pleasant/arousing < pleasant/relaxing): not significant
  H2c (unpleasant/arousing scent > no scent): p < .05

Bonferroni comparisons of perceived shopping time as a percentage of actual time revealed
significant differences for each of the pleasant scent cells compared to the no-scent control
when situational PDI was taken into consideration. Subjects in the high PDI treatment cells
perceived time to pass faster as a percentage of actual time (73.7% of actual) than subjects in
the low PDI cells (87.3% of actual). Additionally, subjects exposed to unpleasant scent
reported perceived time to be longer in each PDI condition than in either the no-scent or
pleasant scent cells. There was also a significant difference between treatments for the
extreme comparison of unpleasant and pleasant scents: time passed significantly faster for
subjects exposed to pleasant scents than for those exposed to unpleasant scents. This
perceptual difference held for both low and high PDI.
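The Bonferroni procedure above simply divides the family alpha across the pairwise comparisons. A sketch under assumed data (the cell means are illustrative placeholders, and a large-sample z approximation stands in for the exact tests the authors presumably ran):

```python
import math
from itertools import combinations

import numpy as np

rng = np.random.default_rng(2)
# Hypothetical perceived-time-as-%-of-actual values per scent cell; the
# means and cell sizes are illustrative assumptions, not the study's values.
cells = {
    "pleasant/arousing": rng.normal(74, 15, 60),
    "pleasant/relaxing": rng.normal(80, 15, 60),
    "unpleasant":        rng.normal(112, 15, 60),
    "no-scent control":  rng.normal(95, 15, 60),
}

pairs = list(combinations(cells, 2))
alpha_adj = 0.05 / len(pairs)  # Bonferroni: family alpha split over 6 comparisons

for a, b in pairs:
    x, y = cells[a], cells[b]
    # Large-sample z approximation to the two-sample mean comparison
    se = math.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    z = (x.mean() - y.mean()) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    verdict = "significant" if p < alpha_adj else "n.s."
    print(f"{a} vs {b}: p = {p:.4f} ({verdict})")
```

The correction guards against Type I error inflation across the six pairwise tests, which is the same concern the MANOVA overall F-tests address at the multivariate level.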

V. CONCLUSION

Theoretical Implications
This investigation advances the application of theory by expanding the research
linking environmentally induced feelings and consumptive behavior. The study provides
evidence that situational purchase decision involvement (PDI) was a significant moderator of
the relationship between the underlying dimensions of ambient scent and shoppers’ affective
feeling states, and the subsequent transfer of these feelings to evaluations of the environment
and perceived time.

Managerial Implications
Retailer choice may precede brand choice (Darden 1983), and atmospherics plays a
key role in attracting customers to a particular establishment (Kotler 1973). With many
decisions finalized at point-of-purchase (Sarel 1981; Keller 1987), lighting, music, color, and
scent may have more immediate effects on decision making than non point-of-purchase
marketing inputs such as radio, print and television advertising (Baker et al. 1994). The
results for unpleasant scent suggest that when offensive odors are present (e.g., from tobacco
smoke, nearby sewage plants) managers should consider using pleasant scent as an odor
masking technique. For casino operators, the results indicate using pleasant scent to extend
playing time is more likely with lower involvement games such as slot machines or wheels of
chance, than with higher involvement games such as black jack, poker, craps, and roulette.
Finally, smaller casinos with lower service levels and client interaction may benefit
significantly from improved atmospherics.

REFERENCES

Baker, J., A. Parasuraman, D. Grewal, & G. B. Voss (2002), “The Influence of Multiple Store
Environment Cues on Perceived Merchandise Values & Patronage Intentions.” Journal
of Marketing, Vol.66 (April), 120-141.
Baker, J., & A. Parasuraman (1994), “The Influence of Store Environment on Quality
Inferences and Store Image,” Journal of Academy of Marketing Sciences, 22 (4) 328-
339.
Belk, R. (1975), “Situational Variables and Consumer Behavior,” Journal of Consumer
Research, 2 (December), 157-164.
Machleit, Karen A. and Sevgin A. Eroglu (2000), “Describing and Measuring Emotional
Response to Shopping Experience.” Journal of Business Research, Vol. 49, Issue 2,
(August), 101-111.
Mazursky, D. and Y. Ganzach (1998), “Does Involvement Moderate Time-Dependent Biases
in Consumer Multi-Attribute Judgment?” Journal of Business Research, Vol. 41, 95-
103.
Mehrabian, A. and J. A. Russell (1974), An Approach To Environmental Psychology.
Cambridge, MA: MIT Press.
Michon, R., J. Chebat, and L. Turley (2004), “Mall Atmospherics: The Interaction Effects of
Mall Environment on Shopping Behavior,” Journal of Business Research, Vol. 57, 883-
892.
Petty, R. E., J. T. Cacioppo, and D. Schuman (1983), “Central and Peripheral Routes to
Advertising Effectiveness: The Moderating Role of Involvement,” Journal of
Consumer Research, 10 (Sept.), 135-146.
Petty, R. E., and J. T. Cacioppo (1986), Elaboration Likelihood Model of Persuasion. In L.
Berkowitz (Ed.), Advances in Experimental Social Psychology, 19, NY: Academic
Press.
Spangenberg, E., A. Crowley and P. Henderson (1996), “Improving the Store Environment:
Do Olfactory Cues Affect Evaluations and Behaviors?” Journal of Marketing, 60
(April), 67-80.

THE RELATIONSHIP BETWEEN AGE, EDUCATION, GENDER,
MARITAL STATUS AND ETHICS

Ziad Swaidan, University of Houston-Victoria


swaidanz@uhv.edu

Peggy A. Cloninger, University of Houston-Victoria


cloningerp@uhv.edu

Mihai Nica, Jackson State University


Mihai.P.Nica@jsums.edu

ABSTRACT

This research proposes that personal factors influence consumer ethics and are
important variables that help provide a theoretical base for designing more effective marketing
strategies. Specifically, this study explores the relationship between four personal variables
(i.e., age, education, gender, and marital status) and the Muncy and Vitell consumer ethics
model (i.e., illegal, active, passive, and no harm). This study obtained and analyzed data from
seven hundred sixty-one African-Americans. The findings indicate that females and married
participants were more sensitive to ethical issues than males and single consumers.

I. INTRODUCTION

Studying the relationship between personal factors and consumer ethics is vital for
marketing decision makers. Demographic segmentation is the most popular method to
segment consumer markets. One reason that demographic segmentation is so popular is that
consumer wants and needs are often associated with demographic characteristics.
Demographic variables are also easier to measure than other variables such as psychographic
variables. This research proposes and tests the proposition that demographics also influence
consumer ethics. Bartels (1967, p. 21) defined ethics as “a standard by which business action
may be judged ‘right’ or ‘wrong.’” Standards differ from one consumer to another, and so
actions regarded as “right” by one consumer may conflict with and be judged unethical by
another consumer. Thus, this research contends that personal factors are important variables
that influence a consumer’s judgment about the “rightness” or “wrongness” of consumer
dealings. In summary, this research proposes that a greater understanding of how
demographics influence consumer ethics will allow marketers to develop better strategies that
include consumers’ ethical characteristics. This study explores the relationship between
consumer ethics and four important demographic variables: age, education, gender, and
marital status.

II. DEMOGRAPHIC FACTORS AND CONSUMER ETHICS

Many studies have investigated ethics in the marketplace; however, most of these
studies have focused primarily on the seller side of the buyer/seller dyad. After reviewing
research in marketing ethics, Murphy and Laczniak (1981) concluded that the vast majority of
studies had examined ethics of businesses. Past research emphasized that what we know
about consumers’ ethical decision-making is still very limited (Vitell, Singhapakdi, &
Thomas, 2001). In short, relatively few studies have examined consumer ethics in the

marketplace, yet consumers are the most important component of the business process.
Ignoring consumer ethics in research may result in the development of wrong marketing
strategies since all aspects of consumer behavior (e.g., the acquisition, use and disposition of
goods) have an integral ethical component. In this study, the focus is on ethics of final
consumers.

III. HYPOTHESES

Hunt and Vitell (1986) in their general theory of marketing ethics proposed that
personal characteristics affect individuals’ ethical beliefs and ethical decisions. Vitell (2003)
encouraged researchers to explore the relationship between demographic variables and
consumer ethics. Thus, several demographic characteristics (age, education, gender, and
marital status) have been included in this study.

Age. Evidence suggests that consumer ethics change with age. Past research has
indicated that older individuals are more ethical than younger ones (e.g., Vitell, 2003). For
example, Fullerton et al. (1996) found that younger consumers were more accepting of
unethical behavior, Rawwas and Singhapakdi (1998) found that adults 20-79 years old were
more ethical than teenagers, Ruegger and King (1992) found that older students were more
ethical than younger ones, and Serwinek (1992) found that older workers had stricter
interpretations of ethical standards. Similarly in a study of Japanese consumers, Erffmeyer et
al. (1999) found that younger Japanese consumers were more relativistic and that they tended
to perceive questionable consumer activities as less wrong. These findings are in keeping
with Kohlberg’s (1984) suggestion that mature people have a “higher level” of moral
reasoning than younger people. Therefore, this research proposes:

H1: Older consumers will be less tolerant of questionable consumer activities than their
younger counterparts.

Education. Past research has suggested that education is associated with consumer
ethical beliefs. A great deal of research has found that more educated subjects tend to make
more ethical decisions (Goolsby & Hunt, 1992; Kelley, Ferrell, & Skinner, 1990). Similarly,
Browning and Zabriskie (1983) found that purchasing managers with more years of education
viewed gifts and favors as more unethical than less educated managers. These findings are
consistent with Kohlberg’s (1984) assertions that exposure to, and interaction with, more
sophisticated and complex moral situations increases one’s ability to render more appropriate
moral decisions. In short, education improves one’s ability to make moral judgments.
Therefore, this study proposes:

H2: Consumers with higher levels of education will be less tolerant of questionable consumer
activities than their counterparts with lower levels of education.

Gender. The variable gender has frequently been researched (Ford & Richardson,
1994) and found to be related to ethical beliefs (Vitell, 2003). Overall, the evidence strongly
suggests that women are less tolerant of ethically questionable behaviors than men (Franke et
al. 1997). In studies of students, for example, Beltramini, Peterson, and Kozmetsky (1984)
established that female students were more concerned with ethical issues than male students
were. Singhapakdi (2004) found that male students tend to be less ethical in their intentions
than female students. And Ruegger and King (1992) found that female business students tend
to be more ethical than male business students in their evaluation of different hypothetical

business situations. In studies of managers and marketers, Chonko and Hunt (1985) reported
that female managers noticed more ethical problems than males did. Ferrell and Skinner
(1988) reported that female marketing researchers exhibited higher levels of ethical behavior.
And Jones and Gautschi (1988) reported that females were less likely to be loyal to their
company in an ethically questionable environment. Women have also been reported to
demonstrate higher levels of moral reasoning than men (Loe & Weeks, 2000) and to be more
critical of ethical issues than men (Whipple & Swords, 1992). Ang et al. (2001), for instance,
found that males were more likely to have favorable attitudes towards piracy than women.
Based on this evidence we hypothesize that,

H3: Female consumers will be less tolerant of questionable consumer activities than their
male counterparts.

Marital Status. Previous studies of the relationship between marital status and ethics
have yielded mixed results. For example, in studying academic misconduct, Rawwas and
Isakson (2000) did not find a relationship between marital status and ethics. Serwinek (1992)
revealed similar results when studying the ethical views of small businesses. Similarly,
marital status has failed to predict psychologists’ attitudes toward the ethicality of sexual
contact (Collins, 1999). On the other hand, other studies have reported that married
consumers have different ethics than unmarried, divorced, or widowed consumers. For
example, Erffmeyer, Keillor, and LeClair (1999) reported that married consumers were more
likely to be classified as either relativistic or Machiavellian than unmarried ones. In other
studies, divorced persons were found to have less relational trust than those who are married
(Hargrave & Bomba, 1993). Fournier (2000) found that a widow’s psychological marital
status, whether she perceives herself as still married or single-again, appears to be an
important predictor of questionable intimacy. Finally, Poorsoltan, Amin, and Tootoonchi
(1991) reported that married students are more conservative and moral than their unmarried
counterparts. The above research strongly suggests that marital status is deserving of further
study. Although the evidence is mixed, overall evidence appears to support the proposition
that married individuals tend to be more ethical than unmarried consumers. Consequently, we
hypothesize that:

H4: Married consumers will be less tolerant of questionable consumer activities than their
non-married counterparts.

IV. METHODOLOGY

Sample. The data used in this study were collected in the U.S. via a direct-interception
method. All surveys were hand distributed and collected using sealed boxes by marketing
research assistants from an American university. Assistants were trained to distribute and
collect surveys in simulated and actual settings before this study. The sampling occurred at a
variety of public locations (e.g., shopping areas) over a wide range of days and times. The
sample consisted of 761 consumers. Most of the participants were females (61.4%). Almost
half of the sample was married (49.8%). Many of the participants were between the ages of
25 and 34 (39.7%), 15.3% were between the ages of 18 and 24, 34.6% were between the ages
of 35 and 49, and 10.4% were over 50 years old. Over forty-two percent of the respondents
had four-year college degrees (42.8%), 32.4% of the respondents had less than four years of
college, and 24.7% had graduate degrees.

Measurement of the Constructs. Constructs in this study were measured using a one-
page questionnaire. The instrument consisted of two parts. The first part of the survey
measured the ethical beliefs of participants using the Muncy-Vitell questionnaire (hereafter
MVQ). The second part of the survey measured the demographics of the participants. The MVQ
was used to measure consumers’ beliefs regarding 26 statements that have potential ethical
implications. This questionnaire, developed by Muncy and Vitell (1992), has since been used
and validated by various studies (e.g., Kenhove, Vermeir & Verniers, 2001). Evidence
suggests that MVQ is a well validated measurement scale that is applicable for studying
ethical behaviors in a wide variety of situations (Chan et al. 1998). Responses to MVQ
statements were coded so that a high score indicates high ethical beliefs and low score
indicates low ethical beliefs. A five-point Likert scale with descriptive anchors ranging from
"strongly believe that it is NOT wrong" (coded 1), to "strongly believe that it is wrong" (coded
5) was used. The MVQ is categorized along four dimensions. The first dimension is
“benefiting from illegal activities” (ILLEGAL). Actions in this dimension are initiated by
consumers and are either illegal or likely to be perceived as illegal by most consumers. The
second dimension, “benefiting from questionable activities” (ACTIVE), is also one where the
consumer initiates the action. While these actions are not as likely to be perceived as illegal,
they are still morally questionable. The third dimension, “passively benefiting from
questionable activities” (PASV), is one where consumers benefit from sellers’ mistakes rather
than their own actions. Finally, the fourth dimension is “no harm/indirect harm questionable
activities” (NOHARM). These are actions that most consumers perceive as not resulting in
any harm and, therefore, many consumers perceive them as acceptable actions. Cronbach
alpha coefficients for the ILLEGAL, ACTIVE, PASV, and NOHARM dimensions suggest
that these dimensions are internally consistent.
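Internal consistency of a multi-item dimension like ILLEGAL is conventionally assessed with Cronbach’s alpha. A minimal sketch with simulated 5-point Likert responses (the item count, respondent count, and scores are placeholders, not the actual MVQ items):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of scores."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_var / total_var)

rng = np.random.default_rng(3)
# Simulated 5-point Likert responses to a 7-item dimension: a shared latent
# score makes the items correlate, as internally consistent items should.
# (7 items and 500 respondents are placeholder choices, not the MVQ's.)
latent = rng.normal(0.0, 1.0, (500, 1))
items = np.clip(np.round(3 + latent + rng.normal(0.0, 0.7, (500, 7))), 1, 5)
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Values above roughly .70 are the usual benchmark for treating a dimension’s item scores as a single composite, as the paper does for the four MVQ dimensions.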

V. DATA ANALYSIS

ANOVA was used to explore the relationship between the demographic variables and
consumer ethics. The four personal factors (i.e., age, education, gender, and marital status)
were the independent variables and the four dimensions of the MVQ (i.e., ILLEGAL,
ACTIVE, PASV, and NOHARM) were the dependent variables. As shown in Table 1, the
ANOVA results indicate that the relationships between the four age categories and the illegal,
active, passive, and no harm ethical dimensions are significant in this sample. Means of the
four age categories across the four consumer ethics dependent variables show that older
consumers reject illegal, active, passive, and no harm activities more than younger consumers.
For example, participants who are 50 years and older are the most sensitive, and consumers
who are between the ages of 18 and 25 are the least sensitive, to MVQ ethical statements.
These results support Hypothesis 1. ANOVA found that the relationships between education
categories and illegal, active, and passive activities are significant in this sample. More
educated consumers reject illegal, active, and passive activities more than less educated
consumers. The only relationship that was not significant was the relationship between
education and no harm activities. These results mostly support Hypothesis 2. In this
sample, means of females and males across the four consumer ethics variables indicate that
females reject illegal, active, passive, and no harm activities more than males. These results
support Hypothesis 3. Means of married and single consumers across the four ethics
variables indicate that married consumers reject illegal, active, and passive activities more
than single consumers. Again, the only relationship that was not significant was the
relationship between marital status and no harm activities. These results mostly support
Hypothesis 4.

Table 1: ANOVA Results (Means of the Dependent Variables)

Independent Variable        Illegal   Active   Passive   No harm
Age
  18-25                     3.84*     3.46*    3.42*     2.77*
  25-34                     4.11*     3.60*    3.57*     2.55*
  35-49                     4.25*     3.71*    3.80*     2.63*
  50 or Over                4.30*     3.67*    3.86*     2.77*
Education
  < 4 Years College         4.02*     3.55*    3.56*     2.59
  4 Years College           4.16*     3.64*    3.66*     2.63
  Graduate, Law, MD         4.29*     3.76*    3.84*     2.71
Gender
  Female                    4.20*     3.67*    3.74*     2.56*
  Male                      4.08*     3.57*    3.55*     2.74*
Marital Status
  Single                    4.05*     3.56*    3.54*     2.62
  Married                   4.24*     3.71*    3.79*     2.66

* Significant at p < .05; ** Significant at p < .10
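Each row of Table 1 comes from an ANOVA of one demographic factor on an MVQ dimension. The one-way F statistic behind those tests can be computed directly; in the sketch below the scores are simulated, with means echoing the gender row of Table 1 and group sizes matching the reported 61.4% female share of n = 761 (the standard deviation is an assumption, not a reported value):

```python
import numpy as np

def one_way_F(*groups):
    """F statistic for a one-way ANOVA across the given groups."""
    all_x = np.concatenate(groups)
    grand_mean = all_x.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_x) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

rng = np.random.default_rng(4)
# Simulated ILLEGAL-dimension scores by gender; means echo Table 1, group
# sizes reflect 61.4% of 761 respondents, spread (0.6) is an assumption.
female = rng.normal(4.20, 0.6, 467)
male = rng.normal(4.08, 0.6, 294)
print(f"F(1, 759) = {one_way_F(female, male):.2f}")
```

With two groups this F equals the square of the two-sample t statistic, which is why a gender ANOVA and a gender t-test yield the same significance verdict.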

VI. CONCLUSION

There is a strong belief that if marketers can develop a better comprehension of the
variety of individual factors that exist within their target markets, they will develop better
marketing mixes that meet the needs and wants of those markets. The findings of this
research provide strong support that older, more educated, female, and married consumers are
more sensitive to ethically questionable activities than younger, less educated, male, and
unmarried consumers. Each of these results has implications for international
marketers. The results presented in this study suggest that demographics are related to
consumer ethics and provide additional evidence that marketers should focus on the
demographics of individuals and individual subgroups when developing their marketing
strategies. The results of this study have important theoretical and practical implications. This
research confirms that marketers concerned with international ethical decision-making must
study demographic variables. While this study makes an initial contribution by exploring the
relationship between four personal factors and consumer ethics, much research remains to be
done to explore the relationship between personal characteristics and ethical orientations.

REFERENCES

Beltramini, R.F.; R.A. Peterson; and G. Kozmetsky. “Concerns of College Students Regarding
Business Ethics.” Journal of Business Ethics, 3, (3), 1984, 195-200.
Erffmeyer, Robert C.; Bruce D. Keillor; and Debbie Thorne LeClair. “An Empirical
Investigation of Japanese Consumer Ethics.” Journal of Business Ethics, 18, (1), 1999, 35-
50.
Ford, Robert C.; Woodrow D. Richardson. “Ethical decision making: A review of the
empirical literature.” Journal of Business Ethics, 13, (3), 1994, 205-221.
Franke, George R., Deborah F. Crown, and Deborah F. Spake. “Gender Differences in Ethical
Perceptions of Business Practices,” Journal of Applied Psychology, 82, (December),
1997, 920-34.

56
Kenhove, Patrick Van; Iris Vermeir; and Steven Verniers. “An Empirical Investigation of the
Relationships between Ethical Beliefs, Ethical Ideology, Political Preference and Need
for Closure,” Journal of Business Ethics, 32, (4), 2001, 347-361.
Muncy, James A.; and Scott J. Vitell. “Consumer Ethics: An Investigation of the Ethical
Beliefs of the Final Consumer,” Journal of Business Research, 24, (4), 1992, 297-311.
Rawwas, Mohammed Y. A.; and Anusorn Singhapakdi. “Do Consumers' Ethical Beliefs Vary
with Age? A Substantiation of Kohlberg's Typology in Marketing,” Journal of
Marketing Theory and Practice, 6, (2), 1998, 26-38.
Rawwas, M.Y.A.; and H. Isakson. “Ethics of Tomorrow’s Business Managers: The Influence
of Personal Beliefs and Values, Individual Characteristics, and Situational factors,” Journal
of Education for Business, 75, (6), 2000, 321-330.
Vitell, S. J. “Consumer Ethics Research: Review, Synthesis and Suggestions for the Future,”
Journal of Business Ethics, 43, 2003, 33–47.
Vitell, S. J.; Anusorn Singhapakdi; and James Thomas. “Consumer Ethics: An
Application and Empirical Testing of the Hunt-Vitell Theory of Ethics,” The Journal
of Consumer Marketing, 18, (2), 2001, 153-178.

A CONTENT ANALYSIS OF AN ATTEMPT BY VICTORIA’S SECRET
TO GENERATE BRAND MENTIONS THROUGH PROVOCATIVE DISPLAYS

John Mark King, East Tennessee State University


johnking@etsu.edu

Monica Nastase, East Tennessee State University


zmnn1@imail.etsu.edu

Kelly Price, East Tennessee State University


rankinkb@mail.etsu.edu

ABSTRACT

A content analysis of newspapers from around the world and transcripts of American
TV shows was performed to measure increases in brand mentions of Victoria’s Secret after
the company used provocative window displays in two shopping malls in Wisconsin and
Virginia in the fall of 2005. Results showed differences in press coverage after the
controversial tactic was employed. Brand mentions increased 15 percent overall, but there
appeared to be a media effect; frequency of brand mentions decreased 19 percent in
newspapers, but increased 38 percent in TV shows. Brand mentions were significantly more
frequently negative overall in the week after the displays were launched than the week before.
Brand mentions in both newspapers and television stories were significantly more negative
after the displays than before. The controversial strategy had no effect on whether the brand
mention landed on front pages or in story leads.

I. INTRODUCTION

In September 2005, Victoria’s Secret launched a new lingerie collection in the
storefronts of two shopping malls in Wisconsin and Virginia that featured mannequins in
provocative poses with ropes, pulleys, and a pole, described by the company as “back stage
sexy” window displays.

This study examined how effective this controversial PR strategy was for Victoria’s
Secret by measuring the frequency, placement, and tone of brand mentions in newspapers
from around the world and in transcripts of American television shows the week before the
controversial window displays were unveiled and the week after. The displays attracted
protesters of all ages who threatened to boycott the shopping centers.

The use of such controversial displays was likely to attract media coverage, but was
the change in the frequency of Victoria’s Secret brand mentions substantial? Did the brand
mentions appear more prominently in newspaper and television stories after the displays were
used? Was the media coverage more frequently negative or positive? Was there a difference
between newspaper and television coverage? This research aimed to answer these questions.
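Comparisons of tone frequencies before and after an event of this kind typically reduce to a chi-square test on a contingency table of mention counts. A sketch with invented placeholder counts (not the study’s data):

```python
# 2x2 contingency table: tone of brand mentions (negative vs. non-negative)
# in the week before and the week after the displays. Counts are invented
# placeholders, not the study's data.
observed = [[12, 48],   # week before: negative, non-negative
            [31, 38]]   # week after:  negative, non-negative

row_totals = [sum(r) for r in observed]
col_totals = [sum(c) for c in zip(*observed)]
total = sum(row_totals)

# Pearson chi-square: sum of (observed - expected)^2 / expected
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / total) ** 2
    / (row_totals[i] * col_totals[j] / total)
    for i in range(2)
    for j in range(2)
)
print(f"chi-square = {chi2:.2f}")  # df = 1; values > 3.84 reject at p < .05
```

A significant result here would indicate that tone distribution shifted between the two weeks, which is the comparison the abstract reports as significantly more negative after the displays.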

II. LITERATURE REVIEW

Store window displays, such as those employed by Victoria’s Secret, may be related to
store atmospherics, branding theory, mere exposure theory, and the use of controversial
strategies and tactics in some public relations efforts.

Kotler (1973) defined atmospherics as a conscious effort to design space for the
purpose of creating specific effects among buyers. Mehrabian and Russell (1974) introduced
the concept that consumers react to a store environment in one of two ways: approach or
avoidance. Donovan and Rossiter (1982) determined that as arousal increases, enjoyment,
money spent in the store, and time spent in the store also increase.

The physical attractiveness of the store has been shown to generate higher purchase
intentions (Baker, Levy, and Grewal, 1992). Many retail purchase decisions are made in the
store environment (Keller, 1987). It has been suggested that emotional responses produced by
the store environment can influence purchase intention (Donovan, Rossiter, Marcoolyn, and
Nesdale, 1994). Misuse of just one atmospheric element could have a negative impact on
purchase behavior (Turley and Chebat, 2002). Consumers will shop in an environment that is
unattractive, but they will spend less money (Sherman, Mathur, and Smith, 1997). If
consumers enjoy the environment, they are likely to re-patronize that environment (Wakefield
and Baker, 1998).

Victoria’s Secret planners may have been banking on branding theory when they used
the provocative displays in an attempt to attract media attention. This process has proved
effective whether it is used within a product class or across classes, and whether the resulting
publicity is negative or positive. Across-class branding (Van Auken and Adams, 2005, page
165) has been verified as an effective strategic option when one wants to anchor two products
with different advantages to the public. Negative publicity for a product has long been the
worst fear of professionals in the field (Weinberger, Allen, and Dillon, 1981, page 20), but
Victoria’s Secret planners may have disregarded this.

A company’s brand carries numerous characteristics that bear on marketing and public
relations efforts. Brand equity has been defined as the “marketing effects uniquely
attributable to the brand” (Keller, 1993). Branding has been proven to be important in
customer awareness (Rossiter and Percy, 1987), recall and recognition (Bettman, 1979),
image (Gardner and Levy, 1955), attitude (Wilkie, 1986), and association (Chattopadhyay and
Alba, 1988). It has also been determined that brands can be given attributes such as being
pioneering in the industry and being accountable for their actions and image (Gregory and
Sellers, 2002). Many consumers attribute a personality to the brands they know and use;
consumers place human qualities on brands (Belk, 1988), including clothing.

Brands that form strong connections with consumers may enter consumers’ minds as
memory. The larger the number of cues linked to the brand or information, the greater the
likelihood the information or brand will be recalled (Isen, 1992). Attaching cues to a brand is
also supported by mere exposure theory, which states that simply exposing consumers to
products may stimulate product purchases and that the more exposure one has to a stimulus,
the more one will tend to like it, and eventually choose it (Zajonc, 1968). “…Mere exposure
effects persist when initial exposures to brand names and product packages are incidental,
devoid of any intentional effort to process the brand information (Janiszewski, 1993, page
389).” Houston and Scott (1984, page 27) found a “…decay in attention to an advertisement
as advertising volume in an issue increases.” Furthermore, there have been studies on the
relationship between a few exposures and the impact they have. Lastovicka (1983, page 333)
tested the theory that three main psychological exposures affect the viewer: the first exposure
decodes the product; the second provides recognition and evaluation; and the third reinforces
the previous evaluation. Gibson (1996) later raised the issue of the single-exposure effect and
concluded that there is a growing body of evidence for the effectiveness of a single exposure
to a TV advertisement. “Behavioral response to advertising exposure is probably nonlinear,
like the attitudinal and cognitive responses found in numerous experimental studies,” however
“two to three exposures is optimum (Tellis, 1988, page 134).”

In addition to these theories, there is also the use of controversial public relations
strategies and tactics to attract media coverage. The planners for Victoria’s Secret were not
the first to use this method of inexpensive promotion. The Chrysler Group produced a
controversial ad for Dodge trucks in 2002 that received heavy media coverage (Halliday,
2002). A billboard advertisement for a movie produced considerable commotion in Los
Angeles over its controversial lines (PR Week, 2004). One of the most recent controversial
campaigns was conducted in 2005 by Benetton, which attracted not only free publicity but
also analysis. The major observation was that, without exception, all the marketing
representatives denied having any strategy and expressed their amazement at such media
coverage (PR Week, 2005). These supporting theories and practices in the fields of
advertising and PR provided the basis on which the efficacy of the Victoria’s Secret strategy
was analyzed.

III. HYPOTHESES AND EXPLORATORY QUESTIONS

H1: Using the controversial display method will result in more mentions in the media for
Victoria’s Secret brand, as compared to the mentions before the display.

H2: Using the controversial display method will result in more prominent page placement in
the media for Victoria’s Secret brand name, as compared to page placement before the event.

H3: Using the controversial display method will result in more prominent story placement in
the media for Victoria’s Secret brand name, as compared to story placement before the event.

H4: Using the controversial display method will result in more negative attributes in the
media for Victoria’s Secret brand name, as compared to the attributes before the event.

Exploratory question 1: Will using the controversial display result in more negative attributes
in the newspapers for Victoria’s Secret brand name, as compared to the attributes before the
event?

Exploratory question 2: Will using the controversial display result in more negative attributes
on American TV for Victoria’s Secret brand name, as compared to the attributes before the
event?

IV. METHODOLOGY

Researchers conducted a content analysis of newspapers from around the world and
transcripts of American TV shows, using LexisNexis. The articles and TV transcripts
analyzed covered September 8-21, 2005: one week before the displays and one week after.
The displays were launched on September 15, 2005. The unit of analysis was any mention of
the Victoria’s Secret brand. The independent variables were the brand name and the date.
Dependent variables were page placement, story position, and tone (positive, negative, or
neutral). An intercoder reliability test achieved 100 percent agreement on each of the
variables after two rounds.

V. RESEARCH

Table I supports the first hypothesis, which predicted that in the week after the
controversial displays, the Victoria’s Secret brand would get more mentions in the media than
before the exhibits. However, the increase from 116 to 133 mentions is not large,
representing a 15 percent increase overall.

Table I: Brand Name Mentions Before/After Displays

Date             Newspapers and TV Transcripts
Before display   116 / 46.6%
After display    133 / 53.4%
Note. N = 249.

An analysis of TV and newspaper mentions of the brand revealed a media effect, as
seen in Table II. Brand mentions in newspapers decreased from 48 to 39, a 19 percent change
overall, but brand mentions on TV increased from 68 to 94, a 38 percent increase overall.

The chi-square analysis shows that in the week before the controversial displays were
used, 58.6 percent of the brand mentions were on TV and 41.4 percent were in newspapers.
On the day the displays were launched and the week following, brand mentions on TV
climbed to 70.7 percent and dropped to 29.3 percent in newspapers.
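The percent changes reported here follow directly from the raw counts; a quick arithmetic check (a Python sketch, with counts taken from Tables I and II):

```python
def pct_change(before, after):
    """Percent change from a before-count to an after-count, rounded to a whole percent."""
    return round((after - before) / before * 100)

print(pct_change(116, 133))  # all media mentions: +15 percent
print(pct_change(48, 39))    # newspapers: -19 percent
print(pct_change(68, 94))    # TV: +38 percent
```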

Table II: Brand Mentions on TV/Newspapers Before/After Displays

Date             TV           Newspapers
Before display   68 (58.6%)   48 (41.4%)
After display    94 (70.7%)   39 (29.3%)
Note. N = 249; Chi-Square = 3.96; df = 1; p < .05.
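The chi-square statistics given in the table notes can be reproduced from the raw counts with a Pearson chi-square computed without a continuity correction, which is what matches the published values. A minimal pure-Python sketch:

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Table II: period (before/after) x medium (TV/newspapers)
print(round(chi_square([[68, 48], [94, 39]]), 2))        # 3.96, df = 1
# Table III: period (before/after) x tone (negative/neutral/positive)
print(round(chi_square([[6, 79, 31], [45, 85, 3]]), 2))  # 52.18, df = 2
```

Table IV reproduces the same way (7.26), and Table V comes to about 49.58, matching the reported 49.59 to within rounding.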

H2 and H3 were not supported; there was no effect on page placement or story
position.
Table III supports H4, which predicted that the brand mentions after the display would
become more negative. Negative mentions increased dramatically from 5.2 percent to 33.8
percent, while positive mentions decreased dramatically from 26.7 percent to 2.3 percent.

Table III: Tone of Story Before/After Displays

Date             Negative     Neutral      Positive
Before display   6 (5.2%)     79 (68.1%)   31 (26.7%)
After display    45 (33.8%)   85 (63.9%)   3 (2.3%)
Note. N = 249; Chi-Square = 52.18; df = 2; p < .001.

Table IV reflects the first exploratory question. Negative mentions increased from
12.5 percent to 23.1 percent, and positives dropped from 20.8 percent to 2.6 percent in
newspapers.

Table IV: Tone of Newspaper Story Before/After Displays

Date             Negative     Neutral      Positive
Before displays  6 (12.5%)    32 (66.7%)   10 (20.8%)
After displays   9 (23.1%)    29 (74.4%)   1 (2.6%)
Note. N = 87; Chi-Square = 7.26; df = 2; p < .05.

Table V reflects the second exploratory question. Negative mentions increased
significantly, from 0 to 38.3 percent, showing that the two displays had an even greater
impact in TV media than in newspapers.

Table V: Tone of TV Transcripts Before/After Displays

Date             Negative     Neutral      Positive
Before display   0 (0.0%)     47 (69.1%)   21 (30.9%)
After display    36 (38.3%)   56 (59.6%)   2 (2.1%)
Note. N = 162; Chi-Square = 49.59; df = 2; p < .001.

VI. CONCLUSION

Victoria’s Secret brand mentions on TV shows increased, while mentions in
newspapers decreased. The brand mentions were significantly more negative in newspapers
and in TV shows in the week after the displays were launched than in the week before. The
controversial displays had no effect on whether the brand landed on the front pages of
newspapers or in story leads, so the strategy had no power to land the brand name in more
prominent media placements.

If the strategists at Victoria’s Secret were more interested in quantity of exposure than the
quality of exposure, then the strategy had moderate success. If they were concerned about
how the media characterized the controversial displays, then the strategy largely failed.

REFERENCES

Auty, S., and Lewis, C. “Exploring Children's Choices: The Reminder Effect of Product
Placement.” Psychology & Marketing, 21, (9), 2004, 697-713.
Baker, J., Levy, M., and Grewal, D. “An Experimental Approach to Making Retail Store
Environment Decisions.” Journal of Retailing, 68, 1992, 445-460.
Belk, R. “Possessions and the Extended Self.” Journal of Consumer Research, 15,1988, 139-
168.
Bettman, J. An Information Processing Theory of Consumer Choice. Reading, MA:
Addison-Wesley, 1979.
“Billboards Turn Indie Flick into Social, Political Story.” PR Week, May 10, 2004, 3.
Bloemer, J., and Ruyter, K. “On the Relationship between Store Image, Store Satisfaction and
Store Loyalty.” European Journal of Marketing, 32, 1998, 499-513.
Chattopadhyay, A., and Alba, J. “The Situational Importance of Recall and Inference in
Consumer Decision Making.” Journal of Consumer Research, 15, 1988, 1-12.
Donovan, R., and Rossiter, J. “Store Atmosphere: An Environmental Approach.” Journal of
Retailing, 58, 1982, 34-57.

Donovan, R., Rossiter, J., Marcoolyn, G., and Nesdale, A. “Store Atmosphere and Purchase
Behavior.” Journal of Retailing, 70, 1994, 283-294.
Fink, E., Monahan, J., and Kaplowitz, S. “A Spatial Model of the Mere Exposure Effect.”
Communication Research, 16, (6), 1989, 746-769.
Gardner, B., and Levy, S. “The Product and the Brand.” Harvard Business Review, 33,
March-April, 1955.
Gibson, L. “What Can One TV Exposure Do?” Journal of Advertising Research, 36, (2),
1996, 9-18.
Gregory, J., and Sellers, L. “Building Corporate Brands.” Pharmaceutical Executive, 2002,
38-44.
Halliday, J. “Dodge Spot Courts Controversy; ‘Urinating boy’ by BBDO Scores High in
Awareness Study.” Advertising Age, 73, (40), October 7, 2002, 3.
Holmes, P. “Controversy Is an Acceptable by-Product for a Company That Stays True to Its
Core Values.” PR Week, September 5, 2005, 11.
Houston, F., and Scott, D. “The Determinants of Advertising Page Exposure.” Journal of
Advertising, 13, (2), 1984, 27-33.
Isen, A. “The Influence of Positive Affect on Cognitive Organization: Some Implications for
the Influence of Advertising on Decisions about Products and Brands. In A.
Mitchell, ed., Advertising Exposure, Memory and Choice. Hillsdale, NJ: Lawrence
Erlbaum, 1992.
Janiszewski, C. “Preattentive Mere Exposure Effects.” Journal of Consumer Research, 20,
1993, 376-392.
Keller, K. “Memory Factors in Advertising: The Effects of Advertising Retrieval Cues on
Brand Evaluations.” Journal of Consumer Research, 14, 1987, 316-333.
Keller, K. “Conceptualizing, Measuring and Managing Customer-based Brand Equity.”
Journal of Marketing, 57, 1993, 1-22.
Kotler, P. “Atmospherics as a Marketing Tool.” Journal of Retailing, 49, 1973, 48-64.
Lastovicka, J. “A Pilot Test of Krugman's Three-Exposure Theory.” In Percy, L. and
Woodside, A., eds., Advertising and Consumer Psychology. Lexington, MA: D.C.
Heath, 1983, 333-344.
Mehrabian, A., and J. Russell. An Approach to Environmental Psychology. MIT Press, 1974.
Obermiller, C. “Varieties of Mere Exposure: the Effects of Processing Style and Repetition on
Affective Response.” Journal of Consumer Research, 12, (1), 1985, 17-30.
Riondino, M. “Branding on the Web: a Real Revolution?” Journal of Brand Management, 9,
(1), 2001, 8-19.
Rossiter, J., and L. Percy. Advertising and Promotion Management. New York: McGraw-
Hill, 1987.
Sherman, E., Mathur, A., and Smith, R. “Store Environment and Consumer Purchase
Behavior: Mediating Role of Consumer Emotions.” Psychology and Marketing, 14,
1997, 361-378.
Tellis, G. “Advertising Exposure, Loyalty, and Brand Purchase: A Two-Stage Model of
Choice.” Journal of Marketing Research, 25, (2), 1988, 134-144.
Turley, L., and Chebat, J. “Linking Retail Strategy, Atmospheric Design, and Shopping
Behavior.” Journal of Marketing Management, 18, 2002, 125-144.
Van Auken, S., and Adams, A. “Validating Across-Class Brand Anchoring Theory: Issues and
Implications.” Journal of Brand Management, 12, (3), 2005, 165-176.
Wakefield, K., and Baker, J. “Excitement at the Mall: Determinants and Effects on Shopping
Response.” Journal of Retailing, 74, 1998, 515-533.

EFFECTIVENESS OF EMOTIONAL ADVERTISING:
A REVIEW PAPER ON THE STATE OF THE ART

Branko Cavarkapa, Eastern Connecticut State University
cavarkapab@hotmail.com

John T. Flynn, University of Connecticut
flynnj@easternct.edu

ABSTRACT

The purpose of this review was to determine if emotional advertising is effective in
brand/advertisement recall, purchase intentions, and in creating positive attitudes towards the
ad and the brand sponsor. The second objective was to determine if different emotions
develop a link to consumers’ attitudes, intentions, and feelings. The third objective was to
investigate gender differences in response to advertising with emotional context.

I. INTRODUCTION

Emotional advertising is a subject of growing interest to marketing managers and
researchers, given its potential for increasing the effectiveness of marketing communications.
Ever since Ernest Dichter introduced motivation research, marketers have been trying to
provide a more definitive answer to questions about how effective emotional advertising is in
influencing the reactions of consumers. Researchers in various areas of communications,
psychology, consumer behavior, and statistics are trying to create more effective methods and
techniques for designing advertising appeals. What has made this question very difficult to
answer is the fact that many different factors can influence how effective emotion-based
advertising is.

II. BACKGROUND AND LITERATURE REVIEW

Many companies have had difficulty basing their competitive advantage on the
functional aspects of their products, and have begun to rely on more emotional advertising to
attract consumers' attention. Review of the literature shows that the focus of research has
been in several areas. Research and marketplace findings have identified that the consumers'
emotional response toward a brand and/or ad can be a powerful motivator to purchase,
influence brand recall, and determine brand differentiation (Hazlett & Hazlett, 1999). Today,
creators of advertisements are using an array of sensory images including computer graphics,
music, drama, and emotion to grab the attention of the viewer. Some of the most common
emotional appeals focus on humor, fear, and self-idealization. Zeitlin and Westwood (1986)
emphasized the use of fear as a motivator to influence consumer response to advertising
messages. The fear appeal can range in intensity from mild to severe. That research
suggests that, in order to be effective, a fear-based message has to be followed by a reasonable
solution that the advertised product or service can provide. Although some advertisers view
the use of humor in advertising as critical to getting consumers to attend to messages, research
tends to temper that judgment. A study by Kover, Goldberg, and James (1995) indicates that
the humorous side of the message may result in the loss of the product message. The
investigation also indicated that ads based on consumers' desire to accomplish personal
enhancement tend to be highly effective.

Another stream of research is aimed at comparing responses to emotional advertising
across different countries (Chan, 1996; Morris, 1995; De Pelsmacker and Geuens, 1998;
Huang, 1998). The ultimate goal of any advertisement is to create strong linkages between
the brand and the viewer. The advertisement should convince viewers that the brand is
relevant to them, should influence how good they feel about buying and using the brand, and
should influence their predispositions toward purchasing the brand (Kamp & MacInnis,
1995). Past advertising research has often focused on conscious, deliberate, and rational
processing of product information, but in actuality, consumers are often unaware of what
elements of an ad or attributes of a brand influenced their choice (Hazlett & Hazlett, 1999).
Most processing of advertising messages is subconscious, implicit, and intuitive (Hazlett &
Hazlett, 1999). Based on those assumptions, emotion is primarily a motivator of consumer
behavior, and the affect attached to the ad or brand may play a more critical role in an ad's
effectiveness than attitudes or thoughts about the brand (Hazlett & Hazlett, 1999). Another
stream of research can be described as the modeling approach: a comprehensive attempt to
explain how emotional advertising works and what underlying forces shape this process
(Stout and Leckenby, 1986; Kamp and MacInnis, 1995).

The focus of this paper was an investigation of three models that explain the role of
emotions in advertising response. Each model is explained in detail, followed by its results.
The second part compares and discusses the research methodology used in developing the
emotional models of advertising. The final portion of the paper reflects on what the models
contributed to the understanding of this topic.

III. MODELS OF EMOTIONAL RESPONSES TO ADVERTISING

Stout and Leckenby (1986) developed a multidimensional typology of how viewers
emotionally react to advertisements. Their model is organized by consumers' reactions to
emotional advertisements. The authors define emotions as "a response to some
psychologically important event, real or imagined, past or anticipated. It exhibits valenced
feelings occurring as reactions to self-relevant events" (Stout and Leckenby, 1986).
Therefore, an individual's ability to respond to an advertisement varies depending on his or
her ability to make “self-relevant” connections to the advertisement.

Stout and Leckenby’s study found that respondents who had an experiential emotional
response had better attitudes towards the ad itself and the brand, greater purchase intent,
better brand recall, and more ad content playback. Respondents who had an empathic
emotional response had more brand recall and ad content playback. Respondents who had a
descriptive emotional response had a better brand attitude, more purchase intent, and more ad
content playback. These results suggest that consumers experiencing an emotional response
have better brand recall, ad content playback, attitudes toward the brand, and purchase
intentions.

However, a follow-up article by a different group of authors (Page et al., 1988)
addressed some of the findings in this study. They questioned the validity of the constructs
(descriptive, empathic, and experiential responses to advertising) used in the Stout and
Leckenby article, as well as the research findings indicating a strong connection between
emotional advertising and its effectiveness.

IV. THE KAMP AND MACINNIS MODEL

Kamp and MacInnis (1995) developed a slightly different approach to how viewers
respond to emotional advertisements. They determined that what is depicted in the ad also
affects what viewers feel in response to it. They identified two constructs related to the
portrayal of emotions in advertisements: emotional flow and emotional integration. Both
constructs have been shown by Kamp and MacInnis to influence the nature and intensity of
consumers' emotional responses and to affect involvement with the brand, attitudes, self-brand
image, and purchase intentions. Emotional flow can be defined as "the extent to which
emotions portrayed in the advertisement are perceived to change in their nature and/or
intensity during the course of the advertisement" (Kamp & MacInnis, 1995). Advertisements
can vary in emotional intensity from negative to positive and from low to high arousal.

V. THE BURKE AND EDELL MODEL

The Burke and Edell model (1989) focuses more on a viewer's attitude towards
feelings. Burke and Edell believe that understanding a consumer's feelings is as important as
understanding his or her thoughts. The main idea of their model is that feelings generated by
the ad are different from thoughts about the ad, and that both are important and contribute to
explaining the effects of advertising (Burke & Edell, 1989). Thinking and feeling are two
independent evaluation systems; respondents tend to report the emotions they see in the ad,
not how it makes them feel. Three dimensions of feelings were identified: upbeat, warm, and
negative; and three types of judgments were identified: evaluation, activity, and gentleness.
The model does not indicate whether judgments or feelings occur first, or whether judgments
of the ad’s characteristics are made at the same time as evaluations of the brand’s attributes.
Analysis of the effects of feelings on judgments about the ad showed that all three of the
feelings were related to all three of the judgments. Feelings contribute to the judgments
(activity, evaluation, and gentleness) of an ad. The influence of upbeat feelings is generally
positive: ads that produce upbeat feelings are evaluated positively in terms of both evaluation
and activity, and upbeat ads lead to positive brand attribute evaluations and evaluation
judgments. In conclusion, brands with ads that generate upbeat feelings are evaluated more
positively.

The influence of warm feelings works through the evaluation judgment to positively
affect attitudes toward the ad and brand attribute evaluations. Warm feelings have a positive
effect on attitudes towards the brand that occurs through attitudes towards the ad and brand
attribute evaluations. The influence of negative feelings affects attitudes towards brands
directly, through an effect on attitudes towards the ad. For example, if a consumer views an
ad that evokes negative feelings, those feelings are immediately transferred to the brand. All
of the effects of negative feelings, both indirect and direct, are negative. In conclusion,
positive emotions such as warmth and upbeat feelings lead to more positive attitudes,
evaluations, and judgments than negative emotions in ads.

VI. RESEARCH METHODOLOGY AND MODEL COMPARISONS

The three studies required participants to view a series of advertisements and answer
questions following the ads. The studies asked a set of questions that required the participants
to respond to the advertisements themselves, as well as questions about any emotional
responses they may have had. There was a significant difference in the sample sizes; the
probable reason was cost. This conclusion can be drawn from the fact that Stout and
Leckenby's study had the largest sample size (1,498), and they did not offer any money to
their participants. Kamp and MacInnis, and Edell and Burke, had smaller sample sizes but
chose to pay their participants. The Stout and Leckenby and Kamp and MacInnis samples
both had a preponderance of female participants. Those two studies took place in several
malls located in U.S. cities, which may not have been the ideal location for the research
unless the authors had intentionally planned to recruit more females than males.
Unfortunately, the reason for having more female participants was not given in Stout and
Leckenby's study. Kamp and MacInnis reported that their sample was 75 percent female
because the ads used were primarily for products used by females; the products were not
revealed in the study, due to certain legal issues. Edell and Burke's study included 191
respondents from a university campus, with no specific male/female ratio mentioned.

Stout & Leckenby's and Kamp & MacInnis' studies used samples obtained from mall
intercepts in several U.S. cities. Drawing a sample from several cities decreases the
probability of having participants with similar values, ethics, and attitudes. Edell & Burke's
sample was drawn from a university campus.

Figure I. Model Comparisons

Stout & Leckenby
  Similarities: Focus on "self-relevant" events; empathy classifications.
  Differences: Focuses solely on empathy levels.
  Measurements: Positive brand/ad attitudes; greater purchase intention; greater brand
  recall; greater ad content playback.

Kamp & MacInnis
  Similarities: Focus on links created between ad, characters, and viewers;
  character/viewer link (empathy); viewer/brand link ("self-relevant" connection).
  Differences: More in depth than other models, with creation of a character-viewer-ad link.
  Measurements: Same as above.

Burke & Edell
  Similarities: (none listed)
  Differences: Focuses on feelings and judgments; no focus on empathy.
  Measurements: Same as above.

VII. CONCLUSION

The purpose of this paper was to investigate the influence of emotional advertising on
brand/advertisement recall, purchase intentions, and the development of positive attitudes
towards an advertisement and its sponsor. Review and analysis of the literature indicates that
this is a very complex topic and that the results are not always conclusive. It seems that the
bulk of research indicates that emotional advertising works to improve advertising
effectiveness.

Different models were examined. The literature review suggests that emotions play a
significant role in the design of advertising messages.

Different emotions lead to different results in brand recognition, recall, attitudes, and
purchase intent. Non-emotional advertisements lead to the least favorable emotional
reactions; consumers showed less interest in non-emotional ads than in any of the emotional
executions examined. Positive emotions, especially humor, showed the most favorable
results. Humor outperformed all the other emotions with respect to ad recognition, purchase
intent, and attitudes towards the brand, although even this element was not without its
dissenting voices. We can conclude that humor and positive emotions in advertising produce
the most desired advertising outcomes, while warm and negative feelings should be used in
ads only under certain circumstances.

REFERENCES

Chan, Kara K. W. “Chinese Viewers’ Perception of Informative and Emotional
Advertising.” International Journal of Advertising, 15, (2), 1996, 152.
Chaudhuri, Arjun. “Product Class Effects on Perceived Risk: The Role of Emotion.”
International Journal of Research in Marketing, 15, 1998, 157-168.
De Pelsmacker, Patrick, and Geuens, M. “Reactions to Different Types of Ads in Belgium
and Poland.” International Marketing Review, 15, (4), 1998, 277.
De Pelsmacker, Patrick, Dedock, Ben, and Geuens, Maggie. “A Study of 100 Likeable TV
Commercials: Advertising Characteristics and the Attitude Towards the Ad.”
Marketing and Research Today, 27, (4), 1998, 166-180.
Edell, Julie A., and Marian Chapman Burke. “The Power of Feelings in Understanding
Advertising Effects.” Journal of Consumer Research, 14, December, 1987, 421-433.
Fisher, Robert J., Laurette Dubé. “Gender Differences in Responses to Emotional
Advertising: A Social Desirability Perspective.” Journal of Consumer Research., 31,
(4), 2005, 850-859.
Geuens, M. and Patrick De Pelsmacker. “Feelings Evoked by Warm, Erotic, Humorous or
Non-Emotional Print Advertisements for Alcoholic Beverages."
Academy of Marketing Science Review., 1998, 1.
Geuens, Maggie, De Pelsmacker, Patrick. “Affect Intensity Revisited: Individual Differences
and the Communication Effects of Emotional Stimuli.” Psychology & Marketing., 16,
(3), 1999, 195-210.
Gunther, Albert C, Thorson, Esther. “Perceived Persuasive Effects of Product Commercials
and Public Service Announcements.” Communication Research., 19, (5), 1992, 574.
Hazlett, Richard L., Sasha Yassky Hazlett. “Emotional Response to Television Commercials:
Facial EMG vs. Self Report.” Journal of Advertising Research., 39, (2), 1999, 7-24.
Huang, Ming-Hui. “Exploring a New Typology of Advertising Appeals: Basic, Versus
Social, Emotional Advertising in a Global Setting.” International Journal of
Advertising., 17, (2), 1998, 145-169.
Kamp, Edward, and Deborah J. MacInnis. “Characteristics of Portrayed Emotions in
Commercials: When Does What Is Shown in Ads Affect Viewers?” Journal of
Advertising Research, 35, (6), 1995, 19-29.
Kover, Arthur J., Stephen M. Goldberg, and William L. James. “Creativity vs. Effectiveness?
An Integrating Classification for Advertising.” Journal of Advertising Research, 6,
November-December, 1995, 29-38.
Martineau, Pierre. Motivation in Advertising. New York: McGraw Hill Book Co., 1957.

Morris, John D. “Observations: SAM: The Self-Assessment Manikin. An Efficient Cross-
Cultural Measurement of Emotional Response.” Journal of Advertising Research, 6,
November-December, 1995, 63-68.
Page, T., Daugherty, P., Erogoly, D., Hartman, D., Johnson, S. D., and Lee, D. “Measuring
Emotional Advertising: A Comment on Stout and Leckenby.” Journal of Advertising,
17, (4), 1988, 49-52.
Stout, Patricia A., John D. Leckenby. “Measuring Emotional Response to Advertising.”
Journal of Advertising., 15, (4), 1986, 35-43.
Stout, Patricia and Roland T. Rust. "Emotional Feelings and Evaluative Dimensions of
Advertising: Are They Related?" Journal of Advertising., 22 , 1993, 61-71.
Zeitlin, David and Richard A. Westwood. "Measuring Emotional Response." Journal of
Advertising Research., 5, October-November, 1986, 34-44.

PICK A FLICK: MOVIEGOERS’ USE AND TRUST OF ADVERTISING
AND UNCONTROLLED SOURCES

Thomas Kim Hixson, University of Wisconsin – Whitewater
HixsonT@uww.edu

ABSTRACT

Moviegoers have a variety of advertising and uncontrolled sources to use for information
about movies, but which do they prefer? One hundred seventy-five moviegoers were
surveyed on the frequency of use, level of trust, and level of utility they have for a variety of
movie information sources. Word-of-mouth communication and advertising sources that
provide a “sample” of the movie were the most used, most useful, and most trusted in helping
moviegoers choose a movie. Surprisingly, Internet movie information sources were little
used, little trusted, and not regarded as very useful.
trusted, and not regarded as being very useful.

I. INTRODUCTION

Movie consumers can use a variety of sources to help them select a movie to attend.
Some sources, such as advertising, are controlled by movie marketers, but other sources, such
as word-of-mouth are not under their control. Scholars, marketers and industry pundits,
however, have disagreed about which sources are most effective in influencing a moviegoer’s
decision about what movie to attend. The advent of the Internet has movie advertisers
reevaluating the value of traditional media (Galloway, 2003). Some movie marketers believe
that controlled (advertising) sources are becoming more important due to the necessity of a
strong opening weekend for a movie (Kuklenski, 2004) as opposed to letting uncontrolled
word-of-mouth “buzz” spread. This study will examine which sources are most trusted, most
useful, and most often used in movie selection. It also will examine which sources are most
often consulted for screening times and locations. Furthermore, comparisons will be made of
the use of advertising and other sources by moviegoers with Internet access and those with no
access.

II. LITERATURE REVIEW

The movie marketplace sees an average of nine new movies released each week
(Motion Picture Association, 2005, p. 12). Movie advertisers want to make their movie the
one their target audience wants to see this week. Achieving this goal usually requires massive
amounts of advertising. Indeed, a positive relationship has been found between advertising
expenditures for a movie and its opening week box-office success (Elberse & Eliashberg,
2003). The costs of advertising and promoting a movie have risen from an average of $13.9
million in 1994 to $30.6 million in 2004 (Motion Picture Association, 2005, p. 20).

Controlled (Advertising) Sources. More money is spent on television advertising
than any other medium used for advertising movies (Motion Picture Association, 2005).
However, one study found that only 40% of moviegoers recalled they heard about a movie
from television commercials (Klady, 1994). With as many as 25 movies advertised on
television weekly, a cluttered media environment has developed for movie advertisers
(Galloway, 2003). Although used occasionally, radio and magazine advertising are not
known as powerful media for advertising movies.

Movie trailers have been found to be a powerful media source in attracting people to a
movie (Faber & O’Guinn, 1984; Hixson, 2000). Conversely, two movie industry studies
found that less than 10% of moviegoers are made aware of a movie through trailers (Klady,
1994). Movie trailers can be targeted to moviegoers based on their behavior of attending
movies at that specific theater (Goodale, 1998) and also on their movie genre preferences
(Hixson, 2005).

Movies were once heavily advertised in newspapers. Now, many movie executives
believe that newspaper ads are no longer useful in motivating people to attend a particular
movie (Dotinga, 2001) but are useful in providing the movie location and time. The telephone
is used to advertise movies and theaters as moviegoers have traditionally telephoned theaters
for show times and locations. Moviefone is the largest interactive movie guide/ticketing
service, with telephone and online services. The increase in advertising spending per movie on
the Internet from $168,000 in 2000 to $735,000 in 2004 (Motion Picture Association, 2005, p.
21) seems logical, as heavy computer usage has been associated with a greater use of movies
(Robinson, Barth & Kohut, 1997). Also, young adult moviegoers depend more on the Internet
than other movie information sources (Friedman, 2002). Despite the convenience, and the
Internet’s penetration into many households, people are used to accessing newspapers to find
movie times and locations. Therefore, the following hypotheses and research question are
generated:
H1: Moviegoers will trust trailers in the theater more than other sources to give them a
true sense of what a movie is really like.
H2: Moviegoers will consult newspapers more often than other sources to learn movie
locations and times.
RQ1: Is there any difference in the use and trust of movie advertising sources by
those who have Internet access and those who do not?

Uncontrolled (Non-Advertising) Sources. Several early studies agreed on the
importance and effectiveness of word-of-mouth in marketing movies (Faber & O’Guinn,
1984; Austin 1988); however, two more recent public opinion polls disagree with each other.
One found word-of-mouth is an important source of movie information and in “selling”
moviegoers a movie (Friedman, 2002). However, another poll found that only 6% of
moviegoers are made aware of a movie by word-of-mouth sources (Klady, 1994). With
today’s national saturation release effort, movie marketers cannot rely on word-of-mouth to
get moviegoers into theaters the first weekend of a movie’s release. Zufryden (1996)
suggests, however, that when positive word-of-mouth is expected, a studio can reduce its
advertising expenditure.

Movie studios release press kits and value the publicity that can be generated. An
online poll found, however, that publicity is not especially valuable in “selling” a movie
(Friedman, 2002). Movie reviewers/critics play a publicity role and other roles in their
relationship with moviegoers. Included are creating awareness, assessing entertainment value
and providing movie information (Austin, 1988). However, movie reviewers are of little
value to most moviegoers (Friedman, 2002). Based on this information the following
hypotheses are generated:
H3: Moviegoers will find word-of-mouth to be more useful than other sources in
making a movie attendance decision.
H4: Moviegoers will use word-of-mouth more often than other sources in making a
movie attendance decision.

III. METHODOLOGY

A self-administered survey questionnaire was used. The survey was distributed in a
multiplex movie theater in a mid-sized city in the Midwest over a three-weekend period. It
consisted of 56 items and took approximately five minutes to complete. Participants
completed the survey on site and were selected using the systematic sample method. Every
seventh person leaving the theater was asked to participate.
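The selection procedure described above (approaching every seventh person leaving the theater) is a simple systematic sample. A minimal sketch, using hypothetical patron names rather than the study's participants:

```python
# Systematic sampling: select every k-th element of a stream.
# The patron names below are illustrative, not the study's data.
def systematic_sample(stream, k=7, start=0):
    """Yield every k-th element of an iterable, beginning at index `start`."""
    for i, person in enumerate(stream):
        if i >= start and (i - start) % k == 0:
            yield person

# 30 hypothetical people leaving the theater; approach every seventh one
patrons = [f"patron_{i}" for i in range(1, 31)]
selected = list(systematic_sample(patrons, k=7, start=6))
print(selected)  # ['patron_7', 'patron_14', 'patron_21', 'patron_28']
```

Unlike a simple random sample, systematic selection requires no sampling frame in advance, which makes it practical for intercept surveys of this kind.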

The instrument consisted of six banks of items: (1) movie viewing habits; (2) the degree
to which respondents trusted a source of information to give them a “true sense of what a
movie is really like”; (3) the degree to which respondents found a source of information
useful in helping them decide which movie to attend (Cronbach’s alphas for banks 2 and 3
were .832 and .848, respectively); (4) how often respondents used a source to help them
decide which movie to attend; (5) how often respondents consulted a source to find a show
time and location of a movie already selected (Cronbach’s alphas for banks 4 and 5 were
.839 and .581, respectively); and (6) demographic information, including computer
ownership and use.
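The reliability figures above are Cronbach's alpha coefficients. A minimal sketch of the computation, using simulated 7-point Likert responses (the data are hypothetical, not the survey's):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items matrix of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the bank
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulate 200 respondents answering a 5-item bank driven by one shared trait,
# so the items are internally consistent and alpha should be high
rng = np.random.default_rng(0)
trait = rng.integers(1, 8, size=200).astype(float)
bank = np.clip(np.round(trait[:, None] + rng.normal(0, 1, size=(200, 5))), 1, 7)
alpha = cronbach_alpha(bank)
print(round(alpha, 3))
```

Alphas near the reported .83–.85 indicate that the items in a bank measure a common construct; the .581 reported for bank 5 would conventionally be regarded as weak internal consistency.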

IV. RESULTS

Participating in the survey were 175 moviegoers. Nine out of ten participants were
Caucasian. The mean age was 39.8, with 31% under age 30. Females accounted for 51% of
the participants. Three out of ten (31.4%) participants reported an income of more than
$70,000 per year, while 32.8% reported incomes of less than $40,000 per year. Six out of ten
(61.7%) had no college degree, while 14.9% held a graduate degree. One hundred fifty-two (87%)
had Internet access either at home, work or school.

One-sample t-tests were used to compare the means in testing hypotheses 1 – 4. The
test statistic used for each set of variables was the largest mean for each set. Hypothesis 1 was
not supported: participants trusted word-of-mouth most to provide them the truest sense of
what a movie is like (Table I). Hypothesis 3 was supported: word-of-mouth was also the
source of information most useful in helping the participants decide which movie to
attend (Table II). Hypothesis 4 was also supported: word-of-mouth was used more often
than other sources to help participants make a movie attendance decision (Table III).
Hypothesis 2 was also supported: participants consulted newspaper ads and listings most often to
learn the location and time of a movie they wanted to see (Table IV).
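The procedure above can be sketched with a one-sample t-test: each source's ratings are tested against the largest mean in the set (word-of-mouth). The ratings below are simulated to resemble the figures reported in Table I; they are not the actual survey responses.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
WOM_MEAN = 5.62  # reference value: the word-of-mouth trust mean (Table I)

# Hypothetical 7-point trust ratings for one competing source (e.g., TV ads),
# simulated around the reported mean of 4.50 and S.D. of 1.77
tv_ads = np.clip(np.round(rng.normal(4.50, 1.77, size=170)), 1, 7)

t, p = stats.ttest_1samp(tv_ads, popmean=WOM_MEAN)
print(f"t = {t:.2f}, p = {p:.4g}")  # a negative t: the source is trusted less
```

A significant negative t, as in the tables, indicates the source's mean trust falls reliably below the word-of-mouth benchmark.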

For research question 1, only three significant differences were found when comparing
moviegoers with Internet access and those without in how often those sources (listed in Table
IV) were consulted for movie locations and show times. One was Moviefone. Those with
Internet access (mean = 2.33) consulted it more than moviegoers with no access (mean = 1.40),
(F(1, 162) = 4.31, p < .05). Not surprisingly, the two other sources that had significant
differences were found to be online information sources: theater/chain websites and a
movie’s website. Noteworthy is that no significant differences were found when comparing
moviegoers with Internet access and those with no access in how often newspaper ads/listings
and phoning a theater were consulted for movie locations and show times. As these two
sources have long been used by most people for this purpose, we can surmise that media use
habits are hard to break.

TABLE I
TRUST OF THE SOURCE TO PROVIDE A “TRUE SENSE” OF A MOVIE

Source             n    mean  S.D.    t
Word-of-mouth      171  5.62  1.59    ---
Theater trailers   173  4.53  1.78   -8.04
TV ads             170  4.50  1.77   -8.24
Clips/TV shows     172  4.37  1.71   -9.62
DVD/vid trailers   169  4.26  1.93   -9.15
Critics TV         169  3.67  1.80  -14.11
Critics nsp/mag    166  3.66  1.72  -14.69
Internet trailers  161  3.42  1.76  -15.90
Movie website      161  3.40  1.95  -14.48
Radio Ads          166  3.33  1.57  -18.84
Magazine Ads       168  3.31  1.53  -19.60
Newspaper Ads      162  3.16  1.69  -18.56
note: All t-values significant (p<.001) in comparison of means to the
word-of-mouth mean in a one-sample t-test. 7-point Likert-type scales were
used with 1 = Not a true sense, 7 = A very true sense.

TABLE II
USEFULNESS OF INFORMATION SOURCES IN MAKING A MOVIE ATTENDANCE DECISION

Source             n    mean  S.D.    t
Word-of-mouth      169  5.40  1.76    ---
Theater trailers   173  4.94  1.65   -3.66
DVD/vid trailers   167  4.50  1.93   -6.06
Clips/TV shows     169  4.36  1.90   -7.14
TV ads             170  4.35  1.75   -7.82
Critics nsp/mag    169  3.69  1.81  -12.31
Critics TV         168  3.67  1.73  -12.95
Newspaper Ads      170  3.56  1.63  -14.17
Radio Ads          173  3.38  1.69  -15.75
Internet trailers  160  3.26  1.81  -14.96
Magazine Ads       168  3.18  1.74  -16.51
Movie website      164  3.01  1.84  -16.59
note: All t-values significant (p<.001) in comparison of means to the
word-of-mouth mean in a one-sample t-test. 7-point Likert-type scales were
used with 1 = Not very useful, 7 = Very useful.

TABLE III
HOW OFTEN INFORMATION SOURCES ARE USED IN MAKING A MOVIE ATTENDANCE DECISION

Source             n    mean  S.D.    t
Word-of-mouth      168  5.34  1.80    ---
Theater trailers   168  4.45  1.83   -6.34
DVD/vid trailers   167  3.96  2.09   -8.52
Clips/TV shows     168  3.91  2.03   -9.14
TV ads             170  3.76  1.97  -10.50
Critics TV         170  3.72  1.86  -11.32
Critics nsp/mag    168  3.45  1.92  -12.78
Newspaper Ads      168  3.08  1.81  -16.14
Radio Ads          169  2.92  1.88  -16.64
Magazine Ads       167  2.74  1.68  -19.95
Internet trailers  166  2.59  1.88  -18.86
Movie website      167  2.24  1.67  -23.95
note: All t-values significant (p<.001) in comparison of means to the
word-of-mouth mean in a one-sample t-test. 7-point Likert-type scales were
used with 1 = never, 7 = always.

TABLE IV
HOW OFTEN INFORMATION SOURCES ARE CONSULTED FOR MOVIE LOCATION AND SHOW TIMES

Source              n    mean  S.D.    t
Nsp Ad/Listing      167  4.30  2.28    ---
Phone Theater       165  3.58  2.36   -3.93
Moviefone/online    167  2.53  2.09  -10.99
Moviefone           164  2.21  1.89  -14.15
Th’ter/chain ’site  163  2.15  1.90  -14.45
Movie website       163  1.86  1.54  -20.20
Moviefone/Email     165  1.63  1.49  -23.07
note: All t-values significant (p<.001) in comparison of means to the Nsp
Ad/Listing mean in a one-sample t-test. 7-point Likert-type scales were used
with 1 = never, 7 = always.

There were significant differences in two sources of information in “trust in
the source to provide a true sense of what the movie is really like” when the means
of moviegoers with Internet access and those with no access were compared. Those with
Internet access (mean = 3.57) trusted trailers on the Internet more than moviegoers with no
access (mean = 2.06), (F(1, 159) = 11.19, p < .001). Likewise, those with Internet access (mean
= 3.52) trusted the movie’s website more than those with no access (mean = 2.39), (F(1, 159)
= 5.56, p = .019). Those moviegoers with Internet access (mean = 5.05) were significantly more
likely to use trailers in a theater “to help make a decision about whether to attend a particular
movie” than those moviegoers with no Internet access (mean = 4.23), (F(1, 171) = 4.87, p =
.029). Only one significant difference was found when comparing those who have Internet
access (mean = 3.33) and those who have no access (mean = 4.30) in their use of newspaper
critics in “how often a source is used to decide which movie to see” (F(1, 166) = 4.58, p = .034).

V. CONCLUSION

Word-of-mouth is the source most trusted, most useful and most used by moviegoers
in helping them decide which movie to attend. These findings support the findings of earlier
studies (Faber & O’Guinn, 1984; Austin, 1988), but contradict at least one industry poll
(Klady, 1994) and seem to be at odds with the movie industry’s practice of opening movies in
wide release, which requires a heavy advertising effort. More in line with industry practice,
the sources that provide moviegoers a “sample” of the movie, such as trailers in the theater,
television ads, movie clips on television programs, and trailers on DVD/video, ranked next on
these three variables. Interestingly, among the least useful and least used sources were trailers
on the Internet, which do provide a “sample” of the movie. Moviegoers’ less than enthusiastic
use of this medium may be explained by the fact that many Internet trailers are displayed in a
very small window within the computer screen.

As predicted, moviegoers consulted newspaper ads/listings most often to learn movie
locations and show times once a movie selection had been made. The next most consulted
source was telephoning the theater. Apparently, old habits are hard to break. Although this
sample had a higher level of access to the Internet than the rest of the nation, they were not
using it to find where a movie was playing or what time it begins. Going online and being
notified by email were the least consulted sources.

In fact, a surprising finding was how little a movie’s website was trusted, used, and
found useful. This sample had a higher percentage of Internet access (87%) than the U.S.
adult population (63%) and a higher percentage of households with the Internet (82%) than
the U.S. population (55%) (U.S. Census Bureau, 2004); yet, despite the convenience of the
Internet for movie information, these participants used it very little. Another interesting
finding was that moviegoers with no Internet access made significantly greater use of
newspaper critics/reviewers than those with Internet access. Considering the convenience of
the Internet alongside its low use for information access in this study, it would seem that
those with no Internet access had a greater need for information. They gratified this need
more than those with Internet access, who seemingly have information at their mouse-
manipulating fingertips.

There are several implications of this study for the movie industry. Because of current
market conditions, movie marketers often cannot have a limited release of a movie to enable
word-of-mouth, the most trusted, useful, and used source in this study, to spread. Therefore,
they should continue to provide moviegoers a “sample” of the movie as much as
possible as these advertising sources were the next most trusted, useful and used sources.
Movie marketers should also emphasize their websites more in traditional advertising to drive
Internet users to their movie’s website. This online activity needs to be emphasized to train
moviegoers to seek out movie information from the Internet. Perhaps, the low use of the

Internet for movie information is due to the several locations on the Internet where
information might be found.

REFERENCES

Austin, Bruce A. “Which Show to See?” Boxoffice, 124. Oct., 1988, 57-66.
Dotinga, R. “Film Strip.” Editor & Publisher. Jan. 15, 2001, 19-25.
Elberse, Anita and Eliashberg, Jehoshua. “Demand and Supply Dynamics for Sequentially
Released Products in International Markets: The Case of Motion Pictures.” Marketing
Science, 22, (3), 2003, 329-354.
Friedman, Wayne. “Trailers Lead the Way to Some Films.” Ad Age, 73, May 27, 2002, 43.
Faber, Ronald J. and O’Guinn, Thomas C. “Effect of Media Advertising and Other Sources on
Movie Selection.” Journalism Quarterly, 61, 1984, 371-7.
Galloway, Stephen. “A Tangled Web.” Hollywood Reporter, 378, May 20, 2003, S-1.
Goodale, Gloria. “Coming Attractions May Not Be Suitable for Children.” Christian Science
Monitor, 90, n133, June 5, 1998, b6.
Hixson, Thomas K. The Effects of Motion Picture Trailers as an Advertising Medium on
Moviegoers’ Expected Gratifications. Unpublished doctoral dissertation, SIU-
Carbondale, 2000.
Hixson, Thomas K. “Targeting Movie Audiences Through Behavior and Preference
Segmentation.” Business Research Yearbook, 12,(1), 2005, 81-85.
Klady, Leonard. “Tyranny of TV Still Governs Movie Choices.” Variety, 354, June 27,
1994, 1.
Kuklenski, V. “Believe the Hype.” The Daily News of Los Angeles, June 3, 2004, u4.
available:
http://web.lexisnexis.com/universe/document?_m=332cb68b523fe3b0ad651f0
Motion Picture Association of America “U.S. Entertainment Industry: 2004 MPAA Market
Statistics.” available through www.mpaa.org, 2005.
Robinson, John P., Barth, K., and Kohut, A. “Social Impact Research: Personal Computers,
Mass Media, and Use of Time.” Social Science Computer Review, 15, n1, Sp, 1997,
65-82.
U.S. Census Bureau. “Internet Access and Usage and Online Service Usage: 2003.” Statistical
Abstract of the United States: The National Data Book, 1156. available at:
www.census.gov/statab, 2004.
Zufryden, Fred S. “Linking Advertising to Box Office Performance of New Film Releases: A
Marketing Planning Model.” Journal of Advertising Research, July/August, 1996.
29-41.

WHEN WEB PAGES INFLUENCE WEB USABILITY

Alex Wang, University of Connecticut


alex.wang@Uconn.edu

ABSTRACT

The purpose of this study is to examine the effects of strategic communication in the
context of web usability by comparing single and multiple publicity and advertising messages.
This study conducted a 4-condition experiment comparing the effects of exposure to a single
publicity article, a single ad, similar messages from a publicity article and an ad, and varied
messages from a publicity article and an ad, with 325 participants. The results suggested that
when similar or varied advertising and publicity messages were integrated and linked in a
website, the message effects in either condition operated similarly to repetition effects as the
messages generated more positive effects on web usability, measured by attitude toward the
website, purchase intention, communication sharing, and future website use, than a single
message condition, especially the publicity article.

I. INTRODUCTION

Usability is a valuable theoretical construct suggesting that consumers’ overall
assessments of their experiences of using a website are highly related to messages’ effects. In
other words, consumers evaluate a website based on information usability of a website,
involving inter-related processes, cognitive processing and attitude formation (Hallahan,
2001). Wang (2005) empirically examined the relationship between consumers’ cognitive
processing toward identical product information featured in an ad and an article and web
usability within the context of online shopping. As acknowledged in its limitations section,
however, his study examined only one condition. Different conditions such as
similar and varied messages featured in advertising and publicity were not empirically tested
and could provide comparative results to extend this type of research on web usability. Thus,
the purpose of this study is to further compare the effects of single and multiple messages on
web usability.

II. LITERATURE REVIEW

The present research seeks to study priming effects in the context of multiple messages
on web usability by comparing priming to repetition effects. Research has provided different
perspectives on multiple messages’ effects and repetition effects. Mere repetitions, generally
considered as an opportunity enhancement strategy, facilitate learning and contribute to liking
a message since multiple exposures increase consumers’ familiarity with the messages,
leading consumers to increase their positive associations with the messages (Harkins and
Petty, 1987). The priming literature suggests that the first message has the potential to frame the
issue in a particular manner, orienting consumers to interpret the second message in a manner
consistent with the frame in the first message (Domke, Shah, and Wackman, 1998). Focusing
on the case of similar messages, the first, priming message may enhance the effectiveness of
the subsequent message. Alternatively, the two messages may merely have the same effect as
viewing the second message alone would have had. Thus, multiple but similar messages may
work through increased familiarity to enhance liking of the messages. For instance, Moorthy
and Hawkins (2003) found consumers liked an ad more after they saw it repetitively.

In addition to merely repeating the same message, presenting information in different
sources can stimulate thinking (Anand and Sternthal, 1990). Hearing about a subject from two
or more independent sources, or hearing different executions of the same message with the
same theme or story line can result in greater cognitive effort, more elaboration, and higher
learning (Harkins and Petty, 1987). Following a priming explanation for varied messages, a
prime framed in a particular way sets up certain evaluative criteria consistent with the frame
and enhances the effectiveness of a subsequent and different message beyond what happens
from mere repetition. In the same way, viewing different publicity and advertising messages
may be more effective than seeing the same ad or the same article twice. Thus, integrating
advertising and product publicity to carry varied messages may increase consumers’ positive
attitudes toward the messages. This proposition echoes the concept behind synergy in strategic
communication, a coordination of messages for delivering more impact (Moriarty, 1996).
“This impact is created through synergy-the linkages that are created in a receiver's mind as a
result of messages that connect to create impact beyond the power of any one message on its
own” (Moriarty, 1996, page 333). This study builds on the previous research (Wang, 2005) to
study the effectiveness of integrating advertising and product publicity messages on
consumers’ perceived web usability.

Web usability research suggests that evaluations of web usability include attitude
toward a website (ATW), purchase intention, communication sharing with others, and future
use of a website (Hallahan, 2001, Wang, 2005). This study uses these variables as indicators
of individuals’ behaviors beneficial to advertisers since ultimate measures of successful web
relationship-building are consumers’ long-term engagements in behaviors that help advertisers
achieve their financial goals (Ba and Pavlou, 2002; Gefen, Karahanna, and Straub, 2003). In
sum, this study tests two hypotheses and a research question.

H1: Exposure to varied advertising and publicity messages will have a better effect on
consumers’ (a) ATW, (b) purchase intentions, (c) communication sharing, and (d)
future website uses than exposure only to either advertising or publicity messages.

H2: Exposure to similar advertising and publicity messages will have a better effect on
consumers’ (a) ATW, (b) purchase intentions, (c) communication sharing, and (d)
future website uses than exposure only to either advertising or publicity messages.

RQ: Are there significant differences between the varied and similar messages
conditions regarding consumers’ ATW, purchase intentions, communication sharing,
and future website uses?

III. METHOD

A 4-group, planned-comparison design was employed as participants were randomly assigned
to one of the four conditions. The four conditions included: (1) publicity only condition; (2)
advertising only condition; (3) similar advertising and publicity condition; and (4) varied
advertising and publicity condition. More than 500 participants were recruited from tennis and
regular classes taught at two large U.S. universities. A total of 325 participants eventually
took part in this study, and their responses were recorded and analyzed. As an incentive for
their participation, either a class assignment was waived or extra credit-point was rewarded
when they completed the study.

All participants received an instruction booklet including a questionnaire. The first
page of the booklet informed them that the principal investigator was interested in their
evaluations of a website and a tennis racquet featured in the website. The second page of the
booklet informed them that they would follow the instructions and review specific
information about the tennis racquet. An ad and an article were selected as two
communication forms presenting advertising and product publicity. The article was embedded
into Tennis Magazine’s website and shown in the center when participants clicked on the
article link. The same situation applied to the ad when participants clicked on the ad link
except the ad was not embedded into any third-party organizations’ websites. No participants
had heard of or bought the tested racquet before.

Participants logged on to a website that featured one link in either condition 1 or
condition 2. A link led participants to review a publicity article featured in Tennis Magazine’s
website in condition 1 and an ad in condition 2 respectively. In either similar or varied
messages conditions, participants logged on to a website that featured two links: each link led
them to review an ad and an article featured in Tennis Magazine’s website. In the similar
messages condition, advertising and publicity messages were similar and featured the tennis
racquet with superb power. In the varied messages condition, an ad featured the tennis racquet
with superb power, whereas an article featured the tennis racquet with superb control. Since
two different features of the tennis racquet could create a situational factor that participants
might favor superb power over superb control or vice versa, two features of superb power and
superb control were counterbalanced in the ad and the article. Moreover, since placement of
the ad and the article links could create a situational factor that participants would click on
whichever was on the top, the placements of two links in condition 3 and 4 were also
counterbalanced. Consequently, condition 1, 2, 3, and 4 had 51, 53, 110, and 111 participants
respectively. Participants read the information at their own pace. After completing the
reading, they were asked to complete the final set of questionnaires including their specific
and overall evaluations of the website and the tennis racquet.

Participants’ attitudes toward the website were measured by asking the participants
about “how the website is for buying tennis products” where 1 indicated ‘not a good website
at all’ and 7 indicated ‘an extremely good website’ (Chen and Wells, 1999). Two questions
were asked about whether the participants would “recommend the racquet to a friend” and
“tell another friend about the racquet featured in the website” where 1 indicated ‘not likely at
all’ and 7 indicated ‘extremely likely.’ These two questions measured participants’
communication sharing with others. Finally, participants’ purchase intentions were measured
by asking the participants whether they would buy the racquet. Participants’ future website
uses were measured by asking the participants whether they would ‘use the website for future
information search about tennis products’ and ‘return to the website.’ Likert-type scales were
used to measure participants’ purchase intentions and future website uses yielding scores
ranging from 1 (extremely disagree) to 7 (extremely agree) for each.

IV. RESULTS

Table I documents the main constructs measured in this study. All available Cronbach’s α
values were larger than .73. To test for main effects, a MANCOVA procedure was
used with participants’ ATW, purchase intentions, communication sharing, and future website
uses as the dependent variables. Message manipulation (single ad, single publicity, similar ad
and publicity, and varied ad and publicity) was used as the independent variable. The results
showed that there was a main effect for message manipulation, Wilks’ λ = .91, F (4, 311) =
6.57, p = .000, η2 = .08; the mean vectors were not equal and the set of means among

conditions were different. Thus, statistically significant differences in conditions with respect
to the dependent variables were established.

Table I. Dependent Variables

Construct                    Item  Mean  S.D.  Cronbach’s α
Attitude toward the website   1    3.99  1.08      N/A
Purchase intention            1    4.80  1.63      N/A
Communication sharing         2    4.94  1.40      .73
Future website use            2    4.74  1.65      .82

The tests of between-participants effects based on the individual univariate tests are
reported in Table II. H1 and H2 were partially supported, as effects of message
manipulation were established for three of the four dependent variables: PI, F (3, 312) =
3.17, p = .025, η2 = .03; communication sharing, F (3, 312) = 6.13, p = .001, η2 = .06; and
future website use, F (3, 312) = 5.83, p = .001, η2 = .05. No effect was established for
ATW, F (3, 312) = 1.65, p = .178, η2 = .016.
Several post hoc tests further disclosed that the participants exposed to varied messages (M =
5.25, SD = 1.07) exhibited higher inclination of communication sharing than the participants
exposed only to the article (M = 4.32, SD = 1.8, p = .000) and the ad (M = 4.76, SD = 1.44, p
= .005). The participants exposed to similar messages (M = 5, SD = 1.38) had higher
inclination of communication sharing than the participants exposed only to the article (M =
4.32, SD = 1.8, p = .002). Moreover, the participants exposed only to the article (M = 3.97, SD
= 1.78) had lower inclination of future website use than the participants exposed only to the ad
(M = 4.66, SD = 1.55, p = .03), similar messages (M = 4.83, SD = 1.52, p = .003), and varied
messages (M = 4.8, SD = 1.55, p = .000).

Table II. Tests Of Between–Participants Effects


Source Dependent Variable F (3, 312) p η2
Message manipulation Purchase intention 3.169 .025 .03
Future website use 5.830 .001 .053
Communication sharing 6.129 .000 .056
Attitude toward the website 1.648 .178 .016

The research question queried whether significant differences between the varied and
similar messages conditions would materialize on participants’ ATW, purchase intentions,
communication sharing, and future website uses. The results revealed that there were no
differences between the varied and similar messages conditions on participants’ ATW,
purchase intentions, communication sharing, and future website uses.
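As a rough stand-in for the univariate follow-up tests reported above, a one-way ANOVA can compare a single dependent variable across the four message conditions. The scores below are simulated from the reported communication-sharing means, standard deviations, and cell sizes; they are illustrative, not the study's data, and the sketch omits the covariates implied by the MANCOVA.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulated communication-sharing scores per condition (means, SDs, and ns
# taken from the reported post hoc comparisons)
conditions = {
    "publicity only":   rng.normal(4.32, 1.80, size=51),
    "ad only":          rng.normal(4.76, 1.44, size=53),
    "similar messages": rng.normal(5.00, 1.38, size=110),
    "varied messages":  rng.normal(5.25, 1.07, size=111),
}
f, p = stats.f_oneway(*conditions.values())
df_within = sum(len(v) for v in conditions.values()) - len(conditions)
print(f"F(3, {df_within}) = {f:.2f}, p = {p:.4f}")
```

A significant omnibus F, as reported for communication sharing, licenses the pairwise post hoc comparisons the paper then describes.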

V. CONCLUSION

The most striking findings in this study were those that illustrated the effects of
exposure to similar or varied multiple messages: the varied or similar messages had
significant effects on participants’ purchase intentions, communication sharing, and future
website uses but not on their ATW. In particular, exposure to similar versus varied messages
resulted in no difference in levels of ATW, purchase intention, communication sharing, and future
website use. Moreover, publicity messages seemed to generate the weakest effect on purchase
intention, communication sharing, and future website use. It seemed that participants were not
overly concerned with the publicity and thus the priming effect did not materialize for varied
advertising and publicity condition. In other words, varied and similar advertising and

79
publicity messages affected participants in the same way as repetition effects occurred. These
results might suggest product review articles that provided endorsements of products based on
industry research sources and advertising might not exude either significant usability or
relevance to those consumers whose evaluations of a website were influenced by a host of
other factors that could produce stronger influence on their ATW. For instance, payment
method, security, or interactivity of a website could all have a strong impact on consumers’
ATW.

Another possible explanation for the article’s weak influence lies in the unregulated
nature of Internet publishing. Most consumers can receive product news from various online
sources, including a growing number of blog-affiliated sites. Given the ease of accessing this
multiplicity of product news online, which at times provides contradictory information,
consumers may not perceive product information or news from many of these sources as highly
credible. This concern is heightened by the recent proliferation of online articles with little
or no sourcing information. The editorial independence of publicity from advertisers in this
multi-sourced product information environment may likewise be perceived as dubious. Because
consumers often receive multiple product reviews, their capacity for information processing
may be overburdened, and they may simply discount the significance of the product publicity.

In addition to providing theoretical implications for extant research propositions, this
study contributes by extending strategic communication theories to the Internet context.
Given the nature of the Internet, integrating various communication forms is far more
feasible than in traditional media: interactive advertising, news clips, editorial
content, and customer feedback can all be integrated into advertisers’ websites. Today's
selective consumers require greater assurance that the source of information is credible.
Obtaining credible product publicity from an objective third party, such as a major media
outlet or professional trade organization, gives advertisers the credibility they need to be
believed by consumers. Although product publicity is seldom controlled by advertisers, this
does not mean advertisers cannot integrate available, positive product publicity into their
advertising campaigns. Integrating information from various online sources can create a
dialogue that builds strong relationships with existing as well as potential customers, given
the value and role the Internet plays in managing perceptions of advertisers’ products or
services.

Some limitations of this study should be acknowledged. First, the effect sizes were
very small, so the findings should be replicated before firm conclusions are drawn. A second
limitation stems from the laboratory setting: like all forced-exposure laboratory studies, this
one cannot rule out that, under natural viewing conditions, consumers would choose not to read
the ad or the article at all, in which case the observed effects of integration might not
materialize. The study also did not control for participants’ perceptions of the credibility of
the article and the ad, or of the relative credibility of paid advertising versus product
review articles. It is possible that participants perceived product review articles as low in
credibility and therefore dismissed any priming effect on the ad.

Future research could also consider other pairings of communication formats online and
their degree of difference, such as using game playing as the priming activity and an ad as the
message. How similar must a prime and a message be for the pair to have a stronger effect than
simply viewing the same message twice? Are there conditions under which a prime plus a message
will outperform a repetition effect? It is also unknown whether priming plus subsequent ad
viewing or repetition effects last longer, so it would be beneficial for future research to
examine priming and repetition effects over time. In addition, websites vary considerably in
their functionality and in the degree to which their technology can be used adequately. Future
work on designing aspects of web usability, and on the effects of such moderating variables or
their interactions, can further contribute to the ongoing development of web communication
strategy.

REFERENCES

Anand, Punam, and Sternthal, Brian. "Ease of Message Processing as a Moderator of Repetition
	Effects in Advertising." Journal of Marketing Research, 27 (3), 1990, 345-353.
Ba, Sulin, and Pavlou, Paul A. “Evidence of the Effect of Trust Building Technology in
	Electronic Markets: Price Premiums and Buyer Behavior.” MIS Quarterly, 26 (3),
	2002, 243-268.
Chen, Qimei, and Wells, William D. “Attitude toward the Site.” Journal of Advertising
	Research, 39 (5), 1999, 27-37.
Domke, David, Shah, Dhavan V., and Wackman, Daniel B. “Media Priming Effects:
	Accessibility, Association, and Activation.” International Journal of Public Opinion
	Research, 10 (1), 1998, 51-74.
Gefen, David, Karahanna, Elena, and Straub, Detmar W. “Trust and TAM in Online Shopping:
	An Integrated Model.” MIS Quarterly, 27 (1), 2003, 51-90.
Hallahan, Kirk. “Improving Public Relations Web Sites through Usability Research." Public
	Relations Review, 27 (3), 2001, 223-239.
Harkins, Steven G., and Petty, Richard E. "Information Utility and the Multiple Source Effect."
	Journal of Personality and Social Psychology, 52 (2), 1987, 260-268.
Moorthy, Sridhar, and Hawkins, Scott A. “Advertising Repetition and Quality Perceptions.”
	Journal of Business Research, 58 (3), 2003, 354-360.
Moriarty, Sandra E. “The Circle of Synergy: Theoretical Perspectives and an Evolving IMC
	Research Agenda.” In Thorson, Esther and Jeri Moore, eds., Integrated
	Communication: Synergy of Persuasive Voices. Hillsdale, NJ: Lawrence Erlbaum,
	1996, 333-354.
Wang, Alex. “The Effects of Integrating Advertising and Product Publicity on Web
	Usability.” In Adams, Marjorie G. and Abbass Alkhafaji, eds., Business Research Yearbook.
	Slippery Rock, PA: Slippery Rock University, 2005, 50-55.

UNIVERSITY BRAND IDENTITY: A CONTENT ANALYSIS OF
FOUR-YEAR U.S. HIGHER EDUCATION WEB SITE HOME PAGES

Andy Lynch, American University of Sharjah


alynch@aus.edu

ABSTRACT

This exploratory content analysis of 1329 HEI home pages established a snapshot of
current brand identity practices of U.S. four-year HEI categories (National, Liberal Arts,
Masters, and Comprehensive). Findings indicate that the majority of HEI’s spell out their
brand name fully, utilize positioning statements, and incorporate brand symbols on their home
pages. More prominent institutions are more likely to focus attention on their brand name and
traditional academic iconic symbols (crests/shields) while less-established HEI’s aggressively
incorporate positioning statements and modern logos to establish a distinctive brand identity.

I. INTRODUCTION

Marketing higher educational institutions (HEI’s) has become widely accepted by
college and university administrators (Cook & Zallocoo, 1983; Shors, 1996). The demand for
HEI’s to develop distinguishable brands stems from an increasingly competitive marketplace
(Evelyn, 2002) and the need to project a consistent brand identity. HEI brand identity can be
developed through the use of a variety of marketing communication elements – brand name,
positioning statement, and brand symbol.

II. LITERATURE REVIEW

HEI’s have long positioned themselves guided by the four P’s of marketing: product,
price, place, and promotion (McCarthy, 1960). Modern marketing concepts now extend these
four pillars with a consumer focus through a parallel four C’s: consumer, cost, convenience,
and communication (Duncan, 2002). To varying degrees, all product-focused and customer-
focused variables are considered when developing an effective brand position, which should be
done in comparison to the marketing mix of competing services (Palmer, 2005).

Students, parents, faculty, staff, and donors who experience one or more brand
messages are able to form an image of the institution (Braxton, 1979; Kotler & Fox, 1995).
The image portrayed by HEI’s plays a critical role in public perception of that
institution (Yavas & Shemwell, 1996; Landrum et al., 1998). Many elite U.S. “National” and
“Liberal Arts” HEI’s, whose images are still built on academic rigor and the prestige of their
alumni, annually turn away thousands of applicants (Table I) (Boshier et al., 2001). Gutman
and Miaoulis (2003) found that older HEI’s in the United Kingdom (UK) are product-oriented
and focus on marketing their academic products and overall reputation. In contrast, many U.S.
“Masters” and “Comprehensive” HEI’s aggressively market themselves through traditional
media and by investing in highly visible areas of the institution, such as athletics
programs (Table I) (Palmer, 2005). Newer, less established HEI’s in the UK place an emphasis
on selling their service to individual prospective student groups and personalizing their
marketing with targeted promotional activities (Gutman & Miaoulis, 2003). These aggressive
marketing tactics are used by HEI’s to maintain or develop a distinct image and create a
competitive advantage in an increasingly competitive market (Parameswaran & Glowacka,
1995).

TABLE I: OPERATIONAL DEFINITIONS
Item/Variable Definition
HEI Category Higher Education Institution
National University 248 HEI’s that offer a wide range of undergraduate
majors as well as master's and doctoral degrees;
many strongly emphasize research.
Liberal Arts Colleges 215 HEI’s that emphasize undergraduate education
and award at least 50 percent of their degrees in the
liberal arts.
Universities-Masters 570 HEI’s provide a full range of undergraduate and
master's programs. But they offer few, if any,
doctoral programs.
Comprehensive 324 HEI’s focus primarily on undergraduate
Colleges-Bachelors education just as the liberal arts colleges do but grant
fewer than 50 percent of their degrees in liberal arts
disciplines.
Brand Name Execution Spelled-out = Harvard University
Acronym = USC
Combination = UNC Asheville
Positioning Statement Intangible: ex. “dream a little…”
Product-focus: ex. “A world-class education”
Customer-focus: ex. “You Belong Here.”
Positioning Statement Positioning statement is with brand name
Presence Positioning statement is isolated on home page
Brand Symbol Academic Icon (ex. traditional crest/shield) or
Modern Logo (ex. contemporary graphic mark)

Studying online brand identity practices will be helpful in assessing how HEI’s
position themselves online in this highly competitive market. The Internet has emerged as the
single most important marketing communication tool for students (Integrating high-tech tools,
2002) and has been preferred to traditional print marketing materials for reliability and
usefulness (Wolff, & Bryant, 1999).

III. RESEARCH QUESTIONS AND HYPOTHESES

Based on the literature, four research questions were developed for U.S. four-year HEI
home pages. RQ1: How do HEI’s execute their brand names? RQ2: To what extent do HEI’s
incorporate positioning statements into their online brand identity strategies? RQ3: To what
extent do HEI’s incorporate brand symbols into their brand identity strategies? RQ4: To what
extent do HEI positioning statements incorporate tangible and intangible attributes?

Four sets of hypotheses were developed to support these research questions. H1: The
number of National HEI’s utilizing positioning statements will be significantly less than the
number of National HEI’s that do not utilize positioning statements. H1a: The number of
Liberal Arts HEI’s utilizing positioning statements will be significantly less than the number
of Liberal Arts HEI’s that do not utilize positioning statements. H1b: The number of Masters
HEI’s utilizing positioning statements will be significantly more than the number of Masters
HEI’s that do not utilize positioning statements. H1c: The number of Comprehensive HEI’s
utilizing positioning statements will be significantly more than the number of Comprehensive
HEI’s that do not utilize positioning statements.

H2: The number of National HEI’s isolating positioning statements on their home
pages will be significantly greater than National HEI’s that associate positioning statements
directly with the brand name. H2a: The number of Liberal Arts HEI’s isolating positioning
statements on their home pages will be significantly greater than Liberal Arts HEI’s that
associate positioning statements directly with the brand name. H2b: The number of Masters
HEI’s associating their positioning statements directly with the brand name will be
significantly greater than Masters HEI’s that isolate positioning statements from their brand
name. H2c: The number of Comprehensive HEI’s associating their positioning statements
directly with the brand name will be significantly greater than Comprehensive HEI’s that
isolate positioning statements from their brand name.

H3: National HEI’s will utilize traditional academic icon brand symbols significantly
more than modern brand symbols. H3a: Liberal Arts HEI’s will utilize traditional academic
icon brand symbols significantly more than modern brand symbols. H3b: Masters HEI’s will
utilize modern brand symbols significantly more than traditional academic icon brand
symbols. H3c: Comprehensive HEI’s will utilize modern brand symbols significantly more
than traditional academic icon brand symbols.

H4: National HEI’s will use product-focused strategies more frequently than
consumer-focused strategies. H4a: Liberal Arts HEI’s will use product-focused strategies
more frequently than consumer-focused strategies. H4b: Masters HEI’s will use consumer-
focused strategies more frequently than product-focused strategies. H4c: Comprehensive
HEI’s will use consumer-focused strategies more frequently than product-focused strategies.

IV. METHODOLOGY

Two trained research assistants electronically captured all four-year U.S. HEI home
pages listed on the U.S. News and World Report database (www.usnews.com) Jan. 15, 2005 –
Jan. 30, 2005 using Snag-it software. U.S. News and World Report first assigns schools to a
group of their peers, based on categories developed by the Carnegie Foundation for the
Advancement of Teaching. All home pages were analyzed to provide a snap shot of U.S. four-
year HEI brand identity strategies. The HEI home page was selected as the unit of analysis
because of its ability to project brand identity to all visitors (Boshier et al., 2001).

The independent variable analyzed was HEI “category” (Table I). Dependent
variables analyzed were “brand name execution,” “positioning statement,” “positioning
statement placement,” “brand symbol,” and “positioning statement strategy.” Two coders
achieved 100 percent agreement on all variables after three rounds. Individual items that
coders could not reach agreement on were removed from the study.

Chi-square analysis was used to explore any association between HEI category
(National; Liberal Arts; Masters; Comprehensive) and all dependent variables (brand name,
positioning statement, brand symbol, strategy). The Chi-square test was deemed appropriate
because all variables are categorical. Chi-square tests can be misleading if any cell has an
expected value of less than 1.0 or more than 20% of the cells have expected values less than 5.
All Chi-square tests were valid and did not exceed these parameters.
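As a hedged sketch of this procedure (the code and its use of scipy are illustrative, not the authors' actual analysis), the counts later reported in Table VII can be run through a chi-square test of independence, with the validity rules above checked against the expected counts:

```python
# Chi-square test of independence on the HEI category x positioning-statement
# contingency table (counts taken from Table VII of this paper).
import numpy as np
from scipy.stats import chi2_contingency

# Rows: National, Liberal Arts, Masters, Comprehensive.
# Columns: positioning statement present / not present.
observed = np.array([
    [ 88, 158],
    [ 99, 113],
    [286, 259],
    [212, 114],
])

chi2, p, dof, expected = chi2_contingency(observed)

# Validity rules of thumb quoted above: no expected cell below 1.0,
# and no more than 20% of cells with expected counts below 5.
assert expected.min() >= 1.0
assert (expected < 5).mean() <= 0.20

# Reproduces the paper's reported Chi-Square = 50.42, df = 3.
print(f"Chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```

Because the expected counts here are all large, the chi-square approximation is safe; the validity assertions make that check explicit rather than leaving it as an eyeball judgment.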

V. RESULTS

This exploratory content analysis of 1329 HEI home pages (Table II) found that the
majority of institutions (94.4%) “spell out” their complete brand name (Table III), incorporate
“positioning statements” (51.5%) (Table IV), and utilize a “brand symbol” (66.3%) to establish
their online brand identity (Table V).

TABLE II: HEI CATEGORY TABLE III: BRAND NAME


Category Execution
National 246 (18.5%) Spelled out 1148 (94.4%)
Liberal Arts 212 (16.0%) Acronym 53 (4.4%)
Masters 545 (41.0%) Combination 15 (1.2%)
Comprehensive 326 (24.5%)
Note. N=1329 Note. N=1216

TABLE IV: POSITIONING TABLE V:


STATEMENT PRESENCE BRAND SYMBOL PRESENCE
Presence Presence
Yes 685 (51.5%) Yes 876 (66.3%)
No 644 (48.5%) No 446 (33.7%)
Note. N=1329 Note. N=1322

Table VI indicates that all HEI categories feature their complete brand name
significantly more than alternative executions.

TABLE VI: HEI BY BRAND NAME EXECUTION


HEI Category	Spelled Fully	Brand Acronym	Combination Brand Execution
National 195 (87.4%) 25 (11.2%) 3 (1.3%)
Liberal Arts 169 (97.1%) 2 (1.1%) 3 (1.7%)
Masters 483 (94.2%) 21 (4.1%) 9 (1.8%)
Comprehensive 301 (98.4%) 5 (1.6%) 0 (0.0%)
Note. N= 1216; Chi-Square= 40.49; df= 6; p <.05.

H1 and H1a were both supported (Table VII). The majority of “National” (64.2%) and
“Liberal Arts” (53.3%) HEI’s do not use positioning statements to establish their online brand
identity. H1b and H1c were also supported. The majority of “Masters” (52.5%) and
“Comprehensive” (65.0%) HEI’s incorporate positioning statements into their online brand
identity strategies.

TABLE VII: HEI BY POSITIONING STATEMENT PRESENCE


HEI Category Present Not Present
National 88 (35.8%) 158 (64.2%)
Liberal Arts 99 (46.7%) 113 (53.3%)
Masters 286 (52.5%) 259 (47.5%)
Comprehensive 212 (65.0%) 114 (35.0%)
Note. N= 1329; Chi-Square= 50.42; df= 3; p <.05.
H2 and H2a were supported (Table VIII). The majority of “National” (53.7%) and
“Liberal Arts” (57.0%) HEI’s utilizing positioning statements isolate them away from their
brand name execution. H2b and H2c were not supported. The majority of “Masters” (54.1%)
and “Comprehensive” (52.8%) HEI’s isolate their positioning statements away from their
brand name more than through direct association.

TABLE VIII: HEI BY POSITIONING STATEMENT PLACEMENT


HEI Category w/Brand Name Isolated on Page
National 38 (46.3%) 44 (53.7%)
Liberal Arts 40 (43.0%) 53 (57.0%)
Masters 119 (45.9%) 140 (54.1%)
Comprehensive 91 (47.2%) 102 (52.8%)
Note. N= 627; Chi-Square= 0.44; df= 3; p >.05.
H3 and H3a were not supported (Table IX). The majority of “National” (72.1%) and
“Liberal Arts” (69.6%) HEI’s incorporate modern brand symbols with their brand names. H3b
and H3c were both supported. The majority of “Masters” (75.0%) and “Comprehensive”
(68.3%) HEI’s incorporate modern brand symbols with their brand names.

TABLE IX: HEI BY BRAND SYMBOL EXECUTION


Traditional Academic
HEI Category Icon Modern Brand Symbol
National 46 (27.9%) 119 (72.1%)
Liberal Arts 35 (30.4%) 80 (69.6%)
Masters 93 (25.0%) 279 (75.0%)
Comprehensive 71 (31.7%) 153 (68.3%)
Note. N= 876; Chi-Square= 3.52; df= 3; p >.05.
H4 and H4a were supported (Table X). “National” (31.8%) and “Liberal Arts”
(35.4%) HEI’s use product-focused more frequently than consumer-focused strategies
(National=12.5%; Liberal Arts=6.1%). H4b and H4c were not supported. “Masters” (37.1%)
and “Comprehensive” (34.9%) HEI’s also used product-focused more than consumer-focused
strategies (Masters=12.6%; Comprehensive=17.0%).

TABLE X: HEI BY POSITIONING STATEMENT STRATEGY


HEI Category Intangible (a) Consumer Focus (b) Product Focus (c)
National 72 (81.8%) 11 (12.5%) 28 (31.8%)
Liberal Arts 70 (70.7%) 6 (6.1%) 35 (35.4%)
Masters 241 (84.3%) 36 (12.6%) 106 (37.1%)
Comprehensive 181 (85.4%) 36 (17.0%) 74 (34.9%)
a. Note. N=685; Chi-Square=11.30; df=3; p >.05.
b. Note. N=685; Chi-Square=0.86; df=3; p >.05.
c. Note. N=685; Chi-Square=7.25; df=3; p <.05.

VI. CONCLUSIONS
Overall, U.S. four-year HEI’s utilize a variety of brand name executions, positioning
statements, and brand symbol strategies to project brand identity to visitors of their
home pages. Understanding the brand identity strategies implemented by each HEI category
provides a foundation for future research and information for HEI’s contemplating marketing
strategy relative to their competition.

Future studies should replicate portions of this study to analyze marketing
communication trends across all HEI classifications. Expanding the scope to two-year and
international HEI’s is another direction this study suggests for future researchers. Higher
education is a product that affects a great many consumers, and this study identified ways in
which U.S. four-year HEI’s differ in their online brand identity marketing communication
strategies.

REFERENCES

Boshier, R., Brand, S., Dabiri, A., Fujitsuka, T. & Tsai, C. (2001) Virtual universities
revealed: More than just a pretty interface. Distance Education, 22 (2), pp. 212-231.
Braxton, J. M. (1979) The influence of student recruitment activities: Relationship between
experiencing an activity and enrollment. Paper presented at the annual forum of the
Association for Institutional Research (19th, San Diego, California, May 13-19).
Cook, R. & Zallocoo, R. (1983) Predicting university preference and attendance: applied
marketing in higher education administration. Research in Higher Education, 19 (2), pp.
197-211.
Duncan, T. (2002) IMC: Using advertising, & promotion to build brands. New York:
McGraw-Hill Irwin.
Evelyn, J. (2002) For many community colleges, enrollment equals capacity. Chronicle of
Higher Education, 48 (33), A41.
Gutman, J. & Miaoulis, G. (2003) Communicating a quality position in service delivery: An
	application in higher education. Managing Service Quality, 13 (2), pp. 105-111.
Integrating high-tech tools with traditional recruitment strategies (March 2002). Project
	Connect: A study by Carnegie Communications, LLC, pp. 1-10. [Online].
	Available: www.carnegiecomm.com.
Kotler, P. & Fox, K. (1985) Strategic marketing for educational institutions. Englewood
Cliffs, NJ: Prentice-Hall.

BRAND KNOWLEDGE, BRAND ATTITUDE, PURCHASES & AMOUNT WILLING
TO PAY FOR SELF & OTHERS: THIRD-PERSON PERCEPTION & THE BRAND

Thomas J. Prinsen, The University of South Dakota


thomas.prinsen@usd.edu

ABSTRACT

This study illustrates the relationship between the third-person perception hypothesis
and branding theory. Survey respondents thought others would 1) have more knowledge about
the Nike brand, 2) have a more positive attitude toward Nike, 3) be more likely to purchase
Nike shoes the next time they purchase shoes, 4) pay more for Nike shoes, 5) own more Nike
shoes, 6) be more likely to have their image of self influenced by wearing Nike shoes, and 7)
be more likely to have their image of others influenced by the wearing of Nike shoes by
others. Also, “perceived differences between self and others” and “respondents’ attitudes
toward being influenced by branding” are correlated with many aspects of branding.

I. INTRODUCTION

This study was designed to test for a relationship between third-person perception and
branding. Branding influence was divided into brand knowledge, brand attitudes, purchase
intention, amount willing to pay, past purchases, self-image, image of others, and attitude
toward being influenced by branding. The third-person effect posits that people tend to see
others as more influenced by media messages than they themselves are. The difference
between the amount of perceived influence on self and others should be evident in each of the
previously mentioned areas of branding.

II. THIRD-PERSON PERCEPTION

Davison (1983) wrote, “individuals who are members of an audience that is exposed to
a persuasive communication (whether or not this communication is intended to be persuasive)
will expect the communication to have a greater effect on others than on themselves” (p. 3).
Since 1983, this “third-person effect” has been found in research conducted on social issues
such as: censorship of pornography (Gunther, 1995), censorship of rap lyrics (McLeod,
Eveland, & Nathanson, 1997), body image (Choi & Leshner, 2003; David & Johnson, 1998),
television violence (Hoffner, et al., 2001), direct-to-consumer prescription drug advertising
(Huh, 2003) and others.

Gender has been a variable in several third-person studies. Tiedge, Silverblatt, Havice
and Rosenfeld (1991) found that gender had no significant effect on perceived first-person
effects, perceived third-person effects, or discrepancy scores when respondents were asked
about media effects. Rojas, Shah and Faber (1996) found no significant difference in third-
person perception between male and female respondents but did find that females were more
willing to censor pornography. When surveyed about the O.J. Simpson trial and their ability to
serve as impartial jurors, women were more likely than men to perceive a third-person
perception (Driscoll & Salwen 1997).

Attitude toward being influenced has been a variable in third-person studies. Salwen
and Dupagne (1999), David and Johnson (1998) and Brosius and Engel (1996) found that
individuals perceive that being influenced by the media is a negative phenomenon. Perloff
(1993) found that the third-person effect is most likely to appear when people find the
communication message to not be personally beneficial, when the message is personally
important, and when they feel that the source has a negative bias. Many third-person studies
have focused on negative issues such as alcohol (David, Liu, & Myser, 2004), misogynic rap
lyrics (McLeod et al., 1997), controversial sex tapes (Chia, Lu, & McLeod, 2004),
pornography (Gunther, 1995), and television violence (Hoffner et al., 2001). Preservation of
self-image is similar to attitude toward being influenced. If the effect of being influenced is
seen as negative, it would degrade a person’s self-image to admit to being influenced
(Gunther & Mundy, 1993). Being influenced by media or other outside forces indicates a loss
of one’s freedom (Brosius and Engel, 1996).

III. BRANDING

A brand is “a perception resulting from experiences with, and information about, a
company or a line of products” (Duncan, 2005). Keller (1998) has broken branding into
specific areas such as brand knowledge, brand attitude, brand equity, and others. “Brand
knowledge can be conceptualized in terms of a brand node in memory with brand
associations, varying in strength, connected to it” (Keller, 1998). Consumers need enough
information and enough positive associations to differentiate one brand of a product from
other brands of the same product because brands are often very similar to one another. For
Richards, Foster, and Morgan (1998), “knowledge, then, is the essence of what a brand
represents, how it can achieve competitive advantage and ultimately significant value to a
business. Brands are, quintessentially, knowledge” (p. 2). The purpose of this knowledge is to
create a source of differentiation (Kohli & Thakor, 1997). Differentiation helps explain why
consumers will pay significantly different prices for similar products with different brand
names.

The next step beyond brand knowledge is brand attitude. “The most abstract and
highest-level type of brand associations are attitudes” (Keller, 1998, p. 100). If consumers’
attitudes toward brands are positive, there is a greater likelihood of a purchase and a purchase
is, after all, the goal of the marketing campaign. There are several different definitions of
brand equity but “in a general sense, most marketing observers agree that brand equity is
defined in terms of the marketing effects uniquely attributable to the brand” (Keller, 1998, p.
42). Keller goes on to specifically define customer-based brand equity “as the differential
effect that brand knowledge has on consumer response to the marketing of that brand” (p. 45).
One key area in which brand equity manifests itself is pricing. Brand equity allows companies
to charge more for their branded products. Purchase intention is related to brand knowledge.
Consumers who are more familiar with a brand are more likely to show an intention to
purchase that brand (Laroche, Kim, & Zhou, 1996). Grewal, Krishnan, Baker, and Borin
(1998) also found a relationship. Respondents with high brand knowledge were more
influenced by brand name. Low knowledge respondents were likely to be influenced by
techniques like price discounts. Although respondents with high brand knowledge are more
influenced by brand name, they might not be willing to admit it. Repeat purchases can have a
significant impact on the profitability of a company. Kirmani, Sood, and Bridges (1999)
studied the relationship between current owners and line stretches. “Compared with
nonowners, most owners are likely to have greater liking, familiarity, knowledge, and
involvement with the brand” (p. 2). These attributes of owners make them likely to become
repeat purchasers.

“A consumer’s self-concept (self-image) can be defined, maintained, and enhanced
through the products they purchase and use” (Graeff, 1996). This concept makes image
congruence important to marketers. Marketers must match the image of their product with the
self-images of their consumers to have a strong appeal. Related to the concept of self-image
are the concepts of how people see others and how people think others see them. Essentially,
people are concerned about how they fit into society. Jim Crimmins, , DDB’s world-wide
brand planning director, states that “We’re not marketing just to isolated individuals. We’re
marketing to society. How I feel about a brand is directly related and affected by how others
feel about that brand” (Kranhold, 2000, p. 1).

IV. HYPOTHESES

1. Respondents will estimate that others will have more knowledge about Nike shoes than
self.
2. Respondents will estimate that others will have a more positive attitude toward Nike
shoes than self.
3a. Respondents will think that others will be more likely to purchase Nike shoes than self.
3b. Respondents will estimate that others will be willing to pay more for a pair of Nike shoes
than self. (third-person perception)
4. Respondents will estimate that others own more Nike shoes than themselves.
5a. Respondents will think that wearing Nike shoes has a stronger influence on the "image of
self" in others than in themselves.
5b. Respondents will think that others’ image of “others” will be more influenced if “others”
were wearing Nike shoes than self's image of others in the same situation.
A secondary hypothesis concerning gender was added to each of the previously noted
hypotheses. For example: H1B would read “Female respondents will show more evidence of
third-person perception, in the estimate of self and others’ knowledge about Nike, than male
respondents.”
6. Perceived "difference" between self and "others" will be positively related to the amount
of third-person perception, (…as measured by brand familiarity [H6A].) (…as measured
by brand attitude [H6B].) (…as measured by purchase intention [H6C].) (…as measured
by amount willing to pay [H6D].) (…as measured by past purchases [H6E].) (…as
measured by “image of self” [H6F].) (…as measured by “image of others” [H6G].)
7. The amount of positive attitude toward being influenced by brand will be negatively
related to the amount of third-person perception, (…as measured by brand familiarity
[H7A].) (…as measured by brand attitude[H7B].) (…as measured by purchase intention
[H7C].) (…as measured by amount willing to pay [H7D].) (…as measured by past
purchases [H7E].) (…as measured by “image of self” [H7F].) (…as measured by “image
of others” [H7G].)

V. METHOD

The questionnaire was administered to 182 undergraduate mass communication
students at a Midwestern university. Students were used because undergraduate students are
highly brand conscious and often wear Nike tennis shoes. Students were also a convenient
sample. There were 100 male and 76 female students for a total of 177 usable surveys
completed.

The questionnaire was divided into segments according to the aspects of branding
noted in each hypothesis, including brand knowledge, brand attitude, purchase intention,

amount willing to pay, past purchases, self-image and image of others. Cronbach’s alpha was computed for each scale, and questions were dropped as appropriate until alpha levels were acceptable. Response set was avoided by designing both positive and negative statements and
by varying the order of responses. “Self” statements were placed before “others” statements to
ensure that question order did not determine the results.
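The alpha-based item pruning described above can be sketched as follows. This is a minimal illustration with hypothetical 5-point item scores; the function names, the .70 threshold, and the data are assumptions for demonstration, not the study's actual scales:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (equal length)."""
    k = len(items)
    total = [sum(vals) for vals in zip(*items)]          # per-respondent scale totals
    item_var = sum(variance(col) for col in items)       # sum of item variances
    return (k / (k - 1)) * (1 - item_var / variance(total))

def prune_items(items, threshold=0.70):
    """Drop the item whose removal most improves alpha until alpha is acceptable."""
    items = list(items)
    while len(items) > 2 and cronbach_alpha(items) < threshold:
        # alpha obtained if each item in turn were dropped
        trials = [cronbach_alpha(items[:i] + items[i + 1:]) for i in range(len(items))]
        best = max(range(len(items)), key=lambda i: trials[i])
        if trials[best] <= cronbach_alpha(items):
            break  # no single drop helps; stop pruning
        items.pop(best)
    return items

# Hypothetical responses from eight respondents: three consistent items, one noisy item
a = [5, 4, 4, 5, 3, 4, 5, 2]
b = [4, 4, 5, 5, 3, 4, 4, 2]
c = [5, 5, 4, 4, 3, 5, 5, 1]
noisy = [1, 5, 2, 5, 4, 1, 3, 5]
print(round(cronbach_alpha([a, b, c]), 2))  # → 0.91
```

On this made-up data, including the noisy item drags alpha well below .70, and `prune_items` discards it while keeping the three coherent items.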

VI. RESULTS

Hypotheses 1A, 2A, 3aA, 4A, 5aA, and 5bA were supported. Third-person perception
was evident in brand knowledge F(1, 173) = 33.46, p < .01, brand attitude F(1, 172) = 41.76,
p < .01, purchase intention F(1, 173) = 105.65, p < .01, amount willing to pay F(1, 164) =
93.47, p < .01, past purchases F(1, 167) = 6.07, p < .05, image of self F(1, 174) = 33.13, p <
.01, and image of others F(1, 173) = 16.90, p < .01.

Gender was significant only for hypothesis 3aB, F(1, 173) = 4.10, p = .05. Female
respondents showed more evidence of third-person perception than male respondents as
associated with purchase intention. Hypotheses 1B, 2B, 3bB, 4B, 5aB, and 5bB were not
supported.

A mixed relationship was found between the perceived difference between self and
others and the amount of third-person perception associated with each dependent variable.
Dependent variables with a significant relationship are brand knowledge (hypothesis 6A;
r=.20; p=.01), brand attitude (hypothesis 6B; r=.18; p=.02), past purchases (hypothesis 6E; r=-
.17; p=.03), image of self (hypothesis 6F; r=.19; p=.01), and image of others (hypothesis 6G;
r=.25; p=.00). However, a significant relationship was not found with purchase intention
(hypotheses 6C) or amount willing to pay (hypothesis 6D).

A mixed relationship was also found between the amount of positive attitude toward
being influenced by brand and the amount of third-person perception associated with each
dependent variable. Dependent variables with a significant relationship are brand knowledge
(hypothesis 7A; r=.26; p=.00), brand attitude (hypothesis 7B; r=.32; p=.02), purchase intention
(hypothesis 7C; r=.26; p=.00), past purchases (hypothesis 7E; r=-.36; p=.00), and image of
self (hypothesis 7F; r=.20; p=.01). The significant relationships, except past purchases, were
in the opposite direction of the proposed relationship. A significant relationship was not found
with amount willing to pay (hypothesis 7D) or image of others (hypothesis 7G).
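The coefficients reported above are Pearson product-moment correlations; a minimal sketch with hypothetical paired scores (variable names and data are invented for illustration):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores: perceived self-other difference vs. size of third-person gap
diff_scores = [1, 2, 2, 3, 4, 4, 5, 6]
tpp_gap     = [0, 1, 2, 2, 2, 3, 3, 4]
print(round(pearson_r(diff_scores, tpp_gap), 2))  # → 0.93
```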

VII. CONCLUSION

This study illustrates the relationship between the third-person perception hypothesis
and branding theory. Respondents perceived others as being more influenced than self on
every aspect of branding. Respondents thought others would 1) have more knowledge about
the Nike brand, 2) have a more positive attitude toward Nike, 3) be more likely to purchase
Nike shoes the next time they purchase shoes, 4) pay more ($34.77 more on average) for Nike
shoes, 5) own more Nike shoes, 6) be more likely to have their image of self influenced by
wearing Nike shoes, and 7) be more likely to have their image of others influenced by the
wearing of Nike shoes by others. The findings underscore the importance of the old phrase
that “advertising is a battle of perception.” Consumers’ thoughts about the brand may not be
as important as what consumers think others think about the brand. This finding is also
consistent with the “double jeopardy” theory in which the leading brand maintains its position
because it is the leading brand. Perceived difference between self and others is an important

aspect of third-person perception and was found to be correlated with many of the aspects of
branding mentioned in this study. These correlations show that branding and third-person
perception are also related to one another and possibly share some of the same principles.
Future research can further explore this relationship to test if brand influence can be measured
by measuring third-person perception.

Rather than the hypothesized negative relationship between a positive attitude toward
being influenced by branding and the branding variables, there was a positive relationship. As
respondents knew more about Nike shoes, liked Nike shoes, intended to purchase Nike shoes, and improved their self-image by wearing Nike shoes, their attitude toward being influenced by branding became more positive. It is impossible to determine exactly how respondents interpreted the survey statements, but it is possible that the final scale was answered in the context of the Nike survey rather than as an attitude toward being influenced by branding in general.
Being positively influenced by a brand you know and like makes sense. The reliability of the
scale is low and should be improved in future research into this topic.


CONGRUENCY IN STRATEGIC CORPORATE SOCIAL RESPONSIBILITY:
CONSUMER ATTITUDE TOWARD THE COMPANY & PURCHASE INTENTION

Youjeong Kim, Pennsylvania State University
yuk130@psu.edu

Charles A. Lubbers, University of South Dakota
clubbers@usd.edu

ABSTRACT

This study examines the effects of corporate sponsorship that supports social marketing programs as a part of corporate social responsibility (CSR), and the congruency effect of the sponsorship linkage on consumers’ attitudes toward the sponsor and purchase intentions. Through empirical research, this study found that Public Service Advertisements (PSAs) with a congruent linkage between the sponsor and the sponsored marketing program are more persuasive than those with an incongruent linkage. Participants who watched a PSA
congruent with the sponsoring company favored the company more and had greater purchase
intention.

I. INTRODUCTION
Social responsibility in modern society requires a company to be a corporate citizen
conforming to the needs of customers in addition to supplying good products at a reasonable
price and thus contributing to society (Lee, 2002). In the long term, pro-social activities are profitable not only to society and people, but also to the stakeholders and the company itself, by improving the company image and ultimately increasing sales. Some scholars with
this economic view of corporate engagement in good deeds call it “social investment” (Stump,
1999). In this view, profit and social responsibility are not separate issues. Companies
consider their interests before the desire to do good deeds. If activities such as encouraging
employees, improving the company image, and reducing government intervention are
profitable, companies would participate in social responsibility more actively.

Sponsorship of “doing good things” is a kind of corporate social responsibility. With the growing interest in sponsorship as a part of marketing strategy, many companies have
participated in various events. Meenaghan (1991) argued that sponsorships have advantages
over other forms of promotion because they are “small, flexible, and can improve the image of
companies” (Hastings, 1984). In this context, along with the effect of advertising, corporate
sponsorship of social marketing programs should generate more positive consumer attitudes
and purchase intentions, with regard to behavioral effect. D’Astrous and Bitz (1995) found
that strong associations between sponsor and event were evaluated more positively than weak
company-event associations. The associative links could play a key role in maximizing the
behavioral effects when companies sponsor “good things.” The primary goal of this study is
to examine the persuasive effects of sponsorship linkage between company (i.e., the sponsor)
and the sponsored social marketing program (i.e., sponsee, the thing being sponsored). In
particular, a PSA (Public Service Advertisement) can serve as a for-profit organization’s sponsee for a social marketing program. In this study, attitude and purchase intention for
persuasive impact are examined in relation to sponsor-sponsee congruency.

II. REVIEW OF LITERATURE
Corporate Social Responsibility
Business is a social institution and thus obliged to use its power responsibly.
Wood (1991) described three driving principles of social responsibility, which are:
• society grants legitimacy and power to business, and firms that misuse that power will tend to lose it;
• businesses are responsible for the outcomes relating to their areas of involvement with society; and
• individual managers are moral agents who are obliged to exercise discretion in their decision making.
Lantos (2001) classified CSR into three types based on their nature (required versus optional)
and purpose (for stakeholders’ good, for the firm’s good, or for both): ethical CSR, altruistic
CSR, and strategic CSR. Ethical CSR is “morally mandatory and goes beyond fulfilling a
firm’s economic and legal obligations, to its responsibilities to avoid harms or social injuries,
even if the business might not benefit from this” (p. 605).

Altruistic (humanitarian, philanthropic) CSR is defined as “contribution to the common good at the possible, probable, or even definite expense of the business” (Lantos,
p.605). According to Lantos, in humanitarian CSR, firms “go beyond preventing or rectifying
harms they have done (ethical CSR) to assuming liability for public welfare deficiencies that
they have not caused” (p.605). This includes actions that morality does not mandate but that
are beneficial for the firm’s constituencies, although not necessarily for the company.

Strategic CSR is considered a part of marketing. It creates a win-win situation in which both corporation and stakeholders benefit. According to Lantos (2001, p.618)
companies do CSR, or “strategic philanthropy” (Caroll, 2001), to accomplish a strategic
business goal. Good deeds are believed to be good for business as well as for society. With
strategic CSR, corporations “give back” to their constituencies because they believe it to be in
their best financial interests to do so. This is “philanthropy aligned with profit motives”
(Quester & Thompson, 2001).

Companies’ engagement in social responsibility entails short-run sacrifice and even pain, but ultimately, it results in long-term gain (Lantos, 2001). Vaughn (1999) viewed
strategic CSR activities as investments in a “Goodwill bank” (p.199) that yields financial
returns (McWilliams & Siegal, 2001). These long-term benefits might not immediately show
up on a firm’s financial statements, but as an investment, a deposit in this bank of goodwill, it
allows withdrawals when the company comes under fire (Lantos, 2001). Companies that
practice strategic CSR provide pro-social deeds in various ways, such as providing shelter for
the destitute, building a museum, or renovating the local park (Brenkert, 1996). Added to
this, sponsoring PSA (Public Service Advertisement) campaigns with non-profit organizations
is also a socially responsible activity. PSAs address social issues in an effort to change public attitudes and behaviors and thus stimulate positive social change. They are an important part of social marketing (e.g., Andreasen, 1994).

Theoretical Framework
Studies of successful social marketing programs have found that social marketing
programs that are strongly related to products and the category of the sponsoring company
were effective in achieving desired short-term effects. Here, we can assume that if the
message of a social marketing program and the category of the sponsoring company or its
product are congruent (matched), it might be possible to increase the effectiveness of the
message, while at the same time increasing the awareness of the product or company. To
understand match-up effect, it is necessary to touch on associative learning theory. The key
variable in an associative link between two concepts (such as a brand and an endorser) is
“belongness, relatedness, fit, or similarity” (Till & Busler, 2000). Generally, the more similar
two concepts are, the more likely the two concepts will become integrated within an
associative network (e.g., Hamm, Vaitl, & Lang, 1989). The associative link between a brand
and an endorser predicts endorser effects and match-up effects.

The Statement of Research Purpose
This study attempts to identify effective CSR. Here, the type of CSR is limited to a
PSA campaign on TV and the term effectiveness is defined as consumer attitudes toward the
sponsoring company and their purchase intentions for products of the company. Attitude and
purchase intention are used to measure the effect of advertising. Studies in advertising
generally assume that more favorable brand evaluations can result in increased purchase
intentions for products of the sponsor (Rodgers, 2003-2004). Therefore, assuming that
behavioral intentions for sponsorship can be enhanced with a congruent sponsor-sponsee link,
this study addresses the following research questions and hypotheses.
RQ1: How will the congruency of PSA type and the category of the company sponsoring
the PSA campaign affect consumer attitudes toward the company and purchase
intentions for products of the company?
H1: Attitude toward the company will be more favorable for a congruent sponsorship
of PSA campaign than an incongruent sponsorship link.
H2: Purchase intentions for products of the sponsor will be higher for a congruent
sponsorship of PSA campaign than an incongruent sponsorship link.

III. METHOD

The purpose of this study was to examine how the congruency of PSA campaign and
the category of the sponsoring company affect viewer attitudes toward the company and
viewer intentions to purchase products from the company.

Instrumentation
Testing Stimuli. Two Ad Council PSA campaigns were selected for this experiment.
One was a PSA campaign with possible semantic relevance for for-profit companies. The other PSA campaign was not relevant to for-profit companies. In this study, the PSA
campaign selected from the first category was infant and child nutrition promotion, while the
PSA campaign selected from the second category was preventing drunk driving. In addition,
among many product categories, two food companies were selected for congruence with the infant and child nutrition campaign. One company was familiar, and the other was not. This
research project was part of a larger investigation that also included the participant’s
familiarity with the company. The familiarity results are not presented in this article.

Treatment. Participants watched a 14-minute video clip with three news segments taken from
CNN Headline News presentation and three breaks. The breaks consisted of commercials,
PSAs without for-profit organization sponsorship, and PSAs with four different conditions
that were manipulated for the study. The news segments and break #1 and #3 were randomly
assigned commercials and PSAs without for-profit organization sponsorship and were
presented to all participants. For treatment, break #2 consisted of 2 commercials (the same for
all participants), and a PSA that was manipulated for the experiment. The sponsorships were
manipulated by showing a brief text (e.g., “sponsored by …. “) with the sponsor logo on a
black screen at the end of each PSA campaign. The logo of the unfamiliar sponsor was
created.

The first group saw a PSA that was congruent with its sponsor, which was a familiar
company; the second group saw a PSA that was also congruent with its sponsor, but which
was an unfamiliar company; the third group saw a PSA that was incongruent with its sponsor,
which was a familiar company; and the last group saw a PSA that was incongruent with its
sponsor, which was an unfamiliar company.

Experimental Design and Procedure
A 2 (familiar company versus unfamiliar company) × 2 (congruent PSA versus
incongruent PSA) experimental design was used. Participants were assigned to one of four
groups. After watching video clips, subjects completed a questionnaire that measured 1)
viewers’ attitudes toward an (un)familiar company sponsoring an (in)congruent PSA and 2)
viewers’ purchase intentions for products of the (un)familiar company sponsoring the
(in)congruent PSA.
Congruency Manipulation Check. An independent samples t-test revealed that the child nutrition PSA campaign sponsored by Post or D-food was rated as congruent (M = 4.95; t = 7.819; p < .001), while the preventing drunk driving PSA campaign sponsored by Post or D-food was rated as incongruent (M = 3.59). Therefore, the manipulation of congruency was successful.
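The manipulation check above compares mean congruency ratings between two independent groups of participants. A minimal independent-samples t-test sketch (pooled-variance form; the 7-point ratings below are hypothetical, not the study's data):

```python
from math import sqrt
from statistics import mean, variance

def independent_t(g1, g2):
    """Pooled-variance independent-samples t; returns (t, df)."""
    n1, n2 = len(g1), len(g2)
    sp2 = ((n1 - 1) * variance(g1) + (n2 - 1) * variance(g2)) / (n1 + n2 - 2)
    t = (mean(g1) - mean(g2)) / sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Hypothetical "fit" ratings for congruent vs. incongruent sponsor-PSA pairings
congruent   = [5, 6, 5, 4, 6, 5, 5, 4]
incongruent = [3, 4, 3, 2, 4, 3, 4, 3]
t, df = independent_t(congruent, incongruent)
print(round(t, 2), df)  # → 4.78 14
```

A large positive t, as here, indicates the congruent pairing was rated a better fit, i.e., the manipulation worked.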

IV. RESULTS

To test the hypotheses, a factorial ANOVA (Analysis of Variance) was conducted with familiarity of the sponsoring company (familiar versus unfamiliar) and congruency between the PSA campaign and the category of the sponsoring company (congruent versus incongruent) as between-subjects factors. Aggregated scores were averaged within each group, and group means were compared across the familiarity and congruency conditions to determine the significance of the main effects and the interaction at the .05 level. First, the reliability of the attitude and purchase intention scales was checked, along with the manipulations of congruency and familiarity. The main effects were then analyzed.
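For a balanced design, the 2 × 2 between-subjects analysis described above can be sketched as follows. The factor levels and scores are hypothetical, invented purely to illustrate the computation:

```python
from statistics import mean

def anova_2x2(cells):
    """Balanced 2 x 2 between-subjects ANOVA.

    cells[a][b] holds the scores for level a of factor A and level b of
    factor B; every cell must contain the same number of scores.
    Returns the F ratios for A, B, and the A x B interaction.
    """
    n = len(cells[0][0])  # scores per cell (balanced design)
    grand = mean(s for row in cells for cell in row for s in cell)
    a_means = [mean(s for cell in row for s in cell) for row in cells]
    b_means = [mean(s for row in cells for s in row[b]) for b in range(2)]
    cell_means = [[mean(cell) for cell in row] for row in cells]

    ss_a = 2 * n * sum((m - grand) ** 2 for m in a_means)
    ss_b = 2 * n * sum((m - grand) ** 2 for m in b_means)
    ss_cells = n * sum((cell_means[a][b] - grand) ** 2
                       for a in range(2) for b in range(2))
    ss_ab = ss_cells - ss_a - ss_b
    ss_within = sum((s - cell_means[a][b]) ** 2
                    for a in range(2) for b in range(2)
                    for s in cells[a][b])
    ms_within = ss_within / (4 * (n - 1))  # within-cells df = 4(n - 1)
    return ss_a / ms_within, ss_b / ms_within, ss_ab / ms_within

# Hypothetical 7-point attitude scores:
# rows = congruent / incongruent PSA, columns = familiar / unfamiliar sponsor
scores = [
    [[5, 6, 5, 4], [5, 5, 6, 4]],   # congruent PSA
    [[4, 3, 4, 3], [3, 4, 3, 4]],   # incongruent PSA
]
f_congruency, f_familiarity, f_interaction = anova_2x2(scores)
print(round(f_congruency, 1), round(f_familiarity, 1), round(f_interaction, 1))
```

On this toy data only the congruency main effect is nonzero, mirroring the pattern of results reported below for attitude.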

Preliminary Analysis
Reliability. Four items for attitude toward the company and four items for purchase
intention for products of the company were checked for their reliability scores. The alpha
scores were .819 for attitude and .839 for purchase intention, both above the .75 level recommended for basic research by Wimmer and Dominick (2003).
Attention to news items. To determine whether subjects paid attention to the experimental stimuli, their ability to answer factual questions about the news segments was analyzed. A cumulative index score was created by summing the number of correct answers to seven news-item questions. Recall of the news items did not differ significantly across groups, indicating comparable levels of attention during the experiment.

Participants
The participants in this study were 193 graduate and undergraduate students (45.3%
male, 54.7% female) from various departments at a Midwestern university. Participants were
awarded a nominal amount of extra credit in exchange for their participation. They were
assigned randomly to four groups. Twenty responses were discarded because those participants indicated they had never heard of the company, and one response was discarded because of incomplete answers; responses from the remaining 172 participants (89%) were analyzed.

Main Analyses
Hypothesis 1 predicted that a PSA congruent with the category of the company would
generate a more favorable attitude than an incongruent PSA. According to the ANOVA results, a congruent PSA (the child nutrition PSA sponsored by Post or D-food; M = 5.08) created a more favorable attitude toward the company (F(1, 172) = 47.452, p < .001) than an incongruent PSA (the preventing drunk driving PSA sponsored by Post or D-food; M = 3.91).
Therefore, H1 was supported (Table 1).
Table 1. The ANOVA results for attitude toward the sponsoring companies.
F p
Corrected Model 16.224 .000
Intercept 2777.876 .000
Congruency 47.452 .000
Familiarity .096 .757
Congruency × Familiarity .391 .532

Hypothesis 2 stated that a congruent PSA would generate higher purchase intentions
than an incongruent PSA. ANOVA results (see Table 2) showed that subjects who watched a PSA congruent with its sponsor (M = 4.31) reported higher purchase intentions (F(1, 172) = 7.015, p = .009) than subjects who watched an incongruent PSA (M = 3.92). Therefore, H2 was also supported.
Table 2. The ANOVA results for purchase intention for products of the sponsoring company.
F p
Corrected Model 3.957 .009
Intercept 2519.755 .000
Congruency 7.015 .009
Familiarity 5.768 .017
Congruency × Familiarity .654 .420

V. CONCLUSION

This research examined the congruency effect of sponsor and sponsored social
marketing programs in terms of attitude toward the sponsor and purchase intention for the
products of the sponsor. Overall, the results of the current study suggest that congruency
between sponsor and sponsored social marketing program can be an important factor for
effective marketing communication. Hypotheses 1 and 2 were supported, showing that
sponsors who are closely associated with a congruent PSA campaign are more persuasive than
when they are associated with an incongruent PSA campaign. A congruent PSA was more
likely to generate favorable attitudes and purchase intentions for the products of the sponsor
than an incongruent PSA.

Empirical research on the match-up hypothesis asserts that a positive image from
“doing good things” is transferred to purchase intentions by matching sponsor to a relevant

social marketing program (Kahle & Homer, 1985). The match-up hypothesis asserts the
attitude-to-behavior process. This study confirms that the congruency effect also applies to the sponsorship linkage between a company and a sponsored social marketing program, supporting the findings of McDaniel (1999) that a matched sponsor-event relationship is more effective than a mismatched relationship in terms of attitude and purchase intentions.

REFERENCES
Andreasen, A. R. “Social marketing: Its definition and domain.” Journal of Public Policy and
Marketing, 13(1), 1994, 108-114.
Brenkert, G.B. “Private corporations and public welfare.” In R.A. Larmer. (ed.). Ethics in the
workplace: Selected readings in business ethics. Minneapolis/ St. Paul, MN: West
Publishing Company, 1996.
D’Astrous, A., and Bitz, P. “Consumer evaluations of sponsorship programmes.” European
Journal of Marketing, 29(12), 1995, 6-22.
Hamm, A.O., Vaitl, D., and Lang, P.J. “Fear conditioning, meaning, and belongingness: A
selective association analysis.” Journal of Abnormal Psychology, 98(4), 1989, 395-
406.
Hastings, G.B. “Sponsorship works differently from advertising.” International Journal of
Advertising, 3, 1984, 171-176.
Kahle, L.R., and Homer, P. “Physical attractiveness of the celebrity endorser: A social
adaptation perspective.” Journal of Consumer Research, 11(March), 1985, 954-961.
Lantos, G.P. “The boundaries of strategic corporate social responsibility.” Journal of
Consumer Marketing, 18(7), 2001, 595-630.
Lee, S.M. “Corporate social responsibility: The comparison of corporate social contribution
between Korea and America.” Korean Sociological Association, 36(2), 2002, 77-111.
McDaniel, S.R. “An investigation of match-up effects in sports sponsorship advertising: the
implications of consumer advertising schemas.” Psychology and Marketing, 16(2),
1999, 163-184.
McWilliams, A., and Siegal, D. “Corporate social responsibility: a theory of the firm
perspective.” Academy of Management Review, 26(1), 2001, 117-127.
Meenaghan, T. “The role of sponsorship in the marketing communication mix.” International
Journal of Advertising, 10(1), 1991, 35-47.
Quester, P.G., and Thompson, B. “Advertising and promotion leverage on arts sponsorship
effectiveness.” Journal of Advertising Research, 41(1), 2001, 33-47.
Rodgers, S. “The effects of sponsor relevance on consumer reactions to internet
sponsorships.” Journal of Advertising, 32(4), 2003-4, 67-76.

INTERNET ADVERTISING AND ITS REFLECTION OF
AMERICAN CULTURAL VALUES

Lin Zhuang, Louisiana State University

Xigen Li, Southern Illinois University Carbondale
lixigen@siu.edu

ABSTRACT

This study explores dominant cultural values in Internet advertising of the top 100
U.S. Web sites. The findings reveal that Internet advertising reflects more utilitarian values
than symbolic values. The study also found that the type of advertising appeal is associated
with product categories. The results indicate that Internet advertising reflects a convergence of
the typical cultural norms of the American society and the particular features of Internet
advertising medium. The dominance of utilitarian values in the U.S. banner advertisements
fits the American low-context culture, which prefers logical and factual ways of communicating thoughts and actions.

I. INTRODUCTION

Scholarly research indicates a subtle connection between advertising and culture. As a reflection of social fabric, advertising conveys, directly or indirectly, evaluations, norms and
concepts that comprise an ideology of the society (Andren et al., 1978). Cultural attributes are
transmitted and assigned meanings through advertising (McCracken, 1986). Advertising as “a
vibrant and provocative social discourse” played a determining role in creating the
postmodern culture (Gross et al., 1996).

Cultural value is a complex and multifaceted construct. The term “value” has been
defined as “an enduring belief that a specific mode of conduct or end-state of existence is
personally or socially preferable to alternative modes of conduct or end-state of existence” (Rokeach, 1968, p. 160). Pollay (1983) developed a measurement scheme to describe the
cultural character of advertising. He recognized that cultural values, norms, and characteristics are integrated into advertising appeals, which are specific approaches advertisers use to communicate how their products will satisfy customer needs. His list of 42 ad appeals covered almost all common cultural values in advertising and was widely applied in later studies of cultural values in advertising. For example, Cheng &
Schweitzer (1996) conducted a comparative study of the cultural values reflected in Chinese
and U.S. television commercials. They constructed a list of 30 cultural values based on
Pollay’s measurement of the advertising appeals. Their study found three dominant appeals in
U.S. TV commercials: “enjoyment,” “individualism,” and “economy.” Described as the two
most common approaches used in advertising (Snyder & DeBono, 1985), utilitarian and
symbolic ad appeals have been discussed extensively in the advertising literature. Symbolic
appeal holds a creative objective to produce an image of the generalized user of the advertised
product and evoke emotions and thinking. Utilitarian appeal highlights the functional features
of the product, such as the performance, quality and price (Johar & Sirgy, 1991).

Researchers related the adoption of utilitarian or symbolic values to advertising evolution (Leiss, Kline, & Jhally, 1990; Pollay & Gallagher, 1990; Cheng & Schweitzer, 1996), which refers to the progression of a certain advertising industry. Leiss et al. (1990)
identified a historical pattern of advertisement growth in a historical analysis of U.S.
advertisements. The pattern indicates a shift from “informational to symbolic presentation” of
human values in advertising, as the advertising industry grows “mature.” However, the
advertising evolution discussed relates only to pre-Internet industry.

The traditional view of advertising effectiveness suggests that a particular message appeal is contingent on the type of product being advertised. Holbrook and O'Shaughnessy
(1984) argued that the type of appeal should "match" the type of product. Johar and Sirgy
(1991) also summarized the arguments in favor of such a relationship in positing that "value-
expressive" (similar to emotional) appeals work best for value-expressive products, whereas
"utilitarian" (similar to rational) appeals work best for utilitarian products. The literature on
cultural reflections in advertising mainly addressed the issues in traditional media. Studies
looked at the market mechanism of Internet advertising, but few paid much attention to the
cultural themes of Internet advertising. As a technically advanced medium, does the Internet
bring new notions to the existing cultural themes? This paper intends to bridge the gap
through a content analysis of Internet advertising. The study applied Pollay’s (1983)
measuring instrument to identify the cultural values embedded in the Internet advertising. It
also looked at how product categories are associated with the cultural values of Internet advertising.
The study will answer one research question and test two hypotheses.

RQ1: What are the dominant cultural values presented in U.S. online banner advertising?
According to the literature, U.S. Internet advertising, like advertising in other media,
will reflect cultural values of the American society that creates it. At the same time, Internet
advertising is expected to convey some special themes such as “technology” and
“information” with respect to its unique communication technology characteristics.
H1: Internet advertising reflects more utilitarian values than symbolic values.
According to Leiss et al.’s (1990) historical pattern of advertising evolution, the preference for
using utilitarian or symbolic values reflects the maturity level of an advertising industry.
Different from other media users, Internet users actively look for information (Sterne,
1997), which may lead Internet advertising to carry more utilitarian values that
consumers can perceive immediately.
H2: The type of advertising appeal is associated with the type of products (product
categories).
As the literature indicates, product-related characteristics have much to do with advertising
appeal. This hypothesis would test whether the type of advertising appeal in the Internet
advertising is associated with product-related characteristics, in this case, product categories.

II. METHOD

This study employed a content analysis by selecting the banner ads from the top 100
U.S. Web sites. The ranking of the top 100 Web sites was provided by the monthly Internet
Ratings report from PC DataOnline, one of the three leading companies that specialize in
ranking Web site popularity. Web site popularity was ranked by unduplicated audience reach
and number of unique visitors. The sampling process lasted 10 days, from May 19, 2003 to
May 28, 2003. All banner ads on the home page or the front page were collected. A total of
268 banner ads from the top 100 most popular U.S. Web sites were downloaded for the study.
Banner Ad refers to a typically rectangular graphic element that acts as an advertisement and
entices the viewer to click it for further information. They often use GIF, Java, or Shockwave
animations and include some text, such as a phrase or a slogan and the advertiser’s name or a
Web address. If the user finds the ad intriguing enough, he or she may click on the ad, which
activates an embedded link, to visit the advertiser’s Web site (Sterne, 1997).
Cultural value refers to intangible beliefs, norms and characteristics that are embedded in
advertising appeals (Zhang and Gelb, 1996). The measurement of cultural value in this study
is largely based on Cheng and Schweitzer’s (1996) measurement instrument. It contains 32
items, of which 29 were borrowed or modified from Cheng and Schweitzer’s study.
Advertising appeals are the specific approaches advertisers use to communicate how their
products will satisfy customer needs. Advertising appeals are typically carried in the
illustration and headlines of the ad (Arens & Bovee, 1994). They are divided into two groups:
utilitarian value and symbolic value. Utilitarian value involves informing consumers of one or
more key benefits that are perceived to be highly functional or important to the target
consumers. Utilitarian value emphasizes product features or qualities, such as “convenience,”
“economy,” and “effectiveness.” Symbolic value evokes a wide range of emotional responses
from the ad’s audience. Symbolic values are those suggesting human emotions such as
“enjoyment,” “individualism,” and “social status.”
Product Categories. The products or services advertised in Internet banner ads are divided
into 16 categories. The product categories of Cheng and Schweitzer’s (1996) study were
modified to create a list of products for this study by adding Internet-related categories such as
online shopping and online community/service.
The unit of analysis is one banner ad on the homepage or front-page of each Web site. If a
Web site did not post ads on their homepage, the front page was used to collect the banner
ads. Two coders participated in the coding. A subsample of 40 banner ads was used to test
intercoder reliability. Scott’s (1955) pi formula was used to calculate intercoder reliability.
Intercoder agreement averaged 87.7%, which is higher than the 85% standard for content
analysis (Kassarjian, 1977).
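Scott's pi corrects observed agreement for the agreement expected by chance, using the pooled distribution of codes across both coders. A minimal sketch of the calculation (the coded values below are illustrative, not the study's data):

```python
from collections import Counter

def scotts_pi(coder_a, coder_b):
    """Scott's (1955) pi: chance-corrected agreement between two coders
    assigning nominal codes to the same units."""
    n = len(coder_a)
    # Observed agreement: proportion of units both coders coded identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement: sum of squared proportions of the pooled code distribution.
    pooled = Counter(coder_a) + Counter(coder_b)
    p_e = sum((count / (2 * n)) ** 2 for count in pooled.values())
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes ("u" = utilitarian, "s" = symbolic) for four banner ads.
pi = scotts_pi(["u", "u", "s", "u"], ["u", "u", "s", "s"])
```

Perfect agreement yields pi = 1.0; agreement no better than chance yields pi near 0.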

III. FINDINGS

The analysis of the data revealed six most dominant values. They are “economy”
(25.7%), “effectiveness” (12.3%), “incentive” (11.2%), “enjoyment” (10.1%), “informative”
(6.7%) and “convenience” (6.7%).
The four most dominant product categories were “business and finance” (17.6%), “computer
and Internet product” (16.0%), “online community/service” (14.6%), and “entertainment”
(9.4%).
Among the 16 product categories examined, 13 were more utilitarian-centered and
three were more symbolic-centered by percentage. The hypothesis that online advertising
reflects more utilitarian values than symbolic values was supported: about three-fourths
(75%) of banner ads reflected utilitarian values, whereas 25% displayed symbolic values.
The findings also supported the second hypothesis that the type of advertising appeal is
related to the type of products (product categories) (X² = 27.74, p < .05). The chi-square test
shows a statistically significant relationship between the type of advertising appeal
(utilitarian/symbolic values) and the product category (the type of product/service).

Table 1. Cultural Values In Internet Advertising of the Top 100 U.S. Web Sites

Variable Frequencies Percentage


By Cultural Value
Adventure 3 1.1
Beauty 1 0.4
Collectivism 3 1.1
Competition 6 2.2
Convenience 18 6.7
Economy 69 25.7
Effectiveness 33 12.3
Enjoyment 27 10.1
Family 3 1.1
Health 5 1.9
Incentive 30 11.2
Informative 18 6.7
Modernity 5 1.9
Patriotism 2 0.7
Popularity 5 1.9
Quality 10 3.7
Safety 2 0.7
Sex 1 0.4
Social Status 1 0.4
Technology 9 3.4
Tradition 5 1.9
Uniqueness 1 0.4
Wisdom 5 1.9
Others 6 2.2
Total 268 100.0
By Advertising appeal
Symbolic Appeal 68 25.4
Utilitarian Appeal 200 74.6

The results showed that the type of advertising appeal was more likely to be associated
with the product categories that possess the characteristics of the type of advertising appeal.
For example, “online service/ community,” “computer and Internet product,” and “business
and finance” are utilitarian centered, whereas “entertainment” is more symbolic-oriented.
“Online service/community” ads reflected utilitarian values such as “informative” and
“economy,” “computer and Internet products” ads demonstrated utilitarian values such as
“effectiveness” and “economy,” and “business and finance” ads featured utilitarian values
such as “economy.” Meanwhile, “entertainment” ads reflected more symbolic values such as
“enjoyment.”

IV. CONCLUSION

This study found six dominant cultural values reflected in U.S. online advertising. This
value profile showed a convergence of the typical culture of the American society and the
particular features of Internet advertising environment. The findings are consistent with
Cheng and Schweitzer’s (1996) television advertising study on dominant cultural values in
advertising. To survive in the highly competitive U.S. market, advertisers have to provide
solid information in advertising to attract their target consumers. Symbolic values evoke
human emotions, which are often hard to grasp in a quick glimpse. It is not surprising that
utilitarian values, which convey more tangible and straightforward information, would be
favored in online ads strategies.

The United States is regarded as a “low-context culture,” which relies heavily on its
Western rhetoric and logic tradition to relate thoughts and actions to people and their
environment (Hall and Hall, 1987). Therefore, advertisements in the United States are
presented in a more factual and logical manner. The low-context culture of the United States
orients its Internet banner ads to show more utilitarian values that emphasize product
attributes and quality. The dominance of the utilitarian values also fits the unique
characteristics of the online advertising environment. The small amount of space for banner
ads on a Web page requires conciseness and brevity. Also, different from other media users,
Internet users are actively looking for information (Sterne, 1997). Online users browse Web
pages and links so quickly that most of them seldom give a second glance to ads. Therefore,
messages in banner ads have to be direct and enticing instead of being symbolic and subtle.
Banner ads containing “economy,” “convenience” or “effectiveness” are usually clear,
straightforward, and compelling.

The United States is regarded as typical of the Western culture (Belk et al., 1985). The
value profile of the six dominant values in this study indicates that online advertising in many
ways conveys cultural norms of the Western societies. Like other traditional media, U.S.
online advertisers have tailored ads to reflect the prevalent cultural perceptions of the Western
societies, such as “enjoyment” and “incentive.” Meanwhile, the particular characteristics of
the Internet medium in some way have influenced Internet advertisers’ preference of cultural
values such as “informative.”

The result of hypothesis 2 yielded evidence consistent with previous studies of the
relationship between product categories and cultural values. Cultural values vary greatly from
one product category to another. By looking into the cultural value difference across categories, this
study found that the type of advertising appeal was associated with the product categories.
This observed tendency seems to be consistent with the traditional view of advertising
effectiveness that a particular ad appeal is contingent on the type of product being advertised.

Johar and Sirgy (1991) found that value-expressive (symbolic) advertising appeals are
effective when the product is value-expressive (symbolic), whereas utilitarian appeals are
effective when the product is utilitarian. Although the percentage of the overall value of these
categories exhibits a consistent pattern as Johar and Sirgy found in their study, part of the
findings of this study suggested something different. For example, the value with the highest
percentage in the “business and finance” category is “competition,” which is not a utilitarian value but
a symbolic value. Also, the most frequently found value in “entertainment” is “incentive,”
which is not a symbolic value but a utilitarian value. As Johar and Sirgy (1991) pointed out,
the pattern is moderated by a variety of factors including product-related characteristics such
as product life cycle, scarcity, and differentiation or consumer-related factors such as
consumer involvement, prior knowledge and self-monitoring. In other words, under some
situations, advertising utilitarian/symbolic appeals may not match the product’s utilitarian or
symbolic characteristics. Johar and Sirgy’s observation on pattern moderation offers some
explanations to the unexpected high percentages of unmatched values in some product
categories in this study.

REFERENCES

Andrén, G., Ericsson, L., Ohlsson, R. & Tännsjő, T. (1978). Rhetoric and ideology in
advertising: A content analytical study of American advertising. Sweden: LiberFőrlag.
Arens, W. F. & Bovee, C. (1994). Contemporary Advertising (5th Ed.). Burr Wood, Illinois:
Irwin.
Cheng, H. & Schweitzer, J. (1996). Cultural values reflected in Chinese and U.S. television
commercials. Journal of Advertising Research, 36(3), 27-36.
Gross, M. (Ed.). (1996). Advertising and culture: Theoretical perspectives. Connecticut:
Praeger.
Hall, E. & Hall, R. (1987). Hidden difference: Doing business with the Japanese. New York:
Anchor Press.
Holbrook, M. & O’Shaughnessy, J. (1984). The role of emotion of advertising. Psychology
and Marketing, 1(Summer), 45-64.
Johar, J. S. & Sirgy, M. J. (1991). Value-expressive versus utilitarian advertising appeals:
When and why to use which appeal. Journal of Advertising, (September), 23-34.
McCracken, G. (1986). Culture and consumption: A theoretical account of the structure and
movement of the cultural meaning of consumer goods. Journal of Consumer Research,
13(1), 71-84.
Pollay, R.W. (1983). Measuring the cultural values manifest in advertising. Current Issues and
Research in Advertising. James H. L. and Claude R. M., eds, Ann Arbor, MI:
University of Michigan Press. 71-92.
Pollay, R. & Gallagher, K. (1990). Advertising and cultural values: Reflections in the
distorted mirror. International Journal of Advertising, 9, 359-372.
Rokeach, M. (1968). Beliefs, Attitudes and Values. San Francisco: Jossey-Bass.
Scott, W. (1955). Reliability of content analysis: the case of nominal scale coding. Public
Opinion Quarterly, 17, 321-325.
Sterne, J. (1997). What makes people click: advertising on the web. Indianapolis: Que
Corporation.
Zhang, Y. B. & Harwood, J. (2004). Modernization and tradition in an age of globalization:
cultural values in Chinese television commercials, Journal of Communication, 54(1),
156-172.

CHAPTER 4

APPLIED MANAGEMENT SCIENCE AND DECISION SUPPORT SYSTEMS
STUDENT ONLINE PURCHASE DECISION MAKING:
AN ANALYSIS BY PRODUCT CATEGORY

Carl J. Case, St. Bonaventure University
Darwin L. King, St. Bonaventure University

ABSTRACT

This study empirically examines undergraduate online purchasing behavior and
decision making. Monthly student logs are employed to assist in measuring e-behavior
activity and inactivity. Findings reveal that with regard to online purchasing, decision
making, quantity, transactions, and price vary by product category. Implications for decision
support system developers and marketers are discussed.

I. INTRODUCTION

“To shop online or not to shop online.” That question could be viewed as one of the
new Shakespearian dilemmas for today’s consumer. Previous research has suggested that
only 40% of undergraduates, for example, purchase products online (King and Case, 2005).
Of those students who performed Internet shopping, 43% merely purchased one item. These
findings are in contrast to a study by Feedback Research, a market research firm, which found
that a majority of students (62%) bought or planned to buy books or textbooks online
(Syllabus.com, 2004). Moreover, 73% of students who bought or planned to buy
books/textbooks conducted online research before purchasing.

Privacy and security are factors that affect the decision to purchase online. In a June
2005 survey of 2,322 U.S. adults, 67% decided not to register at a Web site or shop because
the privacy policy was unclear (PC Magazine, 2005). In addition, 64% chose not to buy
online at least once because of concerns over personal information. Furthermore, fear of
identity theft may also cause individuals to avoid shopping online. In the prior survey, 20%
of respondents stated that they had been victims of identity theft. A study of undergraduates
found, however, that only 3% of respondents indicated being a victim of Internet fraud (Case
and King, 2004). Forty-four percent of students perceived Internet purchasing as highly
secure.

In an effort to increase online sales, companies have begun to provide "live help"
functions, through instant messaging or text chatting, on their Web sites to facilitate
interactions between online consumers and customer service representatives (CSRs). Because
text-based communication limits nonverbal communication with consumers and the social
contexts for the information conveyed, emerging multimedia technologies (such as computer-
generated voice and humanoid avatars) are being used to enrich the interactive experiences of
customers. One study demonstrated that the presence of text-to-speech voice with a 3-
dimensional avatar (humanoid representation of a CSR) significantly increases consumers'
cognitive and emotional trust toward the CSR (Lingyun and Benbasat, 2005).

Of particular concern to developers of marketing decision support systems (DSSs) is
the understanding of consumer behavior and decision making with regard to online
purchasing. Because undergraduates are an important subset (with current and potentially
substantial future purchasing power) of the online population, this study was conducted to
empirically examine why students do and do not purchase online. Rather than using recall to
respond to a survey, respondents utilized a monthly log instrument to record activity and
indicate for each purchase why the purchase was made online. Students who did not purchase
via the Internet were asked to specify the primary reason for not purchasing online. Both
results have implications for DSS development.

II. PREVIOUS RESEARCH

Prior research has examined facets such as customer service and willingness to buy. A
March 2005 market survey conducted in the U.S. by the National Retail Federation's NRF
Foundation and American Express Company found that 99% of the shoppers believe customer
service is at least somewhat important when deciding to make a purchase (Zid, 2005).
Results indicate that shoppers are more satisfied with the customer service
they receive online than they are with service at traditional retail stores. Only 16% of
traditional, retail shoppers surveyed were extremely satisfied with their most recent customer
service experience, while an additional 51% were very satisfied. However, online shoppers
were nearly three times (44%) as likely to be extremely satisfied, while 45% were very
satisfied. The most important component of good customer service for 88% of online
shoppers was that the web site was safe and secure.

A second study, an empirical experiment conducted in Singapore, examined online
buying behavior from a transaction cost economics perspective (Teo and Yu, 2005). Results
indicate that consumers' willingness to buy online is negatively associated with their
perceived transaction cost, and perceived transaction cost is associated with uncertainty,
dependability of online stores, and buying frequency. When consumers perceive more
dependability of online stores and less uncertainty in online shopping and have more online
experiences, they are more likely to buy online.

III. RESEARCH DESIGN

This study employs a survey research design. The research was conducted at a
private, northeastern U.S. University. A Student Internet Purchasing Survey instrument was
developed and administered in March 2004 and April 2005 to students enrolled in a School of
Business course. A convenience sample of class sections was selected. The courses
included Business Information Systems, Business Telecommunications, Introduction to
Managerial Accounting, Statistics II, Business Policy, and Entrepreneurship.

The survey instrument was utilized to collect student demographic data and examine
student Internet purchasing behavior. The survey requested that each student list the details of
each product purchased during the study month. The survey was distributed at the end of the
prior month and collected at the beginning of month following the study month. In the log,
purchase detail such as purchase date, item description, quantity, total price, method of
payment, and reason for using the Internet was collected. Survey data was subsequently
entered into a microcomputer-based database management system to aid in data analysis. All
surveys were anonymous and students were informed that results would have no effect on
their semester grade.

IV. RESULTS

A sample of 220 usable surveys was obtained. Of the respondents, 123 (56%) were male and
97 (44%) were female. The response rate indicates that respondents are relatively equally
distributed by class (Table I). Twenty-seven percent of students were Freshmen, 25% were
Sophomores, 29% were Juniors, and 19% were Seniors.

Table I. Response By Class


Class Count Percentage

Freshmen 60 27%
Sophomore 55 25%
Junior 64 29%
Senior 41 19%
Total 220 100%

To quantify behavior, students were asked to itemize each day the quantity, price, and
reason for each online purchase. Students were given 12 fixed purchase categories and four
fixed choices for why the purchase was made via the Internet. The reason options included:
convenience, price, variety of choice, and cannot find the item in store. Table II details the
quantity, number of transactions, average quantity per transaction, average price per item, and
purchase reason (summarized by percentage of incidence) for each category of purchase.
Results show that clothes are the most common transaction (36 of 146 transactions) and the
highest volume (85 of 300 total quantity) item. Concert/event tickets were second in
transactions (14) and quantity (51). All other categories were much less common except for
the “other” category (44 transactions). In terms of average quantity per transaction, antiques
were the highest (4.0), followed by concert/event tickets (3.6). Antiques, however, only
accounted for two of the transactions. The highest average price per item categories included
air/rail/hotel ($437.94), other ($108.29), computer equipment ($71.67), and concert/event
tickets ($63.10). The most common decision-making reason varied by category: “price” led in
four categories, “no response” in four, “convenience” in one, “cannot find in store” in one,
and “cannot find in store” and “variety” tied in one (antiques).

Table II. Purchase Detail

Category               Quantity  # of Trans.  Avg. Qty/Trans.  Avg. Price/Item  Convenience  Price  Variety of Choice  Cannot Find in Store  No Response
Clothes                   85        36            2.4             $31.80            6%        11%        11%               14%                 58%
Concert/event tickets     51        14            3.6              63.10           57%         0%         7%               21%                 14%
CDs/DVDs                  22        12            1.8              11.32            8%        58%         8%                8%                 17%
Textbooks/books           20        10            2.0              26.00           10%        40%        10%               10%                 30%
Air/rail/hotel            17        11            1.5             437.94           18%        18%         0%                9%                 55%
Antiques                   8         2            4.0              20.88            0%         0%        50%               50%                  0%
Computer equipment         6         8             .8              71.67            0%        63%         0%               25%                 13%
Vitamins/crèmes            5         3            1.7              26.80            0%        33%         0%                0%                 67%
Games                      5         4            1.3              20.00            0%        50%         0%               25%                 25%
Calling cards              1         1            1.0              20.00            0%         0%         0%              100%                  0%
Jewelry                    0         0            0                 0.00            0%         0%         0%                0%                  0%
Other                     79        44            1.8             108.29           18%        27%         2%               16%                 36%
Overall                  300       146            2.1             $78.54           15%        26%         6%               16%                 37%
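The per-category measures reported in Table II (quantity, transactions, and the two averages) can be derived from the raw log rows with a simple aggregation. A sketch using hypothetical log entries, not the study's data:

```python
from collections import defaultdict

def summarize(log):
    """Aggregate purchase-log rows (category, quantity, total price paid)
    into per-category quantity, transaction count, and averages."""
    totals = defaultdict(lambda: {"qty": 0, "txns": 0, "spend": 0.0})
    for category, qty, total_price in log:
        t = totals[category]
        t["qty"] += qty        # total items bought in this category
        t["txns"] += 1         # one log row = one transaction
        t["spend"] += total_price
    return {
        c: {
            "quantity": t["qty"],
            "transactions": t["txns"],
            "avg_qty_per_txn": round(t["qty"] / t["txns"], 1),
            "avg_price_per_item": round(t["spend"] / t["qty"], 2),
        }
        for c, t in totals.items()
    }

# Hypothetical log rows: (category, quantity, total price).
log = [("Clothes", 3, 90.00), ("Clothes", 2, 70.00), ("CDs/DVDs", 2, 22.64)]
summary = summarize(log)
```

Each survey month's logs can be run through the same aggregation, making month-to-month comparisons straightforward.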

The study also found that 69%, or the majority of students, do not purchase online. The
primary reason, as indicated by 41% of respondents, is the lack of money. Table III illustrates
that 26% of students do not have a specific reason, 11% do not have a credit card, 9%
perceive a lack of security, 8% want to see a product in the store prior to purchasing, 3%
cannot find the product they desire, and 1% indicated a reason not previously listed in the
survey.

Table III. Reasons For Not Purchasing Online


Reason % of
Students
Lack of money 41%
No specific reason 26%
Do not have a credit card 11%
Lack of security 9%
Want to see product in store prior to purchasing 8%
Cannot find product that I want 3%
Other 1%

V. CONCLUSION

Overall, this study is useful in providing a better understanding of student online
purchasing activities. Results suggest that quantity, transactions, price, and decision making
vary by product category. Clothes ranked as the highest category with regard to quantity (85
items) and number of transactions (36). “Other” ranked second in quantity (79 items) and
number of transactions (44). Concert tickets ranked third in quantity (51 items) and number
of transactions (14). These three categories accounted for 72% of the quantity and 64% of the
transactions. In terms of items per transaction, antiques (4.0) and concert/event tickets (3.6)
were the highest, although there were only two transactions involving antiques. Price also
varied with the highest price per item categories including air/rail/hotel ($437.94), other
($108.29), computer equipment ($71.67), and concert/event tickets ($63.10). Furthermore,
within each item category, there was a predominant reason why the Internet was utilized for
the purchase. “Price” was specified in four categories as the most common decision-making
reason. “No response” was the most common in four categories, “cannot find in store” was
the most common in one category, “convenience” was most common in one category, and
“variety” and “cannot find in store” were equally chosen in one category.

Consistent with previous study findings, this study also found that 69%, or the
majority of students, do not purchase online. Forty-one percent indicated that the primary
reason was the lack of money. Only 9% perceived a lack of security.

There are two important implications as a result of these findings. One implication is
that since student online decision-making behavior varies by purchase category, marketing
DSS developers and users may need to refine their information systems to better measure and
predict purchase activity. Type of product may be a key factor in the purchase decision-
making process. Future research is needed to explore if categories should be further
subdivided for each market segment and if purchase reason can be manipulated to maximize
profit for the organization. Moreover, research needs to be conducted to determine how to
model the online decision-making process within the DSS.
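As an illustration of the simplest category-level rule such a DSS might encode, the dominant purchase reason per category from Table II can be stored as a lookup. This is a hypothetical sketch, not a proposed implementation:

```python
# Dominant online-purchase reason per category, taken from Table II; "no response"
# rows are omitted since they offer no actionable appeal to emphasize.
DOMINANT_REASON = {
    "Concert/event tickets": "convenience",
    "CDs/DVDs": "price",
    "Textbooks/books": "price",
    "Computer equipment": "price",
    "Games": "price",
    "Calling cards": "cannot find in store",
}

def suggest_appeal(category):
    """Return the decision-making reason a marketer might emphasize
    for a product category, or 'unknown' if no dominant reason was observed."""
    return DOMINANT_REASON.get(category, "unknown")
```

A production DSS would replace the static table with estimates refreshed from ongoing purchase logs, segmented by market.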

A second implication is that the student online market is largely untapped and a
marketing opportunity. Most students indicated that they did not purchase online during the
survey month. Further research is necessary to determine if the survey months are anomalies
or representative of general purchase behavior.

The limitations of this study are primarily a function of sample size and type of
research. Even though responses were relatively equally distributed among academic class
and gender, a larger sample size and use of additional universities would increase the
robustness of results. Another limitation relates to the self-reported nature of the survey. This
limitation is minimized through respondent anonymity and the use of a log to collect data as it
occurs. The final weakness relates to the chosen months. Purchase activity may not be
representative of other months, thus, replication with other months would improve study
conclusions. Future research is also needed to explore what items are included in the “other”
category and why students did not have a reason for certain purchases.

REFERENCES

Case, Carl J. and Darwin L. King. “Student Online Auction Fraud: Perception
and Reality.” Business Research Yearbook, Global Business Perspectives Volume
XI, 2004, 163-167.
King, Darwin L. and Carl J. Case. “A Review of Student Internet Purchase
Behavior.” Proceedings of the American Society of Business and Behavioral
Sciences, 12, (1), Las Vegas, NV, February 24-27, 2005, 972-978.
Lingyun, Qiu and Izak Benbasat. “Online Consumer Trust and Live Help Interfaces:
The Effects of Text-to-Speech and Three-Dimensional Avatars.” International
Journal of Human-Computer Interaction, 19, (1), September, 2005, 75-94.
Teo, Thompson S.H. and Yuanyou Yu. “Online Buying Behavior: A Transaction Cost
Economics Perspective.” Omega, 33, (5), October 2005, 451-465.
Zid, Linda Abu-Shalback. “Another Satisfied Customer.” Marketing Management, 14, (2),
March/April, 2005, 5.
“The Perils of Online Shopping.” PC Magazine, 24, (14), August 23, 2005, 23.
“Two-Thirds Turn to the Internet for Back-to-School Textbooks.” Syllabus.com, September
14, 2004, SyllabusNewsUpdate@newsletters.101com.com.

ANALYZING ROLE OF OPERATIONS RESEARCH MODELS
IN BANKING

Dharam S. Rana, Jackson State University
dsrana@jsums.edu
SherRhonda R. Gibbs, Jackson State University
sherrhonda.r.gibbs@jsums.edu

ABSTRACT

Due to the deregulation of deposit institutions, the financial industry, especially banks,
has encountered enormous competitive pressure in the marketplace. Banking administrators have
to make optimal decisions to increase their performance and productivity under complex and
fast-changing conditions in their operating environment. As a result, quantitative approaches
to decision making have become very popular with business managers seeking to make informed
decisions. This paper develops a comprehensive framework on the application of Operations
Research/ Management Science (MS) techniques in the banking industry. This study covers
the time period from 1986 to 2004, and analyzes a large number of articles according to the
techniques and their application area. The study also identifies trends regarding applications
of OR/MS techniques in different areas.

I. INTRODUCTION

Nearly twenty years have passed since Zanakis, Mavrides, and Roussakis's (1986) research
on Management Science (MS) in banking was published. At that time, bankers were
experiencing significant competitive pressures due to the deregulation of deposit
institutions (Zanakis et al., 1986). The net result was the adoption of Management Science
techniques by banking managers seeking sustainable competitive advantage. Management
Science allows decision makers to model real-life situations, evaluate different alternatives,
and determine the best course of action under the model’s assumptions, helping bankers with
decision making (Zanakis et al., 1986). The objective of this study is to determine how MS
models are currently being utilized in the banking industry to optimize performance and
productivity while achieving sustained competitive advantages. Zanakis et al. (1986) created
a two-way classification scheme that elicited MS research into banking areas not fully
developed. Consequently, decision-makers in the banking industry may seek to revisit MS
strategies of the past to facilitate substantive performance gains now and in the future.

II. LITERATURE REVIEW

The American economy has faced and flirted with recession over the past four years.
This has caused instability in the global economy. The road to recovery has been rocky and
another crisis like that of the Savings & Loans in the 1980s could throw the U.S. and world
economy into a downward spiral. The connection between banks and the economy was made by
Cohen et al. (1981) who conjectured that banks play a crucial role in the nation’s financial
system. Beck and Levine (2004) recently performed a longitudinal investigation of banks and
economic growth from 1976 to 1998 that supported Cohen et al.'s (1981) argument.
They found that stock markets and banks positively influence economic growth. A direct
lesson on this causal relationship can be learned from the Japanese banking crisis, which sent
its economy into a tailspin. In fact, Kawamoto (2004) discussed the need for structural reform
of Japan’s financial system to ensure long-term economic growth. In summary, it behooves
international and domestic bankers to fully employ MS techniques in underutilized banking
application areas to ensure performance and productivity levels are met and maintained. This
research attempts to provide banking managers a comprehensive view of the MS techniques
and tools available for application in different areas of banking. The current research provides
valuable information to banking decision makers in the following areas:
1) Identify trends regarding applications of MS models in banking,
2) Identify likely skills gaps or areas that need improvement, where MS
techniques have not been applied.

The purpose of this study is to develop a comprehensive framework of the applications
of management science models in banking. Historical trends in the banking industry are
examined by identifying and categorizing published literature that employed MS techniques in
banking over the last 19 years. The banking literature is categorized using a two-way
classification scheme. The data for the research were collected from academic journal articles on the banking industry published between 1986 and 2004 and from internet literature searches. The
primary collection sources were academic journals and electronic databases. Books and
conference proceedings were not used for the study. The intent of the research is to create a comprehensive, though not exhaustive, reference list. Articles were categorized into cells to ascertain salient areas in banking and the MS techniques used most frequently. The MS techniques not frequently used, and the banking application areas with limited applications of MS tools, were identified. Once these gaps are identified, researchers can pursue further MS research while bank managers formulate strategies for developing the underdeveloped areas.
Finally, results of this study are compared and contrasted with the findings by Zanakis et al.
(1986).

III. RESULTS AND DISCUSSION

This research provided valuable insight into current trends in the banking industry.
The results of the data analysis indicate that banking priorities have shifted from corporate planning for financial/liquidity management to bank operations (e.g., check
processing, profitability, productivity). In contrast, the study by Zanakis et al. (1986) found
that MS/OR was used most frequently in the banking application areas of financial
management, portfolio management, customer credit scoring and check operations. The most
frequently used techniques in their study were statistical analysis, linear programming,
forecasting and simulation. At that time, more research was needed in
productivity/profitability operations and international activities (arbitrage and currency
swaps). In our study, Forecasting and Simulation are not among the most frequently used MS techniques. Our research findings show that technological advances, as predicted by Zanakis et al. (1986), brought forth sophisticated information systems to aid bankers in daily operations
and decision-making (DeFarrari and Palmer, 2001). However, few empirical studies have
been performed to assess the quantitative benefits of such systems. Perhaps this identifies an area suitable for further study by MS researchers. Between 1986 and 2004, numerous researchers employed MS techniques to investigate areas for improvement in productivity and profitability. Our findings reveal that MS tools were extensively used by banks to achieve greater productivity and performance.

The results of our research do not fully support the conjecture of Zanakis et al. (1986)
that there will be a greater usage of MIS/DSS in banking in the future. In fact, this study found that the techniques preferred 20 years ago (Statistical Analysis and Linear Programming) are still the top techniques in use today. Linear Programming has increased in popularity due to technological advances such as data envelopment analysis. Data envelopment analysis (DEA) is a linear programming technique for measuring the performance of organizational units where the presence of multiple inputs and outputs makes comparisons difficult (Ali, 1990). Kantor and Maital (1999) contended that DEA is used extensively to provide quantitative measures of each branch's efficiency relative to other similar branches. The most frequently used MS/OR techniques (as shown in Table I) are Linear Programming (23.99%), Statistical Analysis (18.07%), MIS/EDP (10.59%), and Simulation (7.48%). User familiarity with certain techniques may also have played a role in these results.
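Since DEA reduces efficiency measurement to a linear program, the idea can be shown directly. The sketch below implements the input-oriented CCR multiplier model with SciPy's `linprog`; the function name, branch data, and library choice are illustrative assumptions, not drawn from the studies cited here.

```python
# Hypothetical sketch of the CCR (multiplier-form) DEA model as a linear
# program. The four "bank branches" and their inputs/outputs are invented.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(inputs, outputs, unit):
    """Input-oriented CCR efficiency of one unit (rows = units)."""
    X, Y = np.asarray(inputs, float), np.asarray(outputs, float)
    n, m = X.shape            # number of units, number of input measures
    s = Y.shape[1]            # number of output measures
    # Decision variables: output weights u (length s), input weights v (length m).
    c = np.concatenate([-Y[unit], np.zeros(m)])        # maximise u . y_o
    A_eq = [np.concatenate([np.zeros(s), X[unit]])]    # normalise v . x_o = 1
    b_eq = [1.0]
    A_ub = np.hstack([Y, -X])                          # u . y_j - v . x_j <= 0, all j
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun                                    # efficiency in (0, 1]

# Inputs = (staff, operating cost); output = (loans issued).
X = [[8, 120], [6, 90], [10, 150], [5, 100]]
Y = [[160], [130], [170], [80]]
scores = [round(dea_efficiency(X, Y, j), 3) for j in range(4)]
```

In this invented data set the second branch has the best output-to-input ratios, so it scores 1.0 and the remaining branches score below it, which is the kind of relative branch-efficiency measure Kantor and Maital (1999) describe.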

Table I
MS/OR TECHNIQUES FREQUENCY UTILIZATION¹
1986 – 2004

Technique                    References   Percent
Linear Programming               77        23.99%
Goal Programming                  4         1.25%
Integer Programming               1         0.31%
Dynamic Programming               8         2.49%
Stochastic Programming           17         5.30%
Forecasting                       8         2.49%
Simulation                       24         7.48%
Queuing                           4         1.25%
Heuristics                        4         1.25%
Statistical Analysis             58        18.07%
MIS/EDP                          34        10.59%
Other Techniques²                82        25.55%
Total (with duplication)        321       100.00%

¹Note: Some studies include multiple techniques.
²Other Techniques: Economic/Econometric Models, Empirical Analysis, Game Theory, Impact Analysis, Correlation Analysis, Cournot Model, Mathematical Models, TQM/SQC, Qualitative Analysis
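As a quick check on the table's arithmetic, the percentage column follows directly from the raw reference counts (duplicated across techniques, since some studies apply several); a minimal sketch:

```python
# Recomputing Table I's percentage column from the reference counts.
# Counts include duplication because some studies use multiple techniques.
counts = {
    "Linear Programming": 77, "Goal Programming": 4, "Integer Programming": 1,
    "Dynamic Programming": 8, "Stochastic Programming": 17, "Forecasting": 8,
    "Simulation": 24, "Queuing": 4, "Heuristics": 4,
    "Statistical Analysis": 58, "MIS/EDP": 34, "Other Techniques": 82,
}
total = sum(counts.values())                               # 321 references
shares = {k: round(100 * v / total, 2) for k, v in counts.items()}
```

Running this reproduces the published figures, e.g. 77/321 = 23.99% for Linear Programming and 58/321 = 18.07% for Statistical Analysis.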

The Zanakis et al. (1986) study spurred substantial research in the banking operations area. Table II illustrates the banking areas where MS/OR techniques were most frequently applied, including operations, loan management, and financial planning. This research found that over the last twenty years, 38% of published articles were in the banking operations area; 15% covered loan management, while 12% investigated financial planning. By far, most MS/OR studies during the past twenty years focused on bank
profitability, performance, and efficiency. DeFarrari and Palmer (2001) contended that
globalization, regulatory restrictions and increased competition in financial markets triggered
consolidation which brought forth larger banks. The trend toward larger, universal banks
(Rime and Stiroh, 2003) sparked an intense preoccupation with performance, efficiency, and productivity gains (hereafter referred to as the big three).

Table II
FREQUENCY OF MS USE IN BANKING APPLICATION AREAS

Banking Area                   % use¹
Marketing and Organization       7%
Liquidity Management             3%
Loan Management³                15%
Investment Management            8%
Financial Planning              12%
Balance Sheet Management         7%
Operations²                     38%
Manpower Planning                6%
Miscellaneous                    3%
Total:                         100%

¹Indicates frequency of MS techniques applied to banking areas (1986 – 2004)
²Operations area where MS was most frequently applied: productivity and profitability assessment and improvement – 27%
³Loan Management area where MS was most frequently applied: risk measurement and management – 6%

IV. CONCLUSION

Results from this study echo those of Kumbhakar and Sarkar (2002), who found that in the 1980s the banking sector underwent mass liberalization to boost performance, productivity, and efficiency. The two-way classification scheme presented denotes a continued emphasis on improvement by banks throughout the 1990s and into the new millennium. Domestic banks may respond to the threat of foreign banks by further streamlining operations, since researchers report that foreign entry into domestic markets lowers interest rate spreads and margins (Unite and Sullivan, 2003). Subsequently, this may increase the usage of those MS techniques that forecast profitability and productivity. More research is still needed
in the international arena. However, a recent study by Berger, Dai, Ongena and Smith (2003)
suggests the future of bank globalization will be limited since firms frequently use host nation
banks for cash management services.

Results of this research clearly show that Linear Programming and Statistical Analysis
were the most frequently used techniques, followed by MIS/EDP, Simulation, and Stochastic
Programming; Integer Programming, Goal Programming, Queuing Theory and Heuristics
were the least preferred techniques. A review of MS applications in different sub areas of
banking shows that banking Operations (Productivity, Profitability Assessment and
Improvement) had the highest usage of MS models. It was followed by Loan Management,
Financial Planning, Investment Management, Balance Sheet Management, and Marketing and
Organization.

REFERENCES

Ali, A. I. "Data envelopment analysis: computational issues." Computers, Environment and Urban Systems, 14, (2), 1990, 157-165.
Beck, T., and Levine, R. "Stock markets, banks, and growth: Panel evidence." Journal of Banking & Finance, 28, (3), 2004, 423-442.

Berger, A. N., Dai, Q., Ongena S., and Smith, D. C. “To what extent will the banking industry
be globalized? A study of bank nationality and reach in 20 European nations.”
Journal of Banking & Finance, 27, (3), 2003, 383.
Cohen, K. J., Maier, Steven F., and Vander Weide, J. H. "Recent Developments in Management Science in Banking." Management Science, 27, (10), 1981, 1097-1119.
DeFarrari, Lisa M., and Palmer, David E. “Supervision of Large Complex Banking
Organizations.” Federal Reserve Bulletin, 87, (2), 2001, 47-57.
Kantor, J. and Maital, S. “Measuring Efficiency by Product Group: Integrating DEA with
Activity-Based Accounting in a Large Mideast Bank”, Interfaces, 29,
(3), 1999, 27-36.
Kawamoto, Yuko, “Fixing Japan’s Banking System.” McKinsey Quarterly, 3, 2004, 118-122.
Kumbhakar, S. C., and Sarkar, S. “Deregulation, Ownership, and Productivity Growth in the
Banking Industry: Evidence from India”, Journal of Money, Credit, and Banking, 35,
(3), 2002, 403-424.
Rime, Bertrand, and Stiroh, Kevin J. “The Performance Of Universal Banks: Evidence From
Switzerland.” Journal of Banking & Finance, 27, (11), 2003, 2121-2150.
Unite, A. A., and Sullivan, M. J. “The Effect Of Foreign Entry And Ownership Structure On
The Philippine Domestic Banking Market.” Journal of Banking & Finance, 27, (12),
2003, 2323-2345.
Zanakis, Stelios H., Mavrides, Lazaros P., and Roussakis, Emmanuel N. "Notes and Communications: Applications of Management Science in Banking." Decision Sciences, 17, 1986, 114-128.
Note: This paper is a part of ongoing research. A copy of the complete paper can be obtained from Dr. Rana at: dsrana@jsums.edu

DECISION SUPPORT SYSTEMS:
AN INVESTIGATION OF CHARACTERISTICS
Roger L. Hayen, Central Michigan University
roger.hayen@cmich.edu

Monica C. Holmes, Central Michigan University
monica.c.holmes@cmich.edu

ABSTRACT

Since the early 1970s, decision support system (DSS) frameworks have been formulated to describe DSS characteristics. This research examines several of those
frameworks using published case-based research. The purpose is to determine whether the
characteristics outlined by the frameworks are observed in these published cases. This is
useful in determining those characteristics of actual DSS that lead to their classification.
Overall, the research supports the framework characteristics. However, the results indicate
that the source and scope of information lack a strong relationship to the ad hoc and
institutional DSS decision categories.
Keywords: DSS, decision support systems, characteristics of information

I. INTRODUCTION

Decision support systems (DSS) frameworks have been developed to describe the characteristics of DSS and their relationships, providing a forward-looking view that assists in classifying these systems. However, advances in computer technology are rapid and affect information system (IS) applications, including DSS, resulting in a suite of constantly changing DSS applications. This dynamic nature of DSS applications makes it difficult for chief information officers and other managers to clearly define a static suite of DSS applications through their characteristics. Yet the identification of DSS applications is critical in planning organization strategy for the deployment of IT. This study examines
several frameworks to determine their efficacy by employing case-based research of DSS
applications to provide a perspective of DSS characteristics. First, the definition of DSS is
examined; next, using a framework of information requirements by decision categories, data
are analyzed to evaluate DSS characteristics; and finally, the findings on DSS characteristics
are summarized.

II. DEFINING DECISION SUPPORT SYSTEMS

A DSS is defined as the use of a computer to (1) assist managers with their decision process in semi-structured tasks; (2) support, rather than replace, managerial judgment; and (3) improve the effectiveness of decision making rather than its efficiency (Keen and Scott Morton, 1978, p. 1). Others (Marakas, 2003; Power, 2002; Sprague and Carlson, 1982) have also provided definitions for a DSS. Their definitions support Keen and Scott Morton's (1978) definition, thus making it acceptable for this analysis.

Gorry and Scott Morton (1989) provide a context for the semi-structured characteristic
of this DSS definition, relating the work of Simon and Newell to a framework of structured
and unstructured decision-making processes. A fully structured problem is one where all
three decision-making phases – intelligence, design, and choice – are structured. A fully unstructured problem is one where all three decision-making phases are unstructured. A semi-structured problem is one where one or two, but not all, of the decision-making phases are
unstructured. They define IS that are largely structured as Structured Decision Systems
(SDS), whereas those that are semi-structured or unstructured are DSS. This viewpoint is reinforced by Power (2002), who observes that any IS that is not an SDS/TPS (transaction processing system) is frequently labeled a DSS. Therefore the definition of a DSS is
qualified by (1) the categories of use and (2) movement along the structured/unstructured
continuum. Furthermore, DSS can be divided meaningfully into two categories: institutional DSS, which deal with decisions of a recurring (repetitive) nature, and ad hoc DSS, which deal with specific decisions that are not usually anticipated or recurring (one-shot) (Donovan & Madnick, 1977; Keen & Scott Morton, 1978).
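The classification rule described above (an SDS when all three phases are structured, otherwise a DSS, which is semi-structured when one or two phases are unstructured) can be sketched as a small function; the labels and signature are illustrative shorthand, not code from the cited frameworks:

```python
# Minimal sketch of the Gorry and Scott Morton structure rule.
def classify(intelligence: bool, design: bool, choice: bool) -> str:
    """Each argument is True if that decision-making phase is structured."""
    structured_phases = sum([intelligence, design, choice])
    if structured_phases == 3:
        return "SDS"                   # fully structured decision
    if structured_phases == 0:
        return "DSS (unstructured)"    # fully unstructured problem
    return "DSS (semi-structured)"     # one or two phases unstructured

assert classify(True, True, True) == "SDS"
assert classify(True, False, True) == "DSS (semi-structured)"
assert classify(False, False, False) == "DSS (unstructured)"
```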

For this analysis, the DSS application is distinct from the DSS tool. The DSS tool, or DSS generator, is the software used in the creation of a specific DSS application as the
enabling technology. The application is the specific system that actually accomplishes the
work and supplies a decision maker with the required information. Modern DSS utilize
computer-based tools to create more advanced DSS applications (Eom, 1999). An IS tool that is at one time used primarily for building ad hoc DSS may at a later time be used primarily for institutional DSS. That a tool was initially created for building an ad hoc DSS does not imply that all IS subsequently created using that tool are DSS. The fundamental DSS definition needs to be applied in determining whether or not the application is, indeed, a DSS.

III. FRAMEWORKS OF DSS

A framework assists in gaining a perspective of DSS, serving as a powerful means of providing focus on their characteristics. A development framework is "helpful in organizing a complex subject, identifying the relationships between the parts, and revealing the areas in which further developments will be required" (Sprague and Carlson, 1982, p. 6). These frameworks provide a number of parameters (Table 1) that are DSS characteristics (Gorry and Scott Morton, 1989; Adam, Fahy, and Murphy, 1998), and these parameters are measured for each DSS examined in this research investigation.

Table 1: Information Requirements by Decision Category

Characteristics of Information   Operational Control     Strategic Planning
Source                           Largely internal        External
Scope                            Well defined, narrow    Very wide
Level of Aggregation             Detailed                Aggregate
Time Horizon                     Historical              Future
Currency                         Highly current          Quite old
Required Accuracy                High                    Low
Frequency of Use                 Very frequent           Infrequent

SOURCE: Gorry, G. A., & Scott Morton, M. S. (1989). A Framework of Management Information Systems. Sloan Management Review, 13(1), 51.

DSS may be divided into institutional DSS and ad hoc DSS (Donovan and Madnick,
1977). Institutional DSS are most appropriate for operational control activities, whereas ad
hoc DSS are most useful for strategic planning. An area of overlap occurs with regard to
managerial control applications. The development approach used with the deployment of a
DSS provides a framework for the creation and ongoing maintenance of the DSS. The systems development life cycle (SDLC) and prototyping are the main development strategies employed in constructing a DSS (Marakas, 2003; Power, 2002). The development strategy is one of the parameters to be investigated. The frameworks are examined to determine the primary attributes to be included when reviewing DSS case applications or use cases.

IV. METHODOLOGY

Characteristics of specific DSS are examined using a published case-based research approach. A case-based research approach provides a means for investigating phenomena in information systems within their original context and is particularly appropriate for exploratory studies (Yin, 1993). Case-based research is one of the most powerful methods in the development of generalizable conclusions about a field of study (Voss, Tsikriktsis and Frohlich, 2002; Meredith, 1998). The findings of case-based research in DSS, a very dynamic field in which new applications and practices are continually emerging and changing, can lead to new and creative insights with high validity from the perspective of practitioners – the ultimate users of research. Characteristics of each published case are categorized according to the information characteristics based on the frameworks.

V. RESEARCH ANALYSIS

An extensive literature search found 45 case-based research studies on DSS applications with substantial descriptions of the use cases. The characteristics of the use cases were recorded
based on the parameters derived from the frameworks. Microsoft Excel and SAS Institute
JMP software were used to analyze these data. No quota or other preset limit was established
for the number of ad hoc or institutional cases included in this investigation. Therefore, the
values for these major groupings of DSS are considered representative.

Several general characteristics of DSS are analyzed. There was a nearly even split between the ad hoc cases (53%) and the institutional cases (47%). Nearly all these specific DSS (96%) were developed using prototyping, whereas the systems development life cycle (SDLC) was used for the remaining developments (4%), all of which were institutional DSS. So, essentially all the observed cases were developed using prototyping. This suggests that the SDLC has no significant use in the development of DSS, regardless of the type of DSS application. Several other overall characteristics of the evaluated DSS were observed, including a measure of the extent of the structure of the decision. Of the observed cases, 69% were determined to be semi-structured and 27% were structured. All ad hoc DSS were semi-structured, whereas all the structured cases fell in the institutional DSS category.

The primary types of DSS applications were observed for the use cases. Modeling
(47%) and expert systems (24%) are the most common types of DSS applications. The
occurrences of these two categories of applications over time indicate that modeling case
applications have been reported in a continuous stream from 1972 forward. On the other
hand, expert systems applications were reported only from 1988 to 1996, which may reflect a rise and fall in the application of expert systems in DSS during that period.

The Gorry and Scott Morton framework (Table 1, presented above) was examined using the characteristic data collected on the case-based applications. This evaluation confirmed the information characteristic requirements set forth in that framework. The operational control level was recorded as an institutional DSS, whereas the strategic planning level was recorded as an ad hoc DSS. This framework was examined first by using neural network exploratory modeling to determine if the framework characteristics could serve to predict the category of the application as either ad hoc or institutional.

The first neural network model included all the characteristics of the Gorry and Scott Morton framework. However, the result of Model One was not significant (χ², p = 0.12865).
Each characteristic parameter was evaluated individually to determine the relationship
between the ad hoc/institutional categorization and the characteristic using a pair-wise
analysis. In Figures 1 and 2, along with Table 2, the occurrence of both the source of
information and the scope of information appears to be nearly equal for the ad hoc and the
institutional applications. In Table 2, the source of information and the scope of information
do not produce a significant relationship. This suggests that both ad hoc and institutional DSS
make equal use of internal and external information and that both are as likely to have a
narrow or a wide scope of information. It is therefore unlikely that these two characteristics distinguish ad hoc from institutional applications, since both categories are about equally likely to exhibit either value of these characteristics.
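The pair-wise analysis described above amounts to a chi-square test of independence between the ad hoc/institutional category and one characteristic at a time. Below is a hedged sketch with invented counts (the study's actual contingency tables are not reproduced here), assuming SciPy's `chi2_contingency` as the test routine:

```python
# Chi-square test of independence: DSS category vs. source of information.
# The contingency counts are illustrative, not the study's data.
from scipy.stats import chi2_contingency

#                 internal  external
table = [[10, 12],   # ad hoc
         [11, 10]]   # institutional
chi2, p, dof, expected = chi2_contingency(table)
significant = p < 0.05   # the study's 0.05 threshold
```

With near-equal columns like these, the test fails to reject independence, mirroring the non-significant results the study reports for the source and scope characteristics.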

[Figure 1: Scope of Information – percentage of ad hoc vs. institutional DSS by scope of information (narrow, wide). Figure 2: Source of Information – percentage of ad hoc vs. institutional DSS by source of information (external, internal, internal/external).]

Using the same procedure, each remaining characteristic was examined individually against the ad hoc/institutional decision type. Figures 3 and 4, together with Table 2, present the results of that analysis. Each of these information characteristics is significantly related to the ad hoc and institutional DSS types, distinguishing between the two. As a result, the source and scope were removed as characteristic parameters for the neural network Model Two (Figure 5), producing a significant model (χ², p = 0.02981) for predicting the decision type based on the remaining characteristic parameters of this framework.
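The exploratory modeling step can be approximated as follows. This is a synthetic illustration only: scikit-learn's `MLPClassifier` stands in for whatever neural network tool the authors actually used, and the data are generated rather than the 45 coded cases.

```python
# Hypothetical sketch of predicting decision type (0 = institutional,
# 1 = ad hoc) from binary-encoded framework characteristics.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 200
# Five encoded characteristics: aggregation, time horizon, currency,
# accuracy, frequency of use (0 = operational-style, 1 = strategic-style).
X = rng.integers(0, 2, size=(n, 5)).astype(float)
# Synthetic rule: mostly strategic-style characteristics -> ad hoc.
y = (X.sum(axis=1) >= 3).astype(int)

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                      random_state=0).fit(X, y)
accuracy = model.score(X, y)
```

Because the synthetic labels follow a simple rule over the characteristics, the small network fits them well; in the study, the analogous model only became significant once the uninformative source and scope parameters were dropped.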
Table 2. Significance of Information Characteristics

Characteristic of Information   χ² probability   Significant at 0.05 p-value
Source                          0.7724           No
Scope                           0.5644           No
Aggregation Level               < 0.0001         Yes
Time Horizon                    0.0037           Yes
Currency                        0.0041           Yes
Required Accuracy               < 0.0001         Yes
Frequency of Use                < 0.0001         Yes

[Figure 3: Currency – percentage of ad hoc vs. institutional DSS by currency of information (current, old). Figure 4: Time Horizon – percentage of ad hoc vs. institutional DSS by time horizon (historical, current, projected).]

[Figure 5: Model Two Neural Network Diagram]

VI. CONCLUSION

This investigation examines characteristic parameters from DSS frameworks, including the structure continuum, ad hoc and institutional DSS, the development method
used, and information requirements by decision category. Forty-five published cases were
used to measure actual DSS deployment in organizations. Overall, the data support the
characteristic parameters set forth in the various frameworks. The information requirements
by decision category were supported with the exception of the scope of information and the
source of information. An ad hoc or an institutional DSS is each about as likely to have an
internal or an external source of information and each exhibits similar results for a narrow or
wide scope of information. Future analyses should examine more case-based research studies
to compare characteristic parameters found in more DSS frameworks.

REFERENCES

Adam, F., Fahy, M., and Murphy, C. (1998). A framework for the classification of DSS usage across organizations. Decision Support Systems, 22(1), 1-13.
Donovan, J., & Madnick, S. (1977). Institutional and ad hoc DSS and their effective use. Database, 8(3), 79-88.
Eom, S. B. (1999). Decision support systems research: current state and trends. Industrial Management & Data Systems, 99(5), 213-220.
Gorry, G. A., & Scott Morton, M. S. (1989). A framework for management information systems. Sloan Management Review, 31(3), 49-61.
Keen, P. G. W., & Scott Morton, M. S. (1978). Decision support systems: An organizational perspective. Reading, MA: Addison-Wesley.
Marakas, G. M. (2003). Decision support systems in the 21st century (2nd ed.). Upper Saddle River, NJ: Prentice-Hall.
Meredith, J. (1998). Building operations management theory through case and field research. Journal of Operations Management, 16, 441-454.
Power, D. J. (2002). Decision support systems: Concepts and resources for managers. Westport, CT: Quorum Books.
Sprague, R. H., & Carlson, E. D. (1982). Building effective decision support systems. Englewood Cliffs, NJ: Prentice Hall.
Voss, C., Tsikriktsis, N., & Frohlich, M. (2002). Case research in operations management. International Journal of Operations & Production Management, 22(2), 195-219.
Yin, R. K. (1993). Applications of case study research. Newbury Park, CA: Sage Publications.

TOURISM MARKET POTENTIAL OF SMALL RESOURCE-BASED ECONOMIES:
THE CASE OF FIJI ISLANDS

Erdener Kaynak, The Pennsylvania State University at Harrisburg
k9x@psu.edu

Raghuvar D. Pathak, The University of the South Pacific
pathak_r@usp.ac.fj

ABSTRACT

A Delphi survey was designed as a market research tool to project the future Fiji Islands tourism scenario from 2001 through the year 2020. The Fijian tourism industry is expected to grow and prosper substantially over the next five years owing to increased tourist demand for non-traditional holiday destinations. There will also be high demand for activity-based tourism. Although the demand projections predict positive trends in the short term, tourism demand is subject to a host of uncontrollable factors that make long-term projections difficult and cumbersome. In view of these developments and changes, tourism industry operators and planners need scientifically accepted projection bases for tourism investment and effective operational tourism decisions.

I. FIJI TOURISM INDUSTRY: TRENDS AND PROSPECTS

In recent years, rapid socio-economic changes and market developments have been taking place in the Fiji Islands. In particular, Fiji Islands' tourism has grown substantially over the last decade to become a major source of income, exceeding agriculture as a source of foreign exchange earnings. But because a large portion of tourist spending leaks out of the economy as payments for imports, the net economic impact of tourism is smaller than that implied by tourist spending numbers. Tourism nevertheless remains a critical source of jobs and foreign exchange and a means of access to the rest of the world.

In 1999, Fiji Islands visitor arrivals exceeded 400,000 and were estimated to pass the half-million mark by the end of 2003 (Fiji Bureau of Statistics, July 2002). However, a civilian coup on May 19, 2000 (which overthrew a democratically elected government and held parliamentarians hostage for over 50 days) shook visitors' confidence in their safety while holidaying in the Fiji Islands, and as a result visitor arrivals dropped drastically to 294,000 for 2000. The signs are that the industry is picking up again: in 2001, arrivals exceeded 350,000 tourists, and through the end of the first half of 2002, visitor arrivals totaled 183,000, more than in the same period of 2001. All indications are that Fijian tourism will grow and prosper in the future. How much and how fast it will grow depends on many factors, both internal and external.

Australia and New Zealand have so far been the Fiji Islands' major markets, supplying over 40 percent of its tourist arrivals as recently as 1997. Expanding these neighboring
markets should be relatively easy because of their proximity and close economic ties to Fiji
Islands. However, expanding other markets in Japan, North America, United Kingdom,
Continental Europe, South Korea, and other Pacific neighbors will remain a challenge but one
that is critical to further market development, growth and diversification. One way to expand
and diversify tourist markets, especially in North America and Japan, could be to attract more
global and regional hotel chains. Eco-tourism is another growing sector of the industry that looks to play a major part in Fijian tourism in the years to come. A new $1.5 million Island
Sanctuary in the Mamanucas opened for business in 2002. The eco-tourism resort offers tent,
dormitory and private room style accommodation. Bounty Island is the home of the
endangered bird, the Banded Rail and is a nesting ground for the endangered hawksbill turtle.
Government policies on sustainable development, popular opinion on the benefits of eco-tourism, and the growing tendency of tourists to support only ecologically friendly operations have given rise to more eco-friendly hotels, trekking, and adventure tourism.

II. RESEARCH OBJECTIVES

The purpose of this study was to examine the tourism market potential of the Fiji Islands through the year 2020 by utilizing the well-known qualitative Delphi forecasting technique. Forecasting is especially needed in the tourism industry because the industry is affected by a host of uncontrollable factors. Current patterns of tourism in the Fiji Islands, and the world over, are undergoing drastic changes and transformation as the new travelers are more diverse in their interests and more discriminating, demanding, and value conscious. Along with new attitudes towards travel, the economic, socio-demographic, socio-political, governmental, and technological environments are also changing, and new developments in each of these areas have an impact on the tourism outlook for 2000 and beyond. In particular, tourism demand in the Fiji Islands has been most affected by the degree of political stability and visitors' confidence in their safety whilst holidaying in the Fiji Islands (in the aftermath of the coups of 1987 and 2000).

Kaynak and Marandu (2006) indicate that, for orderly future planning purposes, market demand should be measured with reference to a stated period of time. The longer the forecasting interval, the more tenuous the forecast. Every forecast is based on a set of assumptions about environmental and marketing conditions, and the chance that some of these assumptions will not be fulfilled increases with the length of the forecast period. Much interest in the whole subject of predicting the future market environment is being stimulated by futurists, but at the same time it is important to heed the caution (Yong and Leng, 1988).

The panel of experts was selected from among knowledgeable international and national tourism analysts for their knowledge of the subject under review. To ensure a wide range of ideas and views, the selected industry experts were not permitted to interact with one another. Questions to be answered about the tourism market potential of the Fiji Islands were grouped into five broad categories:

(i) To what extent will Fijian society undergo changes and transformations in its value systems from 2001 through 2020?
(ii) How will the Fijian tourism industry undergo changes in its structure from the year
2001 through 2020?
(iii) Which are the tourism events and scenarios having a potential impact on tourism
development and tourism training in Fiji Islands from the year 2001 through 2020 in
terms of likelihood of occurrence of event, year of probable occurrence, and
importance to tourism training?
(iv) Which of the tourist regions and businesses would develop in importance in the
coming years?
(v) What direction should the Government’s tourism department take in regards to
tourism market development?
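For the quantitative items above (e.g., the year of probable occurrence of an event), Delphi responses are typically summarized between rounds by a central estimate and a spread that panelists see before revising. A minimal sketch with invented estimates, not the study's data:

```python
# Aggregating expert estimates across Delphi rounds: the median is the
# panel's central forecast and the interquartile range (IQR) measures
# disagreement; a shrinking IQR between rounds indicates convergence.
from statistics import median, quantiles

round1 = [2005, 2008, 2010, 2012, 2020]   # expert estimates, round 1
round2 = [2008, 2009, 2010, 2010, 2012]   # revised after feedback, round 2

def summarise(estimates):
    q1, _, q3 = quantiles(estimates, n=4)
    return median(estimates), q3 - q1     # central estimate, spread

m1, iqr1 = summarise(round1)
m2, iqr2 = summarise(round2)
converging = iqr2 < iqr1                  # panel opinion tightening
```

In this invented example the median year stays at 2010 while the spread narrows sharply, the kind of between-round convergence a Delphi moderator looks for before closing the study.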

III. RESEARCH FINDINGS AND CONCLUSION

The purpose of this study was to present the results and recommendations of an empirical study based on the Delphi qualitative forecasting technique. The study was conducted among 72 tourism experts; of the total, 69 respondents provided numerous projections of various perspectives relating to the tourism industry of the Fiji Islands. The tourism experts indicated a slight decrease in traditionalism in work, family, and education; in hard work as a virtue; in authoritarianism in family decision making; and in materialism; and a slight increase in other value changes in Fijian society. With the changing infrastructure of the Fiji Islands, tourism experts believe that motels, tourist homes, campgrounds and trailer parks, airline traffic, inter-city bus traffic, handicraft centers, national heritage centers (i.e., sand dunes, momiguns), cultural and educational attractions, museums, and backpacker/budget properties will become increasingly popular owing to increased tourist potential. The future scenario of the Fijian tourism industry was delineated by seeking the tourism industry experts' views on events having a major impact on tourism development and tourism education. Some of the most important trends and developments in the Fijian tourism industry are listed under four major categories as follows:

Information and Technological Change


a) An international data bank for tourist information, b) A network of international travel
routes, c) Increased importance of satellite and on-line communications, d) Fully automated
data retrieval systems, and e) Most cash transactions occurring through on-line systems,
computerized credit and billing systems, and credit/debit cards

Training and Education for Tourism Industry


a) Increased need for training and tourism education, b) Need for more specialized and
formal types of tourism educational programs, c) Universities and colleges in the Fiji
Islands would integrate tourism and hospitality degree programs into their curricula, and
d) Demand for part-time and executive training programs in tourism would increase
substantially

Tourism Strategy Development


a) Activity-based tourism will increase in importance, b) Artificial tourism environments
will be created to provide additional tourism opportunities, c) Increased importance of
time-sharing resorts (condominiums and luxury apartments/villas) throughout the world, and
d) Better protection of wildlife, scenic beauty, and natural environments

Cooperation and Co-ordination Among the Tourism Sector Participants


a) Integration of the tourism development plan into the overall economic development plan
of the Fiji Islands, b) Use of local agricultural products by hotels, motels, restaurants
and other tourist establishments, and c) Land issues as an important part of Fijian tourism
development plans and programs

Many factors may influence the future growth of the tourism industry: a larger
segment of the population having the time and resources to travel; a narrowing of distances
because of continuing transportation improvements (which also open new and/or cheaper
tourism destinations to compete with the Fiji Islands); increased government recognition of
the impact of tourism and its role as the major provider of revenue for the economy; rapid
advances in communications technology that allow the global coordination of the tourism
industry; and the increasing importance of leisure and travel in consumers' lives today as
a result of changing attitudes towards work, women's roles, affluence and inflation. As
tourism is the mainstay of the Fiji Islands' economy, this study assumes greater importance
by finding answers to questions that have a direct bearing on the tourism market potential
of the Fiji Islands.

Table 1. Value Changes in Fiji Islands Society, 2001-2020


Variable                                    Mean   Std. Deviation   Median   Mode         Impact on tourism(a)

Traditionalism in work 2.48 1.260 2 slight (-) high


Traditionalism in family 2.31 1.131 2 slight (-) high
Traditionalism in education 2.39 1.205 2 slight (-) high
Hard work as a virtue 3.64 1.177 2 slight (-) high
Authoritarianism in business decision making 2.79 1.196 4 slight (+) highest
Authoritarianism in family decision making 2.59 1.150 3 slight (-) high
Materialism 4.02 0.936 2 slight (-) high
Rewarding work as a virtue 3.85 0.989 4 slight (+) highest
Individualism 3.92 0.957 4 slight (+) moderate
Individual involvement in society 3.62 1.007 4 slight (+) moderate
Participation in decision making in business 4.02 0.969 4 slight (+) high
Participation in family decision making 3.73 1.144 4 slight (+) high
Self-expression 4.23 0.856 4 slight (+) high
Acceptance of change 3.87 0.983 4 slight (+) high
Communal obligations impinge upon work settings 2.97 1.109 4 slight (+) high

Notes: A five-point scale was used where 1=significant increase; 2=slight increase;
3=no change; 4=slight decrease; 5=significant decrease.
(a) Indicates the degree of impact that the value changes in Fiji Islands society will have
on tourism: 0=not important; 10=critically important. Under impact on tourism, 4 to 5
signifies medium, 5 to 6 moderate, 6 to 7 high, and above 7 highest; (-)=decrease,
(+)=increase.
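The summary statistics reported in Tables 1 and 2 (mean, standard deviation, median, mode) can be reproduced from raw panel ratings. The sketch below, in Python, uses hypothetical ratings for illustration only, not the study's data:

```python
# Aggregate Delphi panel ratings on the five-point scale used in
# Tables 1 and 2. The ratings below are hypothetical examples,
# not the study's raw responses.
import statistics

ratings = {
    "Traditionalism in work": [2, 1, 2, 4, 2, 3, 1, 2, 5, 2],
    "Self-expression":        [4, 5, 4, 4, 3, 5, 4, 4, 4, 5],
}

for variable, scores in ratings.items():
    print(f"{variable:24s} "
          f"mean={statistics.mean(scores):.2f} "
          f"sd={statistics.stdev(scores):.3f} "   # sample standard deviation
          f"median={statistics.median(scores)} "
          f"mode={statistics.mode(scores)}")
```

With the study's 69 usable responses per variable, the same aggregation yields the columns shown in the tables.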

Table 2. Changing Structure of the Fijian Tourism Industry, 2001-2020

Variable                                    Mean   Std. Deviation   Median   Mode         Impact on tourism(a)
Hotels 4.55 0.585 3 None highest
Motels 4.18 0.783 5 significant (+) highest
Resorts 4.55 0.585 4 slight (+) highest
Tourist Homes 4.12 0.795 5 significant (+) high
Farm vacations 3.53 0.684 4 slight (+) moderate
Trekking, Hiking, and Mountaineering 4.46 0.725 3 None highest
Campgrounds, Trailer Parks 3.85 0.789 5 significant (+) moderate
Cottage and Vacation Homes 3.85 0.744 4 None moderate
Fast Food Outlets 4.59 0.632 4 slight (+) high
Airlines traffic 4.40 0.836 5 significant (+) highest
Inter-city bus lines traffic 3.85 0.973 5 significant (+) moderate
Cruising service 4.24 0.720 4 slight (+) highest
Retail sporting goods and recreational equipment stores 4.16 0.751 4 slight (+) high
Gift shops 4.25 0.785 4 slight (+) highest

Duty Free shops 4.18 0.757 4 slight (+) highest
Travel agencies 4.09 0.866 4 slight (+) highest
Eco-tourism 4.61 0.602 4 slight (+) highest
Handicraft centers 4.16 0.828 5 significant (+) highest
National Heritage Centers (i.e. sand dunes, momiguns) 4.11 0.704 4 significant (+) highest
Theme Parks 3.79 0.713 4 slight (+) high
Historical Parks 3.77 0.679 4 slight (+) high
National and Botanical Parks 3.86 0.699 4 slight (+) high
Local entertainment establishments 4.29 0.780 4 slight (+) highest
Personal services (i.e. tourist guides, sports trainers, etc.) 4.35 0.734 4 slight (+) highest
Cultural and educational attractions 4.28 0.740 4 significant (+) highest
Cruise ship traffic 4.18 0.875 4 slight (+) highest
Water sport facilities 4.27 0.851 4 slight (+) highest
Museums 3.61 0.653 4 significant (+) moderate
Festivals and events 4.11 0.930 4 None high
Passenger car traffic 3.98 0.886 4 slight (+) moderate
Bus tours 3.89 0.862 4 slight (+) high
Government involvement in tourism 4.56 0.914 4 slight (+) highest
Back packer/ Budget properties 4.56 0.787 5 significant (+) highest
Others 4.71 0.488 5 significant (+) highest

Notes: A five-point scale was used where 1=significant increase; 2=slight increase;
3=no change; 4=slight decrease; 5=significant decrease.
(a) Indicates the degree of impact that the structural changes in the Fijian tourism
industry will have on tourism: 0=not important; 10=critically important. Under impact on
tourism, 4 to 5 signifies medium, 5 to 6 moderate, 6 to 7 high, and above 7 highest;
(-)=decrease, (+)=increase.
(b) The experts also indicated other factors that could produce a slight or significant
increase in the structure of the Fijian tourism industry and a high impact on Fijian
tourism.

Table 3. Demographics of the Panel of Tourism Experts

Regions of Operations    Percentage    Type of Business                 Percentage
NADI AREA                  38.80%      HOTEL                              34.30%
LAUTOKA AND RAKIRAKI       11.90%      MOTEL                               3.00%
NADI OFF-SHORE              9.00%      RESTAURANT                          6.00%
CORAL COAST                10.40%      TOURISM EDUCATOR                    9.00%
PACIFIC HARBOUR             7.50%      RESORT                             14.90%
SUVA CITY                  56.70%      TOURIST HOMES                       1.50%
NORTH                      10.40%      PUBLIC SECTOR                      13.40%
OUTER ISLANDS               9.00%      FAST FOOD                           0.00%
OTHERS                      1.50%      PROFESSIONAL ORGANIZATION          11.90%
                                       BACKPACKER GROUNDS                  3.00%
                                       ECOTOURISM ATTRACTION CENTRES       1.50%
                                       OTHERS                             22.40%

REFERENCES

Kaynak, Erdener and Edward Marandu (2006), "Tourism Market Potential Analysis in
Botswana: A Delphi Study", Journal of Travel Research, May, pp. 87-101.
Yong, Y.W. and T.L. Leng (1988), "A Delphi Forecast for the Singapore Tourism Industry:
Future Scenario and Marketing Implications", International Marketing Review, Vol. 6,
No. 3, pp. 35-46.

WHO SAYS DECISION-MAKING IS RATIONAL: IMPLICATIONS FOR
RESPONDING TO AN IMPENDING FORESEEABLE DISASTER

M. Shakil Rahman, Frostburg State University
srahman@frostburg.edu

Michael Monahan, Frostburg State University
mmonahan@frostburg.edu

Ahmad Tootoonchi, Frostburg State University
tootoonchi@frostburg.edu

ABSTRACT

This paper focuses on the concept of rational decision-making and how it collapses
during an imminent natural disaster. The authors have devised a model that incorporates the
main decision-making processes and illustrates how they become intertwined with a "Black
Box" that may suspend rational decision-making. Hurricane Katrina and the Federal, State,
and Local government response exemplify this process. Various factors that influence the
Black Box processes are examined, and suggestions are offered to minimize the Black Box
effect.

I. INTRODUCTION

Decision-making is a process that is commonly defined as choosing among
alternatives. While the process can take minutes or years, it proceeds through eight steps:
identifying the problem, identifying the decision criteria, allocating weights to the
criteria, developing alternatives, analyzing the alternatives, selecting an alternative,
implementing the alternative, and evaluating decision effectiveness (Miller & Starr, 1967).

DeYoung (2002) postulated that managers who make consistent, value-maximizing
choices are utilizing the rational model. This model is predicated on seven assumptions: the
problem is clear and unambiguous, a single well-defined goal is to be achieved, all
alternatives and consequences are known, preferences are clear, preferences are constant and
stable, no time or cost constraints exist, and the final choice will maximize economic payoff.
Thus, a decision maker who was perfectly rational would be fully objective and logical
(Robbins and DeCenzo, 2005).

Impending natural disasters, as their name implies, are neither initiated nor
governed by man. Some disasters strike with minimal or no notice at all; they are
categorized as unforeseeable, and regardless of preparation, there is negligible protection
against lightning, tornadoes, tsunamis and earthquakes. Conversely, other natural
disasters, while equally destructive, have foreseeable windows ranging from hours to weeks.
These foreseeable natural disasters include hurricanes, heavy rains which may result in
flooding, forest fires, significant snowfall/blizzards and volcanic eruptions.

II. PURPOSE
The purpose of this paper is to ascertain if rational decision-making occurs when
responding to foreseeable natural disasters.

III. LITERATURE REVIEW

Organizational crises are events characterized by high consequence, low probability,
ambiguity and decision-making time pressure (Pearson and Clair, 1998). In addition, despite
training, simulations and the foreseeability of an impending natural disaster, some people
succumb to decidophobia, the fear of making decisions because deciding is the acceptance of
responsibility (Kaufmann, 1973). Thus, decision-making paralysis can be costly in terms of
life, property, and commerce.

To combat inaction, the World Health Organization (2002) recommends that any
program of disaster prevention and preparedness should promote optimum coordination
between the various governmental, nongovernmental, and private organizations involved.
While planning is integral to preparedness, Rosenthal (1998) contends that industrial
society is especially susceptible to natural disasters, a susceptibility exacerbated by
policy makers who have not prepared themselves or the public for appropriate responses once
tragedy strikes. Risk, uncertainty, crisis, collective stress, and "normal accidents" now
need to be incorporated into a broader understanding of how governments and decision-makers
respond to crises and their concomitants: unpleasantness in unexpected circumstances,
representing unscheduled events, unprecedented in their implications and, by normal routine
standards, almost unmanageable.

Denis (1995) highlights six major types of activities on which disaster managers
should focus: (1) obtaining information about the situation; (2) getting advice on the best
course of action; (3) choosing: the decision to do something; (4) authorizing the action; (5)
having the action executed; and (6) explaining and communicating the action. These steps
attempt to logically frame the action in the midst of a potentially emotional crisis. However,
McCarthy (2003) found that the experience of crisis gave rise to a more rational, planned
approach to the strategy-making process. In the aftermath of crisis, entrepreneurs had to spend
more time communicating with, explaining and justifying their actions to key stakeholders.

Since people are not mindless automatons, the human behavior components must be
included in the rational decision-making process. French (2005) argued that while emergency
modeling incorporates technology, it does not accurately take into account the social aspects
and behavior of people. Previously, Walls (2002) cited the methods of slowing down,
listening, learning and feeling to augment rational decision-making.

Significant benefits can be achieved by establishing a team that is empowered to
make decisions during the crisis, and by communicating this throughout the supply chain
(Hart, 2005). Conventional wisdom holds that men are generally perceived to be "cool as a
cucumber" in a crisis. However, Mano-Negrin & Sheaffer (2004) found, in a study of 112
Israeli executives, that women were more likely to employ a holistic approach that
facilitates crisis preparedness.

Further, Heracleous (1994) contends that rationality becomes inapplicable when
confronted with complex problems, fast-moving markets, unpredictability and uncertainty, as
social, political, and cognitive forces influence the decision-making process.

Conversely, Simon (1957) noted that limits exist on our time, resources, and ability to
process information. He introduced the bounded rationality model, in which, rather than a
logical, optimized solution, a satisficing or "good enough" solution is selected.

IV. DECISION MODEL AND ANALYSIS

Decisions are at the heart of leader accomplishment and may range from incidental to
life-saving. The decision, which may be tricky, puzzling, or nerve-racking, and may possess
clear boundaries or be high in ambiguity, is both an art and a science. At times, the
boldest decisions may not be the safest, but they show the leader's mettle. To analyze the
decision-making process in responding to an impending foreseeable disaster, the authors
present the conceptual decision-making model shown in Figure 1. This model was developed
through an extensive literature review and the models of other authors cited by Arsham
(2005). According to the model, the decision-making process in any foreseeable disaster is
influenced by a "Black Box Effect."
Figure 1: Decision-making model in any foreseeable disaster. The model comprises (1) a
range of objectives, (2) uncontrollable input (explicit or tacit information), (3)
controllable input (resources, capability, etc.), (4) alternative choices, (5) information
analysis by experts, (6) action (the best pick), and (7) the Black Box.

1. Range of Objectives: In the case of an impending disaster, the most fundamental goal is
to protect human life. The second goal is to look after public property, including
government infrastructure, hospitals, and schools. The third priority is to safeguard
business infrastructure and property, and the last item in the range of objectives is
personal property. It is extremely important to think through the full range of objectives
to be fulfilled in order to grasp the complete picture.

2. Uncontrollable Input: Forecasting the intensity of a natural disaster is a multivariate
phenomenon. The intensity of a natural disaster can alter by the minute due to variations
in ambient conditions. Therefore, the continuous search for new information relevant to
further evaluation of the alternatives is critical. Information can be classified as
explicit or tacit: explicit information can be expressed in structured form, while tacit
information is inconsistent and fuzzy. The decision-making process must include the
reduction of uncertainty about the uncontrollable inputs, which can be achieved by
gathering reliable information. Uncertainty cannot be eliminated, but useful, relevant
information may help reduce its magnitude.

3. Controllable Input: Decision makers need to objectively evaluate their own capability,
capacity, and resource limitations. All decision makers should evaluate the situation at
hand to determine whether or not they are able to make a rational decision. To be capable,
the decision maker has to have the personal attributes, especially the mental power,
required to perform the task. The phrase "decision-making capacity" refers to the ability
to understand the nature and consequences of situations, and to make and communicate
decisions based on that understanding; it also refers to the alternatives that are
specified by the decision maker. There are four levels of capacity: the ability to
communicate a choice, the ability to understand information, the ability to comprehend the
situation, and the ability to weigh information in a rationally defensible way. When making
decisions, many people tend to overlook their resource limitations. Sometimes decisions are
made irrationally, and some decision makers do not really know whether they have all the
information needed to approach the task. There are times when the decision maker may be
emotionally attached to the situation and does not consider all the information at hand.
Before confronting a situation, all controllable inputs must be evaluated.

4. Alternative Choices: Carefully weigh your knowledge of the costs and risks of the
negative as well as positive consequences that could flow from each alternative:
1) list all the alternatives you are considering, 2) list all of the values or criteria
that will be affected by the decision, 3) evaluate each alternative against each criterion
or value, and 4) choose the alternative which you predict will satisfy the criteria best.
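The four steps above amount to a weighted scoring model. The sketch below uses hypothetical alternatives, criteria, weights, and ratings chosen purely for illustration:

```python
# Weighted scoring over alternatives: each alternative is rated 1-5
# against each criterion, ratings are multiplied by criterion weights
# (which sum to 1), and the highest total is the "best pick".
# All names and numbers here are hypothetical.
criteria = {"lives protected": 0.5, "cost": 0.2, "feasibility": 0.3}

scores = {
    "Evacuate now":      {"lives protected": 5, "cost": 2, "feasibility": 3},
    "Shelter in place":  {"lives protected": 2, "cost": 5, "feasibility": 5},
    "Staged evacuation": {"lives protected": 4, "cost": 3, "feasibility": 5},
}

def weighted_score(alternative):
    return sum(criteria[c] * scores[alternative][c] for c in criteria)

for alternative in scores:
    print(f"{alternative:18s} {weighted_score(alternative):.2f}")
print("Best pick:", max(scores, key=weighted_score))
```

With these illustrative numbers the staged evacuation scores 4.10, against 3.80 and 3.50 for the other options, and becomes the "best pick" of step 4.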

5. Information Analysis: In conducting the information analysis, the objective is to opt
for the best course of action. Once the decision maker has chosen an alternative, he/she
has to make sure that the logic and reasoning behind the decision are sound.
Multi-perspective analysis is used to look at decisions from a number of important
perspectives, thereby moving us outside our habitual thinking style and helping us get a
more rounded view of a situation. With this, there are infinite alternatives from which the
decision maker can choose. Many successful people think from a very rational and positive
viewpoint, which may be part of the reason for their success. Often, however, they may fail
to look at a problem from an emotional, intuitive, creative or negative viewpoint. This can
mean that they underestimate resistance to plans, fail to make creative leaps, and do not
make essential contingency plans. Similarly, pessimists may be excessively defensive.
Emotions may limit calm and rational decisions.

6. Action: The decision maker must first know what information and/or resources are needed
at each step to implement the plan. All choices come with obstacles, and because of this
the decision maker must consider all obstacles and how each one can be overcome. After
this, all steps for implementing the decision have to be determined, along with an
understanding of when each step begins and ends. Lastly, the decision maker needs to
identify all information and resources needed for each step. After thoroughly considering
a wide range of possible alternative courses of action, choose the one that best fits the
set of goals, desires, lifestyle, values, and so on. Make sure that a contingency plan is
created in case the best-pick option does not work out as intended.

7. Black Box Effect: The Black Box affects the rationality in decision making as well as
outcomes of a decision. The typical Black Box factors are: politics, time constraint, ego,
ideology, assumption of competency, scrutiny of the media, bureaucratic structure, false
hope, fishbowl mentality, etc.

Select examples of Black Box effects during and after Hurricane Katrina:
Political:
• Democratic Mayor Ray Nagin (the Mayor and Governor did not get along)
• Democratic Governor Kathleen Blanco (lack of preparedness and mishandling of the
rescue and relief operations)
• Republican President George Bush (political appointment for FEMA)
Distortion by the media:
• Overstated rapes and murders at the Superdome
• Looting (widespread by law enforcement)
• Inflated death count predictions

The Black Box Effect arises from influences on human beings and can therefore never be
totally eliminated. However, steps can be taken to minimize human error. The authors
suggest that the utilization of experts and technologies is critical to overcoming human
error (explained in the conclusion and recommendations section).

V. CONCLUSION

Decision-making is an integral component of modern existence. People make literally
thousands of decisions a day, and while most are minor and have little effect on anyone but
the decision makers themselves, they must be cognizant of the fact that some decisions have
life and death consequences.

Some elected and appointed officials have the primary responsibility of safeguarding
public safety. Planning, simulations, contingency and evacuation protocols are all
worthwhile measures, and we naively assume that government paternalism will protect us
from any challenge. As a result, we place our trust in officials' competence and expect
them to make clear, rational decisions.

While the day-to-day minutiae of government may be carried out with aplomb,
imminent natural disasters have a way of disrupting public safety, economies and rational
decision-making. The ever-present watchful eye of television has made the country, if not
the world, a Monday-morning quarterback in almost every situation. Nearly all decisions are
scrutinized and filtered through lenses of ego, assumed competency, politics, ideology, and
economics. The slow and unwieldy bureaucratic structure provides false hope, while in many
major cases there is a leadership vacuum.

Experts who have been trained and field-tested in a variety of crises will be able to
analyze and evaluate potential courses of action more quickly than political candidates
and appointees who may lack the needed skills. Further, closer alignment with 911 services,
FEMA, and Homeland Security to give local agents the freedom of action to cope with an
impending crisis is imperative.

In addition, technological advances have made it possible to delineate the parameters
for non-human rational decision-making. The benefit would be an unbiased, consistent
decision that is not subject to the pressures and political issues surrounding human
decision-making. For example, the Dutch Decision Support System has made significant
progress in flood mitigation (Price, 2000).

There’s an adage that “anything can be accomplished if no one is worried about who
gets the credit”. However, a more apt maxim has become “nothing gets done so that no one
gets the blame”. Making rational decisions follows a series of logical steps. However,
rationality may be suspended in the Black Box!

REFERENCES

Arsham, H. Decision science. Retrieved November 26, 2005 from
http://home.ubalt.edu/ntsbarsh/index.html
DeYoung, R. (2002). Practical-theoretical approach in the application of theory models of
organizational behavior. Journal of American Academy of Business, 361-364.
French, S. (2005). Believe in the model: mishandle the emergency. Journal of Homeland
Security and Emergency Management, 2(1).
Hart, B. (2005). Weathering the storm. Journal of Commerce, 6(38), 58.
Heracleous, L. (1994). Environmental dynamism and strategic decision-making rationality.
Management Development Review, 7(4).
Kaufmann, W. (1973). Without guilt and justice: from decidophobia to autonomy.
New York: P.H. Wyden.
Mano-Negrin, R. & Sheaffer, Z. (2004). Are women "cooler" than men during crisis?
Exploring gender differences in perceiving organizational crisis preparedness
proneness. Women in Management Review, 19(2).
Pearson, C. & Clair, J. (1998). Reframing crisis management. The Academy of
Management Review, 23(1), 59-77.
Price, R. (2000). Telematics assisted handling of flood emergencies in urban areas (DGXIII).
Retrieved November 30, 2005 from
http://www.unescohe.org/hi/default.htm?http://www.unesco-ihe.org/hi/projects.htm
Robbins, S. & DeCenzo, D. (2005). Fundamentals of Management (5th ed.). Upper Saddle
River, NJ: Pearson.
Rosenthal, U. (1998). Crisis and crisis management: toward comprehensive government
decision-making. Sage Public Administration Abstracts, 25(2).
Simon, H. (1957). Administrative Behavior. NY: The Free Press.
Walls, H. (2002). The myth of rationality. IIE Solutions, 34(12).
Wolshon, B., Urbina, E., & Levitan, M. (2001). National Review of Hurricane Evacuation
Plans and Policies. LSU Hurricane Center.

CHAPTER 5

COMPUTER INFORMATION SYSTEMS

AN ALTERNATIVE APPROACH FOR DEVELOPING AND MANAGING
INFORMATION SECURITY PROGRAM

Muhammed A. Badamas, Morgan State University


badamas@hotmail.com
ABSTRACT

An effective security program has planning, implementation, reporting and
evaluation as the phases of its security management cycle. This is the traditional
management cycle that has been applied to program development, incorporating the input
and support of top management during all four stages of the cycle. The application of the
traditional management cycle, however, often leads to a situation where computer security
is seen as something that can be implemented quickly and effortlessly, resulting in
questions about the productivity of the investment. A new approach to the development of
an information security program is discussed in this paper. This approach considers three
phases (planning, administration and monitoring of information security) instead of the
traditional four. The approach discussed in this paper provides a simple methodology for
achieving the framework of good information security management.

I. INTRODUCTION

The management of information security requires a structured and disciplined process.
Therefore, a well-defined approach to maintaining information security must be adopted.
While the aim of information security management is to protect IT resources, its underlying
aim is to ensure continuity of the organization. An effective security program has
planning, implementation, reporting and evaluation as the phases of its security management
cycle. This is the traditional management cycle that has been applied to program
development, incorporating the input and support of top management during all four stages
of the cycle.

The application of the traditional management cycle, however, often leads to a
situation where computer security is seen as something that can be implemented quickly
and effortlessly, resulting in questions about the productivity of the investment. A new
approach to the development of an information security program is discussed in this paper.
This approach considers three phases (planning, administration and monitoring of
information security) instead of the traditional four.

II. LITERATURE REVIEW

Information, as well as the systems that support it, is an important business
asset. The importance of IT in organizations has introduced the need for information
security. Information Security Management (ISM) is about ensuring business continuity
and minimizing business damage by preventing and minimizing the impact of security
incidents that threaten an organization's information assets (BSI, 1995). ISM refers to the
structured process for the implementation and ongoing management of information security in
an organization (Vermeulen, et al., 2002). The three basic components of information
security are confidentiality, integrity and availability (Pfleeger, 1997). Previous
studies have shown that corporate information is vulnerable to security attacks (Mitchell,
et al., 1999). Management of information security requires a structured and disciplined
process and a well-defined approach to maintaining information security. Such an approach
requires information security management, the structured process for the implementation
and ongoing management of information security in an organization (Vermeulen, et al.,
2002).

Information security management should be treated like any other business function, with
its activities based upon business needs and policies (Mason, 2000). The policies should be
developed by the management steering committee, which will have the authority to
establish the overall direction of the organization's information systems (Scott, 1986).
Policies provide the guidelines for operational security, as well as physical and technical
security (Vermeulen, et al., 2002).

Activities and decisions involved in information security management are the ultimate
responsibility of top management (Dinnie, 1999). Standards are very important, and
information security management standards must be considered in the development and
management of information security (Solms, 1999).

III. SECURITY PLANNING

The first phase of the new approach concerns security planning. Security planning
involves policy, standards and plans. A security policy is a statement indicating:

• What is to be protected
• Who is responsible
• When the policy takes effect
• Where within the organization the policy reaches
• Why the policy was developed, and
• How a breach of the policy will be treated
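These six questions can be captured in a simple structured record. The sketch below is hypothetical; the field names merely restate the questions above and do not follow any standard:

```python
# A minimal policy-statement record covering the what/who/when/where/
# why/how questions above. Structure and example values are illustrative.
from dataclasses import dataclass

@dataclass
class SecurityPolicy:
    what_is_protected: str   # What is to be protected
    who_is_responsible: str  # Who is responsible
    effective_date: str      # When the policy takes effect
    scope: str               # Where within the organization it reaches
    rationale: str           # Why the policy was developed
    breach_handling: str     # How a breach will be treated

policy = SecurityPolicy(
    what_is_protected="All electronically stored company data",
    who_is_responsible="Line managers, overseen by the security officer",
    effective_date="2006-01-01",
    scope="All operating departments",
    rationale="Ensure continuity of the organization",
    breach_handling="Report to the steering committee and investigate",
)
print(policy.what_is_protected)
```

Keeping the record this short mirrors the point below that a policy statement does not need to be long, only complete on these questions.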

A policy statement establishes a standard for security-related policies and procedures
in the operating departments. It also provides credibility and documented support for the
security officer's functions. A policy requires decisions on some critical issues, such as
whether the security program will address only electronically stored data or all company
information, and whether line management will be required to accept its security
responsibilities. It also provides guidelines and standards for subsequent steps in
security program development and communicates the organization's concern and commitment.

The security policy statement does not need to be long. Policy scope must be clearly
defined, responsibilities must be formally assigned, and the policy must be related to other
active policies such as physical security and systems responsibilities. The policy should not
define procedures or other implementation details and should not specify titles or names
that may become obsolete through reorganization. A policy is effective only if it is
communicated and enforced. Line managers should be required to show solid evidence of
policy implementation in their individual areas.

Standards are used to set objectives which need to be achieved. Systems are often
converted to production mode without adequate security or internal controls. There is often
a lack of quality procedures and assurance in IT production, leading to defective products
that are unstable (Tryfonas, et al., 2001). There is better control if system development
standards address security and control. Work steps should identify security and control
requirements, and relate security requirements to the specific controls that will be
implemented.

Planning is a process that helps to ensure that security is addressed in a
comprehensive manner throughout a system’s life cycle. There are two main tasks that are
central to planning computer security: assignment of responsibility for security planning
and assessment of risks. Responsibility for direction and oversight should be assigned to a
management steering committee, with individuals in strategic positions from each of the
high priority areas to be addressed by the overall security program. To be effective, the
committee needs visibility and clout. Representation from the highest level of management (officers or board members, even if by proxy) is essential.

Security is the state of being free from unacceptable business risk. A management committee with members from different departments and disciplines can design a mission statement for an information security group. Responsibility for
ongoing security administration should be assigned to a security officer who must be placed
sufficiently high up in the organization to accomplish the job without undue influence by
organizational politics. Because the responsibilities of the position will vary, no one
organizational arrangement will work for every company. Ultimately, responsibility for the
security of specific information and information systems should be assigned to the line
managers who are primarily responsible for the systems. Information security
responsibilities should be seen as an integral part of line management responsibility, as it is
for other company resources.

Risk analysis is the process of identifying an organization’s greatest risks. There are
many risks attached to system development and there is a need for risk analysis (Maguire,
2002). Classic risk analysis, however, can be expensive and time-consuming. The results
are often misleading, their usefulness is outdated quickly, and seldom, if ever, are the
results as exact as they appear. A highly quantitative approach to security risk analysis can
be justified only when both the cost of implementing security measures and the potential
benefits of these measures are very large, and when the decision of whether to implement
the measures is not obvious. In all cases, a qualitative approach to risk analysis is more
efficient and effective.

The first step in qualitative risk analysis is to gain a business perspective on the
company’s most critical business functions and assets. Opinions from the CEO, top
financial and operating officers, and directors of major departments should be obtained.
These functions and assets are then related to the company’s high-volume transactions,
proprietary information, cash flow, and customer, creditor, and employee relationships.
Next, the systems and resources used for executing the critical functions are determined. The
procedures of the information services department are reviewed to see how well existing
security and controls work. Finally, deficiencies that could lead to loss or compromise of
these critical assets and functions are identified. With this qualitative method, meaningful
priorities for the security program can be established with only a modest investment in risk
analysis.
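The qualitative ranking described above can be sketched as a simple ordinal scoring exercise. The assets, ratings, and scoring rule below are hypothetical illustrations, not data from the paper:

```python
# Hypothetical illustration of qualitative risk analysis: critical assets
# are rated on ordinal scales rather than priced in dollars.

RATING = {"low": 1, "medium": 2, "high": 3}  # ordinal scale, not currency

# (asset, business impact if compromised, likelihood of compromise)
assessments = [
    ("customer database",        "high",   "medium"),
    ("payroll system",           "high",   "low"),
    ("public web content",       "low",    "high"),
    ("proprietary pricing data", "medium", "medium"),
]

def priority(impact: str, likelihood: str) -> int:
    """Combine two ordinal ratings into a single ranking score."""
    return RATING[impact] * RATING[likelihood]

# Rank assets so the security program can address the highest risks first
ranked = sorted(assessments, key=lambda a: priority(a[1], a[2]), reverse=True)
for asset, impact, likelihood in ranked:
    print(f"{asset:25s} impact={impact:6s} likelihood={likelihood:6s} "
          f"priority={priority(impact, likelihood)}")
```

The point of the sketch is that meaningful priorities emerge from coarse ratings alone, without the expense of a fully quantitative analysis.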

The planning phase is complete once responsibilities are assigned, management objectives are defined and communicated, and security priorities are established. The next phase in the suggested approach is security administration.

IV. SECURITY ADMINISTRATION

Security administration involves the procedures for determining and implementing controls in the security management process. The administration of a computer security program has two activities:
• monitoring of security-related events, and
• continuous assessment of the impact of business and technological changes on the
organization’s information security.

These activities must be backed up by written procedures and reporting requirements to ensure that security objectives are followed. Informal security administration leads to higher security costs and weakened protection of critical information. Controls are necessary to prevent or deter the causes of risks, limit possible losses, or assist in recovering from the consequences of risk occurrences. Any logical or physical mechanism that mitigates risks is considered a control. There are three types of controls applicable to the security management process.

Technical controls are those related to and dependent on software and computer hardware systems. These include data integrity and systems software integrity, embodying:
• Reliability of the data source
• Source data preparation
• Data entry control
• Data input acceptance control
• Output controls
• Auditability
• Evidence of correctness of software
• Evidence of robustness
• Evidence of trustworthiness
• Programming standards in operation
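As an illustration of one technical control from the list, the sketch below shows a data input acceptance check. The field names and validation rules are invented for the example, not drawn from any standard:

```python
# Minimal sketch of a data input acceptance control: each incoming record
# is validated against simple rules before it is accepted for processing.

def accept_record(record: dict) -> tuple[bool, list[str]]:
    """Return (accepted, list of violations) for one input record."""
    errors = []
    if not record.get("account_id", "").isdigit():
        errors.append("account_id must be numeric")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or amount <= 0:
        errors.append("amount must be a positive number")
    if record.get("entered_by", "") == "":
        errors.append("entered_by is required for auditability")
    return (len(errors) == 0, errors)

ok, errs = accept_record({"account_id": "1042", "amount": 250.0, "entered_by": "jdoe"})
bad, why = accept_record({"account_id": "A-17", "amount": -5})
```

Rejected records, together with the reasons for rejection, would themselves feed the auditability and output controls listed above.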

Physical controls relate to physical access to the computer systems and the computer
room, and the prevention of unauthorized personnel from gaining physical access to the
computer system.

Administrative controls include internal controls on application software, system software and company policy on computer security. Administrative controls involve keeping records of authorized users, assignment of passwords, and allocation of computer time and terminals.
V. AUDIT AND THREAT MONITORING

It is necessary to keep records for the purpose of tracing activities harmful to the security and integrity of the computer system and its information. The audit and threat monitoring activities involve threat and risk evaluation, vulnerability analysis and control analysis.

Many writers consider risk a factor to be addressed only after the implementation of the system (Curtis, 1998; Simon, 2001). Threat and risk evaluation is necessary in order not to waste resources protecting information from nonexistent and superficial threats while ignoring potential and actual threats. The amount of resources allocated to a given threat should also be proportional to the risk involved.

When risk is applied to computer security, it is considered an interaction of cause and consequence. The exact nature of this interaction is readily revealed when a matrix of causes and consequences is constructed. A cause is any original instrument or agent that initiates a risk occurrence. Three frequent causes of risk in computer security are people, equipment or power sources, and legal sanctions. Consequences are not totally independent of one another. The linkage between a cause and a consequence is the means. Means can refer to an operational technique, such as when someone forges a signature to obtain cash or when a critical circuit failure removes a computer from service. Means can also designate a series of cause and consequence relationships: the accidental disclosure of a database record, for example, may in turn result in the loss of an asset.
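The cause-and-consequence matrix might be represented as follows. The entries are illustrative, not taken from the paper:

```python
# Sketch of a cause-and-consequence matrix: each cell records the linking
# "means", if one has been identified during threat evaluation.

causes = ["people", "equipment/power", "legal sanctions"]
consequences = ["disclosure", "loss of asset", "service interruption"]

# Empty matrix: no means identified yet for any (cause, consequence) pair
matrix = {c: {q: None for q in consequences} for c in causes}

# Fill in the linkages found during evaluation (hypothetical examples)
matrix["people"]["disclosure"] = "forged credentials"
matrix["people"]["loss of asset"] = "forged signature to obtain cash"
matrix["equipment/power"]["service interruption"] = "critical circuit failure"

for cause in causes:
    for consequence, means in matrix[cause].items():
        if means:
            print(f"{cause} -> {consequence} via {means}")
```

Empty cells make visible which cause-and-consequence pairs have not yet been analyzed, which is precisely what the matrix construction is meant to reveal.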

Vulnerability analysis involves efforts to locate the areas where the computer system is vulnerable to compromise. This analysis is necessary during an audit because it determines where a threat can affect computer security.

Control analysis is the determination of whether the controls in place are adequate to prevent any breaches of security. This analysis is done after determining the threats and vulnerable areas that can affect computer security.

Lastly, audit and threat monitoring includes post-processing controls and interactive
controls such as:
• Audit of accounting
• Mapping
• Tracing
• Storage of information at a remote location
• Online system performance evaluation
• Real-time error detection
• Monitoring of adequacy of controls
• Monitoring unusual conditions and insertions
Audits and monitoring are two basic methods of operational assurance. An audit is a one-time or periodic event to evaluate security, while monitoring refers to an ongoing activity that examines either the system or the users, typically in real time.
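One of the interactive controls listed above, real-time monitoring of unusual conditions, could be sketched as follows. The log format, event names, and threshold are assumptions invented for the example:

```python
# Hedged sketch of real-time threat monitoring: scan a stream of log lines
# and flag an unusual condition (repeated failed logins) as it happens.

from collections import Counter

FAILED_LOGIN_THRESHOLD = 3  # alert after this many failures per user

def monitor(log_lines):
    """Yield an alert string for each user crossing the failure threshold."""
    failures = Counter()
    for line in log_lines:
        if "LOGIN_FAILED" in line:
            user = line.split()[-1]          # last token is the user name
            failures[user] += 1
            if failures[user] == FAILED_LOGIN_THRESHOLD:
                yield f"ALERT: {failures[user]} failed logins for {user}"

sample = [
    "09:01 LOGIN_FAILED alice",
    "09:02 LOGIN_OK bob",
    "09:02 LOGIN_FAILED alice",
    "09:03 LOGIN_FAILED alice",
]
alerts = list(monitor(sample))
```

Because the function is a generator, alerts are emitted as the offending lines arrive rather than after the fact, matching the distinction drawn above between ongoing monitoring and periodic audit.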

The security management process is dynamic and continuous. Planning leads to administrative action, which in turn necessitates audit and threat monitoring. Threats identified during an audit result in the recommencement of the security management process.

VI. CONCLUSION

Security is never perfect, and there are always new methods of subverting it. Management should be aware of these changes and prepare for them. Planning is the crucial step for defining and executing security needs. But without security administration, followed by monitoring of the entire cycle, significant progress toward information security depends more on chance than on the true capabilities of the organization.

The three phases of security planning, security administration and monitoring of information security discussed in this paper simplify the management of an information security program. Compared with the traditional methodology, the alternative methodology saves the organization effort and resources and provides an efficient and effective method of managing information security. The approach provides a simple methodology for achieving good information security management.

REFERENCES

British Standards Institution, Code of Practice for Information Security Management Systems, BSI, London, U.K., 1995.
Curtis, G., Business Information Systems, Addison-Wesley, Reading, MA, 1998.
Dinnie, G., "The second annual global information security survey", Information Management & Computer Security, Vol. 7, No. 3, 1999, pp. 112-120.
Maguire, S., "Identifying risks during information system development: managing the process", Information Management & Computer Security, Vol. 10, No. 3, 2002, pp. 126-134.
Mason, P., "Info-security is an issue for all", Computer Weekly, No. 6, 2000.
Mitchell, R.C., Marcella, R. and Baxter, G., "Corporate information security management", New Library World, Vol. 100, No. 1150, 1999, pp. 213-227.
Pfleeger, C.P., Security in Computing, Prentice Hall International, Englewood Cliffs, NJ, 1997.
Scott, G.M., Principles of Management Information Systems, McGraw-Hill, New York, NY, 2002.
Simon, J.C., Introduction to Information Systems, John Wiley & Sons, New York, NY, 2001.
von Solms, R., "Information security management: why standards are important", Information Management & Computer Security, Vol. 7, No. 1, 1999, pp. 50-58.
Tryfonas, T., Kiountouzis, E. and Poulymenakou, A., "Embedding security practices in contemporary information systems development approaches", Information Management & Computer Security, Vol. 9, No. 4, 2001, pp. 183-197.
Vermeulen, C. and von Solms, R., "The Information Security Management Toolbox: taking the pain out of security management", Information Management & Computer Security, Vol. 10, No. 3, 2002, pp. 119-125.

UTILIZATION OF INFORMATION RESOURCES FOR STRATEGIC
MANAGEMENT IN A GLOBAL ENTERPRISE

Muhammed A. Badamas, Morgan State University
badamas@hotmail.com

Samuel A. Ejiaku, Morgan State University
sejiaku@yahoo.com

ABSTRACT

Management of global enterprises involves considering the value of information resources, the appropriateness of that value, its rate of depreciation and the mobility of the information resources. This paper discusses three views of the forces that influence an organization's competitive environment and that help management align IS strategy with business strategy. In addition, the paper investigates the link between business strategy and information strategy, which helps the management of a global enterprise understand how information resources can provide sustainable competitive advantage in a global economy.

I. INTRODUCTION

The global economy is characterized by a global flow of funds and interdependence among economies, with open markets for goods, services and labor, resulting in IT-based products and services. Organizations face more competition today than they faced some years ago. For a business to succeed, it must rely on the right combination of organizational resources, all working together to penetrate and achieve leadership in the global economy. Enterprises that seek to excel and to exploit IT for global economic power need information resources. To achieve a competitive edge, the management of a global enterprise must move beyond effective and efficient management of information technologies to the strategic use and application of information resources. This paper discusses the link between business strategy and information strategy and the information resources that provide sustainable competitive advantage in a global economy.

II. LITERATURE REVIEW

Information technology is now a strategic part of most businesses, enabling the redefinition of markets, industries and the strategies and designs of firms competing within them (Applegate et al., 2003). Organizations that use information to make better business decisions will enjoy an edge in achieving success (Myburgh, 2002). The global economy, now a networked economy, encompasses various relationships between people, since all economies are built on the premise of meeting human needs and desires by carrying out transactions. Organizations of all types in the global economy need to learn how to use the new combination of computers, connectivity and human knowledge to remain competitive and to survive (McKeon, 2003).

In justifying expenditures on information systems, organizations no longer rely on return on investment (ROI) because it is no longer appropriate (May, 1997). Information is an asset with a calculable return on investment (Eiring, 2002). It is known that investments in information systems are large (Earl et al., 1994). Many organizations spend millions of dollars on information systems but are unable to develop adequate and usable functions (Lee, 2001). Strategic investments in information are now made to gain competitive advantage (Weill et al., 2003). An organization's investment in information technology must be in line with its strategic objectives, building the capabilities necessary to deliver business value (Weill et al., 2003). Information has become a major economic good, frequently exchanged in concert with, or even in place of, tangible goods and services (Applegate, 2003). An information system used in an efficient process brings more value to performance than the same information system used in an inefficient process. Information resources can be considered "the services, packages, support technologies, and systems used to generate, store, organize, move and display information" (Orna, 1990).

III. THE EVOLUTION OF INFORMATION RESOURCES AS STRATEGIC TOOLS

From the 1960s to the 1990s, IS strategy was driven by the internal needs of the organization, to lower existing transaction costs. Later, it was used to provide support for managers by collecting and distributing information. As competitors built similar or newer systems, any advantage gained from using IS diminished. Organizations now seek not only applications that provide them with an advantage over the competition, but also applications that keep them from being outmaneuvered by new start-ups with innovative business models or by traditional companies entering new markets.

An organization generates, acquires, processes and uses information (Myburgh, 2000). The generated information is used for monitoring the organization's performance, creating and communicating instructions, exchanging ideas, experience and knowledge, scanning the business environment and making major and minor organizational decisions. The following are some of the information resources available to an organization:

• IS infrastructure (hardware, software, network, and data components)
• Information, knowledge and proprietary technology
• Technical skills of the IT staff and end users of the IS
• Relationships between IT and business managers
• Business processes
The global manager must understand the type of advantage an information resource might create. The value of the information resource, the appropriateness of that value, the rate of its depreciation and the mobility of the information resource must therefore be considered.

IV. THE VALUE OF INFORMATION RESOURCE AND ITS DEPRECIATION

With the lack of empirical support for a positive economic impact of IT on organizations, the productivity paradox of IT necessitates that we consider the value of information resources (Jon-Arild, 1997). The market forces of supply and demand determine the value of information resources, which lies in the value of the actions an organization takes as a result of having the information (Myburgh, 2002). The value of information is determined by:
a. Assessing the quality of the information itself regarding accuracy, comprehensiveness, credibility, relevance, simplicity and validity
b. Evaluating the impact of the information on productivity
c. Assessing the impact on the effectiveness of the organization in terms of contributing to new markets, improved customer satisfaction, meeting targets and objectives, and promoting more harmonious relationships
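A minimal sketch of such a value assessment, assuming hypothetical weights and 1-5 ratings for the three dimensions above (none of which come from the paper):

```python
# Illustrative scoring of information value along three dimensions:
# quality, productivity impact, and organizational effectiveness.
# Weights and ratings are invented for the example.

WEIGHTS = {"quality": 0.4, "productivity": 0.3, "effectiveness": 0.3}

def information_value(scores: dict) -> float:
    """Weighted average of 1-5 ratings on the three dimensions."""
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

# A hypothetical market report rated by its users
market_report = {"quality": 4, "productivity": 3, "effectiveness": 5}
print(round(information_value(market_report), 2))
```

The choice of weights is itself a management decision reflecting which of the three dimensions matters most to the organization's competitive position.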

Scarcity does not add value unless need exists, driven by the competitive position of the organization within its industry. Scarce information resources are likely to be either hard to copy or hard to substitute, and a demand must exist for them.

V. THE VALUE CREATED BY THE INFORMATION RESOURCES AND THE VALUE CHAIN

The value chain model helps determine where a resource's value lies. The value chain depicts the sequence of activities in a process that add value. The information management value chain focuses on the discrete activities that incur costs in order to add value to information. The purpose of value determination is to improve the usefulness of information to its ultimate users, helping them make better decisions (Cisco, 1999). The value chain is the chain of activities that creates customer value and can be divided into two broad categories: support and primary activities. Primary activities relate directly to the value created in a product or service, while support activities make it possible for the primary activities to exist and remain coordinated. Each activity affects how other activities are performed, suggesting that information resources should not be applied in isolation. If information resources are focused too narrowly on a specific activity, the expected value increase may not be realized if other parts of the chain are not adjusted. The value chain framework suggests that competition stems from two sources: lowering the cost of performing activities and adding value to a product or service so that buyers will pay more. To achieve true competitive advantage, an organization requires accurate information on elements outside itself. Lowering activity costs achieves advantage only if the organization possesses information on its competitors' cost structures.

Adding value is a strategic advantage only if an organization possesses accurate information regarding its customers. While the value chain framework emphasizes the activities of the individual organization, it can be extended to include the organization in a larger value system. From this perspective, a variety of strategic opportunities exist to use information resources to gain competitive advantage. Understanding how information is used within each value chain of the system can lead to the formation of entirely new businesses designed to change the information component of value-added activities.

Developing unique knowledge-sharing processes can help reduce the impact of the loss of an employee where management relies on that individual's IT skills. Such reliance exposes a firm to the risk that key individuals will leave the organization, taking the resource with them. Recording the lessons learned from all team members after the completion of each project is one attempt at lowering this risk.

VI. INFORMATION SYSTEMS STRATEGY AND BUSINESS STRATEGY

Strategy is the creation of a unique and valuable position, involving a different set of activities. The essence of strategic positioning is to choose activities that differ from rivals' activities (Porter, 1998). Strategy is creating a relationship among an organization's activities. There are three views on the alignment of IS strategy with business strategy. The first view strategically directs information resources to alter the competitive forces to benefit the organization's position. The second view uses the value chain model to assess the internal operations of the organization (Porter, 1980). The third view uses the theory of strategic thrusts to understand how organizations choose to compete in their selected markets.

One way competitors differentiate themselves with an otherwise undifferentiated product is through creative use of IS. Companies erect entry barriers by offering customers and other market participants attractive products and services at a price and level of quality that competitors cannot match (Applegate et al., 2003). Digital technology makes it possible for new entrants to serve customers in innovative ways, upsetting the careful plans of established companies and even threatening short-term profitability (Downes, 2001).

VII. USING INFORMATION RESOURCES TO SUPPORT STRATEGIC THRUSTS OF ORGANIZATION

Charles Wiseman's theory of strategic thrusts provides a comprehensive framework management can use to identify opportunities to apply an organization's information resources competitively (Wiseman, 1988). Built on the concepts of Chandler (1998) and Porter (1998), it identifies the major efforts that organizations undertake to gain competitive advantage. These are:

• Differentiation thrusts that focus resources on product or service gaps not filled by
competitors. These will allow value to be created and offered to customers in a new
form.
• Cost thrusts that focus resources on reducing costs incurred by the firm, by
suppliers, or by customers, or on increasing the costs of a competitor.
• Innovation thrusts that focus resources on creating new products to sell or on
creating new processes of creating, producing, or delivering a product.
• Growth thrusts that focus resources on acquisition, joint venture, or agreement. The
purpose of an alliance is to create one or more of the following four generic
advantages in the market: product integration, product distribution, product
extension, and product development. Alliances require coordinating information
resources of different organizations over extended periods of time.

These thrusts represent the strategic purposes that drive the use of the organization’s
resources. The organization has two choices when applying a strategic thrust. Either the thrust
acts offensively to improve the competitive advantage of the firm or defensively to reduce the
opportunities available to competitors. Each choice requires a different perspective on
collecting, organizing, and using information in the organization based on the purpose of the
thrust. An organization has two choices for direction: use the information system itself or provide the system to the chosen target. Many examples exist where organizations began using a system and then gained further advantages by providing the system to a new target. Two basic commercial strategies fuel this growth: product differentiation and low cost/price. For above-average performance, management has four generic strategies: cost leadership, differentiation, cost focus, and focused differentiation.

Competitive advantage within these generic strategies stems from the way an organization organizes and performs particular activities. These activities in turn result in strategic advantage supported by technology and information systems. Focusing on strategic thrusts helps ensure that information resources are used with the same intent as the rest of the organization's resources.

VIII. CONCLUSION

Organizations face more competition today than before and organizations that use
information strategically will enjoy an edge in achieving success. Such organizations must
process and use all the information resources available to them. To better understand the type
of advantages the information resources might create, organizations need to consider the value
of information resources, the appropriateness of the value, the rate of depreciation of the value
of the information resources and the mobility of these information resources. Understanding
the forces that influence an organization's competitive environment and using the value chain model to assess the internal operations of the organization helps management align IS strategy with business strategy. Information resources can be used to support the strategic thrusts of the organization to achieve a competitive edge in its industry.

REFERENCES

Applegate, L., Austin, R. and McFarlane, F., Corporate Information Strategy and Management, McGraw-Hill Irwin, IL, 2003.
Chandler, A., The Dynamic Firm, Oxford University Press, Oxford, UK, 1998.
Cisco, S.L. and Strong, K.V., "The Value Added Information Chain", The Information Management Journal, Vol. 33, No. 1, 1999.
Downes, L., "Strategy Can Be Deadly Industry Trend or Event", The Industry Standard, 2001.
Earl, M.J. and Feeny, D.F., "Is Your CIO Adding Value?", Sloan Management Review, MIT, Boston, MA, 1994.
Eiring, H.L., "The Evolving Information World", The Information Management Journal, Vol. 36, No. 1, 2002.
Jon-Arild, J., Olaisen, J. and Olsen, B., "Strategic Use of Information Technology for Increased Innovation and Performance", Information Management and Computer Security, Vol. 7, No. 1, 1997, pp. 5-22.
Lee, C.S., "Modelling the Business Value of Information Technology", Information and Management, Vol. 39, No. 3, 2001.
May, T.A., "The Death of ROI: Re-thinking IT Value Measurement", Information Management and Computer Security, Vol. 5, No. 3, 1997, pp. 90-92.
McKeon, P., Information Technology and the Networked Economy, Course Technology, New York, NY, 2002.
Myburgh, S., "The Convergence of Information Technology and Information Management", The Information Management Journal, Vol. 34, No. 2, 2000.
Myburgh, S., "Strategic Information Management: Understanding a New Reality", The Information Management Journal, Vol. 36, No. 1, 2002.
Orna, E., Practical Information Policies: How to Manage Information Flow in Organizations, Gower, Aldershot, UK, 1990.
Porter, M., Competitive Strategy, The Free Press, New York, NY, 1980.
Porter, M., "What Is Strategy?", Harvard Business Review, Vol. 74, No. 6, 1998.
Wiseman, C., Strategic Information Systems, Irwin, Homewood, IL, 1988.
Weill, P. and Broadbent, M., Leveraging the New Infrastructure, Harvard Business School Press, Boston, MA, 2003.

MANAGING THE ENTERPRISE NETWORK: PERFORMANCE OF ROUTING
VERSUS SWITCHING ON A STATE OF THE ART SWITCH

Mark B. Schmidt, St. Cloud State University
mark@stcloudstate.edu

Mark D. Nordby, St. Cloud State University
noma0401@stcloudstate.edu

Dennis C. Guster, St. Cloud State University
dcguster@stcloudstate.edu

ABSTRACT

Given the recent increased dependence on data networks for Internet and other
business needs, there is an increased need to address the functionality and security of the
devices which allow the transmission of data. To that end, this paper examines the relative
efficiencies of both switches and routers utilizing data obtained in a controlled laboratory
experiment. A Force 10 E-300 configurable switch was used to gather data with eight
configurations ranging from a 1 by 3 to a 2 by 6. The data collected appears to suggest that
while there is a difference between the packet inter-arrival time and the mean packet intensity
in comparing results for the separate router and switch configurations there is no difference
between the mean throughput between the router and switch.

I. INTRODUCTION

Ethernet emerged as the primary local area network (LAN) architecture in the 1990s. During this time, networks made the transition from a shared medium, in which workstations were concentrated with hubs, to an environment where a switched configuration was utilized (Guster and Holden, 1996). What drove this conversion was a desire for better performance, because each port received a dedicated bandwidth allocation instead of sharing the available bandwidth with every other port on the switch. The switch also offered better security: a packet sniffer connected to a port on a hub could see all traffic coming in and out of that hub, whereas on a switched network the sniffer could see only the traffic coming in and out of the port to which it was connected. This transition was further aided by the fact that many switches were designed to be self-configuring. Because a hub is essentially a multiport repeater and requires no configuration, a hub could effectively be replaced with a switch. Switches have the added functionality of sending out address-probing packets and, based on those packets, learning the required configuration parameters on their own (Guster and Hall, 2000). Moreover, existing network management personnel need little or no additional training. For these reasons, it made business sense to utilize switches.

In this new environment, there was also a trend to minimize routing. Historically, in hub-based LANs there was a need in certain applications to exercise more control than a hub could provide. For example, a company may have developed LANs for each of its departments independently and needed to implement a different access control policy for each department (Yuan and Strayer, 2001). In that case, a router could easily provide the connecting point for all of these independent LANs and be programmed to control access to different LAN segments by the network address structure (OSI layer 3). The switch revolution of the 1990s, and the need to recover internet address licenses because of the limitations of IP version 4's 32-bit address scheme, relegated routing on the LAN level to providing the interface to the wide area network (WAN) or the Internet (Guster and Shilts, 1998). Because this implementation allowed forwarding based on the physical rather than the network address, it provided better performance and required fewer personnel resources to implement and manage (Tanenbaum, 1995).

With the near ubiquity of the Internet and other networks, there is a new paradigm in place in regard to network security (Schmidt and Arnett, 2005). Effective security paradigms necessarily require mechanisms for deterrence, prevention, detection, and remediation (Straub and Welke, 1998). The implementation of security should take a holistic approach and be addressed prior to a system's implementation. Indeed, security measures can significantly impede system performance: after a short period of network activity, certain metrics, including disk access time, were found to increase by 20% (Arnett and Schmidt, 2005). If one accepts the premise that added management capabilities are required in future enterprise networks to help mitigate security concerns, then data are needed about the performance differences between switched and router-based networks so that decision makers can make informed and objective decisions regarding the most economic, efficient, and effective network implementations. To that end, the purpose of this paper is to run a series of controlled experiments with a high-end switch that is programmable as either a switch or a router to ascertain its performance capabilities in each role.

II. METHOD

Several test-bed networks of various sizes were configured and provided the destination mix for the various experiments. A workload generation program was used to offer a consistent network load across the multiple experiments. A Force 10 E-300 switch was used to forward the packets. The manner in which this switch was programmed (either as a switch or as a router) became the experimental treatment. The test-beds were configured in the following four ways: three devices sending to one device, six devices sending to one device, three devices sending to two devices, and six devices sending to two devices. The devices were standard Intel-based PCs running the Linux operating system. A packet sniffer was placed on the receiving device(s) and collected the arrival time, size and source/destination addresses of each packet as it traveled the network.

TCPDUMP was used to capture packets generated with eight different configurations
ranging from 1 by 3 to 2 by 6. These eight configurations produced 36 sets of 100,000
captured packets, totaling 3.6 million packets. The first six sets were generated with the Force
10 E-300 configured as a router; each set consisted of 100,000 captured packets. The first
three sets were captured by one machine with three other machines generating traffic. The
next three sets were captured by one machine from six machines generating traffic. These
same experiments were then repeated with the E-300 configured as a switch.

The next phase of the experiment consisted of two machines capturing traffic
simultaneously: first, two machines capturing data from three senders, and then from six. As
in the preceding experiments, the E-300 was configured first as a router and then as a switch,
with runs of 100,000 packets each. Runs from like machines were then combined into files of
200,000 packets each.
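To make the reported metrics concrete, the following sketch (our illustration, not the authors' actual analysis tooling) computes the three measures used in the results — inter-arrival time, throughput, and intensity — from a list of (timestamp, size) pairs such as might be parsed from a TCPDUMP capture. The five-packet trace shown is hypothetical.

```python
import statistics

def summarize_capture(packets):
    """Summarize a packet trace: `packets` is a list of
    (arrival_time_seconds, size_bytes) tuples, such as might be
    parsed from a TCPDUMP capture."""
    times = [t for t, _ in packets]
    gaps = [b - a for a, b in zip(times, times[1:])]
    duration = times[-1] - times[0]
    return {
        "interarrival_mean": statistics.mean(gaps),
        "interarrival_sd": statistics.stdev(gaps),
        "throughput_mean": sum(s for _, s in packets) / duration,  # bytes/sec
        "intensity_mean": len(packets) / duration,                 # packets/sec
    }

# Hypothetical five-packet trace: 0.2 ms spacing, 1000 bytes each.
metrics = summarize_capture([(i * 0.0002, 1000) for i in range(5)])
```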

148
III. RESULTS

The results from the eight experimental trials are depicted in Table I. To help ensure
reliability of the experiments, the trial for each row in Table I below was run three times to
check for consistency. It was determined that the variation within each of the row trials was
minimal and consistent with what would be expected from a workload distribution generator
that uses a random number to devise its distribution. To account for this slight variation, the
values reported below come from the row trial with the median mean inter-arrival time.

Table I
Means and Standard Deviations for the Various Device Mixes

Device/Mix    Inter-arrival  Inter-arrival  Throughput Mean  Throughput SD   Intensity Mean  Intensity SD
              Time Mean      Time SD        (per second)     (per second)    (per second)    (per second)
Switch 1x3    .00065         .00759         18,756,470.5     11,052,507.7    40,908.9499     23,071.8240
Router 1x3    .00075         .00863         18,822,434.1     11,464,997.3    38,501.0307     22,473.4411
Switch 1x6    .00018         .00085         21,665,126.4        267,198.0    55,974.6646        771.3960
Router 1x6    .00105         .01165         22,164,281.4     10,806,463.2    35,798.0194     21,830.0938
Switch 2x3    .00025         .00202         25,968,404.6      4,881,044.9    56,616.5839     11,111.0244
Router 2x3    .00068         .00809         17,321,205.8      9,887,535.3    35,250.3132     19,613.3701
Switch 2x6    .00064         .00753         18,922,729.5     10,010,317.6    36,933.9970     18,767.8578
Router 2x6    .00068         .00802         17,207,468.5      9,950,445.8    35,493.9146     20,016.1341

The data from Table I will be used to address the following research propositions:
P1. There is no difference in packet inter-arrival time between a switch and a router
configuration.

P2. There is no difference in mean throughput between a switch and a router
configuration.

P3. There is no difference in mean packet intensity between a switch and a router
configuration.

After a thorough analysis of the data, it is apparent that there is a difference in packet
inter-arrival time. The inter-arrival times in the switch configuration are smaller, which
would be expected because the switch only needs to read to the OSI layer 2 header, whereas
the router must read to the layer 3 header. However, the differences in every mix but one
(1x6) are less than .0005 milliseconds. To put these values in perspective, it is useful to look
at the suggested values for an efficient network application. Oppenheimer (1999) reports that
the maximum target response time to an end user should fall into the 400-800 millisecond
range. If responses start exceeding this target by two or three standard deviations, end users
become dissatisfied and may resubmit the same request or abandon the application entirely.
Notably, the Force 10 device configured as a switch forwarded packets on average in less
than one millisecond, a small portion of the 400-800 millisecond end-to-end target.
Reconfiguring the Force 10 device as a router consistently added delay, but that delay was
minimal, typically in the one millisecond range.

149
Therefore, it appears that P1 can be rejected and there is a difference in performance between
the switch and the router configurations in regard to inter-arrival time.

The throughput results are less clear, and there is no consistent pattern: in some cases
more throughput was observed in the switch configuration, in others in the router
configuration. Furthermore, the workload generation program is, in theory, supposed to send
about the same amount of data in either configuration; it simply arrives more quickly with the
switch. The variation in the results can therefore be attributed in part to the workload
generation program itself, which offers a workload that is consistent only within the
boundaries of a distribution parameter. The mean throughputs ranged from about 17 to about
25 million bytes, which would appear to be consistent with a synthetically generated
distribution. These values may also be affected by sample size. In the one-by samples the
number of packets collected was 100,000, whereas in the two-by samples 200,000 packets
were collected (100,000 at each of the receiving stations). On the surface this sounds like
adequate data, but considering that the switch is designed to support data transfer in the
terabit-per-second range on its backplane, the packets collected may not be representative of
the massive amount of data that might be transferred. Based on the data collected herein, one
finds little evidence to reject P2. Therefore, it appears that there is no statistical difference in
throughput between the switch and the router configurations.

The packet intensity data are in many ways similar to the packet inter-arrival data: in
all cases the intensity was greater for the switch. Why, then, are the throughput values so
inconsistent, sometimes favoring the router configuration? There is in fact less consistency in
the intensity values, which range from about 35 to 56 thousand packets. However, these
values must be evaluated in light of the fact that the packets vary in size. Because these
packets were sent over standard Ethernet, they would be expected to vary from a minimum of
approximately 54 to a maximum of 1514 bytes. Variation in packet size means that the same
amount of data can be sent with fewer packets if the packets are larger. This is evident in the
data, especially at the 1x6 level: the switch uses about 55 thousand packets to yield a
throughput of about 22 million bytes, whereas the router attains about the same throughput
with just 35 thousand packets. This situation makes sense if one remembers that switches and
routers evaluate the destination address of a packet and forward the whole packet accordingly,
rather than evaluating the destination of each byte in the stream. Larger packets take more
time to transfer, of course, but it is the speed at which the device can evaluate and forward
packets that is of prime concern. Therefore, based on the data collected, it appears that P3 can
be rejected. Accordingly, there is a difference in packet intensity between the switch and the
router configurations.
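This reasoning can be checked directly against Table I: dividing throughput mean by intensity mean yields the implied average packet size for each configuration.

```python
# Implied average packet size (bytes per packet) for the 1x6 mix,
# taken directly from Table I: throughput mean / intensity mean.
switch_avg_size = 21_665_126.4 / 55_974.6646  # ~387 bytes/packet
router_avg_size = 22_164_281.4 / 35_798.0194  # ~619 bytes/packet
```

Both values fall comfortably within standard Ethernet's 54-1514 byte frame limits, consistent with the interpretation that the router run used larger packets on average.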

Although the means for the packet inter-arrival rates provide a good idea of central
tendency within the router and switch configurations, distribution plots can provide additional
useful information. An examination of all destination mixes reveals a strikingly similar
distribution pattern, although there is an occasional spike that favors a specific configuration.
Given the small magnitude of the packet inter-arrival means and their associated standard
deviations, one might expect plots of this type. In all cases the switch activity ends before the
router activity. This is explained by the fact that each experiment was run until a specific
number of packets was received: 100,000 for the one-bys and 200,000 for the two-bys. The
switch configuration was therefore able to forward the prescribed number of packets more
quickly than the router configuration.

150
IV. CONCLUSION

It is clear that there was a performance difference between the switch and router
configurations in favor of the switch. This would be expected because the switch only needs
to evaluate the physical address, whereas the router must evaluate the network address.
However, the magnitude of the difference was minimal on this high-end, enterprise-level
switch: typically less than one millisecond on average. Considered as part of the total
inquiry-to-response delay target of 400-800 milliseconds, this value is almost insignificant.
Certainly, it is small enough that a company considering converting its LAN forwarding logic
to gain the added control of routing would not be discouraged by the potential performance
loss.

However, as a matter of practicality, there may be a question of scale in this
experiment. The switch used was nowhere near its capacity: the Force 10 E-300, with its 72
1-Gbps ports and a backplane capacity of over a terabit per second, only had to support a
maximum of 8 devices, each connected at 100 Mbps. When used at the enterprise level, this
switch often becomes the root switch of the switching hierarchy. In other words, instead of
having end-user workstations directly connected to it, the workstations are connected to a
second-level switch, which concentrates the traffic into a more demanding bandwidth stream;
that switch in turn is connected to the enterprise switch. Future research may reveal
interesting results if the experiment were replicated on a work-group-level switch, or, perhaps
more realistically, with a work-group switch connected to the enterprise switch to ascertain
the delay across the switching hierarchy. Along the same line of thinking, the experiment
could be replicated under much higher load levels to determine how delay through the
enterprise-level switch scales under each configuration.

In the near future one can expect that the Internet will remain the dominant networking
infrastructure and the traditional 80/20 traffic model will continue to erode. Further,
applications will continue to grow in bandwidth requirements in part because of the trend to
provide more interactivity through multimedia. Unfortunately, one can expect hacking to
continue to grow as well.

All of these factors reinforce the need for fully routed networks to support the
management, addressing, performance, and security needs of the future. In 1992 the largest
threats to information systems were internal in origin (Loch, Carr, and Warkentin, 1992); this
continues to be the case today (Whitman, 2003). A router allows for more control and
filtering by specific address; due to this increased control, it is one mechanism that can be
utilized to reduce the internal threat. The results contained herein are encouraging: a routing
configuration was demonstrated not to add significantly to delay, at least on a lightly loaded
switch. Additional research is needed to explore these factors on a work-group-level switch
and to examine the effects of scaling on performance as workload increases.

151
REFERENCES

Arnett, Kirk P. and Schmidt, Mark B. "Busting the Ghost in the Machine." Communications of
the ACM., 48, (8), 2005, 92-95.
Guster, Dennis, and Holden, Mark. “Integrating High-Speed Switches into Existing Networks as a
Means of Bridging to the 100Mbs World.” A paper presented at the Small College Computing
Symposium, St. Cloud, MN., 1996, April 18-20.
Guster, Dennis, and Hall, Charles. “Integrating High-Speed WAN Transmission Technologies
into a College Computer Networking Curriculum.” A paper presented at the 33rd Annual
Conference of the Midwest Instruction and Computing Symposium, St. Paul, MN., 2000,
April 13-15.
Guster, Dennis, and Shilts, Aaron. “Deploying Port Manageable Hubs to Create Virtual LAN’s to
Improve Network Performance in a General Education Computer Lab.” A paper presented at
the Small College Computing Symposium, Fargo, ND., 1998, April 16-18.
Loch, Karen D.; Carr, Houston H.; and Warkentin, Merrill E. “Threats to Information Systems:
Today’s Reality, Yesterday’s Understanding.” MIS Quarterly., 16, (2), 1992, 173-186.
Oppenheimer, Priscilla. “Analyzing Technical Goals and Constraints.” In Top-Down Network
Design: A Systems Analysis Approach to Enterprise Network Design. Macmillan Technical
Publishing (Cisco Press), 1999.
Schmidt, Mark B. and Arnett, Kirk P. “SPYWARE: A Little Knowledge is a Wonderful Thing.”
Communications of the ACM., 48, (8), 2005, 67-70.
Straub, Detmar W., and Welke Richard J. “Coping with Systems Risk: Security Planning
Models for Management Decision Making.” MIS Quarterly., 22, (4), 1998, 441-469.
Tanenbaum, Andrew. Distributed Operating Systems. Englewood Cliffs, NJ: Prentice Hall,
1996.
Whitman, Michael E. “Enemy at the Gate: Threats to Information Security.” Communications of
the ACM., 46, (8), 2003, 91-95.
Yuan, Raixi, and Strayer, Timothy. Virtual Private Networks. Reading, MA: Addison Wesley,
2001.
The authors would like to acknowledge Force 10 Inc. for its support of this research.

152
AN EMPIRICAL INVESTIGATION OF ROOTKIT AWARENESS

Mark B. Schmidt, St. Cloud State University
mark@stcloudstate.edu

Allen C. Johnston, University of Louisiana Monroe
ajohnston@ulm.edu

Kirk P. Arnett, Mississippi State University
karnett@cobilan.msstate.edu

ABSTRACT

Despite the recent increased attention afforded rootkits by media outlets such as CNN
Headline News (2005), there appears to be a dearth of awareness and understanding of the
rootkit security threat. This paper defines and describes rootkits and the threat they pose.
Next, it presents a study that utilizes an instrument used in two prior studies. Results are
presented based on data collected from 210 IT users at three geographically separate
institutes of higher learning. The results indicate that knowledge of rootkits is well below that
of spyware and viruses. Fortunately, even though rootkits are potentially as damaging as other
malware, users may not suffer their full effect if the security community can raise awareness
to the point where end users utilize rootkit detection and removal tools as part of their overall
computing practice.

I. INTRODUCTION

Given today’s reliance on computer networks and the Internet it is no surprise that
more attention is being given to malware such as viruses, worms, and spyware. The
professional literature has seen an increase in the number of journals and special issues that
deal with security issues. Among those outlets devoting content toward the pursuit of this
topic are ACM Transactions on Information and Systems Security, Computer Fraud and
Security, Computer Law and Security, Computers and Security, Computer Security Journal,
International Journal of Information and Computer Security, IEEE Security and Privacy,
Information Management and Computer Security, Information Systems Security, International
Journal of Information Security, Journal of Computer Security, Journal of Internet Security,
Journal of Information Privacy & Security, Journal of Information System Security, and
Journal of Privacy Technology to name a few (see http://www.misprofessor.com/).
Additionally, editors are publishing special issues of journals with a security focus. For
example, in August 2005, Communications of the ACM had a special issue on spyware.

Unfortunately, the research community has yet to produce substantial scholarly
research regarding rootkits. Perhaps this apparent lack of interest is not a lack of interest at
all, but rather a function of publication delay. Another reason for the dearth of academic
research may be that although rootkits have been around for more than ten years, they have
only recently moved into the spotlight with their increased prevalence in the Windows world.
This research seeks to document awareness of rootkits and to contribute to the academic
community by publishing the results.

153
Concurrent with the pervasiveness of modern security threats, recent research indicates
that corporate IT officials are finally starting to devote an increasing amount of resources to
threat detection and amelioration (Whitman, 2003). A recent survey of 301 IT executives
found that security concerns rank increasingly high on the list of management's most
important concerns (Luftman and McLean, 2004). Increases in the number of formal security audits,
financial commitments to holistic security practices, and interest in security awareness
training are indeed steps in the right direction (Gordon et al., 2005).

The purpose of this paper is to present a comparison of concern for rootkits and other
security threats in an effort to increase the level of knowledge of the rootkit phenomenon and
thereby further our progress in the struggle to cope effectively with the threat. The remainder
of this paper is organized as follows: the next section presents a discussion of rootkits,
followed by a description of the survey and respondents. Then, findings are analyzed and
discussed.

II. ROOTKITS IN DETAIL

A rootkit is a “type of Trojan that keeps itself, other files, registry keys and network
connections hidden from detection. It runs at the lowest level of the machine and typically
intercepts common API calls. For example, it can intercept requests to a file manager such as
Explorer and cause it to keep certain files hidden from display, even reporting false file counts
and sizes to the user” (TechWeb, 2005). This malware has its origins in the UNIX world and,
because it allows access at the lowest level (or root level), was termed a rootkit.

Rootkits were developed circa 1995 and originally targeted UNIX machines; until
recently they have been relatively rare on Windows machines (Roberts, 2005). Perhaps in an
effort to increase their depth and reach, rootkits have more recently begun beleaguering
systems running Microsoft Windows, and the trend to focus on Windows-based machines is
likely to continue into the near future (Seltzer, 2005).

Specifically, a rootkit is a piece of code intended to hide files, processes, or registry
data, most often in an attempt to mask an intrusion and to surreptitiously gain administrative
rights to a computer system. Rootkits can also provide the mechanism by which various
forms of malware, including viruses, spyware, and Trojans, attempt to conceal their existence
from detection utilities such as anti-spyware and anti-virus applications. The combination of
two or more malware programs, such as rootkits, spyware, viruses, and worms, is referred to
as a blended threat. For instance, a spyware/rootkit blended threat contains, from the hacker’s
perspective, the best of both worlds: the mobility and payload of spyware with the stealth-like
nature and persistence of a rootkit. The resulting threat is much more difficult to detect and
remove.

Although it is difficult to say with certainty when rootkits targeting Windows first
appeared, a program manager for Microsoft Solutions for Security indicates that a rootkit
targeting Windows NT was introduced in 1999 by Greg Hoglund (Dillard, 2005). Still
interested in rootkits, Hoglund maintains rootkit.com, a popular website for disseminating
information concerning rootkit exploits. Dillard (2005) contends that rootkits target the
extensible nature of operating systems, applying the same principles of application
development as found in legitimate feature-rich software. Unfortunately, in the case of the
rootkit, the purpose is solely to benefit the potential hacker.

154
The following section describes the details of the survey. The survey was administered
to 210 IT users at three institutions of higher learning and the results are detailed in
subsequent sections.

III. THE SURVEY

This study has its roots in two previous studies (Jones et al., 1993; Schmidt and Arnett,
2005), both of which examined relatively new malware as it emerged on the computing
landscape. The original study (Jones et al., 1993) focused on users’ perceptions of computer
viruses. More recently, Schmidt and Arnett (2005) utilized a similar instrument to assess
users’ perceptions of spyware. The focus of this study is similar in that it investigates the
relatively new phenomenon of rootkits. Specifically, this study examined IT users’
perceptions of rootkits, spyware, and viruses. The following section describes the survey, its
subjects and their demographics, and the analysis process that followed.

The survey used in this research is based on a survey originally published in a 1993
Computers and Security article (Jones et al., 1993) that examined knowledge of computer
viruses and was later used in a 2005 Communications of the ACM article (Schmidt and
Arnett, 2005) that examined user knowledge of spyware. The survey used a five-point Likert
scale (1 = Strongly Disagree, 3 = Neutral, 5 = Strongly Agree) for the research items and
contained additional demographic items including gender, age, computer experience,
education, and occupation.

The surveys were administered in class to students, who were assured of the
confidentiality of their responses. IT professionals were asked to complete the survey in their
workplace. Respondents were asked to circle the answer that most closely matched their
response to each question. In an effort to increase understanding of rootkit awareness and
knowledge, the survey involved 210 faculty, staff (including IT professionals), and students
from three public institutes of higher learning in various geographical regions of the United
States.

The majority of respondents were male (57%), with 5-10 years of computer experience
(43%). A large number of respondents (97%) had used the Microsoft Windows environment
for at least one year, and 69% had at least five years of Windows experience. Conversely,
considering UNIX, the platform on which rootkits evolved, more than 78% of respondents
had less than one year of experience, if any at all.

IV. RESULTS AND ANALYSIS

To develop a baseline understanding of rootkit awareness, some basic statistics were
calculated. Survey responses indicate that 83.3% of users have not even heard of rootkits. A
relatively low 17.6% have known about rootkits for one year or longer. User knowledge of
viruses was much higher: fully 99.5% of users have known of viruses for more than one year.
Spyware also shows a high level of awareness, with 83.8% of users having known of spyware
for more than one year. These findings indicate the relative newness of rootkits from the user
perspective. As one may suspect, this general lack of awareness of rootkits is reflected in
security practices: only 9.5% of users run rootkit detection software.

To further analyze the knowledge of malware, ANOVA techniques were used to
determine differences in awareness of viruses, spyware, and rootkits. Table I depicts the

155
ANOVA results, which indicate that there are differences among the self-reported knowledge
levels of these three types of malware.

Table I ANOVA Results

SUMMARY
Groups        Count   Sum   Average       Variance
fam_rootkit   209     302   1.444976077   0.902005889
fam_spyware   210     867   4.128571429   1.069514696
fam_virus     210     885   4.214285714   0.638072454

ANOVA
Source of Variation   SS            df    MS            F            P-value     F crit
Between Groups        1038.158428     2   519.079214    596.771045   9.11E-146   3.010114242
Within Groups          544.5029392  626     0.869813002
Total                 1582.661367   628
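The F statistic in Table I can be reproduced from the summary rows alone (count, sum, and variance per group) using the standard one-way ANOVA formulas, as this short sketch shows.

```python
# Reconstructing the one-way ANOVA in Table I from its summary rows.
# Each group: (count, sum, sample variance).
groups = {
    "fam_rootkit": (209, 302, 0.902005889),
    "fam_spyware": (210, 867, 1.069514696),
    "fam_virus":   (210, 885, 0.638072454),
}

n_total = sum(n for n, _, _ in groups.values())
grand_mean = sum(s for _, s, _ in groups.values()) / n_total

# Between-groups sum of squares: n_i * (group mean - grand mean)^2
ss_between = sum(n * (s / n - grand_mean) ** 2 for n, s, _ in groups.values())
# Within-groups sum of squares: (n_i - 1) * group variance
ss_within = sum((n - 1) * v for n, _, v in groups.values())

df_between = len(groups) - 1       # 2
df_within = n_total - len(groups)  # 626
f_stat = (ss_between / df_between) / (ss_within / df_within)
# f_stat reproduces the F value of about 596.77 reported in Table I
```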

After finding highly significant results (P < .001), the next step was to determine
where the sources of the differences lie. Tukey’s Honestly Significant Difference (HSD)
procedure was used: if the absolute difference between two means exceeds .25, then there is a
difference in user perceptions between the two means in question. Table II depicts the means
and the absolute differences between them.

Table II Absolute Differences Between Means

Comparison           x bar   x bar   Abs. Diff.   Result
rootkit vs spyware   1.44    4.13    2.69         sig
rootkit vs virus     1.44    4.21    2.77         sig
spyware vs virus     4.13    4.21    0.08         not sig
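The pairwise test in Table II reduces to a simple comparison of each absolute difference against the HSD critical value of .25, as this sketch illustrates.

```python
from itertools import combinations

# Group means from Table II and the Tukey HSD critical difference
# of 0.25 reported in the text.
means = {"rootkit": 1.44, "spyware": 4.13, "virus": 4.21}
HSD = 0.25

results = {
    (a, b): "sig" if abs(means[a] - means[b]) > HSD else "not sig"
    for a, b in combinations(means, 2)
}
# Both rootkit comparisons are significant; spyware vs virus is not.
```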

Respondents were more familiar with spyware than with rootkits, and more familiar
with viruses than with rootkits. It appears that it takes some time for awareness of new
malware to reach a point where IT users are cognizant enough of the threats to protect
themselves adequately. A historical view of viruses finds that 79.6% of respondents had been
aware of viruses for one or more years when viruses were approximately 10 years old (Jones
et al., 1993). More recently, Schmidt and Arnett (2005) found that even though spyware is
substantially less than 10 years old, only 6% of respondents had been aware of spyware for
less than a year. It appears that as time progresses, users become aware of malware more
quickly, experiencing a compressed period from the introduction of a particular malware to
widespread awareness. Time will tell whether this pattern holds for rootkit awareness.
Interestingly, there is no statistically significant difference between self-reported
knowledge of spyware and viruses. Specifically, on the five-point Likert scale users report an
awareness level of 4.13 for spyware and 4.21 for viruses. This difference is not statistically
significant, which points to the conclusion that users are as knowledgeable regarding spyware
as they are regarding viruses.

Even with the increase in popular press reports of incidents involving rootkits,
awareness of the pervasiveness and threats posed by rootkits remains low. Of those
156
survey respondents who reported familiarity with rootkits, only 59% were able to accurately
report a rootkit’s ability to manipulate logs, 56% knew that rootkits could be used to provide
false feedback to detection utilities, and only 62% indicated that rootkits can be used to
provide hackers with administrative rights.

The limited awareness and knowledge of rootkits is especially alarming considering
the recent “call to action” against spyware and other forms of malicious software. The August
2005 issue of Communications of the ACM was devoted to the topic of spyware (Stafford,
2005). However, within those articles it was difficult to find any acknowledgment of the
presence of rootkit technology within spyware and of their combined emergence as a new
form of blended threat. To prepare adequately for the future, the IT community must undergo
an intelligence and preparedness ramp-up consistent with that occurring in the fight against
spyware. Given the nature of rootkits, we may be on the verge of another battle for control of
personal computers.

V. CONCLUSION

Rootkits have the potential to cause a great deal of harm because they are designed not
only to conceal themselves but also to conceal other symbiotic malware such as viruses and
spyware (Seltzer, 2005). Because consumers are not demanding rootkit detection and
removal methods, antivirus software developers have been slow to add rootkit features to their
protection tools. However, some companies are now moving in that direction. For instance,
F-Secure (http://www.f-secure.com/) now includes “BlackLight,” a rootkit detection tool, with
its “F-Secure Internet Security 2006” security suite. As awareness increases, perhaps due to
recent high-profile rootkit abuses, it is logical to assume that consumers will demand more
adequate protection tools.

It is evident that rootkits pose a significant threat to computer security. Given the
current levels of awareness and knowledge within the user community, this threat will likely
continue to emerge much as virus and spyware threats did in their beginnings. Unfortunately,
security professionals’ attempts to mitigate this threat will likely encounter many of the same
challenges faced in efforts against viruses and spyware. Because the signature of a rootkit is
to conceal its presence and activities, many current protection mechanisms are largely
ineffective: they cannot mitigate what they cannot detect. Moreover, a lack of knowledge of
computer security threats negatively affects an organization’s ability to counter those threats
(Straub and Welke, 1998). Given the aforementioned findings, it appears that effective
widespread rootkit threat amelioration will be a phenomenon of the future. But the future
may be closer than we think!

REFERENCES

Dillard, Kurt. "What Is a Rootkit?" 2005. SearchWindowsSecurity.com.
Gordon, Lawrence A.; Loeb, Martin P.; Lucyshyn, William; and Richardson, Robert. "2005
CSI/FBI Computer Crime and Security Survey." Computer Security Institute, 2005. 1-16.
Jones, M.C.; Arnett K.P.; Tang, J.T.E.; and Chen N.S. "Perceptions of Computer Viruses a
Cross-Cultural Assessment." Computers and Security., 12, 1993, 191-97.

157
Luftman, Jerry, and McLean, Ephraim R. "Key Issues for It Executives." MIS Quarterly
Executive., 3, (2), 2004, 89-104.
CNN Headline News. Rootkit Report, 2005.
Roberts, Paul F. "Rootkits Sprout on Networks." eWeek., October 17, 2005, 25.
Schmidt, Mark B., and Arnett, Kirk P. "Spyware: A Little Knowledge Is a Wonderful Thing."
Communications of the ACM., 48, (8), 2005, 67-70.
Seltzer, Larry. "Rootkits: The Ultimate Stealth Attack." PC Magazine., 24, (8), 2005, 76.
Stafford, Thomas F. "Spyware." Communications of the ACM., 48, (8), 2005, 34-35.
Straub, Detmar W., and Welke, Richard J. "Coping with Systems Risk: Security Planning
Models for Management Decision Making." MIS Quarterly., 22 (4), 1998, 441-69.
TechWeb. 2005. <http://www.techweb.com/encyclopedia/>.
Whitman, Michael E. "Enemy at the Gate: Threat to Information Security." Communications
of the ACM., 46, (8), 2003, 91-95.

158
CHAPTER 6

E-BUSINESS

159
I’M WITH THE BROADBAND: THE ECONOMIC IMPACT OF BROADBAND
INTERNET ACCESS ON THE MUSIC INDUSTRY

Matthew A. Gilbert, Clear Pixel Communications
mgilbert@clearpixel.com

ABSTRACT
Despite its illegal origins, by 2005 digital music distribution and online file sharing – in
concert with the growth of broadband Internet access – had revolutionized the music industry.
Musicians, consumers and record companies are now beginning to grasp the significance of
this popular new paradigm. This paper explores the origins of P2P systems, investigates their
effects on the music industry and surveys the future of this brave new world of music
production, promotion and distribution.

I. I WANT MY MP3!
When Napster launched in 1999, it fundamentally changed the way people obtained
music (Zentner, 2003). Leveraging a peer-to-peer (“P2P”) platform, Napster directly
connected two or more computers, enabling them to share files and resources (Jacover, 2002).
Offering increased speed, unparalleled selection and unequaled affordability (it was free),
Napster simplified and centralized the process by which MP3 files could be exchanged around
the world. The benefits of MP3 compression were one of the fundamental factors in Napster’s
success. Moore and McMullan (2004) explain the mechanics and benefits of MP3 files as
follows:
Prior to employing the MP3 compression algorithm, a music file stored on a
computer could be as large as 40 to 45 megabytes in size, and would take
around one and one-half hours to transfer over a phone line…After using the
MP3 algorithm, the same music file would be around 3 to 5 MB and would
take around 8 to 15 minutes to transfer. (p. 3)
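The transfer times Moore and McMullan cite follow directly from simple bandwidth
arithmetic. The short sketch below illustrates the calculation, assuming a 56 kbps dial-up
modem running at full advertised throughput (an idealized assumption; real-world dial-up
speeds were somewhat lower):

```python
def transfer_minutes(file_mb: float, link_kbps: float) -> float:
    """Estimated transfer time in minutes for a file of file_mb megabytes
    over a link of link_kbps kilobits per second (1 MB = 8,000 kilobits)."""
    return file_mb * 8000 / link_kbps / 60

# Uncompressed ~40 MB song over an idealized 56 kbps dial-up modem:
print(round(transfer_minutes(40, 56)))  # about 95 minutes, i.e., roughly 1.5 hours
# The same song as a ~4 MB MP3 (roughly 10:1 compression):
print(round(transfer_minutes(4, 56)))   # about 10 minutes
```

These idealized figures land squarely within the ranges Moore and McMullan report.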
In true viral fashion, Napster quickly generated a user base of over 20 million unique
accounts at its peak, with more than 500,000 unique IP addresses connected to the system at
any one time (Blackburn, 2004). Prior to Napster, the music industry in the United States was
growing after several years of stagnation (Blackburn, 2004). However, the gains made in the
years prior to 1999 quickly disappeared once Napster launched.

II. HERE COMES THE JUDGE


Napster’s party would not last long. Citing copyright infringement, the Recording
Industry Association of America (RIAA) sued Napster and other illegal online file sharing
services. Berger (2001) estimates that as much as 87 percent of the files on Napster's network
were in violation of copyright law. Despite the ethereal and intangible nature of an online file
exchange, the cost of this infringement is quite tangible and troublesome.

Moore and McMullan (2004) highlight a 2002 congressional report estimating that more than
3 million users were swapping 2.6 billion songs per year, costing songwriters $240 million a
month – an amount predicted to balloon to $3.1 billion annually by 2005. If the amount of
music available online were reduced by 30 percent, sales could have been approximately 10
percent higher in 2003 (Blackburn, 2004).

Nielsen SoundScan also recorded a significant drop in music sales near college
campuses between 1997 and 2000 (Blackburn, 2004). College campuses are the key to
understanding digital music distribution – partly due to the early adopter tendencies of
students when it comes to technology, music and popular culture, but also because most
institutions of higher learning provide high speed Internet access at little or no charge to
students. Zentner (2003) offers the following insight into the situation:
Universities have very fast connections and Napster and its successors were
banned in many of them because file swapping was consuming much of the
available bandwidth. In the case of the University of Illinois at Urbana-
Champaign, this amounted to 75 percent of the total bandwidth. (p. 3)

Overall, Zentner (2004) estimates file sharing networks reduce the probability of
purchasing music by 30 percent. However, the RIAA’s tactics have worked: Blackburn (2004)
estimates the lawsuits increased album sales 2.9 percent during the 23 week period after the
strategy was announced. Zentner (2004) adds that the RIAA’s pursuit of individual users
resulted in increased record sales. On March 5, 2001 Napster was ordered to cease operations,
and by July the system was shuttered.

III. NAPSTER RELOADED


Napster’s demise did little to dent the development of replacement systems. However,
Napster’s legal woes did influence one key technical factor in the design of all new platforms:
none use a centralized server. Additionally, beyond music, many systems have evolved to
accommodate movies, software, pictures, and documents.

Interestingly, despite the substantial fiscal impact file sharing has on music sales, not all
artists consider it negative. In fact, some actually welcome and fully support it. Blackburn
(2004) explains this dichotomy as it relates to established and aspiring
musicians:
First, there is a direct substitution effect on sales as some consumers
download rather than purchase music. Second, there is a penetration effect
which increases sales, as the spread of an artist’s works helps to make the
artist more well-known…The first effect is strongest for well-known artists,
while the second is strongest for unknown artists. The overall negative
impact of file sharing arises because aggregate sales are dominated by sales
of well-known artists. (p. 1)

Perhaps it is partly because of this that illegal file sharing systems continue to flourish
while several sanctioned options have also become available (many as official distribution
channels of large media corporations).

To accommodate the growing need, Napster returned in October 2003 as a paid,


sanctioned service. By the beginning of 2004, there was an array of additional options that
also offered a subscription-based model for MP3 downloads of single tracks or full albums
(Blackburn, 2004). Notably, Apple's iTunes Music Store (launched in April 2003) was the
first major player and has grown to accommodate the increasing demand for video files and
podcasts. Other players include Rhapsody, MusicMatch, Yahoo, and even Walmart.com.

Overall, the number of users on all worldwide P2P networks is estimated to have
reached nearly 10 million by October 2004 (Working Party on the Information Economy,

2005). The United States accounted for nearly 50 percent of all users (Working Party on the
Information Economy, 2005).

IV. BEHOLD THE BROADBAND


Driving interest in and development of online file sharing systems is increasingly
available, reliable and affordable broadband Internet access. The key to broadband – and the
reason it is so fundamental to the growth of online file sharing – is its increased
bandwidth. As explained by Zentner (2003), "broadband facilitates music swapping. A
soundtrack that takes more than 12 minutes to download with a dial-up connection can be
downloaded in as fast as 20 seconds with a high-speed connection" (p. 3).

Broadband is most widely available from the cable company via existing coaxial lines,
or from the phone company (or similar provider) through a digital subscriber line (DSL) that
makes use of telephone wiring. Cable offers download speeds as high as 8 megabits per
second (mbps) and an upload rate as high as 768 kbps. DSL services offer users download
speeds between 384 kbps and 1.5 mbps and upload availability of 128 kbps to 384 kbps.
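Plugging the rates above into the same bandwidth arithmetic shows why broadband
transformed file sharing. The ~4 MB MP3 file size in this sketch is an illustrative
assumption, and the advertised speeds are treated as actual throughput:

```python
# Idealized download times for a ~4 MB MP3 at the connection speeds cited above
# (actual throughput would be somewhat lower than the advertised rates).
speeds_kbps = {
    "56 kbps dial-up": 56,
    "1.5 Mbps DSL": 1500,
    "8 Mbps cable": 8000,
}

FILE_MB = 4
for name, kbps in speeds_kbps.items():
    seconds = FILE_MB * 8000 / kbps  # 1 MB = 8,000 kilobits
    print(f"{name}: about {seconds:.0f} seconds")
```

At these rates, a track that takes nearly ten minutes over dial-up arrives in roughly 20
seconds over DSL and in a few seconds over cable – consistent with Zentner's figures.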

Cost remains a significant, though not necessarily limiting, factor. Generally, cable plans
are more expensive than DSL – especially since AT&T dropped its rates to $14.95 per
month. Cable costs between $30 and $50 per month, but offers additional benefits that justify
the cost for certain consumers. Additional expenses may include the need for special
equipment, installation fees and any other related expenses.

However, even without the recent pricing reductions, cost might not be a barrier to entry
into the broadband world for some consumers. Hatfield et al. (2003) explain how
microeconomic theory illustrates that most rational consumers prefer goods and services that
maximize their utility. After an initial awareness of a need is developed – faster Internet
access in this case – a consumer researches options that meet that need, weighing features and
benefits against strengths and weaknesses. The utility function helps consumers decide which
goods to purchase given their income, time and other constraints. The result of this
cost-versus-benefit assessment ultimately influences a consumer's decision.
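A stylized version of this utility-maximizing choice can be sketched as follows. The monthly
prices and the consumers' subjective valuations are purely hypothetical and are not drawn
from Hatfield et al.:

```python
# Toy cost-benefit model: the consumer picks the service whose subjective
# monthly value most exceeds its monthly price. All figures are hypothetical.
plans = {
    "dial-up": {"price": 10, "value": 20},
    "DSL": {"price": 15, "value": 45},
    "cable": {"price": 40, "value": 55},
}

def net_benefit(plan: dict) -> int:
    # Net utility proxy: value to the consumer minus out-of-pocket cost.
    return plan["value"] - plan["price"]

choice = max(plans, key=lambda name: net_benefit(plans[name]))
print(choice)
```

In this toy example the consumer chooses DSL: cable delivers more raw value, but not enough
to offset its higher price – the kind of trade-off the utility function formalizes.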

Atkinson, Ham, and Newkirk (2002) add that, "broadband users…use the Internet for
telecommuting, distance learning, and multimedia applications such as television, movies, and
music. These represent the pressure points of broadband demand,” (p. 6).

V. FUEL FOR THE FIRE


Early adopters of broadband Internet access were willing to pay more for an untested,
slightly unrefined service because the benefit outweighed the risk. For the more mainstream
demographic, as broadband technology becomes more stable, available and affordable, more
people will migrate to it because the reward of faster, more efficient service outweighs the risk
of the higher cost. If current trends continue, broadband Internet access will soon be
ubiquitous.

Broadband Internet access was clearly a catalyst for online music sharing and
distribution. As accessibility, affordability and reliability improve, the number of subscribers
grows. The Pew Internet & American Life Project (2004a) found that 68 million adult
Americans use broadband Internet access at home or work, while 48 million adult Americans

have broadband access at home. Data from March 2005 indicates 50 percent of all home
internet users now have high-speed access (Pew Internet & American Life Project, 2005).

In light of recent developments, Moore and McMullan (2004) question the need for
continued use of the MP3 format. This is an especially relevant question since "cable modems
and Digital Subscriber Lines (DSL)…allow for transfers of data at speeds greater than 50
times that of traditional phone modems. Both forms of broadband Internet access are
becoming more commonplace in residential establishments, making file sharing an even faster
activity" (p. 4).

VI. BONUS TRACKS AND B-SIDES


Change isn't always easy, and nothing worth having comes without effort. The state of
broadband access, P2P file sharing and digital music is best expressed by Kooser (2005),
who notes, "Switching to a new form of technology can be exciting for a growing business
but the road is rarely smooth" (p. 32).

Realistically, P2P networks and broadband Internet access represent a viable paradigm
shift in the production and distribution of music and other forms of entertainment. Moving
music online presents a vast and greatly untapped marketing medium. It is an opportunity, not
a challenge. Research presented by Pew Internet & American Life Project (2004b) supports
the belief that digital is the domain in which musicians should be operating:
Artists and musicians are more likely to say that the internet has made it
possible for them to make more money from their art than they are to say it
has made it harder to protect their work from piracy or unlawful use. (p. ii)

Times are changing. In light of the current legal climate, a growing share of consumers now
use paid music and movie services rather than their illegal counterparts. According to the Pew
Internet & American Life Project (2005), "34 percent of current music downloaders…now use
paid services and 9 percent say they have tried them in the past" (p. 6).

Distribution channels have evolved significantly since the advent of the Internet and file
sharing systems. According to Blackburn (2004), in 1999, 51 percent of albums were sold in
retail music stores and 34 percent were purchased in other retail establishments. By 2003, the
percentage of sales in music stores dropped to approximately 35 percent with more than 50
percent sold in other types of stores. Most significantly, by 2003, fully 5 percent of all music
sales occurred through the Internet – a figure that has continued to grow in recent years.

Generally, the core of music sales is shifting away from stores that exclusively sell
music toward more general merchandisers who offer items that complement or encourage a
music purchase. Predominantly, large electronics chains like Best Buy and Circuit City, in
addition to general merchandisers such as Wal-Mart, are reaping the rewards.

Despite the negative effect online file sharing may have on sales of music CDs, it
represents an entirely new distribution model for the future. Beyond the basics and mechanics
of this electronic delivery system, offering music online – in a paid scenario – opens the doors
to increased sales of complementary items and impulse purchases. Whereas MP3s are
substitutes for CDs (just as CDs were substitutes for LPs), migrating to online music delivery
provides new paths to revenue. In addition, the Internet gets music to more people in more
places than a single CD ever could. Curien et al. (2004) explain:

An increase in piracy should generate a drop in CDs sales but an increase in
expenses in ancillary products. [For example] DVDs and songs used as rings
for mobile phone…encountered a strong growth over the recent period. (p.
16)

Revenue from live concerts has increased in parallel with the explosion of online file
sharing and MP3 availability. So, despite the downturn in CD sales, the digital distribution of
music has ultimately resulted in more revenue. Curien et al. (2004) offer the following
explanation:
Between 2000 and 2003, the 47 percent increase in revenues generated by
live shows revenues has been much more important than the drop in CDs
sales. Since the average price of a concert increased by 24 percent over the
same period…between 2000 and 2003, both the volume of sold units and
revenues do increase. (p. 16-17).

Despite its illegal origins, by 2005 digital music distribution and online file sharing – in
concert with the growth of broadband Internet access – had opened new doors for the music
industry. Online delivery is a considerably more user-friendly experience because it enables
consumers to choose the music they want to listen to in an order and grouping of their choice.
While many aspects of this new paradigm are not yet vetted, in general there appears to be
a light at the end of the tunnel for musicians, consumers and record companies alike. Rock
on!

REFERENCES
Atkinson, R., Ham, S. and Newkirk, B. (2002, September). “Unleashing the Potential of the
High-Speed Internet: Strategies to Boost Broadband Demand.” Progressive Policy Institute
Technology & New Economy Project.
Berger, S. (2001). ‘The use of the internet to ‘share’ copyrighted material and its effect on
copyright law.” Journal of Legal Advocacy & Practice, 3, 92-105.
Blackburn, D. (2004, December 30). “On-line Piracy and Recorded Music Sales.” Harvard
University: Department of Economics.
Curien, N., Laffond, G., Lainé, J., and Moreau, F. (2004, November 11). “Towards a New
Business Model for the Music Industry: Accommodating Piracy through Ancillary Products."
Laboratoire d’économétrie, Conservatoire National des Arts et Métiers.
Hatfield, D., Jackson, M., Lookabaugh, T., Savage, S., Sicker, D., and Waldman, D. (2003,
February 8). “Broadband Internet Access, Awareness, and Use: Analysis of United States
Household Data." University of Colorado, Boulder.
Jacover, A. (2002). “I want my mp3! Creating a legal and practical scheme to combat copyright
infringement on peer-to-peer internet applications.” Georgetown Law Journal, 90, 2207-
2254.
Kooser, A. (2005, September). “Answer the Call: Two Companies Ring In Much-Needed New
Phone Systems." Entrepreneur, p. 32.
Moore, R. and McMullan, E. (2004, October 29). Perceptions of Peer-to-Peer File Sharing
Among University Students. School of Criminal Justice, University at Albany, State
University of New York.
Pew Internet & American Life Project. (2004a, April). “Pew Internet Project Data Memo: 55%
of Adult Internet Users Have Broadband at Home or Work; Home Broadband Adoption has
Increased 60% in Past Year and Use of DSL Lines is Surging."

Pew Internet & American Life Project. (2004b, December 5). “Artists, Musicians and the
Internet: They have embraced the internet as a tool that helps them create, promote, and sell
their work. However, they are divided about the impact and importance of free filesharing
and other copyright issues.”
Pew Internet & American Life Project. (2005, March). “Pew Internet Project Data Memo:
Music and video downloading moves beyond P2P.”
Working Party on the Information Economy. (2005, June 8). “Digital Broadband Content:
Music.” Organisation for Economic Co-operation and Development.
Zentner, A. (2003, June). “Measuring the Effect of Music Downloads on Music Purchases.”
University of Chicago.

U.S. ATTEMPTS TO SLOW GLOBAL EXPANSION OF INTERNET RETAILING
MEET LEGAL RESISTANCE

Theodore R. Bolema, Central Michigan University


bolem1tr@cmich.edu

ABSTRACT

E-commerce has thrived in some sectors of the U.S. economy, including airline and
hotel ticketing, books, music recordings, and computers. In other sectors, Internet-based
commerce has not taken hold to the same extent. One factor in explaining slow expansion of
e-commerce in certain sectors has been the use of legal impediments to Internet commerce.
Entrenched business interests have sought to block entrepreneurs and consumers from
developing competitive alternatives in e-commerce. The antitrust laws limit how far
companies acting on their own may go in imposing non-regulatory limitations on e-
commerce. Where competitors have had more success in slowing e-commerce competition
has been through state law barriers in such industries as wine, contact lenses, automobiles,
caskets, real estate, mortgages, and financial services. Recent U.S. court decisions
demonstrate that courts are increasingly willing to strike down protectionist state and local
laws that impede Internet commerce.

I. INTRODUCTION

Despite some early, overly optimistic predictions about explosive growth of e-commerce in
the U.S. economy, e-commerce has demonstrated that it is here to stay. After a shakeout
period, Amazon, Dell Computer, eBay and numerous other Internet retailers have
established themselves and flourished. E-commerce sales in the third quarter of 2005 were
$22.3 billion, or 2.3% of all U.S. retail sales in the same time period, which represents an
increase of 26.7% over the e-commerce retail sales in the third quarter of 2004. (U.S.
Department of Commerce, 2005).

For other areas of retail commerce, however, on-line retailing has not taken hold to the
same extent. The Federal Trade Commission in October of 2002 held hearings on state-
imposed impediments to e-commerce. The hearings addressed ten areas where state laws are
holding back the growth of e-commerce, including wine, contact lenses, automobiles, caskets,
online legal services, health care, real estate, mortgages, and financial services. Many
participants testified that e-commerce sales in these areas are being held back for reasons that
have little to do with products being unsuitable for Internet purchasing, but rather with
outdated or deliberately protectionist regulations that impede Internet purchases. (Smith,
2003).

One factor, although certainly not the only factor, in explaining slow expansion of e-
commerce in certain sectors of the economy has been efforts of non-Internet competitors to
impede competition from on-line competitors. Besides the private efforts of entrenched
competitors, state laws have been used, often at the urging of entrenched business interests, to
effectively block entrepreneurs and consumers from developing competitive alternatives in e-
commerce. (Ribstein & Kobayashi, 2001).

Since the FTC hearings, federal courts have indicated greater willingness to strike
down state and local laws that impede commerce. Impediments to e-commerce by private

firms have always been subject to the antitrust laws. (See, e.g., ABA Section of Antitrust law,
2002). Another important area of law for the analysis in this paper is state franchise and
dealership law. The cases discussed below are an example of local interests using such laws
to effectively prevent the purchase of wine via the Internet from out-of-state sources. (See,
e.g., Brimer, 2004). With such laws, wineries had little incentive to develop Internet-based
distribution.

Limitations on e-commerce retail sales generally come from three sources, all of which are
considered below. (See, e.g., Atkinson and Wilhelm, 2001; Foer, 2001). The first is
private efforts by potentially competing businesses to hinder competition from Internet
retailers. These private efforts may be attempted unilaterally by firms with market power or
collectively by similarly-positioned firms, and are subject to the U.S. antitrust laws. The
second is by governmental authorities to place regulatory restrictions on e-commerce, or
alternatively to continue outdated regulatory restrictions that are more burdensome on e-
commerce retailers than on more traditional types of retailers. The third is a combination of
the first two—actions by competitors to lobby governmental regulators to restrict competition
from Internet retailers, which is also generally exempt from the U.S. antitrust laws. Of
course, such limitations are not confined to U.S. regulations, and pose challenges in the global
economy. (Frynas, 2002).

II. PRIVATE PRACTICES AIMED AT LIMITING INTERNET RETAILING

Efforts by businesses to limit Internet retailing take several forms. Manufacturers may
try to prevent Internet retail sales of their products by distributors. Or distributors may try to
require that their suppliers refuse to deal with Internet retail competitors, as Chrysler dealers
did in an early e-commerce case (discussed below). Trade associations and other such groups
may also be used as a mechanism for limiting competition from Internet retailers, as may have
been the case with the U.S. National Automobile Dealers Association. Each of these sources
of restrictions on Internet sales in the United States is generally subject to the antitrust laws.

The Sherman Antitrust Act of 1890 is the most relevant law for evaluating e-
commerce constraints. Section 1 of the Sherman Act, 15 USCS § 1 (2005), is rather broad in
its wording and makes illegal "every contract, combination in the form of trust or otherwise,
or conspiracy in restraint of trade or commerce." Despite this broad language, Section 1 of
the Sherman Act has consistently been interpreted to prohibit only those restraints of trade
that unreasonably restrict competition. See, e.g., Arizona v. Maricopa County Med. Soc’y,
457 U.S. 332 (1982). Section 2 of the Sherman Act, 15 USCS § 2 (2005), prohibits activities
by "every person who shall monopolize, or attempt to monopolize…any part of trade or
commerce.” Since most (but not all) companies seeking to hinder Internet competition do not
have significant market power, Section 1 of the Sherman Act is the most likely to be relevant
for this analysis.

A variety of restrictions that manufacturers place on the resale of their products have been
analyzed under the Sherman Act and usually upheld by the U.S. courts. Such restrictions on
Internet sales may include refusing to sell to Internet resellers, limiting Internet sales to
certain resellers, or restricting a distributor's Internet sales to specified territories.

A wide range of non-price restrictions can be justified as increasing interbrand


competition with products sold by competing manufacturers. A manufacturer in most
circumstances may refuse to supply distributors that make Internet sales on the grounds that
point-of-sale or post-sale services are necessary to protect the brand and to prevent free-riding.

Whether this is a good or bad business decision is not the issue. If restrictions are aimed at
enabling a manufacturer to compete more effectively with its competitors, such restrictions
usually will be found to be reasonable. See, e.g., Continental T.V., Inc. v. GTE Sylvania Inc.,
433 U.S. 36 (1977).

For a manufacturer or distributor with market power, even what might appear to be a
unilateral refusal to do business is subject to more scrutiny under the antitrust laws. In
particular, manufacturers or distributors with significant market power face significant legal
issues if they condition sales or purchases on an assurance of not dealing with a competitor or
class of competitors. For example, Toys “R” Us, an important U.S. toy retailer, was found to
have abused its market power. The FTC claimed that Toys “R” Us pressured toy
manufacturers into not making the toys offered by Toys “R” Us available to warehouse club
stores. Toys “R” Us responded that it should be allowed to choose whether to do business
with suppliers as it chooses. The Court disagreed, and held that Toys “R” Us had used its
market power to hinder competing retailers through the unilateral threat not to do business
with suppliers who sell to competitors of Toys "R" Us. Toys "R" Us v. FTC, 221 F.3d 928
(7th Cir. 2000).

The rules change when the conduct is not unilateral. For example, if a manufacturer
(regardless of market power) enters into an agreement with a competing manufacturer not to
sell to Internet resellers, that agreement could be characterized as an unlawful agreement not
to compete under § 1 of the Sherman Act. In an early e-commerce antitrust case, 25 Chrysler
dealers in and around Idaho were charged by the FTC with entering into an illegal conspiracy
when they used their association, Fair Allocation System, Inc, (“FAS”) to demand that
Chrysler allocate new vehicles on a different basis. FAS demanded that Chrysler change its
allocation formula to disfavor an Idaho dealer with substantial Internet sales. This demand
was accompanied by threats of a boycott of certain models and refusals to provide certain
warranty repairs. The matter was resolved with a consent decree prohibiting FAS from
threatening such a boycott. In re Fair Allocation System, Inc., 63 Fed. Reg. 43,183 (August
12, 1998).

III. REGULATORY RESTRICTIONS ON INTERNET COMMERCE

When federal, state, or local government regulations are in conflict with free and open
competition, Congress and U.S. Courts have generally resolved these conflicts in favor of the
regulations over the antitrust laws. Important recent court decisions, however, suggest that
courts are in the process of clarifying greater limitations on states' ability to interfere with e-
commerce.

Under the “state action” doctrine, states have been allowed to impose regulatory
requirements mandating conduct that would otherwise violate the antitrust laws. The state
action doctrine is often called “Parker immunity,” after Parker v. Brown, 317 U.S. 341, 352
(1943). To qualify for Parker immunity, (1) the state must clearly articulate and affirmatively
express the restraint as state policy, and (2) the policy must be actively supervised by the state
itself. California Retail Liquor Dealers Ass’n v. Midcal Aluminum, Inc., 445 U.S. 97 (1980).
To the extent that state laws meet this test, the courts will recognize the restraint as being
within the legitimate regulatory power of the state.

Many of the state-level restrictions on e-commerce are in the form of state franchise
laws. State franchise laws were originally adopted in response to concerns about vulnerability
of franchisees to abuses by franchisors. The first state franchise law, the California Franchise

Registration and Disclosure Act, was passed in 1971. Currently, eighteen states have statutes
of general applicability prohibiting termination of a “franchise” or “dealer,” as the terms are
defined in the statutes, without good cause. Many states also have comparable statutes
applicable to the distribution of wine and alcoholic beverages, retail automobile sales and
gasoline and related petroleum products.

State franchise laws may override the contractual provisions in agreements between a
manufacturer and its dealer or distributor. For example, in To-Am Equipment Co. v.
Mitsubishi Caterpillar Forklift America, Inc., 152 F.3d 658 (7th. Cir. 1998), the Court found
that even though a manufacturer strictly complied with the termination provisions of its
contract with its exclusive distributor in parts of Illinois (which the contract stated were to be
subject to Texas law), the manufacturer was subject to the Illinois Franchise Disclosure Act
and liable for violating the distributor’s rights under the Illinois state franchise law.

An example of how state franchise laws could be applied to e-commerce involved


infomercials for Murad skin products. A federal court found that infomercial broadcasts of
Murad products by a New York television station carried in Puerto Rico, which led to
telephone order sales to customers in Puerto Rico, could be found to impair Irvine's
contractual rights as the exclusive Murad distributor in Puerto Rico under the Puerto Rico
Distributor Act. Irvine v. Murad Skin Research Laboratories, Inc., 194 F.3d 313 (1st. Cir.
1999). By the same reasoning, the availability of products for sale through Internet
distribution into a state with a franchise law could be found to violate traditional distributors’
rights.

Notably, the National Automobile Dealers Association ("NADA"), the largest trade
association representing new car dealers, has taken the position that most state franchise laws
effectively prohibit automobile manufacturers from engaging in any direct Internet sales.
(NADA 2002). In 45 states, automobile dealer franchise laws contain “relevant market area”
provisions placing the burden on the automobile manufacturer to justify to a state agency any
attempt to allow a new party, such as an Internet seller, to sell in an existing dealer’s territory.
Courts have upheld the constitutionality of these relevant market area provisions. See, e.g.,
New Motor Vehicle Board v. Orrin W. Fox Co., 439 U.S. 96 (1978).

While some federal courts have struck down certain state laws and regulations
impeding the growth of e-commerce as violations of the commerce clause of the U.S.
Constitution, other federal courts have upheld such restrictions. See, e.g., Bridenbaugh v.
Freeman-Wilson, 227 F.3d 848 (7th Cir. 2000) (upholding constitutionality of Indiana's
alcoholic beverage statute); Dickerson v. Bailey, 212 F. Supp. 2d 673 (S.D. Tex. 2002)
(holding unconstitutional Texas's statutory ban on direct importation of wine by Texas
residents).

A key case in this battle involves Michigan and New York restrictions on direct
shipments of wine into the state, such as by Internet sales. Eleanor Heald, a wine collector,
challenged the Michigan's Liquor Control Code as violating the Commerce Clause in Article I
of the U.S. Constitution by prohibiting out-of-state wineries from shipping wine directly to
Michigan residents. Michigan law, however, allowed in-state wineries to make such direct
shipments. The same argument was made in a separate case in New York. In a 5-4 decision,
the Supreme Court ruled both the Michigan and the New York laws unconstitutional.
Granholm v. Heald, 125 S.Ct. 1885 (2005). While states had the power to regulate alcohol
sales, states did not have the power to discriminate against out-of-state interests. The
Supreme Court concluded that any justifications offered by the states were undermined by the

unequal application which allowed Internet sales by in-state wineries but not by out-of-state
wineries.

Court opposition to protectionist state laws has extended into other areas and other
legal grounds. Perhaps the most significant is Craigmiles v. Giles, 312 F.3d 220 (6th Cir. 2002), in
which the Sixth Circuit Court of Appeals struck down a Tennessee law requiring caskets be
sold only by Tennessee-licensed funeral directors. The Court noted that funeral homes at the
time typically marked up the cost of caskets by 250 to 300 percent, while the plaintiffs
challenging the regulation typically sold caskets elsewhere for much lower prices. In
enjoining the enforcement of the casket sales restriction, the Court pointed to the Equal
Protection and Due Process clauses of the Fourteenth Amendment to the U.S. Constitution. The
Court found that the statute contained an obvious protectionist bias, and found no rational basis
for a regulation whose legitimate ends could be achieved with a much less intrusive one. Note that
the restrictions on casket sales applied to both in-state and out-of-state interests, so that the
Sixth Circuit enjoined the enforcement of the Tennessee statute on a broader basis than the
U.S. Supreme Court used in the wine cases.

Thus, courts have been willing to use at least two Constitutional bases for striking
down state laws that restrict Internet commerce—the Commerce Clause in Article I of the
U.S. Constitution (used in the wine cases) and the Due Process and Equal Protection clauses
in the Fourteenth Amendment (used in the Tennessee casket sales case).

IV. A POSSIBLE LOOPHOLE: SOLICITATION OF GOVERNMENT ACTION

Competitors sometimes petition government entities to restrict the ability of their


rivals to compete in the marketplace. Even though such petitioning can have anticompetitive
results, courts have conferred “petitioning immunity” upon a wide range of activities designed
to induce government bodies to restrain competition, even when the clear intent of the
petitioners was to suppress competition. Eastern Railroad Presidents Conference v. Noerr
Motor Freight, Inc., 365 U.S. 127 (1961), United Mine Workers v. Pennington, 381 U.S. 657
(1965); City of Columbia v. Omni Outdoor Advert., Inc., 499 U.S. 365 (1991).

Nonetheless, despite the antitrust immunity granted by the Noerr decision for political
activity, the antitrust enforcement agencies do not sit silently when such restrictions on
competition are proposed. For example, the Antitrust Division and FTC jointly advised the
Rhode Island legislature of their opposition to a proposed law restricting real estate closings
from being performed by non-attorneys (U.S. Department of Justice, 2002). Similarly, the FTC
opposed regulations to restrict the online sale of replacement contact lenses. According to the
FTC, requiring Internet-based sellers to obtain optical establishment licenses "would likely
increase consumer costs while producing no offsetting health benefits." (FTC, 2002). Rather
than improve consumer optical health, increased licensing costs could lead to higher prices,
which could lead consumers to replace their contact lenses less frequently.

V. CONCLUSION

Attempts to erect barriers to e-commerce within the U.S., which can also affect
overseas trade, often have anticompetitive effects on the marketplace. While proponents of
such barriers may claim to provide consumer protection through the restriction of e-commerce,
the primary purpose is often the protection of local interests, at the expense of out-of-state or
international competitors. Business managers seeking to limit Internet competition should be
on notice that United States policies have shifted and are now less likely to allow such
restraints.

REFERENCES

ABA Section of Antitrust Law. Antitrust Law Developments, 5th ed. Chicago, IL: American
Bar Association, 2002.
Atkinson, Robert D., & Wilhelm, Thomas G. The Best States for E-Commerce. Washington,
DC: Progressive Policy Institute, 2001.
Brimer, Jeffrey A., & Smith-Porter, Leslie. Annual Franchise and Distribution Law
Developments, 2004 ed. Chicago, IL: American Bar Association, 2004.
Foer, Albert A. "Antitrust Meets E-Commerce: A Primer." Journal of Public Policy and
Marketing, 20(1), 2001, 51-63.
Federal Trade Commission. "FTC Provides Connecticut with Comments on the Sale of
Contact Lenses by Out-of-State Sources," March 28, 2002, available at
http://www.ftc.gov/opa/2002/03/contactlenses.htm.
Frynas, Jedrzej G. "The Limits of Globalization: Legal and Political Issues in E-Commerce."
Management Decision, 40(9), 2002, 871-880.
National Automobile Dealers Association. "Comments Submitted by the National Automobile
Dealers Association Regarding Competition," submitted to the Federal Trade
Commission's Public Workshop, October 2002, available at
http://www.ftc.gov/opp/ecommerce/anticompetitive/comments/nada.pdf.
Ribstein, Larry E., & Kobayashi, Bruce H. "State Regulation of Electronic Commerce."
Emory Law Journal, 51(1), 2001, 1-82.
Smith, David H. "Consumer Protection or Veiled Protectionism? An Overview of Recent
Challenges to State Restrictions on E-Commerce." Loyola Consumer Law Review, 15,
2003, 359-375.
United States Department of Commerce. "News," November 22, 2005, available at
http://www.census.gov/mrts/www/data/html/05Q3.html.
United States Department of Justice. "Letter about Legislation Concerning Non-Lawyer
Competition for Real Estate Closings," March 29, 2002, available at
http://www.usdoj.gov/atr/public/comments/10905.htm.

SEGMENTING CELL PHONE USERS BY GENDER, PERCEPTIONS, & ATTITUDE
TOWARD INTERNET & WIRELESS PROMOTIONS

Alex Wang, University of Connecticut
alex.wang@uconn.edu

Adam Acar, University of Connecticut
acarnet@yahoo.com

ABSTRACT

Cellular phone carriers offer many wireless promotions as incentives to elicit cellular
phone purchases among college students, one of the primary target audiences in the cellular
phone market, yet limited research on these promotions is available. It is therefore important
to understand and examine college students' attitudes toward existing and potential wireless
promotions for strategic planning. This study investigates college students' practical
perspectives on what they are looking for in a cellular phone. Moreover, it examines the
relations among college students' gender, experiences with the Internet and wireless
promotions, and their attitudes toward various wireless promotions. The results provide
practical implications and directions for future research.

I. INTRODUCTION

The increasing consumer interest in mobile devices and rapid developments in wireless
technologies make the cellular phone market a very profitable one. Research reveals that the
world's 1.4 billion mobile phone subscribers send over 350 billion text messages a month
(www.ctia.org). Among those messages, 15% are commercial. In addition, 54% of cellular
phone owners use one or more mobile services including messaging, Internet access,
downloadable ring tones, and mobile gaming.

While cellular phone carriers use advertising campaigns intensely to recruit consumers
with a variety of services, creating persuasive messages to gain favorable entry into
consumers’ minds has been challenging. Do consumers view a service offer as the receipt of
a benefit? If they do, what factors may enhance their favorable attitudes? The main purpose
of this study is to find empirical evidence documenting factors that may affect consumers’
attitudes toward existing and potential wireless promotions, evidence that can play a
significant and constructive role in the development of future wireless marketing
communication strategies. Thus, the study’s methodology aims at investigating consumers’
cellular phone usages based on gender and information search behavior that may influence
their attitudes toward wireless promotions.

II. LITERATURE REVIEW

Past studies have suggested that males and females use their cellular phones
differently (Khoo and Senn, 2004; Madell and Muncer, 2004). Madell and Muncer (2004)
found females were more likely to own a cellular phone than males. They also found a
significant relationship between the way consumers used messaging services and e-mail
services. Khoo and Senn (2004) found females showed more negative attitudes than males
when exposed to sex-related text messages.

H1: Males and females would display different attitudes toward mobile promotions.

Past studies have indicated that consumers form their attitudes toward Internet
advertising based on their experiences with it, with informativeness and enjoyment being the
two main determinants of those attitudes (Brackett and Carr, 2001; Schlosser, Shavitt, and
Kanfer, 1999). Brackett and Carr (2001) found that Internet advertising was perceived as a
valuable source of information but also as more irritating than advertising carried by
traditional media. Schlosser, Shavitt, and Kanfer (1999), however, found Internet advertising
was perceived as more informative and trustworthy than advertising carried by traditional
media.

Tsang, Ho, and Liang (2004) measured consumers’ attitudes toward mobile
advertising and revealed that consumers perceived receiving advertisers’ text messages as
disturbing. However, they also indicated that entertainment, informativeness, irritation, and
credibility features of messages directly influenced consumers’ positive attitudes toward
cellular phone usages when advertisers delivered text messages with consumers’ permissions.
Similarly, Barwise and Strong (2002) found that mobile advertising generating favorable
impressions improved consumers’ brand attitudes and increased brand awareness when
consumers permitted the delivery of mobile advertising. In other words, with a prior
permission, most cellular phone users not only read the incoming text messages but also
responded to them.

H2: There would be a correlation between consumers’ experiences with cellular phone
usages and attitudes toward mobile promotions.

Research has studied the similarities and differences between online marketing and
mobile marketing. Tsang, Ho, and Liang (2004) believe that “Internet and mobile advertising
are emerging media used to deliver digital texts, images, and voices with interactive,
immediate, personalized, and responsive capabilities” (page 68). Although both channels
allow advertisers to personalize their messages and analyze behavior patterns, Internet
advertising provides an unlimited amount of data delivery at little cost (Yoon and Kim, 2001),
whereas mobile advertising is more suitable for campaigns that are time and location sensitive
(Tsang, Ho, and Liang, 2004).

H3: There would be a positive correlation between consumers’ attitudes toward
Internet and mobile promotions.

Exploratory information seeking is a “process which satisfies consumers’ cognitive
stimulation needs through the acquisition of relevant knowledge out of curiosity”
(Baumgartner and Steenkamp, 1996, page 123). Exploratory behaviors take place when
consumers want to reduce or increase the stimulation level based on environmental factors
(Raju, 1980). Research suggests that highly educated individuals with higher income level are
more likely to engage in exploratory behavior, whereas intolerant individuals with dogmatic
values do not display exploratory tendencies (Steenkamp and Baumgartner, 1992).
Exploratory behavior can be manifested in seven categories: repetitive behavior proneness;
innovativeness; risk taking; exploration through shopping; information seeking; brand
switching; and interpersonal communication (Baumgartner and Steenkamp, 1996). In order to
satisfy their curiosity about new offers, consumers usually engage in exploratory information
seeking before they accept the offers. Since there is no existing study
investigating the relationship between the exploratory information seeking behavior and
consumers’ attitudes toward mobile advertising, this study asks a research question.

RQ: Is there a relationship between exploratory information seeking behavior and
consumers’ attitudes toward mobile marketing?

III. METHODOLOGY

A survey was developed and administered to the study’s respondents using items
gathered from the literature, adapted when necessary to the specific focus of this study. All
respondents received an informed consent form from the study’s principal investigators prior
to participating in this study. Once the respondents agreed to participate, they filled out all
questions at their own pace. A total of 205 college students were recruited from
communication and psychology classes at a northeastern university, and 184 respondents’
responses were used. Twenty-one respondents’ responses were dropped because of missing
data. Research credits were given to students who participated in this study.

The survey administered to the respondents used published scales (Baumgartner and
Steenkamp, 1996; Tsang, Ho, and Liang, 2004). First, the respondents were asked several
questions regarding their experiences and attitudes toward their cellular phone usages. Next,
respondents’ perceptions about the Internet and specific functionalities of a cellular phone
were measured. The third section of the questionnaire measured respondents’ past experiences
with Internet promotions and cellular phone usages and their behavioral intentions toward
accepting certain promotional offers.

Seven dependent variables were measured in this study. The respondents were asked
whether they would be interested in reading breaking news on their cellular phones without
additional charge with a bipolar, 7-point semantic differential scale. The other six dependent
variables were respondents’ attitudes toward various wireless promotions including receiving
text messages, receiving multi-media messages, receiving coupon, participating in
sweepstakes, receiving product information, and receiving free downloads. Respondents’
gender was asked and coded as the independent variable. Moreover, nine covariates were used
in this study.
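As an illustrative sketch of how such coded responses might be summarized (the column names and data below are hypothetical, not the study's actual dataset), group means and standard deviations on a 7-point item can be computed by gender with a simple group-by:

```python
import pandas as pd

# Hypothetical coded responses: gender (0 = female, 1 = male) and one
# 7-point semantic differential item (1 = not interested, 7 = very interested).
df = pd.DataFrame({
    "gender": [1, 1, 1, 0, 0, 0],
    "news_interest": [6, 5, 4, 4, 3, 2],
})

# Mean and standard deviation of the dependent variable by gender,
# analogous to the descriptive statistics reported in the Results section.
summary = df.groupby("gender")["news_interest"].agg(["mean", "std"])
print(summary)
```

The same pattern extends to all seven dependent variables by listing additional columns in the selection.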

IV. RESULTS

This study used a single MANCOVA test with respondents’ interests in reading
breaking news on their cellular phones, attitudes toward receiving mobile text messages,
receiving mobile multi-media messages, receiving mobile coupon, participating in mobile
sweepstakes, receiving mobile product information, and receiving mobile free downloads as
the dependent variables. Gender was used as the fixed variable, and nine covariates were used
for data analysis. An advantage of using a single MANCOVA test was that the correlations
among the set of seven dependent variables were considered simultaneously.
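To illustrate the Wilks' λ statistic that underlies these multivariate tests, the following sketch with synthetic data (not the study's) computes λ for a simple two-group, three-variable MANOVA as det(E)/det(E + H), where E is the within-groups SSCP matrix and H the between-groups SSCP matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic scores: two groups (e.g., male/female), three dependent variables.
g1 = rng.normal(5.0, 1.0, size=(40, 3))   # group 1
g2 = rng.normal(4.0, 1.0, size=(40, 3))   # group 2
data = np.vstack([g1, g2])
grand_mean = data.mean(axis=0)

# Within-groups SSCP matrix E: deviations from each group's own mean.
E = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0)) for g in (g1, g2))
# Between-groups SSCP matrix H: group means versus the grand mean.
H = sum(len(g) * np.outer(g.mean(axis=0) - grand_mean,
                          g.mean(axis=0) - grand_mean) for g in (g1, g2))

# Wilks' lambda lies in (0, 1]; small values indicate strong group separation.
wilks = np.linalg.det(E) / np.linalg.det(E + H)
print(round(wilks, 3))
```

In practice such statistics come from a statistical package (the study reports SPSS-style output); this sketch only shows where the λ values in Table I come from.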

Box's test of equality of covariance matrices (p = .052) indicated that the observed
covariance matrices of the dependent variables did not differ significantly across groups. H1
was supported as
there was a main effect (Wilks’ λ = .829) for the independent variable, gender, F (7, 167) =
4.916, p = .000, based on the multivariate tests (Table I). It was concluded that the mean
vectors for the gender variable were not equal. Overall, the set of means between male and
female respondents were different. Consequently, the gender differences with respect to the
dependent variables were established, and the individual univariate tests of between-subjects
effects could further determine on which variables male and female respondents differ. H2
and H3 were also supported since two covariates, frequency of receiving mobile text
messages (Wilks’ λ = .861), F (7, 167) = 3.85, p = .001, and respondents’ attitudes toward
participating in sweepstakes (Wilks’ λ = .681), F (7, 167) = 10.883, p = .000, were statistically
significant respectively based on the multivariate tests.

Table I. Multivariate Tests

Effect Wilks' λ F (7, 167) p Power


Intercept .882 3.186 .003 .945
Frequency of receiving text messages .861 3.850 .001 .979
Frequency of reading text messages .958 1.044 .402 .441
Frequency of participating in sweepstakes .966 .843 .553 .356
Frequency of reading text messages .934 1.683 .116 .679
Frequency of blocking unwanted e-mails .951 1.231 .289 .517
Frequency of signing up for free offers .970 .750 .630 .317
Frequency of signing up to receive product information .974 .625 .735 .264
Attitude toward participating in online sweepstakes .687 10.883 .000 1.000
Information seeking behavior .904 2.529 .017 .873
Gender .829 4.916 .000 .996

Based on the individual univariate tests of between-subjects effects, male respondents
(M = 5.07, SD = 2.06) were more interested in reading breaking news on their cellular phones
than female respondents (M = 3.86, SD = 2.06), F (1, 183) = 20.68, p = .000. Male
respondents (M = 3.65, SD = 1.69) were also more interested in receiving product information
on their cellular phones than female respondents (M = 2.87, SD = 1.73), F (1, 183) = 8.49, p =
.004. The more often the respondents received text messages on their cellular phones, the
more they would be willing to read breaking news on their cellular phones, F (1, 183) = 6.755,
p = .01, and the better attitudes they would form toward receiving text messages on their
cellular phones, F (1, 183) = 10.32, p = .002. Moreover, the better attitudes the respondents
generated toward participating in sweepstakes by giving out their personal e-mails on the
Internet, the better attitudes the respondents would form toward receiving multi-media
messages, F (1, 183) = 6.881, p = .009, receiving mobile coupon, F (1, 183) = 17.502, p =
.000, participating in mobile sweepstakes, F (1, 183) = 60.347, p = .000, receiving mobile
product information, F (1, 183) = 28.072, p = .000, and receiving free downloads, F (1, 183) =
9.57, p = .002.

Finally, this study asked a research question regarding the relationship between
information seeking behavior and respondents’ attitudes toward mobile promotions. The
results revealed that there was a main effect for information seeking behavior as a covariate
(Wilks’ λ = .904), F (7, 167) = 2.529, p = .017. Based on the individual univariate tests, the
higher information seeking behavior the respondents exhibited, the more they would be
willing to read breaking news on their cellular phones, F (1, 183) = 6.303, p = .013. The
respondents would also form better attitudes toward receiving mobile coupon, F (1, 183) =
7.816, p = .006, and receiving free downloads, F (1, 183) = 7.359, p = .007, if they exhibited
higher information seeking behavior.

V. CONCLUSION

This study raises hypotheses and a research question as to how consumers’ gender,
attitudes toward Internet promotions, cellular phone usages, and information seeking behavior
affect their attitudes toward mobile promotions. The results suggest that male consumers may
be more interested in reading breaking news on their cellular phones than female consumers.
Male consumers may also be more interested in receiving product information on their
cellular phones than female consumers. However, this study also suggests that gender is
neither a perfect predictor nor an appropriate segmenting criterion for consumers’ attitudes
toward receiving text messages, receiving multi-media messages, receiving mobile coupon,
receiving free downloads, and participating in sweepstakes.

The more often consumers receive text messages on their cellular phones, the more
likely they are willing to read breaking news and receive text messages on their cellular
phones, which suggests a possibility that consumers may get used to receiving text messages.
Moreover, the better attitudes consumers have toward participating in sweepstakes by giving
out their personal e-mails on the Internet, the better attitudes they may form toward receiving
messages, mobile coupon, product information, and free downloads and participating in
mobile sweepstakes. If consumers have a higher tendency to seek information, they are more
likely to read breaking news and receive mobile coupon and free downloads via their cellular
phones.

The results broach some important implications for advertisers to consider in
designing their advertising messages and selecting media to reach their target audiences. First,
advertisers need to understand that male and female consumers may use their cellular phones
differently. Male consumers may use their phones for getting news and product information
while female consumers may focus on practical usages of a cellular phone such as
communicating with others. The ability to associate or involve consumers with their
advertising messages is likely to be more effective when consumers’ potential needs and
advertisers’ offers are related to each other in the mind of consumers. Thus, promoting
different features of a cellular phone based on gender’s different needs of using a cellular
phone may enhance the persuasiveness of mobile promotions.

It is also useful to select the Internet as the medium to disseminate promotional
messages. Due to the interactive function of Internet advertising, consumers can control the
information they wish to see. A prototypical ad may feature a male and a female consumer
thinking about different features of their ideal cellular phones. Consumers who are intrigued
by this ad may pick the person they feel associated with and then seek more information
about a specific cellular phone’s features and plans. This step can then open a dialogue
between advertisers and consumers when consumers’ personal information and e-mails are
obtained with their permissions. Thus, relatively few repetitions of the ideal features via e-
mails provide advertisers with opportunities to persuade motivated consumers and economize
on media time.

When interpreting the findings from this study, some of the limitations should be taken
into consideration. One of the limitations has to do with the geographical representation of the
study’s sampling. This study used a college-based sampling, and no assumptions were made
about network coverage among different countries. Thus, the results could not be reliably
extrapolated to other countries. A related issue has to do with different countries’ mobile
tariff plans: different carriers may have different plans and fees for receiving text messages
around the globe. This study did not examine male and female consumers’ acceptable levels
of mobile plans, and irritation levels may differ between genders if male and female
consumers have different expectations of those plans and fees. These issues suggest that
future research should focus on studying cultural differences among cellular phone users. As
one of the reviewers suggested, future research should also attempt to obtain available market
research statistics from the likes of Vodafone for international comparisons on cellular phone
usages.

Finally, this study did not account for differing levels of product involvement that the
respondents might have with their cellular phones. It is possible that this factor affected
respondents’ responses and thus moderated their attitudes toward mobile promotions.
Consumers may develop a variety of brand associations that are subsequently paired based on
their perceptions of wireless promotions. In this case, an additional extension of this study lies
in the interactions between different types of pre-exposed brand association and different
types of post-exposed brand association among cellular phone brands. It is possible that
different types of brand association will materialize and mediate consumers’ attitudes toward
mobile promotions.

REFERENCES

Barwise, Patrick, and Strong, Colin. “Permission-Based Mobile Advertising.” Journal of
Interactive Marketing, 16(1), 2002, 14-24.
Baumgartner, Hans, and Steenkamp, Jan-Benedict E. M. “Exploratory Consumer Buying
Behavior: Conceptualization and Measurement.” International Journal of Research in
Marketing, 13(2), 1996, 121-137.
Brackett, Lana K., and Carr, Benjamin N., Jr. “Cyberspace Advertising vs. Other Media:
Consumer vs. Mature Student Attitudes.” Journal of Advertising Research, 41(5),
2001, 23-32.
Khoo, Pek N., and Senn, Charlene Y. “Not Wanted in the Inbox! Evaluations of Unsolicited
and Harassing E-mail.” Psychology of Women Quarterly, 28(3), 2004, 204-214.
Madell, Dominic, and Muncer, Steven. “Back from the Beach but Hanging on the Telephone?
English Adolescents’ Attitudes and Experiences of Mobile Phones and the Internet.”
Cyberpsychology and Behaviour, 7(3), 2004, 359-367.
Raju, P. S. “Optimum Stimulation Level: Its Relationship to Personality, Demographics, and
Exploratory Behavior.” Journal of Consumer Research, 7(4), 1980, 272-282.
Schlosser, Ann E., Shavitt, Sharon, and Kanfer, Alaina. “Internet Users’ Attitudes toward
Internet Advertising.” Journal of Interactive Marketing, 13(3), 1999, 34-54.
Steenkamp, Jan-Benedict E. M., and Baumgartner, Hans. “The Role of Optimum Stimulation
Level in Exploratory Consumer Behavior.” Journal of Consumer Research, 19(3),
1992, 434-448.
Tsang, Melody M., Ho, Shu-Chun, and Liang, Ting-Peng. “Consumer Attitudes toward
Mobile Advertising: An Empirical Study.” International Journal of Electronic
Commerce, 8(3), 2004, 65-79.
Yoon, Sung-Joon, and Kim, Joo-Ho. “Is the Internet More Effective than Traditional Media?
Factors Affecting the Choice of Media.” Journal of Advertising Research, 41(6),
2001, 53-60.

E-BUSINESS BASED SME GROWTH:
VIRTUAL PARTNERSHIPS AND KNOWLEDGE EQUIVALENCY

Zoe Dann, Liverpool John Moores University
z.dann@ljmu.ac.uk

Paul Otterson, Liverpool John Moores University
p.j.otterson@ljmu.ac.uk

Keith Porter, Liverpool John Moores University
j.k.porter@ljmu.ac.uk

ABSTRACT

This article describes a research programme investigating the use of e-Business by
SMEs to promote their growth through SME partnerships. It shows how e-business based
SME clusters can form collaborative, “virtual” organisations. The key issues of e-business for
SMEs are defined and described. It then goes on to look at how these are used to approach
working in partnerships and networks. It shows how SME knowledge equivalency is needed
for successful partnerships and describes the concept of Core Competence Knowledge
Equivalency (CCKE) for success in “virtual” partnership. The development and use of a
CCKE self-assessment tool is described. Other influencing factors are examined. A practical
case study of five SME virtual clusters is used as evidence.

I. INTRODUCTION

The UK Government follows one of the generally accepted definitions of a Small-to-Medium
size Enterprise (SME): “…an enterprise having between 1 and 250 employees”.

Most will agree that E-business and the Internet are potential new ways of working,
both intra-organisationally and inter-organisationally, to deliver growth and development.
These new ways of working revolve around partnerships and virtual networks. To initiate and
facilitate these partnerships, SMEs have to address some major issues:
• Understanding and exploitation of e-business systems and technology requirements
• Knowledge equivalency: the need for comparative levels of capability in key areas.
SMEs generate a substantial share of European GDP and they are a key source of new
jobs as well as a fertile breeding ground for entrepreneurship and new business ideas. There is
therefore cause for genuine concern about the consequences if SMEs were to miss the
opportunities offered by ICT and e-business to raise productivity and to foster innovation.
The UK’s Small Business Service (2001) reviewed the use of SME websites and found that:
• 34% of SMEs used a website to advertise products/services
• 34% used them for general publicity
• 14% used them for customer support and liaison
• 18% of them were actually trading online
• Only 4% considered themselves to be very successful in e-business

Thus, most SMEs are in the very early stage of e-business, with few of them fully
utilising the website as an efficient marketing tool. Even fewer have integrated e-business.

Jeffcoate et al (2002) claim that most SMEs have been slow to adapt to the Web. UK SMEs
have almost the same total sales turnover as large companies with fully integrated e-business
systems, but fewer SMEs have adopted e-business than in the USA, and only 5% have what
may be described as a fully integrated e-business system.

It would seem that the key benefits of e-Business for SMEs are new
business/customers, improved profitability, competitiveness, improved efficiency and the
ability to create partnerships. Despite all these advantages, e-Business has had a relatively
poor uptake by UK SMEs with lack of knowledge being a key barrier. Any advantages that
are gained are derived from using e-Business as an extension of business strategy rather than
being technology driven. In this research we wanted to explore this new way of working via
e-based partnerships. We have been working with some 200 SMEs, based in Merseyside, UK,
to try to define how they can use e-Business and e-Technologies as the basis for collaboration
in SME partnerships and in particular how they approach working in partnerships and
networks. Success in this seems to revolve around SME knowledge equivalency; the concept
of Core Competence Knowledge Equivalency (CCKE) is a key element in successful
“virtual” partnerships. Our development and use of a CCKE self-assessment tool is described
via a case study of five SME partnerships involving some 25
SMEs. Finally, we look at factors other than e-capability that influenced the clusters.

II. CRITICAL ISSUES OF E-BUSINESS IN SMES

Fillis et al (2003) claim that it is the development of appropriate skills, investment in
staff training and poor knowledge of the Internet process that are central barriers to e-business
implementation and growth. SME networks should create a value chain with suppliers,
customers, partners and even competitors, internally and externally. The real challenge is to
integrate sales and procurement processes (including partners) electronically. It is clear that
there has to be a balance between confidentiality and the sharing of knowledge and
information, which is essential for successful networking (Observatory of European SMEs,
2002).

The fact that there are many critical factors for success in any e-Business
transformation is actually a limiting influence. SMEs have to have an appropriate level of e-
Business competence to allow them to develop their e-Business systems and technology. This
is further complicated when SMEs try to enter SME partnerships. If they do not have
equivalent skills/knowledge, problems occur particularly in respect of the level of e-business
up-take.

III. E-BUSINESS BASED NETWORKS; THE E-BUSINESS ENVIRONMENT

The significance of the computer-mediated data networks that enable such networked
organisations (Snow et al 1992) and co-ordination at arm's length was recognised by Fulk and
DeSanctis (1995). Network technology has evolved from ‘hard wired’ systems such as
Electronic Data Interchange (EDI) to the ubiquity of the Internet and its common denominator
protocol TCP/IP. This in turn has led to the ability to easily and cheaply dissolve and re-
establish virtual co-ordinating relationships and to the development of dynamic network
organisations (Benjamin and Wigand, 1995). The huge benefit of the dynamic network
organisation is the reduced asset specificity and the resultant increase in flexibility. This is
the essence of the “Virtual Enterprise” (VE). Whilst our work is not true VE, we use the VE
characteristics as its basis. Camarinha-Matos and Afsarmanesh (1999) describe a VE as a
"temporary alliance of enterprises that come together to share resources and skills or core
competencies in order to better respond to business opportunities, and whose cooperation is
supported by computer networks." They emphasise its transitory nature, as does Byrne
(1993).

The VE is a diverse population of organisations, each tightly defined by its core
competences (Moore, 1996). It exists in an opportunity environment (Moore, op. cit.),
interacting in a constant sequence of transient relationships, each motivated by a particular
market opportunity.

There is a key requirement that any e-Business based partnership is choreographed to
create a self-correcting, feedback-based model. Choreography is more collaborative in nature,
where each party involved in the process describes the part they wish to play in the interaction
and no one party owns the interaction. This is where knowledge equivalency is critical. In
the case of partnerships, especially VE and/or e-based partnerships, core competencies need to
be both appropriate and compatible (equivalent) across the participating organisations. Core
competencies are knowledge (and to some extent skills) based.

IV. RESEARCH METHODOLOGY

We wished to develop a method that would allow us to compare the level of a range
of CKFs within a group of SMEs participating in a potential e-based partnership. After much
debate, we finally turned to the Supply Chain Management (SCM) concept. SCM is
inextricably linked with the concept of core competence best practice. For supply chains to
function to maximum benefit to all partners, there is much evidence, (Kanter, 1994, Macbeth
and Ferguson,1994, Roy,1992) of the need for close relationships between partners. This led
to the need for the establishment of realistic working standards and practices between
companies (Lamming, 1993)
Andersen (1999) describes an exercise attempting to benchmark best practice (which
closely relates to our objectives) in SCM and also identified key supplier characteristics. This
work happened to be in an area where SMEs predominate.

V. THE CORE COMPETENCIES KNOWLEDGE EQUIVALENCY (CCKE) TOOL

The details of the development of the assessment tool have been reported elsewhere
(Porter & Barclay, 2003). In the next section we explain how it was used within SME
partnerships.

In ongoing work we had an applied research programme on Internet-based trading
which worked with some 100 SMEs to enhance their e-commerce/business capability.
Halfway through this three-year programme, we had the idea of selecting 30 of the most
promising of the SMEs and organising them into five e-based trading clusters with five or six
SMEs in each cluster. The intention was to group similar companies with a common interest
and then hold facilitated meetings centred on an e-business area of interest. Initial talks with
companies made it clear that participants would not engage unless there was a clear non-
competing environment/agreement. In the end, five clusters were formed with common
interests and matching capabilities being the main basis of selection.

As far as possible, the e-based clusters were left to organise themselves. During the
initial cluster organisation meetings, one of the research group took the role of facilitator to
encourage individuals to participate. A provisional agenda was drawn up to provide some
focus for the meetings. Essentially these first meetings were used to introduce individuals and
discuss areas of potential interest and activity.

The CCKE was run on all the companies both as individual SMEs and as part of a
cluster. Examples of the results from an individual cluster are given below in Table 1. This
shows the actual score of one of the SMEs as compared to the score needed to meet 50% of
the “idealised” supply chain benchmark standard.

Core Competencies             Code   Actual Score   Score to Access 50%   Shortfall
Business Considerations         1        17                 24                 7
Financial                       2        40                 36                -4
Logistics                       3        33                 50                17
Customer Service Management     4        26                 31                 5
Technical                       5        20                 25                 5
Quality                         6        34                 34                 0
Suppliers                       7        38                 36                -2
IT Systems                      8        31                 38                 7
Development                     9        10                 14                 4
Human Resources                10        26                 31                 5
Total                                   275                319                44

Table 1: Single SME Knowledge and Skills Shortfall
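The shortfall calculation behind Table 1 can be sketched as follows. This is a minimal illustration using the scores from the table; the dictionary layout and variable names are ours, not part of the CCKE tool itself:

```python
# Sketch of the CCKE shortfall calculation behind Table 1: each core
# competence score for the SME is compared with the score needed to reach
# 50% of the "idealised" supply-chain benchmark standard.

actual = {"Business Considerations": 17, "Financial": 40, "Logistics": 33,
          "Customer Service Management": 26, "Technical": 20, "Quality": 34,
          "Suppliers": 38, "IT Systems": 31, "Development": 10,
          "Human Resources": 26}
needed = {"Business Considerations": 24, "Financial": 36, "Logistics": 50,
          "Customer Service Management": 31, "Technical": 25, "Quality": 34,
          "Suppliers": 36, "IT Systems": 38, "Development": 14,
          "Human Resources": 31}

# Positive shortfall = knowledge gap; negative = competence exceeds benchmark.
shortfall = {k: needed[k] - actual[k] for k in actual}
gaps = sorted((v, k) for k, v in shortfall.items() if v > 0)
print("Largest gap:", gaps[-1][1], gaps[-1][0])   # Logistics, 17
print("Total shortfall:", sum(shortfall.values()))  # 44
```

Repeating the same calculation for each SME gives the per-competence profiles from which development programmes can be planned.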

This shows significant knowledge shortfalls in Logistics and the related IT Systems.
Thus, if this company wished to enter an e-based partnership where logistics (especially) was
important, there would be a prohibitive knowledge gap that had to be addressed.
This analysis was run at the start of the cluster activity on each individual company
(within each cluster) for self-awareness and development. By identifying knowledge gaps, we
were able to draw up development programmes and methodologies to address and/or
compensate for shortfalls. We will not go into the detail of this, as this is not the purpose of
this article.
After 12 months all the partnerships had succeeded in working together to some extent,
as e-based partnerships and had attracted business (some to a greater extent than others) that
they would not have gained as individual SMEs. As well as assessing the business growth
generated by their collaborative work, we also questioned them as to how they felt their
partnership was working. Using these two factors, we were able to assess that Cluster 3 was
the best and Cluster 1 the worst cluster. Table 2 shows the initial assessment scores
for both of these clusters together with the average score for all five clusters:

Core Competencies             Code   Average Score   Cluster 1   Cluster 3
Business Considerations         1        19              20          21
Financial                       2        43              32          37
Logistics                       3        35              41          50
Customer Service Management     4        30              27          33
Technical                       5        26              21          27
Quality                         6        33              31          35
Suppliers                       7        37              30          36
IT Systems                      8        33              28          40
Development                     9        14              12          16
Human Resources                10        29              25          32
Total                                   299             267         327

Table 2: Results from Cluster Application

Cluster 1 had the lowest core competence score and this is reflected in problems with
IT Systems and Logistics use (this is the biggest single difference between the two clusters).
From this it is clear that the best performing cluster (Cluster 3) has a much higher average
core competencies score with fewer knowledge gaps. Of interest is the fact that the score for
Business Considerations is almost the same for all clusters. Additionally, Cluster 1 scored
better than any of the clusters on Financial Management.
Most importantly, these were supposedly all e-based clusters, and the scores for IT
Systems and Logistics were critical and reflect the success of Cluster 3. All the SMEs in
Cluster 3 had in-house ICT expertise.
Cluster 3 was the most successful in commercial terms, as it bid for and won a £1.3M
contract that none of its members could have delivered as individual SMEs.
Cluster 1 was the only cluster that did not hold regular meetings of all participating
staff.
The initial learning from this limited data and time frame is:
• The results from the CCKE tool correlate well with the operational evidence
• Face-to-face cooperation (not by e-mail) is important
• Having an expert in the key area of the cluster is beneficial
• The mentoring/facilitating roles are important.

It can be seen that knowledge (and expertise) is critical to the success of any enterprise
and a company needs to be aware of the amount and depth of knowledge at its disposal. It
does appear that the CCKE is sensitive enough to identify the operational knowledge gaps in
the clusters. However, it is too early to say that this is the definitive measure as personal
interaction is obviously an important element. We found that it was immensely difficult for
any cluster to operate without external intervention that the companies trusted (i.e. the
university staff). However, once a basic trust had been established, the SMEs were much
more likely to collaborate on projects. What can be reasonably asserted is that they were
much more likely to enter a “partnership” where they saw that they had a share of the power.

VI. CONCLUSIONS

The CCKE Assessment Tool was most useful in determining equivalency of capability
for assembling the clusters. It was also useful in allowing the companies to assess their
expertise and deficiencies. All the companies saw the potential advantage of working in

collaboration. However, despite the proven and apparent advantages, we met with significant
resistance to this whole process. The reasons given for this were:
• Too much time, effort and energy would be expended to meet any minimum standards
requirement: “This could be better used developing the business by traditional routes”.
• Entering into such a process “removes the control of our destiny from us”.
• “We don’t trust them”, e.g. the lead bidder may take future business.
• The head of any Supply Chain we may access is too powerful.
Thus it was the issues of trust and power that were the major problems.

While the work we did was not true virtual enterprise based, it was e-business based
and has lessons for the virtual enterprise activity. From our experience with this
project, the development of SME partnerships turns on some major issues:
• It is unlikely to happen unless a significant business opportunity arises.
• If the opportunity arises outside of existing working relationships, then there are
the problems of power and especially, trust.
• An external catalyst seems to be essential in creating the partnership.
• Higher-level technical capability seemed to promote good working relationships.
• Formation of partnerships tends to be driven by community, enterprise or
technology factors.
This last observation is in broad agreement with the findings of Lockett & Brown (2003).

Getting SMEs to collaborate outside of a major contract allows SMEs to build mutual
trust to the benefit of all. However, achieving this is an immensely difficult task. Whilst
some will become involved in a formal partnership such as a Supply Chain, most will not
because they have no power in such a system unless they have “expertise leverage” (which
most do not). The extreme alternative is one of “everybody doing their own thing” which
means reliance on organic growth, ruling out the high growth potential from “partnerships”.

REFERENCES

Andersen B, Fagerhaug T, Randmael S, Schuldmaier J & Prenninger J, “Benchmarking
        supply chain management: finding best practices”, The Journal of Business and
        Industrial Marketing, Vol. 14, Nos. 5/6, 1999, pp. 378-389.
Benjamin R I & Wigand R T, “Electronic markets and virtual value chains on the information
        superhighway”, Sloan Management Review, No. 2, 1995, pp. 62-72.
Byrne J A, “The Virtual Corporation”, Business Week, February 1993.
Camarinha-Matos L & Afsarmanesh H, “The virtual enterprise concept”, in Camarinha-Matos
        L & Afsarmanesh H (eds.), Infrastructures for Virtual Enterprises, Kluwer Academic
        Press, 1999.
e-Europe Go Digital, “Final report of the e-business policy group: Benchmarking national
        and regional e-business policies for SMEs”, European Commission, 2002, at:
        <http://europa.eu.int/comm/enterprise/ict/policy/benchmarking/final-report.pdf>.
Fillis I, Johansson U & Wagner B, “A conceptualisation of the opportunities and barriers to
        E-business development in the smaller firm”, Journal of Small Business and Enterprise
        Development, Vol. 10, No. 3, 2003, pp. 336-341.
Fulk J & de Sanctis G, “Electronic Communication and Organisational Forms”, Organisation
        Science, Vol. 6, No. 4 (July/August), 1995, pp. 337-349.
Jeffcoate J, Chappell C & Feindt S, “Best practice in SME adoption of e-commerce”,
        Benchmarking: An International Journal, Vol. 9, No. 2, 2002, pp. 122-132.
Kanter R M, “Collaborative advantage: the art of alliances”, Harvard Business Review,
        July-August, 1994, pp. 96-108.
Lamming R C, Beyond Partnership: Strategies for Innovation and Lean Supply, Prentice-
        Hall, Hemel Hempstead, 1993.
Lockett N J & Brown D H, “Innovations affecting SMEs and E-business with reference to
        Strategic Networks, Aggregations & Intermediaries”, Lancaster University
        Management School Working Paper, 2003/020.
Macbeth D K & Ferguson N, Partnership Sourcing: An Integrated Supply Chain Approach,
        Pitman, Financial Times, London, 1994.
Macpherson A & Wilson A, “Enhancing SMEs’ capability: opportunities in supply chain
        relationships”, Journal of Small Business and Enterprise Development, Vol. 10, No. 2,
        2003, pp. 167-179.
Moore J F, The Death of Competition: Leadership and Strategy in the Age of Business
        Ecosystems, Harper Business, London, 1996.
Roy R & Whelan R, “Successful recycling through value chain collaboration”, Long Range
        Planning, Vol. 25, No. 4, 1992, pp. 75-87.
Small Business Service, “Small and medium-sized enterprise (SME) statistics for the UK”,
        2001, http://www.sbs.gov.uk/default.php?page=/press/news90.php
Snow C C, Miles R E & Coleman H J, “Managing 21st century network organizations”,
        Organizational Dynamics, Vol. 20, No. 3, 1992, pp. 5-20.
Acknowledgement
The authors would like to thank Professor I Barclay, Director, Merseyside SME Development
Centre, for his input in the preparation of this paper.

CHAPTER 7

ECONOMICS

IMPORTANT CHANGES IN THE U.S. FINANCIAL SYSTEM

Vincent G. Massaro, Long Island University
vincent.massaro@liu.edu

ABSTRACT

Despite a number of sharp changes in its economic environment during this new century,
the United States remains a well-positioned and growing economy. Among
the world’s major countries, the United States continues to grow rapidly. Its growth has taken
place in a world in which it has faced a decline in its stock market prices of more than $7
trillion (in a 3-year period) and a sharp move in its balance of payments that has left
it in a large negative trade position. Since the beginning of the year 2000, the balance of
payments deficit has risen from roughly $380 billion to $624 billion in 2004. The domestic
economy, despite an early recession, has been strengthened by a sharp increase in real estate
assets of well over $3 trillion. A key issue for the country is how these important changes will
affect the economy in future years.

I. INTRODUCTION

During the past five years, large and important changes have taken place in the United
States, both in the real and financial sectors of the economy. During the period, the U.S.
government has moved from having a surplus of almost $300 billion in the year 2000 to
having a deficit exceeding $360 billion in 2004. Also, since the beginning of the year 2000,
the balance of payments deficit of the United States has risen sharply. In the year 2000,
imports exceeded exports by $379.5 billion; in 2004, imports were $624 billion
greater than exports. In the first half of 2005 imports were running at a rate that suggested a
figure that could be $700 billion greater than exports (Flow of Funds Accounts of the United
States).

Households and nonprofit organizations began the year 2000 with more than $17.2
trillion in equity shares, and they ended the year 2002 with slightly under $10 trillion in
equity. Since then, equity shares have risen to nearly $14.4 trillion at the end of 2004. These
large changes reflect important changes that have taken place in the economy of the United
States during the past five years. They also indicate that there are important issues ahead for
the U.S. government in future years. Despite these large movements, the U.S. economy
suffered only a minor recession; indeed, so minor that it is questionable if there really was a
recession. And, even today, it is questionable if the large increases in prices of oil and gas
will lead the economy into recession. The majority view is that it will slow the growth of
GDP, but that GDP will remain positive during the period.

II. LOOK AT MORE THAN GDP

Earlier studies have shown that while changes in GDP are important for understanding
what is happening in the economy, other factors – independent of GDP – can also be
important. This is true both for understanding present economic activity and also for
understanding future activity. For example, in the United States in 1990, the consumer price
index rose at a 6.1% rate. The Federal Reserve responded by tightening monetary policy and sent
the country into a recession. Interestingly, during the same year, the prices of a number of
existing assets were falling and were expected to – and did – decline in the future. These
assets included land and commercial and residential real estate. For example, commercial real
estate prices in the North-East fell at 7% in 1990 and 18% in 1991; and, in the Pacific, they declined
at 2% in 1990 and 11% in 1991 (Bank for International Settlements).

Using a measure that combined these movements in asset prices (and weighted them
appropriately) with expected flows in the economy might well have led policymakers to
approach their decision to tighten with more caution. The economy might well have
experienced a slower rate of growth, but might well have avoided the recession that occurred.
A key lesson for the future is that while changes in GDP are important, it is also necessary to
assess changes in other key macroeconomic variables. We have, for
instance, mentioned the importance of changes in the prices of real estate and stocks
outstanding. Luckily, the changes in these variables had a relatively less serious effect on the
U.S. economy, largely because of important changes in other variables that had a
compensating effect. The effect of the huge drop in stock market prices between the years
2000 and 2002 of roughly $7.3 trillion was limited by several important offsetting factors.
First, there was an increase in value for other household assets of $6.3 trillion, of which real
estate assets were roughly $3.3 trillion (Goon, Massaro). In addition, the Federal budget
position changed from a surplus of 1.4 percent of GDP in 2000 to a deficit of 4.6 percent of
GDP in 2003. Also helping the economy, the federal funds rate declined from 6.5 percent at
the beginning of 2001 to 1 percent in 2003 (The Economist). These developments help to
explain why the decline in GNP was not more serious than would have been expected by the
huge decline in the stock market. The United States was, in retrospect, lucky in that the sharp
decline in over-valued stock market prices was neatly offset thanks, in part, to rising house
prices.
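A composite indicator of the kind suggested above might be sketched as follows. The weighting scheme and the sample readings are purely hypothetical assumptions for illustration; the paper does not specify weights:

```python
# Hypothetical composite indicator that combines an expected-flow measure
# (GDP growth) with changes in the prices of existing assets, all in percent.
# The weights below are illustrative assumptions, not estimates.

def composite_signal(gdp_growth, equity_change, real_estate_change,
                     w_flow=0.6, w_equity=0.2, w_real_estate=0.2):
    """Weighted combination of flow growth and asset-price changes."""
    return (w_flow * gdp_growth
            + w_equity * equity_change
            + w_real_estate * real_estate_change)

# A 1990-style reading: positive measured growth but falling asset prices.
signal = composite_signal(gdp_growth=1.9, equity_change=-5.0,
                          real_estate_change=-7.0)
print(round(signal, 2))  # -1.26: far weaker than the GDP reading alone
```

The point of such a measure is that a policymaker watching only GDP growth and CPI would see an economy needing restraint, while the composite reading would counsel caution about tightening.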

III. DEVELOPMENTS IN JAPAN

The overvalued stock market in Japan in 1989 fell even more sharply, and the price of real
estate assets also declined sharply. After a very difficult fifteen years, the value of the stock
market in Japan has now risen to slightly over one-third of its value in 1989 (it had been
well below one-third of its 1989 value before its recent increase). On October 13, 2005 the
Tokyo Nikkei Stock Average was 13449.24 (a net increase of 1960.48 during the year) (The
Wall Street Journal, October 14, 2005). Interestingly, prior to the collapse that began in 1990, the
Japanese economy was a favorable model for the rest of the world, having apparently good
growth with little inflation during the entire period of the 1980s. The tip-off of future changes
came when the prices of stocks and homes tripled between 1985 and 1990. The obvious
question, in retrospect, is whether policymakers should have responded earlier to these
changes, even though traditional measures of inflation and growth remained well-positioned
for future economic activity. Japan’s excellent economic growth in the 1980s was replaced by
a 15-year period of relatively slow growth.

IV. CONCLUSIONS

There have been substantial changes in important sectors of the United States
economy during the past six years. Equity prices have changed substantially, both up and
down. The balance of payments deficit of the country has risen sharply. Despite many large
changes in flow and asset prices, the rates of growth of the United States economy have not
been seriously affected. A key lesson from the behavior of the economy is that while changes
in the country’s GDP are important, it is also important to examine changes in other key

variables such as changes in real estate and stock market prices. It is also important to
examine policy changes that will affect the future growth of the economy, such as sharp
changes in tax policy and government surpluses and deficits.

Future economic activity in the United States will be heavily influenced by whether
the large rise in real estate prices will continue or be met by a correction. Movements in stock
market prices and the prices of other financial assets will also be important.

REFERENCES

A. BOOKS:
Bank for International Settlements, 63rd Annual Report, Basle, 14 June 1993.
Federal Reserve Statistical Release, Flow of Funds Accounts of the United States,
        Washington, D.C., September 21, 2005.
Massaro, Vincent G., “What We’ve Learned from Derivatives,” Business Research Yearbook,
        Vol. VII, International Academy of Business Disciplines, 2000.
Goon, Robert, and Vincent G. Massaro, “The Stock Market Bubble and Financial Challenges
        Ahead,” Business Research Yearbook, Vol. XI, International Academy of Business
        Disciplines, 2004.
Goon, Robert, and Vincent G. Massaro, “A Financial Framework for Measuring Inflation,”
        Business Research Yearbook, Vol. XII, International Academy of Business
        Disciplines, 2005.
B. JOURNAL ARTICLES:
The Economist, “Breaking the Deflationary Spell,” June 2003.
McDonough, William J., “The Global Derivatives Market,” Federal Reserve Bank of New
        York Quarterly Review, Autumn 1993.
Filardo, Andrew J., “Monetary Policy and Asset Prices,” Federal Reserve Bank of Kansas
        City Economic Review, Third Quarter 2000.
McGee, Suzanne, “Got a Bundle to Invest Fast? Think Stock-Index Futures,” The Wall Street
        Journal, February 21, 1995, pp. C1, 14.
The Wall Street Journal, Friday, October 14, 2005, Section C, p. 10.

ACCOUNTING FOR SUCCESS IN SPORTS FRANCHISING

Ilan Alon, Rollins College
ialon@rollins.edu

Keith L Whittingham, Rollins College
kwhittingham@rollins.edu

ABSTRACT

Franchising is a contractual method of business that abounds across multiple service
industries. The success of franchising as a method of expansion has been documented in
multiple industries through its wide use. The restaurant sector, the retail sector and the hotel
sector are traditional franchising industries. The area of professional sports is also amongst
the established industries that use franchising, but this industry has not been typically
examined by franchising researchers. We use regression analysis to examine the relationships
between sports franchises’ financial performance and a number of operational factors. In this
way, we contribute to both the franchising literature on “success” and our understanding of
sports franchising as an industry that applies franchising. Our results show that the franchise
value, revenues, and operating income are all negatively affected by the media percentage of
total revenues and positively affected by non-player operating expenses.

I. INTRODUCTION
Franchising in the United States can be generally defined as a method of doing
business in which the franchisor grants brand name and sometimes operational protocols to a
franchisee in return for fees and royalties. Recent years have brought about a plethora of
research articles and books about franchising (Preble and Hoffman, 1995; Swartz, 2000; Alon,
2005). A number of observations can be made from this research:
(1) Franchising accounts for 10% of the United States’ private sector
(2) Franchising is becoming a global phenomenon
(3) Franchising is growing in many sectors of the economy (as many as 70 industries
are engaged in franchising)
(4) Franchising has a significant or dominant presence in selected service industries
(5) Franchising is a successful model for doing business
While we know the many external/environmental factors of “success” in franchising -- such
as regulations, economic conditions, consumer demand, intellectual property protection, etc. --
little research has been done documenting organizational correlates of franchising success,
and even less has been done to empirically assess sports franchising success. An article by
Alon (2004) examined the key success factors of franchising in the retail sector and concluded
that organizational factors of the franchise system (age, royalties, fees, years to franchising,
proportion of franchising, and internationalization) can help explain the relative success rates
of companies. Other franchising researchers have focused on failure rates (Bates, 1995;
Castrogiovanni et al., 1993).

It is our intention in this paper to add to the body of evidence that exists on franchising
success/failure in general, and to help explain selected sports franchising success metrics –
franchise value, total revenue, and operating income – using a number of available
organizational factors, including the percentage of media income, player costs, non-player
operating costs, and league type.
II. METHODOLOGY
Study Sample
The sample under study in this investigation consisted of the 113 “Major League”
professional sports franchises in North America, in the sports of American Football,
Basketball, Baseball and Ice Hockey, governed by the NFL, NBA, MLB and NHL
respectively. The data was taken in the 2000 – 2001 period, and there were 26, 29, 28 and 30
teams in the four respective leagues at that time. The data consisted of a number of financial
measures for the franchises, including operating income, total revenue, franchise value, player
costs, total operating expenses and media revenues.

Analysis
In an effort to understand the operational factors that contribute to franchise success,
three major financial measures, total revenue, operating income and franchise value, were
individually modeled, using Ordinary Least Squares (OLS) Regression Analysis, as functions
of the other measures listed above. Media revenue was taken as a proportion of total revenue
and introduced as an explanatory variable. Additionally, two components of operating
expenses, player cost and non-player costs, were taken separately as explanatory variables.
Lastly, three dummy variables were used to indicate whether or not the sport played by the
franchise was football, baseball or basketball. If significant, coefficients for these dummy
variables would indicate the advantage or disadvantage of fielding teams in any of these three
sports with respect to the excluded sport of ice hockey.

Table I shows descriptive statistics for the financial variables discussed above. As can
be seen, these are all positively skewed, most quite significantly. To mitigate this positive
skew in the data, and to produce regression coefficients that could be interpreted as percentage
change in the response variable for each percentage change in the explanatory variables, Log-
Log regression was performed by taking the base-10 logarithm of each of the input and output
(non-dummy) variables described above. The variable Operating Income was offset by a value
of +15 in order to make all of its values positive prior to taking the base-10 logarithm.
Coefficients for the dummy variables for each sport (retained in original form) could be
interpreted as the percentage change in the response variable attributable to fielding a team in
the associated sport.

TABLE I. DESCRIPTIVE STATISTICS FOR MODEL VARIABLES

Variable Name   Description                        Mean      Std. Deviation   Skewness    Skewness
                                                                              Statistic   Std. Error
FrValue         Franchise Value                   146.47       54.822           0.279       0.227
TotRev          Total Revenue                      61.435      21.0724          0.485       0.227
OpInc           Operating Income                    6.965       9.4186          0.798       0.227
PlyrCst         Player Cost                        34.569      14.1696          0.474       0.227
MedRvPct        Media Revenue as % of Total        37.9778     16.13139        -0.114       0.227
NonPlrOpEx      Non-Player Operating Expenses      19.896       5.1567          0.738       0.227

III. EMPIRICAL MODELS AND ESTIMATED RESULTS

Table II shows the correlation table for the explanatory variables. The correlations
among the dummy variables for each sport are omitted in the table since we know these
variables must be negatively correlated due to their ipsative nature (i.e. a value of 1 for any
dummy variable must result in values of 0 for the other dummy variables). It must be noted

that a number of the correlations are significant, and a few (specifically some of the
correlations to the Football dummy variable) are large. In fact, in each of our regression
models the coefficient for the variable Football had a VIF (variance inflation factor) value that
was greater than 10, a sign of potential multicollinearity issues. This could negatively impact
our ability to make accurate predictions from regression models based on these variables.
However, if we alter the choice of omitted dummy variable (from Ice Hockey to Basketball
for instance) the VIF values all fall within acceptable limits, and the remaining coefficients of
our models remain virtually unchanged. The existence of equivalent models, with changes
only in dummy variables, suggests that any potential multicollinearity issues may not have
significant impact on the models.

TABLE II. CORRELATION TABLE FOR INDEPENDENT VARIABLES

                LogMdPct   LogPlyrCst   LogNonPlrOpEx   BasketBl   BaseBall   Football
LogMdPct           1         .559**         0.095         0.115      0.143     .554**
LogPlyrCst                     1            .404**       -.264**     0.065     .717**
LogNonPlrOpEx                                 1          -0.074      .349**     0.038

** Correlation is significant at the 1% level (2-tailed).
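The VIF screening described above can be sketched as follows: each regressor is regressed on the remaining ones (plus an intercept), and the variance inflation factor is 1/(1 − R²). Synthetic data stand in for the franchise variables here; the threshold of 10 is the rule of thumb used in the text.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X: regress column j on
    the other columns (with an intercept) and return 1 / (1 - R^2)."""
    n, k = X.shape
    factors = []
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ coef
        r2 = 1.0 - resid.var() / X[:, j].var()
        factors.append(1.0 / (1.0 - r2))
    return np.array(factors)

# Two nearly collinear columns push each other's VIF far beyond the
# rule-of-thumb threshold of 10, while an independent column stays near 1.
rng = np.random.default_rng(1)
a = rng.normal(size=200)
b = a + 0.05 * rng.normal(size=200)   # almost a copy of a
c = rng.normal(size=200)              # independent regressor
print(vif(np.column_stack([a, b, c])))
```

Swapping which dummy variable is omitted, as described above, changes which columns are near-collinear and hence which coefficients carry inflated VIFs.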

Three unique regression models were created using SPSS, one each for the base-10 log
variables for Franchise Value, Total Revenue and Operating Income. Two additional models
separately investigated the relationships for the cases where Operating Income was positive
and negative. The general form of all the models is given by:

DependentVar_n = α + (β1 × LogMdPct) + (β2 × LogPlyrCst) + (β3 × LogNonPlrOpEx)
                   + (β4 × Football) + (β5 × Baseball) + (β6 × Basketball)

where the explanatory variables are described as in Table I.
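A minimal sketch of estimating such a log-log model is shown below. The data are synthetic stand-ins (the paper's franchise dataset is not reproduced here), and plain least squares via NumPy replaces the SPSS runs; the +15 offset on operating income mirrors the transformation described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 113  # number of franchises in the study

# Synthetic stand-ins for the explanatory variables (illustrative only).
med_pct = rng.uniform(10, 70, n)    # media revenue as % of total revenue
plyr_cst = rng.uniform(15, 70, n)   # player cost ($M)
non_plyr = rng.uniform(10, 35, n)   # non-player operating expenses ($M)
football = (rng.random(n) < 0.25).astype(float)  # one of the sport dummies

# Log-log design matrix: base-10 logs of the continuous variables,
# dummy variables retained in their original 0/1 form, as in the paper.
X = np.column_stack([np.ones(n), np.log10(med_pct),
                     np.log10(plyr_cst), np.log10(non_plyr), football])

op_inc = rng.uniform(-10, 40, n)    # operating income, possibly negative
y = np.log10(op_inc + 15)           # +15 offset makes every value positive

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# Each slope on a logged regressor reads as an elasticity: the percentage
# change in the response per 1% change in that explanatory variable.
print(beta.shape)  # (5,)
```

The coefficient on the retained 0/1 dummy is instead read as the percentage change in the response attributable to fielding a team in that sport.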


Results of the regression analysis are shown in Table III. Significant regression models
were found to exist for all of the response variables, with ANOVA F-values significant below
1% for all cases except the model for Log of Operating Income for unprofitable franchises
(below 5% for that model).

Model For Franchise Value


Franchise Value was found to decrease by 0.139% for every 1% increase in Media
Percentage of Total Revenue, and to increase by 0.366% and 0.760% for every 1% increase in
Player Cost and Non-Player Operating Expenses respectively. Basketball and Football
participation yielded between 0.20% and 0.25% increase in franchise value compared to Ice
Hockey participation. Baseball participation yielded a negligible gain over Ice Hockey
participation.

TABLE III. OLS REGRESSION COEFFICIENTS

                         Response Variable in the Model
Model Coefficients    LogFrVal    LogTotRev    LogOpInc    LogOpIncPos   LogOpIncNeg
Constant                0.684       0.281        0.711       -0.866         0.490
LogMdPct               -0.139*     -0.138**     -0.364*
                      (-2.481)    (-2.719)     (-2.026)
LogPlyrCst              0.366**     0.438**     -0.545**
                       (6.728)     (8.870)     (-3.121)
LogNonPlrOpEx           0.760**     0.742**      1.359**      1.366*       -2.776*
                      (12.659)    (13.618)      (7.048)      (2.346)      (-2.560)
Basketball              0.207**     0.113**      0.300**      0.543*
                       (7.250)     (4.369)      (3.276)      (2.145)
Baseball                0.058       0.063*
                       (1.793)     (2.142)
Football                0.255**     0.136**      0.385**
                       (5.883)     (3.460)      (2.759)

Model Parameters        n=113       n=113        n=113        n=87          n=26
ANOVA F-value         180.94**    168.81**      14.14**       5.54**        2.68*
R²                      0.911       0.905        0.445        0.294         0.458

Notes: Coefficients with p-values > 10% not shown. T-values shown in parentheses.
* Coefficient significant below 5% level.
** Coefficient significant below 1% level.

Model for Total Revenue


Total Revenue was found to decrease by 0.138% for every 1% increase in Media
Percentage of Total Revenue, and to increase by 0.438% and 0.742% for every 1% increase in
Player Cost and Non-Player Operating Expenses respectively. Basketball and Football
participation yielded between 0.11% and 0.14% increase in Total Revenue compared to Ice
Hockey participation, while Baseball participation yielded a small but significant 0.06% gain
over Ice Hockey participation.

Model for Operating Income


Operating Income was found to decrease by 0.364% for every 1% increase in Media
Percentage of Total Revenue, and additionally to decrease by 0.545% for every 1% increase in
Player Cost. Operating Income was found to increase by 1.359% for every 1% increase in
Non-Player Operating Expenses. Basketball and Football participation yielded between 0.30%
and 0.39% increase in Operating Income compared to Ice Hockey participation, while
Baseball participation yielded no significant gain over Ice Hockey participation.

When only profitable franchises (positive operating income) were considered,


Operating Income was found to increase by 1.366% for every 1% increase in Non-Player
Operating Expenses, a very similar effect to that seen in the overall case for this independent
variable. No other financial variables were significant however. Of the dummy variables for
sport, only Basketball yielded a significant effect, in the form of a 0.543% increase in
operating income compared to Ice Hockey participation.

In the case of non-profitable franchises (negative operating income), Operating


Income was found to increase (smaller negative value) by 2.776% for every 1% increase in

Non-Player Operating Expenses, more than double the effect that was observed in the overall
case for this independent variable. No other financial or dummy variables were significant for
this response variable however. Note that there were only 26 non-profitable franchises among
our data, a smaller sample size than we would ideally be comfortable with.

IV. CONCLUSION
An examination of each of our variables across the multiple success measures yields
interesting results for some of our models. The media percentage of revenue had in all
cases a negative impact on our success measures: value, revenue and income. The largest
negative impact is on operating income. There seems to be a negative financial return
for overemphasizing media as a revenue stream. In other words, the more owners of a
franchise seek to make media revenue grow as a percentage of total revenue, the more likely
their total revenues, valuation and operating income will suffer.

Player costs help valuation and revenues but not operating income, according to our models. This means that franchise owners who concentrate on bringing in premium-priced players can benefit their valuation and total revenue, but may suffer a loss in operating income. Premiums on celebrity players have skyrocketed, leading to criticism from the general public and franchise owners alike. However, our data show that, despite the increases in premiums, franchise values and revenues depend heavily on these expensive players. Player operating cost has the largest (negative) coefficient in the operating income model.

Non-player operating expenses had a positive impact in all but one of our models. Generally speaking, spending on various non-player items to enhance the image of the franchise pays off in terms of valuation, revenue, and, most of all, operating income. This is particularly true when the franchise is already operating at a loss: there, non-player operating expenses reduce the magnitude of the operating loss, which is equivalent to an increase in operating income.

In terms of league, we used dummy variables with Ice Hockey as the comparison league. In general, Basketball and Football show higher valuations, total revenues, and operating income. Valuation is the measure most affected by league. Baseball, too, had higher total revenues than Ice Hockey, but an insignificant difference in valuation.

In summary, our various models were able to explain a large amount of the variation
in our dependent success variables. We explained over 90% of the variation in the franchise
value and total revenue using six organizational variables. Our model for operating income
explained somewhat less of the variability in that response variable, but still exhibited an R-
squared value around 45%. Our results show that even in a field as narrow as sports
franchising, on the one hand, wide variations exist in the relative impact of organizational
factors within sub-sectors of the industry – i.e., the different leagues – but, on the other hand,
a common structural framework can be developed to explain much of the variation of success
across the different leagues.


DO CHINESE INVESTORS APPRECIATE
MARKET POWER OR COMPETITIVE CAPACITY

Aiwu Zhao, Kent State University


azhao@kent.edu

Jun Ma, Kent State University


jma@bsa3.kent.edu

ABSTRACT

This paper examines the market-based and resource-based views of the value-generating process of Chinese public companies, a sample of firms operating in a transition economy with controlled institutional differences. By incorporating institutional influences into the value-generating process, our study finds that resource-based variables play a more important role in determining firm value than market-based variables, regardless of the government's stake in the firm. The firm's value is mainly determined by the
size of the firm, management ownership that is not traded publicly, and the share held by the
largest shareholders. The results also indicate that China’s stock market is still in an immature
stage. The market valuation of the firm is mainly determined by the internal resource of the
firm.

I. INTRODUCTION

The market-based view (MBV) and the resource-based view (RBV) are two competing theories of firms' value-generating strategy. The market-based view argues that firms are able to generate higher value through competitive market advantages such as monopoly, barriers to entry, and bargaining power (Grant, 1991). The resource-based view argues that firms
can increase value through tangible and intangible assets that facilitate strategies enhancing
efficiency and effectiveness (Barney, 1991).

Empirical studies on industrialized countries usually find that both the firm’s market
position and competitive capacity are important in affecting firm performance. Because these
two sets of factors are entangled with each other, it is hard to distinguish which strategy plays a
more important role (Powell, 1996; McGahan & Porter, 1997). Some empirical studies on
emerging economies, in contrast, find evidence supporting resource-based view of firm
(Makhija, 2003), suggesting that a firm's unsubstitutable resources are the major determinants
of firm value in a state of flux.

Studies on institutional transformation in emerging economies, however, cast some doubt on the evidence favoring RBV. Emerging economies usually experience a transition from a state-controlled economy to a market-oriented economy, but the economy is
usually still highly regulated during the transition. For example, Child and Lu (1996) indicate
that the reform of large state-owned enterprises in China was very slow because of material,
relational, and cultural constraints. Suhomlinova (1999) also notes that government
institutions influenced enterprise reform in Russia. Because the transition process unfolds
gradually over time, MBV proponents thus argue that market power, such as a privileged market position, provides a valuable source of competitive advantage in transition markets. Even though the speed and nature of institutional change have an important influence on enterprise strategies, the empirical evidence on the role of institutions in transition economies is limited (Hoskisson et
al. 2000; Djankov & Murrell 2002). Because institutional factors have many dimensions and
each can change differently during economic transition, it is hard to capture the institutional
factors through appropriate measurement in time series analysis. Because of the complexity of
the institutional environment in transition economies, it is expected that different strategies
would be applied by enterprises. Previous studies of strategies in emerging markets usually carried out cross-sectional analyses, in which the institutional framework is considered the same for all enterprises, thus ignoring institutional influences on enterprise strategies. Publicly traded companies in China provide an ideal case for incorporating institutional impacts into the study of enterprise strategies. In such a large-scale transition economy, the government exercised different levels of control over different industries.

Market-based and resource-based variables are thus expected to play different roles in different institutional environments. The industry-average government ownership stake is used to distinguish the institutional impact. A smaller government stake indicates that the
industry is exposed to a lower level of government control. During China’s transition, these
industries were usually restructured earlier than industries that involve essential natural
resources or play a key role of national strategies. Market pressure is the major motive for the
operation of these firms, and such resource-based variables as innovation are expected to be
important in affecting firm value. In the other group of industries, in which the government still holds a big stake because of their importance to national strategy, larger firms and firms with a higher level of government control would have an advantage over smaller firms and firms with less close relationships with the government, because it is much easier for them to obtain government support.

II. METHODOLOGY AND RESEARCH DESIGN

The paper follows Makhija's (2003) effort to distinguish the MBV and RBV of the firm, applied to the Czech economic transition. Our data cover most of the companies publicly traded on the Shanghai Stock Exchange and the Shenzhen Stock Exchange in China from 2001 to 2003. We selected industries with at least 18 companies listed on the exchanges, since a small number of companies within an industry may create bias in the market-share and number-of-firms measures. The final data include 969 companies that have been publicly traded since 2001. We calculated the average percentage of government stakes for each industry and then divided the sample into three groups, of which we selected two for comparison. The first group, with less government control, includes 316 companies; the second group, with more government control, includes 312 companies.

We outline the hypothesized relationship via linear regression methodology.


Firm valuation = β0 + β1 Firm Size + β2 Variability of profitability + β3 Leverage
+ β4 Profitability + β5 Market Share + β6 No. of Rivals + β7 Management Ownership
+ β8 Foreign Ownership + β9 Government Ownership
+ β10 Share held by the largest share holder + β11 Management Efficiency

MBV predicts that Firm Size, Profitability, and Market Share are positively related to
firm value, while Variability of profitability, Leverage, and No. of Rivals are negatively
related to firm value. On the other hand, RBV predicts that Variability of profitability,
Leverage, Management Ownership, Foreign Ownership, Government Ownership, Share held by the largest share holder, and Management Efficiency are positively related to firm value,
while Firm Size is negatively related to firm value.
Since we attempt to examine the impact of resource-based variables and market based
variables on the market valuation of the firms, we use Tobin’s q as the measure of the listed
companies’ value. We used the data from 2003 to calculate all related variables. Tobin’s q in
our study is calculated by the following formula:
Tobin’s q = (ASP × NS + DEBT)/ ASSETS,
where ASP is the average stock price for 2003, NS is the number of shares issued, DEBT is the
total debt for 2003, and ASSETS is the total assets for 2003.
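The computation itself is direct; a small sketch with made-up inputs (none of the figures come from the paper's data):

```python
def tobins_q(asp, ns, debt, assets):
    """Tobin's q = (ASP x NS + DEBT) / ASSETS, as defined above."""
    return (asp * ns + debt) / assets

# Hypothetical firm: average 2003 price 8.50, 100M shares outstanding,
# 400M total debt, 1.5B total assets.
q = tobins_q(8.50, 100_000_000, 400_000_000, 1_500_000_000)
print(round(q, 3))  # (850M + 400M) / 1,500M = 0.833
```

A q below 1 here means the hypothetical firm's market value (equity plus debt) sits below the book value of its assets.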

Independent variables in the model are measured as follows:


SIZE (firm size) = Log of total assets for 2003
VAR (variance of profitability) = (maximal profit from 2001 to 2003 - minimal profit from 2001 to 2003) / average profit for 2001, 2002, 2003
LEV (debt leverage) = percentage of total liabilities to total assets in 2003, %
ROA (profitability) = return on assets for 2003, profit/total assets
MKTSH (market share) = percentage of sales of the individual firm relative to total sales of
all firms in its industry in 2003
NRIVAL (number of rivals) = Log of the number of firms in the industry of the firm in 2003
MOWN (management ownership stake) = Log of stock not in the market
FOR (foreign ownership stake)= Log of foreign ownership when there is nonzero
ownership; otherwise zero
GOV (government ownership stake) = Log of government ownership when there is non zero
government ownership; otherwise zero
LOWN (ownership stake by largest shareholder) = percentage of shares held by the largest shareholder
MNGEFF (management efficiency) = total revenue / total assets for 2003
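Given these variables, the full model is an ordinary least-squares fit of Tobin's q on the eleven regressors. The sketch below condenses this to a handful of synthetic columns; the coefficient values and data are invented for illustration (note the negative firm-size sign that the RBV predicts), not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 316  # group-1 sample size reported in the text

# Synthetic stand-ins for a few of the regressors defined above.
SIZE = np.log(rng.uniform(1e8, 1e10, n))    # log total assets
LEV = rng.uniform(10, 80, n)                # liabilities / assets, %
ROA = rng.normal(0.05, 0.02, n)             # profitability
MOWN = np.log(rng.uniform(1e6, 1e8, n))     # log non-traded shares

# Assumed "true" coefficients (illustrative only; SIZE is negative,
# as the resource-based view predicts).
beta = np.array([5.0, -0.15, 0.002, 0.8, 0.05])
X = np.column_stack([np.ones(n), SIZE, LEV, ROA, MOWN])
q = X @ beta + rng.normal(0, 0.05, n)       # Tobin's q plus noise

beta_hat, *_ = np.linalg.lstsq(X, q, rcond=None)
resid = q - X @ beta_hat
r2 = 1 - resid @ resid / np.sum((q - q.mean()) ** 2)
print(beta_hat.round(3), round(r2, 2))
```

The estimated coefficients recover the assumed values, and the R² plays the same role as the model fit statistics reported in Tables I-III.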

III. RESULTS AND DISCUSSION

Table I shows the results for the full model using both MBV and RBV variables. In both models, the coefficient of ROA is positive, as predicted by MBV, but it is not significant. This result is similar to the results found in Makhija's study (2003). Market share is supposed to be positively related to the firm's value; however, for both groups, it is negatively related. Two reasons probably contribute to this result. First, we calculated market share based only on publicly traded companies; for some industries there may still be many non-public companies, weakening the explanatory power of the test. Second, the companies with bigger market shares are those privatized from state-owned enterprises (SOEs), and they may not work as efficiently as companies started from scratch.

Three variables are common to both the market-based and resource-based views. Although the market-based view predicts a positive sign for firm size, its coefficient is negative and significant at the 1% level. Although the market-based view predicts that firm leverage is negatively related to firm value, the coefficient on leverage is positive but statistically insignificant. And while the variance of profitability is expected to be negative under the market-based view, its coefficient is negative but insignificant for both groups.

Table I Testing the Full Model Using Both MBV and RBV Variables
In comparison, the resource-based variables predict the firm's value better.

We found that firm size, firm leverage, management ownership, and management efficiency carry the predicted signs; firm size, management ownership, and the share held by the largest shareholder are significant predictors. However, government ownership and the share held by the largest shareholder have signs opposite to those predicted by the resource-based variables. The sign of the firm-size coefficient, consistent with the resource-based view, indicates that larger firms, with larger bureaucracies, are less responsive. In China, public firms may suffer from agency problems, and newly started small firms may be more efficient, so their value may be higher. The signs of foreign ownership differ between the two groups. For companies in which the government has small stakes, the sign is opposite to that expected by the resource-based view, which predicts that foreign ownership brings entrepreneurial skills and new knowledge, so firm value should be positively related to it. However, the Chinese stock markets are still restricted for foreign investors: only 2.9% of the companies have any foreign ownership. This may explain the non-significant effect of foreign ownership on firm value. Government ownership has a negative impact on firm value for both groups. This contradicts the resource-based view but is consistent with the traditional view that government involvement harms a firm's efficiency and effectiveness.

Table II Testing for MBV Variables Alone

Table II contains the results for the MBV model only. Each R² is less than the corresponding R² for the full model. The signs for the two groups are not exactly the same as
estimated in the full model. For group 1, there are four variables with signs opposite to what
MBV predicts. For group 2, there are three variables with signs opposite to what MBV predicts: firm size, firm leverage, and number of rivals. It seems that
MBV can predict the firm’s value better for companies in which the government has a big
stake. Table III contains the estimations from the RBV model only for the two groups. The
coefficients estimated from RBV models are considerably higher than coefficients estimated
from MBV models for both groups. For both groups, the signs of coefficients estimated by the
RBV are consistent with those estimated by the full models.

Table III Testing for RBV Variables Alone

We also conducted F tests to examine the contributions of the MBV and RBV
variables to the full model of both groups. By testing the MBV variables, the full model was
first estimated, and then we tested the constraint that the coefficients of the variables relating
to profitability (ROA), variance of profitability (VAR), firm size (SIZE), firm leverage
(LEV), market share (MKTSH), and number of rivals in the industry (NRIVAL) are all zero.
This hypothesis is rejected for both groups with an estimated F (6, 304) = 3.6, and a p-value =
0.0018 for group 1 and F (6, 300) = 6.56, and a p-value <.001 for group 2. Since SIZE, VAR
and LEV are also RBV variables, the test is repeated with only coefficients of ROA, MKTSH,
and NRIVAL hypothesized to be zero. This hypothesis can’t be rejected for both groups with
estimated F (3, 304) = 0.45, and p-value = 0.98 for group 1 and F (3, 300) = 0.26, and p-value
= .85 for group 2. Thus, inclusion of ROA, MKTSH, and NRIVAL does not contribute to the
full model. Inclusion of SIZE, VAR, and LEV does contribute to the full model. However, we
are not sure that inclusion of these three variables is attributed to the MBV or RBV. The F
tests were conducted for the RBV variables in a similar way. When all variables from RBV
are constrained to be zero, the hypothesis is rejected for both groups with F = 3.55, and p-
value < .001 for group 1 and F = 6.6 and p-value < .001 for group 2. When only MOWN, FOR, GOV, LOWN, and MNGEFF are constrained to be zero, the hypothesis is rejected again for
both groups with F = 3.28 and p-value = .0067 for group 1 and F = 8.79 and p-value <.001 for
group 2. Thus, the RBV variables make significant contributions to the full model whether
one includes variables unique to RBV or those common to both theories.
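The restriction tests above are standard nested-model F tests: estimate the full and restricted models and compare their residual sums of squares, F = [(RSS_r - RSS_f)/q] / [RSS_f/(n - k)]. A sketch on synthetic data (everything here is invented; only the test mechanics follow the text):

```python
import numpy as np
from scipy import stats

def nested_f_test(y, X_full, X_restr):
    """F test that the regressors dropped from X_full are jointly zero."""
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return r @ r
    rss_r, rss_f = rss(X_restr), rss(X_full)
    q = X_full.shape[1] - X_restr.shape[1]   # number of restrictions
    dof = len(y) - X_full.shape[1]           # residual df, full model
    F = (rss_r - rss_f) / q / (rss_f / dof)
    return F, stats.f.sf(F, q, dof)          # statistic and p-value

rng = np.random.default_rng(2)
n = 310
x1, x2, x3 = rng.normal(size=(3, n))
y = 1.0 + 0.5 * x1 + 0.8 * x2 + rng.normal(0, 1, n)  # x3 is irrelevant

X_full = np.column_stack([np.ones(n), x1, x2, x3])
X_restr = np.column_stack([np.ones(n), x1])          # drops x2 and x3
F, p = nested_f_test(y, X_full, X_restr)
print(round(F, 1), p < 0.001)  # x2 matters, so the restriction is rejected
```

Since the dropped set contains a genuinely relevant regressor, the restriction is rejected, mirroring the rejections reported for the RBV variable sets above.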

IV. CONCLUSION

In this study, we examined the determinants of firm valuation from both the market-based and resource-based views. Since China's stock market is still new and its economy is in transition, resource-based variables seem to play a more important role in determining firm valuation than market-based variables, regardless of the government's stake in the firm. The firm's value is mainly determined by the size of the
firm, management ownership that is not traded publicly, and the share held by the largest
shareholders. Other variables are not significantly related to the firm’s valuation. The results
from this study indicate that China’s stock market is still in an immature stage. The market
valuation of the firm is mainly determined by the internal resource of the firm.

REFERENCES

Barney, J. B. “Firm Resources and Sustainable Competitive Advantage.” Journal of


Management, 17, (1), 1991, 99-120.
Caves, R. E., and Porter, M. E., “Market Structure, Oligopoly, and Stability of Market
Shares.” Journal of Industrial Economics, 29, (1), 1978, 289–313.
Child, J., and Lu, Y., “Institutional Constraints on Economic Reform: The Case of Investment
Decisions in China.” Organization Science, 7, 1996, 60-67.
Djankov, S., and Murrell, P., “Enterprise Restructuring in Transition: A Quantitative Survey.”
Journal of Economic Literature, 40, (3), 2002, 739-792.
Grant, R. M., “A Resource-Based Perspective of Competitive Advantage.” California
Management Review, 33, 1991, 114–135.
Hoskisson, R., Eden, L., Lau, C., and Wright, M., “Strategy in Emerging Economies.”
Academy of Management Journal, 43, (3), 2000, 249-267.
Makhija, M., “Comparing the Resource-Based and Market-Based Views of the Firm:
Empirical Evidence from Czech Privatization.” Strategic Management Journal, 24, (5),
2003, 433-451.
McGahan, A. M., and Porter, M., “How Much Does Industry Matter, Really?” Strategic
Management Journal, 18, (Summer Special), 1997, 15-30.
Mehra, A., “Resource and Market Based Determinants of Performance in the U.S. Banking
Industry.” Strategic Management Journal, 17, (4), 1996, 307–322.
Powell, T. C., “How Much Does Industry Matter? An Alternative Empirical Test.” Strategic
Management Journal, 17, (4), 1996, 323-334.
Suhomlinova, O., “Constructive Destruction: Transformation of Russian State-Owned Construction Enterprises During Market Transition.” Organization Studies, 20, 1999, 451-484.

CONSUMER ETHNOCENTRISM AND
EVALUATION OF INTERNATIONAL AIRLINES

Edward R. Bruning, University of Manitoba


ebrunng@ms.umanitoba.ca

Annie Peng Cui, Kent State University


pcui@bsa3.kent.edu

Andrew W. Hao, Kent State University


whao@bsa3.kent.edu

ABSTRACT

This study investigates the impact of consumer ethnocentrism on product evaluation and the impact of national identity and economic well-being as antecedents to consumer ethnocentrism in the USA, Canada, and Mexico. The findings indicate that consumer
ethnocentrism has a positive impact on the evaluation of domestic products and a negative
impact on the evaluation of foreign goods. In general, consumers from these three countries
tend to evaluate domestic products higher than foreign products. Additionally, the results also
imply that national identity and economic well-being have a positive impact on consumer
ethnocentrism.

I. INTRODUCTION

Globalization provides considerable challenges and opportunities for international marketers. International trade also presents consumers with more foreign product options than
ever before. Consequently, their attitudes toward foreign and domestic products have been of
interest to international marketers and consumer behavior researchers.

Shimp and Sharma (1987) first introduced the concept of consumer ethnocentrism: the beliefs held by consumers about the appropriateness, indeed morality, of purchasing foreign-made products. Consumer ethnocentrism may therefore be viewed as a way to differentiate
between consumer groups who prefer domestic to foreign products (Shimp and Sharma,
1987). These consumer ethnocentric tendencies may lead to negative attitudes towards foreign
products.
This paper begins with a brief review of the literature on ethnocentrism and consumer ethnocentrism. A number of hypotheses are then proposed. The research design used to test the hypotheses, the results of the study, and a discussion follow.

II. ETHNOCENTRISM

According to Sumner (1906), ethnocentrism means a differentiation between the in-group and the out-group. Matsumoto (2000) views ethnocentrism as the tendency to view the
world through cultural filters, and Worchel and Simpson (1993) propose that ethnocentrism
indicates the response by a group that arises from perceived external threat. Elchardus et al.
(2000) regard ethnocentrism as a negative attitude towards out-groups and migrants. Rothbart (1993) suggests a perhaps more practical point of view, arguing that ethnocentrism can be defined as the tendency to view the in-group as the standard against which other groups are judged.

III. CONSUMER ETHNOCENTRISM

According to Shimp and Sharma (1987), consumer ethnocentrism results from the love
and concern for one’s own country and the fear of losing control of one’s economic interests
as the result of the harmful effects that imports may bring to one’s countrymen. Ethnocentric
consumers prefer domestic goods either because they believe that products from their own country are the best (Klein et al., 1998), or because moral concerns lead them to purchase domestic products even though the quality is lower than that of imports (Wall and Heslop, 1986). Consumer ethnocentrism may play a significant role when people believe that
their personal or national well-being is under threat from imports (Sharma et al., 1995; Shimp
and Sharma, 1987). The more importance a consumer places on whether or not a product is
made in his/her home country, the higher his/her ethnocentric tendency (Huddleston et al.,
2001). Research from the US and other developed countries generally supports that highly
ethnocentric consumers overestimate domestic products, underestimate imports, and feel a
moral obligation to buy domestic merchandise (Netemeyer et al., 1991; Sharma et al., 1995;
Shimp and Sharma, 1987).

Shimp and Sharma (1987) developed the CETSCALE to measure consumer ethnocentrism. The scale has 17 items, each measured on a 7-point Likert scale. Its validity was first tested by Shimp and Sharma in the US and later tested across nations (the US, France, Japan, and West Germany) by Netemeyer et al. (1991).

IV. NATIONAL IDENTITY AND NATIONAL WELL-BEING

According to Tajfel (1978), social identity is defined as the part of an individual’s self-
concept that derives from his knowledge of his membership in a social group together with the
value and emotional significance attached to that membership. A fundamental prediction of
social identity theory is that discriminatory behavior is related to an individual’s degree of in-
group identification (Tajfel, 1978). Social identity is a core ingredient of ethnocentrism
because it seeks to enhance self-esteem. The categorization and enhancement associated with
social identity supports ethnocentrism, i.e. in-group favoritism and out-group discrimination
(Perreault and Bourhis, 1999). Lantz and Loeb (1999) extend this effect to the notion of economic well-being and suggest that ethnocentrism is related to the socioeconomic well-being of the group.

V. CONCEPTUAL FRAMEWORK AND HYPOTHESES

We propose that national identity and economic well-being are two key antecedents to
consumer ethnocentrism. The consequences of consumer ethnocentrism are reflected in the evaluation of domestic and foreign-made products. The following hypotheses emerge from the
literature:
H1: There is a positive relation between national identity and consumer ethnocentrism.
H2: There is a positive relation between people’s interest in national economic well-being and
consumer ethnocentrism.
H3: There is a positive impact of consumer ethnocentrism on the evaluation of domestic
products.
H4: There is a negative impact of consumer ethnocentrism on the evaluation of foreign
products both from socially close countries and socially distant countries.

VI. RESEARCH DESIGN

Data were collected from 4,975 air travelers at airports in 18 cities across three countries (the USA, Canada, and Mexico) through personal interviews. Respondents were asked to complete a structured questionnaire consisting of multiple items designed to operationalize our conceptual framework. The total sample of 4,975 comprised 2,393 respondents in the USA, 1,996 in Canada, and 586 in Mexico. Participants were asked to evaluate airline services from their home country, from socially close countries, and from socially distant countries. Established scales were used to measure national identity, economic well-being, and consumer ethnocentrism.

VII. DATA ANALYSIS

To test the reliability of the measurement scales used in this study, we computed
Cronbach's alpha of each scale across the samples. The highest Cronbach's alpha is .96 and
the lowest is .67. The CETSCALE’s reliability of .88 is comparable to the reliabilities
reported by Shimp and Sharma (1987). Generally speaking, the Cronbach's alphas of the
measurement scales are acceptable.
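Cronbach's alpha for a k-item scale is alpha = k/(k-1) x (1 - sum of item variances / variance of the summed score). A sketch on synthetic Likert responses (invented data, not the survey responses used in this study):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Five 7-point items driven by one latent trait, so responses are
# internally consistent and alpha should come out high.
rng = np.random.default_rng(3)
trait = rng.normal(4, 1, size=(200, 1))
items = np.clip(np.round(trait + rng.normal(0, 0.7, size=(200, 5))), 1, 7)

alpha = cronbach_alpha(items)
print(round(alpha, 2))  # high internal consistency
```

Items sharing a common latent driver produce an alpha near the upper end of the .67 to .96 range reported above; uncorrelated items would push it toward zero.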

Hypotheses 1 and 2 concern the impact of national identity (NI) on consumer ethnocentrism (CET) and the influence of people's interest in national economic well-being (NW) on consumer ethnocentrism. To test these hypotheses, we conducted linear regression analyses of (1) consumer ethnocentrism as a function of people's perception of their national identity, and (2) consumer ethnocentrism as a function of people's interest in national economic well-being. For the total sample, the results of the regression analyses are shown in Table I. For the equation CET = f(NI), the model explained 9% of the variance in consumer ethnocentrism. The result shows that people's perception of their national identity has a significant, positive effect on consumer ethnocentrism (β = 0.299), indicating that the more people are concerned with their national identity, the more ethnocentric they are when making purchases; H1 is supported. The results also support H2, showing a positive impact of people's interest in national economic well-being on consumer ethnocentrism (β = 0.320). We also conducted the analysis separately on each country's data. The results suggest that H1 and H2 are supported in each individual country, the U.S., Canada, and Mexico (see Table II).
Table I Total Sample Results

               NI        NW        CET       PE-home   PE-close  PE-distant
Mean           5.18      5.27      4.03      6.04      4.81      4.82
SD             0.75      1.15      1.35      0.89      1.39      1.49

               CET=f(NI)  CET=f(NW)  PE-home=f(CET)  PE-close=f(CET)  PE-distant=f(CET)
Coefficient    +0.299     +0.320     +0.183          -0.086           -0.134
p value        0.000      0.000      0.000           0.000            0.000
R²             9%         10.2%      3.3%            0.7%             20%
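Each entry in the regression portion of the table comes from a univariate ordinary least-squares fit reporting a slope and an R². A sketch of those mechanics on synthetic data (the assumed slope of 0.3 and the score distributions are illustrative only, loosely echoing the NI descriptives above; this is not the study's data):

```python
import numpy as np

rng = np.random.default_rng(4)
ni = rng.normal(5.2, 0.75, 1000)                   # national identity scores
cet = 2.5 + 0.3 * ni + rng.normal(0, 1.3, 1000)    # ethnocentrism scores

slope, intercept = np.polyfit(ni, cet, 1)          # OLS line
r2 = np.corrcoef(ni, cet)[0, 1] ** 2               # variance explained

print(round(slope, 2), round(r2, 3))
```

A positive slope with a modest R², as in the CET = f(NI) column, means the antecedent shifts ethnocentrism reliably while leaving most individual variation unexplained.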

Hypothesis 3 argues that there is a positive impact of consumer ethnocentrism on the evaluation of domestic products (PE-home); hypothesis 4 states that there is a negative impact of consumer ethnocentrism on the evaluation of foreign products, both from socially close (PE-close) and socially distant countries (PE-distant). We ran regressions of product evaluation (PE) of home-country products as a function of consumer ethnocentrism. As expected, for the total sample (see Table I), as well as for each individual country (see Table II), consumer ethnocentrism is positively related to the evaluation of domestic products; in other words, consumers who score high on consumer ethnocentrism tend to give higher evaluations to home-country products. Hypothesis 4 is supported by the total-sample results (see Table I), showing that consumer ethnocentrism has a negative impact on the evaluation of foreign products, regardless of whether the products are from socially close or socially distant countries. The results from the U.S. and Canadian samples support H4 as well. The results from the Mexican sample show a slight deviation (see Table II): the negative impact of consumer ethnocentrism on product evaluation for socially distant countries is not significant, though it shows the same tendency (β = -0.041). Furthermore, the Mexican data reveal a tendency for consumer ethnocentrism to be positively related to product evaluation for socially close countries (β = +0.027), though the result is not significant. Therefore, H4 is supported by the results from the total sample and from the U.S. and Canada.

Table II Individual Country Results

U.S.           CET=f(NI)  CET=f(NW)  PE-home=f(CET)  PE-close=f(CET)  PE-distant=f(CET)
Coefficient    +0.352     +0.337     +0.215          -0.128           -0.207
p value        0.000      0.000      0.000           0.000            0.000
R²             12.4%      11.4%      4.6%            1.6%             4.3%

Canada         CET=f(NI)  CET=f(NW)  PE-home=f(CET)  PE-close=f(CET)  PE-distant=f(CET)
Coefficient    +0.198     +0.267     +0.145          -0.096           -0.134
p value        0.000      0.000      0.000           0.000            0.000
R²             3.9%       11.4%      2.1%            0.9%             18%

Mexico         CET=f(NI)  CET=f(NW)  PE-home=f(CET)  PE-close=f(CET)  PE-distant=f(CET)
Coefficient    +0.304     +0.356     +0.229          -0.027           -0.041
p value        0.000      0.000      0.000           0.554            0.360
R²             9.3%       7.1%       5.3%            0.1%             0.2%

VIII. CONCLUSION

First and foremost, results of this study confirm that consumer ethnocentrism does
have an impact on consumers’ evaluation of products. There is a positive impact of consumer
ethnocentrism on the evaluation of home-country products, and at the same time consumer
ethnocentrism has a negative impact on the evaluation of foreign products. Consumers who
scored high on consumer ethnocentrism tend to evaluate home-country products more
positively than foreign products. In other words, ethnocentric consumers prefer home-country
products to imports. This finding is consistent with the literature (Shimp and Sharma 1987;

Klein et al. 1998; Sharma et al. 1995; Bruning 1997). Furthermore, we extend the scope of
consumer ethnocentrism research to a global environment by studying consumers from the US,
Canada, and Mexico. Results from each country suggest that the positive impact of consumer
ethnocentrism on the evaluation of home-country products is prominent both in the US and
Canada, developed countries, and in Mexico, a developing country.

Second, our research suggests that consumers’ perception of their national identity and
their interest in national economic well-being can be viewed as antecedents to consumer
ethnocentrism. Consumer ethnocentrism is positively correlated with people’s perception of
their national identity. In addition, consumers’ interest in national economic well-being has a
positive impact on consumer ethnocentrism. Previous studies mainly focused on the impact of
national identity on in-group favoritism and out-group discrimination (Perreault and Bourhis
1999). Few researchers have examined the influence of both national identity and interest in
national economic well-being on consumer ethnocentrism. The result of our study helps to
establish the antecedent relationship between the constructs of national identity and interest in
national economic well-being, and consumer ethnocentrism.

Third, when we study the impact of consumer ethnocentrism on the evaluation of
foreign products, we find that the respective results from the U.S. and Canadian samples
support our hypothesis: consumer ethnocentrism has a negative impact on the evaluation of
foreign products. This is consistent with the total sample. The Mexican sample, while
otherwise quite similar to the total sample, deviates slightly. The negative impact of
consumer ethnocentrism on the evaluation of products from socially distant countries is not
significant, though the result indicates the same tendency. Furthermore, the Mexican data
also reveal a tendency for consumer ethnocentrism to be positively related to the evaluation
of products from socially close countries, though the result is not significant. Unlike the U.S. and Canada,
Mexico is a developing country. Consumers from developing countries tend to have higher
evaluation of imported products (Wang and Chen, 2004). This may help explain the
differences between Mexico and the other two countries as reflected in this study.

As with any study, this research has limitations. The relationship between consumer
ethnocentrism and product evaluation was tested with only one product. Future research can
explore this relationship with other products, e.g., convenience goods. Moreover, this
research only investigated the evaluation of home-country
products versus foreign products in general. We did not examine the product evaluation of a
specific foreign country. Perhaps, future research studies could investigate the relationship of
consumer ethnocentrism and product evaluation in the context of specific countries.

REFERENCES

Bruning, E. "Country of origin, national loyalty and product choice: the case of international
air travel", International Marketing Review., 14(1), 1997, 59-74.
Elchardus M., L. Huyse, and E. van Dael . Het Maatschappelijk Middenveld in Vlaanderen:
Een Onderzoek Naar de Sociale Constructie van Democratisch Burgerschap. Brussels:
VUB press, 2000.
Tajfel, Henri. Differentiation Between Social Groups. London: Academic Press, 1978.
Huddleston P., Good L. K., and Stoel L. "Consumer Ethnocentrism, Product Necessity and
Polish Consumers' Perceptions of Quality." International Journal of Retail &
Distribution Management., 29 (5), 2001, 236-46.

Klein, Jill Gabrielle, Richard Ettenson, and Marlene D. Morris. "The Animosity Model of
        Foreign Product Purchase: An Empirical Test in the People's Republic of China."
        Journal of Marketing., 62 (1), 1998, 89-100.
Lantz G. and Loeb S. "Country of Origin and Ethnocentrism: An Analysis of Canadian and
American Preferences Using Social Identity Theory." Advances in Consumer
Research., 23, 1996, 374-78.
LeVine R. and Campbell D. T. Ethnocentrism: Theories of Conflict, Ethnic Attitude and
Group Behavior. London: John Wiley, 1972.
Matsumoto, D. Culture & Psychology. London: Wadsworth, 2000.
Netemeyer R. G., Durvasula S., and Lichtenstein. "A Cross National Assessment of the
Reliability and Validity of the CETSCALE." Journal of Marketing Research., 28,
1991, 320-28.
Perreault S. and Bourhis R. "Ethnocentrism, Social Identification, and Discrimination."
Journal of Personality and Social Psychology Bulletin., 25 (1), 1999, 92-103.
Rothbart, M. Intergroup Perception and Social Conflict. Chicago: Nelson-Hall, 1993.
Sharma S., Shimp T., and Shin J. "Consumer Ethnocentrism: A Test of Antecedents and
        Moderators." Journal of the Academy of Marketing Science., 23 (1), 1995, 26-37.
Shimp T. and Sharma S. "Consumer Ethnocentrism: Construction and Validation of the
CETSCALE." Journal of Marketing Research., 24, 1987, 280-89.
Sumner, W.G. Folkways. New York: Ginn, 1906.
Wall M. and Heslop L. A. "Consumer Attitudes toward Canadian-made versus Imported
Products." Journal of the Academy of Marketing Science., 14 (Summer), 1986, 27-36.
Wang, Cheng Lu and Chen, Zhen Xiong. "Consumer ethnocentrism and willingness to buy
domestic products in a developing country setting: testing moderating effects." Journal
of Consumer Marketing., 21 (6), 2004, 391-400.
Worchel S. and Simpson J. A. Conflict between People & Groups: Causes, Processes, and
Resolutions. Chicago: Nelson-Hall Publishers, 1993.

THE VALUATION ABILITIES OF THE PRICE-EARNINGS-TO-GROWTH RATIO
AND ITS ASSOCIATION WITH EXECUTIVE COMPENSATION

Essam Elshafie, University of Texas at Brownsville


Essam.Elshafie@utb.edu

Pervaiz Alam, Kent State University


Palam@kent.edu

ABSTRACT

The objective of this study is to examine the valuation abilities of the price-earnings-
to-growth (PEG) ratio and its association with executive compensation. Financial analysts are
found to use price-multiple heuristics such as the PEG ratio, rather than residual income
valuation (RIV) models, to support their recommendations (Bradshaw, 2002). This study
measures the valuation abilities of the PEG ratio relative to a RIV model and extends prior
research on the PEG ratio to examine its association with executive compensation. The
results do not support the superiority of the RIV model over the model based on the PEG
ratio. However, the results provide support for the existence of a relationship between the
PEG ratio and executive compensation.

I. INTRODUCTION

The PEG ratio is defined as: PEG = (P/E) / LTG, where P/E is the forward price-to-
earnings ratio (i.e., price divided by forecasted earnings), and LTG is the analysts’ projection
of long-term annual earnings growth stated in percent (Bradshaw, 2004). Two surveys by
Block (1999) and Bradshaw (2002) indicate that financial analysts use models based on the
PEG ratio, rather than residual income valuation (RIV) models, to support their
recommendations. However, evidence on the association between the PEG ratio and firm
value is limited. The objective of this study is twofold: (1) to examine the valuation abilities
of the PEG ratio and (2) to investigate its association with executive compensation.
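The PEG definition above can be made concrete with a small helper; the inputs in the example are hypothetical numbers chosen only to illustrate the arithmetic.

```python
def peg_ratio(price, forecast_eps, ltg_pct):
    """PEG = (P/E) / LTG: forward price-to-earnings divided by the analysts'
    projection of long-term annual earnings growth, stated in percent."""
    forward_pe = price / forecast_eps
    return forward_pe / ltg_pct

# Hypothetical inputs: a $30 stock, $2.00 forecasted EPS, 15% long-term growth.
print(peg_ratio(30.0, 2.0, 15.0))  # forward P/E = 15, so PEG = 1.0
```

A PEG of 1.0 corresponds to the benchmark case in which the forward P/E equals the growth rate in percent.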

One motivation for examining the valuation abilities of the PEG ratio is that most of the
ratio-based prediction literature focuses on using financial ratios to predict
future earnings (e.g., Ou and Penman, 1989 a, b). Instead, this study extends the ratio-based
prediction literature by investigating the association between financial ratios and stock prices.
This study considers the association of PEG ratio with executive compensation for several
reasons, among which is that the PEG ratio provides an indicator for how the firm is being
valued by the market. The results of the regression and portfolio analyses do not provide
evidence supporting the hypothesis that the PEG ratio is superior as a valuation tool relative
to the RIV model. On the other hand, the results show that the association of compensation
with the PEG ratio is significant.

II. LITERATURE REVIEW

Ohlson and Juettner-Nauroth (2000) propose a model of forward P/E and earnings
growth in a valuation setting. They consider how next period earnings per share and earnings
per share growth relate to a firm’s current price per share. Easton (2004) indicates that
analysts still pervasively focus on forecasts of earnings and earnings growth rather than book
value and book value growth implicit in the RIV models.

The dividend-discounting model defines share price as the present value of expected
future dividends discounted at their risk-adjusted expected rate of return. Starting from a
dividend-discounting model, Ohlson (1995) formulates the RIV model to express firm
value as the sum of current book value and the discounted present value of expected abnormal
earnings. A large number of studies have examined the RIV model and provide evidence on its validity
(e.g., Bernard, 1995).

Early studies in the compensation area focused on documenting the relation between
CEO pay and company performance (e.g., Jensen and Murphy, 1990a). Jensen and Murphy
(1990 a, b) find a modest relation between CEO compensation and corporate performance.
Lambert (1993) indicates that since stock prices reflect many factors such as macroeconomic
shocks and changes in interest rates, managerial remuneration is often based on performance
measures such as accounting earnings, which reflect firm-specific changes in value that result
from managerial actions.

III. HYPOTHESES DEVELOPMENT AND RESEARCH DESIGN

Bradshaw (2004) shows that a valuation model based on the PEG ratio explains
analysts’ recommendations. We extend Bradshaw (2004) by testing whether a valuation
model based on the PEG ratio can explain firm value better than the traditional residual
income valuation model.
H1: Firm value is better explained by a model based on the PEG ratio than by the residual
income valuation model.
Due to the fact that the PEG ratio takes into consideration growth expectations which
are considered in executive compensation, that analysts support their recommendations with
the PEG model, and that analysts’ estimations affect market prices (Abarbanell and Bushee,
1997), we expect executive compensation to be associated with the PEG ratio. This
expectation is stated in the following hypothesis:
H2: Executive compensation is significantly associated with the PEG ratio.
To examine H1, we follow a method similar to that used by Bradshaw (2004). Two
values are estimated: one using the RIV model, and the other using a model based on the
PEG ratio. First, a value based on the following residual income model is
estimated:
V RI it = BVPS it + Σ(τ=1 to 3) E it[RI t+τ] / (1+r)^τ + E it[TV t+3] / (1+r)^3        (3-1)
where V RI it is the estimate of firm i's intrinsic value, BVPS it is book value per share of firm i at
time t. The second term is the present value of expected residual income, Eit [RIt+τ], over a
period of three years. (We used three years because of the small number of firms that have
forecasts on I/B/E/S beyond three year horizon.) The third term is the terminal value. We
use a two-year ahead earnings forecast and expected long term growth in earnings to estimate
the firm value based on the PEG ratio:
VPEG it = E it[EPS t+2] * LTG it * 100        (3-2)
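Equations (3-1) and (3-2) can be sketched in code. The helpers below are illustrative, not the authors' exact implementation: the clean-surplus updating and the perpetuity terminal value follow the expanded formula given in the notes to Table I, and `payout` stands in for the dividend payout ratio k.

```python
def v_riv(bvps, eps_forecasts, r, payout=0.0):
    """Sketch of equation (3-1): book value plus discounted residual income
    over a three-year horizon, plus a perpetuity terminal value based on the
    final year's residual income. `payout` is the dividend payout ratio k
    used in the clean-surplus book value updates."""
    value = bvps
    bv = bvps
    ri = 0.0
    for tau, eps in enumerate(eps_forecasts, start=1):
        ri = eps - r * bv                     # residual income in year t+tau
        value += ri / (1.0 + r) ** tau        # discounted residual income
        bv += eps * (1.0 - payout)            # clean-surplus update of book value
    horizon = len(eps_forecasts)
    value += (ri / r) / (1.0 + r) ** horizon  # terminal value as a perpetuity
    return value

def v_peg(eps_t2, ltg):
    """Equation (3-2): VPEG = E[EPS t+2] * LTG * 100, i.e. the price at which
    the PEG ratio equals one (LTG entered as a decimal growth rate)."""
    return eps_t2 * ltg * 100.0

# If forecasted EPS exactly earns the cost of equity on book value, residual
# income is zero and the RIV estimate collapses to book value.
print(round(v_riv(10.0, [1.0, 1.1, 1.21], 0.10), 4))  # 10.0
print(round(v_peg(2.0, 0.15), 2))                     # 30.0
```

The collapse-to-book-value case is a useful sanity check on any residual income implementation.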
To test which model has stronger valuation abilities, the following regressions are estimated:

P it = α + β1 VRI it + ε it                (3-3)
P it = α + β2 VPEG it + ε it               (3-4)
Ret it = α + β1 VRI it / P it-1 + ε it     (3-5)
Ret it = α + β2 VPEG it / P it-1 + ε it    (3-6)
The significance of the coefficients, the adjusted R², and the Vuong (1989) test are used to test the
difference between the adjusted explanatory powers of the models. In addition, a portfolio
analysis is conducted to compare the size adjusted abnormal return provided by a trading
strategy based on the PEG ratio versus that provided by a trading strategy based on the RIV
model.
To test for the association of compensation with the PEG ratio (H2), the following
model is estimated:

Δ ln Comp it = α + β1 Δ ln PEG it + β2 Δ ln E it + β3 ln Ret it + ε it        (3-7)

where Δ ln Comp it is the change in the natural log of compensation, Δ ln PEG it is
the change in the natural log of the PEG ratio, Δ ln E it is the change in the natural log of
earnings per share before extraordinary items, and ln Ret it is the natural log of stock returns.
The model is estimated twice: using data on the five highest paid executives, and using data on
CEOs only.
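The elasticity specification works with changes in natural logs; a small helper makes the variable construction concrete. The compensation figures below are hypothetical, not drawn from ExecuComp.

```python
import numpy as np

def log_changes(series):
    """Delta ln x_t = ln(x_t) - ln(x_{t-1}) for a strictly positive series."""
    x = np.asarray(series, dtype=float)
    return np.diff(np.log(x))

# Hypothetical cash compensation (salary plus bonus) over four fiscal years.
comp = [900_000.0, 990_000.0, 1_050_000.0, 1_200_000.0]
dln_comp = log_changes(comp)
print(np.round(dln_comp, 3))  # three year-over-year log changes
```

Because log changes approximate percentage changes, the slope on Δ ln PEG in such a regression reads directly as an elasticity.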

IV. RESULTS

The data were collected from Compustat, I/B/E/S, CRSP, and ExecuComp. Table 1
shows the results of estimating the price and return models.

Table I
Comparing the Residual Income Valuation Model and the PEG Ratio in Terms of Their
Association with Stock Prices and Returns

                     PRICE                     RETURN
                (3-3)       (3-4)        (3-5)       (3-6)
Variable        VRI         VPEG         VRI/P       VPEG/P
Intercept       27.85***    29.77***     0.15***     0.15***
                (33.42)     (37.77)      (9.03)      (9.75)
VRI it          0.54***                  0.011
                (19.31)                  (0.88)
VPEG it                     0.41***                  0.016***
                            (18.47)                  (2.58)
N               2,618       2,618        2,618       2,618
Adj. R²         12.45%      11.50%       -0.01%      0.22%
F-value         373.02      341.12       0.77        6.64
p-value         <0.0001     <0.0001      0.3803      0.01
Vuong's Z       3.76***                  1.71*

Models:
P it = α + β1 VRI it + ε it                (3-3)
P it = α + β2 VPEG it + ε it               (3-4)
Ret it = α + β1 VRI it / P it-1 + ε it     (3-5)
Ret it = α + β2 VPEG it / P it-1 + ε it    (3-6)

Where VRI it is the firm value estimated using a residual income valuation model, calculated as
follows: VRI it = BVPS it + (EPS it+1 − r * BVPS it) / (1+r) + (EPS it+2 − r * (BVPS it +
EPS it+1 * (1−k))) / (1+r)² + (EPS it+3 − r * (BVPS it + (EPS it+1 + EPS it+2) * (1−k))) / (1+r)³ +
(EPS it+3 − r * (BVPS it + (EPS it+1 + EPS it+2) * (1−k))) / (r * (1+r)³), where k is the dividend
payout ratio; VPEG it is the firm value estimated using the PEG ratio, calculated as follows:
VPEG it = E it[EPS t+2] * LTG it * 100; Ret it is stock returns;
and P it is the stock price at the end of the fiscal year.

Price models (3-3) and (3-4) are significant with a p-value of less than 0.0001. The
coefficients on both VRI it and VPEG it are significant. The adjusted R2 of (3-3), which is 12.45
%, is higher than the adjusted R² of (3-4), which is 11.50%. In addition, the significant
Vuong's Z-statistic of 3.76 indicates that the explanatory power of model (3-3) is higher
than that of (3-4), which leads to the rejection of H1. On the other hand, the association
between stock returns and VPEG it is stronger than their association with VRI it. In addition,
Vuong's Z-statistic is significant, which indicates the higher explanatory power of model
(3-6). These conflicting results do not provide support for H1. To further examine the valuation
ability of the two models, we conduct a portfolio analysis to see whether there is a difference
between the results of two trading strategies, where one strategy is based on using the PEG
ratio while the second strategy is based on the RIV model. Table 2 shows the results of
portfolios based on taking a short position in low VPEG/P and VRI/P ratios and taking a long
position in high VPEG/P and VRI/P ratios.

Table II
Portfolio Analysis: Measuring the Characteristics of Quintile Portfolios Formed by the
Residual Income Valuation-Based Value-to-Price (VRI / P) and the PEG-Based
Value-to-Price (VPEG / P)

                          VRI                               VPEG
               n     VRI/P     RET      AB_RET    n     VPEG/P    RET      AB_RET
LOWEST        521   -0.228     0.281    0.192    521    -1.162    0.173    0.122
 t-statistic        (-4.17)   (4.47)   (3.34)           (-5.77)  (3.22)   (2.06)
 Var.                1.562     2.064    1.579            21.14    1.499    1.54
2             526    0.280     0.109    0.320    526     0.358    0.106    0.022
 t-statistic        (98.40)   (4.89)   (1.53)           (86.56)  (4.49)   (0.95)
 Var.                0.004     0.261    0.222            0.009    0.297    0.26
3             528    0.469     0.143    0.070    528     0.576    0.135    0.060
 t-statistic       (130.28)   (6.74)   (3.13)          (102.59)  (7.20)   (3.28)
 Var.                0.006     0.237    0.235            0.0166   0.186    0.166
4             526    0.693     0.083    0.019    526     0.867    0.170    0.073
 t-statistic         (1.42)   (5.01)   (0.95)          (100.97)  (4.85)   (3.08)
 Var.                0.012     0.145    0.172            0.038    0.645    0.255
HIGHEST       523    1.672     0.180    0.097    523     2.018    0.210    0.145
 t-statistic        (21.03)   (7.71)   (3.90)           (20.56)  (7.80)   (4.93)
 Var.                3.307     0.284    0.279            4.990    0.380    0.391
Hedge                         -0.10    -0.10             0.04     0.02
 t-statistic                  -0.27    -0.23             0.07     0.05

Where VRI is firm value estimated using a residual income valuation model, VPEG is firm value
estimated using the PEG ratio, P is price at beginning of the period, RET is stock return over
the year from CRSP, AB_RET is size adjusted abnormal return from CRSP.

t-statistic = (x̄1 − x̄2) / sqrt(s1²/n1 + s2²/n2), where x̄1, x̄2; s1², s2²; and n1, n2 are the
mean values, the variances, and the numbers of observations in the highest and lowest
portfolios, respectively (Howell, D. C. 1997).
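The hedge-portfolio statistic above is the standard unequal-variance two-sample t-statistic; a direct transcription, assuming plain Python lists of portfolio returns:

```python
import math

def welch_t(x1, x2):
    """Unequal-variance two-sample t-statistic:
    t = (mean1 - mean2) / sqrt(s1^2/n1 + s2^2/n2)."""
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    v1 = sum((v - m1) ** 2 for v in x1) / (n1 - 1)  # sample variance of x1
    v2 = sum((v - m2) ** 2 for v in x2) / (n2 - 1)  # sample variance of x2
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# Identical samples give t = 0; a shifted first sample gives a positive statistic.
print(welch_t([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
```

Applying this to the highest and lowest quintiles yields the hedge t-statistics shown at the bottom of Table II.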

The portfolio analysis is carried out by constructing uni-dimensional portfolios based
on the ranking variables: VRI/P and VPEG/P. The analysis shows that while there is a positive
relationship between the VPEG/P and the stock returns and size adjusted abnormal returns, the
relation is neither monotonic nor significant (t-statistics = 0.07 and 0.05). On the other hand,
the relations between the VRI/P and stock returns and size adjusted abnormal returns are not
positive, not significant, and not monotonic (t-statistics = -0.27 and -0.23). The results of the
regression analysis and the portfolio analysis do not support H1 which predicts the superiority
of the PEG ratio’s valuation abilities relative to the RIV.
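The quintile construction behind the portfolio test can be sketched as follows. The ranking variable and returns here are randomly generated stand-ins, and the function is an illustrative simplification of the paper's size-adjusted design rather than its exact procedure.

```python
import numpy as np

def quintile_hedge(ratio, abn_ret):
    """Rank firms by a value-to-price ratio, split into quintiles, and return
    the hedge return: long the highest quintile, short the lowest."""
    ratio = np.asarray(ratio, dtype=float)
    abn_ret = np.asarray(abn_ret, dtype=float)
    order = np.argsort(ratio)               # ascending: lowest ratio first
    quintiles = np.array_split(order, 5)
    return abn_ret[quintiles[-1]].mean() - abn_ret[quintiles[0]].mean()

# With returns unrelated to the ranking variable, the hedge return is near zero,
# which is the pattern the paper reports for both ranking variables.
rng = np.random.default_rng(1)
vp = rng.normal(0.5, 0.3, size=1000)        # hypothetical V/P ratios
ab = rng.normal(0.0, 0.2, size=1000)        # hypothetical abnormal returns
print(round(quintile_hedge(vp, ab), 3))
```

An insignificant hedge return from such a sort is exactly what "no significant abnormal return from either trading strategy" means operationally.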

To test H2, data is collected on the five highest paid executives in all firms in the
ExecuComp database over the period from 1992 to 2002. According to H2, a positive and
significant relationship between the change in the PEG and the executive compensation
indicates that managers are compensated for keeping the firm highly valued by the market.
Table 3 shows the results of measuring the association of the executive compensation with the
PEG ratio.

Table III
Testing the Elasticity of Executive Compensation to the PEG Ratio
Δ ln Comp it = α + β1 Δ ln PEG it + β2 Δ ln E it + β3 ln Ret it + ε it        (3-7)

Full sample including the five highest paid employees

    n      Intercept   Δ ln PEG it   Δ ln Earnings it   ln Return it   Adj. R²   F-value   p-value
  7,035    0.19***     0.04***                                          0.75%     54.11    <0.0001
           (63.77)     (7.36)
  7,035    0.23***     0.04***       0.03***                            2.81%    102.73    <0.0001
           (51.03)     (7.46)        (12.26)
  7,035    0.14***     0.04***                          0.01***         1.22%     44.47    <0.0001
           (16.30)     (6.44)                           (5.88)
  7,035    0.18***     0.04***       0.03***            0.01***         3.28%     80.43    <0.0001
           (19.87)     (6.53)        (12.27)            (5.90)

Restricted sample including only the CEOs

    n      Intercept   Δ ln PEG it   Δ ln Earnings it   ln Return it   Adj. R²   F-value   p-value
  1,429    0.19***     0.03***                                          1.35%     20.5     <0.0001
           (29.00)     (4.53)
  1,429    0.24***     0.05***       0.04***                            4.48%     34.51    <0.0001
           (23.84)     (4.63)        (6.92)
  1,429    0.12***     0.05***                          0.02***         2.44%     18.88    <0.0001
           (5.98)      (3.99)                           (4.13)
  1,429    0.17***     0.05***       0.04***            0.02***         5.54%     28.96    <0.0001
           (8.31)      (4.09)        (6.92)             (4.13)

Where Comp is the cash compensation that equals the total of cash salary and cash bonus, the
PEG is the price-earnings-to-growth ratio calculated by dividing forward price earnings ratio,
which is stock price over the two-year ahead forecasted earnings, over earnings long term
expected growth, E is earnings per share before extraordinary items and discontinued
operations (Compustat item # A58), and Ret is the annual stock return. The ln Comp, ln
PEG, ln E, and ln Ret are the natural log of the cash compensation, natural log of the PEG
ratio, natural log of earnings, and the natural log of the stock return, respectively.

The results show that the coefficient on the change in the natural log of the PEG ratio is
significant at the 0.01 level. After controlling for earnings and stock returns, the coefficient on
the change in the PEG ratio remains significant. When restricting the analysis to data on the
firms’ CEOs, the results show that the elasticity of compensation to the changes in the PEG
ratio is significant, with and without controlling for earnings and stock returns. The
interpretation for this positive elasticity is that managers are compensated for having the firm
highly valued by the market. These results provide support for H2.

V. CONCLUSION

The results do not provide evidence of the superiority of the PEG ratio, as a valuation
tool, over the RIV model. While the association between firm values estimated using the RIV
model and stock prices is stronger than the association of firm values estimated using the PEG
ratio and stock prices, stock returns are more associated with firm values estimated
using the PEG ratio than with firm values estimated using the RIV model. In addition,
following a trading strategy based on either valuation tool does not provide significant
abnormal return. Finally, the results provide support for the existence of a relationship
between executive compensation and the PEG ratio. This means that managers are
compensated for keeping their firms highly valued by the market.

REFERENCES

Abarbanell, J. S., and B. J. Bushee. “Fundamental Analysis, Future Earnings, and Stock
Prices.” Journal of Accounting Research. 35, (1), 1997, 1-24.
Bernard, V. “The Feltham–Ohlson Framework: Implications For Empiricists.” Contemporary
Accounting Research., 11, (2), 1995, 733–747.
Block, S. B. “A Study of Financial Analysts: Practice and Theory.” Financial Analysts
Journal., 55, (4), 1999, 86-95.
Bradshaw, M. T. “The Use of Target Prices to Justify Sell-Side Analysts’ Stock
Recommendations.” Accounting Horizons., 16, (1), 2002, 27-41.
Bradshaw, M. T. “How Do Analysts Use Their Earnings Forecasts in Generating Stock
        Recommendations?” The Accounting Review., 79, (January), 2004, 25-50.
Easton, P. D. “PE Ratios, PEG Ratios, and Estimating the Implied Expected Rate of Return
on Equity Capital.” The Accounting Review., 79, (January), 2004, 73-96.
Feltham, G., and J. Ohlson. “Valuation and clean surplus accounting for operating and
financial activities.” Contemporary Accounting Research., 11, (2), 1995, 689–731.
Howell, D. C. Statistical Methods for Psychology. Fourth edition. Belmont, CA: Duxbury
Press, 1997.
Jensen, M., and K. J. Murphy. “CEO Incentives: It’s Not How Much, But How.” Harvard
        Business Review., 68, (3), 1990a, 138-153.
Jensen, M., and K. J. Murphy. “Performance Pay and Top Management Incentives.” Journal
        of Political Economy., 98, (2), 1990b, 225-264.
Lambert, R. A. “The Use of Accounting and Security Price Measures of Performance in
Managerial Compensation Contracts.” Journal of Accounting and Economics., 16, (1-
3), 1993, 55-101.

A SIMPLE NASH EQUILIBRIUM FROM “A BEAUTIFUL MIND”

G. Glenn Baigent, Long Island University – C. W. Post


Glenn.Baigent@liu.edu

ABSTRACT

This paper quantifies a scene from the movie about John Nash’s life, “A
Beautiful Mind.” In that scene, John Nash’s character envisions a special type of
equilibrium where it is better to cooperate with other market participants than to
compete. The popularity of this movie is seen as an opportunity to reveal the
importance of Nash’s work, and to topically cover the mathematical principles
involved. It is written in a form that is accessible to both undergraduate and graduate
students.

I. INTRODUCTION

Several years ago a television advertisement promoting education featured several
NBA stars including Michael Jordan. The panel of sports stars is shown watching a group of
children solve mathematical problems and making uproarious cheering sounds in the same
way that the children would cheer their favorite athletic heroes. Unfortunately, most of the
brilliant minds among us do not receive the recognition and admiration they so richly deserve.
Although a co-winner of the Nobel Prize in Economic Science for his research on game
theory, and widely recognized as a mathematician and economist, John Nash’s name was not
one known in many households outside of that realm. It is in his honor that this article is
written.

The Academy Award winning film, “A Beautiful Mind” is more a celebration of John
Nash’s life than of his work (although the movie takes great artistic liberties). To me, the
popularity of this movie and the myth of Nash’s special equilibrium opened the door to show
students the power of his work. Not often do these opportunities present themselves, so I took
a few moments to define and solve a problem I interpreted from the movie. It is my intention
to use this example as a pedagogical tool to show the decision framework, as well as the
underlying mathematical principles.

Before beginning the formal definition of the problem and its solution, I would be
remiss if I were to not mention the impact such a decision framework has on our everyday
lives. For many years, firms and individuals were considered to make decisions on wealth
allocations, levels of production, etc., in isolation, without regard for the actions that might be
taken by other decision makers or competitors. Nash’s work brought us closer to the reality of
decision-making where an individual would, at the very least, try to guess the strategy of his
competitor and make a decision based on that conjecture. Thus, decisions are not made in
solitude, but with rational thought about what another individual or firm might do. This is
true even for young men fighting over the affections of a young lady.

II. REVIEW OF NASH EQUILIBRIUM

Most textbooks cover “game theory” applications at some level. Varian (1992)
provides very nice coverage of this topic, and the following descriptions borrow liberally
from his text. Varian defines the following:

Nash Equilibrium
A Nash Equilibrium consists of probability beliefs (π r , π c ) over strategies “r” and
“c”, and probability of choosing strategies ( p r , p c ) such that,
(i) The beliefs are correct, p r = π r and p c = π c ; and
(ii) Each player is choosing p r and p c to maximize their expected utility.
An interesting special case of a Nash Equilibrium is one of “pure strategies” in which the
probability of choosing a particular strategy is 1.0.

Pure Strategies
A Nash Equilibrium in pure strategies is a pair (r*, c *) such that U r (r*, c *) ≥ U r (r , c *)
for all row strategies, “r,” and U c (r*, c *) ≥ U c (r*, c ) for all column strategies “c.”

III. TWO GUYS, ONE BLONDE, AND TWO BRUNETTES

In one of its most poignant moments, “A Beautiful Mind” interprets John Nash’s
vision through a pub scene where either real or imagined (it doesn’t matter) women enter the
pub. One is blonde and considered to be the most beautiful and desirable across social
standards. The others, brunettes, are sufficient in number to be paired with the young men in
the room without anyone being left alone. I cannot emphasize enough the probability that the
following could be construed as a misogynistic slant, but nothing could be further from the
truth. I am merely using a scene from a movie where the character has, indeed, questionable
behavior. That the women consider these gentlemen to be equally acceptable suitors, if at all
suitable, requires some heroic assumptions, but to advance the story, let’s allow the
assumptions to stand without question or debate. Nash’s character follows this line of
reasoning:
If we all compete for the blonde, we will block each other and no one will get
her. It will cost us a great deal of money and effort, and we will all end up
with nothing. At this result, we turn to the brunettes, but they will reject us
because no one wants to be second choice. Therefore, the initial decision to go
after the blonde causes an undesirable result in the first and second attempts.
The only way for everyone to “win” is to choose a strategy of co-operation
where we bypass the blonde and seek the brunettes.

Problem Definition
John and Robert are two young men in a pub who spot a group of attractive young
ladies - one blonde and two brunettes. If both John and Robert try to attract the blonde, they
will expend time and energy, not to mention a great deal of money. It is suggested that they
will block each other and neither will achieve the desired objective. A subsequent attempt to
attract the brunettes will be pointless because the brunettes will not be interested. After all,
no one wants to be the second choice. By both attempting to go after the blonde, John and Robert
will end up with utility -3.

If John chooses the blonde he will get 2 units of utility but Robert will get 0. Even
though Robert can get the brunette, he “loses face” because he didn’t compete for the blonde,
hence, there is zero utility. If, however, John and Robert forego the blonde they will both
succeed and get 1 unit of utility each. The payoffs for each strategy are shown in the table
below where the coefficient on the left hand side is the payoff to John, and on the right hand
side, the payoff to Robert:
                              Robert
                       Blonde (L)   Brunette (R)
John   Blonde (T)        -3, -3        2, 0
       Brunette (B)       0, 2         1, 1

John’s Maximization Problem


In a Nash Equilibrium, individuals maximize expected utility under constraints.
John’s problem and constraints are written as

Max E(U John) = pT [(−3) pL + (2) pR] + pB [(0) pL + (1) pR]        (1a)


Subject to

pT + pB = 1, pT ≥ 0, pB ≥ 0        (1b)

Writing (1a, b) as a constrained optimization problem, the Lagrangian is

L John = pT [(−3) pL + (2) pR] + pB [(0) pL + (1) pR] − λ1 (pT + pB − 1) − λ2 pT − λ3 pB        (2a)

Maximizing (2a) with respect to pT and pB gives the Kuhn-Tucker conditions for the
Lagrange multipliers. (Note: Kuhn and Tucker were colleagues of John Nash at Princeton.)

∂L John / ∂pT = 0 ⇒ 2 pR − 3 pL = λ1 + λ2        (2b)
∂L John / ∂pB = 0 ⇒ pR = λ1 + λ3        (2c)

From the complementary slackness conditions, λ2 = λ3 = 0 (see Appendix); thus, (2b) and (2c)
can be solved for pL = 1/4 and pR = 3/4. Returning these probabilities to (1a) gives an
expected utility of 3/4 for a mixed strategy.
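This solution can be checked with exact rational arithmetic: under the mix pL = 1/4, pR = 3/4, John is indifferent between his two pure strategies, each yielding expected utility 3/4. The payoff values below are taken from the table in Section III.

```python
from fractions import Fraction

# John's payoffs: rows T (blonde) and B (brunette); columns are Robert's
# strategies L (blonde) and R (brunette).
u_john = {('T', 'L'): -3, ('T', 'R'): 2, ('B', 'L'): 0, ('B', 'R'): 1}

pL, pR = Fraction(1, 4), Fraction(3, 4)   # Robert's mix solved from (2b)-(2c)

# John's expected utility from each pure strategy against Robert's mix.
eu_T = u_john[('T', 'L')] * pL + u_john[('T', 'R')] * pR
eu_B = u_john[('B', 'L')] * pL + u_john[('B', 'R')] * pR
print(eu_T, eu_B)   # both 3/4: John is indifferent, as mixing requires
```

Indifference between the pure strategies is exactly the condition that makes randomizing a best reply.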

Robert’s Maximization Problem

Similar to John, Robert’s decision is made to maximize his expected utility shown in (3a,
b)
Max E(U Robert) = pL [(−3) pT + (2) pB] + pR [(0) pT + (1) pB]        (3a)
Subject to

pL + pR = 1, pL ≥ 0, pR ≥ 0        (3b)

Writing (3a, b) as a constrained optimization problem, the Lagrangian is

L Robert = pL [(−3) pT + (2) pB] + pR [(0) pT + (1) pB] − λ1 (pL + pR − 1) − λ2 pL − λ3 pR        (4a)

Maximizing (4a) with respect to p L and p R

∂L Robert / ∂pL = 0 ⇒ 2 pB − 3 pT = λ1 + λ2        (4b)
∂L Robert / ∂pR = 0 ⇒ pB = λ1 + λ3        (4c)

Again, complementary slackness requires that λ2 = λ3 = 0, so that (4b) and (4c) can be solved
for pT = 1/4 and pB = 3/4. Returning these probabilities to (3a) gives an expected utility of
3/4 for a mixed strategy.

Comparison to Pure Strategies


If Robert plays “L” then John’s best strategy is to play “B” for a payoff of zero. If
Robert plays “R” then John’s best strategy is to play “T” for a payoff of 2 units of utility.
Similarly, if John plays “T” then Robert’s best strategy is to play “R” for a payoff of zero.
And, if John plays “B” then Robert’s best strategy is to play “L” for a payoff of 2 units of
utility. Given the probabilities computed above, the expected utility from the mixed strategy is
3/4. However, both players know that they can achieve a higher utility of 1 unit if they both
choose the brunettes. Thus, the pure strategies of "B" for John (brunette) and "R" for Robert
(brunette) are the optimal strategies.
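These best responses are easy to verify mechanically. The Python sketch below simply encodes the payoffs as stated in the text; the dictionaries and function names are our own illustration.

```python
# Payoffs as given in the text, indexed by (John's move, Robert's move).
john   = {("T", "L"): -3, ("T", "R"): 2, ("B", "L"): 0, ("B", "R"): 1}
robert = {("T", "L"): -3, ("B", "L"): 2, ("T", "R"): 0, ("B", "R"): 1}

def best_reply_john(robert_move):
    # John's payoff-maximizing move against a fixed move by Robert
    return max(["T", "B"], key=lambda m: john[(m, robert_move)])

def best_reply_robert(john_move):
    # Robert's payoff-maximizing move against a fixed move by John
    return max(["L", "R"], key=lambda m: robert[(john_move, m)])

print(best_reply_john("L"), best_reply_john("R"))      # B T
print(best_reply_robert("T"), best_reply_robert("B"))  # R L
print(john[("B", "R")], robert[("B", "R")])            # 1 1
```

The last line shows the payoff of 1 unit to each player when both choose the brunettes, versus the 3/4 expected utility of the mixed strategy.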
IV. CONCLUSION

Contrary to myth, blondes don’t always have more fun.


Appendix
As explained in Varian (1992), p. 503-504, the Kuhn-Tucker Theorem differs from the
Lagrange multiplier theorem because the Lagrange multipliers can take any sign, whereas the
Kuhn-Tucker Theorem requires that multipliers be non-negative (complementary slackness).
Since λ2 and λ3 are attached to the probability constraints, which are slack at the optimum
( pi > 0, ∀ i ), complementary slackness limits these multipliers to 0.

REFERENCES

Varian, Hal R., Microeconomic Analysis, 3rd Edition, Norton (1992).

ECONOMICS OF/AND LOVE:
AN ANALYSIS INTO DOWRY PRICING IN EAST AFRICA

Waithaka N. Iraki, Kentucky State University


w.iraki@kysu.edu

ABSTRACT

This paper addresses dowry (bride price) payment in East Africa. It addresses a
cultural issue from an economic perspective. It is assumed that dowry payment is driven by
economic fundamentals such as recessions and booms, income levels, exchange rates, and
location, and that it has intergenerational effects. By collecting and using data on what a
sample of men paid in dowry to their in-laws, the paper attempts to develop a robust dowry
pricing model. Other issues that are raised but left for future studies include the role of dowry
as an impediment to the social and cultural progress of women, the future of dowry payment,
and its role as a form of social security, akin to the US social security system.

I. INTRODUCTION

In 2001, The New York Times carried an intriguing story in which peasants in
Zimbabwe were responding to economic problems in a unique way. They demanded that the
dowry (bride price) be paid in US dollars instead of Zimbabwe dollars. The latter was less
stable. Another more perplexing and sad story came from the same country: "Patrick
Mupedzi (42) of Mupondi village in Masvingo allegedly beat to death his 25-year-old son
Zachariah Mupedzi following a 30-minute fist fight. The son had allegedly paid lobola
(dowry) to his in-laws with a cow belonging to the father without his consent" (Daily News,
May 10, 2001).

In the Indian Subcontinent it is the other way round: the bride's side pays dowry to the man.
Cases of bride burning, where the in-laws would not accept the dowry offered, were all too
common until the Indian Government stepped in. In Kenya, gender activists are up in arms
against the practice of dowry payment; they say it "commoditises" women and denies them
dignity.

Bride price (some people prefer to call it bride wealth or dowry) has been well researched
by anthropologists, sociologists, and other social scientists. Jomo Kenyatta (1938) wrote
extensively about marriage among the Kikuyus of Kenya. But economists, particularly in East
Africa, have paid scant attention to the issue. However, some economists, like Gary Becker,
have written extensively about marriage, divorce, and other "soft issues".

While there are studies that have addressed the issue of dowry, mostly in the Indian
subcontinent, few if any have addressed the issue in East Africa, where dowry payment is
common and where the man's side pays it, unlike in India, where the lady's side pays. Yet
each year millions of dollars change hands as daughters "change hands" and the purchasing
power of their fathers improves. Dowry payment is not just an anthropological or social issue;
it is also an economic issue, and should be addressed from that perspective. There are a
number of reasons why economists should pay attention to dowry payment.

Some economists argue that dowry payment belongs to the underground economy:
it is not taxed and is not represented in official statistics, so, like housework, it goes
uncounted and understates the measured GDP of a given country.

In dowry payment there is a market, with rules and regulations. For example, in most
communities you cannot pay the entire dowry at once; you pay it over a long period of time,
like a consol. If a man dies without clearing the dowry, his children take over! The bride and
the bridegroom are not involved in the dowry negotiation. But as Banerjee (1999) observes,
dowry disadvantages the marriage market; he cites the shortage of marriageable men.

Dowry payment can be seen as a form of social security (Dekker, 2002). Since dowry
is paid over the years, the girl's parents can always call upon the "boy" in case of any
financial problem. Dowry can also be seen from a social security point of view in which the
girl's parents see her as an investment that can be recouped for the rest of their lives. On the
contrary, the man's side sees the lady as an investment; she will contribute in terms of money,
in addition to children (the traditional perspective).

Dowry is often priced in currency, like dollars and shillings, and is thus affected by
economic conditions such as inflation. One may therefore ask whether fathers have been
getting fair "prices" for their daughters. Rao (1993) notes that in South Asia dowries have
risen and amount to 50% of household assets. However, Edlund (1997) suggests this rise
could be due to an increase in wealth, not a scarcity of men.

There is intermediation; the marrying parties never negotiate for themselves. Is there
an agency problem?
Dowry payment also affects a man's earning and investment potential. He is at a big
disadvantage because he often starts paying dowry when he has nothing; most people marry
just after school. Instead of using his money to invest, he uses it to pay dowry to his in-laws.
Dowry can therefore be seen as a competitor for scarce resources. Some people say dowry
creates dependency on the part of the woman's side. It may be that in some regions of the
world cohabitation has been used as an escape from the obligation of paying dowry; however,
Clarkberg (1999) found that cohabiters are likely to be experiencing economic problems.

How is the dowry priced? What factors do the negotiators consider in arriving at a
"fair" price? Though the amount of dowry was traditionally fixed, the negotiators have a lot
of latitude in deciding the amount. For example, in East Africa educated girls command a
premium, with some people speculating that this is "pricing them out of the marriage market."

II. RESEARCH PROBLEM

Despite centuries of Christianity in East Africa, globalization, and Western education,
dowry is still a predominant feature of most marriages in the region. Young men still
continue to pay dowry when they have hardly invested anything. Gender activists argue that
dowry predisposes women to spousal abuse, while others argue that it enhances marital
harmony and reduces the chances of divorce (Oriang, 2005). What is interesting about dowry
in East Africa is its co-existence with modernity: Ivy League educated men still pay dowry, in
ways no different from illiterate people. In addition, weddings that are indistinguishable from
any American wedding take place even as dowry is paid!

The churches are quiet on dowry, while the legal system recognizes dowry payment
as a legal marriage rite, equivalent to a marriage certificate or license.
Though dowry payment is an economic issue, few studies have looked at it from an economic
perspective in East Africa, while there are plenty of such studies in the Indian subcontinent,
where the woman's side pays dowry, the opposite of what happens in East Africa. This
paper will attempt to achieve the following objectives.

III. RESEARCH OBJECTIVES

• Identify the variables that determine the size of dowry in East Africa.
• Develop a dowry pricing model.
• Investigate how dowry has been an impediment to the progress of women in East Africa.
• Investigate the future of dowry in East Africa.
• Investigate how the agency problem is resolved in dowry negotiations.
• Make a contribution to cultural economics.

IV. METHODOLOGY

The value of dowry (bride price) = what is paid on negotiation day + the net present value of
future payments:

DT = D0 + Σt Dt/(1 + r)^t

where
Dt = dowry payment at time t
r = interest rate
The main factors that determine the value of dowry are the interest rate, r (and the exchange
rate), time t, and Dt, the stream of dowries paid later. These later payments are made at no
agreed time and are rarely standardized. Other determinants of D are the education of the man
and the lady, their ages, their net incomes, the ages of the parents, the economic situation
(recession versus boom), and the lady's position in the family: is she the only girl, a first born,
or a last born? The rate of interest, r, can be obtained from national statistics. The rest can be
obtained from a survey.
Therefore, D = f (r, t, age1, age2, age3, Econ, Position)
Since we can’t really get what will be paid in future, we can use what is paid at the beginning
D0 as the proxy of the dowry because in most cases it is a good predictor of what will be paid
in future and in most societies it is considered as the most important part of dowry. So our
main focus now remains getting a model that can predict the dowry payment.
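The valuation formula above (the down payment plus the discounted stream of later installments) can be sketched directly in Python. The payment amounts, timing, and 10% interest rate below are hypothetical illustrations, not figures from the study.

```python
# A minimal sketch of the dowry valuation: total dowry equals the amount paid
# on negotiation day plus the net present value of later installments.

def dowry_value(d0, later_payments, r):
    """d0: amount paid on negotiation day; later_payments: list of (t, amount)
    pairs for installments paid t years later; r: annual interest rate."""
    return d0 + sum(amt / (1 + r) ** t for t, amt in later_payments)

# e.g. 20,000 shillings up front, then 10,000 after 2 years and 5,000 after 5
total = dowry_value(20000, [(2, 10000), (5, 5000)], r=0.10)
print(round(total, 2))  # 31369.07
```

Discounting at a higher rate shrinks the value of the later installments, which is why D0, the up-front payment, dominates the total in practice.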

A simple regression model can do that, so that we have

D = β0 + β1Ed + β2Age1 + β3Age2 + β4Age3 + β5Econ + β6Income of Bridegroom
+ β7Location

where Ed = education of the wife; Age1 and Age2 are the ages of the wife and husband; Age3
is the average age of the parents when the daughter married; Location is a dummy variable
depending on the region of the country or ethnic group; and Econ refers to the economic
situation, recession or boom. Data will be collected from Kenya, Uganda and Tanzania. Each
country will provide 100 couples. This approach will be a variation of the method used by
Dekker (2002). The couples will be of varying ages, i.e. the study will be cross-sectional.
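As an illustration of the estimation step, here is a hedged single-regressor sketch in pure Python: an ordinary least squares fit of dowry (in US $) on the husband's age, using the ten observations reported below in Table I. The full model would include all seven regressors and be estimated with standard statistical software; this version only shows the mechanics.

```python
# Single-regressor OLS fit of dowry (US $) on husband's age, using the ten
# observations from Table I. Illustrative only: the paper's model has seven
# regressors.

age   = [28, 34, 35, 35, 36, 45, 45, 55, 65, 85]
dowry = [480, 200, 986.67, 746.67, 1333.33, 461.33, 1826.67, 720, 400, 533.33]

def ols(x, y):
    """Return (intercept, slope) minimizing the sum of squared residuals."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sxy / sxx
    return ybar - slope * xbar, slope

b0, b1 = ols(age, dowry)
print(round(b0, 1), round(b1, 2))  # slope is slightly negative
```

The fitted slope is slightly negative, in line with the paper's observation that younger husbands in the sample appear to be paying somewhat higher dowries.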

V. PRELIMINARY RESULTS

Table I  Age Of The Husband And Dowry Paid

_________________________________________________________
Husband's Age     Dowry (Shillings)     Dowry (US $)

28                   36,000                480.00
34                   15,000                200.00
35                   74,000                986.67
35                   56,000                746.67
36                  100,000              1,333.33
45                   34,600                461.33
45                  137,000              1,826.67
55                   54,000                720.00
65                   30,000                400.00
85                   40,000                533.33
_________________________________________________________
Note: 1 US $ = about 75 Kenya Shillings

Figure I  Dowry And Age Of The Husband

[Scatter plot, "Dowry in East Africa": dowry in US $ (0 to 2000, vertical axis) plotted
against the age of the husband (0 to 100, horizontal axis).]
It appears that young people are paying higher dowries, but the exchange rate has not
been factored in. The Kenyan shilling may have been stronger in the past: in 1977 it was about
7 KSh to the US dollar, whereas now it is 75 KSh to the US dollar. These data were collected
in 2004-2005.
To achieve the other objectives, a questionnaire will be designed and the couples and other
opinion makers interviewed. Actual dowry negotiations will be observed to gain deeper
insights into the negotiation process. Further, focus groups will be interviewed in both rural
and urban areas to get a modern perspective on this phenomenon. Interestingly, dowry
negotiations do not vary much between urban and rural areas, but the attitude towards them
does. Note: this is an ongoing study, and more data will be collected and analyzed.

REFERENCES

Anderson, S. “The Economics of Dowry Payment in Pakistan,” Tilburg University Center for
Economic Research, 2000.
Banerjee, K. “Gender Stratification and the Contemporary Marriage Market in India,” Journal
of Family Issues, September 1999.
Bell, D and Song, S. “Characteristics of Bride Wealth Under Restricted Exchange” Journal of
Quantitative Anthropology 2: 1990.
Borroah,V. “Does Employment Make Men less Marriageable?” Applied Economics, 34,
London, August 15, 2002: 1571-1582.
Clarkberg, M. “The Price of Partnering: The role of Economic Well Being in Young Adults’
First Union Experience,” Social Forces 77 (3), 1999: 945-968.
Coles, M and Burdett, K. “Marriage and Class,” The Quarterly Journal of Economics 112 (1),
February 1997: 141-168.
Dekker, M., “Bride Wealth and Household Security in Rural Zimbabwe, A Paper presented at
Cambridge University, March 2002.
Edlund, E. “Dowry Inflation: A Comment,” A Working Paper Series in Economics and
Finance No.193, September, 1997, Stockholm School of Economics.
Foster, J. “Is Medical Practice a Marriage Breaker?” Medical Economics 75 (7), April 13,
1998: pp 14.
Iraki, X.N. “Economics of Dowry and the Question of Taxing it”, The East African Standard,
August 3, 2005: Nairobi, Kenya.
Iraki, X.N. “Should Bride Price Be on Market Terms or do we Abolish it?” The People, June
3, 2002: Nairobi, Kenya.
Loscocco, K and Spitze, G. “Women’s Position in the Household,” Quarterly Journal of
Economics and Finance, 39, 1999: 647-661.
Oriang, L. “Ladies, no Surrender, no Retreat” The Daily Nation, March 11, 2005: Nairobi,
Kenya.
Rao, V. “The Rising Price of Husbands: A Hedonic Analysis of Dowry Increase in Rural
India,” University of Chicago Press, 1993.
Rowtorn, R. “Marriage and Trust, Some Lessons From Economics,” Cambridge Journal of
Economics 23 (5), September, 1999: London: 661-691.
Taneka, M. “The Economics of Marriage,” Japan Echo, 1996.
Tertilt, M. “The Economics of Bride Price and Dowry: A Marriage Market Analysis,”
University of Minnesota, April 2002.

INTERNATIONAL TRADE GROWTH AND CHANGES
IN U.S. MANUFACTURING CONCENTRATION

David B. Yerger, Indiana University of Pennsylvania


yerger@iup.edu

ABSTRACT

The impact of increased trade upon both the degree of regional specialization in U.S.
manufacturing sectors as measured by Hoover’s coefficient of localization, and the degree of
spatial concentration as measured by the Gini coefficient, is tested. The paper improves upon
prior work that was strictly cross-sectional by estimating over a panel of state-level data.
Contrary to prior published work, increases in trade activity in a manufacturing sector are
associated with declining levels of both regional specialization and spatial concentration.
These results do not support much of the theoretic work over the past decade that postulates
increasing regional specialization and spatial concentration as a consequence of rising
international economic integration.

I. INTRODUCTION

The oft-repeated stylized fact regarding the regional concentration and/or
specialization of manufacturing activity in the U.S. and other industrialized nations is that
industrial production tends to be localized, and that regions tend to specialize in certain types
of industrial production. Marshall (1920) was among the first to offer economic rationales for
regional clustering of production. Krugman (1991a, 1991b) initiated a growing literature now
known as the ‘New Economic Geography’ analyzing the issues raised by Marshall in a more
formal mathematical framework emphasizing interactions between transport costs, increasing
returns to scale, spill-over economies, and technological growth upon the degree of regional
specialization or de-specialization in production (see also Krugman and Venables (1995) and
Davis et al (1997)).

This paper contributes to the literature on empirical tests of international trade's
impact upon measures of manufacturing spatial concentration or regional specialization.
Some theoretic rationales have been developed showing mechanisms by which increased trade
activity leads to rising manufacturing spatial concentration or regional specialization. The
existing empirical literature, however, is small and not definitive. This study’s use of annual
U.S. state-level data permits more powerful testing of the impact of increased trade activity
than in prior literature.

II. LITERATURE REVIEW

Porter (1990) has been an influential writer on the impact of increased trade activity
upon regional clustering. Porter’s theory emphasizes that a nation’s successful industries will
be linked through both vertical and horizontal relationships. Spillover economies, or external
economies of scale, are critical to Porter’s explanation of why successful firms tend to cluster
regionally. Porter goes on to posit that “As more industries are exposed to international
competition in the economy, the more pronounced the movement toward clustering will
become.” (Porter, 1990, page 152). Krugman (1991a,b) reaches conclusions similar to Porter.
In his model, a threshold value for transport costs will exist where above (below) the
threshold value manufacturing activity will be diffuse (concentrated). Trade barrier
reductions are treated as a decline in transport costs. Krugman notes, however, that extending
the model to include more than two regions introduces ambiguity into the results.

To date, limited empirical work has been done assessing the validity of the above
theories on the impact of increased trade upon regional specialization or concentration of
manufacturing activity, and the findings are mixed. Greenaway and Hine (1991) find
convergence, not divergence, in the overall similarity of industrial production structure from
1970-1985 for 18 of 22 OECD nations studied. In contrast, Brulhart (1998) analyzes each of
166 sub national EU regions, using the centrality index of Keeble et al (1986) as the
geographic concentration measure of economic activity, and finds a positive correlation
between these centrality measures and the importance of intraindustry trade in these regions
over the 1961-90 period.

Another type of empirical study has examined regional manufacturing specialization
and concentration for regions within the U.S. Kim (1995) provides an extensive overview of
key changes in regional manufacturing from 1860 to 1987 across the nine census regions.
Kim finds that changes in plant size economies (internal economies of scale) explain most of
the very long-run time-series variation in concentration levels. The lack of
support for extensive external economies of scale in Kim’s work weakens Porter’s conjectures
given the importance of spillover economies in Porter’s theory. The only study directly testing
for the impact of trade upon U.S. regional manufacturing specialization was by Shelburne and
Bednarzick (1993). They use state-level, four-digit SIC manufacturing employment data to
estimate Hoover's coefficient of localization as a function of the industry's exports-to-
shipments and imports-to-new-supply ratios. They find that state-level manufacturing
specialization rises in response to both rising export and import intensity in an industry.

One should be cautious, however, in interpreting these findings as indicating that
increased international economic integration has led to greater regional manufacturing
specialization within the U.S. The equation may well suffer from an omitted variable bias: the
absence of any estimate of transport costs in the regression. Since the equation is estimated
for a single-year cross section, one would expect that ceteris paribus those industries with
relatively lower transport costs would be both more likely to cluster geographically, as in
Krugman (1991a), and to have higher levels of trade activity. Hence, the correlation between
trade activity and geographic concentration may be spurious.

III. DATA AND DEPENDENT VARIABLES CONSTRUCTION

This paper combines national-level trade flow data by industry with matching
manufacturing industry specialization and concentration indices built from state-level
manufacturing earnings for 21 manufacturing sectors. These are the 20 2-digit SIC sectors,
except transportation is divided into motor vehicles and other transportation sectors. The
coefficient of localization and the spatial gini coefficients are computed for each of the 21
industries at the national level for every year in the data set. The trade activity variables,
exports to domestic shipments and imports to domestic shipments, are taken from the National
Bureau of Economic Research Database “Imports and Exports By SIC Category, 1958-94”.
The overlap of the two data sets gives this study a sample period of 1969-94 with annual data.

The first measure of regional manufacturing specialization estimated was the
coefficient of localization from Hoover (1936), which was used by both Krugman (1991b) and

Kim (1995). For a particular industry j, the coefficient of localization is derived from the
industry’s location quotients across the k states. The location quotient for industry j, state k, is
defined as:

(1) Ljk = (Ejk/Ek) / (EjU.S./EU.S.)

where Ejk is earnings in industry j for state k, Ek is total manufacturing earnings for state k,
EjU.S. is national earnings in industry j, and EU.S. is total national manufacturing earnings. After
solving Ljk for all k states, the localization curve for industry j is constructed. It is identical in
concept to the better-known Lorenz curve for income distributions. The coefficient of
localization for industry j, LCj, is derived from the location quotients in equation (1) and is
related to the localization curve in the same manner as the Gini coefficient is derived from the
Lorenz curve. If LCj equals zero, then industry j is dispersed across the states in direct
proportion to total manufacturing activity. If LCj equals one, then the industry is completely
localized in one state. The formula from Pyatt et al (1980) has been widely used to estimate
Gini coefficients, so it was used to estimate LCj.

LCj is the most commonly used measure of regional specialization by industry sector,
but it does have some unavoidable shortcomings because states are an imperfect regional unit
of measure given the vast differences in population size across states. An alternative to LCj
that avoids the above problem of small versus large states is to simply estimate a Gini
coefficient of the spatial concentration of manufacturing sector j’s activity for each year in the
sample period. Then, examine how, if at all, changes in trade activity in an industry are
associated with changes in the distribution of the activity across the U.S.
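Both measures can be sketched in a few lines of Python. The three-state earnings figures below are invented for illustration, and the gini() function here uses the simple mean-difference formula rather than the Pyatt et al (1980) estimator used in the paper.

```python
# Location quotients (equation (1)) and a simple Gini coefficient of spatial
# concentration for three hypothetical states A, B, C.

# earnings in industry j by state, and total manufacturing earnings by state
E_j   = {"A": 50.0, "B": 30.0, "C": 20.0}
E_all = {"A": 200.0, "B": 100.0, "C": 300.0}

Ej_US = sum(E_j.values())    # national earnings in industry j
E_US  = sum(E_all.values())  # total national manufacturing earnings

# equation (1): Ljk = (Ejk / Ek) / (Ej_US / E_US)
LQ = {k: (E_j[k] / E_all[k]) / (Ej_US / E_US) for k in E_j}

def gini(values):
    """Gini coefficient of nonnegative values (mean-difference form)."""
    n, mean = len(values), sum(values) / len(values)
    diff_sum = sum(abs(a - b) for a in values for b in values)
    return diff_sum / (2 * n * n * mean)

GC = gini(list(E_j.values()))  # spatial concentration of industry j
print({k: round(v, 2) for k, v in LQ.items()}, round(GC, 3))
```

A location quotient above one (states A and B here) marks a state that is relatively specialized in industry j; the Gini rises toward one as the industry's earnings concentrate in fewer states.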

IV. ESTIMATION METHODOLOGY AND RESULTS

The empirical focus of this paper is narrow. The aim is not to explain the variation
across manufacturing sectors in their average values for either regional manufacturing
specialization (localization coefficient) or spatial concentration (spatial concentration gini).
These differences across industries are taken as a given and the impact upon these industry
measures from changes in trade activity is estimated using fixed effects panel data models.
By allowing the intercept coefficients to vary by industry, differences in industries' tendencies
to cluster or specialize regionally due to economies of scale or resource availability issues
are in large part captured. Moreover, any changes over time in LCjt or GCjt values that are
common across industries due to changes in transport costs, information costs, regulatory
environments, or other economy-wide factors can be captured through the use of time period
dummies, thereby aiding in isolating the impact from the trade activity variables.

In all, eight different panel regressions are estimated:


(3a) LCjt = αj + β*(X/S)jt + μjt
(3b) LCjt = αj + β*(X/S)jt + γ69*DUM69 + … + γ94*DUM94 + μjt
(4a) LCjt = αj + β*(M/S)jt + μjt
(4b) LCjt = αj + β*(M/S)jt + γ69*DUM69 + … + γ94*DUM94 + μjt
(5a) GCjt = αj + β*(X/S)jt + μjt
(5b) GCjt = αj + β*(X/S)jt + γ69*DUM69 + … + γ94*DUM94 + μjt
(6a) GCjt = αj + β*(M/S)jt + μjt
(6b) GCjt = αj + β*(M/S)jt + γ69*DUM69 + … + γ94*DUM94 + μjt
LCjt is Hoover’s localization coefficient and GCjt is the spatial concentration gini coefficient,
both variables as defined in the data section, for manufacturing sector j in year t. (X/S)jt is the
exports/shipments ratio for sector j in year t, while (M/S)jt is the imports/shipments ratio for
sector j in year t and μjt is the error term. DUM69 is the time period dummy variable equal to
one in 1969 and 0 in all other years. The other time dummies are defined similarly. Recall
from the data section that there are j = 21 manufacturing sectors and that t goes from 1969-94.
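The fixed-effects logic of equations (3a)-(6b) can be sketched with a within-transformation estimator on a toy panel. Everything below (sector names, trade shares, and the slope of -0.4) is invented for illustration; with noiseless synthetic data the demeaned regression recovers the slope exactly.

```python
# Within (fixed-effects) estimator on a tiny synthetic panel: three sectors,
# four years, LC generated as alpha_j - 0.4*(X/S)_jt with no noise.

alphas = {"food": 0.30, "textiles": 0.55, "chemicals": 0.40}
xs = {("food", t): 0.05 * t for t in range(1, 5)}
xs.update({("textiles", t): 0.02 + 0.03 * t for t in range(1, 5)})
xs.update({("chemicals", t): 0.10 + 0.01 * t for t in range(1, 5)})

beta_true = -0.4
lc = {key: alphas[key[0]] + beta_true * x for key, x in xs.items()}

def within_beta(x, y, groups):
    """Demean x and y within each group (T = 4 periods), then run a
    no-intercept OLS on the demeaned data; the group intercepts drop out."""
    xm = {g: sum(x[k] for k in x if k[0] == g) / 4 for g in groups}
    ym = {g: sum(y[k] for k in y if k[0] == g) / 4 for g in groups}
    num = sum((x[k] - xm[k[0]]) * (y[k] - ym[k[0]]) for k in x)
    den = sum((x[k] - xm[k[0]]) ** 2 for k in x)
    return num / den

beta_hat = within_beta(xs, lc, alphas)
print(round(beta_hat, 6))  # recovers -0.4
```

Subtracting each sector's time mean removes the sector-specific intercept αj, which is exactly how the fixed-effects panel regressions isolate the trade-share coefficient β.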

The key findings from the regression analysis are summarized in Table 1. Not
surprisingly, the industry-specific intercept terms jointly are highly significant in every
regression equation. The trade activity variables also are statistically significant in every
regression equation and the overall fit of the model, as evidenced by the adjusted R2, is quite
high. The impact of increased trade activity in this model is clear: an increase in either
industry imports or exports is associated with reductions in the degree of both regional
specialization (LC) and spatial concentration (GC) in the industry. While the inclusion of
time dummies reduces the magnitude of the trade variables’ effects, they still remain negative
and statistically significant.

The results of the LC analysis contradict Shelburne and Bednarzick's findings of a
positive impact upon regional specialization from increased trade activity. Apparently, their
result was due to the omission of transport cost estimates from their cross-section regression.
As noted by Krugman (1991a) among others, lower transport costs raise the likelihood of
regional industrial concentration ceteris paribus. At the same time, lower transport costs
obviously raise the likelihood that the industry engages in international trade. By embedding
the industry variations in the importance of transport costs within the industry specific fixed
effects, we find that the increasing trade in manufactures the past few decades has contributed
toward the observed trends in declining regional specialization and industry concentration in
the U.S. manufacturing sector.

The findings also suggest that increasing exports have a larger impact on both the LC
and GC measures than does a rise in import activity for an industry. For both with and
without time dummies versions of the estimating equations, the coefficient on the export share
variable always is larger (absolute value sense) than the import share parameter. Moreover,
this difference always is statistically significant as can be seen in the results for equations 3a
and 4a. The gap between the export share parameter (-.394) from equation 3a and the import
share parameter (-.163) from equation 4a will be significant at the 1% level given the standard
deviations of the two parameter estimates. Comparisons of equations 3b versus 4b, 5a versus
6a, and 5b versus 6b yield the same conclusion.

As a check on the sensitivity of the above results to the inclusion in the data set of a
few outlying industries, the data set was modified. Equations 3a-4b were estimated excluding
from the sample the two industries with the highest LC values and the two industries with the
lowest LC values. Similarly, equations 5a-6b were estimated excluding from the sample the
industries with the two highest and two lowest GC values. There are no material differences
in these results and those of Table 1. If anything, the estimated magnitudes of the trade
variables’ impact are slightly higher (results not presented due to space constraints).

Table 1: Summary of Key Regression Analysis Results

                                    Parameter Estimates*
Equation  Dep. Vbl.  Time Dummies  (X/S)jt          (M/S)jt          F-Stat** (p-value)  Adj. R2
3a        LCjt       No            -.394 (-12.44)   --               1036.7 (.000)       .97
3b        LCjt       Yes           -.274 (-6.31)    --               1197.1 (.000)       .98
4a        LCjt       No            --               -.163 (-11.13)    966.9 (.000)       .97
4b        LCjt       Yes           --               -.053 (-2.94)    1078.4 (.000)       .98
5a        GCjt       No            -.543 (-17.25)   --                377.4 (.000)       .93
5b        GCjt       Yes           -.371 (-8.81)    --                429.6 (.000)       .95
6a        GCjt       No            --               -.270 (-20.23)    434.3 (.000)       .94
6b        GCjt       Yes           --               -.182 (-11.22)    494.9 (.000)       .95

*t-statistics are given in parentheses; **F-stat is the F-statistic for the hypothesis test that all
industry-specific intercepts are equal. For equations excluding the time dummies the critical
value is for F(20,524), and for equations including time dummies it is for F(20,499).

V. CONCLUSION

This paper extends previous work on the determinants of U.S. manufacturing regional
specialization and spatial concentration by focusing upon the impact of changes in trade
activity upon these patterns. Constructed annual industry measures of regional specialization
and spatial concentration based on state-level earnings data, correctly matched with national
industry trade flow data, allowed for more accurate testing of trade’s impact than in prior
cross-sectional work. The findings provide little support for Porter’s conjecture that increased
international economic integration will stimulate additional regional clustering of
manufacturing sectors. Gini coefficient measures of spatial concentration for industries (GC)
are negatively, not positively, related to rising import and export levels. Nor is much support
provided for Krugman’s simple two-region model in which rising economic integration leads
to increasing regional specialization of manufacturing activity. Since 1969 in the U.S. there
have been ongoing improvements in the transportation infrastructure. Even more dramatic
has been the decline in communication and information exchange costs across regions. The
ease of economic integration across U.S. states since 1969 must have risen. Yet this paper
finds that rising trade activity, a measure of economic integration, is associated with falling
not rising regional specialization of manufacturing sectors as measured by Hoover’s
coefficient of localization (LC).

It is possible that the spatial dispersion of manufacturing activity associated with rising
trade is due to significant spatial variation in the growth of U.S. trade activity. The relatively
rapid growth of the Pacific Rim region over the sample period, for example, would favor West
Coast firms over East Coast firms, helping to disperse manufacturing activity beyond its

Midwest and Northeast core. Similar arguments could be made for Mexico and the U.S.
Southwest. This study’s primary contribution is to show that by standard measures of spatial
concentration (GC) or regional specialization (LC), rising trade activity in the U.S. since 1969
is associated with reductions in both the spatial concentration and specialization of U.S.
manufacturing sectors. Hopefully, these results will be reflected in future theoretical efforts at
modeling the impact of trade activity upon regional manufacturing patterns.

REFERENCES

Brulhart, Marius. “Trading Places: Industrial Specialization in the European Union.”
Journal of Common Market Studies. 36, 1998, 319-346.
Davis, Donald, David Weinstein, Scott Bradford, and Kazushige Shimpo. “Using
International and Japanese Regional Data to Determine When the Factor Abundance
Theory of Trade Works.” American Economic Review. 87, 1997, 421-446.
Greenaway, David, and Robert Hine. “Intra-Industry Specialization, Trade Expansion, and
Adjustment in the European Economic Space.” Journal of Common Market Studies.
29, 1991, 603-622.
Keeble, D., J. Offord, and S. Walker. Peripheral Regions in a Community of Twelve
Member States. Luxembourg: Commission of the European Communities, 1986.
Kim, Sukkoo. “Expansion of Markets and the Geographic Distribution of Economic
Activities: The Trends in U.S. Regional Manufacturing Structures, 1860-1987.”
Quarterly Journal of Economics. CX, 1995, 881-908.
Krugman, Paul. “Increasing Returns and Economic Geography.” Journal of Political
Economy. 99, 1991a, 483-499.
Krugman, Paul. Geography and Trade. Cambridge, MA: The MIT Press, 1991b.
Krugman, Paul, and Anthony Venables. “Globalization and the Inequality of Nations.”
Quarterly Journal of Economics. CX, 1995, 857-880.
Marshall, Alfred. Principles of Economics. New York: Macmillan, 1920.
Porter, Michael. The Competitive Advantage of Nations. New York: The Free Press, 1990.
Pyatt, G., C.-N. Chen, and J. Fei. “The Distribution of Income by Factor Components.”
Quarterly Journal of Economics, XCV, 1980, 451-473.
Shelburne, Robert C. and Robert W. Bednarzik. “Geographic Concentration of Trade-
Sensitive Employment.” Monthly Labor Review. June, 1993, 3-13.

THRESHOLD EFFECTS BETWEEN GERMAN
INFLATION AND PRODUCTIVITY GROWTH

David B. Yerger, Indiana University of Pennsylvania
yerger@iup.edu

Donald G. Freeman, Sam Houston State University
ecodgf@shsu.edu

ABSTRACT

We test for threshold effects in inflation’s impact upon productivity growth in
Germany and find a discernible difference in the impact of inflation upon productivity growth
in Germany depending upon the inflationary regime. In the ‘low’ inflationary regime
(<2.95%) there is no statistically significant impact from an inflation shock upon productivity,
but in the ‘high’ inflationary regime the inflation shock has a significant negative impact upon
productivity growth. This result is contrary to the existing literature which fails to find any
statistically significant impact from inflation upon productivity growth for the modern
German era. The previous literature, however, only utilized standard linear causality testing
methods that did not allow for the possibility of threshold effects existing in the relationship.

I. INTRODUCTION

One of the widespread macroeconomic policy success stories for industrialized nations
in the 1980’s and 1990’s was the reduction of inflation rates from double to low single digits.
There has remained in some quarters, however, the call for further inflation reductions,
perhaps even to zero, with the expectation of improved economic growth and productivity as a
consequence of even further reductions in inflation. This paper examines for Germany
whether there is empirical support for the position that reducing already low inflation rates
further will lead to increased labor productivity growth and thereby increased overall
economic growth. Germany has had one of the lowest average rates of inflation in the world
over the past 40 years, so it is a prime candidate for investigating the productivity benefits, if any,
from further inflation reductions.

A number of potential channels for inflation to reduce productivity growth have been
identified (Feldstein, 1982; Fischer, 1986; Briault, 1995; Thornton, 1996). While it is quite
plausible that adverse inflationary effects would arise through these channels at high rates of
inflation, it is much less certain that inflation in the low single digits adversely impacts
productivity growth in a meaningful way. Previous empirical work on this issue imposed
linear relationships in the testing for inflation-productivity linkages (Smyth, 1995; Freeman &
Yerger, 1997; Freeman & Yerger, 2000). After accounting for the impact of business cycle
effects upon measured productivity, these papers fail to find an adverse impact from inflation
upon measured productivity growth for Germany.

This paper extends the previous literature by testing for the existence of ‘Threshold
Effects’ from inflation upon measured productivity growth. If the impact of inflation upon
productivity growth varies depending upon the initial level of inflation itself, then it is
possible that the absence of findings in the prior literature was due to estimation techniques
that forced the same relationship across the ‘high’ versus ‘low’ inflation regimes.

II. LITERATURE REVIEW

Among the earliest relevant studies are those of Clark (1982) and Ram (1984) who
found that inflation negatively Granger-caused productivity growth in the U.S., while Jarrett
and Selody (1982) found the same effect for Canada. Follow-up studies appeared to confirm
these earlier findings. Simios and Triantis (1988) analyzed U.S. data through 1986 while
Saunders and Biswas (1990) examined U.K. manufacturing productivity through 1985, with
both studies finding a significant negative impact from inflation upon productivity growth.
Smyth (1995a, 1995b) analyzed both German and U.S. multifactor productivity data and
found a significant negative effect from contemporaneous inflation.

The conclusions to be drawn from these studies are limited, however, by several
factors. First, these studies for the most part include only the run-up of inflation through the
early 1980’s and not the subsequent decline. Second, most of these papers failed to control
for potentially relevant business cycle effects. Lastly, these studies did not test for stationarity
of the data, a necessary condition for their causality tests to be valid. Several more recent
papers have addressed some or all of the above issues, and their findings call into question the
conclusions of the prior literature. Among the studies failing to find evidence of a negative
impact from inflation upon productivity growth are Sbordone and Kuttner (1994), Cameron,
et al (1996), and Freeman and Yerger (1997, 1998, 2000).

While these more recent time series studies fail to find any robust relationship between
inflation and productivity growth, it remains possible that such a relationship exists but only
after inflation increases past a certain threshold. A number of cross-sectional based analyses
support this conjecture, but their estimated threshold inflation values vary widely, from 2.5%
to more than 24%, across these studies. See Fischer (1993), Bruno (1995), Sarel (1996), Ghosh
and Phillips (1998), Christoffersen and Doyle (1998), Bruno and Easterly (1998), and Khan
and Senhadji (2000) for technical details. These cross-sectional results suggest at least two
conclusions for the present study: inflationary threshold effects are likely to exist for a
number of nations; and, inflationary threshold critical values probably vary across nations.
Given the many differences across nations in their tax systems, labor market rigidities, and
inflation histories, the second conclusion is not surprising. Rather than impose the same
inflationary threshold value across multiple nations as in a panel setting, this paper returns to a
time series analysis of a single nation, Germany, but modifies the estimation technique to
allow for the existence of threshold effects. While the results here cannot be generalized to
other nations, the approach used could be replicated on other industrialized nations and the
consistency of the findings compared.

III. DATA

We use quarterly data for Germany from 1962 Q1 to 1998 Q4. This end date permits
restricting the German data to just the former West Germany, eliminating a structural break in
the data unrelated to the inflation regime itself. All variables are expressed as annual growth
rates on a year-over-year quarterly basis. Inflation is the CPI growth rate; productivity is the
growth rate of real output per worker in the manufacturing and mining sector. To control for
potential spurious correlation between inflation and business cycle effects, the model also
includes the growth rate of Germany’s industrial output index. All growth rate variables were
tested for stationarity using the Augmented Dickey-Fuller test statistic with a constant term
included in the estimating equation. As seen in Table 1 the null hypothesis of non-stationarity

is strongly rejected for each of the three variables, so the estimation techniques utilized in this
paper are valid.

Table 1: Stationarity Test Results

Variable                     ADF Statistic   P-Value   # Lags
Inflation                        -3.17        0.02       10
Productivity Growth              -3.69        0.004       4
Industrial Output Growth         -4.82        0.001       6
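The unit-root screening summarized in Table 1 can be sketched as a basic augmented Dickey-Fuller regression with a constant term, estimated by OLS. This is a minimal illustration under stated assumptions, not the paper's exact implementation; the simulated series is hypothetical.

```python
import numpy as np

def adf_tstat(y, lags):
    """t-statistic on rho in the ADF regression with a constant:
        dy[t] = a + rho * y[t-1] + g1*dy[t-1] + ... + gk*dy[t-k] + e[t].
    A strongly negative statistic rejects the unit-root (non-stationarity) null."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    rows, target = [], []
    for t in range(lags, len(dy)):
        # constant, lagged level, then `lags` lagged differences
        rows.append([1.0, y[t]] + [dy[t - i] for i in range(1, lags + 1)])
        target.append(dy[t])
    X, z = np.array(rows), np.array(target)
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    sigma2 = resid @ resid / (len(z) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return float(beta[1] / se)

# A stationary (white-noise) growth-rate series should reject the null decisively.
rng = np.random.default_rng(0)
stat = adf_tstat(rng.standard_normal(150), lags=4)
```

The statistic is compared against Dickey-Fuller critical values (roughly -2.88 at the 5% level with a constant), not the usual normal critical values.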

IV. THE THRESHOLD MODEL

The general form of the threshold model used in this paper is given in equation one
below. The model includes industrial output growth as an explanatory variable in order to
account for the potential cyclicality of measured productivity growth with the business cycle.
Failing to include a control for business cycle effects may lead to findings of spurious
correlation between inflation and productivity growth (Sbordone & Kuttner, 1994; Freeman &
Yerger 2000).

In the model, the parameter estimates on all variables can differ depending upon
whether the observation comes from the ‘low’ or the ‘high’ inflation regime. Observations
are assigned into the low (high) inflation regime if the value of their inflation threshold
variable is below (above) the critical switching value used to divide the sample into two
inflation regimes. If in Germany the adverse effects of inflation upon productivity growth are
notably more pronounced in the upper end of the inflation range Germany experienced, then
the threshold model should find a more deleterious impact from inflation in the high inflation
regime.

(1)  prodt = A0(L)prodt-1 + B0(L)Xt + {A1(L)prodt-1 + B1(L)Xt}*DUM(thresh > critval) + et

A0(L), A1(L), B0(L), and B1(L) are lag polynomials; prodt is the productivity growth rate; Xt
is the vector containing the inflation and industrial output growth measures; thresh is the
threshold variable used to sort inflation regimes; critval is the selected critical value against
which thresh is compared; and DUM(thresh > critval) = 1 if thresh > critval and 0 otherwise.
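Estimation of a two-regime specification of this kind reduces to OLS on a design matrix whose columns are interacted with the regime dummy. The sketch below assumes plain arrays for the three growth-rate series and a pre-aligned threshold variable; all names and data are illustrative, not the authors' code.

```python
import numpy as np

def threshold_ols(prod, infl, output, thresh, critval, lags=3):
    """OLS fit of eq. (1): every coefficient may differ between the low
    (thresh <= critval) and high (thresh > critval) inflation regimes.
    Returns the coefficient vector and the sum of squared residuals."""
    rows, y = [], []
    for t in range(lags, len(prod)):
        row = [1.0]
        for k in range(1, lags + 1):
            row += [prod[t - k], infl[t - k], output[t - k]]
        rows.append(row)
        y.append(prod[t])
    X0, y = np.array(rows), np.array(y)
    dum = (np.asarray(thresh)[lags:] > critval).astype(float)[:, None]
    X = np.hstack([X0, X0 * dum])  # second block is active only in the high regime
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, float(resid @ resid)

# Illustrative use with synthetic data; thresh is one lag of inflation, as in the paper.
rng = np.random.default_rng(1)
infl = np.abs(rng.standard_normal(148)) * 3.0
output, prod = rng.standard_normal(148), rng.standard_normal(148)
thresh = np.concatenate([[infl[0]], infl[:-1]])
beta, ssr = threshold_ols(prod, infl, output, thresh, critval=2.95)
```

Setting the critical value above the largest observed threshold collapses the fit to the linear (single-regime) model, which is convenient for likelihood-ratio comparisons.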

Standard inference techniques are not appropriate here since under the null hypothesis
of no threshold effect the variables thresh and critval are not identified. Consequently, we
utilize the test developed by Andrews (1993) for tests of parameter instability when the
change point is unknown, or known to lie in a restricted interval. His sup-LR test statistic is
the largest maximum likelihood ratio statistic found over the tested range of all possible
threshold variables and threshold values. In his paper critical values are given for the
rejection of the linearity null hypothesis in favor of the alternative hypothesis that threshold
effects exist. These critical values are used for the linearity tests on equation (1) in this paper.
Since economic theory does not support any particular lag length structure a priori, we
proceeded as follows. The lag length on all variables was varied from one to six lags. Also,
consistent with standard practices, one to six lags of inflation were tested as the threshold
variable. For each inflation lag threshold variable, the threshold critical value was varied by
initially setting the threshold critical value at 1.60% and then raising the threshold critical

value in 0.05% steps until the final iteration utilized a threshold critical value of 5.15%.
These boundaries keep at least 15% of the observations in each inflationary regime.
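The grid search just described can be sketched as follows: for each candidate critical value, a likelihood-ratio statistic compares the linear model's residual sum of squares with the threshold model's, and the largest statistic over the grid is the sup-LR value compared against Andrews' (1993) tabulated critical values (not reproduced in code). The data and design matrix here are synthetic and purely illustrative.

```python
import numpy as np

def ols_ssr(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def sup_lr(X0, y, thresh, grid, trim=0.15):
    """Sup-LR scan: for each candidate critval c, LR = n*log(SSR_linear/SSR_c),
    where SSR_c comes from interacting every column of X0 with 1{thresh > c}.
    Candidates leaving less than `trim` of the sample in either regime are skipped."""
    n = len(y)
    ssr_lin = ols_ssr(X0, y)
    best_lr, best_c = -np.inf, None
    for c in grid:
        dum = (thresh > c).astype(float)[:, None]
        if not trim <= dum.mean() <= 1 - trim:
            continue  # trimming rule: keep at least 15% of observations per regime
        lr = n * np.log(ssr_lin / ols_ssr(np.hstack([X0, X0 * dum]), y))
        if lr > best_lr:
            best_lr, best_c = lr, c
    return best_lr, best_c

# Synthetic example with a genuine break at a threshold value of 3.0
rng = np.random.default_rng(2)
x = rng.standard_normal(400)
thresh = rng.uniform(0.0, 6.0, 400)
y = 1.0 + 0.5 * x + 2.0 * x * (thresh > 3.0) + 0.5 * rng.standard_normal(400)
X0 = np.column_stack([np.ones(400), x])
lr, c = sup_lr(X0, y, thresh, np.arange(1.60, 5.16, 0.05))
```

With a break this strong, the scan recovers a candidate near the true switching value; inference on the resulting statistic still requires the non-standard critical values, since the threshold is not identified under the linearity null.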

The 3 lag specification of equation (1) with 1 lag of inflation as the threshold variable,
and 2.95% as the threshold critical value, generated the largest maximum likelihood ratio test
statistic over the range of tested models. This is the value needed to compare against the
critical values provided by Andrews (1993). As seen in Table 2, the linearity null hypothesis
for equation (1) is strongly rejected as the sup-LR test statistic of 31.55 is well above even
the 1% critical value of 25.75. The findings imply a threshold critical value of 2.95% as the
switching point between the low and high inflation regimes.

Table 2: Estimation Of Equation (1) Summary Of Results

Model’s Estimated            Sup-LR Critical Values
Sup-LR Test Statistic      10%       5%       1%
        31.5              18.08     20.35    25.75

Estimated Parameter Values For Equation (1)

                             Low Inflation Regime      High Inflation Regime
Variable                   Sum of Coeffs.   P-value   Sum of Coeffs.   P-value
Inflation                       0.060         0.79        -0.297        0.03
Industrial Output Growth       -0.063         0.26        -0.305        0.000
Productivity Growth             0.676         0.000        1.040        0.000

In the low inflation regime, the coefficients sum to a positive 0.060 but the sum is not
statistically significant as the p-value = 0.79. In the high inflation regime, however, the
impact of inflation upon productivity growth is both negative and statistically significant with
a parameter value of -.297 and p-value of 0.03. These results contradict previous findings of
no impact from inflation upon German productivity growth in studies that imposed a constant
linear relationship between inflation and productivity growth. Apparently, the prior findings
of no adverse effects from inflation were due to the mixing of the low inflation and high
inflation regimes’ observations in the same regression estimation.

V. CONCLUSION

We do not argue that an inflation rate of 2.95% strictly divides the German data into
low versus high inflationary regimes across which the impact of inflation upon productivity
growth differs as a consequence of some abrupt regime change. Instead, we interpret these
results as being broadly consistent with negative effects from inflation upon productivity
growth emerging for Germany once the inflation rate moves into the upper one-half of the
inflation range experienced by Germany over this sample period. The threshold switching
model’s abrupt regime change approach likely is approximating less abrupt changes in the
underlying inflation-productivity relationship. We check the robustness of Table 2’s findings
by analyzing the results for all other inflation threshold critical values between 1.60% and
5.15% in 0.05% steps (not reported here due to space constraints). In the range of 2.0% to

4.0% threshold values, inflation in the ‘high inflation’ regime consistently has a statistically
significant negative impact upon productivity growth, but no adverse impact is found when
the inflation threshold is set below 2.0%.

These findings support those who argue for low inflation rates as a means to aid
productivity growth. In particular, it is consistent with the view that the German Central
Bank’s strong commitment to low inflation in the post World War II era contributed to
Germany’s strong record of productivity growth over this time. The finding suggests that if
the European Central Bank ultimately is as successful as was Germany’s Central Bank at
maintaining low inflation, then productivity growth across the entire Euro zone will be
enhanced. At the same time, however, this paper’s findings do not support a policy goal of
zero inflation as has been called for by some. Once inflation reaches the low 2% range or
below, this paper finds no evidence that further inflation reductions would aid productivity
growth.

REFERENCES

Andrews, Donald. “Tests for Parameter Instability and Structural Change With Unknown
Change Point.” Econometrica, 61, 4, 1993, 821-856.
Briault, Clive. “The Costs of Inflation.” Bank of England Quarterly Bulletin, February 1995,
33-45.
Bruno, Michael. “Does Inflation Really Lower Growth?” Finance and Development, Sept.
1995, 35-38.
Bruno, Michael and Easterly, William. “Inflation Crises and Long-Run Growth.” Journal of
Monetary Economics, 41, Feb. 1998, 3-26.
Cameron, Norman, Hum, Derek, and Simpson, Wayne. “Stylized Facts and Stylized Illusions:
Inflation and Productivity Revisited.” Canadian Journal of Economics, 29, 1, Feb.
1996, 152-62.
Christoffersen, Peter and Doyle, Peter. “From Inflation to Growth: Eight Years of Transition.”
IMF Working Paper 98/99, International Monetary Fund.
Clark, Peter. “Inflation and the Productivity Decline.” American Economic Review, 72, 2,
May 1982, 149-54.
Feldstein, Martin. “Inflation, Tax Rules, and Investment: Some Econometric Evidence.”
Econometrica, 50, 1982, 825-62.
Fischer, Stanley. Indexing, Inflation, and Economic Policy. Cambridge, MA: MIT Press,
1986.
Fischer, Stanley. “The Role of Macroeconomic Factors in Growth.” Journal of Monetary
Economics, 32, Dec. 1993, 485-512.
Freeman, Donald and Yerger, David. “Inflation and Total Factor Productivity in Germany: A
Response to Smyth.” Weltwirtschaftliches Archiv, 133, 1, 1997, 158-63.
Freeman, Donald and Yerger, David. “Inflation and Multifactor Productivity Growth: A
Response to Smyth.” Applied Economics Letters, 5, 1998, 271-74.
Freeman, Donald and Yerger, David. “Does Inflation Lower Productivity? Time Series
Evidence on the Impact of Inflation on Labor Productivity in 12 OECD Nations.”
Atlantic Economic Journal, 28, 3, 2000, 315-332.
Ghosh, Atish and Phillips, Steven. “Warning: Inflation May Be Harmful to Your Growth.”
IMF Staff Papers, International Monetary Fund, 45, 4, 1998, 672-710.
Jarrett, Peter and Selody, Jack. “The Productivity-Inflation Nexus in Canada.” The Review of
Economics and Statistics, 64, 3, August 1982, 361-67.

Khan, Mohsin and Senhadji, Abdelhak. “Threshold Effects in the Relationship Between
Inflation and Growth.” IMF Working Paper 00/110, 2000.
Potter, Simon. “A Nonlinear Approach to U.S. GNP.” Journal of Applied Econometrics, 10,
1995, 109-125.
Ram, Rati. “Causal Ordering Across Inflation and Productivity Growth in the Postwar United
States.” The Review of Economics and Statistics, 66, 1984, 472-77.
Sarel, Michael. “Nonlinear Effects of Inflation on Economic Growth.” IMF Staff Papers,
International Monetary Fund, 43, March 1996, 199-215.
Saunders, Peter and Biswas, Basudeb. “An Empirical Note on the Relationship Between
Inflation and Productivity in the United Kingdom.” British Review of Economic
Issues, 12, 8, October 1990, 67-77.
Sbordone, Argia and Kuttner, Kenneth. “Does Inflation Reduce Productivity?” Federal
Reserve Bank of Chicago Economic Perspectives, Nov/Dec 1994, 2-14.
Simios, Evangelos and Triantis, John. “A Note on Productivity, Inflation, and Causality.”
Rivista Internazionale di Scienze Economiche a Commerciali, 35, 9, 1988, 839-46.
Smyth, David. “Inflation and Total Factor Productivity in Germany.” Weltwirtschaftliches
Archiv, 131, 2, 1995a, 403-05.
Smyth, David. “The Supply Side Effects of Inflation in the United States: Evidence from
Multifactor Productivity.” Applied Economics Letters, 2, 1995b, 482-83.
Thornton, Daniel. “The Costs and Benefits of Price Stability: An Assessment of Howitt’s
Rule.” Federal Reserve Bank of St. Louis Review, March/April 1996, 23-28.
Tsay, Ruey. “Testing and Modeling Threshold Autoregressive Processes.” Journal of the
American Statistical Association, 84, 405, 1989, 231-240.

TRADE AND GROWTH SINCE THE NINETIES:
THE INTERNATIONAL EXPERIENCE

Paramjit Nanda, Guru Nanak Dev University
paramjit_nanda2002@yahoo.com

P.S. Raikhy, Guru Nanak Dev University
raikhy_ps@yahoo.co.in

ABSTRACT

The paper examines the effect of exports, export structure, and concentration/
diversification on growth across countries (developing, developed, and all) in 1992 and 2002,
with the hypothesis that different dimensions of trade, including commodity composition and
country concentration, have a bearing on trade performance and growth. The study finds that
primary exports, the concentration index, and the diversification index affected economic
growth negatively and significantly, while manufactured exports affected it positively and
significantly.

I. INTRODUCTION

There is some disagreement among economists about the role of trade in promoting
growth. Recent studies have reported that innovation and the transmission of technology and
knowledge, whether through imitation of foreign technologies, direct importation of
technologies, direct interaction with sources of innovation, or foreign direct investment, can
help in promoting growth (Edwards, 1992). Thus trade in manufactures has greater potential
for productivity gains than trade in primary commodities, indicating an increased role of trade
structure in affecting growth. Similarly, commodity concentration and country concentration
can retard growth by increasing instability in earnings and expenditure. The paper focuses
on these issues, viz. the role of the volume of trade, trade structure, and concentration in
affecting the growth of different countries in 1992 and 2002.

II. DATA BASE AND METHODOLOGY

Trade liberalisation has been examined in terms of three indicators viz (i) exports to
gross domestic product (X-GDP) ratio, (ii) imports to GDP (M-GDP) ratio and (iii) (X+M)
i.e. trade to GDP (T-GDP) ratio. Export structure has been examined in terms of (i)
percentage share of primary exports in total exports (XP/XT) and (ii) percentage share of
manufactured exports in total exports (XM/XT). To examine commodity diversification and
market concentration, diversification (DIx) and market concentration indexes (CIx) have been
used. The diversification index reveals the extent of difference between the structure of trade
of a country and the world average. An index value closer to one indicates a bigger difference
from the world average. The index value is calculated by measuring the absolute deviation of
the country shares from the world structure as follows:
                 Sj = ( Σi | hij − hi | ) / 2

where hij = share of commodity i in total exports of country j
      hi = share of commodity i in total world exports
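The deviation measure defined above can be computed directly; a minimal sketch with hypothetical share vectors:

```python
import numpy as np

def diversification_index(country_shares, world_shares):
    """S_j = (1/2) * sum_i |h_ij - h_i|: zero when the country's export
    structure matches the world average, approaching one as it diverges."""
    h_ij = np.asarray(country_shares, dtype=float)
    h_i = np.asarray(world_shares, dtype=float)
    return float(0.5 * np.abs(h_ij - h_i).sum())

# Shares over three hypothetical commodity groups (each vector sums to 1)
print(diversification_index([0.7, 0.2, 0.1], [0.3, 0.4, 0.3]))  # about 0.4
```

The 1/2 factor ensures the index stays in [0, 1], since the positive and negative deviations of two share vectors sum to the same magnitude.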

Further, the Herfindahl-Hirschman index, which is a measure of market concentration, has
been computed as follows:

                 Hj = [ Σi=1..239 (xi / X)^2 − 1/239 ] / [ 1 − 1/239 ]

where Hj = country index with value ranging from 0 to 1 (maximum concentration)
      xi = value of exports of product i
      X = Σi=1..239 xi, and 239 = number of products (at the three-digit level of SITC, Revision 2).
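A sketch of this normalized concentration index, taking a vector of product-level export values; the names are hypothetical and the product-count cut-offs used in the paper's data construction are not applied here.

```python
import numpy as np

def concentration_index(export_values, n_products=239):
    """Normalized Herfindahl-Hirschman index:
        H_j = (sum_i (x_i/X)^2 - 1/n) / (1 - 1/n),
    ranging from 0 (exports spread evenly over all n products)
    to 1 (all exports concentrated in a single product)."""
    x = np.asarray(export_values, dtype=float)
    shares = x / x.sum()
    h = float((shares ** 2).sum())
    return (h - 1.0 / n_products) / (1.0 - 1.0 / n_products)

# Perfect concentration vs. a perfectly even spread over 239 products
single = np.zeros(239); single[0] = 5e6
even = np.full(239, 5e6 / 239)
```

Subtracting 1/n and rescaling removes the mechanical lower bound of the raw Herfindahl sum, so countries with different product counts are comparable on a common [0, 1] scale.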
The number of products exported includes only those products with value greater than $100,000 or
more than 0.3 percent of the country's total exports. Data for all these variables were available
for 61 countries (29 less developed and 32 developed countries) for 1992 and for 97 countries
(51 less developed and 46 developed countries) for 2002, from various issues of World
Development Report, World Development Indicators and World Tables published by World
Bank and Statistical Annual Yearbook and UNCTAD Handbook of Statistics published by
U.N.
In order to study the effect of these variables on economic growth, measured in terms
of gross national income/product per capita, simple regression analysis and step-wise (step-
up) multiple linear regression analysis were carried out for all countries, less developed
countries, and developed countries in the following way:
PCY= f (X-GDP, M-GDP, T-GDP, XP/XT, XM/XT, DIx, CIx)
It is hypothesised that

∂(PCY)/∂(X-GDP), ∂(PCY)/∂(M-GDP), ∂(PCY)/∂(T-GDP), and ∂(PCY)/∂(XM/XT) > 0,
while ∂(PCY)/∂(XP/XT), ∂(PCY)/∂(DIx), and ∂(PCY)/∂(CIx) < 0.
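The step-up procedure can be sketched as forward selection that, at each step, adds the remaining regressor with the largest absolute t-statistic and stops when no candidate clears an entry threshold. This is a generic illustration under stated assumptions; variable names and data are invented, not the study's dataset.

```python
import numpy as np

def last_tstat(X, y):
    """|t|-statistic on the last column of X in an OLS fit (constant included in X)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    s2 = (r @ r) / (len(y) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return abs(beta[-1] / np.sqrt(cov[-1, -1]))

def step_up(X, y, names, enter_t=2.0):
    """Forward (step-up) selection: repeatedly add the candidate whose
    coefficient has the largest |t| in the expanded model, while |t| >= enter_t."""
    ones = np.ones((len(y), 1))
    chosen, pool = [], list(range(X.shape[1]))
    while pool:
        t_best, j_best = max(
            (last_tstat(np.hstack([ones, X[:, chosen + [j]]]), y), j) for j in pool
        )
        if t_best < enter_t:
            break
        chosen.append(j_best)
        pool.remove(j_best)
    return [names[j] for j in chosen]

# PCY regressed on three illustrative trade variables, two of them truly relevant
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 3))
pcy = 2.0 * X[:, 0] - 3.0 * X[:, 1] + 0.5 * rng.standard_normal(200)
picked = step_up(X, pcy, ["XM/XT", "XP/XT", "CIx"])
```

With strong signals, the two genuine regressors are picked up early; the entry threshold governs how readily noise variables are admitted.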

III. TRADE AND GROWTH IN DEVELOPING COUNTRIES

In 1992, developing countries mainly laid stress on an import liberalisation strategy,
while in 2002, an export promotion strategy was largely adopted by these countries (World
Bank, 1994 and 2004). These countries were major exporters of primary products in 1992 as
well as in 2002, and their share of primary exports increased in 2002 as compared to 1992.
These countries had a higher diversification index in 1992 as well as in 2002, indicating a
more diversified trade structure in these countries than the world average.

The simple regression results given in Table I show that in 1992, the variables X-
GDP, T-GDP, XP/XT and CIx affected economic growth positively (or negatively) but non-
significantly (T-GDP in isolation as well as in combination with XP/XT, XM/XT, CIx and X-GDP;
X-GDP, XP/XT and CIx negatively but non-significantly in combination with other variables).
Three variables, namely M-GDP, XM/XT and DIx, affected economic growth negatively but
non-significantly (XM/XT in isolation as well as in any combination with XP/XT, T-GDP, X-GDP,
and CIx). Of the 7 variables considered, T-GDP positively and XM/XT negatively affected
economic growth. These variables had little impact on economic growth during the nineties.

In 2002, the variables X-GDP, M-GDP, T-GDP and CIx, in isolation as well as in
combination, affected economic growth positively but non-significantly. The variable XM/XT
affected economic growth positively and significantly in isolation, while XP/XT and DIx
affected it negatively and significantly, indicating an increased role of manufactured exports in

economic growth and a reduced role of primary exports and the diversification index in the
economic growth of developing countries.

IV. TRADE AND GROWTH IN DEVELOPED COUNTRIES

In 1992, developed countries followed an import liberalisation strategy, and in 2002 an
import-led export promotion strategy was followed (World Bank, 1994 and 2004). They had
much more trade liberalisation in 2002 as compared to 1992. These countries were major
exporters of manufactures in 1992 as well as in 2002.

In 1992, about 40 percent of countries, and in 2002, 30 percent of countries, had a lower
diversification index than the world average, indicating a trade structure closer to the world
average for these countries. The market concentration index was lower for 50 percent of the
countries in 1992 and 60 percent of the countries in 2002, indicating diversification of the
markets of developed countries.

Regression results given in Table II show that in 1992 as well as in 2002, the variables
M-GDP and CIx affected economic growth negatively but non-significantly, XP/XT and DIx
affected it negatively and significantly (DIx in 2002 only), while XM/XT affected it positively
and significantly. However, the variables X-GDP and T-GDP affected it negatively and non-
significantly in 1992, but positively though non-significantly in 2002. This indicated little role
of trade-related variables in affecting growth. Primary exports and the diversification index
contributed towards reducing economic growth, while manufactured exports helped in
promoting economic growth in developed countries also.

V. TRADE AND GROWTH IN ALL COUNTRIES

For all countries, regression results given in Table III show that in 1992, X-GDP and
M-GDP affected economic growth negatively and non-significantly; in 2002, X-GDP affected
it positively and significantly while M-GDP and T-GDP affected it positively and non-
significantly (T-GDP in 1992 also). Thus, the X-GDP ratio helped in promoting the growth of
all countries in 2002. The variables XP/XT, CIx and DIx negatively and significantly affected
economic growth in 1992 as well as in 2002. The variables DIx and CIx in isolation as well as
in combination with other variables (X-GDP and XP/XT in 1992, and X-GDP and XM/XT
in 2002) negatively and significantly affected the economic growth of all countries.

VI. POLICY IMPLICATIONS

The study suggests that industrial policy and trade policy should aim at promoting
exports of manufactures even when comparative advantages might exist in primary goods.
Developing countries should reduce their dependence on primary exports through
industrialisation and lay stress on technology-intensive industries (such as electronics), as
these have higher potential for positive externalities (technology and knowledge spillovers)
coupled with higher productivity levels (due to efficiency gains and economies of scale).
Efforts should be made by countries (especially the developing countries) to bring the trade
structure closer to the world structure and to diversify markets (especially by the developed
countries). Developing countries should change their trade structure while developed
countries should diversify markets for better gains from trade.

REFERENCES

Barro, Robert J. and Xavier Sala-i-Martin. "Technological Diffusion, Convergence, and
Growth", Journal of Economic Growth, 2 (1), 1997, 1-26.
Edwards, Sebastian. "Trade Orientations and Growth in Developing Countries", Journal of
Development Economics, 39, 1992, 31-57.
Feder, G. "On Exports and Economic Growth", Journal of Development Economics, 12,
1983, 59-73.
Fosu, A.K. "Export Composition & the Impact of Exports on Economic Growth of
Developing Economies", Economic Letters, 34, 1990, 67-71.

Greenaway, D., W. Morgan and P. Wright. "Exports, Export Composition & Growth",
Journal of International Trade & Economic Development, 8 (1), 1999, 41-51.
Krueger, A.O. "Trade Policy as an Input to Development", American Economic Review, 70,
1980, 288-292.
Wacziarg, Romain. Trade Competition and Market Size, Mimeo, Harvard University, Nov. 1997.
World Bank. World Development Report, 1994.
World Bank. World Development Indicators, 2004.
UN. UNCTAD Handbook of Statistics, 2004.

INVESTOR RELATIONS CHALLENGES WITHIN
THE LIFE SCIENCES CATEGORY

Kerry Slaughter, Emerson College
kerry_slaughter@emerson.edu

James Rowean, Emerson College
james_rowean@emerson.edu

ABSTRACT

This paper addresses the unique challenges that investor relations practitioners face in
the life sciences categories of biotechnology and pharmaceuticals. Within this business
segment, companies are overseen by both the Securities and Exchange Commission and the
Food and Drug Administration. The paper goes on to highlight specific IR changes resulting
from such government regulation.

I. HISTORY OF INVESTOR RELATIONS

According to Theodore Pincus, credited as the founder of modern-day investor
relations, the history of investor relations (IR) has been fraught with many challenges. In
its early years, investor relations was merely referred to as “financial public relations,”
when practitioners were focused on making sure the companies they represented followed
legal procedures and were seen as credible sources for the financial community. Pincus
faulted them for not seeking to do more with the practice, and therefore saw the evolution of
IR as a very slow process hindered by its early practitioners (Pincus, 1982). Eventually,
Pincus entered the IR space and created a specialty financial public relations firm called the
Financial Relations Board. With this practice, he revolutionized the idea of investor relations
and challenged current assumptions of the day which led to the birth of the investor relations
practice (Pincus, 1982).

II. THE CURRENT STATE OF INVESTOR RELATIONS

Factors Contributing to Investor Relations from 2000 Through Present

Three watershed events have occurred in a recent, short time frame relative to the
history of investor relations: Reg FD, SOX, and 9/11.

Regulation Fair Disclosure 2000

The first event is the implementation of the Securities and Exchange Commission
(SEC) Regulation Fair Disclosure, or Reg FD as it is referred to by many current IR
practitioners. A white paper from the law firm Sullivan & Cromwell explains that Reg
FD applies to companies disclosing material, non-public information (Sullivan & Cromwell,
2000). If a company knows it is going to make an announcement to certain members of the
financial community then it must simultaneously make a public announcement. However, if
material non-public information is given out unintentionally, then the information must be
disclosed “promptly” in a public manner within a specified amount of time. The SEC has
made itself clear that it expects company executives to take special precaution when speaking

to certain groups in private. Corporate executives must remain vigilant in making sure they
follow these new regulations in order to spare themselves reaction from the SEC.

Sarbanes-Oxley 2002
The second important event contributing to the current state of investor relations is the
Sarbanes-Oxley Act of 2002. This Act (called SOX by industry professionals) is intended to
create an independent regulatory structure for the accounting industry, set higher standards for
corporate governance, increase the independence of securities analysts, improve the
transparency of financial reporting, and provide new civil and criminal remedies for violations
of the federal securities laws (http://www.ffhsj.com/secreg/pdf/sc020730-2 .pdf). This Act
came as a response to the Enron and WorldCom scandals and other high profile fraud cases
that truly shook the financial community to its core. In an effort to curb corporate
misconduct, stricter regulations have been placed on the chief executive officers of any public
company. They are now required to verify important documents such as the Form 10-Q and
Form 10-K, and be held accountable for accuracy of the information. These changes are part
of the sweeping reforms that are dedicated to ensuring corporate accountability.

September 11th and its Impact on Investor Relations


In comparison to the events that led to the widespread reforms of Regulation FD and
the Sarbanes-Oxley Act, the attacks on September 11, 2001, while truly a horrible
experience for the American public, did not have the same lasting impact on the financial
marketplace. The financial markets did not open on September 11th and remained closed
until September 17th. “When the stock markets reopened on September 17, 2001, after the
longest closure since the Great Depression in 1929, the Dow Jones Industrial Average
(“DJIA”) stock market index fell 684 points, or 7.1%, to 8920, its biggest-ever one-day point
decline.” For Slate.com, Daniel Gross wrote an article immediately following the London
attacks of July 2005, in which he compared the market reactions to those attacks with the
market reactions to September 11th. Gross wrote, “But Wall Street recovered rapidly, and the
effects turned out to be temporary. Within a few weeks of trading, the markets regained all
the ground they lost.” (http://www.slate.com/id/2122187)

Therefore, from the investor relations perspective, it is easier for investors to
overcome an external event that impacts trading than it is to overcome events intrinsically
linked to corporate misconduct, which have a lasting impact on the trust investors need
before putting their money into particular stocks.

III. INVESTOR RELATIONS FOR BIOTECHNOLOGY

Challenges for investor relations practitioners within biotechnology companies


Every publicly traded company in the United States is subject to the rules and regulations
of the Securities and Exchange Commission. This presents a series of unique challenges for
investor relations practitioners charged with making sure their companies receive
proper attention from the financial community while at the same time remaining compliant
with SEC regulations.

In addition, biotechnology and pharmaceutical companies are also subject to the
stringent oversight, rules, and regulations of the Food and Drug Administration (FDA). The
FDA has outlined a specific timeline that all pharmaceutical companies must follow in
attempting to bring a new drug to market. The timeline first calls for preclinical research and
trials, followed by a series of phases of clinical trials that eventually result in human testing.
The final stage is FDA approval of the drug for sale to the public.

In the past, investors were willing to take risks on small biotechnology companies
with the assumption that the healthcare industry is a target-rich environment with many
opportunities for new breakthrough products that could become quickly and widely accepted.
Many early-stage investors based their expectations on the success model of start-ups
within the high-tech industry in recent decades. There have been numerous examples of
hardware, software, and Internet-oriented brands rocketing from early-stage companies to
global entities in very short time periods, oftentimes with very lucrative ROIs for their early
investors.

However, investors have become increasingly risk averse in regard to biotechnology.
This change might be a result of the painstakingly long process it takes to bring a drug from
development into production. “It takes 12 years on average for an experimental drug to travel
from lab to medicine chest. Only five in 5,000 compounds that enter preclinical testing make
it to human testing. One of these five tested in people is approved.” (Dale E. Wierenga, Ph.D.,
and C. Robert Eaton, Office of Research and Development, Pharmaceutical Manufacturers
Association).

In regard to the FDA testing cycle chart below, a crucial phase for investor relations
practitioners is when clinical trials are occurring. During this phase, new drugs are being
tested and scrutinized by the FDA. A notorious incident that reflects the pitfalls that can
occur at this stage was the situation with ImClone. In late December 2001 the FDA rejected
the company’s application for approval of its cancer drug, Erbitux. The FDA’s action was
due to shoddy data and poor clinical trial research design. This negative news on its own was
a challenge for ImClone’s investor relations team. What happened next exacerbated the
problem when a second federal regulatory agency became involved. In January 2002 the
SEC launched an investigation into insider trading by company founder and CEO, Sam
Waksal (Forbes Magazine, Sept. 2004).

FDA Pharmaceutical Testing Cycle

Evolutionary Phase                   Investor Relations Challenge
Early Research and Development       Manage/govern early media hype re:
                                     “Medical Breakthroughs”
Clinical Trials                      Awareness of FDA oversight;
 - Preclinical Testing               monitoring trial disclosure data
 - Human Testing                     released in research studies
FDA Approval/Launch Phase            Claim validation
 - Manufacturing
 - Introductory Marketing
Post-Approval Phase                  Constant monitoring of possible new
 - Ongoing Marketing                 implications

While not always true, it is generally accepted that investors are more likely to invest
in pharmaceutical and biotechnology companies that already have a drug or product in the
last stages of development. This strategy makes it slightly more likely that the investors will
see a return on their investment. Therefore, investor relations practitioners have to convince
their investors and the rest of the financial community that they have viable products in the
pipeline. This process is aided through the use of third party physicians who write research
articles in prestigious medical journals to validate the claims made by these biotechnology
and pharmaceutical companies about their drugs. Additionally, the side effects of certain
medications can either thwart their development or cause them to be taken off the shelves
even after they have been approved by the FDA. Such has been the case with the highly
publicized problems Merck is facing with Vioxx, which was taken off the market in 2004
after being linked to an increased risk of heart attacks. Recently, The New England Journal of
Medicine published an unusual expression of concern regarding the fact that Merck had
excised data on patients suffering heart attacks from a crucial study on Vioxx published in
2000. This incident harks back to the early days of investor relations, when a
primary IR responsibility was to oversee a company’s credibility in the eyes of various public
constituencies (New England Journal of Medicine, Dec. 29, 2005).

Another recent example of less than full disclosure occurred with a study done in
2004 on GlaxoSmithKline’s Paxil. New York State Attorney General Eliot Spitzer charged
the company with “repeated and persistent fraud” for concealing problematic issues of
efficacy and safety when children took Paxil for depression (Wall Street Journal, December
30, 2005, page 5).

IV. CONCLUSION

The life science and pharmaceutical industry as a whole has encountered some major
setbacks in public opinion. In January 2006 the FDA indicated that new guidelines for
preliminary phases of drug testing might be considered. These new guidelines could reduce
the amount of mandatory testing done before giving experimental medicines to humans.
Such a reduction of early phase studies could lead to cost savings as well as shortened time
frames relating to arduous testing cycles.

While drug companies should not be held responsible for saving everyone afflicted with
disease, they most definitely should be held accountable for full disclosure of clinical trial
data prior to and after their products receive FDA approval. While the biotechnology
category is future-oriented, it is still subject to the founding tenets of IR practitioners. It is
imperative that these companies follow the mandates set forth by the SEC and the FDA so
that they are perceived to be credible by investors in particular and the public in general.

REFERENCES

A. BOOKS
Pincus, Theodore H. Investor Relations: A Strategic Approach. Prentice Hall, 1982.
Marcus, Bruce W., and Wallace, Sherwood Lee. New Dimensions in Investor Relations:
Competing for Capital in the 21st Century. Wiley, 1997.

B. ARTICLES
Sullivan & Cromwell. “Regulation FD – Practical Issues Raised by the SEC’s New Selective
Disclosure Rule”. September 7, 2000.

http://www.ffhsj.com/secreg/pdf/sc020730-2.pdf
http://slate.com/id/2122187
Forbes Magazine. September, 2004, page 37.
New England Journal of Medicine. December 29, 2005, Volume 353, Number 26.
Wall Street Journal. December 30, 2005.

GENETIC ENGINEERING, BIOTECHNOLOGY AND INDIAN
AGRICULTURE: IPR ISSUES IN FOCUS

Prabir Bagchi, SIMS, Ghaziabad


prabir.bagchi@rediffmail.com

ABSTRACT

Genetic engineering technologies attempt to create seeds that cannot reproduce
themselves and thus biologically control the complete enslavement of agriculture. The
terminator technology, which is not yet commercialized in India, has as its primary aim the
maximization of the seed industry’s profits by destroying the ability of farmers to save their
seeds and breed their own crops. The research shows that protection of seeds of non-basmati
rice by non-Indians will not reduce India’s market share of non-basmati rice in the
international market.

I. INTRODUCTION

Genetic engineering technologies attempt to create seeds that cannot reproduce
themselves and thus biologically control the complete enslavement of agriculture. The
terminator technology, which is not yet commercialised in India, has as its primary aim the
maximization of the seed industry’s profits by destroying the ability of farmers to save
their seeds and breed their own crops (Pray and Ramaswamy, 2002). Genetic seed sterilization
goes far beyond intellectual property (Cohen, 1999). A typical patent provides an exclusive
legal monopoly for 20 years, but terminator is a monopoly with no expiration date. It is the
perfect tool for the corporate industry in a global market because it destroys the concept of
national seed sovereignty. Another GE technology that is potentially more dangerous than the
terminator technology is genetic trait control. With genetic trait control, the
goal is to turn a plant’s genetic traits on or off with the application of an external chemical.
The battle for the control of seeds, of agriculture, and of food has clearly pushed the concept of
public good into the background. Given the vast economic power of the corporate
sector and the stagnant budgets for public research, not only has the latter been totally
marginalised, but the benefits of public-sector research are also being privatized through the
IPR regime (Rao and Gulati, 2004). Corporations claim patents on genetically engineered
products on the grounds of predictability of the behavior of the inserted gene.

II. RESEARCH METHODOLOGY

Hypothesis: Genetic contamination of seeds of rice by non-Indians will reduce India’s market
share in the international market.

After an in-depth analysis of the issues concerning the impact of IPR provisions on
agriculture, the questionnaire was sent for an initial screening to the National Council of
Agricultural Policy and Research, Pusa Road, New Delhi, and to the Indian Institute of
Science, Bangalore. After the first screening, the questionnaire was checked by Shri
Devendra Sharma, an agricultural expert and journalist in Delhi. In the third and final
stage, the questionnaire was finalized by Shri Biswajit Dhar, an international expert on IPR
issues presently working as chief of the WTO division at IIFT, New Delhi. Data were
collected through personal interviews, mail surveys, and telephonic interviews. The sample
consists of the following five segments:

1. Non Governmental Organization and Farmers Organisations
2. Agricultural Scientists
3. Professors and Academicians
4. Seed companies
5. Experts

NGOs/Farmers Organizations: For NGOs, the respondents were chosen carefully from the
directory of NGOs in India. All the NGOs were chosen from northern India because rice is
grown mainly in this region. For farmers organizations, the respondents were chosen from
the rice fields of Karnal and Palwal in the state of Haryana and Pilibhit and Ghaziabad in the
state of U.P. These two states, incidentally, are major rice-producing states of India.

Agricultural Scientists: Under the aegis of the Indian Council of Agricultural Research
(ICAR), the Seed Technology Division of the National Council of Agricultural Policy
Research has been doing research on seed varieties over the years. Most of the respondents
were chosen from this institute. Other respondents were chosen from ICRISAT, Pune, and
from the ICAR campus, Pusa Road, New Delhi.

Professors and Academicians: Professors were chosen from universities and institutes
where research is being done on agricultural issues and international business. Gobind
Ballabh Pant University, Pant Nagar; the Indian Institutes of Management, Bangalore and
Ahmedabad; the Indian Institute of Foreign Trade, New Delhi; RIS, New Delhi; and TERI,
New Delhi were the prominent institutions from which the respondents were chosen.

Seed Companies: Respondents were chosen mainly from the SeedQuest Yellow Pages, an
international organization that maintains a database of seed companies in India, and from
other organizations in Delhi.

Experts: Respondents were chosen mainly from institutions such as IGIDR, Mumbai and
Gujarat; the Indian Institute of Pulses Research, Kanpur; and the Indian Statistical Institute,
New Delhi.

III. DATA ANALYSIS

The Sample
Fifty respondents were chosen from each of the groups mentioned above. The entire
data set of 250 respondents was analyzed using the chi-square test. The chi-square test was
used because it verifies the degree of difference amongst the responses collected; as the
respondents come from various parts of the country, no other test seemed better suited. The
quantitative data have been put in the table and the subjective responses analyzed
thereunder, as per the following legend:
P – Professors F - Farmers / NGOS
E – Experts A -Agricultural scientists
S - Seed Companies

The following issues were raised with the respondents.

The freedom of local Indian farmers to use imported germplasm of rice seeds may be
constrained by breeders’ business interests, particularly in the case of export-oriented crops,
because of genetic contamination (Saxena and Dhillon, 2002). Exports of India’s basmati rice
abroad can be affected significantly. Contract farming will encourage development of seeds
along commercial lines on a large scale (Ghosh, 2003). Genetic contamination may affect
many small farmers, and small exporters of basmati rice from India may be affected.

In biotechnology, private-sector firms have emerged as technological leaders in a number of
important areas, and agricultural research centers have comparatively become minor players.
This may reduce their ability to provide technological support to very poor farmers in India.
For the country as a whole, increasing reliance on a narrow genetic range of crops can
limit the varieties of rice available for export. Given that both basmati and non-basmati rice
are grown in India, patenting of seeds by non-Indians abroad can adversely affect India’s
competitive position in markets abroad (Bagchi, Banerjee and Bhattacharya, 1984). Rice
seeds from harvested crops will be experimented upon and bred with locally adapted varieties
by plant breeders (Rangnekar, 2002); this may enrich the biodiversity of rice. Reduction of
agricultural subsidies by developed countries will provide a competitive advantage to Indian
exporters of basmati rice in the international market.

Smaller companies will find it increasingly difficult to compete because the market for seeds
is becoming fickle; plant variety protection will scotch a market in second and subsequent
generations of both open-pollinated and hybrid seeds (Gupta and Kumar, 2002). With IPRs in
seeds, the aspect of genetic pollution enters: quality-conscious buyers may turn down export
orders from India on quality grounds. Large-scale commercial farming by plant breeders may
tempt them to acquire larger land holdings, and small farmers producing for export may
therefore be hurt (Rao, 1997).
Testing of the Hypothesis
Hypothesis: Genetic contamination of seeds of rice by non-Indians will reduce India’s market
share in the international market.
Responses have been grouped in the format given below.

Categories                 Agree   Indifferent   Disagree   Grand Total

Seed Companies               304           111         85           500
Agricultural Scientists      245            75        180           500
Farmers/NGOs                 327            50        123           500
Experts                      319            79        102           500
Professors                   311            58        131           500
Grand Total                 1506           373        621          2500

O (observed)   E (expected)   (O - E)^2   (O - E)^2 / E

304            301                    9            0.02
245            301                    9            0.02
327            301                    9            0.02
319            301                    9            0.02
311            301                    9            0.02
111             75                 1296           17.28
 75             75                 1296           17.28
 50             75                 1296           17.28
 79             75                 1296           17.28
 58             75                 1296           17.28
 85            124                 1521           12.2
180            124                 1521           12.2
123            124                 1521           12.2
102            124                 1521           12.2
131            124                 1521           12.2
                                          Total: 147.5
v = (r - 1) x (c - 1) = 4 x 2 = 8
For v = 8, the table value of chi-square at the 0.05 level is 15.507.
The calculated value (147.5) is much higher than the table value.
The hypothesis stands null and void: protection of seeds of non-basmati rice by non-Indians
will not reduce India’s market share of non-basmati rice in the international market.
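The test above can be reproduced programmatically. The following is a minimal sketch (assuming Python with SciPy is available) that runs a chi-square test of independence on the tabulated responses; the degrees of freedom and the 0.05 critical value match the figures quoted above.

```python
from scipy.stats import chi2, chi2_contingency

# Observed responses by group (rows) and answer (Agree, Indifferent, Disagree)
observed = [
    [304, 111,  85],  # Seed Companies
    [245,  75, 180],  # Agricultural Scientists
    [327,  50, 123],  # Farmers/NGOs
    [319,  79, 102],  # Experts
    [311,  58, 131],  # Professors
]

stat, p_value, dof, expected = chi2_contingency(observed)
critical = chi2.ppf(0.95, dof)  # table value of chi-square at the 0.05 level

print(f"chi-square = {stat:.2f}, df = {dof}, critical value = {critical:.3f}")
```

With five groups and three response categories, df = (5 - 1)(3 - 1) = 8, and `chi2.ppf(0.95, 8)` returns the 15.507 critical value used in the text.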

IV. CONCLUSION

Although the research suggests that genetic contamination may not be dangerous in the
near future (Srinivasan, 2004), we still cannot take the situation casually. Organizations like
Monsanto are looking at the Indian market seriously. Indian research institutes must
step up their research plans to upgrade the quality of seeds. At the same time, the Indian
Government should look closely into the provisions concerning plant breeders’ rights,
farmers’ rights, and farmers’ privilege. To be on the safer side, farmers organizations, NGOs,
and government organizations should be involved to the hilt in the post-WTO scenario
concerning Indian agriculture.

REFERENCES

Bagchi, A.K., P. Banerjee, and P.K. Bhattacharya (1984). “Indian Patents and Its Relation to
Technological Development in India: A Preliminary Investigation.” Economic and
Political Weekly, 19(7): 287-304.
Cohen, J.I. (1999). “Managing Intellectual Property: Challenges and Responses for
Agricultural Research Institutes.” In J.I. Cohen (ed.), Agricultural Biotechnology and
the Poor: Addressing Research Program Needs and Policy Implications. Wallingford,
OXON: CAB International, 209-217.
Ghose, Janak Rana (2003). “The Right to Save Seed.” Study conducted while the author was
a Centre Intern with the International Development Research Centre (IDRC), Ottawa,
Canada, and Gene Campaign, New Delhi, India.
Gupta, Sanjeev, and Shiv Kumar (2002). “Protection of Plant Varieties and Indian
Agriculture.” Yojana, December.
Pray, C.E., B. Ramaswami, and T. Kelley (2001). “The Impact of Economic Reforms on
R&D by the Indian Seed Industry.” Food Policy, 26: 587-598.
Rangnekar, Dwijen (2005). “No Pills for Poor People? Understanding the Disembowelment
of India’s Patent Regime.” CSGR Working Paper No. 176/05, October.
Rao, C. Niranjan (1997). “Plant Variety Protection and Plant Biotechnology Patents: Options
for India.” Policy Paper No. 29, UNDP-funded Project LARGE, UNDP, New Delhi.

Rao, C.H.H., and A. Gulati (1994). Indian Agriculture: Emerging Perspectives and Policy
Issues. New Delhi: Indian Council of Agricultural Research, and Washington, D.C.:
International Food Policy Research Institute.
Saxena, S. and Dhillon, B. S., (2002) A critical appraisal of the Protection of Plant Varieties
and Farmers’ Rights Act 2001, India. NATP Trainers Training Jan 2002, Compilation
of Experts lecture notes,NBPGR, New Delhi, p.9.
Srinivasan, C.S., (2004)."Plant Variety Protection, Innovation and transferability: Some
Empirical Evidence." Review of Agricultural Economics, 28(4): 44520.

RESPONSE OF BUILDING COSTS TO UNEXPECTED CHANGES IN REAL
ECONOMIC ACTIVITY AND RISK

Bradley T. Ewing, Texas Tech University


bradley.ewing@ttu.edu

Daan Liang, Texas Tech University


daan.liang@ttu.edu

Mark A. Thompson, University of Arkansas-Little Rock


tmathompson@ualr.edu

ABSTRACT

The construction industry is a major driving force of the U.S. economy. As such, it is
important to identify major determinants of construction costs and to know how these costs
change over time. This study examines the response of a popular building cost index to
unexpected changes in economic activity and risk.

I. INTRODUCTION

Participants in the construction industry, including owners, construction managers,
contractors, and many government agencies, depend critically on projecting changes in
building costs. Analysis of building costs is used in many construction and engineering
applications, including life-cycle costing and benefit-cost analysis (Dahlen and Bolmsjo,
1996; Asiedu and Gu, 1998; Meiarashi et al., 2002; Hastak and Baim, 2001; USDOT,
FHWA, 2002; Williams, 2005). Improving our understanding of how material and labor
costs used in construction projects respond to unexpected changes in economic activity would
surely be beneficial. This research focuses on documenting the response of changes
in building costs to unexpected changes in real economic activity and risk.

II. DATA

Our analysis examines the response of the growth rate in building costs to unexpected
changes in real economic activity and a measure of corporate or business-related risk. The
sample period is January 1989-August 2005. The Building Cost Index (BCI) is maintained
by Engineering News Record (ENR), a subsidiary of McGraw Hill. The BCI uses the
following when constructing the index: 68.38 hours of skilled labor at the 20-city average of
bricklayers, carpenters and structural ironworkers rates, plus 25 cwt of standard structural
steel shapes at the mill price prior to 1996 and the fabricated 20-city price from 1996, plus
1.128 tons of portland cement at the 20-city price, plus 1,088 board-ft of 2 x 4 lumber at the
20-city price. The BCI is used by contractors around the United States to gauge the state of
construction costs. The growth rate in industrial production (IPGROWTH) is used to
measure changes in real economic activity. The spread between Baa and Aaa corporate bond
rates is used to proxy for corporate or business risk in the economy and is denoted BAA-
AAA (Ewing, 2002). As we focus on growth rates in BCI and IP, the usable sample period is
February 1989-August 2005 for a total of 199 observations. Furthermore, we seasonally
adjusted the BCI index before computing the growth rate, denoted BCGROWTH. Table 1
presents descriptive statistics for the variables used in this study. Growth in construction

costs has exceeded the growth rate of industrial production over the sample period by nearly
9 percent, but is generally more stable.

Table 1 Descriptive Statistics

BCGROWTH IPGROWTH BAA-AAA


Mean 2.889727 2.656315 0.833216
Median 1.801802 2.942136 0.790000
Maximum 38.30645 22.43439 1.410000
Minimum -10.95890 -14.54620 0.550000
Std. Dev. 5.986473 6.220264 0.211088
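To make the index construction above concrete, the sketch below evaluates the BCI's fixed market basket at invented component prices (the quantities are those stated above; the 20-city prices shown are purely hypothetical) and computes a simple month-over-month percent change of the kind underlying BCGROWTH. The authors' seasonal adjustment step is omitted.

```python
def bci_dollar_value(wage_per_hour, steel_per_cwt, cement_per_ton, lumber_per_board_ft):
    """Dollar value of the fixed ENR BCI market basket described above."""
    return (68.38 * wage_per_hour          # hours of skilled labor
            + 25 * steel_per_cwt           # cwt of standard structural steel shapes
            + 1.128 * cement_per_ton       # tons of portland cement
            + 1088 * lumber_per_board_ft)  # board-ft of 2 x 4 lumber

# Hypothetical 20-city average prices for two consecutive months
month1 = bci_dollar_value(40.00, 30.00, 90.00, 0.50)
month2 = bci_dollar_value(40.25, 30.10, 90.00, 0.51)

growth_pct = 100.0 * (month2 / month1 - 1.0)  # month-over-month growth, in percent
print(f"{month1:.2f} -> {month2:.2f} ({growth_pct:.3f}% growth)")
```

Because the quantities are fixed, any movement in the index reflects price changes in labor, steel, cement, or lumber alone.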

III. METHODOLOGY

We are interested in the response of changes in the BCI to shocks in the


macroeconomic variables. Vector autoregressive (VAR) models and innovation accounting
methods such as impulse response functions are ideal for this type of dynamic analysis. The
conventional impulse response method is often criticized because results are subject to the
“orthogonality assumption.” That is, they may differ markedly depending on the ordering of
the variables in the VAR (Lütkepohl, 1991). The generalized methodology of Pesaran and
Shin (1998) overcomes this problem and is not sensitive to the ordering of the variables in the
VAR.
The moving average representation of the VAR(m) model is given as zt = Ψ(L)vt,
where E(vtv′t) = Σv, so that shocks are contemporaneously correlated, and L is a polynomial
in the lag operator. The generalized impulse response of zi at horizon h to a unit (one standard
deviation) shock in zj is given by ψij,h = (σjj)^(-1/2) (e′i Ψh Σv ej), where σjj is the jth diagonal
element of Σv, ej is a selection vector with the jth element equal to one and all other elements
equal to zero, Ψh is the moving average coefficient matrix at horizon h. In contrast to
standard impulse responses, the generalized impulse responses from an innovation to the jth
variable are derived by applying a variable-specific Cholesky factor computed with the jth
variable at the top of the Cholesky ordering. Thus, the generalized responses are invariant to
any re-ordering of the variables in the VAR and provide more robust results than the
orthogonalized method. This method also allows for meaningful interpretation of the initial
impact response of each variable to shocks to any of the other variables.

IV. RESULTS AND DISCUSSION

Table 2 presents results from the estimation of the three-equation VAR. The order of
the VAR was chosen to be 3. Intuitively, the VAR is a reduced form model and can be
thought of as predicting the current value of BCGROWTH based on information available at
time t, in this case, past values of each of the variables. The expectation of BCGROWTH is
thus dependent solely on past observations of itself as well as past values of BAA-AAA and
IPGROWTH. Deviations from this expectation are seen in the error term. The results of the
BCGROWTH equation indicate that the current construction cost growth rate depends on
past values of itself (indicating some persistence in cost growth), on growth in real economic
activity, and on the corporate risk measure.

Table 2 VAR Results

                     BCGROWTH     IPGROWTH      BAA-AAA
BCGROWTH(-1)         0.357680     0.095276    -0.000708
                    [ 4.98660]   [ 1.23756]   [-0.86781]
BCGROWTH(-2)         0.040711    -0.055393     0.001193
                    [ 0.53080]   [-0.67289]   [ 1.36856]
BCGROWTH(-3)        -0.123866     0.044509    -0.000470
                    [-1.74987]   [ 0.58582]   [-0.58378]
IPGROWTH(-1)         0.088968     0.056169    -0.001325
                    [ 1.30841]   [ 0.76961]   [-1.71433]
IPGROWTH(-2)         0.108873     0.203771    -0.001361
                    [ 1.62078]   [ 2.82629]   [-1.78200]
IPGROWTH(-3)        -0.026803     0.172915    -0.000245
                    [-0.38994]   [ 2.34378]   [-0.31319]
BAA(-1)-AAA(-1)     -0.074345     5.759419     1.192809
                    [-0.01131]   [ 0.81654]   [ 15.9674]
BAA(-2)-AAA(-2)    -19.71009    -17.99375    -0.334085
                    [-1.97513]   [-1.67996]   [-2.94511]
BAA(-3)-AAA(-3)     20.71479     9.477533     0.059481
                    [ 3.25325]   [ 1.38676]   [ 0.82178]
Constant             0.882818     3.596950     0.076121
                    [ 0.44859]   [ 1.70287]   [ 3.40263]
Adj. R-squared       0.186290     0.126923     0.915726

t-statistics are given in brackets.
Of course, in any particular period, the error term in any of the three equations may
be, and usually is, nonzero. Our focus is on how and to what extent the growth rate in
construction costs depends on shocks to IPGROWTH and BAA-AAA.

Figure 1 presents the generalized impulse response functions generated from the VAR
model. A significant impulse response function provides useful information about future
values of the growth in BCI. The generalized impulse response functions provide
information about the response of construction cost changes (BCGROWTH) to unanticipated
changes in the macroeconomic variables. In particular, unexpected changes in the state
variables constitute "news" and, thus, the generalized impulse response functions show how
long and to what extent BCGROWTH reacts to unanticipated changes in real output
(IPGROWTH) and the corporate risk measure (BAA-AAA). Statistical significance is
determined by the use of confidence intervals representing plus/minus two standard
deviations. At points where the confidence bands do not straddle zero, the impulse response
is considered to be different from zero (Runkle, 1987). The horizon is in months and is
represented on the horizontal axis.

CHART 1 GENERALIZED IMPULSE RESPONSE FUNCTIONS

Response to Generalized One S.D. Innovations ± 2 S.E.
[Figure: three panels plotting, over horizons 1-10, the response of BCGROWTH to a
one-standard-deviation innovation in BCGROWTH (top), IPGROWTH (middle), and
BAA-AAA (bottom), each with ± 2 standard error bands.]

The top panel of Figure 1 indicates that a one standard deviation shock to
BCGROWTH persists for two months. Thus, project managers and cost estimators can rest
assured that growth in the BCI returns to a long run value (i.e., unconditional mean) relatively
quickly so that future, longer horizon projections of BCI based on simple time series models
should be fairly accurate.

The middle panel shows how BCGROWTH responds to a shock to IPGROWTH.


Note that the BCGROWTH rises only after about 2 months, becomes insignificantly different
from zero for a month, and then rises slightly again about four months after the shock. This
finding is consistent with macroeconomic models in which an unexpected increase in output
places inflation pressures on the economy. Specifically, as the BCI is comprised in large part
of skilled labor, it is likely that the rise in overall output and the corresponding increase in
BCI works in part through wage pressure effect. The lagged response is consistent with
sticky price and labor (e.g., union) contract models.

The bottom panel indicates a lagged BCGROWTH response to a risk shock of about 2
months. The response from an unexpected increase in macroeconomic business risk is
negative and persists for about 2 months. This finding is consistent with there being a decline
in demand for materials and labor when corporations are more likely to default on loans.

Generally speaking, the results presented in this paper are consistent with standard
macroeconomic theory. Unexpected increases in real economic activity place inflationary
pressure on building costs while increases in corporate risk depress the growth rate in
construction costs. The findings of this paper may help project managers and participants in
the construction industry better plan for unexpected changes in the economy. Moreover, the
results suggest that economic shocks have a relatively short-lived, and somewhat lagged,
impact on future changes in building costs.

REFERENCES

Asiedu, Y. and P. Gu. “Product Life Cycle Cost Analysis: State of the Art Review.” International
Journal of Production Research. 36, 1998, 883-908.
Dahlen, P. and G. Bolmsjo. “Life-cycle Cost Analysis of the Labor Factor.” International
Journal of Production Economics. 46-47, 1996 459-467.
Ewing, B. T. “Macroeconomic News and the Returns of Financial Companies,” Managerial and
Decision Economics. 23, 2002, 439-446.
Hastak, M. and E. Baim. “Risk Factors Affecting Management and Maintenance Cost of
Urban Infrastructure.” Journal of Infrastructure Systems. 7, 2001, 67-76.
Lütkepohl, H. Introduction to Multiple Time Series Analysis. Berlin: Springer-Verlag, 1991.
Meiarashi, S., I. Nishizak, and T. Kishima. “Life-Cycle Cost of All-Composite Suspension
Bridge.” Journal of Composites for Construction. 6, 2002, 206-214.
Pesaran, M. H. and Y. Shin. "Generalized Impulse Response Analysis in Linear Multivariate
Models." Economics Letters. 58, 1998, 17-29.
Runkle, D. E. “Vector Autoregressions and Reality.” Journal of Business and Economic
Statistics. 5, 1987, 437-442.
United States Department of Transportation, Federal Highway Administration, Office of
Asset Management. Life-Cycle Cost Primer. 2002.
Williams, T. P. “Bidding Ratios to Predict Highway Project Costs.” Engineering Construction
and Architectural Management. 12, 2005, 38-51.

CHAPTER 8

ENTREPRENEURSHIP/SMALL BUSINESS

AN ANALYSIS OF FUNDING SOURCES FOR ENTREPRENEURSHIP IN THE
BIOTECHNOLOGY INDUSTRY

Sumaria Mohan-Neill, Roosevelt University, Chicago, IL


smohan@roosevelt.edu
Michael Scholle, Biosciences Division, Argonne National Laboratory
mscholle@anl.gov

ABSTRACT

Entrepreneurship and innovations have driven the recent phenomenal growth in the
biotechnology industry. However, due to the very sophisticated technologies and processes
required to accomplish innovations, biotechnology is a very capital-intensive industry, and
sources of human capital and financing are critical components for survival and eventual
success of companies. This paper explores recent secondary data on the global biotechnology
industry, and it also presents data from a primary data collection project on sources of
funding.

I. INTRODUCTION

Table I gives an overview of the global distribution of biotechnology companies. A
recent Ernst and Young (2005) report reveals that the U.S. has the largest number of publicly
held firms (330), while Europe has the largest number of private firms (1,717).

TABLE I. Overview of Global Distribution of Biotechnology Companies

NUMBER OF COMPANIES   GLOBAL     US    EUROPE   CANADA   ASIA-PACIFIC
Public Companies         641    330        98       82            131
Private Companies      3,775  1,114     1,717      390            554
Total                  4,416  1,444     1,815      472            685

The revenues of the global biotechnology industry continue to grow, showing a 14
percent increase during 2004. The US continued to dominate the industry with over 78
percent of total revenues. The global industry raised a total of $21.2 billion in capital during
2004; in the US alone, almost $17 billion was raised (Table II). Total biotechnology
financing has increased for each of the last four years, with 2004 funding eclipsed only by
the bull market of 2000 (Ernst and Young 2005).

Table II. Financing Breakdown of U.S. Biotechnology Companies

Yearly US Biotechnology
Financing ($M)             2004    2003   2002   2001    2000   1999   1998
Initial Public Offering   1,618     448    456    208     685    260    260
Follow-on Financing       2,846   2,825    838  1,695  14,964  3,680    500
Other Sources             8,964   8,306  5,242  3,635   9,987  2,969    787
Venture Capital           3,551   2,826  2,164  2,392   2,773  1,435  1,219
Total                    16,979  14,405  8,699  7,930  32,722  8,769  2,766

II. BACKGROUND

Niosi (2003) surveyed 60 Canadian biotechnology firms and found that “Access to
Capital” was the number one perceived obstacle to growth. Coombs and Deeds (2000)
indicate the importance of the capital markets and venture capital, but focus on alliances and
direct investment from large pharmaceutical companies (big pharma), both domestically and
internationally. Duca and Yucel (2002) report on the importance of venture capital to the
biotech industry, but also indicate the important role of public funding in this area. Dibner
and Howell (2001) looked at the various forms of funding biotechnology firms exploit during
their growth and the importance of venture capital firms versus initial public offerings.
Powell et al. (2002) looked at how close physical access to funding sources drives the
geographical location of biotechnology firms. Friedman and Seline (2005) discuss how
private and public funding sources limit collaboration between biotechnology firms. Much
earlier, Paugh and Lafrance (1997) discussed sources of funding for biotechnology
companies. Included in their list were public equity offerings, partnerships with other
companies (both big pharma and biotechnology), and venture capital. The US Department of
Commerce (2003), in its survey of barriers to competitiveness in the biotech industry, found
that access to capital ranked third behind the regulatory approval process and R&D costs.

More recently, Levinson, chairman and CEO of Genentech Inc., points out the decline
in the profitability of big pharma in recent years and argues for the R&D efficiency of biotech
firms (Ernst and Young 2005). He states, “In 21 out of the last 25 years large pharma was the
most profitable industry on Fortune’s ‘most profitable’ industry list.” He goes on to point
out that this has not been the case in recent years. He uses the number of new molecular
entities (NMEs) produced by companies relative to R&D spending to highlight the efficiency
of biotech’s R&D relative to that of big pharma. In 2003 the biotech industry surpassed big
pharma in the number of NMEs. In 2004 big pharma is estimated to have spent over $50
billion on R&D, compared to an estimated $20 billion for biotech (Ernst and Young 2005).
In 2005 an estimated 35 new products with a sales potential of at least $150 million each
will hit the market; of these, 20 are expected to come out of biotech R&D. This disparity in
R&D efficiency between the big pharma and biotech industries is also highlighted by
Moses et al. (2005).

The stock market also seems to reflect investors’ sentiment concerning the disparity.
Figure I illustrates a two-year comparison of the performance of the Pharmaceutical (^DRG),
NASDAQ (^IXIC) and Biotechnology (^BTK) indices. Clearly, the BTK has outperformed
both the NASDAQ and the DRG indices. The indices demonstrate that the capital markets
have been an important source of funding for public biotech firms, and that these firms have
attracted more interest and investment relative to the NASDAQ or DRG firms in recent
years. However, an important funding issue relates to “private” biotech firms, which do not
yet have access to the public capital markets. What are their perceptions of the sources of
funding available to them? This study explores sources of funding for both private and
public companies. The Ernst and Young Global Biotechnology Report (2005) provides a
wealth of information based on industry statistics and the opinions of top industry executives
and experts in biotech investment and funding. However, as expected, its focus is often on
larger companies with greater financial potential. It is understandable why “top-tier”
companies are of greater interest to venture capitalists and fund managers. The current study,
while modest in scope, has the advantage of avoiding this “top-tier” company bias: during the
primary data collection we sought to obtain a much broader sample base.

Figure I. Two-Year Comparison of Pharmaceutical (^DRG), NASDAQ (^IXIC),
and Biotechnology (^BTK) Indices

III. RESEARCH METHODOLOGY

Secondary data analysis and in-depth interviews were the two methods used to
conduct exploratory research. The main source of secondary data was Beyond Borders: The
Global Biotechnology Report 2005 (Ernst and Young 2005). The primary data was obtained
by interviewing senior executives of selected private biotechnology companies. Based on the
results of these interviews, a structured survey was developed for a descriptive research
design. The structured survey was posted on the university’s web site. An email distribution
list was developed to include executives and scientists from both public and private
companies. Electronic mailings were sent out to the sample with a link to the survey site.
There were a total of 48 responses. Both SPSS and Excel were used for data analysis. The
sample was predominantly US-based, with a small percentage from Europe. Approximately
60 percent of respondents were with private companies and about 40 percent were with
public companies.

IV. RESULTS OF DESCRIPTIVE RESEARCH

Data was collected on a number of important issues facing the biotech industry;
however, the main focus of this paper is funding. Respondents were given a list of six
sources of funding (developed from the exploratory research process), plus an “Other”
category, and were asked to rate the importance of each funding source to their company. A
five-point rating scale was used, with 1 = not important and 5 = extremely important. The six
sources of funding were (1) Venture Capital, (2) Private Investors, (3) Founder Investment,
(4) Revenue, (5) Licensing, and (6) Stock Equity. Figure II ranks the sources of funding
based on three measures of central tendency: mean, median, and mode. Based on Figure II,
Revenue is the most important source of funding in the sample, scoring an average of 4 on
the five-point scale. Stock Equity is the second most important source of funding. Founder
Investment is ranked third, Private Investors fourth, Licensing fifth, and Venture Capital
sixth.

FIGURE II. COMPARISON OF RELATIVE IMPORTANCE OF SOURCES OF
FUNDING FOR OVERALL SAMPLE BASED ON CENTRAL TENDENCY

SOURCE               MEAN   MEDIAN   MODE
Revenue              4.00    5.00    5.00
Stock Equity         2.72    3.00    1.00
Founder Investment   2.66    2.00    1.00
Private Investors    2.61    2.00    1.00
Licensing            2.61    2.00    1.00
Venture Capital      2.43    1.00    1.00
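The central-tendency rankings in Figure II can be reproduced from raw five-point ratings with Python’s standard statistics module. The sketch below uses hypothetical ratings for two sources (the paper’s individual survey responses are not published), chosen so the means match the reported 4.00 for Revenue and 2.43 for Venture Capital.

```python
from statistics import mean, median, mode

# Hypothetical five-point ratings (1 = not important, 5 = extremely important).
# These are illustrative stand-ins, not the paper's actual survey responses.
ratings = {
    "Revenue":         [5, 5, 4, 5, 3, 2, 4],
    "Venture Capital": [1, 1, 2, 5, 1, 4, 3],
}

# Summarize each source by the three measures used in Figure II.
summary = {src: (round(mean(r), 2), median(r), mode(r))
           for src, r in ratings.items()}

# Rank sources by mean rating, highest first.
ranked = sorted(summary, key=lambda s: summary[s][0], reverse=True)
print(ranked)  # → ['Revenue', 'Venture Capital']
```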

Figure III illustrates the relative importance of sources of funding based on the
extremes of the five-point scale. Revenue is again ranked number one because sixty-three
percent of firms rated it as extremely important. The second most important source is
Founder Investment (29.5 percent rated it as extremely important), followed by Stock Equity
(23.3 percent), Venture Capital (22.7 percent), and Private Investors (18.2 percent).
Licensing is the lowest ranked (sixth) at 13.6 percent.

FIGURE III. RELATIVE IMPORTANCE OF SOURCES OF FUNDING
(% OF FIRMS)

SOURCE                Revenue   Founder   Stock   Venture   Private   Licensing
Not important           17.4      47.7     37.2     54.5      38.6      31.8
Extremely important     63.0      29.5     23.3     22.7      18.2      13.6

Table III summarizes the differences between public and private companies in their
assessment of sources of funding. Revenue has the highest importance rating for both public
(mean = 4.12) and private (mean = 3.89) companies, but there is no statistically significant
difference between them. There are also no statistically significant differences in ratings of
the importance of Venture Capital, Private Investors, or Founder Investment (t values < 1.64).
However, public companies rate Licensing as having greater importance than do private
companies (t = 1.71). Stock Equity is also rated significantly higher by public companies;
this accounts for the largest difference between public and private companies (t = 4.93).

Table III. Relative Importance of Sources of Funding and Comparison of Differences
Based on Mean Values

SOURCES OF FUNDING    PUBLIC CO.   PRIVATE CO.   T VALUE   SIG.
Revenue                  4.12          3.89         0.47    n.s.
Stock Equity ****        4.00          2.00         4.93    SIG
Licensing **             3.06          2.30         1.71    SIG
Private Investors        2.73          2.50         0.48    n.s.
Founder Investment       2.07          2.89        -1.58    n.s.
Venture Capital          2.00          2.63        -1.19    n.s.
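The public-versus-private comparisons in Table III rest on two-sample t statistics. The paper does not state which t-test variant was used, so the sketch below computes a Welch (unequal-variances) t statistic on hypothetical rating data, using only the Python standard library.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    va, vb = variance(a), variance(b)  # sample variances
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical Stock Equity ratings for public vs. private firms
# (illustrative only; not the paper's data).
public_firms  = [5, 4, 4, 3, 4]
private_firms = [2, 1, 3, 2, 2]

t = welch_t(public_firms, private_firms)
print(round(t, 2))  # → 4.47
```

A t value beyond roughly ±1.64 is the benchmark the paper uses for significance, so a statistic of this size would mark the difference as significant.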

Even though Founder Investment falls below the benchmark for statistical significance
at the 95.5% level (t = -1.58), it is interesting to note that private companies view it as a more
important source of funding than do public companies. It is actually ranked as the second
most important source of funding for private companies. Based on anecdotal evidence, it is
believed that serial biotech entrepreneurs are an important source of funding for new firms.
They found companies that hit financial jackpots when those companies go public; they then
take some of their capital gains from the now-public companies to fund new start-up firms.
This is a vital source of capital for start-up companies because VCs and private equity firms
are now more risk-averse and more cautious before they commit capital to firms, since they
want a much faster return on their investments. As a consequence, VCs are courting firms
that are much further along the development process, and so newer and start-up firms can no
longer attract investment from these outside sources.

Another potential source of funding for the many firms without Amgen or Genentech
potential is the merger and acquisition (M&A) path. In recent years, more technology is
being acquired to supplement the product pipelines within larger firms, so many predict a
continued increase in M&A activity involving pharmaceutical and biotech companies, and
within the biotech sector itself (including both public and private firms). The recent
acquisition of Abgenix (ABGX) by Amgen (AMGN) is an illustration of what may lie ahead
for smaller biotechnology companies with promising drug pipelines. Amgen, the second
largest biotech company, chose to acquire Abgenix in a $2.2 billion deal. Following positive
phase 3 clinical trials for late-stage colorectal cancer therapies, biotech giant Amgen decided
that rather than sharing 50% of the profits with its partner Abgenix, it was better to buy the
company and keep 100% of the profits. The lingering question is whether an increase in
M&A activity is a two-edged sword. Does M&A activity involving entrepreneurial biotech
firms simultaneously provide financial support while leading to a reduction in technological
innovation by firms? Is the serial entrepreneur/founder investor needed as a critical
contributor and catalyst for nurturing entrepreneurship and innovation within firms?

REFERENCES

Coombs, Joseph E. and Deeds, David L. “International Alliances as Sources of Capital:
Evidence from the Biotechnology Industry.” The Journal of High Technology
Management Research, 11(2), 2000, 235-253.
Dibner, Mark D. and Howell, Michael. “Finding Funding in Biotechnology: Keeping the
Duca, John V. and Yucel, Mine K. “An Overview of Science and Cents: Ex-
ploring the Economics of Biotechnology.” Federal Reserve Bank of Dallas
Economic and Financial Policy Review, 1(3), 2002. URL:
http://dallasfedreview.org/articles/v01_n03_a01.html
Ernst and Young “Beyond Borders: The Global Biotechnology Report.” 2005.
Friedman, Yali E. and Seline, Richard S. “Cross-Border Biotech.” Nature Biotechnology,
23(6), 2005, 656-657.
Moses, Hamilton, Dorsey, E. Ray, Matheson, David H., and Thier, Samuel O. “Financial
Anatomy of Biomedical Research.” JAMA: Journal of the American Medical
Association, 294(11), 2005, 1333-1342.
Niosi, Jorge. “Alliances are not Enough: Explaining Rapid Growth in Biotechnology Firms.”
Research Policy, 32, 2003, 737-750.
Paugh, Jon and Lafrance, John C. “Meeting the Challenge: U.S. Industry Faces the 21st
Century: The U.S. Biotechnology Industry.” U.S. Department of Commerce, Office of
Technology Policy, 1997.
Powell, Walter W., Koput, Kenneth W., Bowie, James I., and Smith-Doerr, Laurel “The
Spatial Clustering of Science and Capital: Accounting for Biotech Firm-Venture
Capital Relationships.” Regional Studies, 36(3), 2002, 291-305.
US Department of Commerce “A Survey of Uses of Biotechnology in US Industry.” 2003,
90-93.
Acknowledgement: The authors would like to thank the industry executives who participated
in the survey, Tim Hopkins and Maureen Doyle (Roosevelt University), and Dr. Paul
H. Neill (USG Corp.) for their assistance with this project.

THE IMPACT OF TEAM DESIGN ON TEAM EFFECTIVENESS

Lawrence E. Zeff, University of Detroit Mercy
zeffl@udmercy.edu

Mary A. Higby, University of Detroit Mercy
higbym@udmercy.edu

ABSTRACT

This paper reports on field research comparing the initial and on-going designs of
work groups in two different organizations. Four components (task structure, group
boundaries, norms, and authority) were specifically compared. Teams designed within the
context of these four components were much more effective than work groups designed
without consideration of them. This field study therefore supports earlier results that design
activities have a positive impact on team performance and project outcome. In addition,
appropriate design activities can result in stronger team self-management.

I. INTRODUCTION

The teams literature describes the positive impact that teams have on productivity, the
conditions under which teams are successful, and the factors which lead to team success.
Recent empirical studies suggest that well-designed teams tend to be more effective than
work groups that do not account for critical design factors. This paper reports the findings of
a field research study that tested components of team design and their impact on team
effectiveness. In particular, we used the framework established by Hughes, Ginnett and
Curphy (2006) and the initial research by Hackman (1987) and Wageman (2001), which
provided the basis for four critical components.

II. REVIEW OF LITERATURE: TEAMS AND TEAM EFFECTIVENESS

Katzenbach and Smith (1993) establish a working definition: “A team is a small
number of people with complementary skills who are committed to a common purpose, set of
performance goals, and approach for which they hold themselves mutually accountable”
(p. 112). Team members hold themselves to be mutually accountable. Teams have a sense of
shared purpose (Katzenbach and Smith, 1993), and the team's purpose is jointly determined
and planned with management (Zenger and Associates, 1994). Teams have a leadership role
shared by team members (Katzenbach and Smith, 1993). Katz (1997) describes a
high-performing team as one that is empowered, self-directed, and cross-functional, with
complementary skills. In addition, team members are committed to working together and
achieving their agreed-upon common goal. Teams have collective work products requiring
joint contributions of members (Katzenbach and Smith, 1993) and can perform at higher
levels as the result of synergy arising from collaboration and jointly produced outputs
(Katz, 1997).

Another critical skill set in determining the success of teams, and often more critical
for new product development teams, is political skill. These skills include the ability to gain
support from key areas outside of the team, to gain acceptance of the team’s output, to gather
required resources which allow the team to work towards its goal, and to protect the team
against external threats and overcome obstacles in the team’s path. Likewise, internal
political skills are required of team members to confront and overcome conflict issues as they
arise. An agreed-upon conflict resolution process is necessary to provide opportunities for
intra-team cooperation and high performance levels (Katz, 1997).

Interpersonal skills comprise the sports analogy of team chemistry and are the
necessary component to allow for synergy. Synergy requires people to willingly and openly
share ideas, comments and criticism. Open communication and concentration on informal
networks separate merely technically skilled teams from high-performing ones. Katz (1997)
identifies effective communication as a characteristic usually associated with
high-performing teams, while Katzenbach and Smith (1993) suggest teams encourage
open-ended discussion and active problem solving.

Hackman’s (1987) conceptual model for work-team effectiveness identified four
general conditions necessary to facilitate team effectiveness: (1) a real team, (2) clear
direction, (3) an enabling team structure, and (4) a supportive organizational context. As
suggested above, a real team must have a common purpose, set of performance goals, and
approach for which members hold themselves mutually accountable (Katzenbach and Smith,
1993). Wageman (2001) states that real teams are a bounded social system with clear
membership that is reasonably stable over time. Teams in many organizations may not meet
this condition. The extent to which a team’s purpose is clearly stated, rather than focused on
achieving a short-term goal, is important for team commitment and effectiveness (Wageman,
2001). An enabling structure includes five basic design features: appropriate size; optimal
skill diversity, with sufficient differences but not so many as to impede coordination
(Ancona and Caldwell, 1992; Ray, Zuckerman and McEvily, 2004; Thompson and
Brajkovich, 2003); task interdependence, so members are dependent upon one another to
accomplish the overall work product (Wageman, 1995); challenging task goals with “stretch”
performance targets (Katzenbach and Smith, 1993); and articulated strategy norms
(Hackman, 1990). Hackman’s (1987) fourth condition was a supportive organizational
environment, where there is a reward system that rewards team performance (Cohen and
Bailey, 1997; Wageman, 1995); an information system that provides members with the data
required to competently perform their work; an educational system to provide training or
technical consultation (Edmondson, 2003); and the material resources necessary to complete
the work product.

A normative approach to team effectiveness has been used by Hackman (1990) and
Ginnett (1993), as cited in Hughes, Ginnett and Curphy (2006). This approach is also used in
a comprehensive research study by Wageman (2001), in which team design was found to be
associated with leader behavior, team self-management, and overall team performance.
Hughes, Ginnett and Curphy (2006) present a model of four components of team design that
have a positive impact on team effectiveness: (1) task structure (the task is known to the team
and the team has sufficient autonomy to perform it); (2) group boundaries (team size, skill
set, and diversity are appropriate for the task); (3) norms (members share norms for team
functioning); and (4) authority (an appropriate climate is established by the leader to fit
situational demands). The present paper reports on the impact of team design on team
effectiveness in an engineering firm and a computer software organization.

III. FIELD RESEARCH

Information was gathered from two different organizations by interviewing people on
the initial design of teams and the modifications to these work groups over time.
Performance measures were compared so conclusions could be drawn from the differences in
team/group formation in the two companies. While the two companies were non-competing,
one in computer software design and development and the other in engineering design and
consultation, both required similar creative problem-solving approaches to fulfill client
needs.

In the software company, teams were purposefully formed by identifying skill sets
and key roles necessary to create and develop software packages. All four variables
presented by Hughes, Ginnett and Curphy (2006) were purposefully included in the team
design phase. Task structure was the basis for team composition and membership. Top
management’s leadership style was inclusive and managers were concerned with leadership
development for all organizational members. Organizational culture encouraged open
questioning to ensure full understanding of organizational mission and task requirements.
Tasks were ambiguous by nature but unambiguous with respect to expectations and
performance criteria. Members perceived a climate conducive to open interaction and
commitment to both team and organizational members. Empowerment seemed to be an
everyday occurrence. As a result, team members were readily delegated autonomy necessary
to creatively solve problems that occurred and team members accepted leadership
opportunities.

Group boundary issues were openly considered. Skill set requirements were the basis
for determining team membership. Size was carefully controlled to ensure both efficient and
effective project completion. The organizational culture encouraged creative
problem-solving, highly cohesive teams, open communication, and effective conflict
resolution. An inordinate amount of time was spent on interpersonal skills to ensure member
interaction would be both positive and effective. Thus, both technical and interpersonal
skills were included in team design and seemed to have a positive impact on team output and
performance.

As Hughes, Ginnett and Curphy (2006) suggest, team norms came from all three of
the possible sources. Team norms can: “(a) be imported from the organization existing
outside the team, (b) be instituted and reinforced by the leader or leaders of the team, or (c)
be developed by the team itself as the situation demands.” Importing norms from the larger
organization was a natural outcome of the culture since team members were highly
committed to the team and the organization; empowerment was an organizational norm; and,
team members readily created norms as required for task performance. Leadership styles of
top managers helped infuse team members with performance norms. All three of these
sources, therefore, were very apparent in our interviews with organizational members. High
cohesiveness levels further support the purposeful design of team norms in this whole process
of team development.

The last component of team design found to have a positive impact on team
performance is authority. In the computer software company, teams were empowered to
create and make their own work rules, with the authority necessary to collect needed
information and establish work processes to complete projects. The nature of client demands
required the flexibility and authority to respond immediately to changing situations.
Authority did not appear to ever become an issue of concern to either team members or top
management.

265
All parties concerned with the projects responded to changing demands in a way that
appeared to be very effective. Performance outcomes and measurements support this
conclusion.

Team/group development and design in the engineering company took on a very
different character. While the task structure was unambiguous, work groups had narrowly
defined constraints within which they were required to complete their tasks. There was little
if any autonomy in completing assigned projects, as managers determined what to do and
how to do it. When client demands changed an element of a project, all changes had to be
approved by upper management. With respect to group boundaries, managers constructed
group membership by deciding which members they wanted to work together, ignoring
issues of interpersonal skills that might be required to enhance group performance.
Moreover, membership was dependent upon the nature and duration of a particular task; that
is, groups were formed exclusively to complete a particular project and were immediately
disbanded upon completion. Group norms were discouraged. Instead, managers established
company-wide norms and tried to gain commitment to these. As a result, group cohesiveness
was low and group boundaries were ill-defined. Managers perceived task performance to be
more effective when group members did not form an attachment to each other. Rather,
commitment to the organization was described as more critical to mission accomplishment.

In the computer software company, because the norms of the team were consistent
with the culture of the organization, team members seemed to be loyal and committed to both
their team members and the organization. Authority differences were virtually non-existent
within the context of the workgroup. Leadership styles of management personnel naturally
supported self-managed teams within the context of true empowerment. These styles were
readily emulated by team members and were particularly apparent when the leadership role
rotated across team members.

The leadership style at the engineering design firm was more authoritarian and, as was
true of the computer software company, the top leadership style permeated the workgroups.
This was most evident in that a top manager typically directed each of the project groups he
created. Members did not feel they could question management decisions and eventually lost
the willingness to do so. No real empowerment took place, and group members had no real
flexibility to modify task assignments or how to carry them out.

Team effectiveness at the computer software company was rated high along a number
of dimensions. Internally, member satisfaction was very high, as we would expect given the
high level of team cohesiveness and autonomy. In addition, client satisfaction was very high,
both with the specific team output and with the ability to interact and modify task
requirements during the course of a given project. Clients appreciated the flexibility of these
teams and the willingness of team members to undo, redo, or modify existing project
modules. Moreover, company management was delighted with how performance goals and
expectations were fully met and often surpassed. Since client and management feedback was
often provided directly to team members, everybody had the opportunity to gain
psychological rewards on the job.

Performance results at the engineering firm were dramatically different. Member
satisfaction was not very high, and commitment was to the company rather than the group.
Lack of autonomy almost numbed members with respect to the intrinsic satisfaction of the
task at hand. While group members were willing to work hard and long hours, they did so
within the context of job performance and the monetary rewards to be gained (extrinsic
motivation)
rather than the satisfaction derived from intrinsic motivation of task accomplishment. Client
responses suggested work performance was adequate and minimally met expectations.
Likewise, management performance expectations were also met but rarely surpassed. Thus,
while all goals and expectations were met, there was little desire to go beyond a minimally
acceptable performance level. Faced with increasing competition, however, organizational
prospects were actually decreasing. Adequacy of performance suggested significantly lower
outcome results than found in the more effectively designed teams of the computer software
company.

Member behavior, while not an immediate concern of this investigation, showed
dramatic differences between these two companies. At the engineering design firm, group
members performed in an almost robot-like manner. This seems consistent with an
organizational culture that did not support empowerment and a leadership style that did not
readily delegate authority to groups completing assigned projects. This is in stark contrast to
the culture created within the computer software company, where team members were
animated from the time they walked into work. They never used the clock to cut off creative
interactions, had friendlier, more open relationships with team members, and relished the
opportunities created by full empowerment. We could not determine which was cause and
which was effect from the data collected.

Performance differences found in these two companies cannot be explained by
differences in corporate missions or industries. Management styles were very different,
which resulted in cultural and climate differences. Whether team/group effectiveness helps
create leadership behavior and/or organizational culture, or is instead created by behavior
and culture, is beyond the scope of this research. Other studies have found engineering
design firms that were highly effective based on appropriate initial team designs (e.g.,
Whiteley, 1994).

IV. CONCLUSIONS

Results from this field study strongly support Wageman’s (2001) conclusion that
initial team design has a positive impact on team task performance. A great deal of effort
was put into team design in the computer software company. Moreover, the specific design
was appropriate for self-managed teams and ultimately for team performance. All four of the
design components described by Hughes, Ginnett and Curphy (2006) were indeed present in
this firm. We could not, however, confirm that every component was purposely included in
team design; instead there seemed to be a conscious effort to create an appropriate
organizational culture to support team development and performance. This culture actually
contained all four of these components, or at least supported each of them. In any case, team
design seemed to lead to higher team performance and effectiveness in the computer
software company.

Group design in the engineering company lacked elements of each of the four
components tied to group effectiveness. For example, while the task structure included an
unambiguous project assignment for group members, there was not sufficient autonomy to
complete it. Likewise, group members initially had the technical skills to complete assigned
projects, and team size seemed to be appropriate. Interpersonal skills, however, were clearly
lacking, perhaps purposefully so. Moreover, when task requirements changed as a result of
client concerns, skill sets did not include the abilities to respond to and solve these problems.
In addition, any interpersonal issues were beyond the scope of members’ interpersonal skills.
As noted earlier, group norms were purposefully discouraged in favor of broader corporate
norms. Finally, with respect to authority, members perceived a climate that prevented
leadership within the group or the appropriate authority to effectively handle any
modification in the initial project definition.

It seems clear to us that the differences in characteristics of work teams in the
computer software company and the workgroups of the engineering firm are quite significant.
Moreover, the design requirements as reported in the literature cited in this paper are strongly
replicated in this field research, further supporting the impact of work team/group design.
The impact of work team/group differences is left to future research.

REFERENCES

Ancona, D. G. and Caldwell, D. F. “Demography and Design: Predictors of New Product
Team Performance.” Organization Science, 3, 1992, 321-341.
Cohen, S. & Bailey, D. “What Makes Teams Work: Group Effectiveness Research from the
Shop Floor to the Executive Suite.” Journal of Management, 23, 1997, 239-290.
Edmondson, A.C. “Speaking Up in the Operating Room: How Team Leaders Promote
Learning in Interdisciplinary Action Teams.” Journal of Management Studies, 40,
(6), 2003, 1419-1452.
Hackman, J. R. “The Design of Work Teams.” In Handbook of Organizational Behavior,
J.W. Lorsch, editor. Englewood Cliffs: Prentice-Hall,1987, 287-292.
Hughes, R., Ginnett, R. and Curphy, G. Leadership: Enhancing the Lessons of Experience.
Englewood Cliffs, NJ: Prentice-Hall, 2006.
Katz, R. “How a Team at Digital Equipment Designed the 'Alpha' Chip.” In The Human Side
of Managing Technological Innovation, R. Katz (Ed.). New York: Oxford University
Press, 1997, 137-148.
Katzenbach, J. R. and Smith, D. K. “The Discipline of Teams.” Harvard Business Review,
71, (March-April), 1993, 111-146.
Reagans, R., Zuckerman, E., and McEvily, B. “How to Make the Team: Social Networks vs.
Demography as Criteria for Designing Effective Teams.” Administrative Science
Quarterly, 49, (1), 2004, 101-133.
Thompson, L. and Brajkovich, L.F. “Improving the Creativity of Organizational Work
Groups.” Academy of Management Executive, 17, (1), 2003, 96-109.
Wageman, R. “How Leaders Foster Self-managing Team Effectiveness: Design Choices
versus Hands-on Coaching.” Organization Science, 12, (5), 2001, 559-577.
Wageman, R. “Interdependence and Group Effectiveness.” Administrative Science
Quarterly, 40, 1995, 145-180.
Whiteley, R. “Building a Customer-driven Company - The Saturn Story,” Managing Service
Quality, 4, (5), 1994, 16-20.
Zenger, John H. and Associates. Leading Teams. New York: McGraw-Hill, 1994.

STRATEGIES IN STARTING YOUR OWN BUSINESS

Omid Nodoushani, Southern Connecticut State University
nodoushanio1@southernct.edu

Julie Brander, Gateway Community College
jbrander@aol.com

Patricia Nodoushani, University of Hartford
nodoushani@hartford.edu

ABSTRACT

This paper provides an overview of how to start a business and succeed at it. The
information is helpful in exploring and formulating a business idea, and it is useful for
entrepreneurs who have an idea that needs to be developed and structured. It will allow you
to focus on, think about, and develop the dream of owning a business. The paper addresses
the essential, thought-provoking aspects of how to start a business and the process of how to
communicate effectively.

I. INTRODUCTION

This paper begins by evaluating the initial business idea and the questions that will
determine if the idea is feasible. Key areas of importance when starting a business are
determining whether you are an entrepreneur and identifying the characteristics that lead to
success. Having an idea with a vision and a mission statement will communicate clearly the
purpose of the business. Marketing the business begins with an effective name, a business
card, and a plan for how to get customers to buy. The financial plan will determine how
much money is needed to start the business and what it will take to run the business
profitably. The professional team is essential in formulating the structure and systems.

The beginning stages of a business are crucial in determining if in fact this business
will succeed. One must also consider if the dedication and desire is strong enough to pursue
owning a business.

Starting a business is a major life event that takes persistence, passion, motivation,
desire, initiative and knowledge. It involves a lot of planning, research and skill. As times
have changed, there is currently no job security. Statistics show that more and more people
are starting their own businesses. Corporate training gives those who have been downsized
the expertise, and owning a business allows them to control their own destiny.

Thinking about the business strategy, direction, what the business is, who the
customers are and why customers will do business with you are questions that need to be
answered. This is indeed the challenge. How can you differentiate your business from the
wide range of competition? How can you create excellent customer relationships and
customer service? The business idea can be formulated with the information presented in this
paper. Each step is part of the overall business plan and will bring the entrepreneur closer to
developing a feasible business. Business is not an exact science and it can change with time.
It is important to understand trends, the economy, society, and the needs of your
customers. What will make your business a success? It begins with the business idea, the
marketing plan, the financial plan, and the team of professional advisors.

Business flexibility and spontaneity must be realized, as not every idea works and
alternative strategies have to be considered. Having many different ideas on the business and
how to position the business is essential. The ability to think broadly and well outside the
expected norms will be helpful, as there are customers in many markets. Talent, creativity,
flexibility and thousands of small ideas can create unlimited business opportunities. If you
pursue your dreams and learn all you can, you will succeed!

II. WOULD YOU LIKE TO START A BUSINESS?

This information is for entrepreneurs who have an idea that needs to be developed
and structured in order to formulate the business. It provides essential information that will
help determine if the business is realistic and feasible. All the information presented here
will inspire and motivate an entrepreneur with a plan and a strategy that can be implemented
to create a successful business.

III. ARE YOU AN ENTREPRENEUR?

Entrepreneurial enthusiasm is what it takes to start a company. If you believe in the
product or service, then enthusiasm, passion, strategy, and relationship building will help in
developing the business. Common entrepreneurial characteristics include having the
initiative to start a project, as well as having the self-discipline and drive to pursue and stick
with what you like to do. They also include being in charge and taking on responsibility,
being energetic and hardworking, having the ability to solve problems, and knowing there is
always more than one solution to every problem. Being competitive and one step ahead of
the competition can maintain or increase market share. Do you have the confidence,
discipline, and drive to stick with it and accomplish what is needed? Are you able to
overcome obstacles as they arise? Do you have the talent of persuasion, which enables one to
solve problems? Do you have the ability to convince people to buy your product or service?
A self-assessment can be helpful.

IV. THE BUSINESS IDEA

Most businesses are developed when problems need to be solved. Come up with lots
of ideas and evaluate them one by one; if one idea does not work, then try another. Think
about a product or service that makes life easier and more enjoyable. If you believe in a
product or service, a strategy and a plan is what helps in developing a business. You learn as
you go, as there are no rules. There are many theories that make sense but to implement them
is much more challenging. Business is not an exact science. It is a general big picture of
opportunity that takes planning and a creative twist in order to make the business unique. A
good business idea is something that society finds useful and will support.

V. CREATING A VISION AND MISSION STATEMENT FOR THE BUSINESS

The vision and mission statement describes the business in three to four sentences. It
is what you would say to someone when you meet him or her. It is a short message that
distinguishes your business. Think about a message that is clear and describes what the
company does and how it will benefit the customer. The message has to be short, flexible,
and distinctive.

VI. CHOOSING THE RIGHT NAME FOR YOUR BUSINESS

Once you have chosen the business, the next step is the name, which represents the
character and the image of the business. A name reflects the essence of the business, and
what you stand for. It will create recognition of the company and the products and services
that you sell.

VII. THE BUSINESS CARD

The business card reflects who you are and captures the essence of your business
image. Consider a logo, graphics or a photo. Collect business cards and identify what you
like and do not like. A business card can express the benefits you offer at a glance. It should
be persuasive and the message should be clear.

VIII. MARKETING THE BUSINESS

What is marketing? Marketing is communication. Identify the needs of the customer.
Design products or services that meet those needs. Communicate the information about the
products or services so that the customers can buy. Make the products or services available
at times and a place convenient for the customers. Price the products or services with an
understanding of all the costs, the competition and the customer's willingness to buy.

IX. THE FINANCIAL PLAN

Financial planning is the most important aspect of starting a business. Maintaining
accurate records is essential. First you must consider how much money is available to start
the business and project a gradual growth strategy starting small while developing the
business. When you plan for success and growth, the business will be ready for a traditional
business loan, which will take the business to the next level.

X. THE PROFESSIONAL TEAM

The use of professional services is an essential element of a successful business. The
professionals include an accountant, a lawyer, a banker and an insurance agent. They can
provide the knowledge and the expertise needed. They are the experts that will ensure that
your business is operating proficiently.

XI. CONCLUSION

Starting a business is not only a challenge, it is a huge time commitment, as one
must be dedicated and persistent. It is also very rewarding to watch an idea grow into a
reality. From my personal experience in starting more than one business, this is the
preliminary information that should be considered when planning a business. The business
plan provides information about the business idea, the business goals, and how those goals
will be achieved. It helps clarify the vision, evaluate the market, determine the costs,
forecast the growth, and assess the risks. The plan has to be easily understood by anyone
who reads it, and it is always changing, as it is always a work in progress.

Anyone starting a business needs to be prepared and understand the steps needed in
order to succeed. Everyone should pursue his or her passion in life. It takes time, planning
and constant thinking of the business strategy and angles to position a company. When all
risks and rewards are considered and the finances are in place, the business is ready to be
started.

REFERENCES

Bangs, David. (1998) The Start-Up Guide. Chicago: Upstart Publishing.
Belch, George and Belch, Michael. (1998) Advertising and Promotion. Boston:
Irwin/McGraw-Hill.
Bowman, Sharon. (2003) Shake, Rattle & Roll. Glenbrook: Bowperson Publishing.
Bygrave, William and Zacharakis, A. (2004) The Portable MBA in Entrepreneurship, 3rd
Edition. New Jersey: John Wiley & Sons, Inc.
Gumpert, David. (1996) How to Really Start Your Own Business. Boston: Inc. Publishing.
Hall, Doug. (1995) Jump Start Your Business Brain. Warner Books, Inc.
Mariotti, Steve. (1996) The Young Entrepreneur’s Guide to Starting and Running a
Business. New York: Three Rivers Press.
Sahlman, William, Stevenson, H., Roberts, M., et al. (1999) The Entrepreneurial Venture.
Boston: Harvard Business School Press.
Sohnen-Moe, Cherie. (1997) Business Mastery, 3rd Edition. Tucson: Sohnen-Moe
Associates, Inc.
Timmons, Jeffrey and Spinelli, S. (2003) New Venture Creation: Entrepreneurship for the
21st Century, 6th Edition. Boston: Irwin/McGraw-Hill.
von Oech, Roger. (1998) A Whack on the Side of the Head. New York: MJF Books.
www.onlinewbc.gov, the Online Women's Business Center.

THE PERILS OF STRATEGIC ALLIANCES:
THE CASE OF PERFORMANCE DIMENSIONS INTERNATIONAL, LLC

Robert A. Page, Jr., Southern Connecticut State University
pager1@southernct.edu

Edward W. Tamson, Performance Dimensions International LLC
pdisurvey@juno.com

Edward H. Hernandez, California State University, Stanislaus
edh1965@aol.com

Alfred R. Petrosky, California State University, Stanislaus
APetrosky@csustan.edu

ABSTRACT

This case concerns PDi, a small consulting firm specializing in survey-driven
organizational development and change, which attempted to exploit a market opportunity
created by rapid technological change. Recognizing the explosion of competition in the survey
hosting business and the collapse of differentiation strategies based on survey hosting
capacity alone, PDi sought to form an alliance with an established survey hosting service.
The service would provide PDi with friendly marketing leads, while PDi would provide the
service with the distinctive products and services necessary to continue a viable
differentiation strategy instead of competing as a commodity. Three potential strategic
alliances are presented for evaluation.

I. PERFORMANCE DIMENSIONS INTERNATIONAL (PDi)

In 2001, Edward Tamson and Robert Page formed Performance Dimensions
International, LLC (PDi), a management consulting firm specializing in improving business
intelligence through survey-based research and action planning. PDi specialized in two areas,
superior research/performance metrics and action planning, that drove meaningful changes
and improvements. While business-related surveys abound, the partners noticed that many of
them were of relatively poor quality, featuring poor questions, questionable rating scales, and
ambiguous results that were not useful in developing effective action plans. To increase
differentiation from larger, better-established firms, PDi also offered an attractive value
proposition: “exceptional results at exceptional value.” Through low overhead, a network
organization, the latest survey technologies, and continuous improvements in effectiveness
and efficiency, PDi could make good margins from reasonably priced products and services
the bigger consulting firms simply could not afford to offer (Nohria & Eccles, 2002). To
minimize overhead, both partners work out of their home offices and employ no permanent,
full-time staff. PDi uses a network organizational structure, which involves contracting out
segments of the consulting value chain that are not core competencies, such as website
design and maintenance, on-line survey hosting services, telephone services, data entry,
printing, and mass mailing, to dependable vendors. Only the
marketing (all high level personal contact, such as contract development, executive
debriefings and action planning) and creative elements (survey creation, validation,
hypothesis testing, proprietary models and constructs) remain exclusively in-house, as core
competencies. As with many new consulting firms, generating viable marketing leads and
developing productive distribution channels became PDi’s greatest challenge (McMurtry,
2003). This case describes a critical decision point where PDi has the option of continuing to
market new clients through in-house efforts, such as referrals, cold-calling, mass mailings/e-
mailings, and conference attendance, or through developing strategic alliances with other
firms, trading survey products and services for marketing leads, joint ventures and
distribution channels, as some experts recommend (Bergquist, Betwee & Meuel, 1995;
Forrest, 1990). PDi’s partnership proposition is as follows:
1. Competitive rivalry in providing online survey services has exploded, with customers
regarding online surveys as inexpensive commodities. For example, Survey Monkey (the
WalMart of the Web) offers a reliable online survey for about $50.00 per administration
period.
2. Major clients now want one-stop shopping, or they will go and find someone else who
can offer the complete survey process package. Segmenting out front-end survey
development and back-end action planning from administration, data collection, and basic
reporting is no longer competitive. Customers can afford to be more demanding and
selective.
3. Old school differentiators would be particularly open to adding products and features
to their offerings to make themselves truly distinctive, and worth the additional cost in the
eyes of their clients. A set of nationally benchmarked surveys would increase customer
loyalty by raising the switching costs – customers would lose their ability to use the
questions, and compare their performance to the normative data if they switched survey
hosting services.
4. The longer customers used the benchmarks, the more addicted they would become to
the comparative data, and the more time series and trend analyses could be offered.
5. Pre-survey needs analysis and post-survey action planning would ensure the surveys
were measuring important issues, valid sampling strategies were employed, any needed
unique, customized questions were added, effective administration protocols were followed,
and survey results led to action plans and implementation, to guarantee an impressive return
on investment for the survey process. This would avoid the “garbage-in, garbage-out” survey
dilemmas that frustrate so many managers – they do not get what they need to make a
difference.
6. For PDi, the time necessary to populate a benchmark database would be considerably
reduced through partnering with an existing firm with many current survey customers.

A strategic alliance seemed intuitively attractive – PDi would provide the pre- and
post-administration services, as well as the survey content. Our web hosting partner would
provide the technologic platform to administer the survey and generate the reports. PDi
would build their business with market leads from the partner, while the partner would build
their business by offering distinctive product and service offerings most competitors could
not match (Hamel, Doz & Prahalad, 2002; Kanter, 2002). As we presented our consulting
services and benchmark surveys to potential partners, we received the following three
business alliance offers (Company names are changed to honor confidentiality agreements).
Please help us evaluate the pros and cons of each of these three offers, so we can either select
the right partner, or decide to continue to stay independent and keep all marketing and
distribution efforts in-house.

II. PROPOSED ALLIANCE WITH SCAN CORPORATION

Scan Corporation is one of the biggest survey processing houses in the world, and is
an industry leader in the design and printing of scannable forms. Founded in the 1970s, Scan
Corporation employs approximately 600 employees worldwide and reported $115 million in
gross sales in 2004. Scan Corporation offers suites of fixed-form and computer-
adaptive online testing engines as well as test scoring machines and Optical Mark Readers
(OMRs). These services dominate the education market, and are common in the financial
services, healthcare, government and hospitality industries as well. Scan Corporation
differentiates itself from most competitors through size and scope, offering complete, one-
stop-shop product and service packages. As they state in their Mission Statement: “From
forms and scanners to maintenance, outsourcing services and application software, no other
company can match Scan Corporation’s range of products and services.”

Both Tamson and Page had worked with David Johnson, who became a Survey Services
Manager at the Scan Corporation in California. He was expected to establish a world class
consulting division to increase the scope and profitability of Scan Corporation’s survey
scanning business, including potentially offering benchmark surveys in employee and
customer satisfaction. Currently the primary offering of the Survey Services division is a
survey software package called “Survey Listen,” which allows users to create a survey
electronically, administer the survey through a variety of media (Internet, LAN, e-mail,
paper, etc.), create a database, perform basic analyses (such as descriptive statistics and
demographic comparisons), and generate reports. Scan Corporation was interested in an
alliance with PDi to bolster sales of Survey Listen in three areas.
1. They needed comprehensive libraries of survey questions on different topics
(employee satisfaction, customer satisfaction, conference satisfaction, etc.) to offer in
conjunction with Survey Listen, as add-on modules. Each shipment of a library of questions
would generate a royalty of $75.00, regardless of whether it was an actual sale or a marketing
give-away.
2. PDi could offer pre- and post-survey administration services such as testing, validation,
and action planning so Scan could offer clients a truly comprehensive survey processing
package. Scan Corporation would pay any related PDi travel expenses and would receive 40% of the gross of
any direct joint contracts or referral contracts PDi received from Scan Corporation leads.
3. Scan Corporation wanted to own a series of survey benchmarks on topics such as
customer and employee satisfaction. These benchmarks would consist of standardized,
validated surveys around which Scan Corporation would develop comprehensive databases,
allowing for reports on national and industry norms. Scan Corporation would pay PDi
$25,000 for each benchmark, and agreed to give PDi exclusive first right of refusal on any
consulting services ordered in conjunction with those benchmarks.
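
To make the proposed terms concrete, a hypothetical revenue model for PDi under this deal might look like the sketch below. It assumes PDi keeps 60% of gross on joint and referral contracts (one reading of the 40% clause above), and all quantities are invented for illustration:

```python
def pdi_scan_revenue(library_shipments, joint_contract_gross, benchmarks_sold):
    """Hypothetical PDi revenue under the proposed Scan Corporation terms:
    a $75.00 royalty per library-of-questions shipment (sale or give-away),
    60% of gross on joint/referral contracts (assuming Scan retains 40%),
    and a one-time $25,000 payment per benchmark sold to Scan."""
    royalties = 75.0 * library_shipments
    contract_share = 0.60 * joint_contract_gross
    benchmark_fees = 25_000.0 * benchmarks_sold
    return royalties + contract_share + benchmark_fees

# Invented example: 200 library shipments, $100,000 in joint contracts,
# and two benchmarks sold outright.
print(round(pdi_scan_revenue(200, 100_000, 2), 2))  # 125000.0
```

Note how the fixed benchmark payments dominate unless contract volume is substantial, which is consistent with the case's emphasis on the value of Scan's marketing leads.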

This emphasis on ownership was reiterated in the proposed contract, where Scan
Corporation inserted a clause stating that Scan Corporation co-owned any custom surveys or
materials PDi subsequently produced for Scan Corporation clients. Preliminary
collaborations revealed the following observations:
• Scan Corporation’s products and services were premium-priced for the market.
• Scan Corporation reserved the exclusive right to arbitrarily adjust the contract to close a
deal. In some cases this meant deleting or reducing the contract for PDi services.
• Contacts with other consultants revealed that Scan Corporation managers tended to gang
up on their vendors: if there was a problem, about five different Scan Corporation
managers would call up separately, each demanding in-depth explanations and updates.
• Scan Corporation’s sales force was undergoing both rapid consolidation and high rates of
turnover. Survey administrators and support personnel sometimes turned over mid-
project, causing occasional miscommunications and scheduling problems.
• Current external consultants/vendors also complained that Scan Corporation carried
“customer-focus” to an extreme. This means that the customer is always right and the
vendor is always wrong, even when subsequent customer demands are unreasonable,
ethically questionable and/or outside the scope of the contract. Scan Corporation’s
preferred method of dealing with these situations was to arbitrarily rebate part of the
vendor’s compensation back to the complaining customer without discussing the vendor’s
concerns with the customer. This heavy-handedness is a typical problem in strategic
alliances where one partner is significantly bigger and more powerful than the other
(Miles, 1999; Slowinski, 1996).

III. PROPOSED ALLIANCE WITH WEBMETRICS CORPORATION

Tamson began discussions with WebMetrics while evaluating various web hosting
services. WebMetrics had been providing basic survey software and supporting services at a
reasonable price for our on-line survey efforts. Founded in the late 1990s, WebMetrics
employs about 35 employees and has $4.1 million in annual sales. Most of their clients were
small to medium size businesses interested in a lower-cost “do-it-yourself” type of survey
effort. PDi felt that WebMetrics might be particularly responsive to our proposition since
they are little more than a web hosting platform. Given the increasing number of competitors
offering equivalent or superior data collection and reporting options at better prices, we felt
WebMetrics would recognize the potential value of incorporating our services and
differentiating their offerings.
After lengthy negotiations and several on-site visits, WebMetrics proposed a licensing
agreement where they would market PDi services in return for a percentage of any resultant
contracts.

Basically, the WebMetrics contract offered PDi a 66/34 split of net revenue, not gross
revenue. Any marketing expenses and sales commission costs would be split, 50/50 between
WebMetrics and PDi. We contacted marketing research consultants from SR Marketing
already working with WebMetrics as strategic partners, and found that they felt the
percentages WebMetrics requested, and the calculation of net revenues was reasonable.
WebMetrics was particularly interested in developing a benchmarking website capability
around employee satisfaction, customer loyalty, and Sarbanes-Oxley ethical compliance.
They promised to dedicate a portion of their sales force to aggressively market these
products, and to make PDi products and services a top priority for some of their database
engineers and web programmers. Tamson and Page traveled to WebMetrics offices near
Washington, D.C. and met with the proposed PDi support team, and made the following
observations:
• Two of the three WebMetrics owners appeared to be genuinely enthusiastic at the
prospect of a partnership, and committed to making it a success, while one was lukewarm
at the prospect of sharing the wealth.
• WebMetrics’s sales force featured high rates of turnover. Their two senior salespeople
were high quality, but complained of having little to offer high end clients; they were very
enthusiastic about the increased sales potential PDi offered. Tamson conducted an initial
product and service introduction training program for all of the WebMetrics sales team.
Post evaluation of the training
showed a very positive and motivated sales team who were very excited about the
potential alliance with PDi.
• Upon receipt of PDi copyrighted and trademarked surveys and related materials,
WebMetrics began the habit of dropping the PDi copyrights and replacing them with their
own. In phone conversations and emails, the WebMetrics partners assured us that this was
an oversight, and that PDi would, of course, retain full copyrights to all of our original materials.
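
As a rough, hypothetical illustration of the deal terms just described, the 66/34 net-revenue split with 50/50 shared marketing and commission costs can be modeled as follows. All dollar figures are invented, and the assumption that direct delivery costs are deducted before the split is ours, not the case's:

```python
def pdi_share(gross_revenue, direct_costs, marketing_costs, commissions):
    """Hypothetical PDi take under the proposed WebMetrics terms: PDi gets
    66% of NET revenue (modeled here as gross minus direct delivery costs),
    and marketing and sales-commission costs are split 50/50 between the
    partners."""
    net_revenue = gross_revenue - direct_costs
    pdi_cut = 0.66 * net_revenue
    shared_costs = 0.5 * (marketing_costs + commissions)
    return pdi_cut - shared_costs

# Invented example: a $30,000 contract with $6,000 of delivery costs,
# $2,000 of marketing spend, and $3,000 in sales commissions.
print(round(pdi_share(30_000, 6_000, 2_000, 3_000), 2))  # 13340.0
```

The net-versus-gross distinction in the contract matters: 66% of the $30,000 gross would be $19,800 before shared costs, against $15,840 on net in this example.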

IV. PROPOSED ALLIANCE WITH GLOBAL TECHNOLOGY SOLUTIONS

Global Technology Solutions (GTS) provides a sophisticated data analysis software
package called “Firmit”. GTS is based in Norway and has offices in London, New York and
San Francisco. GTS is unlisted, but posts press releases on its website implying annual
estimated sales of somewhere over 10 million dollars. Beyond software, GTS developed the
“Firmit Marketplace”, to provide customers with partners offering a variety of services,
including consulting and support, so GTS could offer complete survey solutions and to
extend representation in markets where GTS currently does not operate. This marketplace is
an electronic network where value added resellers and other GTS partners could do business
with each other. The partnership process works as follows:
1. The partner buys a licensing agreement to use Firmit software for one year.
Depending upon negotiations, such licenses usually cost from $12,000 to $15,000 annually.
In addition, there is a $1.10 per-survey usage charge.
2. The partner agrees to use Firmit software exclusively for all survey projects. An
exemption would be granted to PDi, given that some of our clients request on-line action
planning, and Firmit neither offers, nor plans to offer, that service.
3. At that point the partner has the opportunity to purchase a Marketplace membership
for one year for approximately $5,000. Such memberships are open to all Firmit users,
regardless of whether they are direct competitors. No exclusivity agreements will be offered.
4. GTS will market and promote all partners through the GTS Web Site, Sales &
Marketing staff and customer communications. A comprehensive list of the “hundreds” of
GTS clients will not be provided, to honor confidentiality agreements.
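
A back-of-the-envelope model of the fee structure in steps 1-3 can be sketched as follows. This is illustrative only; the case is ambiguous about whether the $1.10 applies per survey or per respondent, so the model treats it as a single per-unit charge, and the license is taken at the low end of its negotiated range:

```python
def gts_annual_cost(units_per_year, license_fee=12_000.0,
                    marketplace_fee=5_000.0, per_unit_fee=1.10):
    """Estimated first-year cost of the GTS partnership: an annual Firmit
    license ($12,000-$15,000 depending on negotiation), the optional
    ~$5,000 Marketplace membership, and the $1.10 usage charge, modeled
    here as one per-unit (survey or respondent) fee."""
    return license_fee + marketplace_fee + per_unit_fee * units_per_year

# Fixed fees dominate at low volume; usage charges dominate at high volume.
print(round(gts_annual_cost(2_000), 2))   # 19200.0
print(round(gts_annual_cost(20_000), 2))  # 39000.0
```

For a firm of PDi's size, the $17,000 in fixed fees alone would have to be recovered before any GTS-sourced contract turned a profit, which helps explain the hesitation described below.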

Tamson visited the San Francisco GTS office and made the following observations:
• The GTS office appeared to be in a state of disarray – cluttered, messy, and somewhat
unprofessional in appearance. Similarly the GTS employees were disorganized and
seemingly confused about where the company was headed.
• Key GTS decision makers, including the chief executive officer of the company, were
being replaced during the course of the negotiations (over a three month period).
• The GTS pricing structure was one of the highest in the industry. While the fee assessed
for the use of the survey software was consistent with high-end data collection services,
the additional GTS $1.10 per-survey-respondent charge was not.

V. CONCLUSION

PDi tried each alliance sequentially. The first alliance was with Scan Corporation,
which lasted two years. PDi ended the relationship for several reasons: turnover precluded
the kind of adequately trained staff needed to sell and support benchmark services; Scan’s
cost structure was not favorable for its consulting partners because base charges were so high
that clients often balked at the prospect of adding PDi services; and royalty payments from
Library of Questions placements and sales seemed significantly underreported. WebMetrics
signed a licensing agreement so they could offer PDi benchmark survey instruments to their
customers. However, despite the licensing agreement, PDi copyrights continued to be
removed in favor of WebMetrics copyrights. When PDi demanded the reinsertion of PDi
copyrights, they claimed partial ownership due to minor editing changes made while posting
these services on their website. A detailed letter from a contract lawyer ended this illegal
attempt permanently. Lastly, we backed away from the Global Technology Solutions (GTS)
deal. The pricing structure was so high it was unclear whether it was competitive. It seemed
the technology-oriented leaders of this corporation overestimated the value of their
technology, and underestimated the need for differentiation. Further, it was difficult to project
the marketing leads and potential income we could realize from being part of the Global
Technology Solutions (GTS) marketplace. We continue to develop our own marketing
contacts and regard future alliances with great caution.

REFERENCES

All corporate information is taken from corporate websites and from Hoovers, an online
business and industry database. Specific references will not be provided to honor
confidentiality.

Bergquist, William H., Julie Betwee and David Meuel, Building Strategic Relationships. San
Francisco: Jossey Bass, 1995.
Forrest, Janet E. “Strategic Alliances and the Small Technology-based Firm,” Journal of
Small Business Management, 28 (3), 1990, 37-45.
Hamel, Gary, Doz, Yves and Prahalad, C.K. “Collaborate With Your Competitors and Win,”
in Harvard Business Review on Strategic Alliances. Boston: Harvard Business School
Press, 2002, 3-27.
Kanter, Rosabeth Moss. “Collaborative Advantage: the Art of Alliances,” in Harvard
Business Review on Strategic Alliances. Boston: Harvard Business School Press, 2002,
113-131.
McMurtry, Jeannette M. Big Business Marketing For Small Business Budgets. New York:
McGraw-Hill, 2003.
Miles, Grant. “Dangers of Dependence: the Impact of Strategic Alliance Use by Small
Technology-based Firms,” Journal of Small Business Management, 37 (2), 1999, 20-27.
Nohria, Nitin and Robert G. Eccles. Networks and Organizations: Structure, Form, and
Action. Boston: Harvard Business School Press, 2002.
Slowinski, Gene. “Managing Technology-based Strategic Alliances Between Large and
Small Firms,” SAM Advanced Management Journal, 61 (2), 1996, 42-59.

CHAPTER 9

ETHICAL AND SOCIAL ISSUES

INTELLIGENT AGENTS-BELIEF, DESIRE, & INTENT FRAMEWORK USING
LORA: A PROGRAM INDEPENDENT APPROACH

Fred Mills, Bowie State University


Jagannathan V. Iyengar, North Carolina Central University

ABSTRACT

The development of new intelligent agents requires an interdisciplinary approach to
programming. The initial challenge is to describe the desired agent behaviors and abilities
without necessarily committing the agent development project to one particular programming
language. What are the appropriate linguistic and logical tools for creating a top level,
unambiguous, program independent and consistent description of the functions and behaviors
of the agent? And how can that description then be translated easily into one of a number of
program languages? This article provides a case study of the application of a simple Belief,
Desire, and Intention (BDI) first order logic to a complex set of agent functions during the
basic research stage of a community of intelligent nano-spacecraft. The research was
conducted at NASA-GSFC (Greenbelt), Advanced Architecture Branch, during the summer
of 2001. The simple examples of applied BDI logic presented here suggest broad application
in agent software development.

I. INTRODUCTION

The demand for intelligent agent software is likely to grow as both public and private
sector innovators seek to deploy adaptive, autonomous, information technologies to
production, scheduling, resource management, office assistance, information collection,
remote sensing, and other complex functions. Intelligent agents, also known as autonomous
agents, are distinguished from other software programs by their ability to respond to a
changing environment in pursuit of goals. One of the most critical stages of such intelligent
agent development is the basic research that goes into determining how to translate client
needs into agent software. Since programmers and operations managers have different
expertise, there is often a linguistic gap between the functional language of the customer and
the technical programming language of the agent developer. If the customer and agent
developer do not speak a common language, both time and money may be lost in needless
errors and misunderstandings. There is a need for a program-independent, unambiguous,
logically consistent language for describing the desired computer agent practical reasoning
abilities and behaviors. Program independence gives the customer maximum flexibility in
choosing a programming language to implement the desired functions. The avoidance of
ambiguity allows the customer to say exactly what she means. And logical consistency
ensures that only valid arguments will be generated by the agent’s knowledge base, preventing
false beliefs about the world from being deduced from true beliefs.

A program independent language for describing agent functions, practical reasoning,
and behaviors solves what may be called the top-level description problem. The top level
of description should be close enough to ordinary English to be interdisciplinary, yet rigorous
enough to avoid ambiguity and inconsistency. The strategy of solving this problem borrows
from first order logic, cognitive science and recent work in the field of artificial intelligence,
specifically autonomous agent theory (see, e.g., the collection of articles in Muller,
Wooldridge, and Jennings, 1997). The solution to the top level description problem

suggested by some of the recent autonomous agent research is to combine the practical
reasoning tools developed by psychologist Michael Bratman’s belief, desire, and intention
(BDI) framework (1987), with some of the tools of first order logic. This hybrid language is
usually referred to as BDI logic. There are a number of BDI logics under development; here
we employ the Logic of Rational Agents (LORA) developed by Michael Wooldridge (2000)
to illustrate how BDI logic works in practice.

The authors faced the top-level description problem at NASA during the basic
research stage of specifying the behaviors of a community of agents, in this case, the
Autonomous Nano Technology Swarm (ANTS). ANTS will be designed to engage in
practical reasoning and behaviors that implement the variety of functions necessary to
explore the asteroid belt and communicate observations to earth.

II. METHODOLOGY

The first methodological consideration is to determine the level of detail that is
appropriate in applying LORA as an agent development tool. The purpose of using BDI logic
here is not to prove theorems; a sufficient body of deductive proofs has already been
developed in standard first order logic texts. Nor is it to impress the client with obtuse
descriptions of functions. It is rather to use just enough symbolism to economize and clarify
the practical reasoning of the agents. The second methodological consideration is to
announce our position on exactly what we mean to imply when employing psychological
terms to describe computer agents. In what John Searle calls strong AI, conscious experience
is attributed to intelligent computer agents (1984). In this paper we remain agnostic with
regard to the debate over whether computer agents have real as opposed to “as if”
intentionality (see Mills, 1998a; 1998b, for a detailed discussion). We discuss only those
features of the debate that elucidate the advantages of using BDI logic as opposed to
engineering or design idioms for top-level descriptions of communities of agents.

Part one of this paper provides theoretical justification for the use of Bratman’s BDI
framework for understanding agent behaviors. Part two provides an explanation of how first
order logic can be combined with BDI and a dynamic component to account for agent
decisions in time. Part three presents a practical problem of describing the behavior of a
community of agents in a very complex scientific endeavor, in this case, the ANTS
exploration of the asteroid belt. We believe the insights presented here have practical
applications for the large variety of agents that will be developed in the near future.

III. BRATMAN’S BDI FRAMEWORK: THE INTENTIONAL STANCE

The intentional stance treats a sufficiently complex system as if it had mental states
and engaged in practical reasoning. The term “intentionality,” first used in empirical
psychology by Franz Brentano (1973/1874), means directedness towards an object. As John
Searle points out (1984), one of the unique features of mental states is that they are always
directed towards or about some object. If I perceive, I perceive something. If I believe, I
believe that something is the case. If I intend something, I have some purpose in mind. We
often use the intentional stance casually, as when we say the car does not want to start. But
we do not really believe that a car has desires. Regardless of where one stands in the
philosophical debate regarding whether computer agents have real or just “as if”
intentionality, both sides generally agree that the intentional stance is an economic way to
describe a complex system that engages in practical reasoning. Daniel Dennett (1978) points

out that this economy of expression becomes clear when we contrast the intentional stance
with the engineering stance. ANTS provides a good example of the economy of expression
that results from opting for the intentional stance for top-level description of agent behaviors.

Imagine a community of nano-spacecraft setting out to explore the asteroid belt. At
some point in the deployment of this community, ANT #24 determines some activity is a
priority and desires to execute that activity. At the programming level of description one
would need to know the semantics and syntax of the programming language and read the
code. At the even more detailed engineering level, one would then translate this higher level
code into the impossibly complex machine code. By contrast, if one employs the intentional
stance, the practical reasoning of Agent#24 becomes simple and concise: “Agent #24 desires
to observe Asteroid Psyche, believes that the conditions are now optimal for such
observation, and thus intends to make such observations.” The next step is to imbed this
intentional language in a first order logic to ensure the consistency of all descriptions of the
ANTS behaviors.

IV. COMBINING BDI WITH FIRST ORDER LOGIC & DYNAMICS

First Order Logic


First order logic contains simple symbolic logic rules for combining propositions. For
example, “Anna is a student and Anna is a biology major” can be represented by conjunction:
p ∧ q. First order logic provides a generally accepted consistent logic. It is ideal for stating
combinations and relations between beliefs, desires, and intentions about the world. It
allows us to state just what the system knows without having to account for what it does not
yet know (see Levesque and Lakemeyer, 2000, for a discussion). In order to represent the
knowledge of the agent, some initial set of beliefs about the world is formulated. The
inference rules of first order logic are used to ensure that the beliefs are consistent and that
arguments are always valid. By following the basic rules of first order logic, new beliefs can
be deduced from the current set of beliefs and acquired beliefs. (These acquired beliefs are
based on new inputs from the environment.) In the case of ANTS, the initial beliefs are
about known asteroid types, relative locations, shapes, rotations, mass, distribution, gravity,
albedo, and in some cases provisional classifications of asteroids (Clark, 2001). The inputs will
originate in sensor instrument data and communications data.
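The deduction just described — sensor data confirming a disjunct, and modus ponens promoting Psyche to a priority object — can be sketched in ordinary code. The following is a minimal propositional forward-chaining sketch, not LORA itself; the string predicates and the `forward_chain` helper are our own illustrative assumptions.

```python
# A minimal propositional forward-chaining sketch of the Psyche deduction.
# The string predicates and rule format are illustrative assumptions, not
# LORA syntax.

def forward_chain(beliefs, rules):
    """Apply modus ponens until no new beliefs can be deduced."""
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)  # new belief deduced from current beliefs
                changed = True
    return derived

# Initial KB plus an acquired belief: sensors confirm Psyche is differentiated.
beliefs = {"Differentiated(Psyche)", "Within5000Mi(ANT#24, Psyche)"}
rules = [("Differentiated(Psyche)", "PriorityObject(Psyche)")]

print(sorted(forward_chain(beliefs, rules)))
```

Because every rule application is a valid inference, the derived set can never contain a conclusion that does not follow from the initial and acquired beliefs — the consistency property the text emphasizes.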

An example of a representation in the ANTS knowledge base using some simplified
LORA is:
- (differentiated)Psyche ∨ (undifferentiated)Psyche [either, or]
- (differentiated)Psyche ⇒ (PriorityObject)Psyche [if, then]
- (differentiated)Psyche {new knowledge!}
- (5000MiFrAnt#24)Psyche
Notice that first order logic is used here, without BDI, to represent the truism that asteroid
Psyche is either differentiated or undifferentiated. If it is differentiated, it becomes a priority
object of investigation that could trigger other actions. Assume that sensor information
confirms that Psyche is differentiated. It then follows necessarily that Psyche becomes a
priority object. The last formula states that ANT#24 is in the proximity of Psyche. We can
expect that, given the right combination of BDI, #24 will set as a goal the closer observation
of Psyche and finally actuate behaviors towards the attainment of the goal. In order to begin
the construction of a language that includes practical reasoning, Wooldridge adds a Belief
component to represent epistemic states (2000). Thus we refine our description of ANT#24’s
knowledge by adding:

(Bel q PriorityObject(Psyche))
Here, agent #24 (represented by the symbol “q”) believes that Psyche is a priority object.
This belief can be updated or modified given additional information from the sensors or new
communications. In order to complete Bratman’s framework for practical reasoning, we now
add the intentional and emotive components: intentions and desires. Consider a desire as an
enduring goal. An agent may have a number of desires, only some of which are realizable at
any given time. An intention is a pro-attitude, that is, it is a movement toward an immediate
objective until that objective is fulfilled or some new event changes the current intention
(Wooldridge, 2000). This distinction between intentions and desires helps us to explain how
an agent may change course in response to environmental variables that change through time.
Here is an example of epistemic and emotive states combined, using LORA:

(Des ruler (Bel worker#24 PriorityObject (Psyche)))


Here the ruler agent desires that ANT#24 believe that Psyche is a priority object.
This mental state is important because it may trigger an action by ANT#24, which has among
its desires a desire to pursue priorities set by the ruler ANT.
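The interplay of beliefs, desires, and intentions sketched above can be mocked up as agent state. The class below is our own illustration of Bratman-style deliberation, not Wooldridge's formalism; the tuple encoding of propositions and the "achievable" tag are assumptions made for the example.

```python
# A hedged sketch of BDI mental states for ANT#24. The Agent class and the
# tuple encoding of propositions are our own illustration, not LORA.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    beliefs: set = field(default_factory=set)
    desires: set = field(default_factory=set)
    intentions: set = field(default_factory=set)

    def deliberate(self):
        """Form an intention for each desired goal believed to be achievable."""
        for goal in self.desires:
            if ("achievable", goal) in self.beliefs:
                self.intentions.add(goal)

worker = Agent("ANT#24")
# The ruler's desire, realized as a communicated belief: Psyche is a priority object.
worker.beliefs.add(("PriorityObject", "Psyche"))
worker.beliefs.add(("achievable", ("observe", "Psyche")))
# An enduring desire of every worker: observe priority objects.
worker.desires.add(("observe", "Psyche"))
worker.deliberate()
print(worker.intentions)
```

The separation of `desires` (enduring goals) from `intentions` (goals currently being pursued) mirrors the pro-attitude distinction: an intention persists only until fulfilled or revised by new beliefs.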

Dynamic (Temporal) Component


Now that first order logic and BDI have been combined there is one more critical step
required to complete a top-level description of autonomous agency: dynamics. In order to
represent practical reasoning and resulting behaviors through time, a schema of relevant
features of the present state of affairs is constructed. Since not every feature of the universe
can be represented, the state of affairs, that is, the world of the agent, contains beliefs about
only those features of the universe relevant to the functions of the agent. Since some agent
technologies will be able to learn from their experience, in some architectures the relevant
states of affairs can expand and change. For the present purposes humans will define the
relevant state of affairs. Each node in the schema will branch out into possible worlds.
These possible worlds are alternative paths that the agent may select, depending on its current
beliefs, desires, and intentions. The following world/time schema illustrates this structure:
[Figure: world/time schema. w, t = world, time; p, q, r, s, t are beliefs; a = a transition
action; branches represent possible worlds. Adapted from Wooldridge, 2000, pp. 56, 63, 94;
see that work for a more detailed description of agent behaviors through time.]
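The branching world/time schema can be sketched as a small tree: each node pairs a time index with a belief set, and each transition action opens an alternative possible world. The `World` class and action labels below are our own illustration of the schema, not Wooldridge's formalism.

```python
# Sketch of a world/time schema: each node (w, t) carries beliefs, and
# transition actions branch into alternative possible worlds.
# The class structure is our illustration, not Wooldridge's formalism.

class World:
    def __init__(self, time, beliefs):
        self.time = time
        self.beliefs = set(beliefs)
        self.branches = {}  # transition action -> successor World

    def act(self, action, new_beliefs):
        """Branch into the possible world reached by performing `action`."""
        successor = World(self.time + 1, self.beliefs | set(new_beliefs))
        self.branches[action] = successor
        return successor

w0 = World(0, {"p", "q"})      # current state of affairs at t = 0
w1 = w0.act("a1", {"r"})       # one possible world after action a1
w2 = w0.act("a2", {"s"})       # an alternative branch after action a2
print(sorted(w1.beliefs), sorted(w2.beliefs))
```

Which branch the agent actually follows is then a matter of deliberation: its current beliefs, desires, and intentions select among the successor worlds.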
Specify Functions

We are now ready to specify the functions of various agents in the community of agents and
then apply LORA to describe their behaviors.

General ANT community functions

- Mapping and close-up imaging of asteroids using multi-spectral band coverage (Clark,
Memo 04/05/01)

Worker ANT functions

- Communications, resource management, navigation, local status (housekeeping), local
conflict resolutions, science data acquisition and processing (Curtis et al., 2000, 3)

Ruler ANT functions

- Plan assignments for worker ANTS
- Maintain shared SWARM statistics
- Resource management (see S. Curtis et al., 2000, 3)

The ANTS is conceptualized here as a community of autonomous rational agents
whose behaviors are generated by a knowledge base (KB), a set of goals, inference
procedures, and percepts. Although each worker ANT is autonomous in terms of its own
function, it is subordinate to the function of the ruler. The intentions of the ruler are passed
on to the workers through the messengers and the workers actuate the plans that achieve the
goal. Each worker has as its permanent goal the appropriate and timely collection or
discovery of data from the target type of objects, the maintenance of health and safety, and
the timely communication of data to the messengers, who in turn report to the ruler and to
earth. The workers’ goals are sub-goals of the ruler’s goal and the worker’s plan at any given
state of affairs is a sub-plan of the ruler’s plan.

V. EXAMPLE OF A LORA SPECIFICATION

In the following scenario, the Ruler has received information about an opportunity to
view Psyche under ideal conditions and there is a group of worker ants in the neighborhood
of Asteroid Psyche (A). The Ruler forms an intention to attain a goal as a result of a
deduction that employs beliefs in its KB, acquired beliefs derived from current percepts, and
its belief derived from communications with all ANTS. There is a potential for a group of
workers and allied messengers to achieve the goal. This is exactly the sort of scenario
supported by BDI-type logic. For simplicity I leave out temporal considerations,
quantification, proofs, and goal/sub-goal relations. I seek to illustrate here the usefulness
of BDI logic to model the practical reasoning or “mental states” of cooperating agents.
A = constant for Psyche
PfC = perception of the potential for cooperation
i = ruler agent
j = messenger agent
g = group of worker agents, each with specialized observation instruments
α = action
ϕ = goal to study Asteroid Psyche
achvs = attain through an action
Bel = Belief
Int = Intention
Des = Desire
π = set of plans to be actuated (executed) by each worker to attain ϕ
ψ = preconditions for ϕ to become the next goal (science agenda from humans, combination
of percepts in relation to KB)
J-attempt = joint attempt

Assumption
The Ruler has formed an intention to achieve the goal of collecting data about
Asteroid Psyche. I do not here represent the deliberations that lead to this intention. We
begin with the process of mobilizing the workers to achieve the goal and a description of the
mental state of the community of agents. The Ruler forms an intention to attain a goal (to
study Psyche) as a result of ψ having been met. ψ, in this case, is a combination of the ruler’s
KB, mission priorities, current percepts, its belief that the goal has not yet been attained, its
belief that the ruler itself cannot or does not desire to achieve the goal by itself, and its belief
that there is a potential for a group of workers and allied messengers to achieve the goal. The
ruler will therefore intend not only the goal, but that the messenger intends the goal and that

the group of workers intends the goal and finally achieves the goal. This is exactly the sort of
scenario supported by LORA and other BDI-type logics!

VI. DESCRIPTION OF THE LOGIC:

(Int i ϕ) ∧ (PfCi ϕ)∧ (Int i (Int j ϕ (Int g ϕ)))


The ruler intends the goal of studying the Asteroid Psyche and acknowledges the
potential for cooperation among the workers to bring about the goal and intends that the
messenger intends that the group of workers intends the goal.
Int i (Achvs g α π ϕ)
The ruler intends that each member of the group following its share of the plan can
achieve the goal through the group’s actions.
{Inform i j α π ϕ}∧{Request Th i α {Inform j g α π ϕ}}
{Agree j i α α’}; α’
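The Inform/Request/Agree chain above — ruler to messenger to worker group — can be caricatured as message passing. The sketch below is our own toy rendering of the delegation, under the assumption that informing an agent simply adds a proposition to its beliefs; the names and the message protocol are illustrative, not LORA semantics.

```python
# Toy sketch of the ruler -> messenger -> worker-group delegation.
# Agent names and the "inform" protocol are illustrative assumptions.

def inform(recipient_beliefs, proposition):
    """Informing an agent adds the proposition to its beliefs."""
    recipient_beliefs.add(proposition)

ruler_intentions = {"study(Psyche)"}
messenger_beliefs, worker_beliefs = set(), set()

# The ruler intends the goal, and intends that the messenger (and, through
# it, the worker group) comes to intend it.
for goal in ruler_intentions:
    inform(messenger_beliefs, ("goal", goal))  # Inform i j
    inform(worker_beliefs, ("goal", goal))     # Inform j g (relayed)

# Each worker that believes the goal is current agrees and adopts its
# share of the plan pi.
worker_intentions = {g for (tag, g) in worker_beliefs if tag == "goal"}
print(worker_intentions)
```

The point of the exercise is economy: three lines of intentional-stance code stand in for what would be an opaque mass of program- or machine-level description.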

REFERENCES

Bratman, M. (1987). Intentions, Plans, and Practical Reason. Cambridge: Harvard
University Press.
Campbell, M. Principle Investigator. (05/31/99). Intelligent satellite teams for space systems.
Final Report for the NASA Institute for Advanced Concepts Program. Seattle
Washington.

THE PROPENSITY FOR MILITARY SERVICE OF THE AMERICAN YOUTH: AN
APPLICATION OF GENERALIZED EXCHANGE THEORY

Ulysses J. Brown, III, Savannah State University


brownu@savstate.edu

Dharam S. Rana, Jackson State University


dsrana@jsums.edu

ABSTRACT

Declining propensity for military service (PMS) is becoming an increasingly serious
problem. The PMS among young Americans has been steadily declining during the last few
decades. A declining PMS, over an extended period of time, may cause military mission
degradation, lowering of military recruitment standards, base closings, and reinstatement of
the unpopular military draft system. This paper used structural equation modeling to
investigate the PMS among 18-24 year old Americans. Respondents with prior military
exposure had higher levels of commitment and PMS. Commitment mediated the generalized
exchange-PMS relationship.

I. INTRODUCTION

Propensity of young people for military service is an important issue for our
government and armed forces. To attract young Americans between the ages of 18-24 years
for enlistment, the military offers a variety of career opportunities. Despite these abundant
career opportunities, young Americans are not as interested in military service as their
grandparents were a few generations ago. American young people have very little information
regarding the military and military life. Moreover, these young Americans have expressed a
declining propensity for military service (PMS) since the 1990’s (Secretary of Defense
Annual Report, 2000). PMS is defined as one’s “interest and desires, as well as plans and
expectations regarding military service” (Segal, Burns, Falk, Silver, & Sharda, 1998, page
67). The decline in PMS, over an extended period of time, may adversely impact both
military and civilian sectors of American life. The shortage of qualified military personnel
can impact the mission of the branches of the military.

II. LITERATURE REVIEW

The military views its recruitment efforts primarily as a marketing campaign (Navy
Personnel Research Studies & Technology, 1998); and therefore, social exchange theory can
be applied to the military recruitment process. Social exchange theory holds that “given
certain conditions, people seek to reciprocate those who benefit them” (Bateman & Strasser,
1984, page 97). To the extent that military enlistees derive satisfaction from the actions of
their military supervisors and their occupation, assuming that this military action is not
contrived, then enlistees may feel obligated to reciprocate. This reciprocation may take the
form of prosocial behaviors and/or reenlisting in the military.

The current research utilizes the four factors employed by Marshall (1998) to measure
the generalized exchange construct. Marshall found that the four factors contributing to

generalized exchange were: 1) perceptions of social responsibility ethic, 2) perceptions of
social equity, 3) perceptions of effectiveness of organization’s performance, and 4)
perceptions of community benefits. The current study added a fifth factor, “Voluntary
Association Activities (VAA)” to the generalized exchange model. An extensive search of
the literature revealed that only one study (Brown & Rana, 2005) had linked these five
predictors with social exchange theory. Brown and Rana found that generalized exchange
was directly related to PMS. It is therefore posited that a positive relationship exists between
generalized exchange and propensity for military service.
Hypothesis 1: Generalized exchange will be positively related to propensity for military
service.
Prior exposure to the military may also impact one’s propensity for military service.
Some scholars have opined that PMS is a result of family members’ prior or current
military service and patriotic motivations. Faris (1995) found that “most soldiers join the
Army in part for patriotic motivations which are primarily the result of direct
interpersonal influence of persons, usually relatives, who have served in the military”
(page 415). Moreover, Legree, Gade, Martin, and Fischl (2000) found that the attitudes of
both youths and parents predicted PMS and subsequent military enlistment. The present
study argues that prospective enlistees who have family members with prior military
service will have higher levels of PMS.
Hypothesis 2: The relationship between generalized exchange and PMS
will be moderated by prior exposure to the military.

We argue that VAA will be positively related to PMS. Voluntary behaviors are
defined as free will activities that are characterized as paid/unpaid work performed in civic,
charitable, or religious organizations for the greater good of the community (Youniss,
McLellan, Su & Yates, 1999). Eccles & Barber (2001) found that teenagers who engaged in
prosocial activities such as attending church and volunteering liked high school, had a higher
GPA, and were more likely to be attending college at age 21 than their nonparticipating
counterparts. Notwithstanding the plethora of studies indicating the beneficial effects of
student participation in VAA, a search of the literature only found one study that had
examined the linkage between VAA, social exchange, and PMS; Brown and Rana (2005)
found a direct relationship between VAA and generalized exchange. In light of these
findings, we advance that respondents with high levels of VAA will also have higher levels
of generalized exchange and PMS. It can, therefore, be argued that these students may be the
prospective recruits that the military should target for enlistment.

Hypothesis 3: VAA will be positively related to generalized exchange

The commitment of new military enlistees to finish their term of enlistment may
determine job performance, job satisfaction, and reenlistment intentions (Allen & Meyer,
1990; Ganzach, Pazy, Ohayun & Braynin, 2000). Organizational commitment, as defined by
Allen and Meyer (1990), consists of three components: affective, continuance and normative.
The current research limits its focus to normative commitment; thus, affective and
continuance aspects of commitment are not addressed. Normative commitment is
characterized by the employee’s belief that he or she is obligated to stay with a particular
organization because of personal loyalty and/or allegiance. We conjecture that commitment
will mediate the generalized exchange-PMS relationship. In addition we also posit that
commitment will be directly related to both generalized exchange and PMS.

Hypothesis 4: The relationship between generalized exchange and PMS will be
mediated by commitment.
Hypothesis 5: Generalized exchange will be positively related to commitment.
Hypothesis 6: Commitment will be positively related to PMS.

This study examines the role of social exchange theory in the military personnel
selection process. Hiring prospective military recruits who are committed and more likely to
reenlist requires a thoughtful personnel selection strategy. In the proposed conceptual model
(See Figure I), community benefits, military performance, social responsibility, social equity
and VAA are hypothesized to influence generalized exchange. Generalized exchange, along
with the factor of commitment, is expected to predict PMS. We argue that community
benefits, military performance, social responsibility, social equity, and VAA are constructs
that provide indirect benefits to the American society. Generalized exchange (indirect
benefits) may, therefore, have a positive relationship with these five constructs, as
generalized exchange benefits accrue to the larger society and not to the individual directly.

III. METHOD

A telephone questionnaire was used to collect the data. The current study used data
collected under a grant funded by the Office of Naval Research (Grant No. 140110363;
Marshall, Brown & Gillon, 2001). The study employed a random quota sample of 300 males
and 300 females, between the ages of 18-24 years, who were unmarried, resided in the
continental United States, were non-institutionalized, were currently enrolled in a 4-year
college or lesser institution, and who did not have their own children at home. The overall
response rate was 59.3%. Structural equation modeling (SEM) was employed to test all
hypotheses. The linear structural relationships (LISREL, version 8.30) program was used to
develop and test all structural models (Joreskog & Sorbom, 1993).
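The LISREL estimation itself cannot be reproduced here, but the mediation logic behind Hypothesis 4 can be illustrated on synthetic data with ordinary least squares in the Baron and Kenny style: the direct effect of the predictor should shrink once the mediator is controlled. All numbers below are simulated for illustration; they are not the study's data, and the variable names are stand-ins for the latent constructs.

```python
# Toy illustration of mediation (Baron & Kenny-style), not the paper's
# LISREL estimation. Synthetic chain: generalized exchange (X) ->
# commitment (M) -> PMS (Y).
import random

random.seed(1)
n = 500
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.8 * x + random.gauss(0, 0.5) for x in X]   # X drives the mediator
Y = [0.7 * m + random.gauss(0, 0.5) for m in M]   # Y depends on X only via M

def center(v):
    mu = sum(v) / len(v)
    return [vi - mu for vi in v]

def slope(y, x):
    """OLS slope of y on a single predictor (both centered)."""
    xc, yc = center(x), center(y)
    return sum(a * b for a, b in zip(xc, yc)) / sum(a * a for a in xc)

def slopes2(y, x, m):
    """OLS slopes of y on two predictors, via the 2x2 normal equations."""
    xc, mc, yc = center(x), center(m), center(y)
    sxx = sum(a * a for a in xc); smm = sum(a * a for a in mc)
    sxm = sum(a * b for a, b in zip(xc, mc))
    sxy = sum(a * b for a, b in zip(xc, yc))
    smy = sum(a * b for a, b in zip(mc, yc))
    det = sxx * smm - sxm * sxm
    return (sxy * smm - smy * sxm) / det, (smy * sxx - sxy * sxm) / det

total = slope(Y, X)               # c: total effect of X on Y
direct, via_m = slopes2(Y, X, M)  # c' and b, controlling for the mediator
print(total, direct, via_m)       # direct effect shrinks toward zero
```

Full mediation appears as a direct effect near zero alongside a substantial mediator path, which is the pattern the paper reports for commitment in the generalized exchange-PMS relationship.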

Figure I. Hypothesized Model

[Path diagram: the five exogenous constructs Community Benefits (ξ1), Military
Performance (ξ2), Social Responsibility (ξ3), Social Equity (ξ4), and Voluntary Activities
(ξ5) each predict Generalized Exchange (η1) through paths γ11 through γ51. Generalized
Exchange predicts Commitment (η2) through β21 and Propensity for Military Service (η3)
directly through β31; Commitment predicts Propensity for Military Service through β32.
ζ1, ζ2, and ζ3 are the disturbance terms of the three endogenous constructs.]

IV. RESULTS AND DISCUSSION

As indicated by the following fit indices, the hypothesized model had an acceptable fit
to the data: Chi-square = 375.731 (p-value = 0.09), CFI = 0.961, IFI = 0.962, GFI = 0.949,
and RMSEA = 0.033 (Joreskog & Sorbom, 1993). We now discuss the relationships among the
latent variables by inspecting the path coefficients. Hypothesis 1 stated that generalized
exchange would be positively related to propensity for military service. No support was
established for Hypothesis 1 because the path coefficient between PMS and Generalized
Exchange was not significant. Generalized Exchange did not influence PMS, indicating that
respondents who valued indirect benefits from the military may not necessarily have higher
levels of enlistment propensity as hypothesized. Support was established for Hypothesis 3,
which stated that VAA would be positively related to generalized exchange. Respondents
who engaged in voluntary behaviors were more likely to value indirect benefits from their
social exchanges. However, Social Responsibility and Social Equity were not significant
predictors of Generalized Exchange. These findings may be problematic for the military
branches that have traditionally depended upon patriotic and altruistic feelings among young
Americans to help facilitate enlistments.

A significant relationship was established between Community Benefits and
Generalized Exchange. The path coefficient between Generalized Exchange and Military
Performance was also statistically significant. Thus, respondents who valued indirect benefits
reported higher levels of Military Performance. In addition, the path coefficient between
Commitment and PMS was significant; thus, support was established for Hypothesis 6. The
Commitment of respondents predicted their level of PMS. Therefore, if possible, perhaps the
military recruiting managers may want to select committed prospective recruits; committed
recruits have higher levels of job satisfaction, performance, reenlistment intentions, and are
easier to train (Allen & Meyer, 1990). As expected, there was a positive and significant path
coefficient between Generalized Exchange and Commitment, which established support for
Hypothesis 5. It implies that respondents who valued indirect benefits to the larger society
also had high levels of Commitment. Hypothesis 2 was supported because a chi-square
difference test indicated that Prior Military Exposure moderated the Generalized Exchange-
PMS relationship; lastly, Commitment was found to be a mediator of the Generalized
Exchange-PMS relationship, indicating support for Hypothesis 4.

V. CONCLUSION

Using structural equation modeling techniques to evaluate the hypotheses, our
findings indicate that the relationship between social exchange theory and propensity for
military service was mediated by commitment and moderated by prior exposure to the
military via family members’ prior military service. Voluntary association activities were
positively related to generalized exchange. Community benefits, military performance and
voluntary association activities had a statistically significant impact on generalized exchange.

These findings lead to the conclusion that prospective military recruits who have a
generalized exchange orientation may be more committed to the military and more likely to
reenlist as a result of their commitment. Our findings may also have practical implications for
military recruiting managers. Indeed, the military should consider directing its recruitment
efforts toward young Americans who embody these generalized exchange characteristics (i.e.,
a predisposition to voluntary behaviors and a belief in the effectiveness of the military). Also, the military may
consider selecting prospective recruits with prior exposure to the military. This practice may
go a long way in assuring that recruits are endowed with the qualities identified as being
essential and predictive of propensity for military service. Furthermore, the results of this
study clearly indicate that parents influence the enlistment behavior of their children. If the
military has a good image in the eyes of the parents, then their children may be more inclined
to enlist in the military, ceteris paribus.

The current research, like other studies, has some limitations. First, the results are
limited to young Americans between the ages of 18 to 24 years. Thus, generalizability is
limited to this age group, and therefore social exchange theory may not explain the
propensity for military service outside of this age range. Another limitation of the study is
that the sample size did not contain a substantial number of African Americans (only 6.2%),
while they represent 22.44% of the active duty military (Department of Defense, 2002).

REFERENCES

Allen, Natalie J., and Meyer, John P. “The Measurement and Antecedents of Affective,
Continuance and Normative Commitment to the Organization.” Journal of
Occupational and Organizational Psychology, 63, (1), 1990, 18-38.

Bateman, Thomas S., and Strasser, Stephen. “A Longitudinal Analysis of the Antecedents of
Organizational Commitment.” Academy of Management Journal, 27, 1984, 95-112.
Brown, Ulysses J., III and Rana, Dharam S. “Generalized Exchange and Propensity for
Military Service: The Moderating Effect of Prior Military Exposure.” Journal of
Applied Statistics, 32, (3), 2005, 259-270.
Department of Defense. Department of Defense 27th Annual Population Representation in the
Military Services Report. 2002. http://dticaw.dtic.mil/prhome/poprep2000/.
Eccles, Jacquelynne S., and Barber, Bonnie L. “Student Council, Volunteering, Basketball, or
Marching Band: What Kind of Extracurricular Involvement Matters?” Journal of
Adolescent Research, 14, (1), 2001, 10-43.
Faris, John H. “The Looking-glass Army: Patriotism in the Post-cold War Era.” Armed
Forces and Society, 21, (3), 1995, 411-435.

THE MARYLAND WAL-MART BILL: A NEW LOOK
AT CORPORATE SOCIAL RESPONSIBILITY

Frank S. Turner, Morgan State University


f_turner@house.state.md.us

Marjorie G. Adams, Morgan State University


maadams@moac.morgan.edu

ABSTRACT

Wal-Mart has had phenomenal success becoming America’s largest and most
profitable corporate giant. It is estimated that by the end of 2005, it had created over
100,000 jobs, adding to its 1.2 million U.S. associates. It will pay billions in taxes, provide
health insurance to hundreds of thousands of employees, and support 100,000 charitable
organizations. It is widely recognized that the large modern corporation has many
stakeholder groups – employees, unions, suppliers, consumers, and the community in which
it operates. Related to this understanding is the view that such entities have a responsibility
to act in the best interests of their constituent groups. To the extent possible, the
corporation should treat its employees fairly, bargain honestly with unions, make its products
safe, and be a good citizen of the local community (Barnes, 2005). Overall, Wal-Mart has
made a significant contribution to the employment base of many communities and has
provided health insurance for many of its employees. However, Wal-Mart officials
condemned the 2005/2006 Maryland General Assembly for demanding that the corporation
act, what the Assembly considers to be, responsibly toward its employees in the state by
assuming a fair share of their health care cost. Wal-Mart says it provides health care
coverage to approximately 45% of its U.S. workforce (Lazarick, 2005). This research will
examine the image and ethical practices of Wal-Mart in light of the issues it confronts with
the Maryland General Assembly and other legislative agencies across the country.

I. INTRODUCTION

One of the most contentious issues affecting the 2005 Maryland legislative session
involved a bill that had a direct impact on a single company, Wal-Mart. The debate over the
specifics of the legislation continued well after the legislative session was over. The same
issue will be addressed in the 2006 legislative session while the likelihood of a final
resolution will not be achieved until some years in the future. Overall, the Governor, elected
officials, health care administrators and providers, consumer advocates, and the general
public are increasingly concerned about the growing population of uninsured and under-
insured and the rising cost of existing health plans and their impact on the aging population.
The health care crisis has led many health care professionals and administrators, as well as
other consumer advocates, to begin to scrutinize business practices to make sure that large,
profitable businesses that employ large numbers of people contribute their fair share to health
care costs for their employees. The impact of this legislation is nominal compared to the
number of Americans who are currently uninsured or underinsured. However, this legislation
sends a message to corporate America that it has a responsibility to be an active partner in
helping to lessen the health care crisis. Many Americans expect corporate giants like Wal-
Mart to provide basic, affordable health coverage for the employees who help make Wal-Mart
the wealthiest corporation in the world.

II. WAL-MART AND THE NATIONAL HEALTH CARE CRISIS

There are approximately 50 million Americans uninsured or underinsured in the
United States. Dr. Alfred Sommer of the Johns Hopkins School of Public Health said,
“despite the fact that we have very fancy hospitals and the most modern drugs and
technology, we are 27th in infant mortality and are somewhere around 16th or 18th in life
expectancy. As a system that is taking care of society as a whole we don’t do very well.”
(Hill, 2005)

Several states, facing rapidly increasing Medicaid costs, are turning to the private sector
to bear more of the cost. Wal-Mart, in particular, has been the focus of several states’
accusations that the company is providing substandard health benefits to its employees.
According to the New York Times, Wal-Mart full-time employees earn on average $1,200 a
month, or about $8 an hour. Some states claim many Wal-Mart employees end up on public
health programs such as Medicaid. A survey by Georgia officials found that more than
10,000 children of Wal-Mart employees were enrolled in the state’s Children’s Health
Insurance Program (CHIP) at a cost of $10 million annually. Similarly, a North Carolina
hospital found that 31% of 1900 patients who said they were Wal-Mart employees were
enrolled in Medicaid, and an additional 16% were uninsured.

As a result, some states have turned to Wal-Mart to assume more of the financial
burden of its workers’ health care costs. California passed a law in 2003 that will require
most employers to either provide health coverage to employees or pay into a state insurance
pool that would do so. Advocates of the law say Wal-Mart employees cost California health
insurance programs about $32 million annually.

According to the Daily Times, Wal-Mart says that its employees are mostly insured,
citing internal surveys showing that 90% of workers have health coverage, often through
Medicare or family members’ policies. Wal-Mart officials say the company provides health
coverage to about 537,000 people, or 45% of its total work force. As a matter of comparison,
Costco Wholesale provides health insurance to 96% of eligible employees.

(Department of Legislative Service) Wal-Mart’s position is disputed in a September 30, 2003,
Wall Street Journal article on Wal-Mart’s cost-cutting in health benefits, which allows it to
“spend almost 40% less than the average U.S. corporation and 30% less than the rest” of the
industry. Hourly employees wait six months before they qualify for health insurance, which
pays for no vaccinations or preventive services, has deductibles higher than most, and does
not cover pre-existing conditions in the first year. A lower percentage of eligible employees
sign up for health insurance than the rest of retail (Lazarick, 2005).

III. WAL-MART’S CORPORATE IMAGE

Wal-Mart officials condemned the Maryland General Assembly’s effort as an
unneeded intrusion. “We think that this sets a bad precedent by singling out one employer
when it’s a much bigger issue,” said Nate Hurst, a government relations manager for Wal-
Mart. The retailer already is dealing with a spate of bad publicity, including intense criticism
of its labor practices and a series of lawsuits. In 2005, the company agreed to pay $11
million to settle claims that one of its cleaning contractors hired illegal immigrants. The
company is also battling a sex discrimination complaint filed by current and former
employees.

Wal-Mart has conquered rural America with more than 3000 stores, but it desperately
needs to break into the urban market to maintain its phenomenal growth. Since its arrival in
the region 13 years ago, Wal-Mart has quickly planted 147 stores in Maryland and Virginia,
including 32 in the greater Washington area. It is now the number one private employer in
Virginia and one of the 10 largest employers in Maryland, with 52,000 workers. (Barbaro,
2005) Despite its more than 5000 stores and $285 billion in sales worldwide, Wal-Mart’s
future is closely tied to continued expansion and the importing of inexpensive Chinese
laptops, televisions, and clothing. (Wagner, 2005) These items are a significant fraction of
the goods sold by Wal-Mart. (Williams, 2005). Market analysts have indicated that Wal-
Mart’s image problem has had no measurable impact on consumers’ willingness to shop at
the chain. Sales grew 11 percent in 2004 and Wal-Mart estimates that 90 percent of
Americans, or 270 million people, shopped at one of the company’s divisions in 2004.
(Wagner, 2005) This earned Wal-Mart $10 billion in profits in 2004. (Struever, 2005)

In 2004, the retailer lost battles to build stores in Inglewood, California, Chicago, and
New York City. During the same time, dozens of local governments, including Calvert,
Prince William and Montgomery counties in the Washington, D.C. region, have passed
zoning rules making it difficult for Wal-Mart to expand in urban markets. (Barbaro, 2005)

IV. THE FAIR SHARE HEALTH CARE ACT: WAL-MART’S BATTLE

Two Maryland legislative mandates, SB790 and HB1284, as amended, established the
Fair Share Health Care Fund. The Fund is supported with monies received from employer
payments and other sources. The fund is subject to audit by the Office of Legislative Audit.
The proposed Fair Share Health Care Act would force Wal-Mart to increase its spending on
health care coverage for its Maryland employees. According to published reports, 80% of
the employees are eligible for health benefits, but only 52% of eligible employees choose to
enroll in company-sponsored insurance for which they pay part of the premium.

The Fair Share Health Care Act requires companies with 10,000 or more employees
in the State of Maryland to spend at least 8% of their payroll on worker health care, or pay
the difference to the State’s Department of Labor, Licensing and Regulation (DLLR). Health
care costs are payments for medical care, prescription drugs, vision care, medical savings
accounts, and any other health benefits recognized under federal tax law. Proponents of the
Bill say that the measure is a crucial public policy statement about the responsibility of
employers to their workers. They say that because Wal-Mart pays relatively little for its
employees’ health care, other businesses and citizens foot its bill by paying more in higher
insurance and taxes. Opponents, on the other hand, call the bill the first step in a move to
socialize health care. They argue that increasingly smaller companies will be subjected to the
rule and the required spending would rise (Green, 2005).
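
The Act’s pay-or-play mechanism reduces to a simple calculation: the gap, if any, between 8% of payroll and actual health care spending. A minimal sketch of that rule, using hypothetical payroll and spending figures:

```python
# Sketch of the Fair Share Health Care Act's pay-or-play rule as
# described above: an employer with 10,000 or more Maryland employees
# either spends at least 8% of payroll on health care or pays the
# shortfall to DLLR.  The payroll and spending figures are hypothetical.

def fair_share_payment(payroll, health_spending):
    """Dollars owed to the state: the gap, never negative, between
    8% of payroll and actual health care spending."""
    required = payroll * 8 // 100        # 8% threshold, in whole dollars
    return max(0, required - health_spending)

# An employer spending 6% of a $500 million payroll falls 2% short:
print(fair_share_payment(500_000_000, 30_000_000))   # -> 10000000
# One spending 9% owes nothing:
print(fair_share_payment(500_000_000, 45_000_000))   # -> 0
```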

An employer, beginning January 1, 2007, must submit a report to DLLR specifying
the number of its employees and the amount and percentage of payroll that was spent on
health insurance costs during the year immediately preceding the previous calendar year.
DLLR must annually verify which employers have 10,000 or more employees and ensure that
all employers with 10,000 or more employees have made the required report. Failure to report
this information may result in a $250 civil penalty for each day the report is not timely filed.
Failure to make the required payment may result in a $250,000 penalty. (Department of
Legislative Service).

Wal-Mart officials have told state lawmakers that the firm currently spends between
5% and 7% of its Maryland payroll on health coverage. The legislation would force the
retailer to raise that to 8 percent or pay the difference to the state. According to Thomas A.
Firey, senior fellow at the Maryland Public Policy Institute, the Act would extend coverage
to 1000 to 3000 people, thereby lowering the State’s uninsured rate by, at best, about 0.05
percent. Firey suggests that in order for Wal-Mart to achieve that gain, it will have to
increase its health insurance spending by approximately 5 million dollars a year. Wal-Mart
can free up the necessary money by cutting employee hours and skimping on
raises, bonuses and other perks. (Firey, 2005) There is little sympathy for that argument,
especially since 2004 worldwide profits exceeded 10 billion dollars. Every person counts,
and whenever we can provide health coverage for someone who does not have health
coverage, we are creating a more humane society and lowering the cost for those who pay
premiums on a regular basis. Stakeholders should want healthy employees to continue to
maximize profit potential and demonstrate corporate responsibility. Firey also explained that
the real problem in Maryland is due to insurance mandates, state regulations and the special
tax on health care premiums. This could be a valid argument if insurance rates were rising in
Maryland and falling in the other 49 states. However, insurance costs are rising on a national
basis regardless of those factors. A comprehensive national health insurance policy, tort
reform, a reduction in insurance fraud, fewer doctors ordering multiple procedures to ward
off possible malpractice lawsuits, and insurance company settlements rather than litigation
are all factors that could have an appreciable impact on reducing insurance costs.

Steven Pearlstein (2005), who writes for the Washington Post, indicated that the Wal-
Mart bill was purely a symbolic effort driven by the desire of the Democratic politicians to
demonstrate solidarity with union workers who see Wal-Mart as a threat to wages and
benefits that have risen to uncompetitive levels. He further indicated that, as a society, if it is
believed that everyone should have health coverage within the context of an employee-based
system, small business should not be exempt (Pearlstein, 2005). Most small businesses, the
dominant business model, operate as sole proprietorships or partnerships, have limited
resources, and employ one, two, or at most three workers. To
expect a business of this size and resources to provide health coverage would in most cases
severely hinder future operations especially when many of those businesses operate on very
thin margins. Any bills attempting to require such a mandate would for the most part be dead
on arrival.

The Baltimore-Washington, D.C. area is one of the more expensive areas to live in
the United States. Housing, gasoline, food and transportation costs have a tremendous
bearing on wages and benefits. Wal-Mart’s wages, health coverage package, and pension
benefits, as previously indicated, are not in line with the cost of living in the metropolitan area.

V. WAL-MART AND THE ECONOMY

After every legislative session in Maryland and other states, the Governor sets several
days aside for bill signing ceremonies. In 2005, the Governor held a veto ceremony that took
place in Somerset County (on the Eastern Shore of Maryland) at the Circuit Court House in
Princess Anne where Wal-Mart Stores, Inc is planning to open a distribution center that could
employ hundreds of people (Fisher, 2005). The Governor has truly sided with Wal-Mart
executives by vetoing the Fair Share Health Care Bill. The bill passed in the State Senate
with one more vote than it needed to override the Governor’s veto, and was one vote short of

the mark in the House of Delegates. It should be noted that several delegates who supported the
bill were absent on the day of the vote (Green, 2005). Despite the vote, Governor Ehrlich said
that the Wal-Mart Bill “threatens the economic health of this terrific county.” Somerset
County is the poorest county in the state and it anticipates the 800 jobs (paying an average
starting wage of $12 per hour) that will be generated by the distribution center Wal-Mart
plans to establish there. “Without employers, there are no employees. There is no health
insurance” (Hopkins, 2005). Governor Ehrlich characterized the issue as a fight over the
future job growth in the state of Maryland, not just the Eastern Shore. The Governor’s remarks
place more emphasis on creating future jobs and less on retaining existing jobs that already
provide health care.

The Governor’s support of a seemingly less free market system that creates jobs
irrespective of the impact on the existing job market needs to be viewed with some
trepidation. States should protect existing jobs and expand job opportunities in areas that
offer competitive salaries and benefits. This is how states maintain a high standard of living
for its citizens and lessen demands on state services. Maryland’s approach to job creation
after the last recession focused on the new and emerging bio-tech industry, emerging
technologies, growing small firms, incubators, the health care related entities, and expanding
government contracts. This approach has allowed states like California, New Jersey,
Massachusetts and Maryland to move forward and create good paying jobs. The evidence is
clear that Wal-Mart has significantly increased sales and expanded its market share in the
greater Baltimore-Washington, D.C. corridor. This expansion of Wal-Mart threatens
employees’ quality of life and standard of living. The union issue appears to be collateral
to the overall debate. Most employees and families are more concerned about being able to
pay for higher transportation costs, expensive housing, tuition increases, and other related
expenditures.

VI. CONCLUSION

Overall, Wal-Mart has made a significant contribution to the employment base of many
communities, has helped community organizations, and has provided health insurance for
most of its employees. However, Wal-Mart can do better. A 2004 report by the Democratic
Staff of the House Education and Workforce Committee entitled “Everyday Low Wages: The
Hidden Price We All Pay for Wal-Mart,” ‘analyzed the company’s books and assessed the
costs to U.S. taxpayers of the many employees so underpaid that they qualify for welfare
benefits.’ The report indicated that for a 200-employee Wal-Mart store, the government
spends $108,000 a year for children’s health care, $125,000 annually in tax credits and
deductions for low-income families, and $42,000 a year in housing assistance. The report
estimated that a 200-employee Wal-Mart store costs federal taxpayers $420,750 a year (this is
about $2,103 per Wal-Mart employee). This sum translates into a total annual corporate
welfare bill of $2.5 billion for Wal-Mart’s 1.2 million U.S. employees (Olesker, 11-25-05).
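
The report’s per-employee and company-wide figures follow from simple division and multiplication; a quick check of the arithmetic, using only the numbers quoted above:

```python
# Checking the House committee report's arithmetic quoted above: a
# 200-employee Wal-Mart store is said to cost federal taxpayers
# $420,750 a year in children's health care, tax credits, housing
# assistance, and related programs.

store_cost = 420_750                # annual federal cost per store
employees_per_store = 200

per_employee = store_cost / employees_per_store
print(per_employee)                 # -> 2103.75 (the "$2,103" quoted)

# Extrapolated, as the report does, to 1.2 million U.S. employees:
total = per_employee * 1_200_000
print(total)                        # -> 2524500000.0 (about $2.5 billion)
```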

Many American corporations are experiencing flat or declining sales of consumer
goods while Wal-Mart continues to expand and grow its market share. With over 10 billion
dollars in profits in 2004, Wal-Mart can provide basic, affordable health insurance for all of its
employees and take much of that expense off of the backs of taxpayers. By providing a
basic, affordable health insurance plan, Wal-Mart would show to a greater degree that its goal
is to act in the best interest of its employees and move closer to meeting its corporate
responsibilities. Wal-Mart can capitalize on this opportunity and be a corporate leader as
America continues to address the health care crisis.

REFERENCES

Barbaro, M. (May 23, 2005). Putting on the Brakes, The Washington Post, E1 & E10.
Barnes, J. (2005). Business Ethics and Corporate Social Responsibility, Law for Business, 9th
edition, 67.
Buchanan, C. (June 20, 2005) Wal-Mart, Press Release.
Department of Legislative Services, (May 9, 2005). General Assembly of Maryland, Fiscal
and Policy Notes, 1-2.
Firey, Thomas A. (April 29, 2005). Maryland’s 0.05 Percent Solutions, The Daily Record.
Fisher, J. ( May 20, 2005). Ehrlich: Wal-Mart Bill is Anti-Business, Daily Times.
Green, Andrew J. (November 18, 2005), Wal-Mart Hires More Lobbyists to Help Topple
Benefits Bill, The Baltimore Sun.
Hill, M. (July 24, 2005). Picture of Health, Perspective, The Baltimore Sun, C1.
Hopkins, J.( May 20, 2005). Wal-Mart to Delay Somerset Center, The Baltimore Sun.
Lazarick, L. (June 2005). Other Side of Wal-Mart Story, The Business Monthly, 9.
Olesker, Michael. (November 25, 2005). “Battling The Wal-Mart Behemoth Will Take
Guts,” The Baltimore Sun, B1.
Pearlstein, Steven (April 13, 2005). “‘Get Wal-Mart’ Bill is Just For Show,” Washington
Post, C1.
Struever, B. (2005). But Big Companies Now Must Pay Their Share of Health Care,
Washington Post.
Wagner, J. (April 6, 2005). Maryland Passes Rules on Wal-Mart Insurance, The Washington
Post.
Walker, A. (May 4, 2005). 500 Jobs to be Lost as Giant Revamps, The Baltimore Sun, 1.
Williams, L. (June 26, 2005). Danger Seen in China’s Economic Power, Perspective, The
Baltimore Sun, C1.

DISCRIMINATION, POLITICAL POWER, AND THE REAL WORLD

Reza Fadaei, National University


rfadaeit@nu.edu

ABSTRACT

Every single person is a unique combination of genes, which are both the cause and
the consequence of our differences. This society has been formed on the principles of
diversity and yet succeeded in becoming the world’s major economic and political power, long
before the complexity of our differences emerged as an issue. Over the years, minorities and
women have been viewed by society in many different ways regarding what their function in
society should truly be. There was wide disagreement about the particular jobs a
woman or minority might pursue. At one extreme, Americans believed that no woman or
minority should work unless compelled to by the absence of the male breadwinner, and that
very few jobs were appropriate for women. At the other extreme, a very small minority of
Americans believed that every woman and minority should be free to follow the career of her
or his choice. Therefore, this paper investigates whether there is discrimination and an
earnings gap affecting women and minorities in this society, and why wage differentials
exist between white and nonwhite workers.

I. INTRODUCTION

During the early stage of our lives, we learn how to talk by reproducing what we hear,
we learn the values accepted as standards, by our surroundings, and we learn to like and
dislike. Historically, the American society has been formed on the principles of diversity and
yet succeeded in becoming the world’s major economic and political power, much before the
complexity of our differences emerged as an issue.

Diversity in the workplace has become a popular buzzword in corporate America. But
what does it really mean? That is hard to say given the fact that different firms interpret the
notion of diversity differently (Wilson, p. 21). Diversity in the workplace comes in different
forms such as race, gender, religion, age, sexual preference, social, economic or ethnic
background, education, experience, etc. Our traditional perception of diversity in the
workplace has been focused on minorities and women. Some companies are focusing on the
interests of women and have created advisory teams to deal with the issue (Henderson, p.37).
But this approach has not been a solution for women and minority and only a very small
minority of Americans believed that every woman and minority should be free to follow the
career of her or his choice.

In modern society particularly, work is a source of personal identity. In an industrial
society, people introduce themselves by indicating the kind of work they do, and what one
“does” becomes an important part of the definition and evaluation of self. In America, work is
an especially important determinant of social standing, because our ideology of opportunity
gets interpreted as opportunity for choice, training, and advancement in occupation. Hence,
whether sociologists approach stratification from a functional perspective stressing division
of labor or from a Marxist position emphasizing relation of production, they tend to agree that
work and occupation are “solid social facts that condition life chances.”

Over the years, women and minorities have been viewed by society in many different
ways as to what their function in society truly should be. Between 1890 and 1920, known as The
Progressive Era, nearly everyone believed that women’s first duty was to bear and raise
children and maintain the household, and that women were better fitted for these than
any other function. Nearly everyone agreed that it was sometimes necessary and proper for
some women to work, and that some jobs outside of the home were suitable for some women.
However, there was wide disagreement about the particular jobs a woman might pursue. At
one extreme, Americans believed that no female should work unless compelled to by the
absence of the white male breadwinner, and that very few jobs were appropriate for women.
Work and the relationships between men and women are two subjects that underlie questions
of women’s employment. We can be certain that changes in the types of women’s work have
been influenced by what people have thought and felt about women and about the nature of
work.

The employment of women and minorities was a major public issue around the turn of
the century. The increase in labor-force participation of women has been called the
outstanding phenomenon of our century. Women’s participation in the labor force affects
every aspect of life, including trends in fertility, marriage and divorce, patterns of marital
power and decision-making, and demand for supportive services in the economy. For this
reason, it is said that the greatest changes of the twentieth century may result not from atomic
energy or the conquest of space, but rather from the tremendous increase in the proportion of
women working outside the home (Rozen, 1979).

II. BACKGROUND

Earlier in the century it was virtually impossible for women and minorities to obtain
professional training or to get white-collar jobs. But now, at the end of the century, they are
working as lawyers, doctors, journalists, and in a variety of other white-collar and
professional occupations. People at all levels of economy and social status were influenced to
some degree by the attitudes and ideas of both sides of the issue. The antagonistic view
toward the employment of women is perhaps best illustrated by the idea that almost all
women should retire from work when they marry.

In general, it is difficult to say how much the feminist movement contributed to
broadening education and economic opportunities for women and minorities. The growth of
the industrial economy and the movement of democracy, which influenced much of the
nineteenth century certainly helped their efforts and contributed to their progress. At the end
of the World War II, most people expected a period of severe unemployment that the married
women who had taken these jobs should return to home making, even though the post-war
depression failed to materialize. By the beginning of 1958 the number of women and
minorities in the labor force reached 23 million for the first time .

III. MINORITY AND EARNINGS GAP

The vast majority of working women are in a restricted number of jobs and in low-
paying, low-prestige, and lower-power positions than white men. Even within the same
major occupational groups, women’s earnings are lower than men’s (see Table). In addition
to receiving less pay, women are often excluded from important fringe benefits.

Median Weekly Earnings Of Full-Time Wage And Salary Workers By Sex And Occupational Group*
___________________________________________________________________________
Occupational Group Women Men
Lawyers $624 $806
Managers, Marketing $470 $751
Engineers $580 $691
Accountants $398 $554
Secretaries $286 $322
Bus Driver $285 $389
Elementary School Teachers $415 $490
*Source: Earl E. Mellor, “Weekly Earnings in 1996,” pp.41-46.

The Table shows the median weekly earnings in several of the higher and lower paid
occupations in 1986. The gap in earnings between the highest paid and lowest paid
occupations is quite large. The most obvious difference is that the weekly earnings of women
are, as a rule, considerably less than the earnings of men. A portion of the earnings gap
between men and women is attributable to the fact that the majority of women are employed
in a fairly narrow set of low-wage, female-dominated occupations while a significant number
of men are employed in a different set of male-dominated, high-wage occupations. This
division of the occupational structure into “female” occupations and “male” occupations is
generally referred to as occupational segregation, although some economists prefer the less
pejorative term “occupational concentration”.

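
One way to summarize the gap in the table is women’s median earnings as a share of men’s within each occupation; a short sketch using the table’s figures:

```python
# Women's median weekly earnings as a fraction of men's, using the
# figures from the table above.

earnings = {  # occupation: (women, men), dollars per week
    "Lawyers": (624, 806),
    "Managers, Marketing": (470, 751),
    "Engineers": (580, 691),
    "Accountants": (398, 554),
    "Secretaries": (286, 322),
    "Bus Driver": (285, 389),
    "Elementary School Teachers": (415, 490),
}

for job, (women, men) in earnings.items():
    print(f"{job}: women earn {women / men:.0%} of men's median pay")

# The ratio is below 100% in every occupation listed, from about 63%
# (marketing managers) to about 89% (secretaries).
```
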
From a human capital perspective, the process begins in school, where boys and girls
acquire different quantities and qualities of human capital. Few boys, but many girls, expect
to become homemakers, perhaps not for their entire working lives, but at least for a time
when their children are young. Because of these different expectations, girls take courses in
school (for example, home economics) that explicitly train them for work in the home after
graduation. From an institutional perspective, this is one reason that women do not have the
same opportunity to acquire as much human capital as men. It is true that boys and girls
follow different tracks in school, but more because of sexism and discrimination than rational
choice. On one level, the sexism in our culture socializes girls to prefer office education
classes over auto mechanics in high school, or in college English literature over civil
engineering. These sex role stereotypes also influence parents and school counselors to push
boys but not girls into areas of study leading to careers. Girls also obtain less human capital
because of discrimination.

Because of the segmented nature of labor market and the unequal education and
training opportunities provided women and minorities, the majority of women and minorities
are crowded into a relatively small set of occupations in the labor market, resulting in intense
competition for jobs and low wages. Occupational crowding cannot totally explain the low
level of earnings in minority occupations, and in this case political power plays an important
role.

IV. CONCLUSION

More than six billion people now live on our planet and every single person is a
unique combination of genes, which is both the cause and the consequence of our differences.
In modern society particularly, work is a source of personal identity, and people introduce
themselves by indicating the kind of work they do; what one “does” becomes an important
basis of who one “is”.

Over the years women and minorities have been viewed by society and political
power in different ways as to what their function in society truly should be. There was wide
disagreement about the particular jobs a woman might pursue. The increase in the labor-force
participation of minorities has been called the outstanding phenomenon of our century.
The growth of the industrial economy was on their side, as well as the movement of
democracy, which influenced much of the nineteenth century.

Therefore, minorities have made progress, and today four out of ten workers in the U.S.
are women and minorities. But the vast majority of women and minorities are still working in
a restricted number of jobs and in low-paying, low-prestige, and low-power positions. Even
within the same major occupational groups, women’s earnings are lower than men’s, and do
not have the same opportunity to acquire as much human capital as white men.


CHAPTER 10

FINANCE

THE SHORT TERM AND LONG TERM IMPACT OF THE STOCK
RECOMMENDATIONS PUBLISHED IN BARRON’S

Francis Cai, William Paterson University

Wenhui Li, Buruch College, CUNY

ABSTRACT

This paper studies the profitability of stock recommendations published in Barron's,
a century-old American financial weekly. Barron's has a section, "Research Reports,"
which contains analysts' stock recommendations. Using event study methodology with the
market model as a benchmark, we calculate abnormal returns to ascertain the impact of the
recommendations published in the Research Reports. We find: (1) there are no statistically
significant short-term abnormal returns associated with the published recommendations in
Barron's based on a two-week event window, and (2) there are no statistically significant
long-term abnormal returns based on six-month and twelve-month event windows.

I. INTRODUCTION

This paper studies the profitability of stock recommendations published in Barron's,
a century-old American financial weekly. Every week, Barron's has a section, "Research
Reports," that contains analysts' recommendations. We test whether investors can make
abnormal trading profits by following the recommendations in Barron's over two-week,
six-month, and 12-month periods. The efficient market hypothesis (EMH) holds that stock
prices fully reflect all available information at all times. This applies especially to publicly
available information and suggests that published stock 'tips' cannot provide abnormal
returns to investors acting on them. If they did, that would imply that the tips supply new
information not previously available to the market.

However, the fact that thousands of analysts employed by US investment firms write
investment reports and make stock recommendations every day shows that at least the
investment firms must believe their recommendations work. Some academic researchers
suggest that superior returns are possible, too. Brav and Lehavy (2003) document significant
abnormal returns around analysts' target price changes. Jegadeesh, Kim, Krische, and Lee
(2004) show that quarterly changes in recommendations robustly predict returns, and that
stocks favorably recommended by analysts outperform stocks unfavorably recommended by
them. Womack (1996) shows that brokerage analysts' recommendations have investment
value. Similar positive findings appear in Palmon, Sun and Tang (1994); Wijmenga (1990);
and Syed, Liu and Smith (1989). Barber, Lehavy, McNichols, and Trueman (2001) document
that investment strategies based on consensus recommendations, in conjunction with active
portfolio management, yield annual abnormal returns greater than four percent.

We analyze daily abnormal returns in the US markets from the published stock
recommendations of the weekly financial magazine Barron's. Using a sample from January
2004 to December 2005, we examine how stock prices in the US markets react to the
recommendations. Using event study methodology with the market model as a benchmark,
we calculate abnormal returns to ascertain the impact of the recommendations published in
the Research Reports. We find: (1) there are no statistically significant short-term abnormal
returns associated with the published recommendations in Barron's based on a two-week
event window, and (2) there are no statistically significant long-term abnormal returns based
on six-month and twelve-month event windows.
In this research, we ask the following six questions:
(1) Do security prices on the US markets react to the recommendations published in
Barron's?
(2) Is there any information leakage prior to the publication of share recommendations?
(3) Do the recommendations possess real economic content or permanence, or are they
merely a 'self-fulfilling prophecy'?
(4) Can investors expect to profit by following these recommendations?
(5) Are there any significant positive or negative abnormal returns before and after
publication?
(6) Will there be any abnormal returns over 6-month and 12-month periods?

The plan of this paper is as follows. In Section II, we present our data. Section III
explains the model and methodology. The empirical results are analyzed in Section IV. A
summary and conclusions section ends the paper.

II. THE SAMPLE DATA

The analyst recommendations used in this study are from Barron's Weekly, and the
data for stock returns are from the CRSP database. To test the impact of the publication of
recommendations on abnormal returns, we define the event day (day 0) as the publication
date. The period of plus and minus 10 days surrounding the event day is the 'event window.'

Every week, Barron's contains a "Research Reports" section in the Market Wrap.
Barron's describes it as follows: "Before an investment firm recommends a stock for
purchase, they'll research the company to determine whether or not it's a good investment.
This column provides a sampling of research report information from various investment
firms and analysts."

The same stock might be recommended by analysts from different investment firms
during the event window. To overcome this "overlapping" issue, we check Bloomberg News
for other recommendations on the same stock; if the stock is recommended by any other
investment firm during the event window, we remove it from the sample. The other type of
"overlapping" is a repeat recommendation of the same stock in consecutive weeks of
Barron's; we exclude a stock if it is recommended in consecutive weeks.
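The two screening rules above can be sketched as a small filter. The record format and helper name below are hypothetical illustrations, not taken from the paper:

```python
from datetime import date

# Hypothetical sketch of the paper's two screening rules, assuming a list of
# (ticker, publication_date, source) recommendation records.
def filter_overlaps(recs, window_days=10):
    """Keep a Barron's pick only if no other source covers the same ticker
    within the event window and Barron's itself does not repeat it within
    a week (the consecutive-weeks rule)."""
    kept = []
    for tkr, day, src in recs:
        if src != "Barron's":
            continue  # we only evaluate Barron's recommendations
        clash = any(
            t == tkr and r_src != "Barron's" and abs((d - day).days) <= window_days
            for t, d, r_src in recs
        )
        repeat = any(
            t == tkr and r_src == "Barron's" and 0 < abs((d - day).days) <= 7
            for t, d, r_src in recs
        )
        if not clash and not repeat:
            kept.append((tkr, day))
    return kept
```

For example, a Barron's pick also covered by another firm three days later would be dropped, while an uncontested pick survives.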

Our sample includes 484 recommendations from Research Reports from January 2004
to December 2004. Table A-1 summarizes the recommendations by upgrades, downgrades,
maintained ratings, and initiations. Among the 484 recommendations, 133 are buy, 27 strong
buy, 6 overweight, 3 speculative buy, 106 outperform, 3 accumulate, 2 attractive, 37 market
perform, 44 hold, 32 neutral, 53 sell, 11 underweight, and 13 underperform. The
characteristics of our sample are
consistent with the findings of McNichols and O'Brien (1998), Barber, Lehavy, McNichols,
and Trueman (2001), and Kadan, Madureira, Wang, and Zach (2005) in that a relatively
small percentage of the sample consists of sell recommendations.

The stock sample includes stocks from the communications, pharmaceutical,
telecommunications, computer, wine, chemical, retail, conglomerate, electronics, and
financial industries. These stocks are representative of the US stock market. To use the
market model as a benchmark for calculating abnormal returns, we need market return data;
we use the S&P 500 Index as the market return proxy. To estimate the parameters of the
market model, we use a history of two hundred daily returns before the event day.

III. MODEL AND METHODOLOGY

We use a market-and-risk-adjusted model to estimate the magnitude of share price
adjustments (abnormal returns) on each day of the event window. The model used in our
paper is the market model, which served as the basis for the pioneering event study
conducted by Fama, Fisher, Jensen and Roll (1969). Brown and Warner (1980) later found
that this simple model was more powerful in identifying abnormal performance than any of
the other, more complex risk-adjusted models available.

The market model used to estimate the abnormal return for the jth stock in period t is as
follows:
Rj,t = αj + βj Rm,t + εj,t (1)
where
Rj,t = return on security j on day t
Rm,t = return on the market on day t
αj = a constant, the stable component of security j's returns
βj = beta of stock j, assumed stable over time
εj,t = error term, the return due to non-market forces (abnormal return).
Equation (1) is used to find the normal or expected returns. According to this model,
each security's return in period t is expressed as a linear function of the contemporaneous
return on the market and a random error term (εj,t) that reflects security-specific returns. The
coefficients of the linear market model (αj, βj) are estimated by regressing observed rates of
return for stock j on the corresponding rates of return for a market index. In computing these
parameters, daily rather than monthly data are used because price adjustments may occur
within a few days after publication. Brown and Warner (1985) found that for daily returns
the market model was most successful in identifying abnormal performance.
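As a rough sketch of the OLS estimation step (the paper's GARCH refinement, described below, is omitted), the market-model parameters of eq. (1) and the abnormal returns they imply might be computed as follows; function names are illustrative, not from the paper:

```python
import numpy as np

# Minimal sketch: estimate alpha_j and beta_j by ordinary least squares over
# the estimation period, then form abnormal returns as in eq. (3).
def estimate_market_model(stock_ret, market_ret):
    """Regress stock returns on market returns; return (alpha_j, beta_j)."""
    X = np.column_stack([np.ones_like(market_ret), market_ret])  # intercept + Rm
    coef, *_ = np.linalg.lstsq(X, stock_ret, rcond=None)
    return float(coef[0]), float(coef[1])

def abnormal_returns(stock_ret, market_ret, alpha, beta):
    """AR_{j,t} = R_{j,t} - (alpha_j + beta_j * R_{m,t})."""
    return np.asarray(stock_ret) - (alpha + beta * np.asarray(market_ret))
```

On returns that are exactly linear in the market, the regression recovers the coefficients and the abnormal returns are zero, which is a quick sanity check of the setup.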

We use a GARCH model to improve estimation accuracy, since ordinary least
squares (OLS) assumes homoscedastic error terms. A growing body of literature indicates
that many daily return series exhibit heteroscedasticity, with the variance of the forecast
error depending on the size of the preceding disturbance. ARCH and GARCH models have
been widely used to deal with this heteroscedasticity in time series analysis.

We collected 450 daily returns for each recommended stock and divided the data
into two periods: the estimation period and the event window. The time line of the whole
sample period runs from T = -189 to T = +260. We estimated the market model
parameters over the estimation period from T = -189 through T = -11, which yields an
estimation period of 179 days. To investigate abnormal returns over a 21-day period (10
days before and 10 days after the event), a 6-month period, and a 12-month period, we use
three event windows: the 21-day window has twenty-one days, from T = -10 to T = +10
including the event day T = 0; the 6-month window has 141 days, from T = -10 to T = +130;
and the 12-month window has 271 days, from T = -10 to T = +260.
Assuming that the estimated parameters αj and βj remain unchanged over the sample
period, the expected return E(Rj,t) is computed for each stock over the event window: from
t = -10 to t = +10 for the 21-day window, from t = -10 to t = +130 for the 6-month window,
and from t = -10 to t = +260 for the 12-month window.
E(Rj,t) = αj + βj Rm,t (2)
The abnormal return ARj,t of stock j is defined as the deviation of the stock's return
from its expected return, given the return earned by the market index on day t. Using
estimation period data to estimate the market model parameters and assuming these
parameters hold in the event window, the abnormal returns in the event window are
estimated as follows:
ARj,t = Rj,t - E(Rj,t), or ARj,t = Rj,t - (αj + βj Rm,t) (3)
where Rj,t and Rm,t are the observed daily returns for security j and the market index,
respectively, on day t during the event window.
If publication of share recommendations has no impact on the sample stocks, then on
average one would expect no abnormal return:
E(ARj,t) = 0 (4)
assuming the standard assumptions hold.
To determine the statistical significance of abnormal returns on any event day t for stock j,
we first compute the Standardized Prediction Error (SPEj,t), an approach originally proposed
by Patell (1976) and Patell and Wolfson (1999), and popularized in the finance literature by
Dodd and Warner (1983). Next, we construct the test statistic Zt for every day t in the event
window across all N stocks. Because the SPEt for a given event-window day aggregates
observations from different calendar periods and across all sample stocks, unfavorable and
favorable effects of confounding events may offset each other.

Assuming that the abnormal returns ARj,t are independently distributed with finite
variance and that publication of share recommendations does not lead to abnormal returns,
the null hypothesis is that publication of share recommendations in Barron's has no
systematic effect on recommended stocks' prices:
H0: publication of stock recommendations in Barron's has no statistically significant
impact on stock prices.
H1: publication of stock recommendations in Barron's has a statistically significant
impact on stock prices.
We also investigate the true economic impact, or permanence, of the press
recommendations to ascertain whether a 'self-fulfilling prophecy' effect exists.
Additionally, we explore whether following press recommendations enables abnormal
profits. We compute the average cumulative abnormal return (ACAR) to analyze the
aggregate effect of the published information in the days prior to publication and determine
whether any 'leakage' occurs.

To ascertain cumulative abnormal returns in the event window, we standardize each
ACAR and test for significance by computing the statistic Z(t1,t2), or ZACAR, assuming
that abnormal returns during the estimation period are independently distributed and that the
distribution of the test statistic is standard normal.
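The aggregation of daily abnormal returns into ACARs can be sketched as follows. This is a simplified illustration that omits the standardization step; the names are not from the paper:

```python
import numpy as np

# Sketch: rows of ar_matrix are stocks, columns are event days (e.g. -10..+10).
def average_car(ar_matrix):
    """Average the abnormal returns across stocks day by day, then cumulate
    over the event window to obtain the ACAR series."""
    aar = np.asarray(ar_matrix).mean(axis=0)  # average AR on each event day
    return np.cumsum(aar)                     # running sum from the window start
```

With two stocks earning (1%, 2%) and (3%, 0%) on two event days, the average ARs are 2% and 1%, so the ACAR path is 2% then 3%.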

IV. RESEARCH ANALYSES

A. 21-day window period

For reporting purposes and the significance of the sample size, we report results only
for favorable recommendations (buy + strong buy + speculative buy + outperform +
overweight + positive + attractive + accumulate). There are insignificant changes, with
small ups and downs in relative prices from day -9 to day -5, indicating a random pattern of
stock price movements. From day -4 to day -1, relative prices still show a random pattern
with larger movements. On the event day there is a large upward movement, indicating a
price jump on the recommendations. From day +1 to day +5, relative price movements
revert to the random pattern. Relative prices trend downward from day +6 to day +8, and
from day +9 to day +10 they move back up to the 1 mark, ending almost where they started
on day -10.
Figure 1. Relative price over the event window, stock recommendations in Barron's,
1/04-12/04 (y-axis: relative price, approximately 0.994 to 1.005; x-axis: event day, -10 to
+10; chart not reproduced here).

The significant AR of .477% on day +3 does not warrant a profitable short-term trading
strategy based on the published recommendations: a round trip of buying and selling to take
profit in US stock markets would involve transaction costs of about 1.7%. However,
investors could earn a short-term abnormal return by buying the recommended stocks before
the recommendations are published, a trading strategy that requires "insider information."
Who are the possible profitable traders? They may be clients of the brokerage houses that
make the stock recommendations, people at the journal, or people at the printing agency.
The Z-value of 1.517 for the ACAR of 3.336% on day +4 is significant at the 12 percent
level. From day 0 to day +4 the ACAR is 1.215% (3.336% on day +4 minus 2.121% on day
0). A strategy of buying the recommended stocks on day 0, after the recommendation is
published, does not yield a reliable abnormal return net of transaction costs of 1.7%.

To see whether investors can earn abnormal returns by following the newspaper
recommendations, we compute the average cumulative abnormal returns (CAR). The results
are shown in Figure 3. From day -10 to day -1, the period before the event day, the
cumulative abnormal returns reach 2.10%. From the event day to day +10, the cumulative
abnormal returns are 2.02% (4.12% on day +10 minus 2.10% on day -1). We draw two
conclusions from Figure 3. First, since the average transaction cost in US stock markets is
0.85% for a single trade and 1.70% for a round trip, investors can earn only an insignificant
abnormal return of 0.32% (2.02% - 1.70%) by buying the recommended stocks on day 0.
Second, investors can earn abnormal returns if they know the stock recommendation prior to
publication: an investor who buys the recommended stock on day -9 and holds it to day +10
earns cumulative abnormal returns of 2.22% net of transaction costs (4.12% - 1.70% -
0.196%).
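The net-profit arithmetic above can be checked with a one-line helper; this is only a sketch, using the cost and CAR figures stated in the text:

```python
# Back-of-envelope check: cumulative abnormal return (in percent) minus the
# stated deductions, e.g. the 1.70% round-trip transaction cost and, for the
# pre-publication strategy, the 0.196% day -10 abnormal return.
def net_abnormal_profit(car_pct, costs_pct):
    """Return CAR net of all listed deductions, all in percentage points."""
    return car_pct - sum(costs_pct)
```

For the day-0 strategy this gives 2.02 - 1.70 = 0.32 percentage points; for the pre-publication strategy, 4.12 - 1.70 - 0.196 ≈ 2.22 percentage points.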

V. CONCLUSIONS

In this paper, we have examined the impact of stock recommendations in Barron's, a
weekly financial magazine, studying how stock prices in the US markets react to the
recommendations over a 21-day period, a 6-month period, and a 12-month period. Using
event study methodology with the market model as a benchmark, we calculate abnormal
returns to ascertain the impact of the published recommendations. We find that there are no
statistically significant short-term or long-term abnormal returns associated with the
published recommendations. However, there are profitable opportunities for investors who
act prior to publication.

In summary, these results indicate that the stock recommendations in Barron's contain
no useful economic information for investors who act on them once published. The possible
abnormal returns for investors who buy the stocks before the recommendations are made
public are evidence that the market is strong-form inefficient, and the delayed responses of
investors to the newspaper recommendations are most likely evidence that the market is
semi-strong-form inefficient.

REFERENCES

Barber, B. M., R. Lehavy, M. McNichols, and B. Trueman, 2001, Security analyst
recommendations and stock returns, Journal of Finance 56, 533-563.
Bjerring, J. H., J. Lakonishok and T. Vermaelen, 1983, Stock prices and financial analysts'
recommendations, Journal of Finance 38, 187-204.
Bollerslev, T., R. Chou and K. Kroner, 1992, ARCH modeling in finance, Journal of
Econometrics 52, 5-59.
Brown, S. J. and J. B. Warner, 1980, Measuring security price performance, Journal of
Financial Economics 8, 205-258.
Dimson, E. and P. Marsh, 1986, Event study methodologies and the size effect: the case of
UK press recommendations, Journal of Financial Economics 17, 113-142.
Dimson, E. and P. Marsh, 1984, An analysis of brokers' and analysts' unpublished forecasts
of UK stock returns.
Jegadeesh, N., J. Kim, S. D. Krische, and C. M. C. Lee, 2004, Analyzing the analysts: when
do recommendations add value?, Journal of Finance 59, 1083-1124.
Kadan, O., L. Madureira, R. Wang, and T. Zach, 2005, Conflicts of interest and stock
recommendations: the effects of the Global Settlement and related regulations,
working paper, November 2005.
Lawrence, M., Q. Sun and F. Cai, 1996, Press recommendations and abnormal returns on
the Stock Exchange of Singapore, The Journal of International Finance 4(2), 49-163.
Liu, P., S. D. Smith and A. A. Syed, 1990, Stock price reactions to the Wall Street Journal's
securities recommendations, Journal of Financial and Quantitative Analysis 25, 399-
410.

INVESTOR RATIONALITY IN PORTFOLIO DECISION MAKING:
THE BEHAVIORAL FINANCE STORY

Sudhir Singh, Frostburg State University, Frostburg, Maryland
ssingh@frostburg.edu

ABSTRACT

Conventional finance theory assumes that investment decision-makers are rational
utility-maximizers. Cognitive psychology, in contrast, holds that human decisions are
susceptible to several illusions: those caused by heuristic decision-making processes and
those arising from the adoption of "mental frames." These heuristics may lead to cognitive
illusions such as representativeness, overconfidence, anchoring, loss aversion, framing, and
mental accounting. Proponents of behavioral finance argue that heuristic-driven bias and
framing effects can cause market prices to deviate from fundamental values. This paper
examines the findings of this research and, in doing so, discusses various human biases and
the potential costs they impose on individual portfolios. It also explores mechanisms for
voluntary detachment from the emotion inherent in investing that may mitigate the perilous
effects of such biases.

I. INTRODUCTION

The end of the dot-com era of the late nineties and continuing anxiety over stock
market performance have had a sobering effect on investors and necessitated a serious
revisiting of the rules of investing. Following the catastrophic events surrounding the
bursting of the speculative bubble in March 2000, new attempts are being made to explain
the behavior of financial markets, one of the foremost of which is behavioral finance.

Interest in behavioral finance research has been fueled by the inability of the
traditional finance framework to explain many empirical patterns, including the stock
market bubbles in Japan in the late eighties and in the U.S., and post-earnings-announcement
drift. Most modern textbooks in finance and investing appear silent on the influence of
behavioral finance on financial markets. As Olsen (1998) notes, behavioral finance
recognizes the paradigms of traditional finance, such as rational behavior and profit
maximization in the aggregate, but asserts that these models are incomplete because they do
not fully consider individual behavior. Specifically, behavioral finance "seeks to understand
and predict systematic market implications of psychological and economic principles for the
improvement of financial decision making" (ibid.). Thus, insight into how psychology
affects the financial decisions of individuals, corporations, and the financial markets is
finding greater currency in mainstream finance.

Financial economists are increasingly coming to believe that the study of psychology
and other social sciences can shed considerable light on the unpredictable and erratic nature
of human behavior, and by extension, challenge the prevailing paradigm of efficiency of
financial markets, as well as explain stock market anomalies, market bubbles, and crashes.
Recognition of human biases and accompanying irrationality warrants greater investigation
so as not to repeat the mistakes of the past. As Jolliffe (2005) explains, "for investors who
bought technology funds during the internet boom, only to see their value halve when the
bubble burst, studying behavioral finance, the analysis of irrational investor behavior, could
pay big dividends." Despite the authority conferred on the field by the awarding of the 2002
Nobel Prize in Economic Sciences to noted behavioral economist Daniel Kahneman,
behavioral finance remains at a relatively incipient stage as a field of rigorous inquiry.
Behavioral finance uses models in which some agents are not fully rational, whether
because of individual preferences or mistaken beliefs; it thus encompasses research that
drops the traditional assumptions of expected utility maximization by rational investors in
efficient markets. As Ritter (2003) explains, the twin cornerstones of behavioral finance are
cognitive psychology (how people think) and the limits to arbitrage (when markets will be
inefficient).

This study attempts to discuss the central tenets of behavioral finance and uncover its
impact upon investment decision-making at the individual level. It includes a discussion of
various psychological biases that result in suboptimal investment strategies and concludes
with a discussion of strategies for recognition and avoidance of these biases in portfolio
decision-making and individual retirement planning.

II. TRADITIONAL FINANCE VERSUS BEHAVIORAL FINANCE

The efficient market hypothesis (EMH) is the cornerstone of the rationality that
purportedly governs well-developed financial markets. Under the semi-strong form of the
EMH, market prices are considered to reflect all past and current publicly available
information regarding asset prices; thus, future changes in asset prices are said to follow a
"random walk" and, to that extent, to be unpredictable. Although the assumption of strict
investor rationality is questionable, there is good evidence that markets are efficient in the
sense that it is very hard to consistently earn superior risk-adjusted returns, as evidenced by
the fact that very few professional fund managers show consistently high performance.

While many studies have indeed shown that it is hard to 'beat the market', the
assumption of pervasive market efficiency has been muddied by recent events, including the
internet stock bubble and the post-Enron reaction to the accounting and business practices of
a large number of US quoted firms. The burgeoning interest in behavioral finance and its
growing body of research question the impact of individual and crowd psychology on
decision-making in financial markets. Under the paradigm of traditional financial
economics, decision-makers are rational and utility-maximizing. In contrast, cognitive
psychology suggests that human decision processes are subject to several cognitive
illusions: those caused by heuristic decision-making processes and those arising from the
adoption of 'mental frames', the most salient of which are discussed below.

III. PSYCHOLOGICAL BIASES

Cognitive psychologists have documented many patterns of human behavior. Some of
these patterns, and their impact on individual investor decision-making, are as follows:

Representativeness: This refers to the tendency of decision-makers to make decisions based
on stereotypes, seeing patterns where perhaps none exist. An example is decision-making
based on the 'law of small numbers', whereby investors over-reach and assume that recent
trends will continue; in effect, people underweight long-term averages. Investors may chase
'hot' stocks and avoid stocks that have performed poorly in the recent past, even though, if
markets were fully rational, recent trends in a share's price should not affect future
expectations of its price. People tend to put too much weight on recent experience: when
equity returns have been high for many years (such as 1982-2000 in the U.S. and Western
Europe), many people begin to believe that high equity returns are "normal."
Representativeness is poor protection against the laws of chance.

Overconfidence: People are overconfident about their abilities. The classic behavioral
characteristic of "overconfidence" leads many investors to believe they can consistently select
the best investment, manager and/or sector. Entrepreneurs are especially likely to be
overconfident. Overconfidence manifests itself in a number of ways. One example is too little
diversification, because of a tendency to invest too much in what one is familiar with. Thus,
people invest in local companies, even though it is bad from a diversification viewpoint
because their real estate (the house they own) is tied to the company’s fortunes, and they
already have significant human capital invested in the firm. Research shows that men tend
to be more overconfident than women, and this manifests itself in many ways, including
trading behavior. Barber and Odean (2001) analyzed the trading activities of people with
discount brokerage accounts and found that the more people traded, the worse they did on
average; men traded more, and did worse, than women investors.

Anchoring: Anchoring arises when a value scale is fixed, or anchored, by recent
observations. An example would be a share that has recently suffered a substantial fall in
price: an investor may be tempted to evaluate the 'worth' of the share by reference to its old
trading range. Consider a company whose stock is trading at $10 a share. The company
announces a 300% earnings increase, but its stock price rises only to, say, $12 a share. The
small rise occurs because investors are "anchored" to the $10 price; they believe the
earnings increase is temporary when, in fact, the company will probably maintain its new
earnings level.

Loss Aversion: Loss aversion is based on the idea that the mental penalty associated with a
given loss is greater than the mental reward from a gain of the same size. If investors are loss
averse, they may be reluctant to realize losses and may even take increasing risks to escape
from a losing position. This provides a viable explanation for 'averaging down' investment
tactics, whereby investors increase their exposure to a falling stock, in an attempt to recoup
prior losses. Shefrin (2001) terms this phenomenon “escalation bias”.

Framing: Framing in behavioral finance is the choosing of particular words to present a
given set of facts; it can influence the choices investors make. Kahneman and Tversky
(1979) used framed questions in the course of developing "Prospect Theory". They found that
contrary to expected utility theory, people placed different weights on gains and losses and on
different ranges of probability. They also found that individuals are much more distressed by
prospective losses than they are happy by equivalent gains. Some have concluded that
investors typically consider the loss of $1 twice as painful as the pleasure received from a $1
gain. Others believe that this work helps to explain patterns of irrationality, inconsistency,
and sheer incompetence in the ways human beings arrive at decisions when faced with
uncertainty. An increasing body of literature on framing supports a tendency for people to
take more risks when seeking to avoid losses as opposed to securing gains.
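The asymmetry described above, with losses weighted roughly twice as heavily as gains, is often modeled with the prospect-theory value function. A minimal sketch follows; the curvature 0.88 and loss-aversion coefficient 2.25 are the commonly cited Tversky-Kahneman (1992) estimates, assumed here for illustration and not taken from this paper:

```python
# Illustrative prospect-theory value function: concave over gains, convex and
# steeper over losses. Parameters alpha=0.88 and lam=2.25 are assumed values
# from the behavioral-finance literature, not from this paper.
def pt_value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain or loss of x (in any monetary unit)."""
    if x >= 0:
        return x ** alpha          # gains are discounted (diminishing sensitivity)
    return -lam * ((-x) ** alpha)  # losses loom larger by the factor lam
```

With these parameters, a $1 loss feels about 2.25 times as bad as a $1 gain feels good, matching the "loss of $1 twice as painful" rule of thumb cited above.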

Mental Accounting: Mental accounting is the propensity of individuals to organize their
world into separate 'mental accounts'. This can lead to inefficient decision-making: for
example, an individual may borrow at a high interest rate to purchase a consumer item while
simultaneously saving at lower interest rates for a child's college fund. The use of mental
accounts can be partly explained as a self-control device: because investors have imperfect
self-control, they may separate their financial resources into capital and 'available for
expenditure' pools in an effort to control the urge to overconsume. Investors also tend to
treat each element of their investment portfolio separately, possibly forgoing the benefits of
portfolio diversification; this can discourage an investor from selling a losing investment,
and possibly forgoing an alternative investment opportunity, because its 'account' is
showing a loss.

IV. IMPLICATIONS FOR FINANCIAL MARKETS

Proponents of behavioral finance contend that heuristic-driven bias and framing
effects cause market prices to deviate from fundamental values. It is argued that because
these biases are an inherent part of all of our decision-making processes, they can
systematically distort market behavior. For example, the representativeness heuristic could
lead investors to become over-optimistic about past winners and over-pessimistic about past
losers, causing share prices to deviate from their fundamental levels. Anchoring and
overconfidence could lessen analysts' tendency to adjust earnings predictions when new
information arrives. In particular, the biases may result in: (a) over- or under-reaction
(depending on the bias) to price changes or news; (b) extrapolation of past trends into the
future; (c) lack of attention to the fundamentals underlying a stock; and (d) undue focus on
popular stocks.

If such patterns exist, there may be scope for investors to exploit the resulting pricing anomalies to capture superior, risk-adjusted returns. Proponents of the EMH, in fact, argue that smart money will exploit such anomalies and drive prices to their fundamental values. Other research, however, shows that rational investor trading is unable to completely offset the actions of irrational investors. This, as pointed out by Miller (1977), is largely due to the inability of smart money to engage in short sales when the bulk of shares are held by irrational investors. Using data on the interest cost of borrowing and lending shares in the 1920s and 1930s, Jones and Lamont (2001) show that shares that were more expensive to short tended to be highly priced and had lower subsequent returns on average, as predicted by Miller's theory.

VI. STRATEGIES FOR INDIVIDUAL INVESTORS

The preceding discussion has reviewed human behavioral biases and the manner in which they impair sound decision-making and hurt investor pocketbooks. Strategies that would be most beneficial to individual investor decision-making require, at their core, self-awareness and discipline. Specifically, investors can immunize themselves from these biases by adopting the following strategies:
Understanding biases: Recognition of biases in oneself and others can be the first step in
avoiding them.
Quantifying investment criteria: Quantifying investment goals prevents one from acting on rumors, emotion, and other detrimental biases. The criteria for investing must first meet quantitative benchmarks and can be supplemented by qualitative information such as the firm's recognition as a producer of quality products.
Diversifying: The principle of diversification was reinforced when Enron collapsed and $3 million portfolios evaporated in value. Diversification across different industries and across different investment vehicles (stocks, bonds, real estate, precious metals) would limit investment in one's employer's stock. This is desirable because all of one's human capital is already invested in the employer firm.
Controlling one's investment environment: This entails checking one's stocks only once per month, trading just once per month on the same day each month, and reviewing the portfolio annually to track whether investments are meeting desired strategies.
Understanding that earning the market rate of return, or even slight underperformance, is not catastrophic to wealth: The strategies for earning abnormal profits usually exacerbate cognitive biases and ultimately contribute to lower returns. Portfolio strategies based on indexing inhibit the deleterious effects of biases and wring the emotion out of investing, and are, therefore, deemed the most successful.

VII. CONCLUSION

The extent of research in the field of behavioral finance has grown noticeably in the
past decade. The field merges concepts from financial economics, psychology and sociology
in an attempt to construct a more detailed model of human behavior in financial markets.
Currently, no unified theory of behavioral finance exists. Shefrin and Statman (1994) began
work in this direction, but so far, most emphasis in the literature has been on identifying
behavioral decision-making attributes that are likely to have systematic effects on financial
market behavior. While behavioral factors undoubtedly play a role in the decision-making
processes of investors, they do not quash all the predictions of efficient market theory; they
offer plausible explanations of financial markets which would otherwise be categorized as
anomalous. The current state of research from the efficient market and behavioral
perspectives therefore suggests that an inclusive and diverse approach in the choice of
theoretical explanations of the behavior of financial markets will be the pragmatic response to
the inconclusive results on either side of the debate. While, on the one hand, not many investors are profiting from market anomalies, many will agree that the stock market
bubble burst of 2000 is better explained by hubris and irrational exuberance grounded in
behavioral finance than by the efficient markets theory. This research benefits individual
investors the most as it seeks to create awareness of the various human biases and the costs
they impose on their portfolios, and advocates voluntary detachment from the “emotion”
inherent in investing.

REFERENCES

Barber, Brad, and Terry Odean, 2001, Boys will be boys: Gender, overconfidence, and
common stock investment, Quarterly Journal of Economics, 116(1), 261-292.
Jolliffe, Alexander, 2005, Following the herd could cost you dear: Behavioral finance, The
Financial Times (London Edition), January 29, c5.
Jones, C. M., and O. A. Lamont, 2001, Short-sale constraints and stock returns, NBER
Working Paper No. 8494.
Kahneman, Daniel, and Amos Tversky, 1979, Prospect theory: An analysis of decision
under risk, Econometrica, 47(2), 263-291.
Miller, Edward M., 1977, Risk, uncertainty and divergence of opinion, Journal of Finance,
32(4), 1151-1168.
Olsen, Robert A., 1998, Behavioral finance and its implications for stock-price volatility,
Financial Analysts Journal, 54(2), March-April, 10-18.
Ritter, Jay, 2003, Behavioral finance, Pacific-Basin Finance Journal, 11(4),
September, 429-437.

AN ANALYSIS OF THE MOVEMENT OF FINANCIAL INDUSTRY INDEXES ON
THE STOCK EXCHANGE OF THAILAND

Nittaya Wiboonprapat, Alliant International University
nwiboonprapat@hotmail.com

Mohamed Khalil, Alliant International University
mkhalil@alliant.edu

Meenakshi Krishnamoorthy, Alliant International University
mkrish@alliant.edu

ABSTRACT

This paper provides an analysis of the movement of Financials industry indexes: Banking,
Securities, and Insurance sector indexes on the Stock Exchange of Thailand from January 3,
1995 through December 30, 2004. The results of the Durbin-Watson Test Statistic indicate
that the movement of the SET index and of the sub-sector indexes of the Financials industry
was random. The GARCH-M results show a positive relationship between the variances of
the sub-sector indexes of the Financials industry and the SET index. Furthermore, any shock
to SET will affect the sub-sectors since the persistence of stock market volatility is greater.
The results of the Granger causality test indicate two-way causality relationships between
the Securities and Insurance indexes, as well as between the Insurance and Banking indexes,
but only a one-way relationship from the Banking index to the Securities index.

I. INTRODUCTION

This study presents the results of a research investigation into the movement of
Thailand’s Financials industry sector indexes in the context of the Stock Exchange of
Thailand (SET) index, which is a market capitalization weighted price index that compares
the current market value of all listed common shares with their base market value. The SET
Index calculation is continuously adjusted for new listings, delisting and capitalization
changes in order to eliminate effects other than price movement from the index (SET, 2004).
In addition, this study analyzed the relationships between the variances of the sub-sector
indexes of the Financials industry and the SET index. Finally, this study determined Granger
causality among Banking, Securities, and Insurance sector indexes. The sector indexes are
calculated from the prices of common shares in each sector.
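As an illustration of this construction (with invented prices and share counts, not actual SET data), a capitalization-weighted index divides the current total market value of the constituents by their base-period market value and scales by the base index level; the real SET calculation additionally adjusts the base market value for new listings, delistings, and capitalization changes:

```python
def cap_weighted_index(prices, shares, base_prices, base_level=100.0):
    """Market-capitalization-weighted price index: current total market
    value of all constituents relative to their base-period market value,
    scaled by the base index level."""
    current_mv = sum(p * q for p, q in zip(prices, shares))
    base_mv = sum(p * q for p, q in zip(base_prices, shares))
    return base_level * current_mv / base_mv

# Hypothetical two-stock market: the large-cap stock's doubling dominates.
print(cap_weighted_index(prices=[20.0, 55.0], shares=[1000, 200],
                         base_prices=[10.0, 50.0]))  # 155.0
```

Because the weights are market values, a price move in a heavily capitalized stock moves the index far more than the same percentage move in a small stock.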

II. RESEARCH METHODOLOGY

This study used the daily index prices of the SET and of the Financials industry sectors: the Banking sector, the Securities sector, and the Insurance sector. These secondary data were obtained from the SET Library, covering the ten-year period from January 3, 1995 through December 30, 2004. The movement of the SET index and the Financials industry indexes (Banking sector index, Securities sector index, and Insurance sector index) was analyzed using the Durbin-Watson test statistic to determine autocorrelation.

The relationships between the variances of the sub-sector indexes of the Financials
industry and the SET index were analyzed with the Generalized Autoregressive Conditional
Heteroscedasticity in Mean (GARCH-M) model.

The Granger causality test (Granger, 1969; Granger, 1988) was used to determine
Granger causality among the Financials sector indexes: the Banking sector index, the Securities
sector index, and the Insurance sector index.

III. BACKGROUND LITERATURE

(3.1) Durbin-Watson Test Statistic:
This study used the Durbin-Watson test statistic to identify the degree of autocorrelation within the daily stock price data of each sector.
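As a sketch of the statistic itself (a toy implementation for illustration, not the study's software), the Durbin-Watson value is the ratio of the sum of squared successive residual differences to the sum of squared residuals; values near 2 indicate no first-order autocorrelation, values near 0 positive autocorrelation, and values near 4 negative autocorrelation:

```python
def durbin_watson(residuals):
    """Durbin-Watson statistic: sum of squared successive differences
    divided by the sum of squared residuals. Near 2 => no first-order
    autocorrelation; near 0 => positive; near 4 => negative."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e * e for e in residuals)
    return num / den

print(durbin_watson([1.0, -1.0, 1.0, -1.0]))  # 3.0 (negative autocorrelation)
print(durbin_watson([1.0, 2.0, 3.0, 4.0]))    # 0.1 (strong positive autocorrelation)
```

Values close to 2.0, as reported in Table I below, are consistent with serially uncorrelated price changes.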
(3.2) Generalized Autoregressive Conditional Heteroscedasticity in Mean (GARCH-M):
In line with Engle, Lilien and Robins (1987), Rahman and Yung (1994), and Bildik and Elekdag (2004), the GARCH-M model can be written as follows:

$index_{i,t} = \sum_{j=1}^{N} a_j \, index_{i,t-j} + h_{i,t}^{1/2} + \delta \, set_{t-1} + e_t$    (1)

where i = Banking index, Securities index, and Insurance index, and set = Stock Exchange of Thailand index;

$h_{i,t} = w + \sum_{j=1}^{q} \alpha_j e_{t-j}^{2} + \sum_{j=1}^{p} \beta_j h_{i,t-j}$    (2)

$e_t \mid \Phi_{t-1} \sim N(0, h_{i,t})$    (3)

Here $index_{i,t} = \ln(index_{i,t} / index_{i,t-1})$ represents the return of each series (i.e., the Banking, Securities, or Insurance index) at time t, conditional on past information $\Phi_{t-1}$; and $set_{t-1}$ represents the unexpected stock market volatility transmitted by previous information. This volatility equals the difference between the t-period market risk and its conditional mean. The presence of $h_{i,t}^{1/2}$ in (1) provides a way to directly study the explicit trade-off between the volatility of each series and its expected returns.
The size and significance of $\alpha$ indicates the magnitude of the effect imposed by the lagged error term $e_{t-1}$ on the conditional variance $h_t$, that is, the extent to which volatility now affects the next period's volatility (volatility clustering). A significant and positive coefficient $\delta_i$ implies higher returns for higher levels of stock market volatility. The market return is calculated as $set_t = \ln(set_t / set_{t-1})$. The error $e_t$ is a stochastic process assumed to be normally distributed ($N$), conditional on the information set $\Phi_{t-1}$ available to the investor at time $t-1$.
Since the autocorrelation and partial autocorrelation for the squared residuals from model (2) cut off after lag one, the study uses GARCH(1, 1), so equation (2) reduces to:

$h_{i,t} = w + \alpha_i e_{t-1}^{2} + \beta_i h_{i,t-1}$    (4)

Also, as stated above, the SET, as a proxy for the information set, has a direct impact on the risk. As a result, the information set at time $t-1$ becomes very important to the investor; therefore the SET is introduced in the variance equation as an exogenous regressor, with a one-period lag to account for the information at $t-1$:

$h_{i,t} = w + \alpha_i e_{t-1}^{2} + \beta_i h_{i,t-1} + set_{t-1}$    (5)

The sum of $\alpha$ and $\beta$ represents the rate at which volatility clustering persists through time. If the sum of $\alpha$ and $\beta$ equals one, then current SET volatility persists indefinitely in conditioning the future variance. As the sum of $\alpha$ and $\beta$ approaches unity, the persistence of stock market volatility is greater.
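The persistence property can be illustrated with a toy simulation of the GARCH(1,1) recursion in equation (4); the parameter values below are invented for illustration and are not the estimates of this study:

```python
def garch11_variance(errors, w, alpha, beta, h0):
    """Conditional variance recursion of a GARCH(1,1) model:
    h_t = w + alpha * e_{t-1}**2 + beta * h_{t-1}, starting from h0."""
    h = [h0]
    for e in errors:
        h.append(w + alpha * e ** 2 + beta * h[-1])
    return h

# With no further shocks (all e = 0) the variance decays geometrically at
# rate beta toward w / (1 - beta); when alpha + beta is close to one, a
# single shock keeps conditioning the variance for a very long time.
h = garch11_variance([0.0] * 200, w=0.1, alpha=0.1, beta=0.8, h0=1.0)
print(round(h[-1], 4))  # 0.5, i.e. w / (1 - beta)
```

With shocks present, the unconditional variance of a stationary GARCH(1,1) is $w / (1 - \alpha - \beta)$, which is why sums of $\alpha + \beta$ near one, as in Table 2 below, signal highly persistent volatility.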
(3.3) Granger causality test:
This study used the Granger causality test to examine causal linkages among sectors in the Financials industry. There are three sectors in the Financials industry: the Banking sector, the Securities sector, and the Insurance sector. This study examined whether any sector index Granger-caused any other sector index.
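A minimal regression-based sketch of the test (one lag only, with synthetic data; the study itself would use proper lag selection and p-values): regress $y_t$ on its own lag (restricted model) and on lags of both $y$ and $x$ (unrestricted model), then compare the residual sums of squares with an F statistic.

```python
import numpy as np

def granger_f(y, x, lag=1):
    """F statistic for 'x Granger-causes y' with one lag: compares the
    residual sum of squares of y_t ~ const + y_{t-1} (restricted) with
    y_t ~ const + y_{t-1} + x_{t-1} (unrestricted)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    yt, ylag, xlag = y[lag:], y[:-lag], x[:-lag]

    def rss(design):
        beta, *_ = np.linalg.lstsq(design, yt, rcond=None)
        resid = yt - design @ beta
        return float(resid @ resid)

    ones = np.ones_like(yt)
    rss_r = rss(np.column_stack([ones, ylag]))
    rss_u = rss(np.column_stack([ones, ylag, xlag]))
    dof = len(yt) - 3  # observations minus unrestricted parameters
    return (rss_r - rss_u) / (rss_u / dof)

# Synthetic one-way causality: y is driven by lagged x, so the F statistic
# for "x causes y" should dwarf the one for "y causes x".
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.empty(500)
y[0] = 0.0
y[1:] = 0.8 * x[:-1] + 0.1 * rng.standard_normal(499)
print(granger_f(y, x) > granger_f(x, y))  # True
```

A large F statistic (small p-value) leads to rejecting the null hypothesis of no Granger causality, which is the decision rule applied in Table 3 below.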

IV. RESEARCH FINDINGS

(4.1) Durbin-Watson Test

Table I below presents the summary of Durbin-Watson test statistic results for the
SET index and the three Financials industry sub-sector indexes (Banking, Securities and
Insurance) for the ten-year period from January 3, 1995 to December 30, 2004. The results
demonstrate that the Durbin-Watson values for the time period covered in this
study are all close to 2.0, which means that there was no serial correlation within the SET
index or the three sub-sector indexes of the Financials industry.

Table I Durbin-Watson Test Statistic Results of Financials Industry Sector Indexes


Sector index Durbin-Watson Value (d) Random
Banking sector index 1.856150 Yes
Finance and Securities sector index 1.918079 Yes
Insurance sector index 1.847442 Yes
SET index 1.730086 Yes

(4.1.1) Random Walk Theory: The theory of random walks (Fama, 1995) implies that "a series
of stock price changes has no memory—the past history of the series cannot be
used to predict the future in any meaningful way. The future path of the price level of a
security is no more predictable than the path of a series of cumulated random numbers."
Therefore, the SET index and the three sub-sector indexes of the Financials industry behaved as
random walk markets for the time period from January 3, 1995 to December 30, 2004.
The findings from this study mirror the results of previous studies on the
random walk theory, such as Dyer (1976), Hasan (2004), and Van Horne and Parker (1967).

(4.1.2) Efficient Market Hypothesis: This study also tested the efficient market hypothesis.
Since only price data was taken into account, only the "weak form" of market efficiency
could be tested. The results in this study confirmed the weak form of the efficient market
hypothesis, which states that past stock price movements are fully reflected in the current stock
price. It implies that technical analysis cannot be used to predict future stock prices: previous
stock price information is unrelated to future stock prices, making it impossible to predict
future stock prices from historical price movements. Therefore, past stock price movement
within the SET index and the three sub-sector indexes of the Financials industry was fully
reflected in current stock prices for the time period from January 3, 1995 to December 30,
2004.
The findings from this study mirror the results of previous studies on the weak-form
efficient market hypothesis, such as Al-Loughani and Chappell (1997).
(4.1.3) Chaos Theory: Furthermore, the findings of unpredictable behavior within the SET
index and the three sub-sector indexes of the Financials industry imply conformance to chaos
theory, which holds that it is impossible to predict future outcomes in such systems.
(4.2) GARCH-M Results

Table 2 Summary of Garch-M Results for the Ten Year Period


Variances α β Sum of α and β
Banking & SET 0.081913 0.908341 0.990254
Finance and Securities & SET 0.107215 0.888339 0.995554
Insurance & SET 0.049897 0.94407 0.993967

Table 2 summarizes the GARCH-M results for analyzing the
relationship between the variances of the Banking sector index and the SET index, the
relationship between the variances of the Securities sector index and the SET index, and the
relationship between the variances of the Insurance sector index and the SET index, for the
ten-year period from January 3, 1995 to December 30, 2004.
In the GARCH-M framework, if the sum of the coefficients ( α ) and ( β ) equals one, then
current SET volatility persists indefinitely in conditioning the future variance, which means
there is a positive relationship between the variances of the whole market and the sector. As
the sum of ( α ) and ( β ) approaches unity, the persistence of stock market volatility is
greater; that is to say, any shock to the Stock Exchange of Thailand will affect the
variance of the sector.
From the findings, the sum of ( α ) and ( β ) is very close to one and approaches unity
in all three cases. Therefore, it appears that there is a positive relationship between the
variances of the three Financials industry sub-sector indexes (Banking, Securities and
Insurance) and the SET index, for the time period covered in this study. Furthermore, any
shock to SET will affect the three sub-sectors of the Financials industry since the persistence
of stock market volatility is greater.
(4.3) Granger causality test

Table 3 Summary of Granger Causality Test Results of the Causal Linkages Among
Sectors in the Financials Industry for the Ten Year Period
Null Hypothesis P-Value Reject Null
Hypothesis
Banking does not Granger cause Securities 0.02640 Yes
Securities does not Granger cause Banking 0.23672 No
Securities does not Granger cause Insurance 5.60E-10 Yes
Insurance does not Granger cause Securities 0.00759 Yes
Insurance does not Granger cause Banking 0.04144 Yes
Banking does not Granger cause Insurance 2.60E-10 Yes

Table 3 presents a summary of the Granger causality test results of the causal
linkages among the three sub-sectors in the Financials industry (the Banking, Securities,
and Insurance sectors) on the Stock Exchange of Thailand for the ten-year period from
January 3, 1995 to December 30, 2004. A p-value of less than 0.05 means
that the corresponding null hypothesis must be rejected. Therefore, all but one of the null
hypotheses are rejected.

The results show that Granger causality runs both ways between Securities sector
index and Insurance sector index, as well as Insurance sector index and Banking sector index,
but only one way from Banking sector index to Securities sector index for the time period
covered in this study. It is not clear why the relationship between Banking sector index and
Securities sector index is one-way, but a possible explanation relates to the economic crisis in
Southeast Asia. During this crisis, which started in July 1997, a large number of the
companies that made up the Securities sector failed.

V. CONCLUSION

In conclusion, the findings of the Durbin-Watson test statistic confirm that the
movement of the SET index and the sub-sector indexes of the Financials industry is random.
The findings of GARCH-M confirm that there is a positive relationship between the
variances of the sub-sector indexes of the Financials industry and the SET index. The
findings of Granger causality test confirm that there are two-way causality relationships
between Securities sector index and Insurance sector index, as well as Insurance sector index
and Banking sector index, but only a one-way relationship from Banking sector index to
Securities sector index. Why this relationship is one-way is not clear, but it might be related
to the failure of more than half of Thailand’s non-bank financial institutions during the
economic crisis in Southeast Asia. Other researchers may be interested in investigating this
area further.

REFERENCES

Al-Loughani, Nabeel and Chappell, David. "On the Validity of the Weak-Form Efficient
Markets Hypothesis Applied to the London Stock Exchange." Applied Financial Economics,
7, 1997, 173-176.
Bildik, Recep and Elekdag, Selim. "Effects of Price Limits on Volatility: Evidence
from the Istanbul Stock Exchange." Emerging Markets Finance and Trade, 40, (1),
2004, 5-34.
Dyer, James C. "Random Walks in Australian Share Prices: A Question of Efficient Capital
Markets." Australian Economic Papers, 15, (27), 1976, 186-200.
Engle, Robert F., Lilien, David M. and Robins, Russell P. "Estimating Time Varying Risk
Premia in the Term Structure: The ARCH-M Model." Econometrica, 55, 1987, 391-407.
Fama, Eugene F. "Random Walks in Stock Market Prices." Financial Analysts Journal, 1995,
75-80.
Granger, Clive W. J. "Investigating Causal Relations by Econometric Models and Cross-
Spectral Methods." Econometrica, 37, 1969, 424-438.
Granger, Clive W. J. "Some Recent Developments in a Concept of Causality." Journal of
Econometrics, 39, 1988, 199-211.
Hasan, Mohammad S. "On the Validity of the Random Walk Hypothesis Applied to the
Dhaka Stock Exchange." International Journal of Theoretical & Applied Finance, 7, (8),
2004, 1069-1086.
Rahman, Hamid and Yung, Kenneth. "Atlantic and Pacific Stock Markets Correlation and
Volatility Transmission." Global Finance Journal, 5, 1994, 103-119.
SET. The Stock Exchange of Thailand: Sector Information. Thailand Stock Exchange
Library, 2004.

THE SHORT SQUEEZE AT YEAR-END

Howard Nemiroff, Long Island University – CW Post
howard.nemiroff@liu.edu

ABSTRACT

In this paper, I attempt to assess whether the information imparted through the NYSE
and NASDAQ short interest news releases creates a trading rule. There has been much work
done on the impact that short interest releases have on the price discovery process. Specific to
this paper, however, is the issue of a short squeeze, which seems to be in the public press of late.
It has been argued that the longer the coverage ratio, the more likely it is that shorters will be
tempted to cover. If there is a slight increase in prices through demand pressure, shorters
quickly cover their positions, causing a larger price increase than otherwise, with prices
eventually falling back. Using a sample of shorted securities ranked by short interest coverage
ratio, I find that there does not appear to be a consistent trading strategy.

I. INTRODUCTION

There are several potential reasons to short a security. Speculators infer that the
security is overpriced and its price will soon fall. Hedgers may sell short to lock in future
prices or to take an arbitrage position in combination with other securities. Further, investors
may take a short position in a security they already hold long (shorting against the box) for
the purpose of deferring capital gains taxes, to name a few. Most pertinent to this study,
however, is the issue that outstanding short positions eventually need to be covered. A
recent article in the Wall Street Journal presents a trading rule in anticipation of short
coverage (using the short interest ratio) through November and December (Zuckerman,
November 17, 2005, pg. C1). It argues that a portfolio consisting of the top five
short positions (by coverage ratio) would have beaten the S&P 500 in each
of the past four years.

Previous literature on US markets has examined short selling and its impact on
individual returns, with mixed results. There are some excellent studies that examine the
relationship between short interest and price discovery; some find a positive relationship (see,
for example, Brent, Morse and Stice (1990), Senchack and Starks (1993), and Figlewski
(1981), amongst others), and others find no evidence of a relationship (see, for example,
Woolridge and Dickinson (1994), and Vu and Caster (1987), amongst others).

Specific to this study, McDonald and Baron (1973) find that the short interest ratio
provides neither bullish nor bearish predictive power for future stock movements. Asquith et
al. (2004) find that the short interest ratio provides a signal only when using an equally-
weighted (not value-weighted) portfolio to evaluate excess returns. The short interest ratio is
calculated as the present cumulated short position relative to average daily trading
volume, thus presenting a 'days-to-cover' ratio. The higher the ratio, the greater the
proportion of shorters relative to long investors.
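The days-to-cover arithmetic is straightforward; a minimal sketch with invented figures (not data from this study):

```python
def days_to_cover(short_interest, avg_daily_volume):
    """Short interest ratio expressed as days-to-cover: the number of days
    of average trading volume needed to buy back all shares sold short."""
    return short_interest / avg_daily_volume

# Hypothetical stock: 12 million shares short, 1.5 million traded per day.
print(days_to_cover(12_000_000, 1_500_000))  # 8.0 days
```

The higher this number, the longer shorts would need to exit through normal trading volume, and the greater the potential pressure in a squeeze.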

This paper is organized as follows. Section II describes the data and methodology. In
Section III, the stock price reactions to short interest ratios are discussed. Finally, the
conclusions are summarized in Section IV.

II. DATA AND METHODOLOGY

Sample Design
The sample consists of all NYSE, AMEX and NASD companies whose short interest
ratio positions are disclosed in the WSJ from November 2002 through February 2003, and
from November 2003 through December 2003. The disclosure typically contains the top 10
firms with the longest coverage ratio in days. The sample further contains data on a control
group, namely, the next 10 firms organized by coverage ratio. This group is not categorized
as such in print. Since it is expected that most investors would typically focus on the
categorized list, there may be a difference in price impact for the control group. The WSJ
reports short positions on a monthly basis, determined from the required submissions of
trading houses detailing their short transactions through the prior month. Specifically,
member firms must convey all outstanding short positions of their clients to the NYSE and
NASDAQ based on settlement by the 15th of every month. Since it is settlement, the data
includes trades up to 3 business days prior. If the 15th is a holiday, then the reporting date is
the prior business day. The NYSE imposes a delivery deadline of 2 business days following
the 15th, and then disseminates the news to the public 2 business days following that. The
WSJ publishes the short data electronically on that date, after the markets close, and it
appears in the following day's print edition. The process for NASD stocks is identical save
for the release to the public, which occurs 5 business days after the delivery deadline, also
after markets close. Thus, information contained in the press release is cumulative in nature
and typically close to two weeks old.

Methodology
The estimation period consists of daily stock price returns and equally-weighted index
returns from October 1st through to February 28th for both 2002 and 2003. Cumulated stock
returns are compared to index return values, both before and after the New Year. Descriptive
statistics are reported in Table 1. Excess returns for each firm are reported in Table 2.
Portfolios are created at the time of the newspaper release, and held until the end of the
holding period(s). In the case of October 2002, three portfolios are formed on October 25th,
and subsequently held until the end of October, November and December, respectively.
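The paper does not spell out its excess-return arithmetic; a common approach, sketched here with invented two-day returns, compounds the daily returns of the portfolio and of the index over the holding period and takes the difference:

```python
def holding_period_return(daily_returns):
    """Compound a sequence of daily returns into one holding-period return."""
    total = 1.0
    for r in daily_returns:
        total *= 1.0 + r
    return total - 1.0

def excess_return(portfolio_returns, index_returns):
    """Portfolio holding-period return minus the index's over the same window."""
    return (holding_period_return(portfolio_returns)
            - holding_period_return(index_returns))

# Hypothetical two-day window: portfolio +1% then +2%; index +1% each day.
print(round(excess_return([0.01, 0.02], [0.01, 0.01]), 6))  # 0.0101
```

A positive value means the short-interest portfolio beat the equally weighted index over that holding period, which is the comparison reported in Table II.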

Table I below lists the number of companies used in the creation of each portfolio per month.
For example, December 2002 for NASDAQ lists 16 stocks that complete the top 10 portfolio,
and 16 in the control sample portfolio.

2002 2003
NASDAQ
Oct Nov Dec Jan Feb Oct Nov Dec
16 17 16 15 17 16 18 18

NYSE
Oct Nov Dec Jan Feb Oct Nov Dec
18 17 19 20 18 18 20 17

III. RESULTS

It appears as though NASD top 10 results show that a portfolio created on October
25th 2002 would have outperformed the EW index if held through to the end of November.

Results for 2003 show that you would have outperformed the EW index through December.
Portfolios created based on November releases would not have yielded positive performance
in 2002, but would have outperformed in 2003 if you held through November. Using the
control sample, results are strongly positive for October 2002 portfolios, yet not for October
2003 portfolios. November 2002 portfolios again yield positive excess returns, whereas
November 2003 portfolios do not. NYSE top 10 October portfolio results are highly positive
in 2002, and negative in 2003. And, finally, all NYSE control portfolios (save for the
November 2003) are negative.

Table II presents results on the equally weighted composite portfolio as well as the NASD
and NYSE top 10 and control portfolios. Each portfolio’s return is calculated through a
number of holding periods. For example, October 2002 has three portfolio returns; the first
presents an excess return from Oct. 25th through Oct. 31st, the second from Oct. 25th through
Nov. 29th, and the third from Oct. 25th through Dec. 31st.

2002 2003
Equally Weighted NYSE/AMEX/NASDAQ Composite Index
Oct Nov Dec Jan Feb Oct Nov Dec
0.028218 0.026808 0.00357 0.006921 0.008929 0.031286 0.01189 0.018404
0.147153 -0.00533 -0.00955 0.071142 0.052609
0.115015 0.111861

NASD ‘top 10’ excess portfolio returns


Oct Nov Dec Jan Feb Oct Nov Dec
-0.32822 0.013094 -0.02571 -0.00867 0.013563 0.062342 -0.00234 0.030868
0.098998 -0.0472 -0.07143 0.072883 0.055913
-0.02186 0.096424

NASD control excess portfolio returns


Oct Nov Dec Jan Feb Oct Nov Dec
0.08148 -0.02218 0.016975 0.004766 0.07884 0.021854 -0.00721 0.02486
0.106018 0.089324 -0.03677 0.021359 -0.02779
0.250686 -0.02593

NYSE ‘top 10’ excess portfolio returns


Oct Nov Dec Jan Feb Oct Nov Dec
-0.00443 0.019097 -0.04309 -0.00839 0.011171 -0.02055 -0.00293 -0.00592
0.157717 -0.01716 -0.02313 -0.03935 0.035149
0.150026 -0.0374

NYSE control excess portfolio returns


Oct Nov Dec Jan Feb Oct Nov Dec
-0.03212 0.022586 -0.00857 -0.00508 0.013021 -0.02055 -0.00293 -0.00427
-0.03456 -0.01974 -0.05652 -0.03935 0.035149
-0.07724 -0.0374

IV. CONCLUSIONS

It has been argued that short positions are relatively more risky toward year end as the
likelihood of a short squeeze increases for those that have more days to cover. This paper
shows that the results, when based on an equally weighted portfolio, are more consistently
positive for NASD listed stocks than for NYSE, yet the results are not unequivocally
consistent through the two years of this study. It appears as though a trading rule does not
exist when examining the top 10 short stocks in an equally weighted portfolio relative to an
equally weighted index.

REFERENCES

Aitken, M., A. Frino, M. McCorry, and P. Swan, 1998. Short Sales are Almost
Instantaneously Bad News: Evidence from the Australian Stock Exchange, Journal of
Finance, 53, 2205-2223.
Asquith, P., P. A. Pathak, J. R. Ritter, 2004. Short Interest and Stock Returns, Working
Paper, Harvard University.
Asquith, P. and L. Muelbroek, 1996. An Empirical Investigation of Short Interest, Working
Paper, Harvard University.
Brent, A., D. Morse, and E.K. Stice, 1990. Short Interest: Explanations and Tests, Journal of
Financial and Quantitative Analysis, 25, 273-289.
Choie, K., and S.J. Hwang, 1994. Profitability of Short-Selling and Exploitability of Short
Information, Journal of Portfolio Management, 20, 33-38.
Dechow, P. M., A. P. Hutton, L. Meulbroek, R. G. Sloan, 2001. Short-sellers, fundamental
analysis, and stock returns, Journal of Financial Economics, 61, 77-106.
Desai, H., K. Ramesh, S. R. Thiagarajan, B. V. Balachandran, 2002. An Investigation of the
Informational Role of Short Interest in the Nasdaq Market, Journal of Finance, 57,
2263-2287.
Diamond, D.W., and R.E. Verrecchia, 1987. Constraints on Short-Selling and Asset Price
Adjustments to Private Information. Journal of Financial Economics, 18, 277-311.
Elfakhani, S, 1997. Short Sellers Aren’t Always Right, Canadian Investment Review, Winter,
9-14.
Figlewski, S, 1981. The Informational Effects of Restrictions on Short Sales: Some Empirical
Evidence. Journal of Financial and Quantitative Analysis, 4, 463-476.
McDonald J.G. and D.C. Baron, 1973. Risk and Return on Short Position in Common Stock.
Journal of Finance, March, 97-107.
Rubinstein, M., 2004. Great Moments in Financial Economics: III. Short-Sales and Stock
Prices. Journal of Investment Management, 2, 16-31.
Senchak, A.J., and L.T. Starks, 1993. Short-Sale Restrictions and Market Reaction to Short-
Interest Announcements, Journal of Financial and Quantitative Analysis, 28, 177-194.
Vu, J.D., and P. Caster, 1987. “Why All the Interest in Short Interest?” Financial Analyst
Journal, 43, 76-79.
Woolridge, J.R. and A. Dickinson, 1994. Short Selling and Common Stock Prices. Financial
Analyst Journal, 20-28.
Zuckerman, G., 2005. Now Showing, Again: ‘Get Shorty’, Wall Street Journal, November
17, p. C1.

CHAPTER 11

GLOBAL ENVIRONMENT AND TRENDS

324
PREDICTING INTERNET USE: TECHNOLOGY ACCEPTANCE FACILITATING
GROUP PROJECTS IN A WEB DESIGN COURSE

Azad I. Ali, Indiana University of Pennsylvania
Azad.Ali@IUP.edu

ABSTRACT

This paper discusses assigning group projects to students taking a web design
course. It illustrates the plan of a particular course in web design at the Eberly College of
Business, Indiana University of Pennsylvania. The paper begins by distinguishing between
two methods that fall within the category of group learning. It then shifts focus to describe
some of the advantages of adopting group projects and the challenges of
incorporating such projects in college courses. The paper then focuses on a web design course
that incorporates group projects and shows that the design of the course addresses some of
the challenges of introducing group projects in college courses.

I. INTRODUCTION

Teachers at various levels of the educational system often contemplate methods of
delivering information to students so as to increase their learning opportunities. Introducing
teaching methods is usually accompanied by ways of testing, evaluating, or holding students
accountable for the materials being introduced. Exams, assignments, and projects (whether
individual or group projects) are examples of methods used to facilitate the learning process
and then to evaluate students on the course materials.

Group projects are described here as tasks assigned to a group of two or more
students who work together to complete the assigned responsibilities in the project. Different
benefits can be gained from assigning group projects to students. However, assigning group
projects and then evaluating them has been met with some challenges that need to be
addressed prior to including them in the course requirements.

This paper illustrates methods of introducing group projects in a particular web
design course taught at the Department of Technology Support and Training (TST) at Indiana
University of Pennsylvania (IUP). The paper introduces frameworks of group work. It then
lists the benefits that can be gained from including group projects within course
requirements, as well as the challenges of incorporating such projects in a course. Last,
the paper describes the web design course at IUP that includes group projects. It shows
how this course addresses the challenges of group formation and evaluation of work. A
summary and suggestions for future work are included at the end.

II. COOPERATIVE LEARNING AND TEAM-BASED LEARNING

Two paradigms are noted regarding group work in college courses: cooperative
learning and team-based learning. Smart and Csapo (2003) note the difference between the two:
“Cooperative learning can be characterized by three things: (1) Using assigned roles
within groups; (2) having the teacher monitor the groups to see how they are handling
the contents and how well the groups are working; and (3) spending time after the
small-group exercise to process the small-group activity. Team based learning differs
in that it relies on the teams themselves to [monitor] individual and group performance and to
improve performance as necessary (p. 317)”.
This paper focuses on group projects as an activity within college courses and as one of
the tools that can be used to implement both of the paradigms mentioned here. In other words,
the term “group projects” as used in this paper refers to a tool that facilitates the learning
process within either approach: team-based or cooperative learning.

III. STUDENTS GROUP PROJECTS – ADVANTAGES

Assigning group projects to students offers several advantages. Smart and
Csapo (2003) identified four benefits of having students work in groups:
“Enhancing communication and decision making, increasing productivity with higher
level of involvement, commitment and motivation, improving processes and
distributing workload” (p. 316).

Enhancing Communication and Decision Making


A recent survey asked a group of IT managers to rank the competencies they
consider important for newly hired employees. The IT managers ranked
communication skills (both oral and written) as the most important skill
for IT graduates to have (Kovac et al., 2005). The result of this study indicates the importance
of communication skills in the workplace. Efforts to enhance students' communication skills
may therefore contribute favorably to their performance in the workplace.

Communication skills can be enhanced in various ways, whether on an individual basis
or within a group. But it is clear that having students work together in groups leads them
to communicate more. Thus, group projects give students an arena in which they can become
accustomed to communicating with others, much as in the workplace.

Increasing Productivity
A study compared individual work performance with group work performance.
The study found that the performance of every group that participated exceeded
the performance of the best individual participant (West and Hollingsworth,
2004). This finding underscores the point that productivity increases as groups work together.

Distributing Workload and Improving Processes


An equally important factor in group projects is the ability to distribute the workload
among the group members and then improve the processes by which they complete the
assigned tasks. In projects where the workload becomes too large to be handled by one
student, distributing the workload among members of the group becomes essential: to
complete such large projects, the work can be divided and distributed among the
students.

IV. STUDENTS GROUP PROJECTS – CHALLENGES

This section explains some of the challenges or difficulties that face teachers
who attempt to assign group projects in their courses.
Forming the Group
The first challenge in assigning group projects is the method that the teacher
follows to form the groups. The teacher here is faced with two tasks: first, he/she
needs to sell the idea of group formation to the class (Cohen et al., 2004). Second, the teacher
needs to establish a procedure or mechanism for selecting and forming each group.
Selling the students on the advantages of team work is important to the
success of the work being done. The students, after all, are faced with the extra work of
communicating with their peers and with the task of coordinating their work with their
group members. Group formation can be left to the students, who form their own groups
through self-selection, or it can be done by the teacher - a process intended to increase
diversity in each group. Research has shown that groups formed by self-selection are the
least effective, while groups selected by teachers are the most effective (Smart
and Csapo, 2003).

Facilitating the work within the group


A challenge that faces groups in the workplace, especially in IT projects, is
how to facilitate the work of the individuals in the group so that the final result of the team
appears as one whole unit. Microsoft, for example, has programmers who work together on
different phases of its products, but the final result is presented as one product, such as a
Windows operating system. In similar fashion, group projects in a class setting need such
coordination. This raises the challenge of identifying each individual's distinct work and
then combining the pieces into one final product. In the interim, many problems may arise
among the different pieces of work in progress. Thus, there needs to be a mechanism for
managing the work individually and then combining it.

Group work evaluation


One of the critical aspects of group work is the method by which teachers evaluate the
work of the various individuals within the group. Researchers use the term
“free riders” to describe those students who do not contribute to the work of the team
or who contribute very little (Smart and Csapo, 2003). Different methods of group evaluation
have been suggested to minimize the “free riders” or “easy riders” within the group, such as peer
evaluation and the readiness assessment test (RAT). In peer evaluation, the members of the
group evaluate each other. The teacher may then assign a portion of the score based on
the opinions given by the peers in the group. With the RAT, students are tested on the concepts
taught in the course in addition to receiving scores for the group work. Individual
student scores on the group projects may be adjusted if there is a disparity in
performance between the two scores (Michaelsen et al., 2002).
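The peer-evaluation adjustment described above can be pictured as a simple weighted blend of the shared group score and the mean peer rating. This is an illustrative sketch only; the function, the 25% peer weight, and the 0-100 scales are hypothetical, not taken from Smart and Csapo (2003) or Michaelsen et al. (2002):

```python
def adjusted_project_score(group_score, peer_ratings, peer_weight=0.25):
    """Blend the shared group score with the mean rating a student
    received from teammates (all scores on a 0-100 scale).

    peer_weight is the fraction of the final score driven by peer opinion.
    """
    if not peer_ratings:
        return group_score  # no peer data: fall back to the group score
    mean_rating = sum(peer_ratings) / len(peer_ratings)
    return (1 - peer_weight) * group_score + peer_weight * mean_rating

# A member rated highly by teammates keeps (or gains) credit, while a
# "free rider" rated poorly loses part of the shared group score.
print(adjusted_project_score(90, [95, 100, 90]))  # -> 91.25
print(adjusted_project_score(90, [40, 50, 45]))   # -> 78.75
```

A teacher could raise `peer_weight` to make peer opinion count for more of the grade; at 0 the adjustment disappears and every member simply receives the group score.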

V. GROUP PROJECTS IN A WEB DESIGN COURSE

The Technology Support and Training department at Indiana University of
Pennsylvania offers a course in web design for its senior students. This section explains
the process of assigning group projects in this course. It shows that factors within the course
address many of the challenges mentioned earlier. The factors discussed here include the
design of the course, the adoption of the textbook, and the selection of the software.

Design of the Course


The design of the course has elements that help with group formation, with providing
feedback to the students, and with balancing group and individual work to promote
accountability. The final grade for the course is based on individual student projects, group
projects, and two exams. The group work is of two kinds: four group projects and
one comprehensive final project.

327
The formation of the groups starts at the beginning of the semester, when the teacher
surveys the students about their backgrounds and preferences. Each group works together
until the end of the semester; when it comes time to work on the final project, the students
may form new groups.

The role of each student in the group changes from one project to another. The
students take a rotating role in assuming the responsibility of project coordinator: a different
student is assigned to be the group coordinator for each project. The project coordinator is
responsible for collecting the individual work of the students, submitting it to the teacher,
presenting it in class, and providing the project completion form. This last form contains the
names of the students and a description of the tasks completed by each of them.

Textbook Selection
One of the textbooks selected for this course helps with assigning various group
projects. The textbook (Evans, 2004) is divided into nine tutorials and contains four case
projects at the end of each tutorial. Each tutorial describes steps that contribute, by the end, to
the formation of a final web site for a particular company. The case projects are applications
of the concepts learned in the tutorial; each case represents a web site for a different
company. The students work individually on the web sites described in the tutorial and work
in separate groups on each of the case projects listed at the end of the tutorial.

Software Selection
Microsoft FrontPage 2003 is one of the software packages used in this course. Two
features in FrontPage help with coordinating the groups' work and with monitoring the
performance of group members: the Tasks view and source control. In the Tasks view,
individual students record their tasks and describe the duties completed within the project.
With source control, students can “check out” individual pages to work on. Pages that are
checked out remain in the custody of the student working on them until they are checked in.
During check-out, other students cannot work on that particular file.
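The check-out/check-in mechanism described above can be illustrated with a minimal sketch of the underlying locking idea. This is an illustration only, not FrontPage's actual implementation; the class and method names are hypothetical:

```python
class PageLocks:
    """Toy model of source-control check-out: one student holds a page at a time."""

    def __init__(self):
        self._holder = {}  # page name -> student currently holding it

    def check_out(self, page, student):
        holder = self._holder.get(page)
        if holder is not None and holder != student:
            raise RuntimeError(f"{page} is checked out by {holder}")
        self._holder[page] = student

    def check_in(self, page, student):
        if self._holder.get(page) != student:
            raise RuntimeError(f"{student} does not hold {page}")
        del self._holder[page]

repo = PageLocks()
repo.check_out("index.htm", "alice")
try:
    repo.check_out("index.htm", "bob")  # blocked until Alice checks in
except RuntimeError as err:
    print(err)
repo.check_in("index.htm", "alice")
repo.check_out("index.htm", "bob")      # now succeeds
```

The exclusive lock is what prevents two group members from silently overwriting each other's changes to the same page.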

VI. CONCLUSION

This paper was about group projects and their inclusion as a tool for student learning
and evaluation in a web design course. It started by explaining the difference between team-
based learning and cooperative learning. It then explained the advantages and challenges of
adopting group projects in general. Finally, it concentrated on a web design course at
Indiana University of Pennsylvania and showed how the design of the course addresses the
challenges of group projects. However, the result of this design, its outcome, and the level of
student satisfaction are still unknown. The author plans to conduct a survey at the completion
of the semester, asking the students about their perspectives on the group projects and
soliciting suggestions on how to improve the process. The course was not yet complete at the
time of publishing this paper, so the survey and its results can be the subject of another paper
in the future.

REFERENCES

Cohen, Elizabeth G., Celeste M. Brody, & Mara Sapon-Shevin (2004). Teaching Cooperative
Learning: The Challenge for Teacher Education. Albany: State University of New York
Press.
Evans, Jessica (2003). New Perspectives: Microsoft FrontPage 2003 Comprehensive. Boston:
Course Technology.
Holmes, Monica C., Nancy Csapo & Fergle D’Aubeterre (2004). “Assessment: How to Get
Feedback to the Students.” Issues in Information Systems. Volume V, 502-508.
Kovac, Paul J., Gary A. Davis, Donald J. Caputo & John C. Turchek (2005). “Identifying
Competencies for the IT Workforce: A Quantitative Study.” Issues in Information Systems.
Volume VI, 339-345.
Michaelsen, L. K., A. B. Knight, and L. D. Fink (2003). Team Based Learning: A
Transformative Use of Small Groups. Westport, CT: Praeger.

PREDICTING INTERNET USE WITH THE TECHNOLOGY ACCEPTANCE
MODEL AND THE THEORY OF PLANNED BEHAVIOR

Marcelline Fusilier, Northwestern State University of Louisiana


fusilier@nsula.edu

Subhash Durlabhji, Northwestern State University of Louisiana


durlabhji@nsula.edu

ABSTRACT

Two popular technology acceptance models were used to predict behavioral intentions
and self-reports of Internet usage among college students in India. Like many countries in the
developing world, India’s population has the potential to gain economic and educational
benefits from increased involvement with the World Wide Web. Questionnaires containing
previously developed and validated scales were used to collect the data. Findings supported
both models. This suggests that predictors of technology acceptance developed for Western
samples may also apply in developing areas.

I. INTRODUCTION

India is prominent in the information and communications technology industries but it


has considerably fewer Internet users than Western countries, China, or Japan. The
availability of hardware and software will not necessarily result in widespread Internet use.
Warschauer (2003: 6) documented many well-intentioned projects around the world aimed at
improving people’s lives through information technology, most of which failed because of
“insufficient attention to the human and social systems that must also change for the
technology to make a difference.” Education is a potential key to developing a population's
Internet competence. This is important because the Internet can contribute to India's
economic development and enhance quality of life.

Considerable research has addressed behavioral determinants of technology


acceptance. Popular models are the theory of planned behavior (TPB) (Ajzen, 1985, 1991)
and the technology acceptance model (TAM) (Davis 1989, Davis, Bagozzi, & Warshaw
1989). The theory of planned behavior posits that behavioral intention to perform an activity
is determined by: (a) attitude, (b) perceived behavioral control, defined as the perception of
how easy or difficult it is to perform a behavior, and (c) subjective norm, defined as one's
beliefs about whether significant others think that one should engage in the activity.

TAM states that behavioral intention to use a technology derives from two beliefs:
perceived usefulness, defined as the expectation that the technology will enhance one's job
performance, and perceived ease of use, defined as the belief that using the technology will
be free of effort (Venkatesh & Davis 1996, Venkatesh 1999).

Literature Review. TAM and TPB have shown similar predictive efficacy for technology
usage criteria: spreadsheet use (Mathieson, 1991) and visits to a computer center (Taylor &
Todd, 1995). Both studies involved college student samples. Mahmood, Hall, and Swanberg
(2001) conducted a meta-analysis of 57 studies that identified factors related to information
technology use. Of all the variables analysed across the studies, the TAM components of
perceived usefulness and ease of use exhibited the largest effect sizes on technology use.
Attitude had a medium effect. TAM has support as a predictor of students' use of the Internet
and web sites (Anandarajan, Simmers, & Igbaria, 2000; Lederer, Maupin, Sena, & Zhuang, 2000;
Lin & Lu, 2000; Moon & Kim, 2001; Selim, 2003). In a partial test of the theory of planned
behavior, George (2002) reported that attitude was related to intention to use the Internet for
purchasing products. Furthermore, intention was linked to actual purchasing behavior. The
evidence indicates that the TAM is a powerful predictor of users' technology acceptance.
Although fewer studies have investigated the TPB in the technology usage context, it also
appears respectable as an explanatory framework. However, national culture may influence
the models’ effectiveness in predicting technology acceptance. A non-significant relationship
between perceived usefulness and microcomputer usage in Nigeria was attributed to
cultural factors (Anandarajan, Igbaria, & Anakwe, 2002). This finding was explained in light
of the abstractive versus associative character of cultures (Kedia & Bhagat, 1991).
Abstractive cultures employ linear thinking that uses a rational cause-effect paradigm to
create perceptions; this type of culture characterizes North America and Europe. Associative
cultures in Africa and Asia may not use a logical basis for linking events. Anandarajan et al.
reasoned that individuals in an associative culture might not connect perceptions of a
computer's usefulness with usage behavior. However, these authors did find that ease of use
and social pressure related as expected to usage behavior.

Shih and Venkatesh (2003) examined home computer use across the U.S., Sweden,
and India. Attitude and normative belief structures predicted (a) rate of use and (b) variety of
home computer uses for all three countries. More specifically, for the Indian sample, an
attitudinal belief concerning utilitarian outcomes of PC use was related to both usage criteria.
A control belief, difficulty of use, was negatively related to the usage measures. These
findings appear to provide some basis for the notion that perceived usefulness and ease of use
play a role in home computer use in India. Nevertheless, relatively little evidence pertains to
influences on technology or Internet acceptance among Indian users.

Present Study. This research explored the TAM and TPB in relation to Internet use
intentions and self-reported usage with a sample of college students in India. The models
specifically tested are diagrammed in Figure 1. The differences between the TAM and TPB
are as follows: the attitude and social influence (subjective norm) components are unique to
the TPB. In a less individualistic culture, such as India is purported to have, social factors
may be an important influence on behavior (Hofstede, 1991; Shih & Venkatesh, 2003).
Furthermore, while TAM states that behavior is affected by behavioral intention, TPB
specifies that perceived behavioral control and intention both impact actual behavior.

Figure 1
[Diagrams of the models tested. Technology Acceptance Model (TAM): Perceived
Usefulness and Perceived Ease of Use → Intention to Use → Internet Usage. Theory of
Planned Behavior (TPB): Attitude, Subjective Norm, and Perceived Behavioral Control →
Behavioral Intention → Internet Usage Behavior, with Perceived Behavioral Control also
linked directly to usage.]
II. METHOD

Sample and Procedure. Two hundred sixty-nine college students from two university
campuses in northwestern India took part in the study. Twenty-four of the students reported
that they were not Internet users. They were excluded from the study's analyses, resulting in a
sample of 245. The participating colleges and the numbers of students from each were (a)
engineering (67), (b) rural management (78), (c) arts and sciences (100). The rural
management program was in a geographically separate location from the other two colleges,
which shared a campus. There were 142 men and 103 women in the sample. Average age was
20 years. Ninety-six percent of the sample was under age 25. Seventy percent were
undergraduates and 30% were in a master's program. Average length of Internet experience
was 22 months. Data was collected via questionnaires that were administered to students
during classes. Participation was voluntary and confidential. English was the medium of
instruction at the institutions involved in the data collection; however the questionnaire
administrators were fluent in Hindi as well as English so that they could effectively answer
any questions that might arise. Language did not appear to be a problem for the respondents.

Measures. Validated scales from previous research were used to measure the variables of
interest. Self-reported Internet usage was operationalized with the one-item scale developed
by Davis (1989: 329), modified so that the focus was on the Internet. Six-position categorical
boxes were labeled “Don't use at all,” “Use less than once each week,” “Use several times a
week,” “Use about once each day,” and “Use several times each day.” The constructs of
subjective norm, perceived usefulness, perceived ease of use, and behavioral intention were
measured with scales from Venkatesh and Davis (2000). These scales have been validated
and high reliability reported for each. Behavioral intention and subjective norm were both
two-item measures. Four-item scales were employed for perceived usefulness and ease of
use. The word “Internet” was substituted for “system” in the scale items.

Perceived behavioral control and attitude were assessed with measures reported by
Taylor and Todd (1995). Again, the items were modified for Internet usage. Perceived
behavioral control was a three-item scale concerning the respondent's perceived control over,
and ease or difficulty of, using the Internet. A “don't know” option was provided in the
response scales for subjective norm, perceived usefulness, perceived ease of use, perceived
behavioral control, and behavioral intention. The purpose of this was to reduce random
responding and guessing on the part of respondents. Attitude was measured on a general
four-item affective scale. The items asked respondents whether using the Internet was
bad/good, foolish/wise, whether they disliked/liked using the Internet, and whether it was
unpleasant/pleasant.

III. RESULTS

Scale reliabilities are as follows for each of the scales: intention, .81; perceived
usefulness, .77; perceived ease of use, .83; subjective norm, .78; perceived behavioral
control, .80; attitude, .83. All were close to or above .80, an acceptable level consistent with
previous findings.
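The paper does not name the reliability statistic; coefficients of this kind are conventionally Cronbach's alpha, so treating them as such is an assumption. As a reference, the computation on a hypothetical respondents-by-items score matrix can be sketched as:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses of five students to a two-item scale (1-7 Likert).
demo = [[7, 6], [5, 5], [6, 6], [3, 4], [2, 2]]
print(round(cronbach_alpha(demo), 2))  # high inter-item agreement -> alpha near 1
```

Values near or above .80, like those reported here, indicate that the items of each scale vary together and can reasonably be summed into one score.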

TAM Results. Multiple regressions were used to investigate relationships among the
variables in the model. Results are presented in Table I. Both perceived usefulness and ease
of use were statistically significant predictors of intention to use the Internet. Furthermore,
they explained 35% of the variance in usage intention. Significant correlations were evident
for behavioral intention and the Internet usage measure (r = .24, p < .001) as well as between
ease of use and perceived usefulness (r = .49, p < .001).

Table I: Multiple Regression Results For The Tam Analysis (Dependent Variable =
Behavioral Intention To Use The Internet)

Variable Beta R R2 F
Perceived Ease of Use .29***
Perceived Usefulness .39*** .59 .35 64.53***
***p < .001
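The quantities in the table above (standardized betas and R2) come from an ordinary least squares regression on standardized variables. The sketch below illustrates the type of analysis; the data generated are synthetic stand-ins for the survey scales, not the study's data, and the coefficient values are borrowed from the reported results only to make the simulation plausible:

```python
import numpy as np

def standardized_ols(X, y):
    """OLS on z-scored predictors and criterion; returns the standardized
    betas and R-squared, the quantities reported in Tables I-III."""
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    zy = (y - y.mean()) / y.std(ddof=1)
    A = np.column_stack([np.ones(len(Z)), Z])      # add an intercept column
    coef, *_ = np.linalg.lstsq(A, zy, rcond=None)
    resid = zy - A @ coef
    r2 = 1 - (resid ** 2).sum() / ((zy - zy.mean()) ** 2).sum()
    return coef[1:], r2

# Synthetic stand-ins: two correlated predictors and a criterion, n = 245.
rng = np.random.default_rng(1)
ease = rng.normal(size=245)
useful = 0.49 * ease + rng.normal(size=245)        # correlated predictors
intention = 0.29 * ease + 0.39 * useful + rng.normal(size=245)
betas, r2 = standardized_ols(np.column_stack([ease, useful]), intention)
print(betas, r2)
```

Because both predictors and criterion are z-scored, the fitted coefficients are directly comparable in size, which is why the tables report betas rather than raw slopes.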

TPB Results. Regression results displayed in Table II suggest that all three of the model's
predictor variables, subjective norm, attitude, and perceived behavioral control, were
significantly and positively related to behavioral intention to use the Internet. The
independent variables explained 31% of the variance in the criterion.

Table II: Multiple Regression Results For The Tpb Analysis (Dependent Variable =
Behavioral Intention To Use The Internet)

Variable Beta R R2 F
Attitude .13*
Perceived Behavioral Control .40***
Subjective Norm .22*** .56 .31 38.10***
*p < .05 **p < .01 ***p < .001

Regression results concerning prediction of the Internet usage measure suggested a
positive relationship for perceived behavioral control, but the relationship for behavioral
intention was non-significant (see Table III). Eleven percent of the variance in the usage
behavior measure was explained.

Table III: Multiple Regression Results For The Tpb Analysis (Dependent Variable =
Frequency Of Internet Use)

Variable Beta R R2 F
Behavioral Intention .06
Perceived Behavioral Control .30*** .33 .11 15.96**
*p < .05 **p < .01 ***p < .001

IV. CONCLUSION

TAM and TPB were supported as predictors of intentions to use the Internet. Smaller
but statistically significant percentages of the variance in self-reported usage were also
explained. Behavioral intention, however, did not predict Internet use, a finding inconsistent
with TPB. Perceived behavioral control had a stronger impact on Internet use. The addition of
a computer self-efficacy construct (Fenech, 1998) to TAM has been proposed to strengthen
the model's prediction of Internet usage behavior. The present finding concerning the impact
of control on usage appears to support the incorporation of some type of self-efficacy
component into Internet use models.

Contrary to the findings of Davis et al. (1989) and Mathieson (1991), the present
research found that subjective norm was related to intention. This may derive from the
differences in the national culture of the data collection contexts. Alternatively, Taylor and
Todd (1995) suggested that subjective norm is more important for inexperienced users. The
average level of Internet experience for the present sample was 22 months, considerably less
than has been reported for U.S. college students (Jones, 2002). Future research might focus
on separating the effects of national culture and experience as influences on Internet usage.
The present findings were also contrary to the notion that a relationship between perceived
usefulness and usage criteria may not exist in Asian (associative) cultures (Anandarajan et al.,
2002). Perceived usefulness and behavioral intention were significantly related in the present
case, consistent with the results of Shih and Venkatesh (2003).

The relationships found in the present study were not as strong as in some previous
reports (Mathieson, 1991; Taylor & Todd, 1995). A potential explanation for this difference
concerns Internet access. Much TAM research has been conducted in settings where
technology use was readily available to students or even mandatory for employees. The
university campuses on which the present data was collected were observed to have less in
the way of computer hardware and fewer Internet connections than many universities in the
U.S. These limitations on access may interfere with intentions to use the Internet and also
with the intention-usage relationship. From a practical perspective, the findings suggest that
educational administrators might focus on creating perceptions among students that the
Internet is easy to use and useful, as well as fostering positive attitudes toward Internet use,
creating social expectations regarding usage, and improving students' sense of their ability to
use the Internet. Future research might directly compare TAM and TPB across Western and
Indian samples of users. The present study’s supportive results suggest that the models could
provide specific avenues for encouraging the user acceptance needed to fully participate in a
global information society.

REFERENCES

Ajzen, I. “From Intentions to Actions: A Theory of Planned Behavior.” In Action Control:
From Cognition to Behavior, edited by J. Kuhl and J. Beckmann, New York:
Springer Verlag, 1985, pp. 11-39.
Ajzen, I. “The Theory of Planned Behavior.” Organizational Behavior and Human Decision
Processes, 50, 1991, 179-211.
Anandarajan, M., Igbaria, M., and Anakwe, U.P. “IT Acceptance in a Less-Developed
Country: A Motivational Factor Perspective.” International Journal of Information
Management, 22, (1), 2002, 47-65.

GLOBALIZATION AND ITS IMPACT ON AFRICA’S TRADE

Semere Haile, Grambling State University


hailes@gram.edu

ABSTRACT

Trade policies represent a serious constraint on Africa's integration into the global
economy. African exports have been handicapped by industrial country policies such as tariff
escalation, tariff peaks, and agricultural protectionism. This paper examines the trade of
African countries in a broad spectrum within the context of globalization. It looks at what
Africa needs to do to benefit from existing and future opportunities in the global trading
system. It further discusses the improvements in internal conditions required if Africa is to
improve its position in the global economy. The paper also explores how African perceptions
of globalization have been overshadowed by war and continuing conflicts on the continent.

I. INTRODUCTION

Globalization is a result of the expansion, diversification, and deepening of trade and
financial links between countries. According to officials of the International Monetary
Fund (IMF), globalization is defined as the growing economic integration of goods, services,
and capital markets (IMF, 1997). For the IMF, globalization is a process that erases existing
national boundaries, pointing states toward integrated world economic activity controlled
by the invisible hand of the free market system (Kalu, 2004). As globalization increasingly
builds connections between regions of the world, it may result in greater inequality between
nations in the global market. According to the National Intelligence Council (NIC) 2005
Report, Africa has been the continent least positively affected by globalization to date, and
the challenge of taking advantage of the positive trends in the global economy will be
substantial. The already meager share of global income held by the poorest people in the
world has dropped from 2.3% to 1.4% in the last decade. But even in the developed world,
not everyone has been a winner (BBC NEWS, 2000). African countries' position in the
international system has been considerably weakened by the fact that they have been losing
the race for economic development in general, and human development in particular, to other
regions. This poor performance by African countries accounts in part for the continent's
political and social instability. The rise of the authoritarian regimes that have characterized
much of post-colonial Africa has also further weakened the ability of African countries to
deal effectively with globalization (Addis Ababa, 2002). This paper examines the trade of
African countries in a broad spectrum within the context of globalization.

II. INEQUALITY UNDER GLOBALIZATION

In an international context, cotton pricing is an example of the longstanding tension
between developed and developing countries over agricultural pricing. A prime example of
these trade distortions is agricultural export subsidies. The US maintains agriculture
subsidies that are greater than the total income of African countries. The US spends about
$3-5 billion a year on subsidies to cotton farmers, which lowers the global price of cotton and
hurts 10 million African cotton growers. The asymmetrical nature of trade also occurs in the
trade of manufactured goods, with escalating tariffs on industrial products targeting the
poorest African countries (O’Shea, 2005). Africa’s income gap, relative to the advanced
countries, has widened, and per capita incomes in a number of countries have actually
dropped (Calamitsis, 2001). These income gaps have created imbalanced development and
poverty in Africa, which has also affected global capital flows into the continent (Gondwe,
2001). Consequently, globalization needs reforms in order to create a fairer set of policies in
the area of trade. The current global policies of trade are grossly unfair to African countries
and others in the developing world (Stiglitz, 2003). Critics of globalization argue that
globalization has paralyzed Africa by turning the continent into a cluster of wagon economies
whose engines are in the developed countries. To use an analogy, a donkey and an elephant
cannot be yoked together to pull a plow, for they are not of the same size or strength. Yet this
is what globalization has done to Africa. Due to differences in weight and size, the weaker
side struggles to keep pace, while the stronger one reaps the benefits of globalization
disproportionately (Mao, 2003).

According to the NIC 2005 report, globalization will accelerate increasing
differentiation among and within African countries. This inequality between countries is a
result of the asymmetrical nature of the trade agreements that have grown out of
globalization. A study showed that globalizing poor countries recorded income gains of 5
percent per year in the 1990s, 2 1/2 times faster than the advanced countries. By contrast,
non-globalizing poor countries had no income gains at all (Mallaby, 2005).

III. INTEGRATION OF AFRICA IN THE GLOBAL ECONOMY

Africa has been integrated into the global economy as an exporter of primary
commodities and an importer of manufactured products. Between 1960 and 1969, Africa’s
average share of total world exports was 5.3 percent and of total world imports 5.0
percent. These figures dropped to 2.3 percent and 2.2 percent, respectively, between 1990
and 1998. The decline in Africa’s share of total world exports is attributed to restrictive
trade policies, slow growth of per capita income, high transportation
costs, and the continent’s distance from major markets. Africa has also failed to attract the
capital flows it needs because of negative perceptions of the continent’s economic and
political activities, its poor infrastructure, inadequate legal framework, and lack of contract
enforcement (Ajayi, 2001; The Economist, 1994; Financial Times, 1994). For
example, more than 75 percent of African countries had trade regimes classified as
“restrictive” by the IMF in 1990. Now only 14 percent of African countries’ trade regimes
are classified as restrictive, while 43 percent are classified as open. Yet, on average, Africa’s
trade policies remain more protectionist than those of other countries including its major
trading partners and competitors (Sharer, 2001; Ajayi, 2001). In contrast, 61 percent of
countries outside Africa have trade regimes classified as open. All industrial countries
maintain open trade regimes. Africa’s current average tariff of about 19 percent is still higher
than the average of 12 percent for the rest of the world (Sharer, 2001). The 2005 Annual
Report by the World Bank Group also finds that African nations impose the most regulatory
obstacles on entrepreneurs and have been the slowest reformers in 2004. For example, an
entrepreneur in Mozambique must undergo 14 separate procedures taking 153 days to register
a new business. In Sierra Leone, if all business taxes were paid, they would eat up 164
percent of a company’s gross profits. In Burundi, it takes 55 signatures and 124 days from
the time imported goods arrive in ports until they reach the factory gate.

IV. IMPACT ON ECONOMIC AND POLITICAL DEVELOPMENT

The globalization of economic factors in Africa is linked to political, legal, and
cultural issues, and different African countries have different economic systems. Most
African countries gained their independence from colonial powers in the 1950s and 1960s.
Many of their first leaders adhered to socialist theories and applied them to govern their
respective countries (Hill, 1999). The cold war witnessed the emergence of authoritarian
regimes in most African countries in the form of one-party or military regimes. This was
largely a result of support from the two blocs, which sought to keep African countries in their respective
camps from the 1960s through most of the 1980s. Both one-party and military regimes
inhibited the emergence of democratic governance and economic development approaches in
Africa. Consequently, the cold war contributed little beyond egregious violations of
citizens’ civil liberties and economic disasters across the continent (Kalu, 2004; Hill, 1999;
Addis Ababa, 2002).

Communist, socialist, Marxist, and state-controlled approaches to African development
generally failed. By the same token, the Western model of democracy, capitalism, and free
markets has also failed in Africa. The rationale for the failure of these
economic development approaches was that they were at odds with realities on the ground,
having been imported wholesale from outsiders (Kanuma, 2002). Some countries still operate
with a modified version of communism as an official economic approach, which also has
essentially failed. In recent years, both socialism and totalitarianism are slowly retreating
across the continent. Since the late 1980s, some 42 African countries have abandoned their
experiments with socialism and moved toward more democratic modes of government and
free market reforms (Hill, 1999; Swarns, 2002). For more than three decades, most poorly
governed African countries resisted adopting the market-oriented policies and reforms needed
to tap the benefits of globalization. The result was that economies stagnated or declined and
poverty increased (Hill, 1999; Calamitsis,
2001). A study shows that 48 percent of African people are living in extreme poverty (Nsouli
& Le Gall, 2001). Consequently, most African people have never reached their full potential
as a result of more than 30 years of misrule, mismanagement, and other historical misfortunes
on the continent (Kanuma, 2002).

According to the Economic Report on Africa (ERA) 2004, one of the principal
reasons Africa's economic performance has been held back is the continuation of military
conflicts. These political crises have had a significant impact on the social and economic
conditions of neighboring countries in the continent. African policy makers are aware of the
fact that substantial improvements in the economic and social situation of their populations
are contingent upon the maintenance of peace. Without peace, little or nothing can be
achieved. The ERA 2004 further noted that recent empirical research has shown how political
instability adversely affects human development as well as gross domestic product (GDP)
and export growth in Africa. Any improvement in Africa's economic and human development
will be constrained until all the political actors implicated (politicians, civil society, foreign
governments, and international organizations) make a concerted effort to resolve these
conflicts. For example, the persistence of a number of lower-intensity conflicts (as in Uganda
and Sudan) has continued to handicap progress in social, economic, and political fields. A
healthy business climate enhances foreign direct investment, which in turn boosts economic
growth in African countries (Harms and
Ursprung, 2002).

Although many trade and other regional cooperation agreements existed on paper,
there was a lack of political will, or of physical infrastructure, to make them work.
Nevertheless, regional integration could be an effective vehicle for integrating Africa into the
global economy. Much had to be done to create the conditions for reducing poverty. With the
support of regional and international organizations, African countries will be able to meet the
challenges of increasing growth, reducing poverty, and thus lay the foundation for political,
economic, and social stability (Daouas, 2001). Regional economic integration is also a
necessary element for ensuring Africa’s active integration into globalization (Daouas, 2001;
Gondwe, 2001). By introducing more open policies in its own markets, Africa has the
potential to create new business opportunities by expanding integrated production
across the continent in agriculture, industry, commerce, finance, and social services. In doing
so, regional economic integration and cooperation, trade and investment, economic
efficiency, and growth will be enhanced (Sharer, 2001). The ERA 2004 further indicates that
Africa needs to make a concerted effort in reforming its own economies through a large
diversification of its productive structure if progress is to be made. Africa also clearly needs
to adopt more proactive policies in order to promote the integration of the continent into the
global economy.

V. IMPACT ON TRADE AND ECONOMIC GROWTH

According to the ERA 2004, African exports have been handicapped by industrial
country policies such as tariff escalation, tariff peaks and agricultural protectionism. At the
same time, the report noted that improvement is required in internal conditions, if the
continent is to improve its position in the international economy. The ERA 2004 further
noted that weak infrastructure, poor trade facilitation services, and the lack of physical and
human capital pose major impediments to export sector development. Despite insufficient
progress towards fulfilling the Millennium Development Goals (MDGs), and the persistence
of political, social and economic problems in the continent, Africa has been making progress
since the lost decades of the 1980s and 1990s. In 2003, Africa was the second fastest growing
region in the developing world, behind Eastern and Southern Asia. Higher oil prices and
production, rising commodity prices, increased foreign direct investments, better
macroeconomic management, backed up by good weather conditions, supported this high
growth. As a result, real GDP grew at 3.6 per cent in 2003 compared to 3.2 per cent in 2002,
with North Africa putting in a strong performance (of 4.7 per cent). West and Central Africa
also exhibited respectable growth rates above 3.5 per cent. East and Southern Africa, in
contrast, registered paltry growth of 2.5 per cent (see Figure 1.1).

Figure 1.1: North Africa tops sub-regional economic performance in 2003

Source: The ERA 2004

According to ERA 2004, however, some African economies experienced negative growth
rates. When compared with the growth figures for 2001 and 2002, it becomes clear that there
has been a slight decline in aggregate economic performance for Sub-Saharan Africa (SSA),
from 3.5 per cent in 2002 to only 2.9 per cent in 2003 (see Figure 1.2).

Figure 1.2: Rates of economic growth, North and Sub-Saharan Africa, 2001-3

Source: ERA 2004

According to the ERA 2004, the real per capita growth rates for North Africa and
SSA in 2003 are approximately 2.7 per cent and 1.7 per cent respectively, rates which are
inadequate to achieve the MDGs for poverty reduction. The recent establishment of a new
Commission for Africa, launched by British Prime Minister Tony Blair in March 2004,
represents an important acknowledgement of the need to address the problem of Africa's
underperformance.

VI. CONCLUSION

This paper examined the trade of African countries in a broad spectrum within the
context of globalization. Globalization can promote both growth and decline. Whatever its
impact on the continent, studies indicate that Africa cannot advance by isolating itself from
the process. Therefore, Africa needs to adopt more proactive policies in order to promote the
integration of the continent into the global economy. Among other things, Africa needs to
strengthen its energy policy, trade facilitation, and
competitiveness. Another principal reason Africa's economic performance has been held back
is the continuation of military conflicts. Political instability in several parts of Africa has had
a significant impact on the social and economic conditions of
neighboring countries. African policy makers are aware of the fact that substantial
improvements in the economic and social situation of their populations are contingent upon
the maintenance of peace. Without peace, little or nothing can be achieved.

REFERENCES

“A Fair Globalization: Creating Opportunities For All.” (January 23, 2004).
www.ilo.org/wcsdg/consulta/index.htm.
Ajayi, Ibi S. (December 2001). What Africa Needs To Do To Benefit From Globalization?
Finance & Development, 38: 4, 6-8.
“Can Africa Put Its Own House In Order? Development And Governance: Africa
Acknowledges It Must Help Itself.” (July 7, 2005). The Economist print edition.
Economic Report On Africa 2004. (April 28, 2004). Unlocking Africa’s Trade Potential In
The Global Economy: Overview. www.uneca.org/cfm/2004.
Kalu, Kelechi A. (March 2004). Globalization And Its Impact On Indigenous Governance
Structure. Southwestern Journal Of International Studies, 29-60.
Mao, Robert. (November 3, 2003). Unevenly Yoked: Has Globalization Dealt Africa A Bad
Hand?
www.yaleglobal.yale.edu/display.article?id=2721.
National Intelligence Council. (March 2005). Mapping Sub-Saharan Africa’s Future.
Conference Report.
Schaefer, F. & Schavey, B. (2002). Foreign Aid Should Follow Free-Market Reform. The
Heritage Foundation.
Stiglitz, Joseph. (2003). Globalization And Its Discontents. New York, NY: W. W. Norton &
Company.
“The Challenges Of Globalization In Africa: What Role For Civil Society And Other
Stakeholders?” (2002). Conference On The Challenges Of Globalization To
Democratic Governance In Africa, Addis Ababa, Ethiopia.

UNIVERSITY EDUCATION, PERFORMANCE STANDARDS
AND THE REALITIES OF A GLOBAL MARKETPLACE

Melissa Northam, Troy University


mnortham@sw.rr.com

ABSTRACT

Americans are living in uncertain times as new global forces act on the economy.
Labor markets are experiencing difficult transitions brought about by intense global
competition. This paper examines certain practices in university classrooms which may be
putting our long-term economic health in jeopardy. One such practice is a cultural focus on
“feeling good about oneself” at the expense of “producing measurable results”. Building on
lessons from capitalism, the author makes a case for challenging students with higher
standards. A fundamental reality exists in a market-driven economy: businesses do not pay
for what they do not need. Universities must realistically educate students for a global
economy; the public would benefit by insisting on performance from its investment dollars in
higher education.

I. INTRODUCTION

As a culture, are Americans indulging themselves today in ways that will hurt us in
the future? Increasingly, thoughtful people are raising the question – about values, standards,
and education – and growing concerned at what they discover. The competitiveness of
America is the subject of heated debates. For example, Colvin (2005) made the case that
“Can America compete?” is not the right question in light of pressing global realities; the
more relevant question is “Can Americans compete?” His conclusion is that we’re not
building human capital the way we used to. He argues that our “greatest challenge will be
changing a culture that neither values education nor sacrifices the present for the future as
much as it used to – or as much as our (global) competitors do” (pg 82). John Doerr, one of
Silicon Valley’s most influential venture capitalists, calls “education the largest and most
screwed-up part of the American economy” (pg 82).

Wooldridge concluded that “American universities are acquiring a growing catalogue
of bad habits that could one day leave them vulnerable to competitors from other parts of the
world, although probably not from Europe, which has overwhelming academic problems of
its own” (2005, pg 9). “If current trends continue,” says Richard Freeman, director of labor
studies at the National Bureau of Economic Research, “by 2010, China will produce more
science and engineering PhDs than the U.S.” (Einhorn and Carey, 2005, pg 116). Such
statistics are setting off alarms, as expressed by Johns Hopkins President William Brody to
Congress: “There is a good chance that U.S. competitiveness in vitally important high-tech
areas will fall behind that of China” (pg 116). Today, China and India graduate a combined
250,000 engineers and scientists a year, vs. 60,000 in the U.S. (Engardio, pg 56) – a
disturbing figure, since the foundation of economic progress has historically been
technological change. Never
has the world seen the simultaneous, sustained takeoffs of two nations that together account
for one third of the planet’s population.

This is especially troubling when developing nations are eyeing the American
standard of living with envy, and national boundaries of competitive advantage are
dissolving. Innovative ideas circle the globe at breakneck speed, relentlessly pursued by
huge amounts of investment capital eagerly searching for opportunities, regardless of national
origin.

II. WARNING SIGNS

It is time to get serious about what we need our system of higher education to do.
We’re looking at millions of people around the globe getting better every day at what has
made us the world’s leading economy. Is the competitiveness of the American workforce in
jeopardy? As a business school professor, it is hard to ignore certain patterns of behavior in
the classroom. As taxpayers, we should be asking hard questions of our educational dollars,
because these dollars must fund critical national priorities and investments – viz. adequate
skills to compete successfully in a global economy, where “globally competitive” has
propelled itself into the national lexicon with frightening force and intensity. A compelling
argument can even be made that the direction our economic future takes is as critical to
society as homeland security.

Salerno (2005) identified three broad cultural themes increasingly used to characterize
American values, each of which can produce irrevocable damage to society.
Victimization tries to explain/rationalize an individual’s problems by pointing at others.
Recovery is responsible for a variety of disease definitions. But empowerment has the
potential for greatest harm. In the educational system, empowerment has led to serious social
consequences, especially in combination with self esteem. Together, these two encourage
individuals to “focus on feeling good about themselves” as opposed to “producing
measurable accomplishments”. This movement, according to Mr. Salerno, has produced
declining performance and swelling heads. One recent study of math abilities placed U.S.
kids dead-last in competence among the eight countries tested. South Koreans were on top.
Ironically, the American students had the highest opinion of their math skills, while the South
Koreans were most critical of their own.

III. THE COLLEGE EXPERIENCE

Recent books have essentially acknowledged that learning makes up a small part of
contemporary college life. Nathan (2005) concluded that students don’t read what they’re
assigned, so it’s important to assign them less reading. Seaman (2005) examined general
student life at twelve elite institutions, noting the rise in drinking and the advent of grade
inflation in the past few decades. He also tackled the decline in academic rigor. Although his was an
unrepresentative group, these twelve elite institutions do set the tone for much thinking about
higher education, and cannot be ignored. Douthat (2005) argued that high school students
compete furiously to get into Ivy League universities, but are seldom stretched when they
arrive. To be objective, in its 2005 annual Survey of Higher Education, The Economist
concluded America’s system of higher education is the best in the world, precisely because
there is no system (e.g. no central planning at the federal level and a market-oriented
approach).

Yet, massive changes are threatening the traditional (elite) university for four reasons:
“massification” or the democratization of higher education, which is quickly spreading to the
developing world (e.g. China doubled its student population in the late 1990s, and India is
trying to follow suit); the rise of the knowledge economy; globalization; and competition
(Wooldridge, 2005, pg 3-4). The problem for policymakers is “how to create a system of
higher education that balances the twin demands of excellence and mass access, that makes
room for global elite universities while also catering for large numbers of average students,
that exploits the opportunities provided by new technology while also recognizing that
education requires a human touch” (pg 4).

IV. LARGE NUMBERS OF AVERAGE STUDENTS

This paper focuses on these “large numbers of average students”, since most
American universities (~3500) do not fall into The Economist’s 100 global elite. Exactly
what is the purpose of our system of higher education that is aimed squarely at this specific
market segment? What are realistic expectations to have of these students? Is society getting
its money’s worth?

V. HISTORICAL PERSPECTIVE

Almost from its inception, America has placed a high value on education. As one of
America’s earliest educators, Thomas Jefferson described the purpose of the university:
It cannot be but that each generation succeeding to the knowledge
acquired by all those who preceded it, adding it to their own acquisitions and discoveries, and
handing the mass down for successive and constant accumulation, must advance the
knowledge and well-being of mankind.
Benefiting the republic was one reason Jefferson founded a public university. He argued that
a proper education would advance “the prosperity, the power, and the happiness of a nation”
(Schaefer, 2004).
The psychologist Jean Piaget had a similar perspective:
“The principal goal of education in the schools should be creating men and women who are
capable of doing new things, not simply repeating what other generations have done; men
and women who are creative, inventive, and discoverers, who can be critical and verify, not
accept, everything offered.”

VI. THE CONTEMPORARY CLASSROOM

Yet from where I sit in the classroom of “large numbers of average students”, I see a
large group of students whose primary focus is obtaining a piece of paper. I see a move
toward “standardization” and “templates” that undermine creativity. Pragmatically, template
jobs are the ones most at risk in a global marketplace. “Jobs that will pay well in the future,”
says Coy, “will be ones that are hard to reduce to a recipe. These attractive jobs. . . require
flexibility, creativity, and lifelong learning” (2004, pg 50). Textbook knowledge alone is
inadequate in a global marketplace because workers in other countries read the same
textbooks. Nor are new ideas and innovations sufficient without a motivated, educated
workforce to efficiently and effectively exploit them. An education based on cultivating
skills and developing perspective, on informed judgment and continued learning, leads to
jobs that cannot be easily outsourced.

What I see too often are students whose primary focus is a grade, and a piece of paper
that says they’re educated. Hutchinson’s concern is that “. . . near automatic graduation of
college students who do little besides pay tuition, attend class, and act responsibly has diluted
the college-educated workforce. The baccalaureate degree is supposed to distinguish its
recipients as interested, literate, and able to analyze and problem-solve. As this guarantee is
eroded, so is the confidence of employers in the quality of the college-educated” (2005, pg
23).

I also see grade distributions that reflect pressures unrelated to actual student
performance – given by faculty who are being rated by their “customers”. One could argue
that one of the most misleading myths of American higher education is that education is a
business, students are customers, and educators must do whatever makes them happy or lose
them to the competition. Students are not customers, at least not in the conventional sense of
the word. They do not know what they need in terms of skills and knowledge.

VII. GRADES AS SIGNALS

Grades have all but ceased to communicate honest assessments of student
performance. Recently a distinguished Harvard professor developed a solution for the
“game” of grades. He assigns two grades: one for the “official transcript” and the other
(known only to the student) to reflect the student’s actual performance (Wall Street Journal,
2003). A recent study found that nearly half the course grades given out on the Princeton
campus are ‘A-’ or above; university grades have come to operate under a sort of Gresham’s
Law, where easy A’s drive out honest B’s and C’s (Wall Street Journal, 2004).

I also see a troubling (student) culture moving toward a system where a large group of
people demand compensation for their intentions rather than their results, or worse, for their
subjective evaluations of their own performance. For example, I’m constantly amazed by the
number of mature students who argue they should receive an ‘A’ because they are ‘A’
students - a rather circuitous argument in logic. Logic suggests an individual should earn
rewards above the norm only when he has produced superior results. The “norm” has lately
been revised.

It is mathematically impossible for everyone to be “above average”. Yet the
statistical concept of “average” has been redefined, of late, to include political and social
niceties. In a classroom where everyone receives an ‘A’ or a ‘B’, however, either the
standards are too low or the grading is not meaningful. Although low expectations often
originate in good intentions, they generally lead to low outcomes. The consequences of low
expectations in universities – similar to problems found in lower levels of the educational
system – are that students often progress to the next level without properly mastering the
skills necessary to be successful there. What is the consequence to society when unrealistic
evaluations are transferred from one venue to another – from schools into the workplace? In
the workplace the realities of a global marketplace – and its world-class competitors –
become impossible to ignore.

One of the harshest realities of global markets centers on costs, and particularly labor
costs. In fact, labor accounts for about 2/3 of the cost of making and selling products, making
greater labor productivity essential to survival in today’s economy (Cooper, 2004). In an
environment characterized by competitive pressures, organizations cannot afford to
over-reward employees, at least not for long. According to Colvin (2005), American workers are
enormously more expensive than their peers almost anywhere in the world except in Western
Europe (which has its own social and economic problems). The downward pressure on U.S.
wages cannot be ignored, and in fact, Mandel notes that since 2000, the college wage
premium has shrunk (2005).

VIII. CAPITALISM AND PERFORMANCE

Perhaps capitalism can provide some useful direction. As the most creative and
dynamic of all economic systems, it has the ability to provide long-term global competitive
advantages and high standards of living. Three centuries of technology breakthroughs, in
fact, are the root of today’s abundance in the developed world, and those with a technological
edge (America, Japan, and Western Europe) still have the highest standard of living (Colvin,
2005).

But capitalism is also very demanding, and potentially brutal: it makes the most
specific demands of its citizens; it requires, at times, government to intervene to moderate its
injustices; it can be extremely harsh while competition sorts out market inefficiencies in an
altogether detached way. In free markets, employment and prosperity are the responsibility
of private enterprise. And to thrive, private enterprise demands performance. . . independent
of race, gender, IQ, or pedigree. Joseph Schumpeter’s creative destruction has allowed
American markets – capital, products, labor – to adjust to change faster than virtually any
other country, and that has been crucial to our prosperity. But our schools are no longer
keeping up.

Capitalism relentlessly pushes turmoil and change, inspiring new technologies,
products, and profit opportunities. But the profit motive puts distinct pressures on business,
and one pressure cannot be ignored. Business does not pay for what it does not need. This
fact is brought home, for example, by the layoffs of highly skilled individuals in the airline
industry and the high social costs of downsizing.

Yet the fundamental business reality remains: businesses do not pay for what they do
not need. And businesses do not need a workforce that is accustomed to – or expects –
“above average” rewards for “average” contributions. It would appear that some of the
“lessons” being communicated in American universities are not the ones that are needed in
today’s job market.

IX. THE BOTTOM LINE

Encouragement and reassurance are vital parts of education, and no educational
system can excel without them. But what matters most is what a student knows. Education
must focus on challenging students, not unrealistically nurturing them, to produce
competitive results. Now is exactly the wrong time to bury our heads in the sand. Thoughtful
people are examining our educational system in the face of international competition, and
finding it lacking. The issue is not whether we have the resources. We do, for now. The
issue is whether we have the will and discipline to use our educational resources to buy
proper investments for the future. In the meantime, many of our most formidable global
competitors are obsessed with education, and it is highly unlikely they will wait for us to
decide to come to our senses.

REFERENCES

A. BOOKS:
Douthat, R. G. Privilege: Harvard and the Education of the Ruling Class. NY: Hyperion,
2005.
Nathan, R. My Freshman Year: What a Professor Learned by Becoming a Student. Cornell
University Press, 2005.
Salerno, S. Sham: How the Self-Help Movement Made America Helpless. NY: Crown,
2005.
Seaman, B. Binge: What Your College Student Won’t Tell You. NY: John Wiley, 2005.

B. JOURNAL ARTICLES:
Colvin, Geoffrey. “Can Americans Compete?” Fortune, July 25, 2005, 70-82.
Cooper, James C. “The Price of Efficiency.” Business Week, March 22, 2004, 8-42.
Coy, Peter. “The Future of Work.” Business Week, March 22, 2004, 50-52.
Einhorn, B. and J. Carey. “A New Lab Partner for the U.S.?” Business Week, Aug 22, 2005,
116-117.
Engardio, Pete. “A New World Economy.” Business Week, August 22/29, 2005, 52-58.
Hutchinson, Alvin. Letter to the editor. Business Week, October 3, 2005, 23.
“Low Marks for High Marks.” Wall Street Journal, September 5, 2003, W15.
Mandel, Michael J. “College: The Payoff Shrinks.” Business Week, September 12, 2005,
48.
Mandel, Michael J. “Productivity: Who Wins, Who Loses.” Business Week, March 22, 2004, 44-46.
“Oh Say Can You ‘C’?” Wall Street Journal, April 4, 2004, W15.
Schaefer, Naomi. “A Little Learning.” Wall Street Journal, April 23, 2004, W11.
Wooldridge, Adrian. “The Brains Business: A Survey of Higher Education.” The
Economist, September 10, 2005, 3-22.

INTERNET-BASED MARKETING COMMUNICATION AND THE
PERFORMANCE OF SAUDI BUSINESSES

Abdulwahab S. AlKahtani, King Fahd University of Petroleum and Minerals


gahtania@kfupm.edu.sa

ABSTRACT

The objective of this study is to investigate the relationship between the usage of
Internet by Saudi businesses as a marketing communication tool and some of the measures of
organizational performance. Results showed that businesses which utilize the Internet to
communicate and promote their products reported higher levels of sales, profits, image, and
customer satisfaction.

I. INTRODUCTION

Although Saudi businesses play a major role in the economic development of Saudi
Arabia, technological advances have not been adopted on a large scale to help them
successfully communicate and market their offerings. Saudi businesses have to recognize the
growing popularity of, and need for, the Internet as a marketing communication tool.
The importance of the Internet stems from the fact that many people all over the world spend
more time searching it for marketing information. The usage of the Internet presents an
alternative channel to overcome some of the strategic challenges that face Saudi Arabia,
including the global access of Saudi products. The purpose of this study is to gain more
knowledge about the usage and impact of the Internet on the performance of Saudi
businesses.

II. LITERATURE REVIEW

The Internet is emerging as a powerful communication force, allowing firms to serve
customers, collaborate with partners and suppliers, and empower employees more
effectively. Murphy (1999), Sawhney and Zabin (2002), and Drennan and McColl-Kennedy
(2003) stated that the usage of the Internet by companies has helped to boost their
performance in terms of sales and profits. Shen (2002) contended that within a short span of
six years, the Internet has evolved into an important medium for advertisers and marketers for
both branding and direct selling purposes.

Hypothesis #1: Saudi businesses with Internet-based marketing communications have higher
sales than businesses that do not utilize the Internet to market their products
and services.
Hypothesis #2: Saudi businesses with Internet-based marketing communications are more
profitable than businesses that do not utilize the Internet to market their
products and services.

Goldstein (2004) observed that the usage of the Internet has gone beyond selling
products and services to include providing the detailed information needed to satisfy
customers at any place and any time of day, anywhere in the world. Additionally, Goldstein
(2004) stated that organizations go beyond economic motives to advance democratic
principles by supplying customers with detailed information that concerns them. According
to Hill and Jones (2004), such companies become responsive to customers to gain a
sustainable competitive advantage.

Hypothesis #3: Customers of Saudi businesses with Internet-based marketing
communications are more satisfied than customers of Saudi businesses
that do not utilize the Internet to meet their needs.

Kotler (2003) argues that people establish faith, attitudes, and impressions through
action and reflection, which in turn influence their purchasing behavior. Some scholars
have applied this concept to retailing: Hansen and Deutscher (1977) and Kunkel and Berry
(1968) explored the influence of business image on consumer behavior and store selection
and identified many facets that enhance image. For the Internet marketer, the website is
the store, and it has a large effect on consumer behavior.

Hypothesis #4: Saudi businesses that use the Internet for marketing communication have a
more favorable perceived image than Saudi businesses that do not use it.

III. METHODOLOGY

The researcher used a questionnaire to collect information on the usage of Internet
technology by different Saudi businesses to communicate and market their products
and services. The questionnaire consists of three parts. Part one contains questions about
Internet usage, the availability of an interactive website, and the features of any available
website, allowing users and non-users to be compared on performance measures. Part two consists
of questions about marketing managers’ perceived increase in sales, profits, perceived
satisfaction of customers, and perceived firm’s image. Part three includes questions about the
type of business, sales volume, profits, mode of business, size of business, and economic
sectors. Questionnaires were sent to 450 businesses listed by the Chamber of Commerce and
Industry in the Eastern Province in Saudi Arabia. A total of 149 surveys were returned, for a
response rate of about 33 percent. Only six surveys were not successfully completed.
Therefore, 143 questionnaires with complete data were used in the analysis of this study.
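The sample arithmetic above can be verified directly; a minimal sketch using only the counts reported in this section:

```python
# Counts reported above: 450 questionnaires sent, 149 returned,
# 6 of the returned surveys incomplete.
sent, returned, incomplete = 450, 149, 6

response_rate = returned / sent       # fraction of questionnaires returned
usable = returned - incomplete        # questionnaires used in the analysis

print(round(response_rate * 100, 1))  # prints 33.1 ("about 33 percent")
print(usable)                         # prints 143
```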

IV. MEASURES

This study investigates the difference in performance between Saudi businesses that
use the Internet to communicate their products and services to their customers and
businesses that do not. The usage of the Internet is the independent variable, while
performance measures are dependent variables. Performance measures include the
following: increase in sales, increase in profits, firm’s image, and customers' satisfaction
about the business as being responsive to their needs. Saudi businesses that use the Internet
for marketing were asked to respond to the statements that evaluate their customers' feelings
about their offerings on the Internet. Saudi businesses that do not have access to the
Internet were asked to respond to statements measuring the satisfaction of their customers
in the absence of the Internet as a marketing channel. Respondents reported their
agreement or disagreement with the image statements on a five-point Likert scale.

V. STATISTICAL TESTS AND RESULTS

The researcher used SPSS to examine the four hypotheses in this study. Descriptive
statistics in Table 1 show the sectors, the number of businesses, and their percentages in
the sample. The internal consistency of the scale was high (Cronbach's alpha = 0.87).
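Cronbach's alpha, reported above at 0.87, can be computed from item and total-score variances: alpha = k/(k-1) * (1 - sum of item variances / variance of totals). A minimal sketch on invented five-point Likert responses (the data below are illustrative, not the study's):

```python
def cronbach_alpha(items):
    """items: one list of scores per questionnaire item (columns of the data)."""
    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents

    def var(xs):                        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))

# Invented 5-point Likert responses: three items, five respondents.
items = [
    [5, 4, 4, 5, 3],
    [4, 4, 5, 5, 3],
    [5, 3, 4, 5, 2],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # prints 0.9 for these made-up data
```

With the study's actual item responses, the same function would reproduce the reported 0.87.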

Table 1 Classifications of Participating Saudi Businesses by Sector

Sector                       Number of businesses   Percentage
Manufacturing                        26                18%
Consultancy                          20                14%
Training and education               24                17%
Health                               23                16%
Services                             27                19%
Retailing and wholesaling            23                16%

Table 2 shows the different applications of the Internet by participating Saudi
businesses. A large number of businesses used the Internet for e-mailing (81%) within the
organization while a low percentage of businesses indicated that they used it to communicate
with customers (2%). They also used it to communicate with suppliers (3%) and provide
customers with the necessary information (4%). Results show that the majority of businesses
do not focus on the needs of customers in the era of e-marketing. Fewer than 2 percent of
surveyed businesses had interactive websites. Only 62 of the 149 responding businesses,
spread across the various industries, indicated that they had access to the Internet, and
only 2 firms used the Internet to communicate professionally with management,
employees, suppliers, and customers.

Table 2 Type of Usage of the Internet by Participating Saudi Businesses (N = 62)

   Uses of the Internet                                           Percentage
1  E-mail with employees and other stakeholders                       81
2  Advertising medium                                                  2
3  Communication with suppliers                                        3
4  Communicating with customers to solicit them                        2
5  Providing customers with the necessary information                  4
6  Full interactive professional website (major stakeholders)          1

Table 3 shows the performance comparison between the two groups of Saudi businesses
(users and non-users of the Internet in marketing communication).

Table 3 Performance Comparison of Saudi Businesses in the Presence and Absence of
Internet-Based Marketing Communication
Performance Measures df SS MS F Value
Sales Increase
Between Groups 1 3.55 .8911 1.78*
Within Groups 142 27.67 .3802
Total 143 31.22
Profits Increase
Between Groups 1 2.24 .6344 2.23*
Within Groups 142 24.13 .3317
Total 143 26.37
Customer's Satisfaction
Between Groups 1 2.45 .7127 2.03*
Within Groups 142 26.18 .3911
Total 143 28.63
Firm's Image
Between Groups 1 3.12 1.0236 1.97*
Within Groups 142 27.04 .5634
Total 143 30.16
*p < .05
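Each F value in Table 3 is the ratio of the between-group to the within-group mean square, where MS = SS/df. A minimal sketch of that computation, using the table's degrees of freedom (1 between, 142 within) but illustrative sums of squares rather than the study's data:

```python
def f_statistic(ss_between, df_between, ss_within, df_within):
    """One-way ANOVA F value: ratio of between- to within-group mean squares."""
    ms_between = ss_between / df_between   # MS = SS / df
    ms_within = ss_within / df_within
    return ms_between / ms_within

# Illustrative sums of squares for two groups and 144 respondents
# (df = 1 between, 142 within, as in Table 3); not the study's data.
print(round(f_statistic(1.2, 1, 42.6, 142), 2))  # prints 4.0
```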

Results in Table 3 indicate that there is a significant difference (F value = 1.78,
p < .05) in sales increase between businesses that use the Internet (mean = 4.41, sd = .59) for
marketing communication and businesses that do not have access to it (mean = 2.67, sd =
1.18) or do not use it to market themselves. Therefore, hypothesis 1 is supported. Saudi
businesses that use the Internet for marketing communications reported a higher level of sales
increase than businesses that did not use it.

Saudi businesses that used the Internet to communicate their products and services
reported a higher level of profits (mean = 4.98, sd = .68) than businesses that did not have
access to it (mean = 2.47, sd=1.09). F value shows a significant relationship between the
usage of the Internet for marketing communication and profits increase (F value = 2.23,
P<.05). Hence, results support hypothesis 2.

Results in Table 3 show that there is a significant difference in the level of customer
satisfaction (F value = 2.03, p < .05). Saudi businesses with a responsive Internet marketing
communication capability reported a higher level of customer satisfaction (mean = 4.88,
sd = .63) than businesses that do not use such a marketing tool to meet the needs of their
customers (mean = 2.77, sd = 1.35). Therefore, hypothesis 3 is supported by this finding.

Results in Table 3 show that there is a significant difference (F value = 1.97, P< .05)
in firm's image between businesses that use the Internet for marketing communication (mean
= 4.65, sd = .59) and businesses that do not utilize it (mean = 2.31, sd = 1.21). Saudi
businesses that use the Internet to communicate with major stakeholders reported more
favorable images than businesses that do not use it to market themselves. Hence, hypothesis 4
is supported in this
study.

VI. CONCLUSION

The Internet provides a powerful platform for businesses with home pages to
communicate their products and services. It provides a labor-efficient and cost-effective way
of distributing information to millions of potential clients in the global markets. Customers
can establish online communication with Saudi businesses via their home pages.

The findings of this study support strong and significant relationships between the
usage of the Internet in marketing communication and each of the performance measures of
Saudi businesses. The findings support the need for Internet communication activities to be
integrated in the overall marketing communications mix. Furthermore, many scholars have
predicted the decline of the traditional marketing function (Holbrook and Hulbert, 2002). The
decline in traditional marketing practices calls for innovative forms of marketing
communication. One may argue that the Internet is a weak medium for displaying advertising
of offerings; however, its strengths as a form of digital word-of-mouth are now very clear
and coming to the forefront.

The researcher recommends that Saudi businesses use the Internet effectively and
efficiently to market their products and/or services domestically and globally in order to
grow. Early adopters of the Internet have greater advantages over late adopters. Some
businesses use the Internet to promote themselves in geographic areas that are difficult to
reach through traditional physical distribution.

The findings show that the image of the firm can influence its profitability and sales.
The usage of the Internet enhances the firm's image, which in turn improves performance
measures such as sales and profits. This implies that management of Saudi businesses
should consider the usage of the Internet as a contemporary marketing and distribution
tool. The findings show that only 1 percent of surveyed businesses used the Internet
interactively, while 81 percent used it for e-mail communication between management and
employees. This indicates that management should use it more widely for many
purposes. Since the Internet is becoming a global advertising medium, businesses using it are
always potentially addressing international prospects. This might lead to increased
international competition (Wymbs, 2000). Therefore, Saudi businesses must be prepared to
compete with foreign businesses in Saudi Arabia and global markets. In summary, the
findings of this study may encourage Saudi businesses to utilize the Internet to communicate
and market their products.

REFERENCES

Drennan, Judy & McColl-Kennedy, Janet R. (2003), "The Relationship Between Internet Use
And Perceived Performance In Retail And Professional Service Firms", Journal of
Services Marketing, 17(3): 295-311.
Goldstein, Gary B. (Summer 2004), "A Strategic Response to Media Metamorphoses", Public
Relations Quarterly, 49(2): 19-23.
Hansen, R.A., Deutscher, T. (1977), "An Empirical Investigation Of Attribute Importance In
Retail Store Selection", Journal of Retailing, 53(4): 59-72.
Hill, Charles W. & Jones, Gareth R. (2004), "Strategic Management: An Integrated
Approach", (6th ed.), Boston: Houghton Mifflin Company.

Holbrook, M.B., Hulbert, J.M. (2002), "Elegy On The Death Of Marketing, Never Send To
Know Why We Have Come To Bury Marketing But Ask What You Can Do For Your
Country Churchyard", European Journal of Marketing, 36(5/6): 706-32.
Kotler, P. (2003), Marketing Management, 11th (International) ed., Pearson Education Inc.,
Upper Saddle River, NJ.
Kunkel, J.H., Berry, L.L. (1968), "A Behavioral Conception of Retail Image", Journal of
Marketing, 32: 21-7.
Murphy, David Patrick, (March 1999), "Advances in MWD And Formation Evaluation For
1999", World Oil.
Sawhney, Mohanbir & Zabin, Jeff (Fall 2002), "Managing And Measuring Relational Equity In
The Network Economy", Academy of Marketing Science Journal, 30(4): 313-343.
Shen, Fuyuan (Fall 2002), "Banner Advertisement Pricing, Measurement, And Pretesting
Practices: Perspectives From Interactive Agencies", Journal of Advertising, 31(3): 59-
68.
Wymbs, C. (2000), "How E-Commerce Is Transforming And Internationalizing Service
Industries", Journal of Services Marketing,14(6): 463-78.
ACKNOWLEDGEMENT
The researcher appreciates the full support he received from King Fahd University of
Petroleum and Minerals in Saudi Arabia to accomplish this study.

IDENTITY THEFT: WHAT SHOULD I DO IF IT HAPPENS?

Mark McMurtrey, University of Central Arkansas


markmc@uca.edu

Mike Moore, University of Central Arkansas


mikem@uca.edu

Lea Anne Smith, University of Central Arkansas


lsmith@uca.edu

ABSTRACT

The Internet Crime Complaint Center has been collecting and compiling information
on cyber crime for the past four years. In the most recent report (2004), complaints
increased 66.6% over 2003. The total dollar loss was $68.14 million, with a median
loss of $219.56 per case. There were 207,449 cases submitted to the Fraud Center,
with untold numbers going unreported. Three percent of the losses exceeded $5,000.00.
Approximately 6,000 of these cases involved identity theft. However, Rob McKenna, the
attorney general for Washington state, estimates that identity theft costs U.S.
consumers $53 billion a year.

I. INTRODUCTION

The word phishing originated sometime around 1996 as a result of hackers stealing
accounts and passwords from America Online. The analogy with the sport of fishing is
easily made. The hackers were using e-mail lures to “hook” or “fish” for passwords and
financial data from the “sea” of Internet users.

It is difficult to tell how many identity thefts are a direct result of phishing;
however, the Federal Trade Commission is responsible for collecting information on identity theft.
The latest information, released in July, 2004, indicates that 10 million Americans were
victims in 2003, and over 27 million since 1998 (FTC, 2004). Recent data indicates that
57 million were targeted in 2004 (Wehner 2005). Most often the stolen information is
used to commit credit card fraud. However, 20% of all victims reported that their
information was used to commit more than one type of fraud. It is not unusual for
criminals to use someone else’s stolen identity to give to the police when apprehended for
a crime. About one half of identity theft victims have no idea how their personal
information was stolen. The number of identity theft cases is increasing rapidly. The top
states for identity theft reports are Nevada, Arizona, and California (FTC 2004).
California has more than 100 cases reported every day. Sadly, local police are generally
ill trained to deal with the problem. According to the FTC, only about 30% of police
departments even take written complaints. Finally, the Social Security Administration
reports that 80% of the instances of misuse of social security information are related to
identity theft (FTC, 2004). According to the Washington state attorney general, identity
theft and the resulting financial fraud are the fastest-growing crimes in the United States.
These crimes cost consumers over $53 billion a year and are difficult to resolve or even
investigate (Government Technology, 2005).

II. THREAT FROM CELL PHONE USERS

Until recently, identity theft attempts, spam, and viruses have only been a problem for
Internet users. However, spammers and hackers have recently realized that there are new
opportunities with cell phones, given the use of digital technology. The recent technology
enhancements to cell phones have made them increasingly vulnerable. Many cell phone users
now have the option to access the Internet from their phone. This presents a whole new area
of concern and potential vulnerability for cell phone users. The same risks exist when it
comes to the Internet whether you are accessing it through a computer or a cell phone. As
such, cell phone users are just as vulnerable to identity theft attempts, spam, and associated
viruses as people who use a computer.

• From January to March 2005, over 1,000 vulnerabilities were found in cell phones,
a 6% increase from the previous year.
• The use of adware, which is short for advertising software, makes spreading spam and
viruses incredibly easy. Between 2004 and 2005, McAfee reported a 20% increase in
threats using malware. Malware is malicious software such as viruses or Trojan
horses.
• In 2003, cell phone users were sent fewer than 10 million unsolicited e-mails. That
number is expected to rise to 500 million this year.
• Cell phone users accessing the Internet should protect themselves with anti-virus
protection, install the latest patches, and employ spam filters just like when using a
computer (Kay, 2004).
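The spam-filter advice in the last bullet can be illustrated with a toy keyword filter; the phrases and threshold below are invented for illustration and are far simpler than a real filter:

```python
SUSPECT_PHRASES = ("verify your account", "act now", "confirm your billing")

def looks_like_spam(message, threshold=1):
    """Toy filter: flag a message containing enough suspicious phrases."""
    body = message.lower()
    hits = sum(phrase in body for phrase in SUSPECT_PHRASES)
    return hits >= threshold

print(looks_like_spam("ACT NOW to verify your account!"))  # True
print(looks_like_spam("Lunch at noon tomorrow?"))          # False
```

Real filters combine many such signals with statistical scoring, but the principle is the same: score the message, then act on a threshold.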

III. REDUCE YOUR CHANCES OF IDENTITY LOSS

While it may be impossible to avoid the receipt of fraudulent e-mail, there are some
handy procedures to follow:
• If you receive e-mail that tells you, with little or no notice, that an account of yours
will be closed unless you confirm your billing information, do not click the link in the
e-mail or reply to it. The best policy is to contact the company named in the e-mail via
telephone or a website address that you know to be correct.
• Avoid e-mailing personal and financial information. Before submitting financial
information through a website, make sure the “lock” icon is on the browser’s status
bar. It tells you that your information is secure during transmission.
• Review credit card and bank account statements as soon as you receive them to look
for unauthorized charges. If your statement is late by more than a couple of days, call
your credit card company or bank to confirm your billing address and account
balances.
• Report suspicious activity to the FTC. Send the actual spam to uce@ftc.gov. If you
believe that you have been scammed, file your complaint at www.ftc.gov, and then go
to the FTC's Identity Theft website at www.ftc.gov/idtheft to learn how to minimize
your risk of damage from identity theft (Cox, 2004).
• Look for misspellings and bad grammar in the e-mail. While an occasional mistake
can slip by any organization, more than one is a tip-off to be concerned.
• If the e-mail refers you to a website, check the URL closely. It is easy to disguise a
link to a website. Examine any @ symbol in a URL: most browsers will ignore all
characters preceding the @ symbol in a site's address. So the web address
http://www.wellknownrespectedcompany.com@gonephishing.com may look like a
page for the well-known, respected company, but it actually takes the unsuspecting user
to gonephishing.com. The longer the URL, the easier it is to hide the actual
destination address. Another way to conceal the true address is to substitute similar-
looking characters, so that paypal.com could be (and has been) spoofed as pay-
pal.com. Also, a zero can be substituted for the letter o in the URL (Kay, 2004).
• Use anti-virus software. Sometimes phishing e-mails contain software that can harm
your computer or track your activities on the Internet. A firewall helps make you
invisible on the Internet and blocks all communications from unauthorized sources
(FTC, 2004).
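The @-symbol trick described above can be exposed programmatically: standard URL parsers treat everything before the @ in the authority section as a username, so the real destination is whatever follows it. A short sketch using Python's urllib.parse:

```python
from urllib.parse import urlsplit

def real_host(url):
    """Return the host a browser would actually connect to."""
    return urlsplit(url).hostname

# The "well known company" text before the @ is parsed as a bogus
# username, not the site; the request actually goes to gonephishing.com.
print(real_host("http://www.wellknownrespectedcompany.com@gonephishing.com/login"))
# prints gonephishing.com
```

Comparing the parsed hostname against the site you expect is a quick way to unmask such links.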

Gartner, a technology consulting firm, warns that phishing could slow the expansion
of the Internet. “The rise in phishing attacks is threatening consumer confidence as never
before,” Gartner reported. Based on a survey of 5,000 Internet users by Gartner, it is
estimated that 11 million people have clicked on a link from a phishing e-mail. It is
further estimated that 1.8 million people have given their credit card number or billing
address to a fake website. The company estimates that another million people have been
taken in without knowing it.

Most phishing thefts go unsolved, with the banks usually absorbing the cost. Banks
consider it cheaper to absorb the losses rather than spend money to try and stop phishing,
if it can be stopped at all. Gartner estimates that banks and credit card companies in the
United States lost about $2.4 billion to phishing. A single group of 53 phishing thieves
arrested in Brazil stole approximately $30 million (Wehner, 2005).

IV. WHAT TO DO

• Your first order of business is damage control!
• Get in touch in writing with all three credit reporting agencies (CRAs), and make sure
you both call and write them.
• Ask each of the big three credit bureaus (CRAs) for a copy of your credit report and
go over it very closely.
• Contact all credit grantors: department stores, utility companies, credit-card issuers,
etc. with whom you believe your name may have been used fraudulently. Again,
contact them by phone and by letter.
• Carefully monitor your mail and credit-card bills for evidence of new fraudulent
activity.
• Start a log of all your contacts with authorities and financial institutions, including
those you've already contacted.
• Report the incident to the police or sheriff in the area where the crime was committed.
• DON’T BE INTIMIDATED! You are not alone. Not only are there plenty of fellow
victims (about 500,000 per year!) but you can fight back without an expensive
lawyer!

V. CONCLUSION

Sadly, there is no central database that captures all reports of identity theft (FTC,
2004). Clearly, one is needed. Identity theft is a difficult crime to investigate and even
more difficult to prosecute. Victims often do not realize that they have been defrauded
until weeks later, when credit card charges begin showing up on their statements or
withdrawals appear on their bank statements. Then there is the difficulty of finding out
who sent the e-mail: given the way the Internet works, senders could be anywhere in the
world and can easily disguise their identity.
Laws have done little to stop identity theft. California has passed a law, Civil Code
1798.85-1798.86, which provides some guidelines for displaying and transmitting
information. Illinois and Washington state have recently passed laws on identity theft,
but it remains to be seen whether they will be effective. In the short run, education seems to be the
best action to expose identity theft. It is in the best interest of banks and e-tailers to get a
message of vigilance out about identity theft and educate the public.

REFERENCES

Arkansas Democrat Gazette. (July 19, 2004). “Phishing Expeditions Hazardous for PC
Users”, 1D.
Wehner, Ross, Arkansas Democrat Gazette. (January 3, 2005). “Groups Unite to Battle
Phishing”, 1D.
Computers for Seniors, Inc. (April 5, 2004). “Phishing.” Retrieved (May 25, 2004) from
http://www.cfscapecod.com/.
Cox, Mike, Michigan Department of Attorney General. (March 2004). “Fraudulent
Emails: Thieves Intend to Steal Your Personal Information.” Retrieved (April 10,
2004) from http://www.michigan.gov/ag/.
FTC. (July 21, 2003). “Identity Thief Goes ‘Phishing’ for Consumers’ Credit
Information.” Retrieved (July 26, 2004) from http://www.ftc.gov/opa/2003.
Government Technology. (2005). “New Laws Aim to Protect Personal Data in Washington
State.” Retrieved (January 10, 2006) from http://www.Govtech/magazine/story/php?id=95594.
IC3. (May 10, 2005). “New Threat to Cell Phone Users.” Retrieved (November 1, 2005) from
http://www.ic3.gov/media/2005/050510-1.htm.
Kay, Russell. (January 19, 2004). “Phishing.” Computerworld, Vol. 38, No. 3, 44-45.
Frank, Mari J. (2005). “Identity Theft: Prevention and Survival.” Retrieved (October 5,
2005) from http://www.identitytheft.org/.
Swartz, John. (April 27, 2005). “Cell Phones Now Rich Targets for Viruses, Spam,
Scams.” Retrieved from http://www.usatoday.com/tech/news/2005-04-27-cell-phones-usat_x.htm.

GOVERNMENT REGULATION OF THE OATH OF HIPPOCRATES:
HOW FAR CAN THE GOVERNMENT GO?

Roy Whitehead, University of Central Arkansas


Royw@uca.edu

Kenneth Griffin, University of Central Arkansas


Keng@uca.edu

Phillip Balsmeier, Nicholls State University


Phillip.Balsmeier@nicholls.edu

ABSTRACT

The United States Department of Health and Human Services has issued regulations
mandating that all physicians provide oral interpretation and written translation services to
limited English proficient persons free of charge and without reimbursement. The regulation
forbids physicians from relying on friends, family, and coworkers of the limited English
proficient person to provide informal translation services. This article discusses the
regulation and asks whether it exceeds the agency’s authority under Title VI of the Civil
Rights Act, forces physicians to speak in violation of the First Amendment, places an
undue burden on the patient-physician relationship long governed by the Oath of
Hippocrates, and was improperly imposed by the government without the benefit of public
notice and hearing.

I. INTRODUCTION

Title VI of the Civil Rights Act prohibits discrimination on the basis of race, color, or
national origin “under any program or activity receiving federal financial assistance.” Most
physicians receive federal funds from the Medicare and Medicaid programs. President Clinton
in August 2000 issued Executive Order 13166 that directed all federal agencies to improve
access to all federal programs for Limited English Proficient (LEP) persons. On August 8,
2003, the Bush Department of Health and Human Services (HHS) issued a language rule that
provided that all physicians who receive any federal funds are required to provide oral
interpretation and written translation services to LEP persons free of charge and without
reimbursement. The rule specifically forbids physicians to rely on the LEP person’s family,
friends or other informal interpreters. The rule obligates physicians to provide a specific level
of translation services regardless of whether such services are actually used. The rule is
burdensome because it requires hiring someone to be on standby and immediately available,
even for unexpected patients. Physicians not in compliance are subject to the loss of federal
funds, to investigation for violation of Title VI, and likely prosecution by the Department of
Justice. By the way, the rule was issued without the traditional public notice and hearing
usually provided for government regulations. Clearly, the rule changes the way physicians
practice medicine and increases the likelihood of exposure to civil rights investigations and
lawsuits. For over 2000 years physicians have been guided by the Oath of Hippocrates. The
oath says: “I will follow that system of regimen which, according to my ability and judgment,
I consider for the benefit of my patients, and abstain from whatever is deleterious and
mischievous.” We now turn to a discussion of the effect of the language rule on the Oath and
the physician’s practice of medicine.

II. DISCUSSION

The first question is whether Title VI grants the Agency authority to establish a
language rule for physicians. The law is well settled that a government agency may not
exceed the authority granted it by a statute, Campbell v. Galeno Chemical Company (1930).
Title VI was intended to prevent discrimination against individuals on the basis of race, color
or national origin. Clearly, Title VI prohibits national origin discrimination. Physicians who
intentionally discriminated against patients from Poland or Mexico because they were from
Poland or Mexico would certainly fall within the prohibitions of the statute. But can language
alone be considered the proxy of national origin discrimination? Title VI is silent on the
question of language. There are no reported cases that establish that language alone can be
used to create a suspect class under Title VI. The U.S. Supreme Court has decided that
Chinese students who did not speak English were entitled to equal protection opportunities,
Lau v. Nichols (1974). But the court said that the appropriate remedy might be to either give
instruction in Chinese or teach the students English. It neither found necessary nor ordered
any sort of translation services. In Guadalupe v. Tempe School District (1978), the 9th Circuit
said that the district was not obligated to provide bilingual instruction to Mexican-American
and Yaqui Indian children. Remedial instruction in English was deemed sufficient. Finally, in
Alexander v. Sandoval (2001), the Supreme Court ruled that Title VI only prohibits
intentional discrimination. Here, the reach of the attempted Agency language regulation is
not limited to intentional discrimination. The regulation allows an LEP person to file a
discrimination complaint based solely on a claim that a physician does not provide free oral
or written translation services without regard to whether there are other reasonable
alternatives available. There is no requirement to claim that the physician was intentionally
singling out the person because of his language problems. Finally, if Congress intended Title
VI to include language as a proxy for national origin it had the opportunity to include such
language in the statute or amend the statute to so reflect. It has done neither. There is a
compelling argument that HHS exceeded the Congressional mandate of Title VI when it
issued the language rule for physicians.

The second question is whether language is really a proxy for national origin
discrimination under Title VI. Title VI simply prohibits intentional discrimination based on
color, race or national origin. We know of no court sanctioned finding that language is a
proxy for national origin. In Toure v. United States (1994), a native of Togo living in the
United States claimed that a U.S. Government Agency had to communicate with him in
French. The Second Circuit disagreed saying that providing notice in a person’s preferred
language was an unreasonable burden. And in a case directly on point, Soberal-Perez v.
Heckler (1983), ironically a case against HHS, the court rejected claims that the failure of the
HHS to provide notices in Spanish constituted national origin discrimination. The court said,
“Language by itself does not identify members of a suspect class.” The government’s
apparent position that language is a proxy for national origin discrimination finds no
compelling support in the case law. Maybe HHS’s lawyers ought to read their own cases.

Next, is the language rule in violation of a physician’s First Amendment rights? The
HHS language rule seeks to compel physicians to speak in a way mandated by the
government. It surely regulates the manner and the content of the way they speak. A fellow
named James Madison once said that the only thing worse than restricting a person’s freedom
of speech was compelling him to say things he didn’t believe. Madison’s statement is on
point here. The government cannot compel one to pay for a message contrary to his beliefs,
Pacific Gas and Electric Company v. Public Utilities Commission of California (1986). In
that case, the State tried to force the utility to place ads opposing a proposed nuclear power
plant in customers’ monthly bills. Clearly, the content of the speech between the
physician and his patient is medical advice and information just like the ads. This is not the
sort of information that the government may generally regulate. Here, the government
language rule regulates the mode of communication between patient and physician. Again,
the government may not regulate the choice of language used to convey a particular message,
Cohen v. California (1971). Also see Meyer v. Nebraska (1923) holding that a state may not
coerce the speaking of a particular language. One troubling effect of the regulation is that it
prohibits physicians and patients from speaking through people they both might prefer the
most, like family members or close friends. There is a considerable likelihood that the
language regulation is contrary to the First Amendment because any government effort to
suppress speech is presumed unconstitutional (Tribe, 1988).

Finally, what about the fact that HHS issued the language regulation without first
giving the public an opportunity to comment on the rule? Generally, the Administrative
Procedure Act (APA) mandates a notice and comment period when the government issues
binding rules. HHS says that the language rule does “not constitute a regulation subject to the
rulemaking requirements of the Administrative Procedures Act.” It says that the rule only acts
to clarify the existing legal requirements to accommodate LEP persons under Title VI. But,
alas, Title VI and its implementing regulations contain no such discernible requirement and
do not discuss language at all. Despite the failure of statutory language or case law to show
that language is a proxy for national origin, HHS claims it has a long-standing tradition of
protecting LEP persons against national origin discrimination. The problem created by
HHS's position, however, is that it shows HHS has a settled policy of treating language as a
proxy for national origin. If that is so, a settled rule, as in this case, issued by a government
agency requires a notice and comment procedure under the APA. There was none. In
Appalachian Power Company v. EPA (2000), the EPA issued what it called “guidance”
concerning monitoring of emissions from power plants. The guidance was binding on the
power company and was issued without notice and a comment period. The Court found that
the mislabeled guidance was binding and that the APA required notice and a public comment
period. Here, the language rule is clearly settled Agency policy and cannot be considered
final without giving the public and affected physicians an opportunity to comment. The
language rule requires physicians to comply under penalty of loss of federal funds and/or
prosecution. In Appalachian Power (2000), the court said that if an Agency acts as if a
document issued at headquarters is controlling in the field, and if it treats the document in
the same manner as a legislative act, it is a final rule. That is exactly what we have here. The
HHS language rule is final. It is not enforceable because the notice and comment
requirements of the APA were not followed. So where do we go from here?

We are blessed that the very questions raised here are awaiting a decision of a panel
of the 9th Circuit Court of Appeals. The case is Colwell v. Department of Health and Human
Services (2005). The case originated in the Southern District of California. Much to the
surprise and chagrin of physicians everywhere, the District Court judge granted summary
judgment to HHS, saying that the plaintiffs had failed to state a claim. We might say that the
lower court decision was also amazing to a lot of interested observers. We now turn to what
we think ought to be the conclusion of the matter.

III. CONCLUSION

Appellate courts like to decide cases on the narrowest possible grounds. The facts of
this case raise the issues of whether Title VI gives the Agency authority to issue language
regulations, whether language can ever be a proxy for national origin discrimination, whether
the regulations implicate the physicians' First Amendment right to communicate with their
patients, and whether the regulations were issued in accordance with the Administrative
Procedure Act. We contend, for the reasons discussed above, that the court ought to decide
that Title VI was never intended to give HHS the authority to issue language regulations, that
language is not a proxy for national origin discrimination, that the regulation runs afoul of the
First Amendment by regulating protected speech, and that the regulation is invalid for failure
to have a notice and public comment period required for a final rule. Sadly, there is a way out
for the appellate court without reaching the first three important issues. The court could
simply find that the regulations are not enforceable because of the agency's failure to follow
the APA notice and comment provisions. In such a decision, the court would simply enjoin
enforcement of the language regulation until HHS takes the public comments required by the
APA and reconsiders the LEP language regulation. If, after reconsideration, the Agency again
publishes the language regulations, the parties will likely be back in court litigating the first
three issues. The best solution for all concerned is to leave the regulation of the practice of
medicine to the Oath of Hippocrates. These sorts of government regulations are unduly
burdensome on physicians who are already swamped dealing with insurance companies and
Medicare and Medicaid rules. What physician who has LEP patients would not arrange to
communicate with them? There are numerous economic and medical incentives to do so. It
is not unusual to see, for example, notices in business locations that Spanish is spoken here or
that there are menus available in Spanish. HHS ought to let the market handle the language
question.

REFERENCES

Alexander v. Sandoval, 532 U.S. 275 (2001)
Appalachian Power Co. v. EPA, 208 F.3d 1015 (D.C. Cir. 2000)
Campbell v. Galeno Chem. Co., 281 U.S. 599 (1930)
Cohen v. California, 403 U.S. 15 (1971)
Colwell v. Department of Health & Human Services, No. 05-55450 (9th Cir. 2005)
Guadalupe v. Tempe Elementary School District, 587 F.2d 1022 (9th Cir. 1978)
Lau v. Nichols, 414 U.S. 563 (1974)
Meyer v. Nebraska, 262 U.S. 390 (1923)
Pac. Gas & Elec. Co. v. Public Utilities Commission, 475 U.S. 1 (1986)
Soberal-Perez v. Heckler, 717 F.2d 36 (2d Cir. 1983)
Toure v. United States, 24 F.3d 444 (2d Cir. 1994)

WHAT ARE THE BENEFITS, CHALLENGES, AND MOTIVATIONAL
ISSUES OF ACADEMIC TEAMS?

Blaise J. Bergiel, Nicholls State University
Blaise.bergiel@nicholls.edu

Erich B. Bergiel, Mississippi State University
ebb@msstate.edu

ABSTRACT

Academic teams are a vital part of the modern college classroom. Students working
in teams is one of the more common approaches to cooperative learning. Professors and
students should understand the different aspects of academic teams before they are used in
the classroom. They must be aware of the benefits of academic teams, the challenges of
academic teams, and techniques to motivate team members. Working on a successful team
not only helps students academically, it also helps to prepare them for the highly team-
oriented workforce.

I. INTRODUCTION

Author C. S. Lewis once wrote, “Two heads are better than one, not because either is
infallible, but because they are unlikely to go wrong in the same direction.” This quote
simply illustrates an important benefit of cooperative learning. Research has “shown that
together, students are able to achieve and learn more than any student is able to individually”
(Pfaff and Huddleston, 2003, page 37). Academic teams, a form of cooperative learning, are
becoming an important learning tool in many college classrooms. As with any type of team,
academic teams are faced with different challenges. It is important for professors, and team
members alike, to understand these challenges in order to promote team success. However,
the many benefits of academic teams make them a crucial part of a quality higher education.

II. ACADEMIC TEAMS DEFINED

One of the most commonly cited definitions of a team is by Katzenbach and Smith
(1993): "A team is a small number of people with complementary skills who are committed
to a common purpose, performance goals, and approach for which they are mutually
accountable.” Proponents of team-based learning suggest that if work teams result in higher
productivity in the work force, then the same relationship should also hold in the educational
process (Bacon, 2005). The basis of the cooperative learning theory is that students working
in teams are able to apply and incorporate information in more complex ways than students
working individually.

III. BENEFITS OF ACADEMIC TEAMS

Benefits of academic teams fall into two categories: benefits in the classroom and
benefits to the individual. Benefits in the classroom include the following.

Accomplish Projects an Individual Cannot Do: Many desirable class projects are too large
or too complex for one individual to complete alone.

Brainstorm More Solution Options: Different individuals looking at the same problem will
find different solutions. A team can review ideas and put together a final solution, which
incorporates the best individual ideas.

Detect Flaws in Solutions: A team looking at different proposed solutions might also find
pitfalls that an individual might miss. The final solution is that much stronger.

Build a Classroom Culture: Members of effective teams can form personal bonds, which
are good for individual and classroom morale. Also, students on teams may form bonds that
extend beyond the classroom.

Increase Student Participation: In academic teams, students working together are actively
engaged in learning instead of passively listening to a teacher lecture. In a teacher-centered
class, the teacher speaks about 80% of the time. Thus it is estimated that in a typical
classroom of 30 students (fewer than in many classrooms), each student speaks for less than
30 seconds in each one-hour class period (Lie, 2004).

Enhance Social and Communication Skills: Another positive component of academic
teams includes training students in the social skills needed to work collaboratively with
others. In individual projects, classroom competition is valued over cooperation. In
cooperative learning teams, students can exercise their collaborative skills and practice
working with others to achieve mutual benefits for everyone (Lie, 2004). A team relies on
communication among members. Through team projects students can learn to actively and
effectively listen to their team members to understand their ideas and concerns. Team
members must effectively articulate their ideas or their concerns to others and provide
genuinely constructive feedback to team members.

Promote Diversity: Cooperative groups may include students of varied racial or ethnic
backgrounds. Because students are actively involved in studying classroom issues and
communicating with each other on a regular basis, they are provided with opportunities to
appreciate differences. As students are exposed to methods and ideas that come from a
diverse team, they learn different ways of approaching a problem.

Individualize Instruction: “With cooperative learning groups, there is the potential for
students to receive individual assistance from teachers and from their peers. Help from peers
increases learning both for the students being helped as well as for those giving the help”
(Lie, 2004). In classrooms that emphasize the lecture method, teachers cannot always stop to
help students that are having trouble keeping up with the class.

Decrease Anxiety: Many students do not speak out in a traditional classroom setting
because they fear appearing foolish. “In contrast, there is less anxiety connected with
speaking in the smaller group” (Lie, 2004). When students work in teams, the product comes
from the team rather than from the individual. Therefore, the focus is removed from any one
student and the entire team becomes responsible. Cooperative groups provide a safe
environment for students to communicate ideas without the fear of criticism.

Enhance Self-Management: "One purpose in education is to enable students to become
life-long learners, people who can think and learn without teachers telling them what to do
every minute. By shifting from dependence on teachers, cooperative group activities help
students become independent learners and form a community of learners among themselves”
(Lie, 2004). Participating in academic teams calls for self-management by students, and
students may do more academic work. This can be a combination of not wanting to let the
team down or not wanting to look unprepared. In order to perform within their team, students
need to be prepared with assignments completed.

IV. CHALLENGES OF ACADEMIC TEAMS

“Although the positive outcomes of teamwork are numerous, there are problems
related to work in group contexts. Some problems in team-oriented work result from the
actions of the students themselves” (Pfaff and Huddleston, 2003, page 38). It is important for
college professors and students to be aware of these challenges, in order to prepare for them.
Some of the challenges that academic teams can be faced with are social loafing, scheduling
conflicts, grading method, team member conflicts, and task perception.

Social Loafing: One common problem with academic teams occurs when a team member
does not contribute equally in the group. “[Research results] suggest that as the size of the
team increases, social loafing (i.e., the tendency of certain team members to free ride on the
efforts of others) is more likely” (Deeter-Schmelz, Kennedy, and Ramsey, 2002, page 115).
“[Social loafing] can be addressed by changing team membership or by increasing individual
accountability” (Pfaff and Huddleston, 2003, page 38). Providing an opportunity for students
to evaluate each other’s contributions may also decrease the occurrence of team members not
contributing to the group. To avoid any fear of social consequences linked to grading each
other, the team members’ evaluations should remain confidential.

Scheduling Conflicts: Many college students have jobs and family responsibilities that
make coordinating times for teamwork difficult. One way to alleviate some of the problems
involved with time coordination is for the teacher to provide class time for teams to work
together (Koppenhaver & Shrader, 2003). Encouraging or requiring the students to use email
and/or chat rooms can provide another means for students to easily communicate with each
other outside of class.

Grading Method: Students are very anxious about being in a situation where their
individual grade depends on the performance of other individuals. A professor’s grading
method for individuals and teams can also be problematic. Due to the nature of team grading,
students may feel that they have lost control of their academic destiny. "Students frequently
complain, and express dissatisfaction, when their personal grade is heavily weighted by team
output as opposed to individual accomplishment” (Boughton, 2000). “Relying solely on
instructor-based evaluation can also cause problems; not including a peer evaluation in the
grade may adversely affect student attitudes toward teamwork” (Pfaff and Huddleston, 2003,
page 38). Careful project preparation by the instructor is important so that students know the
grading for a team project consists of more than just a team grade and is a balanced assessment.

Team Member Conflicts: Putting together groups of individuals with different backgrounds
and ideas will inevitably lead to disagreements and conflict. Many times, conflict occurs
when professors have control over assigning team membership. Students need to find ways
to handle this kind of challenge. One solution is to let students take charge of team
management. The team members choose how to resolve the problem. The professor abides
by the team decision as long as documentation of the offense is presented. Another way a
professor can alleviate this problem is to provide team building and team maintenance
instruction within the structure of the assignment. Remind the students that some
disagreement is normal and conflict is a part of the team building process. Without it, teams
may not be able to examine all points of view and synthesize information.

Task Perception: If students perceive the group assignment as "busy work," their attitude
will be poor and the quality of the experience will be minimal, if there is any at all.
One way to avoid this problem is to allow students the flexibility to choose their own topic
for study. The teams will likely choose something that is relevant and meaningful to them.
This helps avoid the perception of mere "busy work." Another solution is for the professor to
emphasize the importance of teamwork in real-world situations. Discuss the importance
corporate recruiters place on the ability to work in teams as a hiring criterion in job
recruitment and interviews. When students realize the personal benefits of teamwork, the
problem of poor attitudes toward group work should decrease dramatically.

V. MOTIVATION TECHNIQUES

Unfortunately, there is no single recipe for motivating students. The good news is
that, by their very nature, academic teams can easily incorporate self-motivating strategies.
Many of the benefits of academic teams are related to intrinsic motivation. In a study
comparing cooperative learning to whole group instruction, “the most consistent results of
[the] study related to student motivation, all aspects of which were more positive during
cooperative learning” (Peterson and Miller, 2004, page 131).

Setting the Stage: The first step is to work on creating a class climate that encourages
cooperation. Communicate clear expectations to students about team projects on the first
day. Explain why teams are used and how students can benefit from them. Provide a
non-threatening, hands-on introduction to teamwork that students can easily accomplish. Instead
of just telling students teamwork can be fun, demonstrate it by letting them develop team
logos and team names. An important step to team building is for the team members to be
acquainted. At the very beginning some time needs to be given to aid in the team
socialization process. Some short brainstorming exercises can help students get used to the
team process and understand the benefits of comparing different points of view (Nelson,
2004).

Assigning Membership: A key to a motivated and successful team is how the members are
assigned. Three options are available: 1) students pick their own partners, 2) partners are
randomly assigned, or 3) partners are strategically matched by the instructor. When students
form their own teams, the potential for cliques to form within a team greatly increases, thus
potentially excluding some team members. Oakley et al. (2004) list several pitfalls of
letting students select their own groups. First, students of similar abilities tend to congregate
together: strong with strong, weak with weak. This limits interaction by preventing weaker
students from learning how stronger students approach problems and robbing the stronger
students of the educational value of peer teaching. A second pitfall is that groups will
likely form around pre-existing friendships. This decreases the exposure to different ideas,
and such groups are more likely to encourage and cover for inappropriate behaviors like
non-participation and free riding.

Randomly assigned membership may be effective, but it is more likely to be
problematic. Randomly assigned teams may be composed of students who all have similar
skills and lack the requisite skills to complete particular tasks or assignments. The groups
may also be composed of individuals who share similar opinions, when it might benefit them
to work with students whose ideas are different from their own.

A commonly recommended strategy is to group members of mixed talents and
temperaments (Oakley et al., 2004). Student assets should be evenly distributed among the
teams. Student assets typically include such things as work experience, previous relevant
course work, skills, and perspectives from other cultures. Balancing the strengths and
weaknesses of members can help ensure that the groups function well and do not have distinct
advantages over one another.

Project Selection: Students will be much more committed to a team project that has value
for them and that they can see as meeting their needs, either long term or short term (Stewart
& Powell, 2004). Students need to feel that the project is significant, valuable, and worthy of
their efforts. To increase motivation and sustain learning, team assignments should be
designed to address these kinds of needs and interests. Students are more engaged in
activities when they can build on prior knowledge and draw clear connections between what
they are learning and the world around them. Once a topic for the team project is agreed
upon, the team needs concrete tasks to accomplish and specific goals to meet in order to be
motivated to work together.

Give Control: In environments where others rigidly prescribe students' tasks and
activities, levels of responsibility and commitment often wane. It is important to relinquish
some control to the students: for example, by giving teams choices between projects, letting
them develop their own topics, and letting them pick their own leader. Even small
opportunities for choice, such as deciding during class time whether to work in the classroom
or in some other location, give students a greater sense of autonomy. Giving students control does not
mean giving up control. There are some aspects of the project that the students will not like,
but they are essential. Fortunately, research suggests that students feel some ownership of a
decision if they agree with it, so getting students to accept the reasons some aspects of a team
project are not negotiable is a worthwhile endeavor (Ashraf 2004).

Feedback: If used correctly, feedback can function as a very powerful tool to motivate
students to participate and learn. Give students constructive feedback on a regular basis.
Teams need some indication of how well they are doing and how to improve. It is more
motivating to have direction than to wonder about potential problems. Although it is
important to evaluate the progress of the team project, critiques need to be presented with
tact. Feedback in the form of grades should include peer evaluations; this provides a way for
students to feel more in control of their group evaluation.

VI. CONCLUSION

Cooperative learning in the form of academic teams provides many advantages to
teachers and students. Cooperative learning increases student participation, enhances social
skills, promotes diversity, helps to individualize instruction, decreases student anxiety, and
enhances self-management skills. Most would agree there is no magic pill to motivate
students. But as instructors, we have the ability to help students become more enfranchised
in their education through employing active learning strategies. Motivational techniques,
including meaningful assignment selection, active student participation, positive social
connections, and timely instructor feedback, can be easily incorporated into cooperative
learning teams. As with any educational strategy implementation, there are some challenges.
Academic teams may encounter problems like social loafing, schedule conflicts, undesirable
grading methods, team conflicts, and task perception issues. These challenges can be
overcome by simply adjusting methods of instructing academic teams. Another major benefit
of cooperative groups is that the same skills that are valued in teams at the college level are
also highly valued in the business world. To remain innovative and competitive, businesses
are looking for employees who can work and learn effectively in teams.

REFERENCES

Ashraf, Mohammad. "A Critical Look at the Use of Group Projects as a Pedagogical Tool,"
Journal of Education for Business, (March/April), 2004, 213-216.
Bacon, Donald R. "The Effect of Group Projects on Content-Related Learning," Journal of
Management Education, 29(2), 2005, 248-267.
Deeter-Schmelz, D.R., Kennedy, K.N., & Ramsey, R.P. "Enriching Our Understanding of
Student Team Effectiveness," Journal of Marketing Education, 24(2), 2002, 114-124.
Katzenbach, J.R. & Smith, D.K. The Wisdom of Teams: Creating the High-Performance
Organization. Boston: Harvard Business School, 1993.
Koppenhaver, G. D. & Shrader, C. B. "Structuring the Classroom for Performance:
Cooperative Learning with Instructor-Assigned Teams," Decision Sciences Journal of
Innovative Education, 1(1), 2003, 1-21.
Lie, A. Cooperative Learning: Changing Paradigms of College Teaching. Retrieved March
29, 2005, from http://faculty.petra.ac.id/anitalie/LTM/cooperative_learning.htm
Nelson, Bob. "Motivating People Is The Right Thing To Do," Corporate Meetings &
Incentives, 23(11), 2004, 59-62.
Oakley, B., Felder, R. M., Brent, R., & Elhajj, I. "Turning Student Groups Into Effective
Teams," Journal of Student Centered Learning, 2(1), 2004, 8-33.
Peterson, S.E. & Miller, J.A. "Comparing the Quality of Students' Experiences During
Cooperative Learning and Large-Group Instruction," The Journal of Educational
Research, 97(3), 2004, 123-133.
Pfaff, E. & Huddleston, P. "Does It Matter if I Hate Teamwork? What Impacts Student
Attitudes Toward Teamwork?" Journal of Marketing Education, 25(1), 2003, 37-45.
Stewart, B. & Powell, S. "Team Building & Team Working," Team Performance
Management, 10(1), 2004, 35-39.

CHAPTER 12

HEALTH COMMUNICATION
AND
PUBLIC POLICY

HEALTH, CULTURE, COMMUNICATION: PERCEIVED INFORMATION
GAPS/NEEDS OF FEMALE MINORITY PATIENTS & THEIR DOCTORS

Amiso M. George, Texas Christian University
a.george2@tcu.edu

ABSTRACT

This study examines perceived gaps in communication between female minority
patients and their doctors in selected Reno/Sparks, Nevada, clinics and hospitals. The
communication gaps are assessed using an adaptation of Dervin's (1976) sense making theory
and method. Sense making is based on the situation-gaps-uses/helps model, and considers
human behavior as communication actions in which people identify a need or gap in
information. This need prompts them to seek and use information on a given issue. Findings
bring to fore the importance of cross-cultural and gender communication in health care
settings. They also provide implications for health care policy.

I. INTRODUCTION

Effective communication between doctors and female patients contributes to better
diagnosis of illnesses, reduces unnecessary medical procedures, and empowers women to seek
and receive appropriate medical information that enables them to make better decisions about
their health (McGee & Cegala, 1998; Ong, DeHaes, Hoos & Lammes, 1995). Conversely,
poor doctor-female patient communication leads to misdiagnosis of illnesses, dissatisfied
patients, and other problems in health care. Studies show that doctors do not satisfy patients'
information needs because they misjudge patients' need for answers (Williams, Weinman, Dale
& Newman, 1995; DiMatteo, Reiter & Gambone, 1994). Asking questions in an appropriate
manner is important, especially when dealing with female patients who are of different social
class, ethnic group or education. The image of the physician as the arbiter of all medical
information is intimidating to female minority patients, so adjusting medical questions to
patients' demographics is essential. This exploratory study employs the sense-making theory and
method in identifying perceived gaps in transactional interpersonal communication between
female minority patients and their doctors in two Nevada communities, and how these gaps
influence the patients and diagnostic decisions made by doctors. The study examines the context
of the communication experience, medical information needs and expectations, and how the
information helped or hindered both parties, and the overall relational experience. The study
adds to the discussion on the inclusion of cross-cultural and gender communication in medical
school curriculum, medical centers, and the need for educating female minority patients in
effective communication skills with their physicians and other health care providers. It also
provides ideas that should be included in the discussion on health care policy.

II. LITERATURE REVIEW

The need for effective communication between patients and their doctors has been
studied extensively (McGee & Cegala, 1998; Ong et al., 1995). However, as the demographics in
the United States change, it is imperative to reexamine the traditional interpersonal
communication style between doctors and their patients, especially female minority patients. A
2001 Census Bureau Report indicates that the Hispanic/Latino population in the United
States, at 37 million, now exceeds African Americans, who number 36.3 million. A majority
of the Hispanics are of Mexican origin, including Mexican immigrants who are increasingly
settling in
Nevada. In spite of these demographic changes, there is little evidence to indicate that the health
care industry makes any effort to ascertain the medical information needs of this population or
other multi-cultural patients in order to serve them effectively.

Studies of the treatment of women and ethnic minorities (Corea, 1985; Kreps &
Kunimoto, 1994) indicate a need for gender-sensitive and culture-sensitive approaches to
healthcare communication. Scully (1980) notes that poor and minority women are used as
teaching tools in hospitals to enable medical students to exercise the skills they will
subsequently use on affluent patients. Additionally, communication between women and their
doctors usually involves a significant imbalance (Scully, 1980; Raymond, 1982). The inequity is
partially explained by the long-standing belief by some that physicians are the only experts on
health; therefore, their authority should not be questioned. In a pivotal study of obstetricians-
gynecologists, Scully (1980) argues that medical education prepares doctors to gain the patient's
trust and confidence in order to manipulate her into doing what the physician wants her to do.
This tradition, which is reinforced in medical school training, treats qualities like empathy
and cultural and gender sensitivity as inconsequential to the making of a doctor. In fact,
studies indicate that although communication skills are taught in some medical schools, they
are often not a formal part of the curriculum. They may range from a four-hour workshop to
a five-day course in
medical training (Frederickson and Bull, 1992). In spite of acquired communication skills,
research shows that doctors frequently do not meet patients’ information needs because they
assume the patients do not need the medical information (DiMatteo, Reiter & Gambone, 1994;
Williams, et al., 1995). On the other hand, studies also reveal that patients seek little or no
information from physicians, albeit most contend they want information about their illness to
enable them to make decisions about their health (Beisecker, 1990). In essence, doctors and
patients view effective communication differently (Sanchez-Menagey & Stadler, 1994).
However, given that the quality of communication between physicians and their patients
profoundly affects the quality of patient health care (Roter & Hall, 1992), developing effectual
rapport with multicultural or minority patients is important for successful health care delivery
to that population (Kreps and Kunimoto, 1994).

III. THEORETICAL FRAMEWORK: SENSE-MAKING THEORY AND METHOD

A modified version of Sense-Making theory and method was used for data gathering
and analysis. Dervin (1976) developed the “sense-making” approach through a series of
extensive studies, since 1975, on human information needs and uses. Dervin (1989) defined
“sense-making” as a coherent set of concepts and methods used to study how people make
sense of their worlds, especially how they construct their information needs and uses for
information. Sense-making theory rests on the assumption that humans seek and/or use
information when they find themselves in circumstances that hinder their movement in time
and space (Dervin & Clark, 1987). Sense-making rests on the Situation-Gaps-Uses model
(Dervin, 1989). Situation is the time-space framework within which individuals make sense
of their situations or needs. Gaps are the information needs or concerns of respondents
as they move through time-space, and Uses/Helps indicate questions and answers and
other information that help individuals "bridge the gap" created by their needs, as well
as the uses to which the newly acquired information is put. Dervin noted that these
three parts of the sense-making model allow for inter-subjectivity of responses or the
creation of shared meaning.

Sample Selection. Participants were selected through purposive sampling, a non-probability
sampling procedure. Babbie (1986) described this kind of sampling as one based on the
researcher’s familiarity with the population, and judgments about the purpose of the study.
Patton (1990) affirmed that purposive sampling allowed researchers to choose "information-
rich cases that would clarify the research questions" (p. 173). In this case, although the
doctors and patients do not represent all of their peers, they constitute a subset of the
population under study and therefore qualify as an appropriate sample. The doctors were
selected from directories of healthcare providers, while the female minority participants were
identified and selected through nontraditional means.
Physicians: To ensure an adequate response from doctors who served the female population
in Reno/Sparks, 126 OB/GYN, Internal Medicine and General Practitioners were surveyed.
Participants were selected from directories of OB/GYN doctors listed in the Hometown
Health and Blue Cross Blue Shield directories and cross-referenced with the yellow pages for
current address information and for doctors who may not have been listed in these two health
directories. A total of 15 responses were received, for a 12 percent response rate; of these, 13
respondents participated, for a 10 percent total response rate. Given this negligible
response rate for the physician sample, the data were discarded.
Female Minority Patients: Participants were recruited through nontraditional means, ranging
from beauty salons, ethnic restaurants and businesses, to word of mouth, because the traditional
recruitment methods were not successful. A total of 20 female minority patients participated in
the study within a period of five months in 2004. The women’s ages ranged from 20 to 74 with
an average age of 42. Twelve of the females were Hispanics, four were African Americans, two
were Vietnamese, and two were Pacific Islanders. Twelve of the women were married while the
remaining were single mothers, widows or divorcees. Eight of the women were college
graduates, while the rest were high school dropouts, high school graduates, had some college, or
were attending community college. In two focus group interviews, respondents were asked to
describe, based on their experiences, the context of their communication with a physician,
their medical information needs, and whether the information sought helped or hindered their
ability to make decisions about their health.
Micro-Moment Time-Line interview. Of the 20 focus group participants, five whose responses
were similar were selected for a modified Micro-Moment Time-Line interview, a sense-making
interview technique that provides detailed and meaningful responses to research questions. For
these interviews, the researcher re-described the situations that the respondents had alluded to
during the group sessions. The participants were asked to choose the questions or concerns
they considered pertinent and identify physician responses that helped or hindered their
ability to make decisions about their medical condition. These individual interviews lasted
between 60 and 100 minutes, and were conducted at restaurants in the Reno/Sparks area.
Individual Interviews: A modified version of the Micro-Moment Time-Line (individual)
interview technique was used for the individual interviews, because it allowed the researcher
to obtain more than superficial information, given the nature of the topic. Dervin & Clark
(1987) wrote that although Time-Line interviews were longer than regular interviews, the
advantages were threefold. They enabled both parties to establish rapport (and
consequently trust), provided a forum for clarification of questions and responses, and
enabled the researcher to observe some nuances that otherwise would not be possible
under other interviewing circumstances. These (individual) interviews focused on the
themes of three situations or experiences that emerged from the focus group discussion,
for which contextual information was sought. The situations dealt with context of
communication experiences, medical information needs, help and hindrances, and relational
experiences.

V. RESULTS

All 20 participants in the two focus groups and the individual interviewees freely
discussed the three areas of focus: context of communication experience, medical information
needs, helps and hindrances, and relationship experiences with their physicians. Nine themes
emerged from the focus groups and individual interviews: 1) the need for physicians to be
patient, friendly and sensitive; 2) the need to spend more than 15 minutes with patients; 3) the
need for better communication between primary care physicians and specialists, or between
primary care physicians when patients switched doctors; 4) the need for physicians to be
culturally and class sensitive so as to communicate successfully with patients; 5) the need for
physicians to participate in patients' education; 6) the need for male doctors to be more
sensitive to female medical concerns; 7) patients cherished long association with physicians;
8) patients found female doctors more sensitive to female medical concerns regardless of race,
ethnicity or social status; and 9) not all minority doctors were empathetic to minority women.

VI. SUMMARY AND LIMITATIONS

Using sense-making method and theory in this study provided rich and diverse
answers. It also highlighted Oakley’s (1986) assertion that in order to successfully interview
women, the interviewer sometimes became a psychoanalyst or a “friend.” This was evident
in all the focus group and individual interviews. Participants noted that the focus group
sessions gave them an opportunity to talk about issues they had never discussed with anyone
outside their families. It also made them realize that they were not the only ones who had
experienced ineffective communication with physicians. The individual
interviewees recalled and recounted painful experiences about their encounters in hospitals
that they did not feel free to share in the focus group sessions. Participants said the sessions
helped them to learn more about themselves and learn from the experiences of other
participants. They shared information and swapped empowering ideas on how to prepare
themselves for the next visit to the hospital or doctor’s office.

VII. CONCLUSION

Although this study is exploratory and used a small sample in a small urban center,
it provides some basis for an expanded examination of the topic. It also illuminates an area of
doctor-patient communication that has not been studied extensively, female minority patient
perspective. The preliminary data reveal the necessity of an understanding of lived
experiences of female minority patients, an under-served group that is growing as the recent
census figures indicate. A larger study would include a wider diversity of ethnic minority
women in large urban areas. Above all, the study reveals that perceived gaps in
communication exist between female minority patients and their doctors. The women assert
that probing questions are not asked sensitively, that medical jargon is confusing to them, and
that hospital environments are as intimidating as the doctors. The study also reveals that the
perceived ineffectiveness of communication encounters between female minority patients and
their doctors should be addressed. These areas of concern should be used as a springboard for
developing customized intercultural communication skills training as part of the medical
school curriculum. Female minority patients should also be exposed to skills that would
enable them to develop effective communication skills that would be useful in a clinical
setting.

Suggestions for Health Care Policy Decisions
Physicians and other healthcare providers should view minority female patients
through their multiple roles as mothers, wives, employees, coupled with language, gender,
and cultural burdens. Thus it is suggested that gynecological and emotional questions be
couched in non-threatening language. To bridge the gap in communication with their doctors,
female minority patients need to be educated on how to communicate with their doctors. This
could include making a list of questions to ask the physician and mirroring the physician's
responses to ensure clear understanding. Medical centers should print simple
pamphlets in various languages on “how to talk to your doctor about your illness” for their
non-English speaking patients.

This exploratory study clearly indicates that female minority patients have different
views on what constitutes effective communication. The study is not intended to criticize
doctors or female minority patients; rather, it highlights the gap in communication between
these two parties. The study also suggests that if this gap is to be bridged, physicians should
develop effective communication skills that incorporate cultural and gender sensitivities.
These communication skills should be incorporated into medical school curriculum, so that
future doctors will recognize the importance of effectively communicating with their patients
regardless of their background. Female minority patients on the other hand must educate
themselves or participate in community-based programs where they can be taught
communication skills that would enable them to actively seek information, respond to
medical questions, and fully participate in their own wellness. Finally, health care policy
planners would do well to recognize the need to bridge the communication gap between
female minority patients and health care providers to ensure equitable delivery of health care.

REFERENCES

Babbie, Earl. The Practice of Social Research 4th ed. Belmont, CA: Wadsworth, 1986.
Beisecker, Analee E. “Patient Power In Doctor-Patient Communication: What Do We Know?”
Health Communication, 2, 1995, 105-122.
Chase, Marilyn. HMOs Send Doctors to School to Polish Manners, The Wall Street Journal,
Health Journal, April 13, 1998.
Corea, Gena. The Hidden Malpractice: How American Medicine Mistreats Women. New York:
Harper, 1985.
Dervin, Brenda. "Strategies For Dealing With Human Information Needs: Information Or
Communication?" Journal of Broadcasting, 20, 1976, 324-333.
Dervin, Brenda. “Audience as Listener and Learner, Teacher and Confidante: The Sense Making
Approach.” In Ronald E. Rice and Charles K. Atkin, eds., Public Communication
Campaigns. 2nd ed., Newbury Park, CA: Sage, 1989, 67-86.
Dervin, Brenda, & Clark, K. ASQ: Asking Significant Questions. Alternative Tools For
Information Needs And Accountability Assessments By Librarians. Sacramento:
California State Libraries, 1987.
This work was supported in part by a grant from the University of Nevada, Reno, Junior
Faculty Research Grant Fund; this support does not imply University endorsement of the
research conclusions. Thanks to Cindy Petersen, M.A., for help with data collection and
analysis.

THE FRAME-CHANGING STRATEGY IN SARS COVERAGE:
TESTING A TWO-DIMENSIONAL MODEL

Li Zeng, Arkansas State University


zengli@astate.edu

ABSTRACT

This study examines the changing pattern of SARS coverage in the New York Times.
Based on a two-dimensional model, the study revealed that during the life span of the SARS
epidemic (March 2003 to January 2004), the newspaper employed a frame-changing strategy
on both the time and space dimensions to keep the event salient in the news. An
overwhelming majority of the stories employed the core frames, which were the frames that
originally registered the event on the news agenda. It was also found that during the 11-
month period, the newspaper shifted its focus on the core frame combinations, which further
supported the role of the frame-changing strategy in the coverage of a long-lasting event.

I. INTRODUCTION

The SARS epidemic was a tragedy that alerted the whole world about the
“vulnerability of global health systems” (Chang, Salmon, Lee, Choi, and Zeldes, 2004).
Eventually claiming more than 800 lives throughout the world (WHO, 2003, August 15), the
disease remained ignored for almost four months until international news media reported on it
extensively following a SARS warning by the WHO in March 2003 (WHO, 2003, March 12).
Due to the lack of human knowledge about the disease as well as the way in which
coronavirus, later known as the cause of SARS, was transmitted, the news media closely
watched the development of the epidemic during the following months. This study examines
the process during which the SARS outbreak was portrayed in a major U.S. newspaper.

II. REVIEW OF LITERATURE

Media framing has been known as the process in which the media select and package
ongoing events and issues (Entman, 1993; Iyengar, 1991; Ryan and Sim, 1990; Schon and
Rein, 1994). Through selection and emphasis of certain aspects of an occurrence or an issue,
media actors (mostly journalists) are able to define a problem (Entman, 1993) within selected
context, thus creating a “constructed reality” (Turk and Franklin, 1987, p. 30). Studies on
agenda-setting, especially second-level agenda-setting, have repeatedly suggested that the
mediated version of reality may considerably affect people’s understanding of social reality
(e.g., McCombs and Reynolds, 2002). However, frames have often been treated as static
features of news content. Very little is known about the process of framing, especially when
involving an event with a long life span. In a recent attempt to address framing as a dynamic
process, Chyi and McCombs (2004) proposed a two-dimensional model, which takes into
account media focus on both the space and time dimensions. In this model, the space
dimension consists of five levels: individual, community, regional, societal, and international.
The time dimension consists of three levels: past, present, and future. Based on their analysis
of the coverage of the Columbine School shootings in the New York Times, they found that
different frames were employed on the space and time dimensions as the event developed. On
the space dimension, the media focus gradually shifted from the individual level to the
societal level. On the time dimension, a considerable amount of coverage focused on past
frames at the beginning, while the use of future frames increased during the second half of
the event’s life span. According to Chyi and McCombs, this frame-changing strategy was
used by the media to secure the salience of the event over time. They also found that the core
frames (the combination of community and present frames), which reflected the essential
features that initially made the Columbine School shootings a salient news event, appeared in
fewer than one quarter of the total number of stories on this topic. The majority of the stories
employed extended frames such as societal plus present.

Is the frame-changing strategy adopted in media coverage of other events as well? Is
it normal that stories using extended frames greatly outnumber those using core frames,
which focus on the essential features of a news event or issue? This study attempts to
reveal the framing process concerning the New York Times’ coverage of the SARS epidemic,
a multi-wave international event. The SARS epidemic first entered the media agenda in
March 2003 as a fatal disease that infected a number of countries in Asia. Therefore, the core
frames of the epidemic are considered to be the “international plus present” combination.
Based on Chyi and McCombs’s two-dimensional scheme (2004), this study asks the
following research questions:
RQ1: How were the SARS stories distributed in the New York Times during the life
span of the SARS Epidemic?
RQ2: What was the pattern of space frame change in the SARS coverage?
RQ3: What was the pattern of time frame change in the SARS coverage?
RQ4: How was the use of space frames correlated with the use of time frames?

III. METHOD

This study employed the method of content analysis. Using the Lexis-Nexis database,
a key word search for “SARS” or “Severe Acute Respiratory Syndrome” was conducted in
the New York Times between March 2003 and January 2004. A total of 1,098 stories were
retrieved from the database. A sample of 140 stories was constructed using the systematic
sampling method. The coding instrument in this study was borrowed from the scales by Chyi
and McCombs (2004). The coding unit is a story. Each story was coded individually on the
month of publication, the space dimension, and the time dimension. The space dimension
includes five categories. A story was coded in the individual category if it emphasized
individual SARS patients, reactions from their families or friends, or contextual information
about individual SARS cases or probable cases without referring to a larger area related to the
cases. A story was in the community category if it emphasized the SARS condition in a local
area, such as a large hospital, a town or a small city. A story was in the regional category if it
emphasized the situation in a large metropolitan area such as Toronto or Beijing, a special region
such as Hong Kong, a province/state, or a large area that is within the border of a certain
country. The national category was a slight variation from Chyi and McCombs’s (2004)
societal level. A story was coded in the national category if it emphasized the situation within
a specific country. Finally, a story was in the international category if it emphasized the
situation across national borders, e.g., SARS research led by the WHO.
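The systematic sampling step described above (drawing 140 of the 1,098 retrieved stories) is not detailed in the paper. A minimal sketch of how such a sample is typically drawn, assuming a fractional sampling interval with a random start (the paper does not report its interval or starting point, so these are illustrative), might look like this:

```python
import random

def systematic_sample(population, n):
    """Draw an n-item systematic sample: a fixed interval k = N / n
    stepped through the list after a random start within the first interval."""
    N = len(population)
    k = N / n  # fractional interval keeps the sample size exact
    start = random.random() * k  # random start in [0, k)
    # min() guards against float rounding at the upper edge of the list
    return [population[min(int(start + i * k), N - 1)] for i in range(n)]

# 1,098 placeholder story identifiers standing in for the retrieved articles
stories = [f"story_{i:04d}" for i in range(1, 1099)]
sample = systematic_sample(stories, 140)
print(len(sample))  # 140
```

Because the interval (about 7.8 stories) spans the whole list, the sample is spread evenly across the 11-month period rather than clustered at the start.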

The time dimension includes three categories: past, present, and future. A story was
coded in the past category if it mainly provided historical background or traced relevant
events in the history, such as influenza and AIDS. A story was in the present category if it
focused on updates of the ongoing SARS epidemic. A story was in the future category if it
made predictions about further developments of the situation, proposed actions to be taken,
or estimated future impacts of the event, including preventive procedures and proposals for
future medical research. A graduate student and the researcher participated in an inter-coder
reliability check. The Holsti’s inter-coder reliability for all the variables fell within the highly
satisfactory range of .87 to 1.00 (Holsti, 1969). The graduate student coded all the 140 stories
in the sample.
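The paper reports Holsti reliabilities of .87 to 1.00 but does not show the computation. For reference, Holsti's coefficient of reliability is simple percent agreement, CR = 2M / (n1 + n2), where M is the number of coding decisions on which the two coders agree and n1, n2 are the numbers of decisions each coder made. A minimal sketch (the coder labels below are invented, not the study's data):

```python
def holsti_reliability(coder_a, coder_b):
    """Holsti's coefficient of reliability: CR = 2M / (n1 + n2)."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Both coders must rate the same set of stories")
    agreements = sum(a == b for a, b in zip(coder_a, coder_b))
    return (2 * agreements) / (len(coder_a) + len(coder_b))

# Illustrative space-dimension codes for ten stories (one disagreement)
coder_1 = ["intl", "natl", "natl", "intl", "reg", "intl", "natl", "intl", "intl", "reg"]
coder_2 = ["intl", "natl", "reg",  "intl", "reg", "intl", "natl", "intl", "intl", "reg"]
print(round(holsti_reliability(coder_1, coder_2), 2))  # 0.9
```

Because both coders rate every story, n1 = n2 and the coefficient reduces to the proportion of stories coded identically.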

IV. FINDINGS AND DISCUSSION

The first research question asks how the SARS stories were distributed over time. A
total of 1,098 articles were published about the SARS epidemic over the 11 months under
analysis. After picking up the topic on March 16, 2003, the newspaper devoted intensive attention
to the event during April and May 2003, with an average of more than 10 articles about the
outbreak on a daily basis. However, the amount of coverage suddenly dropped to about four
stories per day in June. After July 2003, when the WHO announced that the whole world had
conquered SARS (WHO, July 5, 2003), the media focus gradually shifted away from the
epidemic. The amount of coverage reached its lowest point in November 2003, with a total
number of 19 stories published during that month. However, more stories appeared about
SARS around the turn of 2004, partly due to the widespread suspicion about a revival of the
disease during the winter months.
Research question two asks how the framing of the SARS epidemic changed over
time on the space dimension. The data suggested that international frames were the dominant
frames on the space dimension, alone accounting for the largest share (42.9%) of the total. One out
of three stories (36.4%) employed a national frame, and another 13.6 percent of the articles
adopted a regional frame. Very few stories used a frame at the individual (5%) or community
level (2.1%).

A closer examination of the three dominant space frames revealed a changing pattern
over time (Figure I). While international and national frames apparently led the coverage of
the SARS epidemic during the 11-month period, the whole framing process featured a zigzag
path at all three leading levels.

Figure I: Frame-Changing On The Space Dimension (N=130)
[Line graph: percentage of stories employing regional, national, and international frames each month, March 2003 through January 2004]

International frames were the most frequently adopted frames
(57%) when the event first entered the media agenda in March 2003. But the percentage of
international frames continuously decreased until it reached an all-time low point in July,
accounting for only 30 percent of the total. There was a dramatic increase in the use of
international frames in August and September 2003, indicating that the media focus shifted
back to discussion in a broader global context. The percentage of international frames
suddenly dropped during October and November 2003, but regained momentum in January
2004, reaching another peak of 75 percent of the total. The overall pattern of international
and national frames suggested that the two types of frames complemented each other.
Although not a leading frame at the very beginning, national frames gradually gained salience
between March and July 2003. The percentage of national frames suddenly decreased from
60 percent to slightly over 10 percent in August, and then gradually increased (with some fall
in between) to reach a peak in December, after which national frames suddenly disappeared
altogether. The ebb and flow reflected the shift of the newspaper’s focus from and to
discussion of the SARS outbreak within a national context. An examination of the change in
the number of space frames suggests that regional frames appeared most often in stories
published during April and May 2003. During the remaining months, very few articles
focused on parts of a certain country.

Research question three asks how the framing of the SARS stories changed on the time
dimension over time. Data analysis showed that only a negligible proportion (1.4%) of the
stories employed past frames. Figure II displays the
distribution of present and future frames. Present frames were the most dominant frames
during the whole life span of the SARS epidemic. During the first few months of the outbreak
and in September and December 2003, an overwhelming majority of the articles employed
present frames. Due to the limited use of past frames over time, the use of future frames and
present frames were complementary. When the media attention slightly shifted away from
current updates of the SARS situation, predictions and propositions about future preventive
procedures gained some prominence.

Figure II: Frame-Changing On The Time Dimension (N=138)
[Line graph: percentage of stories employing present and future frames each month, March 2003 through January 2004]

Because SARS coverage in the New York Times tailed off after May 2003, a
distribution of the number of stories on the time dimension may be more revealing. As
illustrated in Figure III, the number of stories employing present frames reached a peak
between April and May 2003, and gradually decreased during the remaining months. The
number of stories using future frames remained low during the whole process, with slight
fluctuations in April and July 2003.

Figure III: Change Of Time Frames In The Number Of Stories (N=138)
[Line graph: number of stories employing present and future frames each month, March 2003 through January 2004]

Research question four involves the relationship between frame use on the space and
time dimensions. Chi-square results suggest that on all three levels of space frames that were
frequently used (regional, national, and international), present frames were always the most
prominent on the time dimension. No significant correlation of frame use on the space and
time dimensions was identified (X² = .375, p > .05). The core frames, which were the
“international plus present” combination, alone appeared in nearly half (44%) of all the
SARS stories. The most frequently adopted extended frames were the “national plus present”
combination, accounting for 38 percent of the SARS frames. While it is true that SARS
cases were reported in a number of countries throughout the world and the United States was
not heavily infected during the outbreak, within a SARS-infected country the outbreak can be
considered more a national occurrence than a global disease (Zeng, 2006). National frames
may be essential as well when a foreign correspondent reports from within a SARS-hit country,
e.g., a New York Times correspondent reporting from Beijing. Therefore, the national frame
can be considered a secondary core frame in the U.S. media. The combinations of
“international or national plus present” frames appeared in four out of every five SARS stories.
The heavy use of these two combinations suggested that the newspaper took into
consideration the nature of the SARS outbreak. The shift of focus between these two
combinations indicated that even when focusing on the nature of the event itself, the
newspaper employed a frame-changing strategy to maintain the salience of the event on the
news agenda.
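The chi-square test of independence reported above checks whether a story's space frame is associated with its time frame. As a reference, the Pearson statistic can be computed directly from a space-by-time contingency table; the cell counts below are purely illustrative and are not the study's data:

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for a two-way contingency table:
    sum over cells of (observed - expected)^2 / expected, where
    expected = row_total * column_total / grand_total."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed_count in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (observed_count - expected) ** 2 / expected
    return chi2

# Illustrative counts by space frame (rows: regional, national,
# international) and time frame (columns: present, future)
observed = [[15, 3],
            [42, 8],
            [50, 12]]
print(round(chi_square_statistic(observed), 3))
```

A statistic this small relative to its critical value (as with the study's X² = .375 at p > .05) indicates that the observed cell counts are close to what independence of the two dimensions would predict.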

V. CONCLUSIONS

This study tests Chyi and McCombs’s two-dimensional measurement scheme in
SARS coverage in the New York Times. The findings support the model in the sense that, as
this multi-wave event unfolded, frame-changing was employed as a strategy to keep it alive.
The study also indicates that, depending on the complexity level of an occurrence, its
essential features that help register it on the news agenda might be highly complicated.
Therefore, there might be more than one core frame for such events. It is unclear why in the
New York Times more than 80 percent of the SARS stories used a core frame, thus
focusing on the essential features of the event, while only one-fourth of the stories on the
Columbine School shootings did the same. Nonetheless, the use of core frames in
SARS coverage, particularly the shift of focus on core frame combinations, confirms the role
of the frame-changing strategy in the coverage of a long-lasting event. Consistent with
previous research, findings from this study suggest that framing of a news event is an on-
going process, although clear-cut frame-changing patterns are sometimes hard to identify. In
a multi-wave event that lasted a long time like the SARS epidemic, the frame-changing
patterns can be very complicated. Further research is needed to understand the factors that
influence such changes and how such changes may influence public perception.

REFERENCES

Chyi, Hsiang Iris, and McCombs, Maxwell. “Media Salience and the Process of Framing:
Coverage of the Columbine School Shootings.” Journalism and Mass Communication
Quarterly, 81(1), 2004, 22-35.
Entman, Robert M. “Framing: Toward Clarification of a Fractured Paradigm.” Journal of
Communication, 43, 1993, 51-58.
Holsti, Ole R. Content Analysis for the Social Sciences and Humanities. Reading, Mass.:
Addison-Wesley Pub. Co., 1969.
Iyengar, Shanto. Is Anyone Responsible? How Television Frames Political Issues. Chicago:
University of Chicago Press, 1991.

NAVIGATING ILLNESS BY NAVIGATING THE NET:
INFORMATION SEEKING ABOUT SEXUALLY TRANSMITTED INFECTIONS

Kelly A. Dorgan, East Tennessee State University


dorgan@etsu.edu

Linda E. Bambino, East Tennessee State University


lebambino@charter.net

ABSTRACT

With changing norms about health care interactions, patients are expected to be active
in their quest for "good" health and their management of disease. The Internet has quickly
become a dominant source of health information with online bulletin board communities
affording people a place to gather both information and support. By sampling messages
(N=500) from two different online bulletin boards, the authors examine information seeking
techniques used by women with sexually transmitted infections. Such bulletin boards allow
those facing stigmatizing illnesses to manage health-related information in a variety of ways.

I. INTRODUCTION

In this age of modern medicine, we aim to demystify health (Wallis, 1993) by
emphasizing predictability and control of the human body (Mishel, 1990) and education and
empowerment of the patient (Wallis, 1993). We live in a time that values a "model of health
care in which people use their own judgment” in making decisions (Parrott & Condit, 1996,
p. 4). Of course, communication plays an integral role in illness prevention and health
maintenance, including information-gathering efforts. Simply put, patients must have skills
to navigate a widening and deepening ocean of health-related information. Therefore, this
research paper examines how women use Internet bulletin boards to seek information about
their sexually transmitted infections (STIs). Women are at the heart of this examination since
STIs arguably affect them more severely, both socially (Leonardo & Chrisler, 1992) and
physically (e.g., cervical cancer; Palefsky & Handley, 2002), than their male counterparts.

II. INFORMATION SEEKING AND CHRONIC ILLNESS

Berger and Bradac (1982) identified three information gathering techniques used
during interpersonal interactions with a "stranger": (1) passive strategies (e.g., observing the
stranger); (2) active strategies (e.g., asking a third party questions about the stranger); and (3)
interactive strategies (e.g., asking the stranger questions). These information seeking
techniques have also been investigated within the context of illness events, since "people with
acute or chronic illnesses often seek information to understand their diagnosis, to decide on
treatments, and to predict their prognosis" (Brashers et al., 2002, p. 258). For example,
Brashers et al. (2000) concluded that persons living with HIV/AIDS may use all three of
these techniques because of the chronic and complex nature of the illness.

Originally, these techniques were applied to gathering information about a "stranger."
Therefore, Brashers et al. (2000) had to reconceptualize all three in order to describe
information gathering during an illness event. That is, passive information-seeking now
involved interacting in “information-rich environments,” thereby absorbing passing
information (pp. 70-71). Active techniques involved purposely “eliciting information from
multiple sources” and “monitoring for updated information” (p. 71), and interactive
information-seeking included “self-experimentation with illness and treatments” (p. 71).

III. STI AND STIGMA: INFORMATION SEEKING ON THE NET

Because of the stigma and uncertainty surrounding STIs (e.g., HPV, human
papillomavirus and HSV, herpes simplex virus) the person infected may need to rely on
passive, active and interactive information gathering techniques. For example, when dealing
with a stigmatizing illness, some might prefer passive techniques. As Parrott (1995)
contends, “persons who seek information but hesitate to disclose to others their need for
information may use impersonal press or broadcast sources to reduce their uncertainty more
often than they use interpersonal sources” (p. 178).

Yet, it is clear that all three information-gathering techniques are relevant to
uncertainty-provoking illness events. Consider that when dealing with a stigmatizing illness,
a person might employ passive techniques, particularly when initially accessing information;
however, as she continues to face uncertainties and have questions raised (e.g., about
treatment options, relational consequences), she will likely also have to employ active and
interactive techniques. Since there might be hesitation on the person’s part to engage in
face-to-face communication, it would be beneficial to analyze a context where all three
techniques are being used: online communication. Internet bulletin boards, for instance,
allow people to “passively” read messages without announcing their presence, but when they
desire more active information-gathering approaches, they are able to post their questions and
comments (see Robinson, 2001).

For the purpose of this paper, these information seeking techniques must be
reconceptualized to "fit" within a framework of online information seeking. Drawing on
previous work (e.g., Brashers et al., 2000; Sharf, 1997), the passive method is comparable to
online "lurking" in that the information seeker is simply observing within an "information-
rich" community. The active approach can describe intentionally eliciting information about
the illness (e.g., asking questions within the online bulletin board community). Finally, the
interactive information-seeking approach was used to describe “real-time" interactions,
allowing for free-flowing communication directly exploring illness-related issues (e.g., online
chatting; instant messaging).

IV. METHODS

Internet bulletin boards display "all messages that have been posted on it and their
respective replies” (Robinson, 2001, p. 707). The two boards selected for this study are
thriving communities: the HPV board, in existence since 1998, contained over 25,000
current and archived messages; the HSV board, beginning in 1999, contained over 12,000.
The boards required no passwords to read the postings, rendering them accessible to anyone
who stumbled across or actively searched for such websites. As illustrated by Robinson
(2001) in the Exemption Decision Model for Unsolicited Narrative Data from the Internet,
the first author applied for and received exempt status from the Institutional Review Board,
since users posting "to a freely accessible asynchronous board expect that persons unknown
to them may read, share, and comment on their postings” (p. 711). Yet, safeguards were still
used to protect users' anonymity: (1) unique identifiers, such as personal names, were
omitted; (2) participant message identifiers were used rather than usernames (e.g., post 13 out
of 250 HPV posts is referred to as HPV P13); and (3) message board URLs (i.e., website
addresses) are not referenced in this paper.

After receiving IRB approval, 500 messages were downloaded from the two boards,
beginning with the most recent message posted and moving toward the earliest messages.
Given that the HPV site had approximately 25,000 messages, 250 (1%) were sampled to
enable persistent observation (Lincoln & Guba, 1985); 250 messages were also sampled from
the HSV site. Message selection was based on several criteria so as to avoid the temptation to
select the most provocative messages (e.g., only messages posted by users directly or indirectly
identifying themselves as females were selected for inclusion).
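
The sampling-and-anonymization procedure described above can be sketched as follows. This is a minimal illustration only; the post records, field names, and function are hypothetical, not the authors' actual tooling.

```python
# Illustrative sketch (hypothetical records and field names, not the
# authors' actual tooling): take the most recent qualifying posts from a
# board, cap the sample size, and replace usernames with sequential
# identifiers such as "HPV P13".

def sample_and_anonymize(posts, board_label, n=250):
    """posts: dicts with 'timestamp', 'username', 'text', 'poster_is_female'."""
    # Most recent first, mirroring the study's selection procedure
    recent_first = sorted(posts, key=lambda p: p["timestamp"], reverse=True)
    # Inclusion criterion: self-identified female posters only
    eligible = [p for p in recent_first if p["poster_is_female"]]
    # Anonymize: drop usernames, assign "BOARD Pn" message identifiers
    return [
        {"id": f"{board_label} P{i}", "text": p["text"]}
        for i, p in enumerate(eligible[:n], start=1)
    ]

hpv_sample = sample_and_anonymize(
    [
        {"timestamp": 3, "username": "user_a", "text": "post a", "poster_is_female": True},
        {"timestamp": 1, "username": "user_b", "text": "post b", "poster_is_female": True},
        {"timestamp": 2, "username": "user_c", "text": "post c", "poster_is_female": False},
    ],
    "HPV",
)
# hpv_sample: [{"id": "HPV P1", "text": "post a"}, {"id": "HPV P2", "text": "post b"}]
```

Note that the usernames never appear in the output, matching the paper's second safeguard.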

Given that this project was observational in nature, a qualitative content analysis
approach was used in data analysis (Berg, 2004), an approach that draws on grounded theory
methods (Strauss & Corbin, 1998). All 500 sample posts were combined into a single
message transcript, totaling 63 pages, with HPV posts and HSV posts separated into different
sections to allow comparisons within and between categories (Strauss & Corbin, 1998).

V. RESULTS

First, a few participants acknowledged that they adopted an observational role,
reading others’ messages instead of posting. One woman wrote, "you know I visit this site
everyday even though i rarely post ... it really keeps my head together” [HSV P40].
Likewise, a second woman admitted that it had taken her “a few weeks to post a message
because when I first found out about this club I basically read everyone else’s thoughts and
advice” [HPV P189]. Thus, some women appeared to use the passive technique, for
example, simply reading messages to gain information about herpes typing [HSV P11].
Moreover, this observational technique appeared to ease concerns that face-to-face
interactions may raise.

I have been reading posts from this club for almost a year. I just joined the club last
week. I was afraid to talk to anyone about HPV or warts. [HPV P4]

Second, there was also evidence that some participants desired a more interactive
approach. They too sought information, but they wanted to do so by “chatting.” Because the
HPV board members also had access to a board-associated “chat room,” there were
occasional requests made for other site users to “chat." “I must always sign on [to the chat
room] when no one else is on. I would love to chat, its kinda hard to be single with hpv, I
never know when the right time is to tell someone and should I even date someone that
doesn't have it?” [HPV P70]

Eight posters from the HPV board solicited partners for private chat sessions.
Because the HSV site did not have the chat option, no such requests were made; however,
there was still interest in finding resources. For example, one HSV board member asked:
Can anyone recommend specific sites with good, clear information on just what H [herpes] is
and how it is transmitted and how to possibly prevent transmission? [….] I am a bit new to
"surfing" and I haven't been in a chat room yet (although I'd like to) [HSV P79]

Third, posters overwhelmingly employed the active information-seeking technique. In
particular, posters requested that board members share their own stories about symptoms,
alternative treatments, medications, and transmission, thus requesting other board members'
experiential information. Several HSV posters, for instance, wanted to understand herpes
outbreaks beyond just their symptoms: some sought information about outbreak-triggers,
such as foods or psychological stressors; others wanted outbreak-cues as to be able to predict
and understand their illness.

Yesterday I started having a weird feeling in my right thigh and the best way I can
describe it is as very sensitive to the touch although I cannot see any blisters there.
Can someone tell me is this is also considered an outbreak? or if someone else has
experienced this too? [HSV P55].

Especially for women who deemed their health care providers untrustworthy or
inaccessible, these boards offered a place to actively secure second opinions and
“substitute” medical advice. For example, once prescribed a treatment, some used the board
to solicit others’ opinions: “I just got prescribed Aldara today. I was just wondering what
other people's experience with it has been” [HPV P132]. Still another user asked, “I was
wondering if you had to be anesthetized…when you had your LEEP?? [Loop Electrosurgical
Excision Procedure] My doctor wants me to be” [HPV P156]. In essence, some women used
the Internet bulletin boards to actively seek information about whether their health care
providers were doing "what is best for me” [HPV P211].

VI. CONCLUSION

This research paper examined women's use of online bulletin boards in their passive,
active, and interactive information seeking efforts about STIs. Unfortunately, two techniques
(passive and interactive) could not be adequately addressed by this project. Unless the posters
explicitly admitted to “lurking” or “chatting” in order to gain information, these techniques
could not be fully analyzed. Yet, what did emerge was that these bulletin board communities
can provide invaluable forums for women actively seeking to manage their STI experience.

Underscoring previous research (e.g., Sharf, 1997), the results herein suggested two
primary benefits to using online communication: (1) it can provide a relatively “risk free”
environment where health-related issues can be explored; and (2) the participants can gain
support and information from people in similar circumstances. Internet sites have managed to
combine a “face-saving” impersonal medium with an interpersonal one. That is, an
individual with a stigmatizing health condition can still connect with others, asking questions
and seeking opinions, but by “hiding” her face behind a computer screen and her identity
behind a username, she may decrease the risk of embarrassment or rejection. Thus, the
anonymity promised by computer-mediated communication can encourage the use of all
three information-gathering techniques.

As the patient's role in health care continues to change (Parrott & Condit, 1996), there
is recognition that communication about health-related matters is not just taking place in
doctors’ offices, a point highlighted by these research findings. Yet, the active information-
seeking patient may face one complication in particular: information management is
becoming complicated by the "saturation of the media environment," making it "difficult for
individuals to avoid information about some health topics" (Brashers et al., 2002, p. 265).
Essentially, then, patients now have to sort through the information that their providers give
them, as well as the information they secure through the Internet.

This study's findings help underscore the fact that the Internet can be a tool for
empowerment, especially for those facing a stigmatizing illness. Online bulletin boards can
provide a place for diverse people to meet and help fulfill diverse information needs. Board
members can minimize their interpersonal risks (e.g., embarrassment) by simply asking for
information online. Essentially, then, by navigating the deep and wide waters of the Internet,
these women may be better prepared to navigate their own illness experiences.

REFERENCES

Berg, B. Qualitative Research Methods for the Social Sciences. Boston, MA: Allyn and
Bacon, 2004.
Berger, C.R., & Bradac, J.J. Language and Social Knowledge: Uncertainty in Interpersonal
Relations. London: Edward Arnold Publishers, 1982.
Brashers, D.E., Neidig, J.L., Haas, S.M., Dobbs, L.K., Cardillo, L.W., and Russell, J.A.
“Communication in the Management of Uncertainty: The Case of Persons Living with
HIV or AIDS.” Communication Monographs, 67 (1), 2000, 63-84.
Brashers, D.E., Goldsmith, D.J., & Hsieh, E. "Information Seeking and Avoiding
in Health Contexts." Human Communication Research, 28 (2), 2002, 258-271.
Leonardo, C., & Chrisler, J.C. "Women and Sexually Transmitted Diseases." Women &
Health, 18 (4), 1992, 1-15.
Lincoln, Y. S., & Guba, E. G. Naturalistic Inquiry. Beverly Hills, CA: Sage
Publications, 1985.
Mishel, M.H. "Reconceptualization of the Uncertainty in Illness Theory." IMAGE: Journal
of Nursing Scholarship, 22 (4), 1990, 256-262.
Palefsky, J. & Handley, J. What Your Doctor May Not Tell You about HPV and
abnormal Pap Smears. New York: Warner Books, 2002.
Parrott, R. "Topic-Centered and Person-Centered 'Sensitive Subjects': Recognizing and
Managing Barriers to Disclosure about Health." In L.K. Fuller & L. McPherson
Shilling (Eds.). Communicating about Communicable Diseases (pp. 177-190).
Amherst, MA: HRD Press, 1995.
Parrott, R.L., & Condit, C.M. "Introduction: Priorities and Agendas in
Communicating about Women’s Reproductive Health." In R.L. Parrott & C.M.
Condit (Eds.). Evaluating Women’s Health Messages (pp.1-11). Thousand Oaks,
CA: Sage Publications, 1996.
Robinson, K.M. "Unsolicited Narratives from the Internet: A Rich Source of Qualitative
Data." Qualitative Health Research, 11 (5), 2001, 706-714.
Sharf, B. "Communicating Breast Cancer On-line: Support and Empowerment on the
Internet." Women & Health, 26 (1), 1997, 65-84.
Strauss, A., & Corbin, J. Basics of Qualitative Research: Techniques and Procedures for
Developing Grounded Theory. Thousand Oaks, CA: Sage, 1998.
Wallis, L.A. "Preface." In D. Foley & E. Nechas (Eds.). Women’s Encyclopedia of Health &
Emotional Healing (pp. xxi-xxii). Emmaus, PA: Rodale Press, 1993.

COMMUNICATION AS CAUSE & CURE: SOURCES OF ANXIETY FOR
INTERNATIONAL MEDICAL GRADUATES IN RURAL APPALACHIA

Kelly A. Dorgan, East Tennessee State University


dorgan@etsu.edu

Linda E. Bambino, East Tennessee State University


lebambino@charter.net

Michael Floyd, East Tennessee State University


floyd@etsu.edu

ABSTRACT

This study investigated the role of communication in cultural adaptation-related
anxiety. Twelve International Medical Graduates (IMGs) participated in 1- to 1½-hour-long
interviews at one of three rural Appalachian clinics. Three broad sources of anxiety emerged
from the interview data, including: social isolation (lack of communication); language
barriers with patients and colleagues; and shifting communication norms. Communication
was also identified as a tool for overcoming anxiety.

I. INTRODUCTION

In a recent editorial, Bruijnzeels and Visser (2005) wrote, "Due to the increased
migration from the sixties in the last century up until now, health care in the 'modern Western
world' is more and more confronted with the consequences of a multiethnic society" (p. 151).
Along with the opportunities associated with a multiethnic society, there are also
complications. In particular, cross-cultural interactions can bring about anxiety and
apprehension (Gudykunst, 2003), especially when interactants are poorly trained or lack
culturally specific communication skills.

Physicians graduating from medical programs outside of the United States
(International Medical Graduates, IMGs) and entering a U.S.-based residency program
arguably face a multi-layered and frustrating cultural adaptation in that they have to adjust to
the broad host culture, the regional culture, and the residency program culture. The process of
entering a new culture and interacting with "strangers" is anxiety-provoking; however, it is
through repeated, sustained, and satisfying interactions with those "strangers" that we are able
to overcome anxiety and adapt to the new culture (Gudykunst, 2003). Therefore, this paper
examines the role of communication during IMGs' cultural adaptation, specifically
investigating communication as both "cause" of and "cure" for adaptation-related anxiety.

II. INTERNATIONALIZATION OF U.S. HEALTH CARE

Between 23 and 28 percent of physicians practicing in the United States, Canada,
Australia, and the United Kingdom are IMGs (Mullan, 2005). While the number of U.S.
medical graduates increased moderately over the past 20 years, the number of U.S. hospital
interns increased exponentially with the influx of IMGs (Blumenthal, 2005). Of course, this
influx has given rise to concerns over unintended consequences. For example, there is
concern over weakening other nations' health care systems by recruiting their physicians and

medical students (Blumenthal, 2005). Additionally, there is confusion about how best to
meet the cross-cultural challenges associated with the increased diversification of the U.S.
medical community. To help international health care providers adjust, communication
interventions are being proposed, including cultural immersion (Davis, 2003) and cultural
sensitivity training (Majumdar et al., 1999). Clearly, effective communication is key to
successful cross-cultural adaptation, but it can also be a source of problems.

III. ADAPTATION ANXIETY: COMMUNICATION AS CAUSE AND CURE

Communication is a "universal activity"; however, the specific manner of
communication "varies considerably across cultures" (Knutson et al., 2002, p. 3). For
example, certain cultures demonstrate a stronger willingness to communicate (e.g., U.S.),
leading to cross-cultural challenges due to differences in conflict management and
conversation initiations (Knutson et al., 2002). Moreover, notions about what constitutes
"communication competence" are culturally bound, affecting perceptions of appropriate
greeting behaviors, eye contact, posture, and gestures, to name a few (see Johnson, Lindsey,
& Zakahi, 2001).

Cross-cultural communication can provoke fear and apprehension, possibly resulting
in increased avoidance and decreased satisfaction with interactions (see Roach & Olaniran,
2001). Ironically, the "cure" for adaptation-related anxiety is communication. For example,
international students may benefit from additional training to overcome communication
apprehension (Roach & Olaniran, 2001), arguably resulting in increased cross-cultural
communication (e.g., between the trainer and international student). Likewise, some
advocate the use of "ethnic link workers" who serve as a communication bridge between
ethnically different doctors and patients (Bruijnzeels & Visser, 2005); yet, by involving link
workers, a third party is introduced into doctor-patient interactions, adding an additional layer
of complexity. The underlying cause of anxiety for IMGs is culturally bound as they attempt
to adapt to unique communication challenges before them. It is plausible, then, that
developing a communicative “cure” would simplify the transition into the new culture and
reduce anxiety levels of the IMGs in a more efficient manner. Therefore, it is important to
investigate communication as both a cause and a cure of adaptation-related anxiety among
IMGs in rural Appalachia, a region where they are immersed in a seemingly ethnically
homogenous community.

IV. METHODS

IMGs (N=12) participated in 1- to 1½-hour-long interviews at one of three rural
southeastern Appalachian clinics. Participation was voluntary and informed consent was
obtained from each medical resident interviewed. The sample included female (n=7) and
male (n=5) interns from seven different cultures, including: Caribbean (n=1), Iran (n=2),
India (n=4), Colombia (n=1), Denmark (n=1), Pakistan (n=2), and Peru (n=1). To preserve
confidentiality, participants are identified by a number (e.g., P8 for participant 8 out of 12).

A qualitative content analysis approach was used in analyzing the manifest and latent
elements in the interview content (Berg, 2004). As advised by grounded theory
methodologists (Strauss & Corbin, 1998), the twelve interview audio recordings were first
transcribed (167 pages of text) then coded. Initial open and axial coding efforts were done
independently by the first and second authors. Additional axial coding was done jointly to:
resolve differences; compare within and between categories; and select exemplar quotes to

illustrate emergent themes (Strauss & Corbin, 1998). To preserve the integrity of the data,
IMG quotes presented below were only edited for clarity or conciseness. If material was
removed or added by the authors, this is indicated with brackets.

EX: "A lot of the things were very new for me [….]"
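
The joint coding procedure described above (independent open coding by the first and second authors, followed by joint axial coding to resolve differences) can be sketched as follows. The excerpt IDs, code labels, and function are hypothetical illustrations, not the authors' actual analysis workflow.

```python
# Illustrative sketch (hypothetical data, not the authors' coding software):
# compare two coders' independent open codes per transcript excerpt and
# flag disagreements for joint axial-coding discussion.

def flag_disagreements(coder1, coder2):
    """coder1/coder2: dicts mapping excerpt IDs to assigned code labels."""
    shared = coder1.keys() & coder2.keys()  # excerpts both coders coded
    return sorted(e for e in shared if coder1[e] != coder2[e])

coder1 = {"P8-excerpt1": "social isolation", "P9-excerpt4": "language barriers"}
coder2 = {"P8-excerpt1": "social isolation", "P9-excerpt4": "shifting norms"}
to_discuss = flag_disagreements(coder1, coder2)  # -> ["P9-excerpt4"]
```

Excerpts on which the coders agreed pass through; the flagged excerpts are the ones resolved jointly during axial coding.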

V. RESULTS

Three broad sources of communication-related anxiety emerged from the interview
data. First, a predominant theme, raised by seven of the IMGs, was that a lack of communication
can be anxiety-producing during cultural adaptation. Especially when first entering their host
culture, IMGs experienced social isolation, resulting in shock (P8), depression (P7), and
hardship (P1).

I am a very social person. I always was. [….] you don’t have that social interaction
that much compared to what you would have in India. It is so crowded and you have
so many people talking to you. [….] You are pushing a hundred people right where
you are walking. Around here when you are walking, there is nobody else. So that
social isolation has a barrier (P8).

One resident acknowledged that being by himself was hard "especially in a small place" like a
rural Appalachian town. He added, "What do you do whenever you are done with all the
movies? [….] Not much, read, watch TV." He added that he would visit others' houses when
invited, "but that is not frequently" (P9).

Aggravating social isolation is cultural and racial isolation. "There is only one family
I know in [town] that is from my country," according to one woman from Pakistan. She
added that such isolation can lead to depression, thereby affecting an IMG's "ability to
perform" as a doctor (P7). For a few other IMGs who had already been living in the United
States, cultural isolation in their new Appalachian community was still anxiety-provoking:
"It was not that difficult in New York because the culture is such a mixed culture [….] So it’s
easier to blend in." This IMG revealed that it is a "big challenge" not interacting with
relatives or "anybody of [a] similar culture or background. […] It’s hard from time to time; I
feel like it has taken me a year to […] feel settled here." (P1)

Second, language barriers emerged as another source of anxiety, particularly when
interacting with patients and colleagues; however, these language barriers overwhelmingly
centered on the use of colloquialisms. When first entering the host community, adjusting to
the regional dialect apparently presented interpersonal challenges for some IMGs. As one
man explained, "at first it was just nasty. They [patients] just didn’t like me and I of course
didn’t like them because I could not communicate" (P9). Unique cultural expressions
complicate the already complex doctor-patient relationship, requiring both communication
parties to negotiate even harder to achieve understanding. For example, one man recounted a
previous conversation with a female patient who asked about her test results:

"how are my testes?" [the patient asked]. I say, "what is that?" [….] "No, no you
mean, 'what are my test - what is the report?'" [….] “you want to know about your
test, not testes, because you are a female.” (P4)

Whether their patients are making broad cultural references to "Jenny Craig" (P11) or
regional references about "mamaw" and "papaw" (i.e., grandmother and grandfather; P10),
these expressions can affect the flow of the doctor-patient interaction. Language barriers
sometimes extended into the IMGs' interactions with colleagues. Culturally specific
expressions like "Oh my God" (P11) and humor can increase an IMG's sense of social
isolation. One woman admitted, for instance, that "it’s hard for me to joke in English"
because it is not her primary language (P7). Moreover, organizationally specific language,
such as "night floats" (P5) or "sharps" (P12), may not translate, leaving them concerned that
their new colleagues think of them as "lazy or stupid" when the IMGs are just confused (P5).
Yet most insisted that with time, patience, experience and concentration, they eventually
adapt to linguistic changes.

A third source of anxiety IMGs faced was shifting (and usually unspoken)
communication norms. For example, appropriate greeting behaviors can differ across
cultures: "a lot of the things were very new for me [….] you are walking by a street and
some one goes by and you have to say hello and hi to everyone," adding, "that was a strange
thing for me [….] I’m not use to that" (P7). As one IMG from Denmark said, "it is so
different from our culture, so different. I mean everything's different" (P5). International
medical residents can experience changes in touch behaviors (P6) as well as changes in
accepted notions about how to communicate with those in authority:

I am not very comfortable sitting and chatting with them [her supervisors]…but I am
much better than I was before. We have a lot of hierarchy in India [….] but here I see
people are very friendly and very casual (P3)

Changing communication norms result in changes in doctor-patient interactions. A
majority of IMGs had to adjust to more demanding, educated, and assertive patients in
their host culture. One man reported that because patient education and knowledge were greater in
the United States, "it makes a difference. They know lots of things, and they ask you more
questions and you have to answer" (P6). Perhaps as a consequence of the trend toward the
active patient in the U.S. (Parrott & Condit, 1996), IMGs often were frustrated about patients'
demand for medications: "at least half of my patients receive narcotics. I cannot make them
stop" (P9). Similarly, another IMG said, "The majority of them want some sort of medicine"
(P1). Finally, while communication—or lack thereof—can aggravate an already anxiety-
ridden adaptation process, communication can also alleviate anxiety and frustration. For
example, the vast majority of IMGs emphasized the importance of experiential learning in
helping them adjust to new cultural norms, expectations, and practices. That is, they learned
by interacting with their patients, friends, and colleagues. As one IMG stated, "the more you
see patients, the more you get comfortable and the more you feel okay" (P12). Whether it
was practicing English with neighbors (P11) or observing other residents' interactions with
patients (P5), IMGs sought information to help them counter their frustrations with cultural
adaptation.

I’ve heard the more you interact, the more it is easier for you to know how to interact
with the patients. [….] Otherwise if you are just in your community, you cannot
interact with other people. (P3).

Positive communication during their adjustments also seemed important for some. As one
resident said, "I remember my husband used to say, 'Now give yourself six months (to adjust
to the new culture) [….] 'It happens and it happened to me too, you know'" (P11). Such

reinforcing messages, whether from colleagues, supervisors, or relatives, may be critical to
remind IMGs that cultural adaptation is possible.

VI. CONCLUSION

This study investigated communication's role in the cultural adaptation of IMGs.
Previous research has already documented how important communication is in health
interactions (e.g., Babrow et al., 2000) and cross-cultural interactions (e.g., see Gudykunst,
2003). Essentially, then, communication is both the cause and cure of the anxiety involved in
adapting to a new culture. Findings of this study suggested a powerful barrier that many
IMGs face: social isolation. In addition to facing linguistic challenges and shifting
communication norms, lack of communication (both with members of the host and native
culture) can undermine their ability to adapt. Underscoring this point is the finding that
experiential learning is a powerful tool for adjusting: IMGs need to be around other people,
to directly interact, and to observe interactions within the host culture, all in an effort to ease
the adjustment process. While linguistic differences are certainly a profound source of
anxiety, there are other underlying communication-related sources of anxiety. Language is
the manifestation of culture; however, it is "not the main feature of belonging to an ethnic
group." We must also focus on cultures' latent features, those "consequences of differences
in that set of values, norms, attitudes, and expectations" (Bruijnzeels & Visser, 2005, p. 151).
By only focusing on language, we miss something of significance: those unspoken
differences that guide our intra-cultural and cross-cultural communication. Of course, some
researchers are already investigating how to help IMGs adjust to new communication
expectations (e.g., Majumdar et al., 1999), but the research is still emerging, and therefore,
limited. There is no indication that the internationalization of the United States health care
system will lessen, presenting exciting opportunities for additional research and intervention
in this area. Communication will continue to be a source of anxiety to anyone brave enough
to venture into cross-cultural interactions; however, by identifying effective communication
tools, we can turn fear into hope, frustration into confidence, and anxiety into excitement.

REFERENCES

Babrow, A.S., Hines, S.C., & Kasch, C.R. "Managing Uncertainty in Illness Explanation: An
Application of Problematic Integration Theory." In B. Whaley (Ed.). Explaining
Illness: Research, Theory, and Strategies. (pp. 41-68). Mahwah, NJ: Lawrence
Erlbaum Associates, 2000.
Berg, B. Qualitative Research Methods for the Social Sciences. Boston, MA: Allyn and
Bacon, 2004.
Blumenthal, D. "New Steam from an Old Cauldron: The Physician Supply Debate." The
New England Journal of Medicine, 350 (17), 2005, 1810-1818.
Bruijnzeels, M., & Visser, A. "Editorial: Intercultural Doctor-Patient Relational Outcomes
Need More to be Studied." Patient Education and Counseling, (57), 2005, 151-152.
Davis, C.R. "Helping International Nurses Adjust." Nursing, (33), 6, 2003, 86-87.
Gudykunst, W.B. Bridging Differences: Effective Intergroup Communication.
Thousand Oaks: Sage, 2003.
Johnson, P., Lindsey, A.E., & Zakahi, W.R. "Anglo American, Hispanic American, Chilean,
Mexican and Spanish Perceptions of Competent Communication in Initial
Interaction." Communication Research Reports, (18), 1, 2001, 36-43.

INTEGRATED SOCIAL MARKETING & VISUAL MESSAGES OF BREAST
CANCER INFORMATION TO AFRICAN AMERICAN WOMEN

S. Diane Mc Farland, Ph.D. Buffalo State, SUNY


dianemac@buffnet.net

ABSTRACT

“Images have meanings, and those meanings are not fixed. The ways in which
images are made, used and viewed all have an effect on their meanings” (Nelson, 2005, p.
2). This paper explores the use of visual breast cancer message symbols and how these
communicate to African American women. This research uses the AIDA
marketing model within the social semiotic context to explore how African American
women interpret the visual signs commonly used to promote information and awareness
about breast cancer. To capture the attention of the intended audience, marketers must
create Integrated Social Marketing Communication campaigns that first have visual
meaning to that audience. This paper concludes that the AIDA model is a useful
template for conducting social semiotic analysis to achieve the goal of producing culturally
targeted materials to promote awareness and understanding of health information.

I. INTRODUCTION

Two significant articles have reviewed the development of the pink ribbon campaign
and how its development has led to the destigmatization of breast cancer in the media.
The first article reviewed the use of the pink ribbon campaign, finding that it has created
an aura where “the ribbon, pink, round, feminine and innocent, is an advertisement for
both grassroots and corporate activism and the philanthropist as ideal citizen" (King, 2004,
p. 488). The whole month of October is now given over to publicizing breast cancer and
the ubiquitous pink ribbon appears in magazines, in out-of-home advertising, and in
television programs; as noted in Fran Visco's speech to the National Breast Cancer Coalition
(May 22, 2005), even the U.S. Congress got involved and passed a bill to light up the St.
Louis Arch in pink for breast cancer awareness month.

The second article investigating the pink ribbon campaign is Milden's (2005), which
discusses how far public openness about breast cancer has come, from the late 1970s when
women kept breast cancer a secret, to today when there are races and corporate sponsors
publicizing breast cancer survivor events. She evaluates two Race for the Cure events and
offers opinions---both hers, as a woman who has had breast cancer, and others---regarding
this transformation from a private disease to a public one. Fundamental to this
transformation is the use of the pink ribbon campaign. During her evaluation she notes
that for the most part the events are filled with European-American women with very few
African-American women present.

African American (AA) women have a lower incidence of breast cancer, but a
significantly higher mortality rate, than European American (EA) women. Marbella and
Layde (2001) examined breast cancer data by age from 1981 to 1996, finding that AA
women in all age groups consistently had higher mortality rates than EA women. Coward
concluded that AA women need to develop “the belief that one has choices and can take
action” (2005, p. 265)---that is, self-agency---and that to do this there need to be
interpersonal, nontraditional interventions to empower and encourage AA women.

II. SEMIOTICS

Semiotics is a research analysis method---a science---that studies the language-like
codes or signs “which establish objects such as texts, buildings, cars, and so forth as signs
having cultural meanings over and above their constructions and functions” (Bothamley,
1993, p. 481). Since the approach of semiotics encompasses the language, visual images and
music that people use to communicate and develop culture, Lawes (2002) suggests that
semiotics takes an outside-in approach to understanding perceptions. She proposes that
marketers use semiotics to analyze marketing communications, including advertising, and to
track changes in the customer base and in consumers in general. As noted by Watts,
“semiotics refers to anything that can or might express meaning” (2004, p. 386). Peircean
semiotics is particularly relevant here because of its unique focus on the role of the sign in
the ongoing process of creating shared meaning (Botan & Soto, 1998).

Anderson proffers that semiosis “occurs for the physicalist at the moment of neural
response, for the perceptionist at the formation of a perception, for the constructionist at
the moment the sign is ideologically positioned, and for the actionalist as the moment of
becoming in action” (1996, p. 62). He contends that the physicalist and the perceptionist are
“strongly dependent on the individual as the acting unit” (p. 62), while the constructionist
and the actionalist “see semiosis as a collective process in which the individual participates
but cannot achieve on his or her own” (p. 62). This paper proposes that semiotics is a
research method; thus, semiosis takes place with the interpreter---the individuals, those in
their interpersonal network, and their environment. As noted by Watts, “Umberto
Eco says people bring different codes to a given message and therefore interpret it in
different ways. Those codes have been assimilated and accepted as a result of social class,
education level, political ideology and historical experience” (2004, p. 387).

Combining semiotics with information processing as a way to understand visual
persuasion in advertising, Larsen, Luna and Peracchio suggest that “to the
extent that the attributions of those who view ads and those who create them are consistent
with each other, a visual language exists” (2004, p. 110). This is similar to the point
Anderson made in stating that “meaning is under local control---by defining the
semiotic object as the set of socially determined understandings in play for some material
foundation…within some cultural era or, more radically, some community of
understanding” (1996, p. 56). Nelson suggests that “images have meanings and these
meanings are not fixed” (2005, p. 2); in fact, the images have layers of meanings, where
“denotation is the first layer in which something is described and is fairly easy to decode”
(p. 2). These can be further decoded into a second layer, which refers to structured
messages developed to send particular meanings to the receiver. These
connotative meanings can be further “broken down into synecdochal, where the sign is
part of something representing the whole, or metonymic, where something is associated
with something else and represents that something else” (p. 2).

Watts (2004) has proposed that “semiotics is an approach, a way of looking at
meaning” (p. 385) and “is not a defined system which can be applied in black and white
terms to a problem” (p. 385). He has proposed using the marketing model AIDA
(Attention, Interest, Desire, Action) to “give shape and form to semiotic analysis” (p. 385).
In his work he interprets the cover of an annual report to analyze the visuals, which he
contends must “speak to the viewer as loudly as the written word…[because] If the visuals
do not impart meaning at the attention stage the viewer will not even read the text” (p.
393). The process proposed by Watts will be used in this analysis because if AA
women are not attracted to the visuals they will not receive the message. However,
instead of the interpreter being this author, the AA women will interpret what they find as
meaning. This research will focus in particular on how attention might lead to action—in
this case, reading the material presented.

III. RESEARCH RESULTS

This semiotic analysis first examines the denotative meanings identified by the
respondents for the images presented; these meanings are then explored for the
connotative meanings that come from the culture of the interpretant—the AA women. This
research uses depth interviews to obtain the opinions of the AA women. Depth interviews
were chosen because of the sensitivity of the topic and the ability of this method to probe
for meaning. Respondents ranged in age from their early 20s to early 50s and were drawn
from a sample demographically representative, in its range of education and income, of
AA women from a large Northcoast DMA.

During the depth interviews the AA women were asked to participate in
a “word game” in which they were to state the first thing that came to mind after being
shown an object, hearing a word, or viewing a photograph or pamphlet. The respondents
were shown a pink ribbon pin and asked to give the researcher their first impression. None
of the respondents recognized pink as associated with breast cancer. While the women did
think the pin had something to do with cancer, they more often thought it had something to
do with the AIDS campaign. Typical of the responses were:
1. “I know some people have pins that’s for cancer or aids or and I’m not sure what
the pink is for. I’m not familiar, that’s like my first impression. And, that’s a beautiful
pin. That’s my second impression”.
2. “When you think about it, just that the ribbon has been copied by so many people
at this point. That everybody has a ribbon in different colors and now everybody has a
band in a different color.”

The women were also shown a series of ten pamphlets produced by various
government and nonprofit medical agencies. The pamphlets all aimed to aid women in
understanding breast health; however, they had different designs, differing amounts of
information, and seemed to the researcher to be targeted to various audiences. The first
four were provided for this research by a county governmental agency. One was picked up
at a physician’s office, and the remaining five were obtained from a women’s clinic that
focused on women’s cancer and were specifically given out either individually or to
churches. This report focuses on two of the pamphlets distributed by the local county
governmental agency. These pamphlets are the most similar of the materials investigated
in both design and content, thus reducing potential threats to reliability and validity.

The first pamphlet (P1) examined was 5” x 8”, printed on recycled paper, with a
green cover depicting a cartoon of a woman doing a breast self-exam in a mirror. The
cover typeface was in a modified cartoon style, part in smaller black type (which read
“What every woman should know”) and part in larger white type (which read “about breast
health”). All illustrations in the pamphlet were drawn in thin line art, and the pamphlet
contained more information than the others; however, the women were not attracted to this
16-page pamphlet, as typical responses suggest:
1. “…I don’t know if it’s interesting. It almost looks like it’s a little cartoon so that it
is going to attract people who might not read a lot. It almost look kid like to me…”
2. “I think they are just lines that are kind of colored in but it doesn’t relate to one
thing or another…it’s also the blandness….i think they are more likely to be thrown out
and not paid attention to. Where as the one that is more colorful have real people or the
drawings that distinguish and look like real people it’s more likely like someone will look
through”.

The second pamphlet (P2) was the same size as the first (5” x 8”) but is in color,
printed on slick glossy paper, with a cover photograph of an EA woman viewed from the
back but obviously doing a breast self-exam. Again this is a 16-page pamphlet, with a
mixture of photography and realistically drawn line art. This is the one pamphlet with a
pink ribbon, which appears on the back cover. The cover also has a headline, “Women”,
approximately 1.25 inches in height and in a gold color, with “and Breast Health” in
smaller white type. There are also six other phrases in smaller type, each in a different
color. Additionally, there is an index on the inside front cover, which made one woman
respond, when comparing the two, that with an index “even a small booklet could look like
it has chapters”. When comparing P1 and P2, another woman said she would pick up
P2 if both were lying on a table “because I don’t like the texture of pamphlet #1, I don’t
like the papery feeling, and it’s more appealing to the eye to pick up than #1”. The typical
response to P2 was:
“I think this is more knowledgeable than the first one you had. It [has] more colors,
more pulling you…Any type of thing that has color, it’s an eye catcher that one [P1] is
more dull, this one is more eye catching it’s telling you more things…I would pick this up
before I would pick up the other one…it’s saying more for 40 and older though. It’s giving
you a least an age group”
[note: there was nothing specifying age on P2, just the photograph of the woman]

As noted by Watts, “the visual must speak to the viewer as loudly as the written word.
If the visuals do not impart meaning at the attention stage the viewer will not even read the
text” (2004, p. 393). One respondent eloquently validated this when describing P1 and P2:
“Once it (P2) got my attention then it made me want to know about it. Compared to
the other one, that would be the one I’d read first…The other pamphlet (P1) would have
been the last thing I would have looked at”.

IV. CONCLUSION

This paper has used the AIDA marketing model within the social semiotic context to
explore how AA women interpret the visual signs commonly used to promote information
and awareness about breast cancer. First, the research above discussed the denotative
meanings for the image of the pink ribbon. This research contradicted findings reported by
Frisby (2002), whose respondents could freely remember the pink ribbon campaigns even
though less than 2% of the AA women in her study understood a single risk factor
associated with breast cancer. In the present study the respondents did not specifically
identify the pink ribbon with breast cancer, associating it with AIDS more frequently.
Obviously, further research is required to determine the reasons for these differences. If, as
proposed by King (2004) and Milden (2005), the pink ribbon campaign is being confounded
by all the other ribbon campaigns, then perhaps different symbols should be investigated.
Next, this paper evaluated the connotative meanings that come from the culture of the
respondent. There are layers of meaning within the connotative meanings: first, conception
of self (what did they think; were they attracted to it); second, evaluation of the type of
pamphlet (was it of interest, and was it targeted to them); third, was it credible; fourth,
would others read it. When exploring the women’s reactions to the pamphlets, color,
photography, and glossy paper attracted their attention. The women found P2 more
credible and said they would read it, while P1’s recycled paper, which has a less expensive
feel, and its cartoonish line art detracted from the attention the women would have given it.
Even after spending time with P1 during the research, the women concluded that this
pamphlet had less information than P2, when in fact it had more.

This paper has explored the use of visual breast cancer message symbols and how
these communicate to AA women. Further, this paper has explored AA women’s semiosis
of two pamphlets typically distributed by governmental agencies, finding that visual
graphics, use of color, and finish of the paper determine the attention given to the pamphlet,
and thus the action that will be taken. This research has shown that the AIDA marketing
model is a useful template for conducting a social semiotic analysis of culturally targeted
materials, specifically with AA women. To capture the attention of the intended audience,
marketers must create Integrated Social Marketing Communication campaigns that first
have visual meaning to that audience. This inclusion of the audience in social semiotics has
given rise to three important principles that are significant in the communication process:
[1] “All people see the world through signs; [2] The meaning of signs is created by people
and does not exist separately from them and the life of their social/cultural community; [3]
Semiotics systems provide people with a variety of resources for making meaning”
(Harrison, 2003, p. 48). These three principles are important when choosing a sign to
communicate with an audience. Social marketers must understand how the audience
perceives the signs/symbols, as well as the text, when developing the communication
message. Signs/symbols have different connotations within different ethnic and cultural
groups. This is particularly true of messages related to breast cancer; indeed, Day (2003)
questions whether the pink ribbon campaign is reaching the women most at risk from the
disease.

REFERENCES

Anderson, J.A. (1996). Communication theory: Epistemological foundations. New York,
NY: Guilford Press.
Botan, C.H., & Soto, F. (1998). A semiotic approach to the internal functioning of publics:
Implications for strategic communication and public relations. Public Relations
Review, 24(1), 21-45.
Bothamley, J. (1993). Dictionary of theories. London: Gale Research International Ltd.
Coombes, R. (2004). Ribbon development: With so many now to choose from, do ribbons
make people more disease aware---or just more confused? Student British Medical
Journal, 12, 169.

DTCA: HEALTH COMMUNICATION OR CAPITALISTIC PERSUASION

Amber Phillips, East Tennessee State University


zanp17@imail.etsu.edu

ABSTRACT

Direct-to-consumer advertisements (DTCA) have become an increasingly prominent
issue in health communication. The purpose of this study was to examine current research
on DTCA, identify the main advantages and disadvantages found in the research, and
discuss stakeholder issues and future directions for research. Through a review of the
literature, three advantages and four disadvantages were identified and discussed. Changes
in methodology as well as future directions for research were suggested based upon
findings in the literature.

I. INTRODUCTION

Direct-to-consumer advertising (DTCA) has faced many opponents as well as
supporters since it first became a large industry, with billions of dollars spent each year
since the early 1990s to produce it. In recent years prescription drug manufacturers have
been changing how prescription drugs are marketed through DTCA. Before the 1980s,
prescription drugs were marketed only toward physicians and other healthcare providers.
Since then, a firestorm has raged over the intention of DTCA. According to research in the
U.S., DTCA has a major impact on public awareness of prescription drugs. The
pharmaceutical industry spends more than $2 billion annually to promote consumer desire
for products and to increase market share (Rosenthal et al., 2002). According to surveys,
over 90 percent of the public report seeing prescription drug advertisements (Frank et al.,
2002). Also, an estimated 8.5 million consumers annually have requested and received
prescriptions from their physicians in response to DTCA (Heinrich, 2002). Pharmaceutical
companies have profited heavily from the use of DTCA (Frank et al., 2002). The aim of
this project is to examine the literature on DTCA, specifically focusing on the benefits and
costs. The stakeholders surrounding this issue will also be examined, and future directions
and limitations will be discussed.

II. HISTORY OF DTCA

The development of direct-to-consumer advertising in the pharmaceutical industry
was gradual. Many believe that DTCA was inevitable given the culture of the 1980s, the
beginning of the boom (Bell et al., 2000; Hollon, 1999). The pharmaceutical industry
originally targeted doctors in its advertisements, which did indeed prove profitable.
However, the industry came under fire from authorities for allegedly bribing physicians
with vacations, cars and even homes (Hollon, 1999). Changes began in the 1980s with an
advertisement for Rufen®, a branded generic of ibuprofen directed to the consumer (Pines,
1999). In 1997, the FDA made further changes to DTCA by lifting restrictions on the
disclosure of safety information, which opened the way for television and radio
advertisements to accompany print ads (Hayes, 1998). Direct-to-consumer advertisements
have been increasing gradually since they were first utilized; however, in 1997 the use of
DTCA increased dramatically. Nearly $2 billion was spent on direct-to-consumer
advertising in 1999, generating an estimated $9 billion in product sales. Expenditures for
2003 rose to $3.27 billion in the United States (Schommer et al., 2004).

These large numbers have sparked investigation on the advantages and disadvantages of the
practice and what effects they have on the stakeholders involved.

III. ADVANTAGES OF DTCA

The literature yielded three main advantages of the direct-to-consumer advertising of
prescription drugs. First, supporters of DTCA state that it is an important tool for the
empowered patient. Researchers argue that many consumers receive important health
information through the advertising they see on television (Holmer, 2002), and researchers
have stated that DTCA improves doctor-patient communication (Lewis, 2003). DTCA
always encourages health consumers to “ask their doctor” about a certain medication. This
may prompt patients to ask their healthcare provider questions about a certain drug or
condition, including symptoms that the patient may not have recognized had they not seen
them described in the advertisement (Lewis, 2003).

It has also been suggested that DTCA facilitates communication between physician
and patient about embarrassing or stigmatized topics such as a sexually transmitted disease
or erectile dysfunction (Lewis, 2003). Researchers also declare that DTCA provides
consumers with information about treatment options and other important health information
that they would otherwise not receive (Alperstein and Peyrot, 1993). For example, DTCA
increases awareness and treatment of diseases such as diabetes and hypertension,
disseminating important information that many people would not receive otherwise
(Holmer, 1999). Researchers also claim that DTCA gives consumers information about
alternatives to current treatments that may be ineffective or problematic (Gold, 2003). It is
argued that without direct-to-consumer advertising, people may not be aware that a certain
medication exists for their condition or that a disease is treatable.

Second, supporters argue that DTCA improves the physician-patient relationship.
According to findings by Murray et al. (2004), of the 266 patients who responded to the
survey, 51 argued that it improved the physician-patient relationship. Murray et al. (2004)
conducted another survey, this time with physicians. In this study, physicians believed that
patients talking about information from a drug advertisement had more of a positive than a
negative effect on the doctor-patient relationship. Patients and physicians alike believe
that DTCA has a positive impact on the physician-patient relationship (Murray et al., 2004).
However, they also state that certain conditions must be present for this to be so: the
patient must act as an empowered patient, the DTCA conversation must deal with a
legitimate concern of the patient, and the physician must not feel threatened or feel that his
or her opinion is being belittled (2004).

Third, supporters of DTCA also believe that although a great deal of money is spent
on the advertising of pharmaceuticals, it actually lowers healthcare costs (Holmer, 2002).
Holmer (2002) argues that since DTCA facilitates physician-patient communication, more
tests will be requested and performed. These early detection measures will ultimately
decrease healthcare costs for the individual, because the earlier a disease is diagnosed, the
less invasive and less expensive its treatment will be (2002). It may also decrease insurance
co-pays without decreasing the quality of care one receives (Schommer et al., 2004). For
instance, if DTCA prompts early detection, physicians will be able to treat more patients
successfully. This would also benefit the patient, as early detection is known to increase the
likelihood of survival or recovery if a condition is found (Holmer, 2002). This could be an
important tool for patients as well as those who treat them.

IV. DISADVANTAGES TO DTCA

The literature yielded four major disadvantages of the direct-to-consumer advertising
of prescription drugs. First, opponents of DTCA claim that it does not improve the
physician-patient relationship. In a study mentioned earlier, Gilbody, Wilson, and Watt
(2005) state that the literature of that time did not support the claim that DTCA was
beneficial to the physician-patient relationship. They went on to add that the opposite could
in fact be the case, and that DTCA could actually be detrimental to the physician-patient
relationship. This could happen because DTCA may prompt the patient to pressure the
physician for the advertised medication (2005). Physicians surveyed by Murray et al.
(2004) who felt at all pressured to prescribe a certain prescription drug felt more negatively
about the physician-patient relationship. Opponents of DTCA state that the practice does
not improve the physician-patient relationship; it actually can hinder it (Gilbody, Wilson,
and Watt, 2005; Murray et al., 2002). Second, DTCA has been linked to an increase in the
number of prescriptions written (Lipsky and Taylor, 1997). Physicians who responded to
the Lipsky and Taylor (1997) survey suggested that more advertising leads to more requests
for advertised medicines and more prescriptions. They continue by saying that if DTCA
initiates a conversation, it is most likely to end with a prescription being written. According
to Lipsky and Taylor (1997), physicians overall felt negatively about advertisements in both
print and electronic media.

Third, Gilbody et al. (2005) state that DTCA is often misleading and that many
advertisements spend little time discussing harmful side effects. Researchers are concerned
that consumers are subjected to the proverbial hard sell from drug companies (Watson,
2002); in other words, people are drawn in by a company’s advertising techniques and
away from the possible problems with the product. DTCA may also mislead consumers
into believing that they are suffering from a condition that can only be treated by these
pharmaceuticals, even if they are asymptomatic (Gold, 2003). Fourth, DTCA increases the
cost of doctor and hospital visits because of requests for unnecessary tests and drugs
(Woloshin et al., 2001). Also, doctors report that they prescribe the most popular drugs to
their patients (Gold, 2003) and feel pressure to prescribe specific brands (Lewis, 2003).
This can also drive up costs and cause many medical problems. Opponents state that DTCA
actually increases the cost of a doctor’s visit due to the patient’s requests for unnecessary
tests (Woloshin et al., 2001; Lewis, 2003). They also state that DTCA is beneficial only to
the pharmaceutical companies and has no value to the health consumer (Lewis, 2003).

V. DISCUSSION

This discussion of the literature will first focus on what the literature reveals about
these effects, including some important areas of research. The next section will address the
limitations of the literature reviewed as well as methodological challenges surrounding
DTCA. The discussion will conclude with future directions for researchers in the field.
Through analysis of the research, certain conclusions can be drawn. First, current research
in this field seems divided among the experts; there is not enough compelling evidence one
way or the other regarding the benefits and harms of DTCA. Next, it is important to study
the short- and long-term effects of DTCA, both of which matter to healthcare and the health
of the public. Also, through analyzing the research, many important areas of interest for
DTCA researchers emerged and should be addressed. Research regarding DTCA is divided
among the experts and researchers interested in the field. As shown earlier, many
researchers believe the opposite of others regarding DTCA (Frank et al., 2002; Gold, 2003;
Hansen et al., 2002). Researchers are on the fence regarding important issues, including
DTC advertisements’ effects on the doctor-patient relationship and communication
(Gilbody et al., 2005; Gold, 2003; Murray et al., 2004) as well as its effects on the overall
costs of healthcare (Lewis, 2003).

Both the short- and long-term effects of DTCA are important to future research on
this topic. Short-term effects are especially important at this time because of the newness
of the practice: having been widespread only since 1997 (Schommer et al., 2004), DTCA’s
importance is likely to continue to grow. Short-term effects could help researchers monitor
DTCA and make predictions about the long-term effects of the advertisements on important
issues such as doctor-patient communication and overall healthcare costs. It is also
important, however, to study the short-term effects of exposure on health communication as
a whole, especially between the patient and healthcare provider. This research matters
because of the amount of money involved in the practice of targeting prescription drugs to
consumers. As stated earlier, expenditures for 2003 rose to $3.27 billion in the United
States (Schommer et al., 2004). This is a large and very lucrative industry, and research
findings that sway opinion could change the face of this practice forever.

There were also certain limitations of the research that must be addressed, along with
problems regarding the methodology of DTCA studies. Almost all of the literature
examined here was based on data collected from self-report surveys targeted at either
healthcare providers or healthcare consumers (Bell et al., 1999; Gold, 2003; Kopp and
Sheffet, 1997; Lewis, 2003; Watson, 2002; Weissman et al., 2004). Self-report surveys are
very helpful for capturing the opinions and perceptions of participants; however, they can
be problematic, as they are not the most valid or reliable way to collect data. Subjects have
sometimes been found to be dishonest on surveys for a number of reasons: they may be
dishonest out of fear of not fitting in with the rest of the participants; they may try to guess
what the researcher is looking for and answer the questionnaire according to what they
think the researcher wants to find; or they may try to fool the researcher or simply rush
through the survey in order to finish. For these reasons, self-report surveys can be
problematic for researchers.

In most DTCA research, the sample is not fully representative of the population. The
populations in most of the studies making up this literature review were mostly Caucasian
(Gilbody et al., 2005; Weissman et al., 2004), without a representative number of
respondents of other ethnicities. Looking at mostly Caucasian populations gives a skewed
picture of patients’ perceptions of DTCA, because not all patients are Caucasian; likewise,
not all physicians are Caucasian. It is important to see how participants from other ethnic
backgrounds view DTCA as well. Also, most DTCA itself is not diverse. Most
advertisements are tailored to a Caucasian audience; recently, however, a number of drug
campaigns have begun advertising to other ethnic groups as well. Diflucan and Valtrex are
two of the top drugs whose advertisements are beginning to include members of other
ethnic groups, and male enhancement drugs as well as birth control products are adding
diversity to their ads.

It is important to realize that DTCA never acts alone. Growing research shows that
other factors are involved in DTCA’s effects on the doctor-patient relationship, prescribing
habits and use, and increases in healthcare costs (Lewis, 2003; Holmer, 2003; Schommer,
2004). For instance, the patient needs to be proactive about his or her healthcare, including
information-seeking, and to have insurance or a regular doctor. Controlling for these
variables in DTCA research is important; however research is conducted, it is important to
at least keep these impacting variables in mind. Another important variable to consider is
the advertised drug itself. Characteristics of the drug are important to focus on, such as
what the drug treats, seasonal variations in its use, what kind of illness it treats, and for how
long (Hansen et al., 2005). Other important aspects to consider include exposure to the ad,
processing and communication effects, target audience action, sales and market share, and
the profit being made from the drug (2005).

VI. CONCLUSION

Through the analysis of the literature, some future directions for research can be
identified. First, when conducting DTCA research, self-report can be helpful; however,
implementing other methods alongside it could prove fruitful. An experimental element is
useful when studying the effects of DTCA and may improve the validity of the self-report
survey. For example, using self-report along with observation of doctor-patient
communication could tell the researcher more than the self-report survey alone, helping to
increase reliability and validity. Proponents argue that the benefits of DTCA far outweigh
what the opposition sees as costs. DTCA, which is currently allowed only in the U.S. and
New Zealand, is spreading, and many other countries are considering allowing
pharmaceutical companies to begin DTCA (Lewis, 2003). Lobbyists in Canada have also
made a case for the implementation of DTCA, and that country may soon allow companies
to begin the practice. The imminent spread of DTCA is another reason for more research.
DTCA plays a major role in healthcare today and should be addressed; no matter the
outcome, billions of dollars and many lives are at stake in this battle. It is the duty of the
researcher to search for what is in the public’s best interest. With the increase in healthcare
costs and the increase in prescription use, DTCA has become a larger force in the fight for
the health and well-being of the public.

REFERENCES

Bell, R. A., Kravitz, R. L., and Wilkes, M. S. “Direct-to-consumer prescription drug
advertising, 1989-1998.” Journal of Family Practice, 49, 2000, 329-335.
Frank, R. G., Berndt, E. R., Donohue, J. M., Epstein, A. M., and Rosenthal, M. B. Trends in
Direct-to-Consumer Advertising of Prescription Drugs. Menlo Park: Kaiser Family
Foundation, 2002.
Gold, J. L. “Paternalistic or Protective? Freedom of Expression and Direct-to-Consumer
Drug Advertising Policy in Canada.” Health Law Review, 11(2), 2003, 30-38.
Hansen, R. and Droege, M. “Methodological challenges surrounding direct-to-consumer
advertising research: The measurement conundrum.” Research in Social and
Administrative Pharmacy, 1, 2005, 331-347.
Healthy People 2010. Objectives. 2005. Accessed on September 30, 2005. Available at:
http://www.healthpeople.gov/Document/HTML/volume1/11HealthCom.htm.

INTEGRATIVE THEORY & COLLECTIVE EFFICACY: PREDICTORS OF
INTENT TO PARTICIPATE IN A NONVIOLENCE CAMPAIGN

Bumsub Jin, University of Florida
bumsub@ufl.edu

Charles A. Lubbers, University of South Dakota
clubbers@usd.edu

ABSTRACT

This study explored crucial predictors within the integrative theory and applied them
to an actual communication campaign for preventing violence on campus. By exploring an
additional predictor of behavior change, collective-efficacy, this study examined how that
predictor leads to individuals’ intentions to perform a particular behavior. Results showed
that college students’ attitudes and perceived norms significantly predicted intentions to
attend the campaign programs, with attitude having the more salient influence on intentions.
However, self-efficacy did not significantly predict the intentions. Perceived collective-
efficacy did not significantly increase its unique variance in intentions, even though it
predicted the intentions on its own.

I. INTRODUCTION AND REVIEW OF LITERATURE

Public communication campaigns have played a strategic role in forming beliefs,
attitudes, or norms and augmenting knowledge, thus leading to personal and social behavior
changes. Because of the salient effectiveness of public communication campaigns to create
behavior change, campaigners need to understand what factors lead individuals to pay
attention to the campaigns. That is, by identifying the predictors of an actual behavior or
behavioral intention of interest, campaign designers can develop a behavior change program.
The Predictors of an Integrated Theoretical Model and Another Predictor. Many researchers
have strived to articulate the predictors of human behaviors, including health-related
behavior, using various theoretical perspectives. Applying theories to behavioral phenomena
provides communication practitioners with insights into how an effective health message can
be designed for prevention programs or campaigns. Despite various theories that explain
behaviors, Fishbein et al. (2002) noted that only a few variables in these theories can play a
crucial role in predicting and understanding any health behavior. The three theories are 1)
the health belief model (Rosenstock, 1974), 2) social cognitive theory (Bandura, 1977, 1991),
and 3) the theory of reasoned action (TRA; Fishbein & Ajzen, 1975) and its successor, the
theory of planned behavior (TPB; Ajzen, 1991). On the basis of the three theories reviewed
in this study, Fishbein (2000) identified four determinants that can influence an individual’s
behavior: perceived risk (or susceptibility), attitude toward performing a given behavior,
perceived norms, and self-efficacy. A previous meta-analytic study showed that attitude,
norm, and self-efficacy serve as significant determinants in predicting a health
behavior (Sheeran, Abraham, & Orbell, 1999). Fishbein argued that only three determinants
should be proximal predictors of intention and behavior, but perceived risk should be a distal
predictor because of its inconsistent research findings (e.g., Gerrard, Gibbons, & Bushman,
1996). In other words, the perceived risk is not always predictive of a health behavior such as
precautionary sexual behavior.

The integrated theoretical model (Fishbein, 2000; Fishbein et al., 2002) explains how
a given behavior occurs. With necessary skills and no environmental constraints, an
individual’s strong intention to perform any given behavior is most likely to result in the
behavioral outcome. In turn, strong intention is predicted by the three primary variables (i.e.,
attitude, perceived norm, and self-efficacy). Furthermore, each of these three variables is
generated by three corresponding beliefs: behavioral, normative, and efficacy beliefs.
Emphasis in this model is placed on developing appropriate types of prevention programs or
campaigns based on the strength of each variable. If health-related attitudinal variables are most
salient in individual perception, for instance, campaigners must center on attitude-related
messages. They should attempt to augment individual intention, which is a significantly
powerful predictor of a behavior, by selecting appropriate messages based on this theoretical
background. This study proposes that collective efficacy should play a role in predicting a
behavior in a group needing collective actions and voices. Collective efficacy, defined by
Bandura (1995), means “people’s beliefs in their joint capabilities to forge divergent self-
interests into a shared agenda, to enlist supporters and resources for collective action, to
devise effective strategies and to execute them successfully, and to withstand forcible
opposition and discouraging setback” (p. 33). Bandura (1995) has highlighted the increasing
importance of collective efficacy in our society, where individuals are more mutually
dependent on one another than ever.

The strength of collective efficacy lies in its ability to predict the behavior change of
an individual who perceives a greater likelihood of “sharing” efforts with others, in addition
to individual effort. Perceived collective efficacy may be exerted more strongly among members
of one’s own social network group. Campo et al. (2003) argued that college students are
affected more by their own social network members, rather than “perceiving the typical
college student as being within their social network” (p. 486). This study addresses the
following research questions and hypothesis:
RQ1: To what degree do the three major determinants (attitude, perceived norm, and self-
efficacy) predict individual intentions to attend campaign programs for nonviolence
on campus?
H1: College students’ perceived collective-efficacy will positively predict their intention
to attend campaign programs for nonviolence on campus.
RQ2: If their collective-efficacy positively predicts the intention, to what degree does
collective-efficacy increase its unique variance in the intentions beyond the three
major determinants?

II. METHODS

Participants and campaign programs for nonviolence on campus. Participants in this study
were 198 students: 135 students enrolled in one general education class and 63 students
enrolled in four laboratories of mass communication research at a Midwestern public
university. This study used a Midwestern public university’s actual campaign for
nonviolence because its goal is to reduce such problems as discrimination, harassment,
violence, and other abuses of power on campus. Research participants were shown lists of the
campaign events and they responded to questions based on the following measures.
Measurement. The questionnaire was designed to measure three variables grounded in the
integrated theory as well as perceived collective-efficacy. This study proposed predictors and
dependent variables based on Ajzen’s article (2002).
Exposure to campaigns for nonviolence on campus. This variable was measured by asking
one open-ended question about how many times respondents have attended campus campaign
programs during their college life. The programs were listed in the questionnaire to help
respondents understand.
Perceived susceptibility. Perceived susceptibility was measured with one item using a
1 to 7 Likert scale. The item asked: “How likely do I perceive myself to be a victim of
discrimination, harassment, violence or other abuses of power on our campus?”
Attitude. Attitude was measured with three items using a 1 to 7 semantic-differential scale.
Cronbach’s coefficient alpha was .875. The mean score for the three item scale was 4.190,
SD = 1.086.
Perceived norm. Respondents were asked to use a 7-point Likert scale to report their opinions
about whether 1) most people who are important to me, 2) people whose opinions I value,
and 3) most of the students in the college would value attendance at the programs of the
campaign for nonviolence. Reliability for this scale was low (α = .425). The mean score
for the three-item scale was 3.119, SD = 1.076.
Self-efficacy. Self-efficacy was measured by asking respondents to use a 7-point Likert scale
to report whether attending the programs was easy, up to them, and possible. Reliability for
this scale was also low (α = .485). The mean score for the three item scale was 4.119, SD =
1.076.
Collective-efficacy. A collective-efficacy scale for intention of attending the campaign
programs was created using three items with a 1 to 7 Likert scale. The three items concerned
whether the respondent, as a member of groups on campus, could attend the events in the
future, could help create a positive group environment through attendance, and would attend
because the group sticks together. Cronbach’s coefficient alpha was .833. The mean score
for the three-item scale was 4.315, SD = .968.
Intention to attend the campaign. Intention was assessed by three items asking respondents to
use a 7-point Likert scale to report whether attending the programs was expected of them,
whether they would make an effort to attend, and whether they would definitely attend.
Cronbach’s coefficient alpha was .768. The mean score for the three item scale was 2.719,
SD = 1.267.
Demographic items. Respondents were asked to provide their sex, college level, age, and
ethnicity.
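The scale reliabilities reported above (e.g., α = .875 for the attitude items) follow the standard Cronbach’s alpha formula. As a minimal sketch of how such a coefficient is computed from item-level responses, the following uses simulated data (the 198 × 3 response matrix is hypothetical, not the study’s data):

```python
# Sketch: Cronbach's alpha for a three-item, 7-point scale.
# The response data below are simulated for illustration only.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of item scores."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
base = rng.integers(1, 8, size=(198, 1))           # shared component across items
noise = rng.integers(-1, 2, size=(198, 3))         # item-level noise
items = np.clip(base + noise, 1, 7).astype(float)  # three correlated 1-7 items
print(round(cronbach_alpha(items), 3))
```

Because the three simulated items share a common component, alpha comes out high; items with weaker inter-item correlations, like the perceived-norm scale here, yield much lower values.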

III. RESULTS

Descriptive analyses
Of the 198 respondents, males composed 32.3% (n = 64) of the respondents while
females composed 67.2% (n = 133); .5% (n = 1) did not indicate sex. The respondents’
average age was 20.21 years old (SD = 1.81). Grade levels were distributed among
freshmen (25.8%), sophomores (22.2%), juniors (28.8%), seniors (22.2%), and graduate
students (.5%). One respondent (.5%) did not indicate grade level. Their ethnicity was mostly
White
(88.4%). The average frequency of attending the campaign programs for preventing violence
on campus during college life was .38 times (SD = .93, min = 0, max = 6). Finally, the mean
point of the respondents’ perceived susceptibility was 2.68 (SD = 1.63) on a 1 (extremely
unlikely) to 7 (extremely likely) Likert scale. Cronbach’s alpha for perceived norm
(.425 to .453) and for self-efficacy (.485 to .661) increased when the second item of each
scale was deleted. However, this study used the original scales because the revised
reliabilities did not significantly affect the p-values of the variables.

Research questions and hypothesis testing


Based on the integrated theory, the first research question involved assessing three
proximal predictors of the intentions of participation in campaign programs for nonviolence
on campus. A multiple regression analysis was conducted to test three predictors within the
integrated theory, which excludes the actual behavior variable. The multiple regression
analysis 1 (Table 1) showed that attitude (β = .403, p < .001) and perceived norm (β = .355, p
< .001) were significantly associated with intentions, with attitude having more influence on
intentions. Self-efficacy, however, was not a significant predictor of intentions. These
variables accounted for 40.3% of total variance in intentions.

Table 1: Results of Regressions on Predictors


Variables B SE B β
Regression 1 (Intention)
Attitude .469 .077 .403***
Perceived norm .464 .079 .355***
Self-efficacy .012 .074 .010
Regression 2 (Intention)
Collective-efficacy .467 .088 .358***
***p < .001.

This study’s hypothesis, that college students’ perceived collective-efficacy will positively
predict their intention to attend campaign programs for nonviolence on campus, was supported.
Results of linear regression analysis 2 (Table 1) revealed that collective-efficacy was
positively associated with intentions (β = .358, p < .001). To determine
collective-efficacy’s unique variance in intentions to attend the campaigns among the three
major determinants, a hierarchical regression analysis was performed. Table 2 shows that the
regression model accounted for 41.1% of the total variance in the intention to attend the
campaigns, indicated by its total R2. The first block, containing the attitude (β = .403,
p < .001), perceived norm (β = .355, p < .001), and self-efficacy (β = .010, p = .874)
variables, contributed 40.3% of the explanatory power (F(3, 195) = 43.232, p < .001). However,
the second block, containing collective-efficacy (β = .117, p = .124), failed to significantly
increase the proportion of explained variance (F(4, 195) = 33.257, p = .124), adding
only .7% to the R2 explained.

Table 2: Hierarchical Regression on Intention of Attending the Campaign Programs


Predictors                         β (t)              R2        F for change
Block 1   Attitude                 .403 (6.123)***
          Perceived norm           .355 (5.874)***
          Self-efficacy            .010 (.158)        .403***   43.232
Block 2   Collective-Efficacy      .117 (1.547)       .411      2.393
Note. Values are standardized regression coefficients with t values in parentheses.
***p < .001.
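The procedure behind Table 2 (entering attitude, perceived norm, and self-efficacy in block 1, adding collective-efficacy in block 2, and testing the R² change with an F statistic) can be sketched as follows. The data are simulated under assumed effect sizes, so the resulting coefficients will not match the study’s:

```python
# Sketch: hierarchical OLS regression with an R-squared-change F test,
# mirroring the Table 2 procedure. Data are simulated, not the study's.
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an OLS fit of y on X (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(1)
n = 198
attitude = rng.normal(size=n)
norm = rng.normal(size=n)
self_eff = rng.normal(size=n)
coll_eff = 0.5 * attitude + rng.normal(size=n)      # correlated with attitude
intent = 0.4 * attitude + 0.35 * norm + rng.normal(size=n)

block1 = np.column_stack([attitude, norm, self_eff])
block2 = np.column_stack([attitude, norm, self_eff, coll_eff])
r2_1, r2_2 = r_squared(block1, intent), r_squared(block2, intent)

# F for the R^2 change when adding q = 1 predictor, p = 4 predictors total
q, p = 1, block2.shape[1]
f_change = ((r2_2 - r2_1) / q) / ((1 - r2_2) / (n - p - 1))
print(round(r2_2 - r2_1, 4), round(f_change, 3))
```

Because the simulated collective-efficacy variable overlaps with attitude, it adds little unique variance once block 1 is in the model, which is the same pattern the study reports.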

IV. CONCLUSION

The findings of this study suggest the importance of the attitude and perceived norm
variables for designing campaigns to prevent violence on campus. Because college students’
intentions to attend the campaigns were under attitudinal and normative control, campaign
designers should reinforce, modify, or change target audiences’ attitudes and norms with
strategic messages. The integrated theory posits that the attitudinal variable is predicted by
behavioral beliefs or outcome evaluations, so campaign messages need to include the positive
outcomes of action. If college students believe their participation in campaign programs will
produce actual benefits, they will have a better attitude toward the campaign.

Normative messages surrounding violence prevention also should be addressed in campaigns.
Perceived norm plays a crucial role in an individual’s behavior change. When college students
perceive that their significant others (e.g., family or friends) expect them to be involved in
violence prevention campaigns, they will be more likely to intend to do so. If students also
observe their significant others experiencing violence, they will be more likely to take an
interest in campaign programs. Campaigners need to target not only each college student but
also his or her influencers by using normative messages.

This study empirically revealed that perceived collective-efficacy did not significantly
increase its unique variance in intentions to attend campaigns. This result may stem from the
correlations between collective-efficacy and the other variables. However, there is still the
possibility that collective-efficacy operates in individuals’ intentions, given the
significant positive association between collective-efficacy and intentions. A college’s
collective sense of efficacy can create a positive atmosphere for sharing and resolving
problems related to its interests. This suggests that campaigners working on college community
issues can use strategic messages that invoke collective actions and voices.

Future studies should examine this model with other data sets and more reliable scales. For
example, the reliabilities of the measured constructs need to be stronger, and it would be
beneficial if actual behavior were measured. Participants also should be randomly selected
from all college members, including faculty, staff, and students, to examine how likely
college members are to intend to participate in campaigns for preventing violence on campus.
Moreover, collective-efficacy should be further tested in groups of interest with collective
actions and voices, and in individuals and communities who perceive a greater likelihood of
sharing efforts with others.

REFERENCES
Ajzen, Icek. “The theory of planned behavior.” Organizational Behavior and Human
Decision Processes, 50, 1991, 179-211.
Ajzen, Icek. “Constructing a TPB questionnaire: Conceptual and methodological
considerations.” 2002. Retrieved February 1, 2005, from
http://www.people.umass.edu/ajzen/pdf/tpb.measurement.pdf
Bandura, Albert. “Self-efficacy: Toward a unifying theory of behavioral change.”
Psychological Review, 84, 1977, 191-215.
Bandura, Albert. “Self-efficacy mechanism in physiological activation and health-promoting
behavior.” In J. Madden IV, ed., Neurobiology of Learning, Emotion and Affect. New
York: Raven, 1991, 229-269.
Bandura, Albert. “Exercise of personal and collective-efficacy.” In A. Bandura, ed., Self-
efficacy in Changing Societies. New York: Cambridge University Press, 1995, 1-45.
Campo, Shelly, Brossard, Dominique, Frazer, M. S., Marchell, Timothy, Lewis, Deborah,
and Talbot, Janis. “Are social norms campaigns really magic bullets? Assessing the
effects of students misperceptions on drinking behavior.” Health Communication,
15(4), 2003, 481-497.
Fishbein, Martin. “The role of theory in HIV prevention.” AIDS Care, 12(3), 2000, 273-278.

CHAPTER 13

HUMAN RESOURCE MANAGEMENT

JUSTICE OR EFFICIENCY: ABOUT ECONOMIC ANALYSIS OF LAW

Nuri Erisgin, Ankara University, Turkey
nerisgin@politics.ankara.edu.tr

Zulal S. Denaux, Valdosta State University
zsdenaux@valdosta.edu

Özlem S. Erisgin, Ankara University, Turkey
erisgin@law.ankara.edu.tr

ABSTRACT

For a long time, economics has been accepted as the most powerful non-legal tool for
analyzing a wide variety of legal rules. It has been claimed that legal rules must be designed
to maximize economic efficiency, that is, roughly speaking, to make the economic pie as large
as possible. This study, however, claims that legal rules should take into consideration not
only economic rules but also the political, historical, ideological, ethical and cultural
foundations of the society.

I. INTRODUCTION

It has been widely accepted that economics is among the non-legal factors that
influence the law the most. For a long time, it was believed that the law had a guiding role
in economic relations; the relationship between law and economics was used to describe legal
rules and to explain the behavior of explicit economic markets. However, starting in the
1960s, the relationship between law and economics was shaped in a very different way.
Until about 1960, economic analysis of the law consisted largely of the economic analysis of
antitrust laws. The records in antitrust cases provided very rich information about business
practices, which helped economists discover the economic implications of such practices.
Although these discoveries carried implications for the legal system, they merely tried to
explain the behavior of explicit economic markets (Posner, 1998, pp. 25-28). Basically, the
traditional analysis of law and economics appraised the effects of legal rules on the economic
system. Since the 1960s, the application of economics has extended its reach beyond antitrust
law to other fields of law: to the common law; to family law; to civil, criminal, and
administrative procedure; and so on. The contributions of Coase (1960), Calabresi (1970) and
Posner (1972) carried economic analysis to areas of law that were not limited to the
explanation of economic market behavior. The new (modern) law and economics approach mainly
focused on an “efficiency concept” initiated, among others, by Posner (1972). According to the
efficiency approach, legal rules either are, or ought to be, designed to maximize economic
efficiency, which means the maximization of social willingness-to-pay. In this approach, legal
rules are considered efficiency-generating instruments. The purpose of this study is
threefold. First, this study attempts to enrich understanding of the economic analysis of law
in the context of allocative efficiency. Second, it offers a critical view of the
efficiency-oriented approach from a legal perspective. Finally, it offers some general
conclusions regarding the application of law and economics analysis to legal and social
practices and the extent to which this approach has implications for the European legal
system.

II. HISTORY

The modern approach of law and economics first appeared with Ronald Coase’s
article on social cost and Guido Calabresi’s articles on torts and liability. With time, this
approach was carried into a broader context, applying economic analysis to other fields
of law; tort law, especially, has been greatly affected by economic models. Through Posner’s
influence on law and economics, the modern approach began to be evaluated as a general theory
of law as well as a conceptual tool for the improvement of legal practice. The Coase theorem
offered a framework for analyzing the effect of the initial assignment of property rights on
allocative efficiency. According to the Coase theorem, as long as transaction costs are zero
or negligible and property rights are well-defined, parties to a dispute achieve the same
efficient outcome (allocative efficiency) regardless of the initial assignment of legal
entitlements. In conclusion, Coase argued that, regardless of what the law says about who is
liable, economic efficiency dictates the outcome that provides the maximum value to the
parties involved in the negotiations. However, as noted by Coase, in the presence of positive
transaction costs the initial assignment of legal entitlements (property rights) does affect
allocative efficiency, and the law directly affects economic activity. "It would therefore
seem desirable that the courts should understand the economic consequences of their decisions
and should, insofar as this is possible without creating too much uncertainty about the legal
position itself, take these consequences into account when making their decision". Therefore,
legal issues, as Coase argued, could primarily be solved by choosing social arrangements so as
to minimize social costs, roughly defined as transaction costs plus allocative inefficiency.
Even though Coase recognized the existence of positive transaction costs, his analysis was
mainly based on the assumption of negligible transaction costs, which was extensively
criticized in the law and economics literature.

One of the important studies in the development of ‘law and economics’ came from
Calabresi (1961), who established the foundations of this approach. As mentioned above, the
framework established by Coase claimed that allocative efficiency was invariant to the initial
assignment of property rights in the case of zero transaction costs. As acknowledged by
Calabresi’s study, transaction costs are in reality not zero, which has serious implications
for reaching the efficient outcome (allocative efficiency). According to Calabresi, allocative
efficiency is obtained by assigning responsibility to the party who incurs the least cost
to avoid similar future injuries. Under the rule of the "cheapest cost avoider", the outcome
is efficient if the "duty of care" is chosen at a level at which the social costs are
minimized (Diamond, 1975). To better understand the implication of this approach for the legal
system, consider a real-life example: a passenger hired a taxi cab and was injured by someone
throwing a stone. That person was not caught, and the passenger sued the taxi driver for
compensation. The question we pose is: "Is it more efficient to hold the taxi driver liable
for the injury?" According to the modern approach of law and economics, regardless of the
initial allocation of rights per se, the taxi driver should be liable, since the taxi driver
is in a much better position than the passenger to purchase insurance against unforeseeable
accidents. Therefore, applying this efficiency rule minimizes the social costs and society as
a whole saves resources (see Stringham, 2001, p. 44 for more examples and detailed
discussion).

III. DEPARTING POINT: EFFICIENCY

According to the modern approach of law and economics, legal norms and decisions
must be evaluated in terms of the concept of economic efficiency. Therefore, the main
objective of law is the achievement of efficiency. The basic assertion is this: the rules
regarding the allocation of resources that every society uses in its activities are determined
by the market economy. Since market prices, determined by market forces, shape individual
preferences and behavior, protecting the ‘effectiveness’ of these market rules has been left
to the legal system. However, if the market’s price mechanism is functioning, the lawmaker’s
intervention in the formation of prices cannot be seen as rightful in terms of effectiveness.
The lawmaker must only fulfill its duty of providing the institutional conditions for ensuring
economically efficient exchange relations. Thus, the legal system can have no worth apart from
economics and the needs of economics, and the most important and primary tool that can guide
the jurist is ‘economic analysis.’

The legal system attempts to allocate resources in either a Pareto-optimal or a
Kaldor-Hicks efficient manner (the latter is also called ‘weak Pareto optimality’). Pareto
optimality is a measure of efficiency. One social situation is said to be "Pareto superior" to
another if it makes at least one person better off without making anyone else worse off. The
Pareto principle is thus used to legitimize legal rules: if a legal rule makes at least one
person better off and leaves no one worse off, then the rule should be implemented; otherwise,
it should be abandoned. Therefore, if an outcome is not Pareto efficient, some individual can
be made better off without anyone being made worse off. It is commonly accepted that such
inefficient outcomes are to be avoided, and so Pareto efficiency is an important criterion for
legal and economic systems in evaluating outcomes. Pareto optimality has been criticized for
two main reasons. First, the result produced by the Pareto principle depends entirely on the
choice of the initial allocation; different results can be achieved depending on the initial
allocation. Second, this criterion allows only the ordinal evaluation of preferences and
ignores any mechanism that would induce decision makers to reveal cardinal preferences such as
satisfaction, happiness, etc. Given the level of criticism that the Pareto criterion received,
a less restrictive criterion, Kaldor-Hicks, was adopted to evaluate legal rules. According to
the Kaldor-Hicks criterion, one outcome is superior to another if those who are made better
off (the gainers) could theoretically compensate all those who are made worse off (the losers)
for their losses and still be better off themselves. Clearly, under the Kaldor-Hicks
criterion, whether a change is considered an improvement does not depend on whether
compensation actually takes place.
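The difference between the two criteria can be made concrete with a small numerical sketch. The payoff changes below are invented for illustration: a rule change is a Pareto improvement only if no one loses, while Kaldor-Hicks asks only whether aggregate gains exceed aggregate losses:

```python
# Sketch comparing the two efficiency criteria on hypothetical payoffs.
# Each dict maps a person to the change in their payoff under a rule change.

def pareto_improvement(changes: dict) -> bool:
    """No one worse off, and at least one person better off."""
    return all(d >= 0 for d in changes.values()) and any(d > 0 for d in changes.values())

def kaldor_hicks_improvement(changes: dict) -> bool:
    """Gainers could hypothetically compensate losers: net gain is positive."""
    return sum(changes.values()) > 0

rule_a = {"alice": 5, "bob": 0, "carol": 2}    # nobody loses
rule_b = {"alice": 10, "bob": -3, "carol": 1}  # bob loses, but net gain is +8

print(pareto_improvement(rule_a), kaldor_hicks_improvement(rule_a))  # True True
print(pareto_improvement(rule_b), kaldor_hicks_improvement(rule_b))  # False True
```

Rule B illustrates the gap the text describes: it fails the Pareto test because one person is made worse off, yet it passes Kaldor-Hicks because the gainers could, in principle, compensate the loser and still come out ahead.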

IV. A CRITICAL VIEW OF THE APPROACH FROM A LEGAL PERSPECTIVE

The approach of economic analysis of law sets the objective of making laws and
judicial decisions more effective, and it specifies goals, such as the economic efficiency of
legal decisions and arrangements, by which that objective is to be reached. This is one of the
objectives of the laws and judicial decisions in every society. In order to reach this
objective, it is of prime importance to know the individual or national economic benefits of
the resolutions brought in by laws and of the judicial decisions that are more effective in
economic terms. However, the cost of obtaining this knowledge may be prohibitive for
individuals as well as governments. For example, it is reported that the Federal Republic of
Germany allocated 78 million Marks (DM) in 1985 in order to determine the relationship between
forest losses and damages and air pollution. This and similar examples demonstrate that
searching for efficiency in existing and future legal rules or decisions is difficult, time
consuming and expensive (Kübler, 1990, p. 695).

Pareto optimality may thus be a vehicle contemplated for application only in extremely
exceptional situations. First, this criterion asks for the impossible by making it a condition
that everybody can completely determine the results of their behavior; it thus rests on the
hypothesis that everybody can be aware of the results of their behavior. This approach loses
sight of the fact that most behavior cannot be totally purified of outside factors and the
effects of third persons. Additionally, it displays a rigidly conservative attitude by
accepting a change from a distinct social situation as efficacious only when the situation
does not become worse than before for anybody: the new (different) situation would be deemed
unacceptable if even one person fell into a worse situation than before. This understanding of
economic efficiency, which takes into account the whole society and the effects of changes in
the society, expresses a theoretical defect which cannot be corrected even by applying the
concept of justice, or justice based on a concrete event (aequitas), to legal disputes.
Therefore, Pareto optimality, favoring the status quo based on rigid individualism, is a
criterion which could produce unjust results from the point of view of the law (Behrens, 1994,
p. 42; Canaris, 1993, pp. 388-391). In regard to the Kaldor-Hicks criterion, compensation is
not actually paid, as it would be under Pareto optimality; on the contrary, the compensation
of losers is only hypothetical and does not actually need to take place. Under the
Kaldor-Hicks criterion, an efficient outcome is obtained if the gainers could compensate the
losers, whether or not they actually do so. Put simply, the Kaldor-Hicks criterion requires a
comparison of the monetary gains of one group with the monetary losses of the other group: as
long as the gainers’ gains exceed the losers’ losses as measured in dollars, the move is
deemed efficient. For example, if I am willing to pay a million dollars to have women drivers
outlawed, and the willingness of women drivers to pay in order to be allowed to drive is less
than a million dollars, then the efficient outcome under Kaldor-Hicks would be the prohibition
of women from the traffic system. Therefore, simply choosing the option that maximizes the sum
of total payoffs (wealth), as the Kaldor-Hicks criterion prescribes, can be completely wrong
as a way of evaluating legal rules.

The statement that a “human being is a social being” is valid; if that is the case, a
life that begins in a community may be described with differing characteristics and qualities
depending on varied ages and different communities: the social human being (homo societas),
the urban human being (homo urbanus), the making human being (homo faber) and the rational
human being (homo rationalis). If the human being is considered as a thinking, feeling,
behaving, in short a living being, a reality, a fact appears. These typifications can of
course be discussed (the word “urban”, for example, requires a cautious approach), but such
characteristics are almost always used for typifying, for finding collective similarities and
thus for distinguishing some human beings from others. With such characteristics and
qualities, a human being is primarily and expressly a living, contemporary being, not an
abstract type, and does not always fall in line with every abstract typecasting (Binswanger,
1961). In this context, what is essential for a human being is not solely possessing (even if
only in appearance), and thus taking money or monetary values as the measure of everything,
but before everything else “coming into being” or “existence” and, furthermore, “coexistence”.
If that is the case, then in the final analysis it is not possible to accept the “economic” in
homo oeconomicus, which is (and should be) condemned to remain a prototype with
characteristics and qualities strange to the economic analysis of law, as a characteristic
determining all of the characteristics and qualities of human beings; it is at most a
characteristic human beings acquire in certain relationships they enter into.

V. CONCLUSION

In conclusion, the relationship between law and economics should always be taken
into consideration. However, this relationship is not one in which one field is dominant
over the other or the other is suppressed, but a model that can be explained as mutual
influence. One of them can precede the other, or influence the other, depending upon the
place and time, while at a different place, time, and circumstance just the contrary may be
realized. Economics has made a substantial contribution to our
understanding of the law, but the law has also contributed to our understanding of economics.
Courts routinely deal with the reality of such economic abstractions as property and contracts.
The study of law thus gives economists an opportunity to improve their understanding of
some of the concepts underlying economic theory.

REFERENCES

Behrens, P.: “Utilitarische Ethik und ökonomische Analyse des Rechts”, Die ethischen
Grundlagen des Privatrechts, Wien/New York 1994, pp. 35-51.
Binswanger, H.: “Über den homo actualis”, Die Rechtsordnung im technische Zeitalter,
Festschrift der Rechts- und Staatswissenschaftlichen Fakultät der Universität Zürich
zum Zentenarium des Schweizerischen Juristenverein (1861-1961), Zürich 1961, pp.
167-192.
Calabresi, G.: "Some Thoughts on Risk Distribution and the Law of Torts", Yale Law Journal,
1961, Vol. 70, pp. 499-553.
Canaris, C. W.: “Funktion, Struktur und Falsifikation juristischer Theorien”, Juristen Zeitung,
1993, pp. 377-391.
Coase, R. H.: "The Problem of Social Cost", Journal of Law & Economics, 1960, Vol. 3,
pp. 1-44.
Diamond, P.: "On the Assignment of Liability: The Uniform Case", The Bell Journal of
Economics, 1975, Vol. 6, No.2, pp. 487-516.
Fezer, K. H.: “Aspekte einer Rechtskritik an der economic analysis of law und am property
rights approach”, Juristen Zeitung, 1986, pp. 817-824.
Forstmoser, P./Schluep, W.: Einführung in das Recht. Einführung in die Rechtswissenschaft,
Bd. I, 2. Aufl., Bern 1998.
Horn, N.:“Zur ökonomischen Rationalität des Privatrechts.-Die privatrechtstheoretische
Verwertbarkeit der «Economic Analysis of Law»”, Archiv für civilistische Praxis,
1976, pp. 307-333.
Kuçuradi, İ.: “Etik İlkeler ve Hukuk”, Hukuk Felsefesi ve Sosyolojisi Arkivi, 2003, C. 8,
pp. 5-11.

WHAT DOES IT TAKE TO SUCCEED AS A HUMAN RESOURCES
PROFESSIONAL? A REVIEW OF U.S. HR PROGRAMS

Crystal L. Owen, University of North Florida


cowen@unf.edu

ABSTRACT

The purpose of this paper is to examine U.S. university programs in human resource
management, with the goal of evaluating the consistency of program offerings. The program
review revealed that inconsistency rather than consistency is the rule in such programs.
Suggestions for enhancing the degree to which university programs contribute to successful
HR careers are offered.

I. INTRODUCTION

What does it take to succeed as a professional in human resource management (HR)?
The answer is fundamental to creating an effective university HR curriculum, but for several
reasons, a good answer is hard to find. The HR function has changed dramatically over the
past 30 years in many companies, but not much at all in many other companies, which means
it's hard to define a single set of criteria for a successful career. Asking senior HR executives
what it takes to succeed seems logical, but their perspective may not be much help because
the career ladder they followed may no longer exist due to changes in the field, and because
not everyone defines "successful career" in terms of achieving a senior executive position.
Asking HR scholars what it takes to succeed as an HR professional is another logical source,
but the answers tend to be broadly stated.

II. LITERATURE REVIEW

In 1990, Schuler pointed out that changes in the environment of business require HR
departments to move beyond their traditional support roles toward membership on the
management team. Schuler contends that to gain credibility as part of the team, HR
professionals must see human resource issues as business issues and help line managers solve
problems through effective HR management (Schuler, 1990). Since the publication of
Schuler's article, a number of conceptual and empirical studies have been produced, echoing
his concerns and attempting to describe the need for change and the skills required of HR
professionals in their new role as team members. A detailed review of this literature will
soon be published in the Journal of Business and Leadership (Sincoff and Owen, in press).

Although Heneman (1999) argues persuasively for a need to focus on the "supply side
characteristics of effective human resource professional" (p. 97), Sincoff and I found in our
review that no consistent set of required knowledge and skills for successful HR practitioners
emerges from this literature. Scholars and practitioners vary in the extent to which they
recommend emphasis on traditional HR competencies (e.g., Brockbank, Ulrich, & Beatty,
1999), a subset of traditional HR competencies (e.g., Van Eynde & Tucker, 1997; Way,
2002), or a mix of HR fundamentals, leadership skills, and business fundamentals (e.g.,
Barber, 1999; Giannantonio & Hurley, 2002; Hansen, 2002). Kaufmann (1996) offers a
number of suggestions for improving HR university education, including augmenting basic
HR coursework with accounting, operations, and finance, an emphasis on communication

411
skills, and the use of case studies to demonstrate the HR role in managerial decision making.
He also suggests that programs that wish to grow must have an effective marketing program
or demonstrate superior ability to place graduates in good jobs.

The Human Resources Certification Institute (HRCI), affiliated with the Society for
Human Resource Management, has for years conducted research on key areas of professional
knowledge required of HR professionals, and offers two levels of certification (Foreman &
Cohen, 1999). However, certification is not required by most employers hiring HR
practitioners, indicating that the HRCI definition of professional knowledge is not widely
accepted in the HR profession. Furthermore, several authors point to a gap between what is
taught in academic programs and what is desired by business (e.g., Johnson & King, 1999;
Langbert, 2000).

This last point suggested a way to further knowledge in the field about current
perceptions of what knowledge and skills are necessary for effective HR professionals: to
review requirements in university HR educational programs. To ascertain the extent to which
academic programs emphasized the perspectives found in the literature, I reviewed the 2005
content of graduate and undergraduate university HR programs across the U.S., categorizing
the programs in terms of source (liberal arts or business) and emphasis (HR major or
concentration), and compared the required coursework for the programs. What emerges is a
bewildering variety of offerings with no clear reason for the variety.

III. PROGRAM REVIEW

The Labor and Employment Relations Association's (LERA) list of HR degree
programs provided the list of HR programs for this investigation. For many years, LERA
(formerly the Industrial Relations Research Association) has maintained a list of degree
programs in industrial/labor relations or human resources. LERA conducts an annual survey
to keep the list current. University programs outside the U.S. (10 of the 100 programs listed)
were excluded from consideration because of the unique HR environment created by U.S.
employment laws.

The first part of the program review focused on establishing the types of programs
offered. Programs were categorized according to whether they are housed in a college of
business (56) or some other college (34), and by level: undergraduate programs offered (53)
and graduate programs offered (77). All undergraduate programs are offered in a college of business,
whereas some graduate programs in HR are offered by some other college, such as education
or a school of labor relations.

Undergraduate programs were further categorized according to whether the program
consists of an HR major (35) versus a concentration or track (18), and graduate programs
were further categorized according to whether the program consists of a degree in HR (44)
versus an MBA with an HR concentration (33).

The second part of the review focused on program content. At the undergraduate
level, only one class is included as a requirement in all programs (i.e., both major and
concentration programs): a survey course on HR. No other course topics are consistently
required of students in HR undergraduate programs. Other courses most frequently required
in the HR major programs are labor relations, compensation, employment law, and staffing,
but the programs vary greatly in the number of required courses versus the number of
electives students can choose to meet the requirements of the major. At one end of the
continuum, HR majors at California State University-Long Beach have one required course,
an HR survey class, and select three additional courses from a list of eight HR functional area
courses, which means that the program content varies from student to student, depending on
the electives chosen. At the other end of the continuum, HR majors at Wright State
University in Ohio have five required courses (HR survey, labor relations, employment law,
motivation and development, and directed research) with one elective chosen by the student
from a list of eight courses (but only one of these courses is an HR functional area course).
Florida State University's HR program represents the middle of the continuum; HR majors
have four required classes (HR survey, staffing, labor relations, and current issues), and
choose four electives from a list of seven HR functional area courses.

For the concentration programs, students are typically allowed to choose three or four
courses from a list that varies in the degree to which the topics reflect HR functional areas
such as recruitment, selection, performance appraisal, and employment law, with the HR
survey course as the only required course. For example, Cleveland State University offers a
four-course HR track, in which the HR survey course is required and students choose three
additional classes from a list of seven HR functional area courses.

At the graduate level, MBA programs with an HR concentration or track reflect a bit
more consistency because of the limited number of HR graduate courses offered. A
concentration typically consists of three or four elective courses. An HR survey course is
required by all as one of these, but the other courses reflect a variety of topics including
some HR functional areas (e.g., compensation) as well as topics that are more typical of the
field of organizational behavior (e.g., organizational development, leadership, and
motivation). At Fairleigh Dickinson University, for example, four courses comprise the HR
concentration: employment law, strategic HR management (an HR survey course with a
strategic emphasis), performance appraisal, and managing change.

Graduate HR programs housed in colleges of business reflect a similar lack of
consistency in program content. As an example of one extreme, the M.S. in HR at Marquette
University requires one course in HR strategy of all students, and allows students to choose
from three sets of courses for the rest of their program content. The Ohio State University's
program is an example of the other extreme, with 64 of the 76 hours for the MLHR degree
required of all students (i.e., 16 required courses, 3
electives). Graduate HR programs offered by colleges other than business contain even more
variation.

IV. CONCLUSION

Pick up any human resource management textbook for a survey course in HR, and
you'll find an almost identical list of topics in the table of contents. You might think this
indicates a clearly defined common body of knowledge for the field, but the content of
university HR programs suggests otherwise. The original question was, What does it take to
succeed as a professional in HRM? The inconsistency among the current program offerings
suggests that as a group, universities do not have the answer.

Currently there is no common body of knowledge for HR that is accepted by both
scholars and practitioners, which contributes to the problem of formulating an answer to the
supply-side question, especially one that can be addressed by a single program of university
study that is less than 10 years in duration (Barber, 1999). In addition, although the HR field
grew out of labor economics with a strong emphasis on labor relations (i.e., a liberal arts
perspective), it has been gradually absorbed into the field of business, where it tends to be
regarded more as a pragmatic concern like accounting, but without the respect that
accounting receives for its contributions to the bottom line. Perhaps the resistance to creating
a common body of knowledge and requiring a certification in the field to practice HR
contributes to that lack of respect. It is worth noting that not one of the academic programs
reviewed requires a course in HR for students who are not majoring or concentrating in HR, a
state of affairs that does nothing to support the idea of HR professionals as members of the
management team rather than as support personnel.

Whatever the reason, no single ideal program of study exists in HR university
education. Given the split between companies that use HR in a strategic sense and those
who continue to use HR for human capital accounting, perhaps there is no single ideal
program. The problem, however, is deeper and more insidious than not knowing what to
teach for what type of career. I contend that most HR programs do not have a mission that
can be stated in terms of a valid answer to the question of what it takes to succeed as an HRM
professional. Instead, they play it safe, offering a set of HR classes on topics that everyone
agrees on as fundamental HR activities, such as recruitment and selection, performance
appraisal, discrimination law, labor relations (becoming more questionable as fundamental
with the decline of unions), benefits, compensation, and job analysis, but allowing students to
choose the combination of courses. Playing it safe also means that the courses offered are, to
a large extent, dictated by what the existing faculty is capable of and willing to teach. Then
program decision makers hope for the best, meaning they hope their students get entry-level
jobs and figure out how to move forward from there.

If universities instead created programs that correspond to the career needs of HR
professionals, it would make sense to start by recognizing that a single program cannot meet
the needs of every HR practitioner. Defining the type of HR career for which students are
to be prepared as part of a program mission statement would then be possible. For example,
schools could focus undergraduate education in business on the fundamental elements of HR
so that graduates are prepared for entry-level positions in large corporations, or on a narrower
subset of HR functional areas with deeper coverage so that graduates are prepared for entry-
level positions in specialized HR firms.

At the same time, programs need to contribute to defining HR as a legitimate field in
the eyes of business professionals. One way to do this is to teach the topics that are the basis
for the HRCI's PHR certification and encourage program graduates to obtain PHR
certification through scholarships and prep courses. Graduate programs in colleges of
business could then be focused on either specialization in a given HR function (e.g.,
recruitment and selection, employee relations, performance appraisal) as the next step for HR
generalist undergraduates, or on a higher level of HR generalist and business education
corresponding to the HRCI's SPHR for those with undergraduate degrees in other fields or with no
degree but years of HR experience. Programs in specializations already offering professional
certification (e.g., benefits, compensation) could tailor their offerings to what is required for
certification, encouraging graduates to take the certification exam with scholarships. MBAs
with HR concentrations could be recognized as programs for business professionals who plan
to pursue an upper-level management career in HR. Job analysis and perhaps training could
even be left to the industrial/organizational psychologists - proficiency in these fields requires
far more courses than can be included in a curriculum of business.

An important issue to address in any program is strategic HR. As Schuler (1990)
states, HR professionals must be prepared to demonstrate how HR activities make line
managers more effective. Instead, in most companies HR is seen as a stumbling block, a
series of hoops to jump through, a department that makes its information hard to access rather
than as a central contributor to organizational success.

HR can be successful as a partner at the strategic table only if the value of its
contributions is recognized and respected. Value will exist only if those contributions can be
defined in terms specific to corporate performance. Until the field of HR comes together to
define a common body of knowledge and skills appropriate for HR practitioners, the holy
grail of strategic input will continue to be elusive and academic programs will continue to
flounder in a sea of inconsistency.

REFERENCES

Barber, A. "Implications for the Design of Human Resource Management – Education,
Training, and Certification." Human Resource Management, 38, 1999, 177-182.
Brockbank,W., Ulrich, D., and Beatty, R. W. "HR Professional Development: Creating the
Future Creators at the University of Michigan Business School." Human Resource
Management, 38, 1999, 111-118.
Foreman, D. C., and Cohen, D. J. "The SHRM Learning System - A Brief History." Human
Resource Management, 38, 1999, 155-160.
Giannantonio, C. M., and Hurley, A. E. "Executive Insights into HR Practices and
Education." Human Resource Management Review, 12, 2002, 491-511.
Hansen, W. L. "Developing New Proficiencies for Human Resource and Industrial Relations
Professionals." Human Resource Management Review, 12, 2002, 513-538.
Heneman, R. L. "Introduction: The Need for a Supply Side Examination of the Human
Resource Profession." Human Resource Management, 38, 1999, 97-98.
Johnson, C. D., and King, D. "Are We Properly Training Future HR/IR Practitioners? A
Review of the Curricula." Human Resources Management Review, 12, 1999, 539-
554.
Kaufmann, B. E. "Evolution and Current Status of University HR Programs." Human
Resource Management, 38, 1999, 103-110.
Kaufmann, B. E. "Transformation of the Corporate HR/IR Function: Implications for
University Programs." Labor Law Journal, 47, 1996, 540-548.
Langbert, M. "Professors, Managers, and Human Resource Education." Human Resource
Management, 38, 2000, 99-102.
Labor and Employment Relations Association, IR/HR Programs, Accessed 11/29/05.
http://www.lera.uiuc.edu/IRHR/index.html
Schuler, R. S. "Repositioning the Human Resource Function: A Transformation or Demise?"
Academy of Management Executive, 4, 1990, 49-60.
Sincoff, M. Z., and Owen, C. L. "What Constitutes an Effective Human Resources
Curriculum?" Journal of Business and Leadership, in press.

MINIMIZING THE NEGATIVE IMPACT OF TELECOMMUTING ON
EMPLOYEES

Marian C. Crawford, University of Arkansas—Little Rock


mccrawford@ualr.edu

ABSTRACT

Telecommuting (also referred to as telework) has been used by many organizations
since the early 1970s to increase productivity and to provide employees with flexible work
arrangements (Dimitrova, 2003). Telecommuting is defined as work carried out in a location
where, remote from central offices or production facilities, the worker has no personal
contact with co-workers, but is able to communicate with them using new technologies (Di
Martino and Wirth, 1990). This paper identifies practices that can be incorporated by
managers to minimize the negative impact of telecommuting assignments.

I. INTRODUCTION

Many organizations are concerned with increasing employee productivity and
reducing staff turnover by providing employees with opportunities for flexible work
arrangements. In order to accomplish these goals, some organizations are implementing
various forms of telecommuting (Stanworth, 1998). A Wall Street Journal article estimated
that 24 million workers are employed either full- or part-time as telecommuters, employees
who do all or part of their work from a location other than a traditional office and use various
forms of electronic communication to transmit work (Dunham, 2000). Since telecommuting
is a decentralized computer-mediated work form, it enables employees to work in locations
distant from the main organization and can lead to many changes in how employees work and
how managers supervise (Di Martino & Wirth, 1990).

Organizations have recognized many advantages to providing telecommuting
opportunities for employees and for the organization, and many employees have chosen to
adopt this type of workplace arrangement. Even though the employee might experience
several advantages from this type of work experience, several employee disadvantages can
occur. In addition, many managers are not prepared to manage this type of
employee arrangement effectively.

II. BENEFITS AND DISADVANTAGES OF TELECOMMUTING

Numerous benefits and disadvantages of telecommuting for both the individual and
the organization have been identified (Apgar, 1999; Cascio, 2000; Dimitrova, 2003; Mann
and Holdsworth, 2003). The benefits to the employee include (1) opportunities to remain in
the workforce despite relocating, becoming ill, or taking on family care roles; (2) more time
for home and family; (3) reduced commuting time and expense; (4) greater job autonomy; (5)
fewer disturbances while working; (6) more flexible working hours; and (7) the ability to
work from remote locations. Among the disadvantages to the employee are (1) social
isolation, (2) fewer opportunities for development or promotion, (3) the perception that
telecommuters are not valued by their managers, (4) limited face-to-face communication with
colleagues, (5) the repetitive nature of many work tasks performed by telecommuters, and (6)
lower job security (Di Martino and Wirth, 1990; Dimitrova, 2003; Mann and Holdsworth,
2003). Many of these
employees feel a sense of separation from their traditional work environment and, to some
extent, their social environment. The isolation from the workplace also leads to career
stagnation since the telecommuter no longer has easy access to formal and informal
information networks (Mann and Holdsworth, 2003).

Among the reasons organizations have chosen to let people work from other locations
are (1) reduced expenses for the organization, including real estate and building
expenses, (2) the possibility of increased productivity, (3) improved customer service, (4)
employee retention, and (5) environmental benefits (Cascio, 2000). Even though the
organization may experience benefits from this work arrangement, managing out-of-sight
employees is new to many managers and some managers have difficulty supervising this type
of employee arrangement.

III. PREPARING MANAGERS OF TELECOMMUTERS

Organizations that provide employees with telecommuting work arrangements need to
examine several issues concerning the management of telecommuters, such as how to prepare
managers to assist in minimizing the telecommuters’ feeling of remoteness from the
organization and traditional work relationships, how to assist the manager of telecommuters
in learning new communication skills, and how to prepare the manager to assess the
performance of the telecommuter.

Since telecommuters may feel a sense of separation from their traditional work
environment, organizations must explore ways to bridge the transition. Managers need to be
prepared to provide the support and linkages to the organization’s formal and informal
informational and social networks that the employees may need to develop and maintain
work relationships. Developing procedures and processes to provide opportunities for the
telecommuter to remain connected with formal and informal information networks will help
minimize the telecommuters' sense of separation from their organization-related networks.

Communication is a major challenge for managers of telecommuters (Cascio, 2000).
Many managers may have to learn new communication skills to prevent telecommuters from
feeling isolated and not part of a larger group and to communicate performance expectations.
Managers should learn how to conduct effective audio and video conferencing meetings, and
to balance e-mail and voice mail with face-to-face communications.

Managers of telecommuters also need to develop performance standards and goals
that are appropriate for the tasks being performed by the telecommuter. In addition, clear
methods of evaluation must be prepared and communicated to the employee in a
telecommuting position. According to Cascio, the overall objective of goals, measures, and
assessment is to leave no doubt in the minds of remote workers what is expected of them,
how it will be measured, and where they stand at any given point in time.

IV. PROVIDING A SUPPORTIVE CLIMATE FOR THE TELECOMMUTER

Human resources can provide leadership in an organization to increase the likelihood
that telecommuting employees will experience minimal dissatisfaction from the problems
associated with being a telecommuter and decrease turnover among telecommuters (Caudron,
1998; Igbaria and Guimaraes, 1999). The major problems identified in studies included
social isolation, decreased opportunities for development or promotion, the perception
that telecommuters are not valued by their managers, limited face-to-face communication
with colleagues, the repetitive nature of work tasks, lower job security, and career stagnation.
The following are suggestions that could be used to retain and further integrate the
telecommuter in the culture:

1. Develop plans and processes to ensure formal and informal communication opportunities
occur between various organizational constituencies, including the manager, and the
telecommuter’s peers.
2. Provide opportunities for the telecommuter to have reliable access to job postings and
development opportunities through both formal and informal communication channels.
3. Review the organization’s selection criteria for telecommuting assignments. Employees
who initiate the telecommuting work arrangement to balance work-family demands may be
more willing to accept the isolation as a trade-off.
4. Establish a culture that embraces diversity. Baruch (2001) noted that organizational
cultures that embrace diversity do not just cater to the needs of women, minorities, and the
disabled. Instead, as Baruch notes, diversity is the management of different needs and
different modes of work, and telecommuting forms a significant role in enabling effective
management of diversity in both aspects—groups with special needs, as well as people who
can get the best output utilizing a variety of operational models.

5. Because of the repetitive nature of the work of some telecommuters, consider reviewing
the Hackman and Oldham job characteristics model of work motivation (Hackman and
Oldham, 1980) which proposed conditions under which individuals will become internally
motivated to perform effectively on their jobs. Perhaps changes could be made in the
structure of the assignments to provide increased skill variety, task identity, task significance,
and feedback, and perhaps these changes could increase the satisfaction for the telecommuter.
6. Develop performance standards and goals that are appropriate for the tasks being
performed by the telecommuter. Establish clear objectives and goals and develop specific
measures of assessment that minimize ambiguity in the minds of remote workers concerning
what is expected of them, how it will be measured, and where they stand at any given point in
time. Managers should communicate to the telecommuting employee all aspects of these
performance standards.
7. Provide telecommuters with opportunities to work from satellite offices. Satellite offices
are sometimes available in suburban locations nearer the homes of employees and offer
employees increased opportunities to develop work relationships and minimize social
isolation, and opportunities to minimize career stagnation, while minimizing the employee’s
commuting time.
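
As a purely illustrative sketch of the job characteristics model cited in suggestion 5: Hackman and Oldham (1980) combine the core job dimensions into a motivating potential score (MPS). The formula is theirs; the function and the sample ratings below are hypothetical.

```python
def motivating_potential_score(skill_variety, task_identity,
                               task_significance, autonomy, feedback):
    """Hackman and Oldham's MPS: the mean of the three "meaningfulness"
    dimensions, multiplied by autonomy and by feedback (ratings on a 1-7 scale)."""
    meaningfulness = (skill_variety + task_identity + task_significance) / 3
    return meaningfulness * autonomy * feedback

# Hypothetical ratings for a repetitive telecommuting assignment:
# low skill variety and low feedback keep the score down despite high autonomy.
print(motivating_potential_score(2, 3, 3, 6, 2))   # 32.0
# Redesigned along the lines suggested above, the score rises sharply.
print(motivating_potential_score(5, 5, 6, 6, 5))   # 160.0
```

Because the form is multiplicative, a very low rating on autonomy or feedback drags the whole score down, which is why restructuring the assignment across several dimensions, rather than raising a single one, matters.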

V. CONCLUSION

Telecommuting will continue to evolve as technologies change and as managers and
employees seek ways to explore beneficial flexible work arrangements. Minimizing the
negative aspects of a telecommuting work arrangement requires commitment from the
organization to develop practices that ensure a successful transition for the organization and
the employee. Even though the organization may experience benefits from a telecommuting
work arrangement, managing out-of-sight employees is new to many managers and
organizations may have to explore and develop a new style of management for this type of
employee.

REFERENCES

Apgar, M. “The Alternative Workplace: Changing Where and How People Work.”
Harvard Business Review, 76(3), 1999, 121-139.
Baruch, Y. “Teleworking and Quality of Life.” In P. J. Jackson & J. H. van der Wielen
(eds.), Teleworking: International Perspectives—From Telecommuting to the Virtual
Organization. London: Routledge, 2001.
Cascio, W. F. "Managing a Virtual Workplace," Academy of Management Executive, 14(3),
2000, 81-90.
Caudron S. “Workers’ Ideas for Improving Alternative Work Situations,” Workforce, 77(12),
1998, 42-49.
Di Martino, V. and Wirth, L. “Telework: A New Way of Working and Living,” International
Labour Review, 129(5), 1990, 529-554.
Dimitrova, D. “Controlling Teleworkers: Supervision and Flexibility Revisited,” New
Technology, Work and Employment, 18(3), 2003, 181-195.
Dunham, K. J. "Telecommuters' Lament," The Wall Street Journal, 236(85), 2000, B1, B18.
Hackman, J. R. and Oldham, G. R. Work Redesign. Reading, Mass.: Addison-Wesley,
1980.
Igbaria, M., and Guimaraes, T. "Exploring Differences in Employee Turnover Intentions and
Its Determinants Among Telecommuters and Non-Telecommuters," Journal of
Management Information Systems, 16(1), 1999, 147-164.
Mann, S. and Holdsworth, L. "The Psychological Impact of Teleworking: Stress, Emotions
and Health," New Technology, Work and Employment, 18(3), 2003, 196-211.
Stanworth, C. “Telework and the Information Age,” New Technology, Work and
Employment, 13(1), 1998, 51-62.

IMPLICATIONS OF THE FAIRPAY OVERTIME INITIATIVE
TO HUMAN RESOURCE MANAGEMENT

C. W. Von Bergen, Southeastern Oklahoma State University
cvonbergen@sosu.edu

Patricia W. Pool, Southeastern Oklahoma State University
ppool@sosu.edu

Kitty Campbell, Southeastern Oklahoma State University
kcampbell@sosu.edu

ABSTRACT

Few labor issues are as polarizing as overtime rights. After years of study, discussion,
public debate, and comment, the Department of Labor introduced sweeping changes to the
Fair Labor Standards Act of 1938 (FLSA). Under the rubric of the FairPay Overtime
Initiative (FPOI), the federal law addressing overtime went into effect on August 23, 2004.
The FPOI clarifies employee rights to overtime pay for human resource managers as well as
protecting employers from costly lawsuits. This paper gives an explanation of the initiative
with implications for employees and employers. Key changes in the FLSA are highlighted
and new exemption tests are detailed.

I. INTRODUCTION

The FLSA of 1938 requires that most employees in the United States be paid at least
the federal minimum wage for all hours worked and receive overtime pay at one and one-half
times the regular rate for all hours worked over 40 hours in a workweek. Defined within the
act are certain types of employees who are exempt from both minimum wage and overtime
pay, i.e., if the worker is employed as a bona fide executive, administrative, professional,
outside sales, or computer employee. These exempt categories are cumulatively referred to as
the white collar exemption. To qualify for such exemptions the job description and/or
employment contract must meet certain salary and job duties tests (FLSA, 1938). The past
thirty years have seen these tests become outdated, resulting in debate over the need to either
pay overtime to exempt employees or to redefine exemption status (Khorsandi & Kleiner,
2001).
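
The time-and-a-half rule described above reduces to simple arithmetic. The sketch below is an illustration only (the function name is our own, and real payroll must also handle “regular rate” adjustments for bonuses and multiple pay rates); it computes weekly pay for a nonexempt employee under the basic FLSA formula:

```python
def weekly_pay(hourly_rate, hours_worked):
    """Basic FLSA pay for a nonexempt employee: straight time for the
    first 40 hours, one and one-half times the regular rate thereafter."""
    regular_hours = min(hours_worked, 40)
    overtime_hours = max(hours_worked - 40, 0)
    return hourly_rate * regular_hours + 1.5 * hourly_rate * overtime_hours

# 45 hours at $10.00/hour: 40 x $10.00 + 5 x $15.00 = $475.00
print(weekly_pay(10.00, 45))
```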

On April 24, 2004 the Wage and Hour Division of the United States Department of
Labor (DOL) responded to these decades-old exemption descriptions with new regulations
relating to white collar exemptions of the FLSA called the FPOI. The purpose of the new
FLSA regulations was to modernize, update, and clarify the criteria for these exemptions and
to eliminate legal problems that the prior regulations caused. This article presents a
discussion of the rationale behind the new regulations, an explanation of the rules developed
by DOL, and concluding comments regarding the implications and benefits of such
regulations for employees and employers.

II. REASONS FOR INCREASED LITIGATION

Every president since Jimmy Carter has tried unsuccessfully to simplify the federal
overtime pay rules contained in the FLSA. The climate changed dramatically in the
late 1990s primarily due to the increase in employee lawsuits brought under the Act against
employers. Employees claimed they were being denied overtime benefits provided under the
Act and were winning multi-million dollar judgments against their employers for non-
compliance with the regulations (Becker, 2004; Crawford, 2004). The number of class-action
suits based upon the provisions of the FLSA climbed from 31 in 1997 to 102 in 2003, more
than tripling (“Judicial Business”, 2004). The result of such increased litigation is
estimated to cost the economy more than $2 billion annually (National Association of
Convenience Stores, 2004).

Increases in wage and hour lawsuits can be attributed to the desire of employers to cut
costs and increase productivity. Competitive pressures have forced companies across most
industries to cut jobs and revamp their work force deployment, blurring the lines between
employees authorized to receive overtime pay and those who are exempt. Because certain
employees did not have to be paid overtime and could work unlimited hours without
receiving any additional compensation, organizations began to increasingly classify
employees as exempt under the FLSA when, in fact and by law, the employees should have
been classified as nonexempt. In response to such organizational behavior, increasing
numbers of managerial, administrative, sales, and temporary employees began filing high-
visibility class-action lawsuits against employers for unpaid overtime.

Another driving force that contributed to the increase in lawsuits was that the FLSA
regulations provided for significant attorneys’ fees in addition to the damages arising out of a
misclassification or non-classification of employee(s). In many cases, the courts applied
provisions allowing for double damages. Plaintiffs are now entitled to liquidated damages in
an amount equal to the unpaid overtime on their FLSA claim (29 U.S.C. 216.b, 2004). The
FLSA originally made such damages mandatory (Overnight Motor Transportation Co. v.
Missel, 1942). However, the Portal-to-Portal Act (1947), made doubling discretionary rather
than mandatory, by permitting a court to withhold liquidated damages in an action to recover
unpaid minimum wages. Nevertheless, there is still a “strong presumption in favor of
doubling” (Walton v. United Consumers Club, Inc., 1986). It appears then that double
damages are the norm; single damages the exception. The potential for attorneys’ fees being
awarded in addition to damages, actual and double, thus attracted many attorneys to the
FLSA litigation arena.
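
The damages arithmetic described above is straightforward: liquidated damages match the unpaid overtime, so the usual award is double the back pay. A hypothetical sketch (function and parameter names are our own; whether doubling applies in a given case is for the court, and attorneys’ fees come on top):

```python
def flsa_damages(unpaid_overtime, liquidated=True):
    """Liquidated damages equal the unpaid overtime (29 U.S.C. 216(b)),
    so doubling is the norm; a court may withhold them, leaving single
    damages as the exception."""
    return unpaid_overtime * (2 if liquidated else 1)

print(flsa_damages(5000.00))                    # the usual doubled award
print(flsa_damages(5000.00, liquidated=False))  # single damages, the exception
```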

III. THE FAIRPAY OVERTIME INITIATIVE REVIEWED

To qualify for exempt status (i.e., not entitled to overtime pay), employees
generally must meet certain tests regarding their salary and job duties. More specifically, the
DOL has outlined three tests in the FPOI that an employee must meet under each white collar
exemption category in order to qualify for the available exemptions to the FLSA
requirements (FairPay: DOL’s, 2004). Under the regulations these tests, when correctly
applied, determine which positions are eligible for exemption from overtime pay and which
are not.

The first test is the salary-basis test. To be exempt from overtime pay, employees
must be paid a pre-determined fixed salary (not an hourly wage) that is not generally subject
to reduction due to variations in the quality or quantity of work performed. Salary is defined as
including only the guaranteed portion of an employee’s pay; not any benefits, bonuses,
incentive payments, commissions, or other inducements. This definition of salary has long
been the standard rule under federal overtime law and has not been changed with the new
initiative. Also, the employee must be paid the full salary for any week in which he or she
performs work, and the employee need not be paid for any work week when no work is
performed. Furthermore, rates cannot be prorated for employees who work less than 40 hours
per week.

The second test is the salary-level test. To be exempt from overtime, the new rules
require that employees earn a minimum salary of $455 a week, or $23,660 a year. This is
nearly triple the prior minimum salary of $155 a week, or $8,060 a year. Examples of
employees most likely to be affected include fast-food managers, office managers, and some
retail floor supervisors. Additionally, the new regulations provide for a new white collar
classification referred to as highly compensated employees. These white collar employees
who earn more than $100,000 a year are generally exempt from overtime pay under the new
law (29 C.F.R. 541.602, Part 825, 2004).
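
The salary figures above are internally consistent: each annual amount is the weekly floor multiplied by 52 weeks. A minimal sketch of the salary-level test follows (constant and function names are our own):

```python
NEW_WEEKLY_FLOOR = 455        # FPOI minimum salary, per week
PRIOR_WEEKLY_FLOOR = 155      # pre-2004 minimum salary, per week
HIGHLY_COMPENSATED = 100_000  # annual pay above which exemption is generally presumed

def meets_salary_level(weekly_salary):
    """True when the salary-level test for exemption is satisfied."""
    return weekly_salary >= NEW_WEEKLY_FLOOR

# The annual amounts quoted in the text are the weekly floors times 52 weeks
print(NEW_WEEKLY_FLOOR * 52)    # 23660
print(PRIOR_WEEKLY_FLOOR * 52)  # 8060
```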

The third and last required qualification is called the duties test. This test represents a
major change to the Act and incorporates the most significant revision to the final FLSA
regulations. The duties test for exemption classification focuses on the
employee’s primary duty. Primary duty means the principal, main, major, or most important
duty that the employee performs. Factors to consider when determining the primary duty of
an employee include, but are not limited to: 1) the relative importance of the major or most
important duty as compared with other types of duties; 2) the employee’s relative freedom
from direct supervision; 3) the relationship between the employee’s salary and the wages paid
to other employees for performance of similar work; and 4) the amount of time spent
performing the major or most important duty (DOL, FLSA Overtime Security, n.d.).

IV. WHITE COLLAR EMPLOYEE EXEMPTIONS

All employment positions are presumed to be entitled to overtime pay unless the
duties tests and the salary tests indicate that the position falls within one of the five job
classifications identified in the Act as being exempt from overtime pay. To identify whether
or not a white collar employee is exempt, that employee must fall within one of the following
defined classifications: 1) executive (including sub-classifications of manager and business
owner), 2) administrative, 3) professional (including learned and creative sub-classifications),
4) computer, and 5) outside sales personnel.

To qualify for the executive employee exemption, each of the following four
conditions must be met: 1) the employee must be compensated on a salary basis at a rate not
less than $455 per week ($23,660 per year); 2) the employee’s primary duty must be
managing the enterprise, or managing a customarily recognized department or subdivision of
the enterprise; 3) the employee must customarily and regularly direct the work of at least two
or more other full-time employees or their equivalent; and 4) the employee must have the
authority to hire or fire other employees, or the employee’s suggestions and
recommendations as to the hiring, firing, advancement, promotion, or any other change of
status of other employees must be given particular weight (DOL, Fact Sheet #17B, 2004).
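
The four conditions above are conjunctive: failing any one of them defeats the exemption. A hypothetical screening helper (names are our own; a sketch, not legal advice):

```python
def executive_exempt(weekly_salary, manages_enterprise_or_unit,
                     directs_two_or_more_fte, has_hiring_firing_weight):
    """All four executive-exemption conditions must hold simultaneously."""
    return (weekly_salary >= 455                # condition 1: salary level
            and manages_enterprise_or_unit      # condition 2: primary duty is managing
            and directs_two_or_more_fte         # condition 3: directs 2+ full-time equivalents
            and has_hiring_firing_weight)       # condition 4: hiring/firing authority or weight

# A $500/week manager who directs three clerks and can hire and fire
print(executive_exempt(500, True, True, True))   # True
# The same manager without hiring/firing influence is not exempt
print(executive_exempt(500, True, True, False))  # False
```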

An exempt administrative employee is one ‘‘whose primary duty is the performance
of office or non-manual work directly related to the management or general business
operations of the employer or the employer’s customers… and whose primary duty includes
the exercise of discretion and independent judgment with respect to matters of significance”
(29 C.F.R 541, 2004, p. 22137). The regulatory criteria that define this category include the
following: 1) the employee must be compensated on a salary or fee basis at a rate not less than
$455 per week; 2) the employee’s primary duty must be the performance of office or non-
manual work directly related to the management or general business operations of the
employer or the employer’s customers; and 3) the employee’s primary duty includes the
exercise of discretion and independent judgment with respect to matters of significance
(DOL, Fact Sheet # 17C, 2004).

An exempt professional employee must have a primary duty of performing office or
non-manual work: 1) requiring knowledge of an advanced type in a field of science or
learning customarily acquired by a prolonged course of specialized intellectual instruction,
but which also may be acquired by alternative means such as an equivalent combination of
intellectual instruction and work experience; or 2) requiring invention, imagination,
originality, or talent in a recognized field of artistic or creative endeavor (29 C.F.R.
541.300(a) 2.i, ii, 2004). Such primary duty requirements have resulted in two designations
for professional employees: learned and creative.

The new regulations provide a narrow interpretation for the specific classification of
outside sales employees. To qualify for the outside sales exemption, the employee
must meet the following qualifications: 1) the employee’s primary duty must be making sales
as narrowly defined within the Act, or soliciting purchase or service contracts or rental-type
contracts for the use of facilities for which a consideration will be paid by the client or
customer; and 2) the employee must customarily and regularly be engaged away from the
employer’s place or places of business (DOL, Fact Sheet #17F, 2004).

The new regulations contain a separate subpart for the computer professional
exemption. To qualify for the computer employee exemption, the following conditions must
apply: 1) the employee must be compensated either on a salary or fee basis at a rate not less
than $455 per week or, if compensated on an hourly basis, at a rate not less than $27.63 an
hour; and 2) the employee must be employed as a computer systems analyst, computer
programmer, software engineer, or other similarly skilled employee in the computer field
performing computer-related duties (DOL, Fact Sheet #17E, 2004).

V. CONCLUSION

An update of the overtime pay regulations contained in the FLSA is long overdue, and
the DOL’s FPOI is a reasonable solution for correcting the existing
deficiencies of the prior FLSA regulations. The FPOI is definitive in its attempt to clarify
and simplify the FLSA, eliminating highly litigated problem areas. In theory, the new
regulations modernize the FLSA standards; represent a substantial improvement over past
rules; and satisfy the debate over paying exempt employees for overtime by redefining
exempt status and duties tests. After comparing the prior regulations with the new
regulations, it is our contention that the FPOI is beneficial to both employers and employees,
making it easier for employees to know their rights, for employers to understand their
obligations, and for the DOL to aggressively enforce the FLSA (Boehner, 2004).

Knowledgeable and informed employees are the first line of defense against dishonest
employers who seek to evade the requirements of the FLSA. Any new regulations should
enable employees to more easily recognize when they are owed overtime pay and reduce
investigation and enforcement costs when violations occur (Kersey, 2004). The prior
regulations are unnecessarily complicated, outdated, and do not benefit employees.

Employers will also benefit from clearer rules because such rules reduce the risk of legal
confusion and costly litigation. Any new regulations should permit disciplinary deductions
for violations of workplace misconduct rules, provided the deduction is pursuant to a
uniformly applied, written, disciplinary policy. It is foreseeable that the number of FLSA
lawsuits brought against employers will continue to increase unless abated by new and
modernized regulations which are more definitive of the exempt classifications under the
FLSA.

REFERENCES

“Battle Engaged Over New OT Rules”. (2004, August 23). CNN Money. Retrieved
September 8, 2004 from http://money.cnn.com/2004/08/23/news/economy/overtime/
Becker, C. “A Good Job for Everyone”. Legal Times. Retrieved March 3, 2005 from
http://www.aflcio.org.
Boehner, J. (2004, April 20). “Chairman of the House Education & the Workforce
Committee Comments.” Retrieved September 8, 2004 from
http://edworkforce.house.gov/issues/108th/workforce/flsa/factsheet042004.htm
Crawford, K. “OT Pay: Winners and Losers.” CNN Money. Retrieved September 3, 2005
from http://money.cnn.com/2004/08/05/news/economy/overtime/index.htm/
DOL, “Fact Sheet #17A: Exemption for Executive, Administrative, Professional, Computer
& Outside Sales Employees Under the Fair Labor Standards Act (FLSA).” Retrieved
September 24, 2004 from
http://www.dol.gov/esa/regs/compliance/whd/fairpay/fs17a_overview.htm
DOL, “Fact Sheet #17B: Exemption for Executive Employees Under the Fair Labor
Standards Act (FLSA).” Retrieved October 15, 2004 from
http://www.dol.gov/esa/regs/compliance/whd/fairpay/fs17b_executive.htm
DOL, “Fact Sheet #17C: Exemption for Administrative Employees Under the Fair Labor
Standards Act (FLSA).” Retrieved February 17, 2005 from
http://www.dol.gov/esa/regs/compliance/whd/fairpay/fs17c_administrative.htm
DOL, “Fact Sheet #17E: Exemption for Employees in Computer-Related Occupations Under
the Fair Labor Standards Act (FLSA).” Retrieved February 9, 2005 from
http://www.dol.gov/esa/regs/compliance/whd/fairpay/fs17e_computer.htm

DOL, “Fact Sheet #17F: Exemption for Outside Sales Employees Under the Fair Labor
Standards Act (FLSA).” Retrieved February 15, 2005 from
http://www.dol.gov/esa/regs/compliance/whd/fairpay/fs17f_outsidesales.htm

DOL, “FLSA Overtime Security Advisor.” Retrieved February 18, 2005 from
http://www.dol.gov/elaws/esa/flsa/overtime/glossary.htm?wd=primary_duty

THE IMPACT OF KNOWLEDGE MANAGEMENT CONCEPTS
ON MODERN HRM BEHAVIOR

U. Raut-Roy, Anglia Ruskin University, Cambridge, England
u.raut-roy@anglia.ac.uk

ABSTRACT

This paper investigates the complementarities between Knowledge Management
(KM) concepts and Human Resource Management (HRM) practices. It seeks to establish an
array of HRM factors, systems and processes that act as barriers and/or facilitators when
implementing a KM initiative. Parallel to this objective, the paper identifies factors that
enable ‘knowledge sharing’ in a service organisation in the UK. Semi-structured interviews
were carried out with 32 staff.

I. INTRODUCTION

In recent years, organisations have faced pressures to downsize and outsource; as a
result, they have lost valuable knowledge as people leave and take with them what they
know. Additionally, as pressures for globalisation increase, collaboration and cooperation are
becoming more distributed and international. These changes have resulted in knowledge loss
and distributed working, highlighting the need to manage knowledge (TFPL, 1999; Hildreth
et al, 1999). This realisation has heightened organisational interest in the topic of Knowledge
Management (KM).

II. WHAT IS KNOWLEDGE MANAGEMENT?

Defining KM is not only problematic; definitions also vary by person, context,
and use. Simply stated, KM is the practice of capturing, preserving, developing, sharing and
using an organisation’s knowledge assets (Bukowitz & Williams, 1999). These knowledge
assets may include databases, documents, policies and procedures (explicit knowledge) and
previously un-captured expertise and experience in individual employees (tacit
knowledge/human intellectual capital). By efficiently managing its knowledge assets, an
organisation can create new capabilities, achieve superior performance, encourage innovation
and enhance customer service, now or in the future (Srikantaiah & Koenig, 2000; Nonaka et
al, 2000; Malhotra, 2003). It is clear from current research that KM is part of a continuous
business improvement process; it relates to the way an organisation works and is therefore
about organisational development. Thus KM is about people, individual growth/learning, and
the organisational processes and infrastructure which facilitate corporate learning (TFPL,
1999; Argyris & Schon, 1978). Moreover, it is now recognised that mastery of KM requires
a skilful blend of people and business processes (Hildreth et al, 1999).

The emphasis on organising and structuring an organisation’s knowledge assets has
reflected the dominance of information technology systems (Dutta, 1997; Milton et al, 1999;
Stein & Zwass, 1995). Nevertheless, many organisations are not yet taking advantage of their
knowledge assets, despite the availability of increasingly sophisticated technology for KM
(KPMG, 2000). As Seybold (1993) asked, “The rejection of technology is rife in most
organisations. New technology comes in and is jettisoned back out or remains largely
unabsorbed and unused. Why?” Research suggests that this is due to the difficulties faced by
organisations in combining the capabilities of technology and employees to meet the
organisation’s strategic objectives (Nonaka & Takeuchi, 1995). Moreover, this is largely due
to a reliance on a ‘technology push’ approach to KM, which is not conducive to achieving the
necessary culture and context required to promote organisational learning. Instead,
Damodaran and Olphert (2000) assert that organisations must take a socio-technical
approach, which has as its main objective the management and sharing of knowledge to
support the achievement of organisational goals. This approach maintains that while tools can
certainly facilitate the implementation of the knowledge process, they must be taken in
context and implemented as part of the overall effort to leverage organisational knowledge
through integration with the business strategy, HRM systems (which encompasses HR
planning, recruitment and selection, appraisal, training and development, incentive systems,
employee relations and incorporates specific company practices, procedures and formal
policies), culture, current technologies and other processes (Harrison & Kessells, 2004).

III. THE NATURE OF KM SUCCESS

A worldwide survey conducted by KPMG (2000) found that people and
implementation issues were the most frequent causes of low-performing KM initiatives. For
example, a major difficulty found in most KM efforts is in changing the organisational
culture and people’s work habits. Indeed, most initiatives usually fall short because KM is not
built into employees day-to-day tasks and there is failure to obtain the knowledge workers
‘buy-in’. As such, ‘factors like, compatibility with an individual’s work style, impact on
social and organisational norms, must be carefully thought about’, when embarking on a KM
initiative. (Bradley,1998; Harrison & Kessells, 2004). However, most KM efforts treat these
cultural issues as secondary (Ruggles,1998; KPMG, 2000). Hence, improvements in how an
organisation creates, transfers and applies knowledge are impossible without simultaneously
altering the culture to support new behaviour (De Long, 1997). Given that an organisation’s
workforce is its human intellectual capital, management cannot afford to alienate or
demotivate employees by ignoring their existing values and norms when implementing a KM
strategy (De Long, 1997). Nevertheless, many organisations have made significant progress
with their organisational learning and KM systems and have identified a common strategy
(Prokesh, 1997). The strategy includes key practices such as leadership, communication,
practical learning, team work, and HRM issues, as well as recognition that IT is a resource, not an
answer to knowledge sharing. A further feature of this strategy is the need to manage the
intellectual assets of an organisation through training, which is necessary to keep
organisations and the user’s skills aligned to KM needs (Scott, 1998). More specifically,
training encourages better understanding and attitudes. Therefore, it is essential that training
cover soft skills such as effective team working and leadership skills. Most importantly,
socialisation skills for effective collaboration must be developed to build trust and
compensate for the lack of face-to-face interaction in some circumstances (Scott, 1998;
McDermott, 1999). This together with education and training in concepts and not just in
operating procedures is a strong candidate for KM success; additionally employees may
require training in the concept of sharing information and coaching on the benefits of sharing
knowledge. Indeed, Siemienuch and Sinclair (1999) note that doing is the important part; they
recommend utilising small-scale demonstrators to provide learning and tacit knowledge.
Hence, according to Marsoulas (1998), for skill learning to be effective, employees need
opportunities to practice. They need consistent rewards for correct responses. However, it is
frequently the case that reward mechanisms are not oriented towards positive motivation.
O’Dell and Grayson (1998), assert that organisations should focus on creating or changing
the reward system to encourage sharing and transfer. They note that leadership can help by
promoting, recognising and rewarding people who model sharing behaviour as well as those
who adopt best practices. Thus, it is effective to design approaches that reward collective
improvement as well as individual contributions of time, talent and expertise. This highlights
the importance of supporting the shift from individual learning to collective learning for
organisational benefit and supports the need to encourage people to take responsibility for
voluntarily participating in the activity of sharing and leveraging knowledge. Evidently, if
organisational learning is to develop, then not only must there be suitable learning and
change management processes instituted in the organisation, but there must also be leadership to
support the KM initiative. Hansen et al (1999) maintain that only strong leadership can
provide the direction a company needs to choose, implement and overcome resistance to a
new KM strategy. Without this guidance, there is a danger that semi-autonomous teams will
overdevelop their working cultures and procedures for creating knowledge in isolation to
address dilemmas rather than collaborating to address the longer strategic business needs
(Siemienuch & Sinclair, 1999). It is up to leaders to encourage collaboration across boundaries
of structure, time and function. KM programmes have also been found to benefit from senior
management support, since strong support from executives is critical for
transformation-oriented knowledge projects but less necessary in efforts to use knowledge for improving
individual functions or processes. They can help to set the tone for a knowledge-oriented
culture, by communicating messages to the organisation that KM and organisational learning
are critical to the organisation’s success, provide resources for infrastructure and clarify what
types of knowledge are most important to the company (Davenport et al, 1997). As such, it is
clear that leadership is necessary for improving individual functions or processes while senior
management support is crucial for transformation/change. More importantly, both are
significant for employee ‘buy-in’ and KM success and must work concurrently.

IV. CONCLUSION

Despite employee awareness and belief in the strategic importance of KM, acceptance
of the EIM system within the organisation was slow. Given the low usage and poor press of
the system in enabling KM, it is possible to suggest that low KM success is inevitable.
However, there is a diversity of factors that can influence this outcome. Indeed, the data have
led to the identification of multiple interconnected themes explaining these findings. These
findings have a profound bearing on HRM. For instance, the ‘communication strategy’
must be considered to ensure employee buy-in and support (KPMG, 2000), but responses
indicated a lack of ‘informed communication protocols and guidelines’. For example, there
seemed to be little consistent communication about the purpose of the KM initiative, fuelling
doubt and mistrust in terms of the company’s intentions. There was also lack of system
guidelines and policies, which were found to have negative implications for employee views
regarding the system. Additionally, questions were raised as to whether everyone had
received the same literature and whether the company’s KM objectives were clear and
consistent with all other related policies, as one person stated, “there is pressure to sell
services to others in the company group yet the stated aim of the system is to make
know-how widely and freely available”. Communication is seen as a key facilitator for KM (Boone,
2001).

In addition to the emergent theme of ‘communication’, further themes that have
emerged include ‘training’. In this instance, lack of training appeared to have hampered
acceptance and uptake of EIM/KM. Generally, respondents believed the training to be
insufficient. They received training on “how to operate the system but not why and how to
manage documents”, or the purpose, benefits and value of the system. It was also stated that
there was a “need to educate staff regarding when it is most appropriate to use the system”.
The findings emphasise that the right type of training has the potential to improve user
attitudes and encourage a better understanding of the company objectives. For example,
taking a practical approach to training might be beneficial, “successful usage has been
achieved by spending half a day with individuals one by one” (Siemienuch & Sinclair, 1999).
As such, these findings suggest that training is necessary to keep company and user skills
aligned to KM needs, more specifically, training has a powerful influence on KM.
Continuous professional development is considered to be essential to professional and
knowledge workers (Robertson and Hammersley, 2000). In addition, responses suggested that
the company would benefit from a training needs analysis. Hansen et al (1999) argue that
codification and personalisation strategies require that organisations hire different kinds of
people and train them differently. Parallel to these findings, employees felt that there might
be more support for KM, if it was perceived to fit in with the company ‘reward scheme’,
individual achievements, co-operation and sharing. As one employee stated there is a “need
to institutionalise KM and make it part of staff assessments”. Therefore, it is essential for
companies to recognise that reward schemes must incorporate a ‘learn from errors’/positive
motivation approach (Schein, 1993). Moreover development programmes that allow for the
assessment, feedback, coaching and development of people at all organisational levels are
needed, that guide individuals to develop and prepare themselves for changes in their work.
Reward systems indicate what the organisation values and shape individual behaviour.
Traditional reward systems reward those who produce rather than share (Zarraga & Bonache,
2003). Rewarding knowledge sharing is, therefore, about lowering the cost of sharing or
increasing the benefit associated with that type of behaviour. Thus, it can be argued that
group incentives, promotion schemes that encourage individuals to be more collaborative and
360-degree appraisal systems can create an appropriate climate for the transfer and creation
of knowledge. Similarly, these, when applied systematically, can lead to smoother
socialisation, internalisation and externalisation of knowledge within the organisation.

The findings also demonstrated the importance of ‘cultural factors’ in achieving
effective KM (Damodaran & Olphert, 2000). As one employee stated, you “can’t change
organisational culture by imposing IT”. Instead, the need to build a new culture of team
working was highlighted, since it was noted that there was no clear forum for discussion or
resolution of dilemmas, suggesting that there could be a risk of semi-autonomous teams
overdeveloping their working culture to resolve dilemmas (Siemienuch & Sinclair, 1999).
Specific findings regarding the organisational context, within which the KM initiative was
implemented, have indicated further intervening factors such as, the number of staff
redundancies over the last few years. Whilst the company underwent numerous operational
changes including downsizing, the implications appear to have ricocheted throughout the
organisation, to the extent that they have affected perceptions of the company vision, to share
knowledge globally. Additionally, this seems to have resulted in the development of a poor
psychological contract, which might also be associated with a reticence to share knowledge.
As an employee stated, “If EIM ensures that when individuals leave their know-how remains,
then surely parting with know-how makes an individual expendable?” Therefore, for
effective sharing of knowledge, organisations must foster an environment where employees
feel free to share insights, experiences and know-how; in other words, a trustworthy
environment. Undoubtedly, culture is one of this company’s biggest barriers to KM success.
As Patch et al (2000) state, “it is arguable though, that employees may not be wholly willing
to share all of their work-related knowledge if they believe that hoarding knowledge will
assist in furthering their careers”. Tensions in the ownership of knowledge are linked to the
employment relationship with implications for power, control and reward. If employees feel

that they have not been treated with trust, and if they feel that work related commitments
have not been kept, then employees are less willing to share knowledge at work (Patch et al,
2000; Damodaran & Olphert, 2000). The findings also suggested the need to decentralise
authority where possible and to create more ownership, to achieve the necessary culture for
KM success. The findings also pointed toward a strong sense of ‘management commitment’
and ‘leadership’, as important factors for knowledge transformation. Additionally, the need to
achieve attitude change from the management team was highlighted as influential in
exploiting KM. However, one main problem was that the EIM system was not being used by
all managers but by their secretaries instead. Evidently, a key behavioural change is required
at high levels in the organisational hierarchy. Individuals need to change from the current to the
desired behaviour for KM success (Damodaran & Olphert, 2000), and this needs to start with
senior management on a practical level. Studies (Depres & Hiltrop, 1995; Horwitz et al,
2003) have found that knowledge workers tend to have high need for autonomy, significant
drive for achievement, stronger identity and affiliation with a profession than a company and
a greater sense of self direction. These characteristics make them likely to resist command
and control imposition of views, rules and structure. Furthermore, findings suggested that
‘leadership skills’ were necessary to support the KM initiative; respondents stated, “we need
leadership on a practical level, someone to help us develop skills to support knowledge
sharing”. For example, employee responses further indicated that the system was used most
successfully when the work groups had a strong need for EIM and most importantly when
departmental heads demonstrated commitment to the system. The employees expected that
team leaders would provide “coaching on the benefits of sharing knowledge”, training on
how to use the system, and practical support to assist them in adapting to new ways of
working, i.e. team working and virtual collaboration (McDermott, 1999; Hansen et al, 1999).
Despite this expectation responses highlighted the lack of leadership, drive and influence. By
using a case example the paper establishes how HRM practices in an organisation can instil a
knowledge vision and encourage knowledge sharing in an organisation. Furthermore,
appropriate HRM procedures and HRM systems are fundamental to EIM/KM success.

REFERENCES

Boone, M.E. Managing Interactively: Executing Business Strategy, Improving
Communication and Creating a Knowledge-Sharing Culture, McGraw-Hill, 2001.
Damodaran, L. and Olphert, W. “Barriers and Facilitators to the use of Knowledge
Management System.” Behaviour & Information Technology., 19, 2000.
Davenport, T. H., De Long, D. W. and Beer, M. C. “Successful Knowledge
Management Projects.” MIT Sloan Management Review, 39, 2, 1998.
Hansen, M. T., Nohria, N. and Tierney, T. “What’s your Strategy for Managing
Knowledge?” Harvard Business Review., March-April, 77, 1999.
Harrison, R and Kessels, J. Human Resource Development in a Knowledge Economy:An
Organisational View, London: Palgrave Macmillan, 2004.
KPMG Consulting. Knowledge Management Report, Annapolis/London, 2000.

EMPLOYEE PERFORMANCE EVALUATIONS
PUBLIC VS. PRIVATE SECTOR

Charles Chekwa, Troy University
cchekwa@troy.edu

Mmutakaego Chukwuanu, Allen University
mmutakaego@yahoo.com

Mike Sorial, Troy University

ABSTRACT

This study is designed to prove the alternative hypothesis that employee evaluation
systems currently used in the public and private sectors contain biases and do not adequately
reflect the true performance of the employee. To do this, this study will disprove the null
hypothesis that employee evaluation systems currently in use in the public sector adequately
reflect the performance of an employee. By correlating employee appraisals with their
perceived “best” and “worst” supervision and their perception of appraisal system accuracy,
this study will be able to assess the validity of the appraisal system in use.

I. INTRODUCTION

A management dilemma often identified by senior leaders of corporations and public
sector organizations alike is the inaccuracy inherent in Employee Performance Evaluations
(EPE) (AT&T, 1997). Due to these inaccuracies, EPEs do not provide a reliable mechanism
to support strategic decision-making (Oberg, 2004). Leaders are unable to rely on
performance evaluations when deciding who in the organization is best capable of heading up
a new program or initiative. Employee performance evaluations, especially in the public
sector, do not appear to be consistent. Employee performance evaluations may not correlate
to their promotion ability. Employees with past acceptable performance may exhibit lower
performance when moved to a new job with a new supervisor, while employees with less
than favorable evaluations can often surpass expectations of their new supervisor when
moved to a new job. How can senior managers form high performance teams to solve critical
problems without an accurate employee evaluation system? Are existing employee
evaluation systems used in the private and public sector adequate to enable the organization
to use them to provide a competitive advantage?

II. THE PROBLEM

Although performance evaluations are a great tool for building an organization
(Grote, 2000), many organizations compromise the effectiveness of this tool in favor of other,
less important, objectives. Grote notes that the “best effort” culture no longer provides for
organizational excellence. He provides a strong example when he refers to telling a 53-year-
old employee that they are doing a great job and then firing them a few months later. This
exemplifies the cruelty of a misleading evaluation system that fails to openly and honestly
deal with reality. The reality of an honest and objective system provides the framework to
improve a worker’s behavior or productivity. There is evidence that most organizations
subscribe to a myth that objective performance implies quantifiable performance (Grote,

2000). In reality, objective performance is verifiable performance. This definition provides
the basis of this study. The dictionary defines objective as uninfluenced by emotion, based
on observable phenomena. As human beings, the concept of being uninfluenced by
emotion is difficult to escape. Most human decisions are greatly influenced by emotion.
How, then, can we escape emotion to become objective?
Alternative Hypothesis. Employee evaluation systems currently used in the public and most
private sectors contain biases and do not adequately reflect the true performance of the
employee.

III. NULL HYPOTHESIS

Employee evaluation systems currently in use adequately reflect employee
performance. This research is anticipated to prove the alternative hypothesis by rejecting the
null hypothesis. To accomplish this task, this research will be designed to prove that
null hypothesis. To accomplish this task, this research will be designed to prove that
employee evaluations, in a sample of a representative public sector department, do not
significantly correlate with promotion potential and may be more closely linked to supervisor-
employee relationships instead. Thus, employee evaluation systems in use are not adequate
to support management in making strategic decisions in the Organization. An inaccurate
employee performance evaluation system eventually leads to mediocrity in employee effort,
frustrated managers, and a dysfunctional organization. These symptoms contribute to lost
productivity and diminished public trust. When implemented correctly, employee
evaluations can provide a reliable strategic tool that gives a competitive advantage to the
organization. The ultimate goal of this research is to raise awareness of this ongoing problem
and spur interest in finding methods to reverse the trend. This study will link objectivity with
consistency. In this regard, it is desired that positive and negative emotion over a period of
time will cancel out the effects of emotionalism. A performance appraisal system closely
linked with corporate strategy greatly enhances an organization’s performance. Commonly
held misconceptions about employee appraisal systems will be discussed. Possible causes and
the factors that contribute to employee evaluation system shortcomings will also be
investigated.

IV. LITERATURE REVIEW

There are four commonly held myths regarding employee evaluations: the system
provides feedback; the system is aimed at improving performance; the system standardizes
appraisals to make them objective; and the system protects employers from lawsuits for
wrongful termination (AT&T, 1997). In fact, a study by AT&T in 1997 disproved these
myths in their entirety. The study argued that these myths represent an idealized view of
appraisal systems that is achieved by a very few organizations. In fact, the study also
concluded that poor performers account for about 10% of the workforce. Therefore, it is
unrealistic to bias appraisal systems high and expect greater achievements by the workforce.

The problem of inaccurate performance evaluations is not new to contemporary


management. The 1970s and 1980s sparked this trend, owing to a popular culture in which
everyone was to be accepted (Schooman, 1988). This trend paved the way for evaluation
systems to be designed in such a way as to not offend or de-motivate anyone. Over the years, this trend
became institutionalized and deep rooted in organizational management. Past studies focused
on the escalation bias in performance appraisals found strong correlation of both positive and
negative bias (Schooman, 1988). Other studies found that inaccuracies in evaluations were
due to attributing failure to influences beyond employee control, while accepting success as

solely within the employee’s control by both the employee and their supervisor (Bernardin,
1989). Both of these examples demonstrate the willingness of the evaluation system to be
influenced by other values. Unfortunately, this value system does not take into account the effect
of diminishing the reliability of the performance evaluation as a strategic decision-aiding tool.

While management books and manuals present the employee appraisal as an objective,
rational, and accurate process, there is evidence to indicate that managers distort and
manipulate the appraisal process for political purposes. Managers often revealed that their
first priority was not the accuracy of the appraisal, but how to make use of the review process
to reward and motivate employees (Longnecker, 1987). In this regard, managers compromise
the greater benefit of an accurate evaluation to support strategic decision-making in favor of
short-term tactical goals. The supervisor’s or rater’s personal belief system is allowed to
cloud objectivity (Longnecker, 1987). Yet other studies suggest that individual employees
themselves project an impression that the rater cannot neglect. The employee protects their
self-image and in turn influences the manner in which others perceive them (Wayne, 1995).
This is a subtle psychological issue, whereby the rater may not be cognitively aware of the
image projected by the employee. How others perceive an individual can influence reality;
hence the often-used phrase “perception is reality”. Results of these studies indicate
that impression management had significant indirect impact on performance appraisals
(Wayne, 1995).

A growing number of studies now indicate that the relationship that develops between
the supervisor and their employees form the basis for differential treatment (Duarte, 1993).
Supervisors develop high quality relationships with a few employees but not others.
Supervisors then often offer preferential treatment and benefits to the high quality
relationship employees with the premise that these employees assist the supervisor in
reaching their goals and commitments.

Implicit stress theory leads individuals to make assumptions about one trait based on
another trait (Rotondo, 1995). Organization behavior research has revealed the presence of
implicit stress theory in the employee appraisal process. This suggests that the rater makes
assumptions about the level of stress the employee is subject to and judges the outcome based
on that stress. In short, an employee with an assumed high stress job may be rated more
leniently by their superior, while an employee with an assumed low stress job may have a
more critical superior. In reality, the job description should not contribute significantly to the
overall behavior outcome. An employee meeting and exceeding the requirements of their job
description should not be further appraised on the difficulty or stressfulness of that job. The
underlying assumption is that the employee’s compensation package adequately reflects the
level of effort required and further preferential arrangements in the appraisal are unnecessary
(Rotondo, 1995). A recent study showed that employees generally responded well to job
satisfaction when rated highly on their evaluations, but then job satisfaction fell steadily after
a four-year period (Blau, 1999). This provides strong evidence that inflated evaluations do not
have long lasting organizational effects. While new employees need to be encouraged to
make decisions, take some chances, and help the organization compete, continued over-
emphasis of performance capability leads to an under-valuation of the appraisal system. This
factor thus needs to be taken into account, and it diminishes the case for an escalated evaluation.

The next section describes the methodology used in this study. The design will be
outlined and the selection of the subjects will be addressed. Also, the variables will be

defined and the data collection process will be outlined. The section will then conclude
with the procedures and data analysis.

V. METHODOLOGY

In this study, a questionnaire was developed and issued to a sample population.


Statistical data was collected in an effort to disprove that employee evaluation systems
currently in use adequately reflect employee performance. The methods used for this study
were comprised of a short questionnaire (copy attached) sent to a selected population as well
as a review of the selected populations’ past employee evaluations. Agreement from the
employees as well as the Organization’s directors was first secured. The survey questionnaire
was emailed to the population along with an explanation of the reason for the study; the
organization’s management supported the participation of their employees on a voluntary
basis. The survey was requested within a 2 week period via return email or through a
conveniently placed drop box to maintain anonymity. The data was populated into a
spreadsheet to enable statistical manipulation. Analysis of variance (ANOVA) was used,
provided that all conditions exist: single factor, fixed effects, with a continuous dependent
variable. To accomplish this study, two key factors must be defined: What is adequate? What is
objective? The definition of adequate in this study was to show consistency and validity. A
consistent evaluation system in this context will be considered adequate while inconsistent
evaluations will be indicative of an inadequate evaluation system. Second, the definition of
objectivity is unbiased, or free from rater-induced biases. To accomplish this, employees’
ratings were correlated between different raters and checked for deviations in the mean and
distribution. This study reviewed past appraisals of the subjects described below, completed
questionnaires from the same subjects, and interviewed subjects whose evaluations were not
reviewed to validate the sampling technique used. After selection of the candidate pool, the
names will be removed from the record by physical means to ensure confidentiality and
openness. The data from the evaluations will be extracted and used to populate a
spreadsheet. The spreadsheet values will be manipulated and plotted to show statistical
correlation.
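The single-factor, fixed-effects ANOVA described above can be sketched from first principles. The fragment below is a minimal illustration only: the function and the ratings are hypothetical stand-ins, not the study's actual appraisal data or analysis code.

```python
# Minimal one-way (single-factor, fixed-effects) ANOVA, computed from
# first principles. Data and names below are illustrative only.

def one_way_anova(groups):
    """Return (F statistic, df_between, df_within) for a list of samples."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # Between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their own mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Made-up appraisal ratings for two rater pools (e.g. GS-13 vs. GS-14):
gs13 = [3, 4, 2, 5, 3, 4]
gs14 = [4, 5, 4, 3, 5, 4]
f, df1, df2 = one_way_anova([gs13, gs14])
print(f"F({df1},{df2}) = {f:.3f}")
```

A large F relative to the critical value for (df_between, df_within) would indicate that the pools' mean ratings differ by more than within-pool variation can explain.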

The population used in this study will be from the public sector. The public sector
selected will be from the Department of Defense. In particular, this study examined
knowledge and management workers. Candidates chosen from this population were in the
middle of their careers as knowledge workers. Subjects’ experience levels ranged from 10
to 20 years. Samples will be drawn from about 110 engineers and 50
program managers. A minimum 30 percent response rate is required to obtain statistical
significance of both groups.

VI. RESULTS

Forty-two surveys were collected, just short of the required sample of fifty. The
survey data consisted of fundamentally two populations: a GS-13 pool, of which there were 31
respondents, and a GS-14 pool, of which there were 11 respondents. Due to time limitations,
the deadline for data collection was set and additional solicitations were not possible. This
collected sample was sufficient to proceed with analysis. Further work must be accomplished
to statistically validate the result of this relatively small sample.

Question number 2 of the survey was an assessment of the employee’s perception of
the evaluation system. The results showed an average of 2.84 for GS-13’s and 2.73 for GS-
14’s on a 5 point scale as shown in Attachment 3. The standard deviation was observed at
1.25 and 0.79 respectively. This is an indicator that there is general consensus about the
results of this question. Thus, most employees, regardless of grade, perceive the appraisal
system as between OK and a modest dislike. However, GS-14 respondents show a population
where the Gaussian distribution is slightly shifted to the right. None of the GS-14
respondents liked the evaluation system “very much”. This may be due to the lower sample
size collected in the GS-14 population, or it may be an indication that as the employee’s
grade increases, they search for a more meaningful evaluation system. For the purpose of
this study, most respondents believe the system is adequate to accomplish the evaluation of
the employees.

VII. CONCLUSIONS

This study did not disprove the null hypothesis; rather, it provided data that
supported the null hypothesis. Of a sample of 165 knowledge workers at mid career level, 30
percent responded to a survey within a three week period. The small sample size was
sufficient to perform data analysis to evaluate the hypothesis and to draw some organizational
conclusions. Most employees, regardless of grade, perceived the appraisal system as between
OK and a modest dislike. The GS-14 population had higher appraisal ratings with a tighter
standard deviation than the GS-13 population; higher capability employees received
promotions. Higher grade employees demand more appraisal quality. Both populations
perceive the appraisal system to be accurate. The data collected showed that GS-14
employees attributed accuracy in the appraisal when their rating was high, and lower
accuracy when their rating was lower. This was not the case with the larger GS-13
population, which viewed rating accuracy independently of the achieved rating. The dyadic
supervisor-employee relationship had a more significant impact at the lower grade than
at the higher grade. Among the GS-13’s, a “bad” job was attributed to having a “bad”
supervisor in the employee’s perception. The GS-14’s were not critical of their supervision.
Finally, since the employees acknowledge low appraisals, and since the employees as a
population find the appraisal system to be accurate from the analysis of question 5, it can be
concluded that the appraisal system in place at this organization is accurate. This study
should be expanded, however, to include additional geographic locations within the same and
with different organizations.

REFERENCES

Blau, G. (1999). “Testing the longitudinal impact of work variables and performance
appraisal satisfaction on subsequent overall job satisfaction.” Human Relations, 52(8),
1099-1113.
Bolino, M., & Turnley, W. (2003). “Counternormative impression management, likeability,
and performance ratings: the use of intimidation in an organizational setting.” Journal
of Organizational Behavior, 24(2), 237-250.
Bernardin, J. (1989). “Increasing the accuracy of performance measurement: a proposed
solution to erroneous attributions.” Human Resource Planning, 12(3), 239-251.
Duarte, Neville T., Goodson, Jane R, and Klich, Nancy R. (1993). “How do I like thee? Let
me appraise the ways.” Journal of Organizational Behavior, 14(3), 239-249.

Ferris, G., Judge T., Rowland K., and Fitzgibbons, D. (1994). “Subordinate influence and the
performance evaluation process: a test of a model.” Organization Behavior and
Human Decision Process, 58(1), 101-136.

TO REPORT OR NOT REPORT: A MATTER OF GENDER AND NATIONALITY

Wanthanee Limpaphayom, Eastern Washington University at Bellevue
wlimpaphayom@ewu.edu

Paul A. Fadil, University of North Florida
pfadil@unf.edu

ABSTRACT

This paper seeks to examine the influence of nationality and gender on the
effectiveness perception of sexual harassment reporting behaviors. Although gender did not
impact whether the respondents viewed reporting harassment behaviors as effective, the
influence of nationality was strongly supported. Specifically, we found that the US subjects
were more likely than Thai subjects to view victims’ decisions to report sexual harassment
behaviors as more effective in stopping these unwanted advances than other types of
responses. The practical implications and limitations of this study are also discussed.

I. INTRODUCTION

One of the more perplexing, unanswered questions in the sexual harassment literature
is why more people are not reporting harassment incidences (Loy and Stewart 1984; U.S.
Merit Systems Protection Board 1995). Sexual harassment is a serious and costly problem
that occurs in many countries (Fitzgerald, Drasgow et al. 1997; Maatman, 2000; Ismail and
Lee, 2005). However, since there is little agreement as to what actions truly constitute sexual
harassment behaviors (O’Connor et al., 2004), victims tend to respond to each harassment
incident differently.

Victim responses are important in the sexual harassment process because they
significantly alter the situation by either stopping or facilitating more harassment behaviors
(Dan, Pinsof et al., 1995). Failure to promptly respond to harassment usually causes the
victim to lose credibility (Seagrave, 1994), as coworkers begin to doubt whether the
harassment ever occurred at all. Ignoring the harasser may be interpreted as the victim
accepting or even welcoming further harassment in the future (Perry et al., 2004). In addition,
this inaction may complicate the legal process and negatively impact the
victim’s case (Fitzgerald et al., 1995). Since “failure to report” is one of the most intriguing
problems of the sexual harassment phenomena, the current authors chose to investigate this
particular issue.

The purpose of this paper is to explore the demographic factors that influence a
victim’s perception of the decision to report sexual harassment behavior. Specifically, does a
victim’s gender or nationality establish whether they consider reporting sexual harassment
behaviors an effective response? The remainder of this paper will review sexual harassment
responses and the determinants of those responses before developing and empirically testing
hypotheses. The results of this investigation are examined and the practical implications and
limitations of this study are discussed.

II. CATEGORIES OF RESPONSES

The most common victim responses to sexual harassment are indirect (Seagrave,
1994). These responses include ignoring or avoiding the harasser (Loy and Stewart, 1984).
Even though more direct approaches such as communicating with the harasser or making
complaints are more effective at ending or minimizing harassment, they tend to be used far
less frequently (Benson and Thomson, 1982; Gruber, 1989; Gruber and Smith, 1995;
Gutek and Koss, 1993). Dealing more assertively with sexual harassment (e.g., filing a
lawsuit) may result in both positive and negative consequences. On the negative side,
victims may have to cope with negative emotions, such as retraumatization and feelings of
powerlessness (Fitzgerald et al., 1995; Lenhart and Shrier, 1996; Stambaugh, 1997). On the
positive side, victims may receive compensation, settlement, personal growth, confidence,
and feelings of self-worth if they effectively confront the harassers (e.g., the victims win the
lawsuit) (Stambaugh, 1997). As a result, victims should set realistic goals and carefully
consider both the costs and benefits before they decide how to best respond to sexual
harassment (Lenhart and Shrier, 1996).

The broadest category of responses to sexual harassment involves the victim’s decision as to
whether or not to report the harassment behaviors. Most victims choose not to report the
behaviors and tend to use responses that are informal and passive. Only a small number of
victims take formal actions and report the behavior (Loy and Stewart, 1984; U.S. Merit
Systems Protection Board, 1995).

III. DETERMINANTS OF RESPONSES TO SEXUAL HARASSMENT

The victims’ decision of how to respond to sexual harassment depends on many
factors. The person has to first determine that the behaviors constitute sexual harassment
before making the decision on how to react. Sexual coercion or quid pro quo sexual
harassment behaviors are more severe (U.S. Equal Employment Opportunity Commission,
1992; Paetzold and O'Leary-Kelly, 1996) and the most identifiable and agreed upon forms of
sexual harassment (Sheffey and Tindale, 1992; Williams and Cyr, 1992; Gutek and
O'Conner, 1995).

The severity of sexual harassment behaviors has been shown to be the strongest
predictor of responses, with the more severe behaviors leading to the more direct responses
(Gutek and Koss, 1993; Gruber and Smith, 1995). On the other hand, a hostile environment
or gender harassment behaviors are more ambiguous and the perceptions of sexual
harassment under these conditions may be influenced by the victim’s gender (Baird et al.,
1995; Stockdale et al., 1995; Foulis and McCabe, 1997; Hendrix et al., 1998). Research
strongly indicates that females are more likely to report sexual harassment than males (Baker
et al., 1990; Perry et al., 2004; Jackson and Newman, 2004). On the other hand, male victims
may perceive the behavior as flattering and respond by consciously denying that the behavior
is sexual harassment (Thacker, 1996).

National culture may also play a role in influencing the responses to sexual
harassment behaviors. According to Hofstede (Hofstede, 1980; Hofstede, 1991), people from
different cultures differ in dimensions such as individualism vs. collectivism, masculinity vs.
femininity, power distance, uncertainty avoidance, and long-term vs. short-term orientation.
Collectivistic cultures, for example, may feel that seeking support and not reporting the
behavior are more effective responses, while individualistic cultures tend to prefer a more

direct approach. Another example of this cultural impact may be found in the value
dimension of power distance. High power distance cultures tend to respect and fear
authority and thereby may not perceive reporting sexual harassment behaviors as an effective
response. Low power distance cultures promote equality, even in unequal authority
relationships, and thereby may empower sexual harassment victims to perceive reporting
harassment behaviors as an effective response.

Since Thailand was identified in Hofstede’s study as both collectivistic and
possessing a high power distance, and the U.S. was found to be individualistic with a low
power distance (Hofstede, 1980; Hofstede, 1991) they were utilized as the two major cultures
of comparison in this study. Specifically, participants in this empirical investigation were
from Thailand and the U.S.

IV. HYPOTHESES

The current authors propose that when sexual harassment behaviors are severe and
easily identifiable, females are more likely to believe that reporting harassment behaviors is a
more effective strategy than males do. Also, individualistic, low power distance cultures
are more likely to identify reporting as an effective strategy than collectivistic, high power
distance cultures. Thus, the two major hypotheses that form the structure of
this study are as follows:

H1: When compared to men, women will rate reporting severe sexual harassment
behaviors as more effective.

H2: When compared to participants from Thailand, U.S. participants will rate
reporting severe sexual harassment behaviors as more effective.

V. METHODS

228 business students in a southern university in the U.S. and 260 business students
from an English-speaking university in Bangkok, Thailand completed questionnaires that
included their perceptions of vignettes depicting sexual harassment behaviors and rankings of
effective responses to such behaviors. 107 Thais were male (41.2%) while 140 U.S. students
(61.4%) were male. Over 95% of both groups were under 25 years old.

A 2 x 2 ANOVA (nationality x gender) design was used to test the proposed
hypotheses. Both independent variables, nationality and gender, are between-subject factors.
The analysis included only sexual coercion behaviors, the most severe form of sexual
harassment, where both males and females tend to perceive the behaviors similarly. An
example of this behavior is a manager threatening to cause trouble for a subordinate if the
subordinate refuses the manager’s sexual advances. The dependent variable, the effectiveness
rank of reporting sexual harassment behaviors, ranges from 1 (most effective) to 10 (least
effective).
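To make the variance decomposition behind such a design concrete, the sketch below computes the two main effects and the interaction for a balanced 2 x 2 between-subjects layout from cell means. This is an illustration under stated assumptions (equal cell sizes; invented rank data), not the statistical package or data the authors actually used.

```python
# Balanced 2 x 2 between-subjects ANOVA (e.g. nationality x gender),
# computed from cell means. Assumes equal n per cell; data are made up.

def two_by_two_anova(cells):
    """cells[i][j]: scores at level i of factor A and level j of factor B.
    Returns F statistics for factor A, factor B, and the A x B interaction
    (each effect has 1 degree of freedom in a 2 x 2 design)."""
    n = len(cells[0][0])                                  # observations per cell
    cm = [[sum(c) / n for c in row] for row in cells]     # cell means
    a = [sum(row) / 2 for row in cm]                      # factor A marginal means
    b = [(cm[0][j] + cm[1][j]) / 2 for j in range(2)]     # factor B marginal means
    g = sum(a) / 2                                        # grand mean
    ss_a = 2 * n * sum((m - g) ** 2 for m in a)
    ss_b = 2 * n * sum((m - g) ** 2 for m in b)
    ss_ab = n * sum((cm[i][j] - a[i] - b[j] + g) ** 2
                    for i in range(2) for j in range(2))
    ss_err = sum((x - cm[i][j]) ** 2
                 for i in range(2) for j in range(2) for x in cells[i][j])
    mse = ss_err / (4 * n - 4)                            # error df = 4n - 4
    return ss_a / mse, ss_b / mse, ss_ab / mse

# Illustrative effectiveness ranks (1 = most effective), invented numbers:
cells = [[[2, 3, 2], [4, 3, 4]],   # factor A level 0: B level 0, B level 1
         [[3, 2, 3], [2, 2, 3]]]   # factor A level 1: B level 0, B level 1
f_a, f_b, f_ab = two_by_two_anova(cells)
print(f_a, f_b, f_ab)
```

A significant interaction F, of the kind the authors report below, means the effect of one factor (e.g. gender) on the rank differs across levels of the other (nationality).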

VI. RESULTS

The first hypothesis contends that women rank reporting as a more effective way to
respond to sexual harassment behaviors when compared to men. The results from ANOVA

showed no significant difference between the rankings by females vs. males (F(1,407) = 1.273, p
= .260). As a result, hypothesis 1 is not supported.

The second hypothesis suggests that the U.S. respondents will rank reporting as a more
effective response to sexual harassment behavior when compared to Thai respondents. The
result affirms that U.S. respondents rank reporting as a significantly more effective response
(F(1,407) = 6.075, p = .014).

We also found a significant interaction effect between gender and nationality (F(1,407)
= 7.978, p = .005), which was not previously hypothesized. A plot of estimated marginal
means by gender and nationality shows that U.S. females ranked reporting as the
most effective among all groups, followed by Thai and U.S. males, who were almost in
agreement. Thai females ranked reporting as the least effective among all groups (see Figure
1).

[Figure 1. Estimated Marginal Means of Reporting Sexual Coercion Rank, plotted by gender (male, female) and nationality (Thai, American); estimated marginal means range from about 2.80 to 4.00.]

VII. PRACTICAL IMPLICATIONS AND LIMITATIONS

As indicated by the results discussed above, the current authors found that nationality
makes a difference in whether a person thinks reporting sexual harassment behavior is
effective. Males and females with different nationalities tend to have different opinions
regarding the effectiveness of these harassment reports. In order for businesses to operate
smoothly in today’s global environment, sexual harassment must be prevented or resolved
effectively. However, a universal sexual harassment policy thought to be useful in the U.S.
may not work in other countries. Companies should be sensitive to the local culture and seek
to incorporate the differences in their policies and training. It may also be beneficial to
create multiple and friendly reporting channels suitable for both males and females, while
encouraging the reporting of sexual harassment incidents.

This study has a few limitations, including its use of self-report data from student
participants in only two countries. We hope that it will serve as a basis for future investigation
in work settings in many countries and with other demographic factors, thereby contributing
to a better understanding of, and efforts to mitigate, the numerous adverse effects of sexual
harassment.

REFERENCES

Baird, C. L., N. L. Bensko, et al. "Gender influence on perceptions of hostile environment
sexual harassment." Psychological Reports 77(1): 1995, 79-82.
Baker, D. D., D. E. Terpstra, et al. "The influence of individual characteristics and severity of
harassing behavior on reaction to sexual harassment." Sex Roles 22(5/6): 1990, 305-
325.
Benson, D. J. and G. E. Thomson "Sexual harassment on a university campus: The
confluence of authority relations, sexual interest and gender stratification." Social
Problem 29(3): 1982, 236-251.
Dan, A. J., D. A. Pinsof, et al. "Sexual harassment as an occupational hazard in nursing."
Basic and Applied Social Psychology 17(4): 1995, 563-580.
Fitzgerald, L. F., F. Drasgow, et al. "Antecedents and consequences of sexual harassment in
organizations: A test of an integrated model." Journal of Applied Psychology 82(4):
1997, 578-589.
Fitzgerald, L. F., S. Swan, et al. "Why didn't she just report him? The psychological and legal
implications of women's responses to sexual harassment." Journal of Social Issues
51(1): 1995, 117-138.
Foulis, D. and M. P. McCabe "Sexual harassment: Factors affecting attitudes and
perceptions." Sex Roles 37(9/10): 1997, 773-798.
Gruber, J. E. "How women handle sexual harassment: A literature review." Sociology and
Social Research 74(1): 1989, 3-9.
Gruber, J. E. and M. D. Smith "Women's responses to sexual harassment: A multivariate
analysis." Basic and Applied Social Psychology 17(4): 1995, 543-562.
Gutek, B. A. and M. P. Koss "Changed women and changed organizations: Consequences of
and coping with sexual harassment." Journal of Vocational Behavior 42: 1993, 28-48.
Gutek, B. A. and M. O'Conner "The empirical basis for the reasonable woman
standard." Journal of Social Issues 51(1): 1995, 151-166.

CHAPTER 14

INSTRUCTIONAL/PEDAGOGICAL ISSUES

CHANGING THE MEDIA IN THE MIDDLE EAST: LEBANON IMPROVES
JOURNALISM & MASS COMMUNICATION EDUCATION

Ali Kanso, University of Texas at San Antonio


ali.kanso@utsa.edu

ABSTRACT

This paper assesses journalism and mass communication programs in Lebanese higher
education. Its main objective is to determine whether students (future communication
professionals) are receiving adequate education and training before entering the job market.
The findings suggest that the study of journalism and mass communication in Lebanon is
advancing, but it has a long way to go to reach the standards of well-recognized programs in
the U.S.

I. INTRODUCTION

The media industry has witnessed many changes in recent years. More than ever
before, communication professionals have to acquire certain skills to survive in a world
characterized by stiff competition and new technologies. The Arab world cannot simply watch
the wave of new developments in media education and training; it has to accommodate these
changes. Lebanon is selected for this study for various reasons. First, the country has more
media than any other Arab state. Second, although Lebanon suffered from a civil war (1975-
1991), Israel’s occupation of the security zone in South Lebanon (1978-2000) and political
assassinations (1977-2005), the country is still considered by many people in the Middle East
as the lone shining star of personal liberty and freedom of speech. Third, relative to its size,
Lebanon has more journalism and mass communication higher education programs than
anywhere else in the Arab world. Fourth, Lebanon adopts a liberal policy in commercial,
financial and fiscal activities. Such a policy has given the country a distinctive role in the
region where Beirut has regained its reputation as a major center of publishing, banking, and
trade.

II. METHODOLOGY

This study is based on: (1) online research of journalism and communication
programs at all universities in Lebanon that offer degrees or courses in journalism and mass
communication; (2) in-depth interviews through long distance phone calls and/or email
exchanges (from July 25 to September 10, 2005) with four full-time university professors, a
director of media training center, and a news editor; and (3) professional knowledge and
experience of the author, who served as a news reporter for over five years in the Middle East,
completed his undergraduate study in public relations and advertising in Lebanon and his
graduate work in journalism and mass communication in the U.S., and has taught for 19
years at five U.S. universities. Two of the four Lebanese professors hold Ph.D. degrees in
mass communication from the U.S. A third professor has an MA degree in journalism from
France, and the other holds an MBA degree from the American University of Beirut.
Furthermore, two of the four professors had professional experience in news reporting before
they started teaching at their respective universities. One professor had expertise in media
planning and currently manages an integrated marketing communication firm. The fourth
professor had no professional experience in news media. Each of the four professors is
currently teaching different courses ranging from news reporting to news editing, media
management, advertising, public relations, marketing communication, etc. Two of the four
professors work at the Lebanese University and the others work at Notre Dame University.
LU is a public university and has the largest student body of all universities in Lebanon.
About 1500 students are enrolled annually in its College of Information and Documentation.
NDU, on the other hand, has the highest enrollment of communication majors in a private
Lebanese university (460 students). The director of the media training center holds a Ph.D.
from France. He taught journalism courses in Lebanon for over 15 years. The news editor
holds an MA degree. He is also a part-time instructor at a local university and hosts a political
program at a major television station.

III. FINDINGS

Characteristics of Journalism and Mass Communication Programs


In conducting online research, the author found that 11 of the 45 universities in
Lebanon offer programs or courses in journalism and mass communication. He also noted
that some programs are comparable to those in the U.S. In general, the Lebanese programs
are relatively few and lack some depth found in highly-recognized American universities.
While U.S. degree programs involve an integration of technical practices, theoretical studies,
professional skills and traditional liberal arts disciplines, some Lebanese programs attempt to
follow the Western model by focusing on professional training in various communication
industries. Several schools provide a combination of both theory and practice. The American
University of Technology, for example, offers programs in advertising, journalism, public
relations, radio and television. The American University College of Science and Technology,
on the other hand, has a communication arts program involving the same four areas of
concentration listed above. However, the program emphasizes students’ acquisition of broad-
based theoretical and practical knowledge of communication arts concepts. While AUST’s
general education requirements and major courses in the four concentrations resemble many
programs in the U.S., there is no evidence that the university demands internships or has in-
house media to guarantee the fulfillment of the practical aspect.

The University of Balamand
has programs in advertising, journalism, public relations, radio and television, and translation
and interpreting. Some of its practical courses are audiovisual and electronic technology,
script writing, magazine production and photography. UB also has a master’s program in
information and communication technology.

The Lebanese American University seeks to
blend theory and practice with a delicate balance of intellectual, cultural and technical
components in three emphasis areas: journalism, radio/television/film, and theater. The
university requires seniors to undergo internships in their respective emphases. LAU has
radio and television studios that provide computer animation capabilities and three theaters
that offer various dramatic experiences.

The Lebanese International University also
emphasizes theoretical as well as practical knowledge, thus seeking to prepare students to
enter the workforce with a basic set of hands-on skills in addition to critical thinking skills
necessary to function in the increasingly complex world of mass media and technology. LIU
offers five specialties: advertising, journalism, public relations, radio and television, and
translation and interpreting.

The Lebanese University prepares students with skills applicable
to television, radio, public speaking, broadcast production, public relations campaigns, and
cinematography. LU offers degrees in advertising and public relations, broadcasting (audio
visual), journalism, management of information, management of libraries and information,
and management of documentation. Starting this fall, each of these programs requires
students to complete three years of study instead of four to earn a bachelor’s degree. Along
with this change, the university has created a two-year master’s program.

Notre Dame University is another Lebanese school that integrates theory and practice.
However, NDU, unlike many other Lebanese educational institutions, requires students to do
an internship at a local or regional media outlet. Its Department of Mass Communication
encompasses three sequences: advertising and marketing, journalism and public relations, and
radio and television. The university has also a graduate program in media studies that
includes advertising, journalism, and electronic media.

A few universities offer courses rather
than degrees in journalism and mass communication. For instance, the American University
of Beirut lists a public relations course in the marketing concentration of the Business School.
AUB also provides journalistic and other forms of specialized writing courses in the English
Department. Interestingly, Haigazian University has an advertising and communication
major, much like American universities. However, this program is part of the Business
Administration and Economics School. It requires an internship and focuses on business with
three communication courses – introduction to advertising, consumer communication, and
integrated marketing. On the other hand, the Holy Spirit University of Kaslik and St. Joseph
University of Beirut offer degrees in radio and television. The former has a program in
graphic arts and advertising which requires some mass communication courses. This program
is part of the Fine Arts Department and concentrates on art techniques such as drawing,
painting, etc.

No university in Lebanon has in-house media outlets. Thus, students have no
opportunity to run a radio or television station, produce newspapers, and manage a public
relations or advertising firm.
Academic and Professional Qualifications of Faculty Members
The two professors from the Lebanese University severely criticized the hiring
process at their institution. While many faculty members have Ph.D.s, some are imposed on
the college for political or confessional considerations. Thus, nepotism plays a significant
role. The two representatives from Notre Dame University asserted that all faculty members
at their department have appropriate degrees with technical skills and training in their
respective areas such as radio, television and journalism. NDU also relies heavily on part-
timers who come from the media industry in Lebanon to teach a variety of courses such as
video production, newspaper production, newsgathering, media planning, public relations
techniques and other courses.
Relationships Between Universities and Media Organizations
The professors at the Lebanese University expressed different opinions about the
relationships between their university and media institutions. One claimed that no
relationships exist because the university does not allow full-time faculty members to work
elsewhere. Thus, a professor who teaches news writing cannot work as a reporter or editor at
a newspaper. The other professor pointed out that many administrators and faculty members
have tried hard to establish good relations with the media, and, to some extent, they have
succeeded. He also said many alumni are now news reporters or editors and they keep in
touch with their former teachers. The two professors from NDU admitted that no official
relationships with media organizations exist although the university has made serious
attempts to establish a partnership with the Lebanese Broadcast Corporation (LBC). Several
NDU students, however, do their training (internship) at LBC and some of them end up
working for the station. NDU has a solid relationship with the leading Lebanese newspaper
An-Nahar because Mr. Walid Abboud, a part-time instructor at the university, is a long-time
editor at An-Nahar. He takes senior students in journalism under his wing and helps them do
journalistic work at the newspaper.
Media Educational Research
All four interviewees believe that media educational research is almost non-existent.
The official government’s Center for Studies and Research, in collaboration with the
Lebanese University and other universities, has embarked on some research that relates to
media. The United Nations Educational, Scientific and Cultural Organization (UNESCO) has
organized conferences and workshops in an attempt to enhance journalists’ capabilities and
foster freedom of expression/press and exchange of information. In May 2005, UNESCO
conducted a two-day workshop titled “Media Ethics and Freedom of Expression.” The
workshop, which was carried out in coordination with LU and Alumni of the Lebanese
University’s College of Information and Documentation, included training sessions on
newsgathering and reporting. One NDU professor said that the situation is improving a bit
but only because world organizations and other countries are getting involved. He added,
“What they call in America ‘publish or perish’ is used here as a stick but rarely is a college
professor kicked out of a university because he or she didn’t publish.” Two professors
suggested the following to entice more research productivity among faculty: (1) use the
“stick” and not just swing it in someone’s face; (2) be serious about the need for genuine,
useful, and relevant research; (3) emphasize quality and not quantity; (4) state rewards
clearly; (5) use financial as well as academic rewards; (6) tell prospective researchers that it
is in their best interest to do research; (7) associate research with noble causes; and (8)
organize workshops and seminars on “how research is done.”
Required Changes to Meet New Challenges
The four full-time professors claimed that there has been a rapid growth in
communication programs throughout the country. They also noted that the majority of
universities are applying Western models to deal with worldwide developments. However,
they expressed a wide range of views about the necessary steps to deal with changes in the
media industry. One NDU representative asserted that the Arab world first and foremost
needs a new wave of managers, administrators, opinion leaders, rising citizens, new sets of
values and professional practices in all disciplines including the journalism profession and
other communication-related areas. He added, “We are a long way from making strides in
that direction. Yet, we cannot and must not remain stagnant or complacent in spite of the
difficulties.” He also admitted that “all changes will remain cosmetic if we don't make our
future journalists and communicators apply what they learn on the ground through a number
of well defined and carefully designed courses that sternly require a combination of theory
and practice.” The other NDU professor suggested organizing workshops and inviting more
media professionals and visiting professors from overseas. On the other hand, the two
professors from the Lebanese University said their institution has to work on two fronts to
meet new challenges: (1) evaluating the current programs in an attempt to apply new ones,
and (2) holding conferences with Arab and international organizations to reach agreements on
training programs and education exchanges.
Media Training Opportunities
Three of the four professors strongly believe that training and development of media
professionals in Lebanon lag behind international standards. The following are the reasons:
(1) lack of financial resources at the disposal of newspapers, magazines and television
stations, many of which claim financial losses; (2) resistance of some journalists to change;
(3) the government’s lack of support; (4) people’s negative perceptions of journalists in the
Middle East; (5) many news reporters do not know how to use new technology; (6) some
owners of news organizations have no background in communication and do not recognize
the significance of training; and (7) the turnover rate among reporters is high because of their
dissatisfaction with the status quo. The other professor asserted that the training of media
professionals is fine “because many of our graduates have won national and international
awards for their work in several fields.” He added, “The only difference is our media
professionals, compared to others in developed countries, have less access to information
sources and lower margin of freedom.”

While the number of universities that offer programs in journalism and mass
communication is growing, there is only one training center. Established in 1995, An-Nahar
Training Center offers training to one of four groups: An-Nahar’s employees, Lebanese
journalists, Arab journalists, and beginning journalists. The training sessions range from 3 to
14 days. Many sessions are conducted in coordination with delegates from American and
European organizations. In general, student training focuses on practical aspects of
newsgathering, reporting, and editing. The center also enhances students’ journalistic skills
and exposes trainees to new developments in the field. A common issue that the center
frequently faces is the trainees’ academic background. Dr. John Karam, Director of the
Center, noted that most educational programs in the Arab world emphasize theory over
practice. The gap is very wide between what students learn and the skills they need to acquire
in order to practice contemporary journalism. In some countries, students are taught outdated
theories. In other countries, students lack essential journalistic writing skills. His colleague
Mr. Walid Abboud shared the same view. Four years ago, Mr. Edmond Saab, executive
editor of An-Nahar and long-time communication practitioner, pointed out that he often
recruited reporters based on their qualifications and talents and not their degrees in
journalism and communication (Saab, 2002). He mentioned that some of his trainees became
presidential spokespersons, editors-in-chief or holders of other prestigious positions in
Middle Eastern media. Saab, like many of his American counterparts, deplored the poor
writing skills of journalism students.

IV. CONCLUSION

Overall, the levels of achievement or development of journalism and mass
communication schools in Lebanon are similar to those of U.S. schools 25 to 30 years ago.
Bureaucracy in Lebanon has reduced training opportunities. Several years ago the
government issued a decree to establish a training center, but as of today no one knows when
it will become a reality. A significant component in the process of developing a new breed of
journalists and communicators is to strengthen the academic curricula by: (1) adding
journalism labs, television studios and computer labs; (2) inviting distinguished scholars and
professionals from the farthest places on earth to help students in Lebanon master the new
technologies; (3) sending teachers and even students to do their training and ground work in
advanced countries; (4) allocating appropriate funds for professors to support their research
endeavors; (5) allowing full-time faculty members to work at news organizations in order to
keep themselves updated, especially in new technological trends; (6) offering scholarships to
professors; and (7) revamping programs and courses to reflect major developments in the 21st
century.

REFERENCES

Saab, E. “Professional Ethics, Media Legislation and Freedom of Expression.” Roundtable
discussion conducted at the Lebanese American University, Beirut, Lebanon, March
2002.
http://www.aub.edu.lb
http://www.aust.edu.lb
http://www.aut.edu.lb
http://www.balamand.edu.lb
http://www.hagazian.edu.lb
http://www.lau.edu.lb

THE PROTEAN CAREER MODULE: APPLIED MANAGEMENT AND FINANCE
EXERCISES FOR ASPIRING PROFESSIONALS

Angela J. Murphy, Florida A & M University


Ajmurphy7@aol.com

ABSTRACT

Hall (2004) describes protean careers as more relational, self-directed, self-defined
and cross-functional than traditional careers. This career module describes three weeks of
exercises that engage students in protean career planning by applying basic concepts from
several disciplines. Based on career and team concepts from management, students
collectively conduct career interviews and deliver presentations. They also utilize finance
concepts to describe how they would save and invest. The target audience includes
undergraduate students who want to develop life skills in career selection and financial
literacy.

I. INTRODUCTION: THE PROTEAN CAREER MODULE


Based on information on protean careers (Hall, 2004), I created the protean career
module as a set of exercises to assist students in career planning and financial literacy. The
module is protean because it builds self-awareness, focuses on self-directed learning and self-
defined success by having students explore specific careers they want to pursue. In addition,
the module applies the protean concept by utilizing relational learning when the career teams
interview graduating students and entry level professionals. The career module consists of
two major sets of activities, (i.e., career interview and personal finance), which take place
over a three week period. The module addresses the first year transition from full-time
college students to full-time professionals. The primary target audience includes
undergraduate students with little or no professional work experience, as well as students
who want to develop life skills in career selection and personal investing.

II. WEEKS ONE AND THREE: CAREER INTERVIEWS

The basis for the career interviews is the realistic job preview concept taken from the
organizational socialization model (Nelson, 1987). Realistic job previews involve
organizational representatives sharing both positive and negative information about the job or
organization with potential employees (Nelson, 1987; Phillips, 1998). It is advisable that
students engage in realistic job previews prior to joining organizations so that they have
pragmatic expectations of their jobs, organizations and managers. Research shows that when
realistic job previews are honest and comprehensive, employees develop greater
organizational commitment and satisfaction and are less likely to leave (Phillips, 1998).

Goals of the career interview activities are to learn about the job search, interview and
selection process. The skill objectives are to practice working in teams, networking and oral
communication. The first week students read the assigned career interview materials
(DeSouza & Alleyne, 2002; Hayes, 2001; Holland, 1985; Super, 1980). These readings help
clarify many of the realistic job preview issues. As a result of reading and discussing these
articles, students can ascertain the level of alignment between their desired jobs,
organizational cultures and their personal needs before finalizing their career choices.

During week one, students also form career interview presentation groups based on
similarity of career interests. Groups determine ground rules and create project plans to
maximize their performance. They work together to locate and interview two entry level
professionals and two graduating senior students. Students use the career interview questions
with entry level professionals to discover job responsibilities, needed skills and personal
goals or values (see Table I). They also have to link the professionals’ comments to theories.
For example, based on Holland’s (1985) typology, students might address how well the
interviewees’ personality types fit with the interviewees’ career choices. The other career
interviews involve graduating students who have accepted job offers or are in the final stages
of the interview process (see Table II).

Table I Professional Career Interview Questions


1. What are typical day-to-day activities associated with these positions? How many hours
do they typically work each day?
2. What do the interviewees most and least enjoy about their positions?
3. How do their personal values or goals fit with their career and organizational choices?
4. Identify and explain at least four specific skill strengths absolutely needed in these
positions.
5. Explain how the information from the career interviews connects to two career theories,
concepts or practices.

Table II Graduating Student Career Interview Questions


1. How did they locate and select the companies with whom they have interviewed?
2. How do their personal values or goals fit with their career or organization choices?
3. What type of preparation did they do for the campus interview? Give specific sources
and practices.
4. What did they experience during on-site visits, (e.g., one-on-one interviews, panel
interviews, types of interview questions)?
5. Describe three specific skills or experiences that distinguished them from other students.

The group context of the professional and student interviews is an example of
relational learning. This type of learning is often associated with protean career development
because it enhances information sharing and accessibility that can sometimes be difficult
without long term organizational membership (Hall, 2004). The entry level professionals
must currently have positions that the students would be eligible to hold immediately upon
graduation. None of the interviewees can be relatives or family friends. The rationale for the
prohibition of familiar others is to broaden their professional networks. This becomes
especially beneficial in times when the employment economy is not at its strongest (DeSouza
and Alleyne, 2002).

Week three focuses on career interview presentations. Each team has 20-25 minutes
to communicate responses to the career interview questions and incorporate career
experiences and personal reflections. Career experiences integrate creativity into the
presentations to make them interesting and informative. Past career experiences include a
courtroom etiquette skit from a lawyer career team and an own-your-own-hair salon
infomercial by a beauty career team. Students also reflect on how their career choices align
with their top three values and how their careers fit their definitions of success. These
reflections are consistent with a self-directed protean career focus because students build their
self-awareness by refining their career choices, values and definitions of success.
Presentations are evaluated on the articulation of responses from the career interview
questions and personal reflections, eye contact, clarity and intonation of speech, audience
interaction and creativity.

III. WEEK TWO: PERSONAL FINANCE ACTIVITIES

The second week of the career module covers personal finance. The objectives are
for students to learn how to save and invest their money. Given the inhibitions that
sometimes accompany the topic of money, students individually complete the finance
activities. As a way to kick off this component, the instructor presents and discusses
information with the class on savings and investment statistics. For example, based on a
Northwestern Mutual Generation 2001 study, just eight percent of college students feel very
knowledgeable about financial planning (Gillin, 2002). While people are living longer,
medical costs are escalating and social security funding is growing more questionable, fifty-
six percent of Americans are not preparing for retirement in a way that would maintain their
current standard of living in their senior years (American Saves, 2000).
Economic Analysis reports the September 2005 average personal savings rate for Americans
as negative four-tenths of a percent (Commerce Department, 2005); outlays outpace income.
Statistics from the 2001 Survey of Consumer Finances reveal that only about 63% of Whites
and 48% of racial minorities save; in addition, the median net worth of racial minorities is
about seven times less than that of their White counterparts (Aizcorbe, Kinnickell, & Moore,
2003). Evidence suggests this is partially due to lower rates of stock ownership by minorities
(Aizcorbe, Kinnickell, & Moore, 2003). The lack of adequate savings and financial
knowledge prevents average Americans, particularly racial minorities, from quickly paying
off debts and fully taking advantage of investment opportunities. Vehicles, such as the
personal finance component of the career module, become important tools to expose people
to stocks or mutual funds as viable investment options.

After the instructor shares financial literacy information with the class, students read
and discuss financial literacy readings (Brown, 2001a, 2001b; Linton, 2002; McKinney,
2001; Mintzer & Mintzer, 1999; Social Security On-Line, 2005a, 2005b). These materials
explain why short and long term investments are important and describe the status of social
security. In addition, students use this content to define
basic concepts such as risk tolerance, stocks, certificates of deposit and diversification. The
present protean career landscape suggests that workers need short term reserves to cover
emergencies and long term funds to maintain their standard of living in retirement (Brown,
2001a, 2001b; Hall, 2004; McKinney, 2001).
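The case for the long-term set-aside rests on compounding. As a back-of-the-envelope illustration (the salary, return rate and horizon below are invented for the sketch, not figures from the module or its readings), the future value of a fixed monthly contribution can be computed as follows:

```python
def future_value(monthly_contribution, annual_rate, years):
    """Future value of a fixed monthly contribution, compounded monthly."""
    r = annual_rate / 12.0          # periodic (monthly) rate
    n = years * 12                  # number of deposits
    return monthly_contribution * (((1 + r) ** n - 1) / r)

# Hypothetical: 10% of a $36,000 after-tax salary is $300/month; at an assumed
# 7% nominal annual return over a 40-year career this grows to roughly $790,000,
# more than five times the $144,000 actually contributed.
nest_egg = future_value(300, 0.07, 40)
```

A concrete comparison like this, run with each student’s own estimated salary, can make the 10% rule far more persuasive than the statistics alone.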

Table III Personal Finance Questions


1. What is the current social security retirement age? What will it be when you retire?
2. Based on the readings, what is a current issue regarding the solvency of the social
security? Explain. How might this impact your retirement plans?
3. Based on your desired lifestyle after college, determine a realistic monthly expense
budget. Set aside two months of total expenses for a short term (ST) savings fund, (i.e.,
rainy day fund). What ST financial instrument would you use to invest your savings?
What is the interest rate? What is the name of the financial institution where you would
put your money?

4. Go to Salary.com or a similar salary website to estimate your starting salary. Subtract the
associated federal and state taxes. Set aside 10% of your salary after taxes for long term
(LT) savings, (i.e., retirement). Specifically identify at least two stocks or mutual funds
to invest your savings. What are the year-to-date, three year and five year rates of return?
What is the name of the financial institution where you would put your money? How
does your choice of LT instruments fit your risk tolerance?
5. Based on monthly expenses and starting salary in questions 3-4, do you have enough
money to set aside the recommended amounts for ST and LT savings? Show a
calculation to support your conclusion. If your calculation is positive, do nothing. If it is
negative, specifically explain how you would adjust your revenues and/or expenses to
become positive.
6. How did you incorporate feedback from trusted others (e.g., a parent, teacher, relative,
or working professional) to enhance your ST/LT investments? Note contact information.

After reading the finance materials, students answer questions on the topics of Social
Security, financial instruments, risk tolerance, short term funds, and long term retirement
savings (see Table III). Students specifically identify their official retirement age and
solvency issues associated with Social Security. After estimating monthly expenses, they
identify the financial instruments they would use for short term savings, the associated
interest rates, and the names of the financial institutions. Brown (2001a) recommends that
single adults have a short term or rainy day fund consisting of three to six months' expenses
in case of unexpected situations (e.g., downsizings, family deaths). Students only have to set
aside two months because the career module timeframe only covers their first year as
working professionals. As it relates to long term investments, students estimate their starting
salaries then follow Linton’s (2002) suggestion to set aside 10% of their revenue to
adequately fund their retirement. Next, students determine if they will have enough money to
pay their regular expenses and also fund investments at the recommended levels.
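
The feasibility check in questions 3 through 5 can be sketched as a short calculation. All figures below (salary, tax rate, and expenses) are hypothetical assumptions for illustration; the module has students look up their own values:

```python
# Feasibility check for the module's savings targets (questions 3-5).
# Every figure here is hypothetical, not a value from the module.

monthly_expenses = 2500.0    # student's estimated monthly budget
annual_salary = 48000.0      # from a salary website such as Salary.com
effective_tax_rate = 0.25    # combined federal and state rate, assumed

monthly_take_home = annual_salary * (1 - effective_tax_rate) / 12

# Long-term (retirement) target: 10% of after-tax income (Linton, 2002).
lt_monthly = 0.10 * monthly_take_home

# Short-term (rainy day) target: two months of expenses, here spread
# evenly over the first working year.
st_monthly = (2 * monthly_expenses) / 12

surplus = monthly_take_home - monthly_expenses - lt_monthly - st_monthly
print(f"Monthly take-home: ${monthly_take_home:,.2f}")
print(f"Surplus after savings targets: ${surplus:,.2f}")
if surplus < 0:
    print("Shortfall: adjust revenues and/or expenses (question 5).")
```

With these illustrative numbers the calculation comes out negative, which is exactly the case question 5 asks students to resolve.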

III. CONCLUSION

Pfeffer and Fong (2002) reference the need for business schools to be more
interdisciplinary in their teaching approach to enhance the connection between business
education and workplace practice. The career module implements their suggestion by
creating protean career exercises that integrate self-awareness, relational learning and cross-
functional knowledge. Students raise their self-awareness by reflecting how well the
proposed entry level positions align with their skills and values. They also engage in
relational learning in the career interview presentation teams. In addition, students use
knowledge of management and finance to complete the career interviews and personal
finance activities. Students entering the work world will face multi-disciplinary problems,
work in teams and need self-awareness to best direct their own careers. It is important that
they begin to develop or refine the capacity for these types of situations or skills sooner
versus later.

REFERENCES

Aizcorbe, Ana M., Kennickell, Arthur B. and Moore, Kevin B. Recent Changes in
U.S. Family Finances: Results from the 1998 and 2001 Survey of Consumer
Finances. Retrieved November 28, 2005, from
http://www.federalreserve.gov/pubs/bulletin/2003/0103lead.pdf, 2003.

America Saves. Most Behind in Retirement Saving. Retrieved November 28, 2005,
from http://www.americasaves.org/back_page/retirement.cfm/, 2000.
Brown, Carolyn M. “The Power of One.” Black Enterprise, 32(3), 2001a, 93-98.
Brown, Carolyn M. “Solo Parenting.” Black Enterprise, 32(3), 2001b, 101-106.
Department of Commerce Bureau of Economic Analysis. Personal Income and
Outlays: October 2002. Retrieved November 28, 2005, from
http://www.bea.gov/bea/newsrel/pinewsrelease.htm, 2002.
DeSouza, Winifred and Alleyne, Sonia. “Getting a Foothold on Your Career.”
Black Enterprise, 32(7), 2002, 105-110.
Gillin, Eric. Generation Y Flunks Finance 101. Retrieved November 28, 2005, from
http://www.thestreet.com/markets/ericgillin/10007059.html/, 2002.
Hall, Douglas T. “A Protean Career: A Quarter-Century Journey.”
Journal of Vocational Behavior, 65, 2004, 1-13.
Hayes, Cassandra. “Choosing the Right Path.” Black Enterprise, 31(9), 2001, 108-113.
Holland, John L. Making Vocational Choices: A Theory of Vocational Personalities and
Work Environments. Englewood Cliffs, NJ: Prentice-Hall, 1985.
Linton, Clifton. A Decade-by-Decade Guide to Retirement Planning. Retrieved
November 28, 2005, from http://www.401khelpcenter.com/mpower/feature_110702a.html,
2002.
McKinney, Jeffrey. “For Richer or Poorer?” Black Enterprise, 32(3), 2001, 109-113.
Mintzer, Rich and Mintzer, Kathi. The Everything Money Book. Holbrook, MA:
Adams Media Corporation, 1999.
Nelson, Debra L. “Organizational Socialization: A Stress Perspective.” Journal of
Occupational Behavior, 8, 1987, 311-324.
Pfeffer, Jeffrey and Fong, Christina T. “The End of Business Schools? Less Success Than
Meets the Eye.” Academy of Management Learning and Education, 1(1), 2002, 78-
95.
Phillips, Jean M. “Effects of Realistic Job Previews on Multiple Organization Outcomes: A
Meta-Analysis.” Academy of Management Journal, 41(6), 1998, 673-690.
Social Security On-Line. Frequently Asked Questions about Social Security’s
Future. Retrieved November 28, 2005, from http://www.socialsecurity.gov/qa.htm, 2005a.
Social Security On-Line. Social Security Full Retirement and Reductions by Age. Retrieved
November 28, 2005, from http://www.ssa.gov/retirechartred.htm, 2005b.
Super, Donald. “A Life-Span, Life-Space Approach to Career Development.” Journal of
Vocational Behavior, 16, 1980, 282-298.

TEACHING APPROACHES AND SELF-EFFICACY OUTCOMES IN
AN UNDERGRADUATE RESEARCH METHODS COURSE

H. Paul LeBlanc III, The University of Texas at San Antonio
pleblanc@utsa.edu

ABSTRACT

This study investigated the outcomes of teaching objectives and techniques in an
undergraduate research methods course. In particular, the study examined student perceptions
of their relative comfort level with performing specific research tasks during the first and
fourteenth weeks of a fifteen week semester. Results indicated that students’ comfort level
increased significantly. Whether students had conducted or participated in research as a
subject prior to the course, in general, played little role in the measured increase in research
comfort level. Implications for educators teaching an undergraduate research methods course
are discussed.

I. INTRODUCTION

For many years our department has required research methods as a core curriculum
course for our undergraduate majors. The rationale behind this requirement is multifaceted.
First, faculty believe an important learning outcome for our undergraduates is the ability to
critically examine claims about communication. Second, faculty believe training in research
skills prepares students to construct more credible claims (see Winn, 1995). Third, research
requires discipline, which in a course designed to engage students in the research process can
become a potent learning outcome. Despite faculty belief in the importance of having a
research methods course in the core curriculum, students seem to carry a relatively high
anxiety about the subject matter and task requirements in such a course. Regardless of how
the course is framed, faculty observe characteristic student fear associated with steps in the
research process, particularly with units on statistics. Therefore, an important but implied
learning objective for an undergraduate research methods course is the reduction of fear, or
the increase in comfort level, for performing specific research tasks. The purpose of this
study was to investigate the relationship of teaching outcomes to objectives and techniques in
an undergraduate research methods course.

II. REVIEW OF LITERATURE

In the social sciences, requiring undergraduate students to take a research methods
course, while not universal, is fairly common (see Thies and Hogan, 2005). For
undergraduate communication majors, the necessity for the required research methods may
not be clear, particularly if those students do not intend to go on to graduate school. As Clark
(1990) points out, communication students tend to be more apprehensive about the skills
needed for a research methods course, particularly their mathematical skills, for which they
may perceive a deficiency (see also Maier & Curtin, 2005; Winn, 1995). For these students,
the role of the teacher includes guidance in the techniques for conducting research using a
variety of teaching methods (Dobratz, 2003). To accomplish this role, the teacher has to
structure a course to promote student self-efficacy (Maier & Curtin, 2005). One such
approach to course structure was described by McBurney (1995) and Spronken-Smith (2005).
McBurney suggested using a problem-based approach to teaching research methods in which

452
students were presented with problems requiring critical thought and creativity to solve. This
pragmatic problem-based approach could achieve the overall course goal of training students
to construct more credible claims. A problem-based approach could focus on the design of a
project or proposal which requires students to both make choices for particular research
methods and to justify those choices (McBurney, 1995). Additionally, a problem-based
approach could result in a range of transferable skills which may be relevant in the workplace
(Spronken-Smith, 2005). However, as Winn (1995) pointed out, requiring a completed project
may be impractical for a variety of reasons including the amount of time required by faculty
in administering projects for larger classes. Winn (1995) and Clark (1990) suggested
requiring students to complete a research proposal which does not require data analysis on
individual student projects. Similar to those suggestions for structuring research methods
courses given above, the research methods course in the current study has structured
assignments as subsections of a research proposal. Each assignment is evaluated
independently from other assignments. However, each assignment builds upon the tasks
accomplished in the previous assignment.

The overall goal for teaching any course is to increase understanding and skill in applying
course related material to real life situations. This goal is especially important in the
undergraduate research methods course where, as noted above, students come into the
course with considerable trepidation about the course requirements. Ultimately, students
should achieve some sense of accomplishment as well as master the course material in order
to realize a level of self-efficacy necessary to overcome research anxiety. To determine if this
goal has been achieved, the following research questions are proposed:
RQ1 Is there a measurable change in undergraduate students’ perceptions of their own
abilities to critically examine and conduct research?
RQ2 Does having prior experience conducting research make a difference in comfort level
with research tasks for students enrolled in an undergraduate research methods
course?

III. METHODOLOGY

The participants were 52 students in three distinct sections of an undergraduate research
methods course. Average class size was 29.67 students; however, only those students who
completed both the pre-test and post-test surveys were included in the sample. The students
attended a large research intensive school in the southwestern United
States. Over 96 percent of the students were liberal arts majors, and approximately 86
percent of the sample consisted of communication majors. Other demographic characteristics
of the sample included classification and study concentration. Juniors comprised the largest
group (53.8%), with seniors (32.7%), and sophomores (13.5%) making up the difference.
Half of the students in the sample listed public relations as their study concentration. Other
areas of concentration reported were technical communication (23.1%), social interaction
(11.5%), and other concentrations (15.4%). Twenty-seven students (51.9%) reported having
previously conducted research, and 35 students reported having previously participated as a
research subject (67.3%). Data for the study were gathered using the Research Comfort
Level Inventory (RCLI). The RCLI contained 10 five-point Likert type statements regarding
specific steps in the research process from selecting a topic for research (statement 1) to
discussing findings (statement 10). Participants were asked to rate their relative level of
comfort with performing the specific research task from very uncomfortable (1) to very
comfortable (5). The RCLI was administered during the first (pre-test) and fourteenth (post-
test) weeks of a fifteen week semester. The reliability of the RCLI in the pre-test condition was

sufficiently high (α = .86). For the post-test condition, the reliability of the instrument was
very high (α = .90).
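
The reported reliabilities are Cronbach's α coefficients. As a sketch of how α is computed for a multi-item Likert inventory like the RCLI, here is a minimal implementation; the study's raw data are not published, so the demo responses below are purely illustrative:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondents' item scores.

    `scores` is a list of rows, one per respondent, each holding k item
    values. alpha = k/(k-1) * (1 - sum(item variances) / variance(totals)).
    """
    k = len(scores[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Three made-up respondents rating four items on the 1-5 comfort scale.
demo = [
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [3, 3, 4, 3],
]
print(round(cronbach_alpha(demo), 2))
```

Items that move together across respondents, as in this toy data, push α toward 1; the RCLI's values of .86 and .90 indicate strong internal consistency.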

IV. RESULTS

When averaging the scores for the pre-test dependent variables and comparing them
to the average scores for the post-test dependent variables, results indicated a measurable
and positive change in undergraduate students’ perceptions of their abilities to critically
examine and conduct research. Specifically, students reported more comfort in accomplishing
all ten research tasks at the end of the semester (M = 3.83, SD = .73) compared to the
beginning of the semester (M = 3.11, SD = .70), t(51) = -6.27, p < .01, ω2 = .42. Correlation
between the pre- and post- conditions was both moderate and significant (r = .33, p = .017, N
= 52).
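
The comparison above is a paired-samples t test. A minimal sketch of the computation, using hypothetical pre/post averaged RCLI scores for five students rather than the study's raw data:

```python
import math

def paired_t(pre, post):
    """Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d holds the post-minus-pre differences (sample sd, n - 1)."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    sd_d = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))
    return mean_d / (sd_d / math.sqrt(n)), n - 1  # (t, df)

# Hypothetical averaged comfort scores for five students, pre vs post.
pre = [3.1, 2.8, 3.5, 3.0, 2.9]
post = [3.9, 3.6, 3.8, 3.7, 3.5]
t, df = paired_t(pre, post)
print(f"t({df}) = {t:.2f}")
```

The study's negative t value simply reflects subtracting post from pre; the magnitude and significance test are the same either way.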

To determine whether having prior experience conducting research affected research
comfort level, a factorial Repeated Measures ANOVA was performed on each research task.
Overall, having conducted research prior to the class did not influence research comfort level.
Students, regardless of experience, reported increased comfort with each of the research tasks
with the exception of discussing findings. That is, no between-subjects effects were found.
Additionally, no interaction effects were uncovered in the analysis. The results for each test
are reported below. For the task of selecting, developing and narrowing a topic for research,
students reported a significant increase in comfort level at the end of the semester (M = 4.02,
SD = .98) compared to the beginning of the semester (M = 3.38, SD = 1.14), F(1,50) = 12.58,
p = .001, η2 = .20, Wilks’ Λ = .80. As reported above, no between-subjects or interaction
effects were uncovered. For the task of creating a plan for searching through the literature,
students’ comfort level also increased from the beginning of the semester (M = 2.92, SD =
1.03) to the end of the semester (M = 3.77, SD = 1.06), F(1,50) = 22.31, p < .001, η2 = .31,
Wilks’ Λ = .69. Table I below shows the means and standard deviations for the pre- and post-
conditions representing students’ perceptions of their comfort level with tasks 3 through 9.
Table II shows the repeated measures within-subjects effects for tasks 3 through 9. As with
tasks 1 and 2, there were no between-subjects or interaction effects for these tasks.

Table I
Central Tendencies For Comfort Level With Research Tasks
Pre-condition Post-condition
Task M SD M SD
3. Choosing search tools 3.04 .99 3.94 .92
4. Evaluating sources 3.31 1.02 3.90 1.18
5. Citing Internet sources 3.38 1.03 4.02 .98
6. Organizing the literature review 3.19 1.10 4.04 1.03
7. Developing hypotheses 2.98 1.11 3.88 .98
8. Developing methods 2.85 1.04 3.69 .90
9. Analyzing statistics 2.54 1.06 3.25 1.01

Table II
Summary Of Within-Subjects Repeated Measures Analysis For
Comfort Level With Research Tasks Pre- Versus Post-Condition
Task df F η2 p Wilks’ Λ
3. Choosing search tools 1,50 23.58 .32 .000 .68
4. Evaluating sources 1,50 10.68 .18 .002 .82
5. Citing Internet sources 1,50 15.76 .24 .000 .76
6. Organizing the literature review 1,50 20.83 .29 .000 .71
7. Developing hypotheses 1,50 27.05 .35 .000 .65
8. Developing methods 1,50 19.94 .28 .000 .72
9. Analyzing statistics 1,50 16.79 .25 .000 .75
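
The effect sizes in Table II are consistent with the identity for a one-degree-of-freedom within-subjects contrast: η2 = F·df1 / (F·df1 + df2), with Wilks' Λ = 1 − η2. A short sketch recomputing the tabled values from the F ratios (small discrepancies, as for task 8, come from rounding):

```python
def effect_sizes(F, df1, df2):
    """Partial eta-squared and Wilks' lambda for a one-df repeated-
    measures contrast: eta2 = F*df1 / (F*df1 + df2); lambda = 1 - eta2."""
    eta2 = F * df1 / (F * df1 + df2)
    return eta2, 1 - eta2

# Reported F(1,50) values for tasks 3-9 (Table II).
reported = {
    "Choosing search tools": 23.58,
    "Evaluating sources": 10.68,
    "Citing Internet sources": 15.76,
    "Organizing the literature review": 20.83,
    "Developing hypotheses": 27.05,
    "Developing methods": 19.94,
    "Analyzing statistics": 16.79,
}
for task, F in reported.items():
    eta2, wilks = effect_sizes(F, 1, 50)
    print(f"{task}: eta2 = {eta2:.2f}, Wilks = {wilks:.2f}")
```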

For the task of discussing findings, students’ comfort level appeared to increase
slightly from the beginning of the semester (M = 3.46, SD = 1.09) to the end of the semester
(M = 3.77, SD = .96), although no statistically significant difference was found either within-
subjects or between cohorts of students who had previously conducted research and those
who had not. Additionally, no interaction effects were found between having conducted
research previously and having attended a class in research methods. Several post hoc tests were
conducted to determine if other characteristics of students impacted the results. In particular,
the researcher was interested in determining if having prior experience participating as a
research subject made a difference in perceived level of comfort in performing research tasks
for students enrolled in an undergraduate research methods course. Both between-subjects
effects (F(1,50) = 4.05, p = .05, η2 = .07), and within-subjects effects (F(1,50) = 9.85, p =
.003, η2 = .07, Wilks’ Λ = .83) were uncovered for the task of selecting a topic (1). Table III
below shows the means and standard deviations for the pre- and post- conditions for task one
for students who had or had not previously participated as a research subject. No interaction
effects were uncovered between cohorts of students and condition for research tasks.
Additionally, no between-subjects effects were uncovered for research tasks 2 through 10.

Table III
Central Tendencies For Comfort Level With Selecting A Topic
Research Participant? M SD
Pre-condition
Yes 3.20 1.21
No 3.76 .90
Post-condition
Yes 3.89 1.05
No 4.29 .77

The researcher was also interested in determining if student classification made a
difference in research comfort levels for undergraduate students. No between-subjects or
interaction effects were uncovered, although a significant within-subjects effect was found
(F(1,50) = 31.04, p < .001, η2 = .39, Wilks’ Λ = .61). Finally, the researcher was interested
in determining whether the study concentration of a student made a difference in perceived
comfort levels with research tasks. Results indicated that concentration made no difference in
perceived comfort levels with research tasks, although within study concentration categories
comfort level increased from the beginning of the semester to the end of the semester
(F(1,50) = 36.91, p < .001, η2 = .43, Wilks’ Λ = .57). No interaction effects were found
between study concentration and perceived comfort level with research tasks.

V. CONCLUSION

In general, undergraduate students enrolled in a research methods course reported
increased comfort in performing research related tasks from the beginning to the end of the
semester. Results indicated that prior experience conducting or participating in research had
little or no effect on comfort level. The results were particularly interesting for data analysis
which, as noted in previous research, was the most vexing aspect of the research methods
course for students. In this study, the subjects were not required to conduct data analysis for
their research proposals. Yet, students felt more comfortable with undertaking data analysis,
perhaps as a result of observing how data analysis fits in the process of conducting empirical
research. McBurney (1995) suggested that requiring students to engage the research process
by making choices about topic, research questions or hypotheses, and method of observation
and data collection influences students to think critically about expected results and the
implications of those expected results. Engagement in the research process, with this subject
pool, appeared to promote self-efficacy. Indirectly, a major concern of faculty that students
critically examine claims is accomplished by providing for students an opportunity to view
themselves as capable of doing so. If students engage the process successfully in a controlled
classroom environment, then the secondary goal of providing students with the tools to
construct more credible claims can be met. Although many undergraduate students may not
choose to attend graduate school or pursue careers that require empirical research skills, in
the field of communication, the ability to critically evaluate information is a necessity (see
Winn, 1995). The current study only examined students’ perceptions of their own comfort
level with specific research tasks. This study did not test objectively measurable learning
outcomes such as mathematical ability. Given the pre-post design of the study, faculty
expectation of student ability to select appropriate statistical tests or analyze results in the
pre-condition may be unrealistic. Additionally, the course used in the study focuses primarily
on survey-based quantitative research methodology. Although several weeks are spent
discussing qualitative research methods and methodology, as a practical matter the course is
structured to provide maximum benefit from engagement in a research project. Tashakkori
and Teddlie (2003) provide a persuasive argument for taking a mixed-methods approach in
the social and behavioral sciences due to the complexity of human phenomena and the kinds
of questions such phenomena engender. The authors suggest that approaches to teaching
research methods in the social and behavioral sciences have to adapt to the needs of students
to understand both qualitative and quantitative approaches and the questions those
approaches are designed to answer. With a required course in a program with several hundred
undergraduate majors, a mixed methods approach has some considerable practical
limitations.

REFERENCES

Clark, Ruth A. “Teaching Research Methods.” In J. A. Daly and Associates, eds., Teaching
Communication: Theory, Research, and Methods. Hillsdale, NJ: Lawrence Erlbaum,
1990, 181-191.
Dobratz, Marjorie C. “Putting the Pieces Together: Teaching Undergraduate Research from a
Theoretical Perspective.” Journal of Advanced Nursing, 41, (4), 2003, 383-392.
Irish, Donald P. “A Campus Poll: One Meaningful Culminating Class Project in Research
Methods.” Teaching Sociology, 15, (2), 1987, 200-202.
Maier, Scott R., and Patricia A. Curtin. “Self-efficacy Theory: A Prescriptive Model for
Teaching Research Methods.” Journalism and Mass Communication Educator, 59, (4)
2005, 352-364.

McBurney, Donald H. “The Problem Method of Teaching Research Methods.” Teaching of
Psychology, 22, (1), 1995, 36-38.
Scheel, Elizabeth D. “Using Active Learning Projects to Teach Research Skills Throughout
the Sociology Curriculum.” Sociological Practice, 4, (2), 2002, 145-170.
Spronken-Smith, Rachel. “Implementing a Problem-Based Learning Approach for Teaching
Research Methods in Geography.” Journal of Geography in Higher Education, 29, (2),
2005, 203-221.

TOP 10 LESSONS LEARNED FROM IMPLEMENTING ERP/E-BUSINESS
SYSTEMS IN ACADEMIC PROGRAMS

Michael Bedell, California State University – Bakersfield
mbedell@csub.edu

Barry Floyd, California Polytechnic State University
bfloyd@calpoly.edu

ABSTRACT

This article describes ten key principles which inform the selection and
implementation of ERP software in academic programs. Factors which lead to success in
industry mirror many of the success factors in academia. In addition, academia faces some
unique challenges that differ from industry and require special attention.

I. INTRODUCTION

Schools of Business around the country are grappling with how best to integrate
today’s enterprise e-business systems into their programs and exploit their capabilities. The
debate seems to have moved from whether or not it makes sense to utilize this type of
software – the benefits and business cases are so compelling – to how do we install, support,
maintain, integrate and fully leverage such software. Importantly, academia must
acknowledge that these are complex systems that commercial organizations have often taken
years and millions of dollars to get fully up and running. Increasingly, those involved in
industry-academic partnerships that focus on integrating enterprise systems into curriculum
are finding the complexities and challenges facing academia very similar to commercial ERP
implementations. The following insights are based on the experience of two institutions that
have implemented ERP systems into their curriculum and an industry partner who has (1)
three years of reviewing these types of projects, (2) conversations with hundreds of faculty
and (3) a dozen actual implementations. These insights are coupled with current ERP
implementation issues.

II. LESSON 1: SOFTWARE ALONE IS NOT AN ACADEMIC PROGRAM

The partnership between industry and the university must be more than a simple
software donation if teaching ERP is to succeed. The acquisition of software, typically via a
grant or donation from a sponsor is the first step. For the industry partner, just handing or
“awarding” software to a school and expecting that they will get it installed, running and
integrated into courses is unrealistic. Research and white papers on the implementation of
ERP systems in industry have shown that much more must be accomplished for success to be
attained (Bedell, Bohanan, Marler, Dulebohn, 2003; Bedell & Floyd, 2002). ERP benefits
accrued by industry are a direct result of effective preparation and implementation. And so it
is in academia. Academics must be prepared for these systems in order to use them
effectively. Astute designers of curriculum know that software is not the answer, but that
“appropriate use” is. Software assists in providing the means to illustrate concepts and to
enlighten through a hands-on, learn-by-doing technique. However, this is not enough: the
educator must provide the “why” and the “how come” and then integrate these three into a
realistic curriculum. Industry partners need to understand the faculty’s point of view and

acknowledge the level of effort required to be successful in the classroom (Floyd & Bedell,
2002). They must understand that the successful academic program will be one where a
relationship exists with the vendor, not a quick tax write-off and photo in the College paper.
Each partner must understand the life cycle of acquisition and implementation, the level of
effort required by the academic in each phase, and the level of support the vendor must
provide. Key phases include a technical set-up phase, a faculty-training phase and a
curriculum development and integration phase and a ‘go live’ phase where the students use
the software.

III. LESSON 2: IDENTIFIED FACULTY PROJECT DRIVERS, WITH
INSTITUTIONAL BUY-IN, IS KEY

If you don’t have a faculty member(s) who is willing to go the extra mile to
incorporate this into the curriculum, then you will not be successful. Similarly, if that faculty
member’s efforts are not supported by the College, failure is on the horizon.
Passion and perseverance are the key ingredients to move these types of academic
collaborations forward. Regardless of all the support a vendor may provide (as suggested in
Lesson 1), there will be a burden that only the faculty can address. Finding technical resources
– staff, servers, labs - getting the Dean’s buy-in, collaborating with other departments,
revising and updating courses, and getting release time for training are some of the nuts and
bolts implementation details that just have to get done (Bedell, Floyd, Glasgow, & Ellis,
2006). These items represent the blocking and tackling of implementing complex systems.
At the same time working with a single faculty member who is passionate but does not have
the buy-in of the College’s administration can leave a program open to being cut if the faculty
member moves. Given the level of commitment that implementing an enterprise system
takes, all parties need an understanding of expectations, commitment and timeline. It helps to
have this in writing! Launching this type of initiative can provide a university a leadership
role among peers (Bedell & Floyd, 2002). However, the institution’s leadership must see the
program as adding value to its goals and be prepared to make the commitment to bring it to
fruition.

IV. LESSON 3: TRAINING, TRAINING, AND TRAINING

If the faculty member really doesn’t know the application, then there is no way that he
or she can effectively incorporate it into the curriculum. Like the old real estate adage about
location, if there is one aspect that can not be stressed enough it is training. From the industry
partner’s perspective, at the end of the day, the goal is for the school, faculty and students to
have a positive experience using the software and to walk away with a good feeling or
impression about the company. Well trained faculty who understand the features and
functionality of the systems will be better able to utilize the software in their courses, to
articulate its features and benefits, and to place the software in an industry context, and they
will be more likely to be effective product evangelists. Faculty design courses to achieve specific
learning outcomes – it is not a simple matter to determine how to take a complex suite of
software tools, integrate them into class lab exercises and know that they are supporting the
desired learning outcomes. This would be a challenge for anyone, but someone who is not
versed in the capabilities of the software itself will have no chance at being successful.

V. LESSON 4: TECHNICAL SUPPORT

When a class is taught using technology as the delivery mechanism, make sure that
there are technical people around to keep things going. Like the training issues cited above,
ensuring that the software is installed correctly and fully operational is important for the
success of the educator and for the image of the software vendor. As noted in Lesson 1,
“appropriate use” of the ERP system does not entail dealing with systems that are not fully
functional. And as faculty rely on the technical environment for the success of their
classroom, systems which fail reflect poorly on the faculty involved. They will be perceived
as teaching a class where time is spent resolving technical problems rather than learning
about key issues. In addition, the software donated by the industry partner is perceived as
poor quality and thus one of the reasons for the donation (positive interactions by students
with the software) fades away. From the industry partner’s perspective, having an up front
“qualification” or assessment process helps. Installing, setting up, optimizing and maintaining
commercial, enterprise e-business and database systems requires certain expertise. Helping
prospective participants understand the technical requirements and assessing their ability to
utilize and support these systems is a necessary step. While a full-time, dedicated Systems
Analyst (SA) or Database Administrator (DBA) may not be necessary, it is imperative that
the school looks at the technical requirements and develops a plan to provide support through
a specific technical support person.

VI. LESSON 5: THINK ERP/PROJECT SCOPE

Know the scope of the project that you wish to pursue so that you can get all the
players into place. If you intend to span organizational boundaries, then it will take more
effort! The similarity of issues between a commercial ERP implementation and faculty from
different departments and disciplines launching an ERP-focused academic program is
astounding. Who leads, what vendor to use, differences in opinion between departments,
resistance to change, the impact on technological innovation to current practices, the role of
technology in integrating inter-organizational processes – each has its counterpart in
academia (Bedell & Floyd, 2002). Issues like the impact on curriculum, resistance to
change, the impact on course sequences, resources, and the technology’s ability to break
down barriers need to be planned for. Accounting and HR may now want to work together
(Bedell, Canniff, Richtermeyer, & Weston, 2004). Operations Research may now want to
require new MIS courses as prerequisites so their students are familiar with software and data
that can be shared among courses. Such interactions will take time to work out. Technology,
egos and emotions can be a volatile mix. Like the debates in industry implementations –
vanilla or customize, accept the built in “best practices” or revise it to “how we have always
done things” - faculty will quickly recognize the implications of deploying ERP systems that
will be shared by departments. An astute institution will recognize the “change management”
implications and use the project to innovate and revitalize.

VII. LESSON 6: METRICS & ROI

Know what it will cost you so that you can know if it has been worth it. Also,
develop some measurement methods to determine success. Both sides of most industry-
academic partnerships will have to justify the investments and show some ROI. The
academics will need to document the impact on learning and show the benefits to the students
and the institution – facilitated through the development of clear, concise learning goals and
objectives and assessment (Bedell, 2003b). From the industry partners' perspective, metrics
like the number of faculty trained, the number of courses utilizing the software, and the
number of students impacted will be important. Over time, recruiting and hiring statistics are
a valuable way to show the impact of the partnership. Other areas like linkages with
commercial customers,
participation by faculty at conferences, white papers, etc. are all quantifiable items that
document the benefits.

VIII. LESSON 7: LINKAGES WITH THE ECOSYSTEM

If you build alliances with industry, you will have a place to put your graduates and
you will have folks who can come into the classroom and share their experiences. An
important aspect for today’s universities is relevance. In the world of ERP, building
relationships with the industry users of the ERP systems makes sense. These industry partners
can provide advice on curriculum issues while being actively engaged in supporting these
University initiatives through recruitment of its graduates, setting up internships, and
participating on advisory boards. In our environment, for example, a relationship was
developed with an organization that was a PeopleSoft customer. This relationship enabled
class activities to be realistic and helped to focus the classroom initiative on those
capabilities that are most often considered mission critical (Bedell, Floyd, et al., 2006). For
the sponsoring software vendor, having its customers participate in an industry-academic
partnership provides validation of its academic alliance program and investment. Programs
that "grow the pipeline" of IT talent are great, but initiatives that impact customers' key
issues – like sourcing IT resources for specific projects – are even more valuable.

IX. LESSON 8: ONGOING EFFORT

Implementing an ERP system is like a marriage, not a flirtation. To pursue this, one
needs to decide that this will be a long-term effort. Otherwise the return on investment will
be too low to participate in such a program. Sustaining any relationship – personal or
professional – takes ongoing effort. In a partnership focused on rapidly changing areas like
IT applications and education, agility and commitment will be key. Software versions
continually change, new applications come out, a school's priorities can shift, faculty
move on, or the institution's leadership changes. This commitment ties back to the
institution's commitment, but it also reflects that the relationship or project is never a
"finished" product.
The true benefits to both parties come from ongoing collaboration and interaction – new
opportunities can develop, other faculty and/or departments may want to participate, and new
tools and resources will need to be integrated into courses. Many schools have indicated that
it takes 18–24 months to fully understand and exploit all of the features of an enterprise-class
system. Given the expense and effort that launching this type of initiative requires, it is
unlikely any school or faculty member is going to invest the time if the project will only
impact a single course. There may be pilots, a phased rollout and incremental development,
but as long as there is an overall vision or goal the partners will recognize that each piece is
building toward an end goal. Like assessments of commercial ERP implementations that
indicate "you are never done," integrating enterprise e-Business applications requires a
long-term, sustained effort. From the academic side, it is important that faculty remain
interested and actively use the product. A reward structure that rewards faculty for experimenting with
and developing ERP activities is one possibility. Product exercises need to change often
enough so that classroom activities continue to provide a bridge to reality. From an
organizational perspective, some concern exists as to how to maintain the long hours of work

461
that the project team inevitably needs. The interest of project team members must be
maintained through the reward structure.

X. LESSON 9: IMPLEMENTATION STRATEGY

ERP can be implemented piecemeal (e.g., just install the general ledger application and AR,
or just HR, or just CRM), or as a suite of applications all installed at one time. The material
can also be installed for one class, for a collection of classes, or for a complete curriculum.
However, it is best to plan and to take into account the level of effort (see Lesson 1), the
faculty involvement (see Lesson 2), and the administrative involvement (see Lesson 4).
Academia, like industry, must decide how best to implement ERP applications in its
curriculum. Lesson 5 suggests that one can think of ERP as breaking down academic walls,
of building bridges across departments through shared enterprise applications, data, and
concepts. To do so, the college must develop an implementation strategy with a multi-faceted
approach that encompasses both technology and curriculum (Floyd & Bedell, 2002).

XI. LESSON 10: DATA

Key thought: to be successful, the ERP system needs to provide a data repository rich
enough to support a broad assortment of reports and analyses. Otherwise the system is
focused purely on transaction activities rather than reporting and analysis activities (Floyd &
Bedell, 2002; Wyrick, Bedell, & Rahmani, 2004). The development of a dataset is an
important task that is rarely accomplished well. Such a dataset provides a key resource that
extends the usefulness of an ERP initiative from understanding the back office, operations
view of the organization to a managerial role where data is mined and trends and important
issues are explored. No easy solution to this goal has yet presented itself, yet its importance is
critical to a strong curriculum with a management focus.

XII. CONCLUSION

The goal of implementing an ERP system in academia to create a rich learning
environment for its students is an important one. The logistics involved in its success are
difficult to master; however, the end result is ample opportunity for very rich
faculty/student learning interactions. These "learning by doing" opportunities have a
wonderful payoff for the students, institutions, and industry partners.

REFERENCES

Bedell, M. "Human Resources Information Systems." The Encyclopedia of Information
Systems. Hossein Bidgoli, Editor. Academic Press, 2003.
Bedell, M. "An Identification of the Cost Savings Resulting from an HR Information
System Implementation." American Society of Business & Behavioral Studies Tenth
Annual Conference. Las Vegas, NV, 2003.
Bedell, M., Bohanon, Y., Marler, J., Dulebohn, J. & Wyrick, C. "Integrating Enterprise
Software into HRM/HRIS Program & Curricula." HEUG Annual Conference. Dallas,
TX, 2003.
Floyd, B. & Bedell, M. "HR Management Systems in HR Classroom: Two Approaches."
Western Organization & Management Teaching Conference. Los Angeles, CA, 2002.

Marler, J., Dulebohn, J. & Bedell, M. "Using Technology to Teach Human Resource
Management." Invited presentation at the Sixty-Fifth National Academy of
Management Annual Meeting. Honolulu, HI, 2005.

PROFILES IN ELECTRONIC COMMERCE RESEARCH

Sang Hyun Kim, University of Mississippi
aiken@bus.olemiss.edu

Milam Aiken, University of Mississippi
aiken@bus.olemiss.edu

Mahesh B. Vanjani, Texas Southern University
vanjanim@tsu.edu

ABSTRACT

Research on electronic commerce has increased substantially since the mid-1990s,
and the field is a significant part of the overall MIS literature. This study is perhaps the first
to provide an overview of e-Commerce research through a systematic analysis of sample articles
published between 1998 and 2004. Results of the meta-study show leading authors and
universities in the field as well as concentrations in the research.

I. INTRODUCTION

What exactly constitutes electronic commerce is an open question. There are even
diverging opinions on the term’s contraction (e.g., “e-Commerce,” “E-commerce,”
“eCommerce,” or simply, “EC”). Terms such as i-Commerce, iCommerce, Internet
Commerce, Internet Sales, Internet Selling, Web Commerce, Online Sales, Buying and
Selling Online, eTransactions, eSales, eTrade, cyber-market, and cyber-business have also
been used to describe essentially the same activity.

Some have equated electronic commerce with electronic data processing (EDP) in
business applications because “electronic” computers were being used for “commerce.”
Using this simplest definition, e-Commerce could be considered to have been conducted for
the past 50 years. A more restricted definition is also used (Turban, et al., 2004, page 3):
“Electronic commerce describes the process of buying, selling, transferring, or exchanging
products, information, or payments over computer networks, including the Internet.”

However, businesses have been using computer communication through networks for
at least 30 years through electronic data interchange (EDI), e.g. inter-bank transfers of funds
(Alter, 2002, page 184). Others equate e-Commerce with I-Commerce, i.e., commerce
occurring only on the Internet. However, businesses use a wide variety of software
technologies on the Internet with different communication protocols including electronic mail
(SMTP/POP), file transfer (FTP), Internet relay chat (TCP/IP), and of course, World Wide
Web pages (HTTP).

We believe that the vast majority of researchers and the general populace use the term
e-Commerce to refer to business activity over the Web. Based on this definition, electronic
commerce could be considered to have begun around 1995, roughly
the same time the Netscape Navigator 1.0 browser and Internet stock tracking indices first
appeared. Similarly, research on electronic commerce first began to appear in the mid- to
late-1990s and several new journals with "electronic commerce" in their titles began to be
published. For example, the first issue of the International Journal of Electronic Commerce
appeared in the fall of 1996. Thus, the field is only about a decade old and is continuing to
mature as an area of research. Although there have been several meta-studies of publications
in the field of Management Information Systems (e.g., Alavi & Carlson, 1992; Claver, et al.,
2000; Culnan, 1987) and sub-fields such as Group Support Systems (e.g., Pervan, 1998;
Wong, 2000), to our knowledge, none have been conducted to survey this first decade of
research in e-Commerce. Little is known, for example, of who the leading researchers and
institutions in the field are, as well as what trends have occurred over the years.

This study is perhaps the first to make a systematic analysis of research in this field.
The results show that a large number of researchers at many institutions are conducting
research in e-Commerce, and to date, much of the research has been focused on overviews or
on particular applications. Of the empirical research in the area, almost half has been
conducted via surveys, indicating that more research using other methodologies may be
needed for a balanced perspective.

II. SURVEY

Although e-Commerce is interdisciplinary and articles on the topic can be found in a
wide variety of journals, we focused on four journals (International Journal of Electronic
Commerce, Electronic Commerce Research, Journal of Electronic Commerce Research, and
Quarterly Journal of Electronic Commerce) based upon availability and high-quality
rankings (Bharati & Tarasewich, 2002). Also, we concentrated on journals specifically
targeted to e-Commerce rather than journals devoted to a broader audience (e.g., MIS
Quarterly). A total of 344 articles published from 1998 to 2004 from these four leading
journals were analyzed for publication trends, author and institution involvement, and type of
research.

III. PUBLICATION TREND

The number of articles in the four surveyed journals appears to have peaked in 2002,
roughly two years after the peak of the market in e-Commerce. This lag could be due to the
delays in the publication process, e.g., reviewing, revising, and placing the manuscript in
press. It is widely recognized by most researchers that the time from initial submission of a
manuscript until the date the article finally appears in a leading academic journal often is
1-1/2 to 2 years, and in some cases, even longer. According to this analysis, the number of
articles related to e-Commerce should be on the rise again, as it has now been over two years
since the nadir of the market cycle.

IV. RESEARCH LEADERSHIP

Authors and institutions involved in e-Commerce research were counted in three
ways. A "Normal" count gave each author or institution a value of "1" for each article,
regardless of position in the list of contributors. An “Arithmetic Mean” gave each a value of
“1” divided by the number of contributors. If the institution appeared more than once (e.g.,
multiple authors from the same university), its score was summed. Finally, a “Geometric
Mean” gave each a value depending upon the position of the name in the list of contributors.
For example, the first author of three was given a value of 3/(1+2+3), the second author was
given a value of 2/6, and the third author was given a value of 1/6. If the institution appeared
more than once, its score was summed. Thus, a higher ranking with a geometric mean
indicates that the author has been a primary contributor to the research or has fewer
co-authors, and we believe that the geometric mean is a more accurate representation of a
researcher's contribution to the field. (In the related tables: AR = Arithmetic Rank,
AM = Arithmetic Mean, GR = Geometric Rank, and GM = Geometric Mean.)
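The three counting schemes described above can be sketched in code. The snippet below is an illustrative reconstruction, not the authors' actual tooling: the function name and sample author lists are hypothetical, and only the counting rules come from the text (one point per article for the "Normal" count, 1/n per contributor for the "Arithmetic Mean," and a position weight of (n-k+1)/(1+2+...+n) for the k-th of n authors in the "Geometric Mean").

```python
# Sketch of the three author-credit weighting schemes described in the text.
# The function name and example data are hypothetical; the counting rules
# follow the paper's description.
from collections import defaultdict

def credit(articles):
    """articles: list of author-name lists, one list per article."""
    normal = defaultdict(float)  # 1 point per article, regardless of position
    arith = defaultdict(float)   # 1/n per author on an n-author article
    geom = defaultdict(float)    # k-th of n authors gets (n - k + 1) / (1 + 2 + ... + n)
    for authors in articles:
        n = len(authors)
        denom = n * (n + 1) // 2  # 1 + 2 + ... + n
        for k, name in enumerate(authors, start=1):
            normal[name] += 1.0
            arith[name] += 1.0 / n
            geom[name] += (n - k + 1) / denom
    return normal, arith, geom

# For a three-author article, the position weights are 3/6, 2/6, and 1/6,
# matching the example given in the text.
normal, arith, geom = credit([["A", "B", "C"]])
print(geom["A"], geom["B"], geom["C"])  # first, second, third author weights
```

Scores accumulate across articles, so an author (or institution) appearing on several papers sums the per-paper values, as the text describes for institutions with multiple authors on one article.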

FIGURE I: RESEARCH CLASSIFICATION FRAMEWORK (FROM ALAVI & CARLSON, 1992)

In the sample, there were a total of 619 unique authors, who worked for 265 different
institutions, mostly universities. The relatively low scores in author leadership (Table I) show
that many researchers are conducting research in the field, but Dr. Yao-Hua Tan was the clear
leader during the surveyed seven-year period, based upon the geometric mean. Table II shows
the leading researchers with their associated institutions. The survey also showed that
researchers at 21 institutions have published approximately 30% of the 344 articles, with the
University of Nebraska at Lincoln the leader in the field (Table III).

TABLE I: TOP E-COMMERCE AUTHORS


Author                  AR   AM     GR   GM     Straight Rank   Straight Count
Rex Eugene Pereira 1 3.00 2 1.50 3 3
Yao-Hua Tan 2 2.75 1 2.33 1 6
Nicholas C. Romano 3 2.50 3 1.33 3 3
Robert J. Kauffman 4 2.00 4 1.17 3 3
Ross A. Malaga 4 2.00 2 1.50 3 3
Ronald E. Goldsmith 5 1.83 5 1.00 3 3
Albert L. Lederer 6 1.58 2 1.50 2 4
Judith Gebauer 7 1.50 2 1.50 3 3
Keng Siau 7 1.50 3 1.33 3 3
Michael J. Shaw 7 1.50 5 1.00 3 3
Walter Thoen 7 1.50 5 1.00 3 3
Andrew B. Whinston 8 1.33 6 0.67 2 4
Eloise Coupey 8 1.33 3 1.33 3 3
Mahesh S. Raisinghani 8 1.33 3 1.33 3 3
Ting-Peng Liang 8 1.33 4 1.17 3 3

TABLE II: TOP-RANKED AUTHORS AND THEIR INSTITUTIONS


Author                  Institution                        AR   GR   Straight Rank
Rex Eugene Pereira Drake Univ. 1 2 3
Yao-Hua Tan Erasmus Univ. 2 1 1
Nicholas C. Romano Oklahoma State Univ., Tulsa 3 3 3
Robert J. Kauffman Univ. Minnesota, Twin City 4 4 3
Ross A. Malaga Univ. of Maryland, BC 4 2 3
Ronald E. Goldsmith Florida State Univ. 5 5 3
Albert L. Lederer Univ. of Kentucky 6 2 2
Judith Gebauer Univ. Illinois, Urbana-Champaign 7 2 3
Keng Siau Univ. of Nebraska, Lincoln 7 3 3
Michael J. Shaw Univ. Illinois, Urbana-Champaign 7 5 3
Walter Thoen Erasmus Univ. 7 5 3
Andrew B. Whinston Univ. of Texas, Austin 8 6 2
Eloise Coupey Virginia Tech. 8 3 3
Mahesh S. Raisinghani Univ. of Dallas 8 3 3
Ting-Peng Liang National Sun Yat-Sen Univ. 8 4 3

TABLE III: TOP E-COMMERCE INSTITUTIONS


Institution                                   AR   AM     GR   GM     Straight Rank   Straight Count
University of Nebraska, Lincoln 1 3.83 1 9.00 2 7
University of Maryland, Baltimore County 1 3.83 7 4.17 1 8
University of St. Gallen 2 3.75 4 6.00 3 6
Drake University 3 3.33 14 2.17 5 4
Erasmus University 4 2.75 9 3.50 4 5
University of Minnesota 5 2.67 3 6.33 3 6
National Sun Yat-Sen University 5 2.67 6 4.50 4 5
Chinese University of Hong Kong 6 2.53 7 4.17 2 7
Florida State University 7 2.50 11 2.83 4 5
University of Illinois, Urbana Champaign 8 2.33 5 4.83 4 5
Syracuse University 8 2.33 8 4.00 5 4
University of Southern California 8 2.33 11 2.83 5 4
University of Texas, Austin 8 2.33 13 2.50 3 6
Iowa State University 9 2.17 8 4.00 5 4
New York University 9 2.17 12 2.67 5 4
University of Kentucky 10 2.08 11 2.83 4 5
KAIST 11 2.00 9 3.50 5 4
Georgia State University 11 2.00 14 2.17 5 4
Kent State University 12 1.67 15 1.83 5 4
George Mason University 13 1.58 10 3.00 5 4
Purdue University 14 1.53 2 6.67 4 5

V. TYPES OF RESEARCH

Using the classification framework (Figure I) developed by Alavi and Carlson (1992),
we analyzed the type of e-Commerce research by research methodology. Of the 344 articles
studied, 309 (89.83%) were classified as academic research. Initially, we identified 23
categories but then narrowed the list down to 9 (Table IV). While articles offering overviews
have declined over the years, the number of articles about mobile commerce is growing.

TABLE IV: NUMBER OF ARTICLES BY TOPIC AND YEAR

Topics                     Counts by year (1998-2004)   Total   % Total
Type of e-Commerce         3 1 17 1 4                   26      7.56%
Overview of e-Commerce     6 17 26 14 6 3               72      20.93%
e-Commerce Issues          6 12 9 10 1                  38      11.05%
e-Commerce Applications    4 12 17 20 11                64      18.60%
e-Commerce Economic        5 5 2 6 2                    20      5.81%
e-Commerce Model           11 14 9 2 5                  41      11.92%
e-Commerce CRM             1 11                         12      3.49%
e-Commerce Markets         6 13 5 2 16 10               52      15.12%
Mobile Commerce            2 4 7 6                      19      5.52%
Total                      12 13 51 75 99 52 42         344     100.00%
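As a quick arithmetic check, the shares reported in this section can be reproduced from the counts in Table IV. Everything in the snippet below is copied from the table and the surrounding text; it is a verification sketch, not new data.

```python
# Cross-check of the counts reported in this section. All figures are copied
# from Table IV and the surrounding text.
topic_totals = {
    "Type of e-Commerce": 26,
    "Overview of e-Commerce": 72,
    "e-Commerce Issues": 38,
    "e-Commerce Applications": 64,
    "e-Commerce Economic": 20,
    "e-Commerce Model": 41,
    "e-Commerce CRM": 12,
    "e-Commerce Markets": 52,
    "Mobile Commerce": 19,
}
total = sum(topic_totals.values())
print(total)  # 344, matching the sample size

# Share of articles classified as academic research: 309 of 344
print(f"{100 * 309 / total:.2f}%")  # 89.83%

# Per-topic shares reproduce the "% Total" column (e.g., Overview = 20.93%)
for topic, n in topic_totals.items():
    print(f"{topic}: {100 * n / total:.2f}%")
```

The topic totals sum exactly to the 344 sampled articles, and each "% Total" figure is the topic count divided by 344, rounded to two decimal places.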

The numbers of empirical and non-empirical papers were roughly equal, with surveys
typically being used as a research methodology, and these studies represented almost half of
all empirical research. Conceptual papers represented most of the non-empirical studies and
many papers dealt with overviews or particular applications.

VI. CONCLUSION

At the close of the first decade of research in electronic commerce, this paper presents
a summary of what has been done. The field is still young, with many researchers at a large
number of universities involved in the area, and although there are clear leaders, none are
identified as being dominant forces. Results also show that although the number of articles
has declined from a peak in 2002, they could be expected to increase soon. Overview articles
are expected to continue declining, as e-Commerce is now well understood. Other emerging
topics within the field, however, such as mobile commerce, are expected to rise dramatically.
Finally, most research has been based upon surveys. While important, we believe a greater
focus on experiments, and especially field experiments, is needed to provide balance.

There are several limitations with this first analysis of the e-Commerce literature. For
example, a certain amount of human error is certain to occur when counting author
contributions and classifying articles. Some articles may be only marginally relevant to a
topic, or might be relevant to multiple topics.

REFERENCES

Alavi, M., and Carlson, P. "A Review of MIS Research and Disciplinary Development."
Journal of Management Information Systems, 8, 4, 1992, 45-62.
Alter, S. Information Systems: The Foundation of E-business. Prentice Hall, Upper Saddle
River, New Jersey, 2002.
Athey, S. and Plotnicki, J. "An Evaluation of Research Productivity in Academic IT."
Communications of the Association for Information Systems, 3, 7, 2000, 1-20.
Bharati, P. and Tarasewich, P. "Global Perceptions of Journals Publishing E-commerce
Research." Communications of the ACM, 45, 5, 2002, 21-26.
Claver, E., Gonzalez, R., and Llopis, J. "An Analysis of Research in Information Systems
(1981-1997)." Information & Management, 37, 4, 2000, 181-195.
Culnan, M. "Mapping the Intellectual Structure of MIS, 1980-1985: A Co-citation Analysis."
MIS Quarterly, 11, 3, 1987, 341-353.
Laudon, Kenneth C., and Carol G. Traver. E-Commerce. 2nd ed. Addison Wesley, 2004.
Pervan, G. "A Review of Research in Group Support Systems: Leaders, Approaches and
Directions." Decision Support Systems, 23, 1998, 149-159.
Turban, E., King, D., Lee, J., and Viehland, D. Electronic Commerce 2004: A Managerial
Perspective. Pearson Prentice Hall, Upper Saddle River, New Jersey, 2004.
Wong, Z. "Group Support System Research in the 1990s: Leaders in Research Productivity."
Journal of International Information Management, 92, 2, 2000, 53-61.

PROCESSES FOR THE CREATION OF PERFORMANCE SCRIPTS

Paul Lyons, Frostburg State University
plyons@frostburg.edu

ABSTRACT

This paper explains in detail and demonstrates the general process of script creation
for training uses. While little research on script creation in training is available to
practitioners, the research that does exist gives credible support for script creation
applications. There is a body of research in cognition and cognitive processes that
tangentially treats script-creating behavior; however, that body of research is highly technical
and esoteric and not of practical value to most university faculty and staff, trainers, facilitators,
and HRD specialists.

I. INTRODUCTION

In much of the performance-oriented training and education that takes place in
universities, businesses, and other organizations, there is considerable activity aimed at
the creation, refinement, and practice of scripts or other scenistic approaches for achieving
goals and for the continuous improvement of performance. The performance may be
individual or group-based. The creation and use of scripts is practiced frequently in
education aimed at professional work, for example, teacher education, law, engineering,
management, social work, and nursing, although the effort is very seldom labeled script
creation.

To some, it might seem that script creation is a substitute term for behavior modeling
(see Bandura, 1977, 1986). However, the term, script creation, as explained in this paper
represents a process of activities and sequential learning that includes behavioral modeling as
one component or segment. Behavior modeling, as a feature of social learning theory,
stimulates observational and/or vicarious learning. As a form of training, it represents one
substantial part of script creation.

Many trainers and educators take the process of the creation of scripts or performance
guides for granted, and few attend to the critical dynamics of the process. Many of them do
not use the term, script, as the term may seem to convey ambiguous or, perhaps unflattering,
meanings. For example, a script suggests that we are training people to follow some pre-
determined rubric, outline, template, or job aid that prescribes performance. The purpose of
this work is to offer ideas and suggestions for HRD practitioners (trainers, facilitators) that
aim to promote optimal script-making processes that: (1) may have a very substantial
influence on individual learning, (2) contain ordered or sequenced intellectual tasks; and (3)
are at the core of several powerful educational and training designs. Some of those designs
are briefly reported in this work. As Leleu-Merviel, et al. (2002) assert, the use of script
approaches to guide the design of learning environments is becoming a more frequent
practice.

Knowing more about the process of script creation can help to ground instruction and
training design and can stimulate the facilitation of learning. Script creation, if facilitated
well, can also serve as a model for self-learning. That is, in the models expressed later in this
work, learners have a process available to them that they might use independently in other,
similar or related, situations to improve performance. As suggested by Ayas & Zeniuk
(2001), the project-based learning that script creation processes enable can assist in the
development of learning capabilities that extend into one's future. Additionally, social
cognition as expressed in the processes may enhance general social skills and accelerate peer
acceptance (Mostow, et al. 2002).

II. BACKGROUND

WHAT IS A SCRIPT?

In general, a script is a hypothesized cognitive structure that upon actuation provides a
guide to appropriate behavior sequences in a given context or situation. We tend to think of a
script as a product, output, or result of some intellectual work. It is an entity, a thing that has
been produced. Unlike a play or movie, script as used in this work is not an explicit, word-
for-word expression of what to say or do in some circumstance or sequence of events. Rather,
the script is a guide to action. While the script-as-guide is a product, the activities that
produce the script are the main focus of this paper and the activities stimulate learning in
different ways. The details of the activities are described later in this work. It is the
knowledge of the process and the dynamics of the process that adds to the repertoire of HRD
professionals.

According to Lord and Kernan (1987) scripts may serve a dual purpose: they help one
to interpret the behavior of others, and aid in generalizing behavior. Hence, they may guide
the planning and execution of familiar and/or repetitive tasks. In supervisory or managerial
work, for example, it is likely that relatively common scripts exist for a variety of activities
such as conducting informational meetings, coaching an employee, conducting performance
appraisal sessions, and the like.

Individuals with large amounts of experience in a variety of settings may have
knowingly or unknowingly developed very elaborate scripts, and these scripts assist in the
processing of information and in the accomplishment of complex tasks. The specific
examples explained later in this work demonstrate how script building may take place; how
experiential, constructivist, and transformative learning takes place as individuals or teams
work to fashion skillful approaches to managing some task.

III. THEORETICAL GROUNDING


Note: Owing to page limitations in these proceedings, this entire section is not
included here. Please contact the author for the complete paper.

IV. THE SCRIPT CREATION PROCESS

The process explained here is a general process, one that may be adapted to a variety
of training and educational purposes. This process has been adapted to several, specific
training approaches and examples of these approaches are reviewed later in this work. The
script creation process consists of several steps or components that not only aid in script
creation but also in the refinement of scripts, and in continuous improvement of script-based
work.

1. A body of information, data, etc., is offered to the individual or group. This can take the
form of a scenario, a case situation, an issue or some known problem that is currently active.
Usually, this information is in written form. It may also be represented in video material or by
direct observation. This process is facilitated by one who has attained familiarity with the
script creation approach. Two examples of a body of information are: customer service
problems in a call center, and the effective leading of an ad hoc task force. For purposes of
this paper, we will correlate the steps of the script creation process with the issue of
effectively leading an ad hoc task force.

2. Participant(s) review the information for completeness, understanding, and common
meanings. Clarifications are sought. For our task force example (see above) the participants
need some dialog with the facilitator to clarify basic terms, purposes of the task force,
responsibilities of and accountability of the task force and related matters.

3. Discussion takes place in open forum. We seek to clarify what we know and what we do
not know and/or have questions about. One objective of this discussion is to begin
recognizing information and performance gaps. At the least, we need to attain consensus
about the critical issues, assumptions, and problems that the information reveals to us. To
assist in this work we use Brady's (1996, 2004) conceptual framework
shape and crystallize our mental models of reality. That is, we want to address situational
issues using these five elements or screening tools:

Time aspects - currency, duration, etc.,
The Setting - organization, basic work processes, history, etc.,
Actors - number, distribution, roles, training, background, etc.,
Social Patterns - work relationships, friendships, networks, etc., and
Assumptions - what do we take for granted; what appears to be "true."

Superimposing these five categories of information on our task force example, we
begin to quickly fill in some of the gaps in our knowledge and understanding. That is, the
categories help us to focus on critical elements of the context and the interrelationships
among characteristics of the context.

Brady's framework helps to provide criteria for content selection and emphasis. It also
helps us to envision possible relationships among various aspects of reality. A fundamental
assumption is that participants already possess considerable knowledge and need to
reorganize it to make it more useful. Participants need to supplement knowledge with insights
and skills that will help explain more fully what they already know (Brady, 2004, p. 280).
Some consensus building takes place among participants.

4. Brainstorming potential interventions for treating the issues and problems takes place.
Once possible interventions are identified, tasks are parsed to the individual(s) or groups as
follows:
List - What is the specific behavior required to skillfully execute the application
of the intervention?
Access/Summarize - What research, authoritative information, etc., must be
reviewed and considered to successfully implement the intervention?
Reconcile - In order to put forth a tentative action plan for the intervention, it is
necessary to reconcile our behaviors list with the research information in order
to identify and recommend a more precise set of behaviors.

In this aspect of the work we need to be very clear about the differences between
performance as behavior and performance as outcomes. Another way to state this is that
activities and tasks are important and need to be well defined and understood; and the results
or outcomes of the various activities must be defined, made clear and understood (Cardy,
2004). The foundation of the script takes shape.

5. Script Identity. At this point there is identification of the behavioral script elements
necessary to address the issues or problems heretofore examined. Consensual processes have
taken place to achieve this result. The result is temporary as validation processes still need to
occur. Per our example of the task force we now have some reasonably clear notions of what
specific behaviors need to occur in order to commence effective leadership of the task force.
In essence, the first iteration of script creation is nearing completion.

6. Model the intervention. The modeling is a rehearsal of the behavioral script. Such rehearsal
requires the leadership of the facilitator. Participants actually act-out (perform) for each other
the behaviors heretofore identified. In terms of experiential learning, this phase represents
active experimentation.

7. Next, qualitative and quantitative evaluation or assessment indicators of effective, skillful
intervention practices are developed. This is relatively time-consuming, labor-intensive
mental work; however, it is necessary work in order to achieve effectiveness. Having
modeled the intervention (rehearsal) will assist this effort. Each of the actions or behaviors
may be examined in terms of skilled performance. The following questions may be helpful:
What criteria would define each element as a skilled performance? What criteria make sense
in terms of quality or output? In essence, this intellectual work stimulates all phases of the
experiential learning model.

8. Repeat the modeling of the intervention. Rehearsal. The effectiveness criteria established
in the preceding step will inform this work. At this point sufficient preparation has been
achieved so as to try out or execute the script-as-intervention in our particular setting.

The assumptions underlying all of this preparation are: improvement is continuous,
improvement is based on trial and learning, and most likely there is no "silver bullet" or one
best way to achieve improvements.

V. EVIDENCE OF THE EFFECTIVENESS OF SCRIPT CREATION

Several training and educational approaches demonstrate the effectiveness of script
creation processes. We know of few unsuccessful attempts to use these methods, as such
efforts rarely find their way into the literature. We report here, briefly, upon the
research to which we have access. Keleman and others (1993) used script creation and
management to demonstrate how group support systems can be made more effective. The
primary focus of their research was the implementation and use of group support systems in
various arrangements for problem-solving, decision making and so on. They developed an
approach that permits a facilitator of a group to enable script creation and script adjustments
on-the-fly in real time situations with problem solving groups. The approach enables better
use of time and more effective use of information by the group.

Lyons (2003, 2004) used script creation processes extensively in skill development
and performance improvement training and education. Script creation was housed within a
training design that applied skill charting activities. Skill charting is a tool that uses the
general script creation model explained in this work to help a group of employees or students
to focus very intensely on skilled, behavioral performances that attend a particular key result
area, such as customer satisfaction. In one study (Lyons, 2003) team leaders' performance of
specific supervisory and leadership skills with team members was improved from using script
creation processes in their training. In another study (Lyons, 2004) a senior management
team making use of script creation processes was able to positively influence a serious
employee turnover problem through the skillful creation of behavioral profiles of ideal work
associates.

Finally, in a recent study (Lyons, 2004) a training model was developed that made use
of hypothetical problem situations (cases, incidents) with script creation activities
superimposed on the case analysis work. The resulting approach was named Case-Based
Modeling and was used to improve the performance of team members in certain performance
areas such as skillfully managing meetings. The approach has broad applicability for training
in general supervision, management, and for higher education in business and management.
With adaptation, the approach could be used in many different situations and with many
different occupational groups.

VI. CONCLUDING REMARKS

This examination of the script-creation process shows that: (1) the sequence of
events and the general dynamics of the process are capable of being defined and, ultimately,
assessed; (2) the process, in general, provides guidance to HRD professionals in the use of a
variety of scenistic tools; (3) script-creation processes are grounded in meaningful theories
and concepts of adult learning and motivation; and (4) existing empirical research generally
supports the efficacy of script-creation processes as a stimulus for learning and change. All of
this has positive implications for HRD practice, particularly practice that involves training
and/or facilitation.

The few studies reported above give evidence that script creation processes are
effective tools for training and education. Because they heavily involve learners in the
construction of information, in the continual evaluation of information, and in the application
of information to their repertoire of possible responses to issues and problems, the training
methods are somewhat labor-intensive. The general process offered in this paper responds
very well to the motivational needs of the adult learner.

There are additional outcomes that learners may experience. For example, it is not
unusual for participants to: experience growth in interpersonal skills; learn to rely on and
better appreciate the special skills and talents other team members bring to the tasks; have
greater understandings of trust and cooperation; and, recognize they have much control and
autonomy with regard to their learning. Another such consequence may be the establishment
of reflection as an organizing process in group work (Vince, 2002). Clearly, reflection is a
fundamental feature in many steps of script-creation processes and the activity may be self-
reinforcing.

REFERENCES

Allen, S.J., and Blackston, A.R. "Training Pre-service Teachers in Collaborative Problem
Solving: An Investigation of the Impact on Teacher and Student Behavior Change in
Real-world Settings." School Psychology Quarterly, 18(1), 2003, 22-51.
Ayas, K., and Zeniuk, N. "Project-based Learning: Building Communities of Reflective
Practitioners." Management Learning, 32(1), 2001, 61-76.
Bandura, A. Social Learning Theory. Englewood Cliffs, N.J.: Prentice-Hall, 1977.
Bandura, A. Social Foundations of Thought and Action. Englewood Cliffs, N.J.:
Prentice-Hall, 1986.
Bandura, A. Self-Efficacy: The Exercise of Control. New York: W.H. Freeman & Co., 1997.
Bickford, D.J., and Van Vleck, J. "Reflection of Artful Teaching." Journal of Management
Education, 24(4), 1997, 448-472.
Brady, M. "Education for Life as It is Lived." Educational Forum, 60(3), 1996, 249-255.
Brady, M. "Thinking Big: A Conceptual Framework for the Study of Everything." Phi Delta
Kappan, 86(4), 2004, 276-281.
Bryman, A., Stephens, M., and Campo, C. "The Importance of Context: Qualitative Research
and the Study of Leadership." Leadership Quarterly, 7, 1996, 353-370.
Cardy, R.L. Performance Management: Concepts, Skills and Exercises. Armonk, N.Y.: M.E.
Sharpe, 2004.
Carver, R. "Theory for Practice: A Framework for Thinking About Experiential Education."
Journal of Experiential Education, 19(1), 1996, 8-13.
* For the complete paper and references, please contact the author.

TEACHING OVERSEAS USING A COMPRESSED COURSE DELIVERY MODULE

David R. Shetterly - Troy University
dshetterly@troy.edu

Anand Krishnamoorthy - Troy University
akrishnamoorty@troy.edu

ABSTRACT

The purpose of this paper is to present an alternative course delivery module that can
be used by universities to deliver courses to non-traditional students. Specifically, the paper
focuses on degree programs that are offered using a compressed delivery format. Qualitative
data from a major U.S. institution’s overseas educational pursuits are used to put forth the case.
This paper contributes to the academic literature by providing institutions of higher education
with information that would be useful when developing and/or reviewing course delivery
techniques at their respective universities.

I. INTRODUCTION

Troy University is a state institution of higher education with a main campus in Troy,
Alabama. However, the university’s programs outside Alabama have historically been much
larger than its in-state operations. Like other institutions such as the University of
Maryland (Rubin, 1997), Troy University offers educational opportunities to the nation’s
armed forces through programs in Europe and the Far East. The university’s outreach
programs fall under the jurisdiction of University College (UC), the department that oversees
programs delivered by Troy University outside the state of Alabama.

University College is large, serving over 20,000 students in more than 10
countries. It consists of three domestic regions and has extensive
international operations as well. The international division consists of educational programs
delivered in Germany and throughout the Pacific Rim including the U.S. overseas territory of
Guam. University College also has a military contract with Pacific Air Forces Command
(PACAF) to provide graduate degree programs on military bases.

In the current dynamic environment universities have many course delivery options
(Kennen and Lopez, 2005; Yang, 2003; Parish, 1993). The purpose of this paper is to discuss
two alternative course delivery formats in the Pacific Region and to contrast them with the
traditional 15-week semester delivery system used at many academic institutions. In
particular the authors focus on the fact that the courses are offered in a compressed format.
The authors put forth their case with respect to two programs that existed during the 1998 to
2003 contract period: Master’s of Science in Management (MSM) and Master’s in Public
Administration (MPA). This paper contributes to the academic literature by identifying
course delivery modules and the advantages and disadvantages of teaching graduate students
using a compressed course schedule.

II. MASTER OF SCIENCE IN MANAGEMENT (MSM)

Prior to August 2003, Troy University offered the MSM program at military
installations in Japan (including Okinawa), South Korea, and Guam. The program required
students to successfully complete 10 courses covering a variety of business disciplines. The
program also required students to pass a comprehensive examination involving completion of
a corporate strategic audit in six hours.

Courses were offered in both a weekday and a weekend format during a term that
lasted eight weeks. Weekday courses were offered either on Mondays and Wednesdays or on
Tuesdays and Thursdays for 8 weeks. Weekday classes met from 6:00 to 9:00 p.m. Weekend
courses were offered over three alternating weekends. The classes met on Saturdays and
Sundays from 9:00 a.m. to 4:30 p.m. Regardless
of the format, the scheduled class meetings had to total 45 contact hours as stipulated in the
contractual relationship. The university ran six terms during a given academic year. Two
courses were offered each term, one in a weekday and one in a weekend format.

There are certain unique expectations required of both students and professors when
courses are delivered in this format. It is the instructor’s responsibility to prepare a detailed
course syllabus and make it available to the students at least 2-3 weeks ahead of the start of
the term. The course syllabus needs to be more detailed than in a traditional format since
students need adequate time to prepare for the first class meeting. Unlike in a traditional
delivery module, students should not be “green” on the first meeting day of the course. The
PACAF contract stipulated that students should write a 15 to 20 page term paper in each
course. This paper needs to be written during the course of an eight-week term. Students are
generally expected to come to the first class meeting ready to discuss the paper. Hence, the
syllabus should provide detailed guidelines regarding the paper.

The instructor needs to be available to the students ahead of the start of the term.
Students may contact the instructor with questions regarding the term paper or the assigned
reading material. The syllabus should provide all relevant contact information for the
instructor with a statement concerning the preferred method of communication. If the
instructor has travel plans prior to the start of the term, checking electronic mail and
responding promptly to student inquiries is imperative.

The students need to mentally prepare themselves for a course that is delivered in this
format. The students need to realize that such courses involve heavy and often intense
workloads within a relatively short period of time. Preparing for examinations is often a
challenge. For example, coverage of material may end on Monday and the exam may be on
Wednesday. In a traditional MSM program, classes typically only meet once a week thereby
giving students a whole week, with a weekend in the interim period, to prepare for
examinations. Students also need to make a commitment to read the assigned chapters and
supplemental material for the day prior to the start of class. This enhances the learning
process, regardless of the delivery approach, and in a compressed course format it is crucial
for student success. In such a format, students who fail to read ahead may find themselves
unable to keep up with the rest of the class.

Given this format, one might wonder about the impact of the format on the learning
and retention process. Course surveys conducted at the end of each course as well as exit
surveys conducted upon successful completion of the degree program addressed this issue.
Although the results of the surveys were mixed, by and large students did not feel that the
format compromised the value of their degree.

Evidence suggests that the demographics of traditional and non-traditional students
(Trinkle, 1999; Dutton, 1999) and the way such students learn (Miller and Lu, 2003) may
differ. Therefore, the results need to be interpreted in light of the fact that the clientele of Troy University’s
Pacific Region consisted almost exclusively of military personnel and their dependents.
These individuals lead transient lifestyles with unpredictable work schedules. Academic
institutions that wish to adopt this format need to consider the possibility that such a format
may not be well received by a more stable clientele who might prefer a more relaxed pace.

III. MASTER OF PUBLIC ADMINISTRATION

Prior to August 2003, Troy University offered a Master’s in Public Administration
(MPA) at locations in mainland Japan, Okinawa and South Korea. The curriculum included a
ten-course and a twelve-course option. The ten-course option included a six-hour
comprehensive examination as the outcome assessment measure while the twelve-course
program employed a capstone course for such purposes. As with MSM students the MPA
program was structured so students could complete program requirements in a 12 month
period. Each academic year consisted of six eight-week terms with two courses offered per
term. What distinguishes the public administration curriculum from other curricula offered in
the region such as the MSM is the format under which courses are delivered (TABLE I).

TABLE I – EIGHT-WEEK COMPRESSED COURSE SCHEDULE

           Week 1    Week 2    Week 3    Week 4    Week 5    Week 6    Week 7    Week 8
Course A   Prepare   Prepare   Contact   Wrap Up   Wrap Up   Wrap Up
                               Time
Course B                       Prepare   Prepare   Prepare   Contact   Wrap Up   Wrap Up
                                                             Time

Each course within a term is 6 weeks in length, with a staggered start. The first course starts
at the beginning of the term, while the second course starts approximately two weeks into the
term. The student contact time, a total of 45 hours, is compressed into a nine-day period that
includes two weekends. A typical schedule for the contact time runs from 8:00 a.m. to 5:00
p.m. on Saturday and Sunday of the first weekend. Students have Monday off and then have
classes on Tuesday, Wednesday, and Thursday evenings from 6:00 to 9:00 p.m. Friday is an
off day followed by an 8:00 a.m. to 5:00 p.m. schedule on Saturday and Sunday of the second
weekend.
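As a quick arithmetic check, this nine-day layout does reach the stipulated 45 contact hours, assuming each 8:00 a.m. to 5:00 p.m. weekend day is counted as nine contact hours (how breaks are treated is our assumption, not stated above):

```python
# Hypothetical tally of the nine-day MPA contact schedule described above.
weekend_days = 4           # Saturday and Sunday on two weekends
hours_per_weekend_day = 9  # assumes the full 8:00-to-5:00 block counts as contact
evening_sessions = 3       # Tuesday, Wednesday, Thursday evenings
hours_per_evening = 3      # 6:00 to 9:00 p.m.

total_contact_hours = (weekend_days * hours_per_weekend_day
                       + evening_sessions * hours_per_evening)
print(total_contact_hours)  # 45, matching the contractual requirement
```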

From a student perspective the format provides time on the front end to prepare for
the contact time, with the contact time occurring roughly in the middle of each course,
followed by time at the end to wrap up course requirements. During the preparation phase,
students are responsible for assigned readings and meeting other course requirements. The
wrap up period normally includes finalizing a research paper or taking a final examination.
The schedule provides the opportunity for students with demanding travel requirements to
compress their contact time into a nine-day period. This is invaluable for military members
with demanding training and operational requirements. It enables students to start and
complete courses that they might not otherwise be able to accomplish.

In terms of pedagogical approach, the compressed schedule might be described as a
mixed model. It allows for the full amount of contact time (45 hours) of a conventional
course while providing flexibility to the student in the preparation and wrap up phases (both
may be done in more of a distance learning format). From an instructor perspective the
format requires finding meaningful ways of helping students prepare for the contact time,
keeping students focused during the contact time, and being available to support student
needs during the wrap up phase. During the preparation phase the key concern is building
adequate incentives into the course so students actively prepare for the compressed contact
time period. The key during the nine-day contact time period is to keep students actively
engaged in the course material. Finally, during the wrap up period, support must be provided
to the student to assist in completing research papers and other assignments, including exams
from the contact period.

The format is more conducive to some courses than to others. Courses with independent, or at
least moderately independent, modules work best because they can be treated as discrete units.
Examples of courses compatible with the approach are organization theory, public policy, and
human resource management. Such courses fit the format well because their elements build
only moderately upon one another. This benefits students because if they miss a module of
instruction they can return to class with a relatively seamless transition. On the other hand,
courses whose modules have a dependency relationship are less conducive to the format.
Examples of such courses include research methods in public administration and economics
for public managers. These courses have modules that build upon one another. If a student
misses a particular module, it creates a more difficult learning situation for that student upon
return to the class.

IV. ADVANTAGES AND DISADVANTAGES OF A COMPRESSED SCHEDULE

Preparation is a key requirement with any form of compressed schedule. A typical
traditional course delivery schedule involves one class meeting per week over a 15-week
period. A key advantage to the traditional system is that preparation for class meetings is
spread out over an extended period. In short, both student and instructor can think
incrementally. Incremental thinking does not work well with a compressed course schedule.
At the start of the contact time the instructor must have all modules of instruction ready to go
and the student must be fully prepared to actively engage in the learning process. The
instructor should think of the syllabus as an incentive system to promote student preparation
for the contact time. For example, all courses will have assigned readings. A quiz can be
administered during the first day of the contact time that covers material from the assigned
readings. A quiz of adequate weight (10-15 percent of the course grade) provides a powerful
incentive for students. Another approach is to include graded assignments that must be
completed during the preparation period. For example, a written exercise involving an
Internet search or a paper summarizing an article from a course-related academic journal
are possibilities. In short, incentives matter. Graded exercises during the preparation phase
keep the students engaged with the course material.

Contact time variety is also an important concern with a compressed schedule. A
traditional semester class would meet three hours a week over a 15-week period. This is an
advantage in that it is easier to keep students engaged in learning activities. Under a
compressed schedule keeping students engaged during the contact period is a huge challenge.
Back-to-back weekends with nine-hour days are a challenge to instructor and student alike.
Variety in the delivery of course material is essential. Video cases, written cases, class exercises,
computer labs, and student presentations add variety to the learning experience.

An advantage of a 15-week schedule is that it offers time for adjustment and smooth
transitions from one learning module to another. With a compressed schedule there is little
time for adjustment once the contact time starts. The key is to prepare wisely: once the
contact time starts, the instructor must be fully prepared to deliver quality instruction in a
seamless manner. The corollary is that some adjustment is nevertheless inevitable; the
best-developed plans must often be changed, and the challenge is to make constructive and
expeditious adjustments in the little time available. This is particularly critical for the
weekend sessions. Transitions deserve special attention. Some courses require more
emphasis on the transition from one block of instruction to another; if students do not
successfully make the transition, they will have difficulty with subsequent blocks of
instruction. Practical exercises are a good device for determining whether the class, as a
whole, is ready to move on.

The compressed schedule has a key advantage over a 15-week schedule in that it
offers flexibility to the student. Face-to-face contact time with a faculty member is
compressed into a limited number of days; in the case of the MPA program, a nine-day
period. This provides flexibility to the student in meeting course requirements and the
demands of the workplace. It is especially critical for clientele groups with significant travel
and training demands. The compressed schedule offers the advantage of flexibility, as with
distance learning, but it also offers the full amount of faculty contact associated with a
traditional 15-week course. In a sense it embodies a key attribute of both delivery modes.

V. CONCLUSIONS

While the compressed schedule offers some unique pedagogical challenges, it does
provide many advantages to the adult learner. For institutions with an interest in offering
outreach programs, it provides an alternative both to Internet-based distance education
(Krishnamoorthy, 2005), which emphasizes the value of student flexibility, and to traditional
approaches, which emphasize the value of face-to-face contact time. The compressed schedule
represents a mixed approach that maximizes face-to-face student/instructor contact time
while allowing a significant amount of flexibility to the student. It is especially suited to the
adult learner who values interaction with a faculty member but needs schedule flexibility to
meet the demands of personal and professional requirements.

REFERENCES

Barclay, Kathy Dulaney. “A Comparison of Alternative Course Scheduling at the
Graduate Level.” Reading Improvement, v27 n4, Winter 1990, 255-260.
Dutton, L.L. “Non-Traditional Students in Physical Therapy: A Descriptive Study.”
Physical Therapy, May 1999, v79 i5, S8.
Kennen, Estela and Estela Lopez. “Finding alternative paths for non-traditional students.”
Education Digest, April 2005, v70 i8, 31(5).
Krishnamoorthy, Anand, David Shetterly, and Krishan Rana. “Implementing a Non-
Traditional Course Delivery Module at an Academic Institution.” Paper presented
at the 2005 SE Institute for Operations Research and Management Science
(SEInforms).
Miller, M. and M-Y Lu. “Serving Non-Traditional Students in E-Learning Environments:
Building Successful Communities in the Virtual Campus.” Educational Media
International, March-June 2003, v40 i1, 163(7).
Parish-Cirasa, Anne. “Shaping Graduate Education’s Future: Implications of
Demographic Shifts for the Twenty-First Century Demographic Trends and
Innovations in Graduate Education.” Paper presented at the Annual Conference of
the Canadian Society for the Study of Higher Education (Ottawa, Ontario,
Canada, June 10-12, 1993).
Rose, Mike. “Non-traditional students often excel in college and can offer many benefits
to their institutions.” Chronicle of Higher Education, Oct 11, 1989, v36 n6, B1(2).
Rubin, Amy Magaro. “All around the world, U. of Maryland offers classes to U.S. military
personnel.” Chronicle of Higher Education, March 21, 1997, v43 n28, A55(2).
Trinkle, Dennis. “Distance education: a means to an end, no more, no less.” Chronicle of
Higher Education, August 6, 1999, v45 i48, A60(1).
Yang, Kaifeng and Marc Holzer. “Web-Based Distance Education in Public
Administration Programs.” PA Times Education Supplement, October 2003.
CHAPTER 15

INTERNATIONAL BUSINESS
AND
MARKETING

ANTECEDENTS OF EGYPTIAN CONSUMERS’ GREEN PURCHASE BEHAVIOR:
A HIERARCHICAL MODEL

Mohamed M. Mostafa, Gulf University for Science and Technology
Hawally, Kuwait
mostafa@usa.com

Naser I. Abumostafa, Gulf University for Science and Technology
Hawally, Kuwait
drnaser69@hotmail.com

ABSTRACT

This study investigates the influence of various demographic and psychographic factors
on the green purchase behavior of Egyptian consumers. A survey was developed and
administered to a large sample of 1093 consumers across Egypt. The findings from the
hierarchical multiple regression model confirm the influence of consumers’ ecological
knowledge, concern, attitudes, altruism, and perceived effectiveness, among other socio-
demographic factors, on their intention to purchase green products. Results show that
skepticism towards environmental claims is negatively related to consumers’ intention to buy
green products. The study also discusses how the present findings may help policy makers
and marketers alike to fine-tune their environmental and marketing programs.

I. INTRODUCTION

Green consumerism is described as a multifaceted concept, which includes
preservation of the environment, minimization of pollution, responsible use of non-renewable
resources, and animal welfare and species preservation (McEachern and McClean, 2002). As
a result of the increasing number of green consumers, marketers are targeting the green
segment of the population. Recycled paper and plastic goods and dolphin-safe tuna are
examples of products positioned on the basis of environmental appeal (Banerjee et al., 1995).
Marketers are also incorporating the environment into many marketing activities, including
product and package design (Bhat, 1993; Polonsky et al. 1997) and pricing (Kapelianis and
Strachan, 1996). Marketers have even gone as far as to develop specific models for the
development of green advertising and green marketing strategy (McDaniel and Rylander,
1993; Menon and Menon, 1997).

Compared with what has been happening in the West, consumers in Egypt, as well as
in the wider context of the Arab world, are just at the stage of green awakening. This may
explain the fact that little is understood about consumers’ intentions to purchase
environmentally friendly products in this part of the world. Indeed, researchers agree that
very little research has been done concerning cross-cultural studies on environmental
attitudes or behavior of different ethnic, cultural, or religious groups (Klineberg, 1998;
Schultz and Zelezny, 1999). To remedy this void in the literature, this study attempts to look
at the influence of various demographic and psychographic factors on the green purchase
behavior of Egyptian consumers.

II. RESEARCH OBJECTIVE

The objective of this research is to identify those factors that influence the intention to
buy environmentally responsible products among Egyptian consumers. Seeking to determine
factors that affect green purchase decisions is important for theory development, policy
decisions, and methodological reasons. Research on eco-orientation is important from a
theoretical standpoint because, even though environmental concerns are part of corporate
social responsibility and ethics frameworks, researchers have largely ignored eco-specific
topics related to consumer behavior, values, and culture. From a public policy standpoint, it is
important to know what motivates consumers to buy environmentally friendly products if a
pro-environmental change policy is to be successfully implemented. Finally, from a
methodological measurement standpoint, this research seeks to extend our knowledge about
environmentally friendly behaviors to the Arab world, where virtually no research has been
conducted in the realm of eco-orientation.

III. RESEARCH HYPOTHESES

Drawing on research from North America, Australia, Asia, and Europe, the following
hypotheses are developed.
H1: Women are more likely to express their willingness to purchase green products compared
to men.
H2: Green purchase intention is negatively related to age, with older persons less likely to
purchase green products.
H3: Green purchase intention is positively related to educational level, with highly educated
persons more likely to purchase green products.
H4: Environmental knowledge is positively related to consumers’ intention to purchase green
products.
H5: Environmental concern is positively related to consumers’ intention to purchase green
products.
H6: Environmental attitudes are positively related to consumers’ intention to purchase green
products.
H7: Perceived consumer effectiveness is positively related to consumers’ intention to
purchase green products.
H8: Altruism is positively related to consumers’ intention to purchase green products.
H9: Skepticism towards environmental claims is negatively related to consumers’ intention to
purchase green products.

IV. METHODOLOGY

A total of 1500 questionnaires were distributed. Confidentiality of responses was
emphasized in the cover letter with the title “Confidential survey” and in the text. To reduce
social desirability artifacts, the cover letter indicated that the survey seeks “attitudes towards
green purchase” and nothing else. In total 1274 responses were received by the cut-off date,
but 181 questionnaires were discarded because the respondents failed to complete the
research instrument appropriately. The effective sample size, thus, was 1093. All constructs
used in this study were measured by various items on 5-point Likert-type scales (1 =
completely disagree to 5 = completely agree). It is widely believed that attitudes are best
measured by way of multiple measures, and the general trend in measuring environmental
issues is via several items instead of single-item questions (Gill et al., 1986). The items
contain an explicit key expression representing the specific construct. Positive and negative
formulations of the items were presented to guarantee the content balance of the study.
All items are based on scales that have been previously validated.
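Balancing positively and negatively worded items implies reverse-scoring the negative items before a construct score is computed. The sketch below illustrates that step on a 5-point scale; the item names and the choice of reverse-coded item are hypothetical, not taken from the study:

```python
def construct_score(responses, reverse_items, scale_min=1, scale_max=5):
    """Mean item score after reverse-coding negatively worded Likert items."""
    adjusted = [
        (scale_max + scale_min - value) if item in reverse_items else value
        for item, value in responses.items()
    ]
    return sum(adjusted) / len(adjusted)

# Illustrative responses (1 = completely disagree, 5 = completely agree);
# "concern_3_neg" stands in for a negatively worded item.
answers = {"concern_1": 5, "concern_2": 4, "concern_3_neg": 1}
score = construct_score(answers, reverse_items={"concern_3_neg"})
print(round(score, 2))  # 4.67: the mean of 5, 4, and the reverse-coded 5
```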

V. RESULTS

Hierarchical regression analysis was used to test the research hypotheses. Table I
shows a summary of results of the hierarchical regression analysis. As seen in table I, when
the three demographic variables were entered into the regression equation in the first step, the
coefficient of determination (R2) was found to be 0.112 indicating that 11.2 per cent of green
purchase intention is explained by these demographic variables. This result confirms
Balderjahn’s (1988) study, which found that demographic and socio-economic variables such
as education, income, and family size are only of limited value in explaining different degrees
of environmental attitudes. In a similar vein, Olli et al. (2001) found that socio-demographic
correlates explain only 10 % of environmental acts.

Following the recommendation of Mainieri et al. (1997), environmental knowledge
and environmental concern were our second entry. This is because knowledge and concern
are fundamental to attitudes and intentional behavior. By adding the two independent
variables in step 2, R2 increased to 0.458 or 45.8 per cent. This R2 change (0.346) is
significant (p<0.001). This implies that the additional 34.6 per cent of variation in consumers’
intention to purchase green products is explained by environmental knowledge and
environmental concern.

In the third step, perceived consumer effectiveness, altruism, and skepticism towards
environmental claims scales were entered. The decision to enter these three independent
variables is based on Ajzen and Fishbein’s (1980) theory that specific attitudes are better than
general attitudes as predictors of related behavior. When the three scales were entered the R2
increased from 45.8 per cent to 66.2 per cent indicating a change of 20.4 per cent, which is
significant (p<0.001).

In the fourth and final step, the green purchase attitudes scale was entered in the
equation in order to gauge its impact as an independent predictor. From the final regression
equation (model 4), it can be seen that R2 increased from 66.2 per cent to 76.4 per cent
indicating a change of 10.2 per cent, which is significant (p<0.001). Thus, the final model
explains 76.4 per cent of the variation in consumers’ intention to purchase green products.
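The stepwise entry and R²-change test used above can be sketched as follows. The synthetic data, block contents, and coefficients are illustrative assumptions, not the authors' survey data; however, plugging the paper's own step-2 figures (ΔR² = 0.346, n = 1093, five predictors in the full block) into the F-change formula recovers an F close to the reported 347.148:

```python
import numpy as np

def ols_r2(X, y):
    """R-squared of an OLS fit of y on X (intercept added automatically)."""
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def f_change(r2_full, r2_reduced, n, k_full, k_added):
    """F statistic for the R-squared increment when k_added predictors enter."""
    return ((r2_full - r2_reduced) / k_added) / ((1 - r2_full) / (n - k_full - 1))

# Illustrative synthetic stand-in for the survey (n = 1093 as in the paper).
rng = np.random.default_rng(0)
n = 1093
demog = rng.normal(size=(n, 3))       # block 1: age, sex, education
knowconc = rng.normal(size=(n, 2))    # block 2: knowledge, concern
y = demog @ [0.2, -0.3, -0.1] + knowconc @ [0.5, 0.3] + rng.normal(size=n)

r2_1 = ols_r2(demog, y)                          # step 1
r2_2 = ols_r2(np.hstack([demog, knowconc]), y)   # step 2
print(f_change(r2_2, r2_1, n, k_full=5, k_added=2))

# The paper's own step-2 figures reproduce the reported F change (~347):
print(f_change(0.458, 0.112, n=1093, k_full=5, k_added=2))
```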

Table I. Hierarchical Regression Results

Model                        Beta         t      Sig.     R²   R² change   F change   Sig. F change
1  (constant)                        16.807     .000    .112      .112      45.780        .000
   age                       .118     1.819     .069
   sex                      -.758   -10.269     .000
   education                -.660    -7.201     .000
2  (constant)                        14.190     .000    .458      .346     347.148        .000
   age                      -.070    -1.330     .184
   sex                      -.733   -13.388     .000
   education               -1.013   -13.827     .000
   knowledge                 .563    20.483     .000
   concern                   .378    14.355     .000
3  (constant)                         -.593     .554    .662      .204     218.562        .000
   age                      -.382    -8.048     .000
   sex                      -.469    -9.729     .000
   education                -.243    -3.626     .000
   knowledge                 .251     9.986     .000
   concern                   .213     9.660     .000
   perceived effectiveness   .345    15.378     .000
   altruism                  .329    13.983     .000
   skepticism                .069     3.572     .000
4  (constant)                        -1.646     .100    .764      .102     468.704        .000
   age                      -.621   -15.073     .000
   sex                      -.208    -4.958     .000
   education                 .000      .002     .998
   knowledge                 .060     2.613     .009
   concern                   .345    17.758     .000
   perceived effectiveness   .202    10.184     .000
   altruism                  .135     6.276     .000
   skepticism               -.136    -7.296     .000
   attitudes                 .640    21.650     .000

Note: Beta = standardized coefficient.

From the final regression model, it can be observed that the standardized coefficient for
gender is negative (β = - 0.208) and significant at the 0.001 level. As gender was coded as a
dichotomous variable: 0 (female) and 1 (male), this regression coefficient implies that
females express greater intention to purchase green products than males. This result provides

support for the first hypothesis. The standardized coefficient for age is negative (β = - 0.621)
and significant at the 0.001 level. This result provides support for the second hypothesis.
Surprisingly, education level was not related to higher intention to buy green products (p =
0.998). This result fails to support the third hypothesis. We found environmental
knowledge to be positively and significantly (at the 0.01 level) related to green purchase
intention (β = 0.060). This result supports the fourth hypothesis. The
standardized coefficient for environmental concern is positive (β = 0.345) and significant at
the 0.001 level, which supports the fifth hypothesis. We found environmental attitudes to be
positively and significantly (at the 0.01 level) related to green purchase intention (β = 0.640).
This result supports the sixth hypothesis. The standardized coefficient for perceived
consumer effectiveness is positive (β = 0.202) and significant at the 0.001 level. The strong
positive relationship found in this study between perceived consumer effectiveness and green
purchase intention provides a strong support to the seventh hypothesis. The standardized
coefficient for altruism is positive (β = 0.135) and significant at the 0.001 level. This finding
supports the eighth hypothesis. The standardized coefficient for skepticism towards
environmental claims is negative (β = - 0.136) and significant at the 0.001 level. This result
strongly supports the ninth hypothesis.

REFERENCES

Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting behavior.
Englewood Cliffs, NJ: Prentice Hall.
Balderjahn, I. (1988). Personality variables and environmental attitudes as predictors of
ecologically responsible consumption patterns. Journal of Business Research, 17, 51-
56.
Banerjee, S., Gulas, C., & Iyer, E. (1995). Shades of green: A multidimensional analysis of
environmental advertising. Journal of Advertising, 24 (2), 21-31.
Bhat, V. (1993). Green marketing begins with green design. Journal of Business & Industrial
Marketing, 8, 26-31.
Kapelianis, D., & Strachan, S. (1996). The price premium of an environmentally friendly
product. South African Journal of Business Management, 27(4), 89-95.
Klineberg, S. (1998). Environmental attitudes among Anglos, Blacks and Hispanics in Texas:
Has the concern gap disappeared? Race, Gender, and Class, 6, 70-82.
Mainieri, T., Barnett, E., Valdero, T., Unipan, J., & Oskamp, S. (1997). Green buying: The
influence of environmental concern on consumer behavior. Journal of Social
Psychology, 137, 189-204.
McDaniel, S., & Rylander, D. (1993). Strategic green marketing. Journal of Consumer
Marketing, 10(3), 4-10.
McEachern, M., & McClean, P. (2002). Organic purchasing motivations and attitudes: Are
they ethical? International Journal of Consumer Studies, 26, 85-92.
Menon, A., & Menon, A. (1997). Enviropreneurial marketing strategy: The emergence of
corporate environmentalism as market strategy. Journal of Marketing, 61, 51-67.
Olli, E., Grendstad, G., & Wollebaek, D. (2001). Correlates of environmental behaviors.
Environment and Behavior, 33, 181-208.
Polonsky, M., Carlson, L., Grove, S., & Kangun, N. (1997). International environmental
marketing claims: Real changes or simple posturing? International Marketing Review,
14, 218-232.
Schultz, P., & Zelezny, L. (1999). Values as predictors of environmental attitudes: Evidence
for consistency across 14 countries. Journal of Environmental Psychology, 19, 255-
265.

CULTURE-DRIVEN CONSUMER MARKET BOUNDARIES: AN APPROACH TO
INTERNATIONAL PRODUCT STRATEGY

Dinker Raval, Morgan State University


draval@moac.morgan.edu

Bala Subramanian, Morgan State University


subramanian@moac.morgan.edu

ABSTRACT

As the global markets become intensely competitive, the need to redefine and
redesign market boundaries becomes critical. A frequently overlooked factor in defining and
redefining market boundaries is the dynamics of cultural values. Culture has long been
relegated to the role of a contextual factor in market determination and not considered an active
determinant of market boundaries. This paper examines culture’s role as an active
determinant of market boundaries and segments and proposes a culture-driven framework. It
offers an international product strategy that permits appropriate product classification and
positioning within these boundaries.

I. INTRODUCTION

As the global markets for goods and services become intensely competitive, the need
to examine and redefine market boundaries becomes critical for survival and growth.
Companies seek to break out of existing overcrowded segments or to create innovative
new market spaces to gain competitive advantage. A major influence that triggers
redefinition is trade competition (Day and Shocker 1976). Cvar (1986) suggests periodic
redefinition of market boundaries and segments as a characteristic of successful competitors
in the global markets.

A frequently overlooked factor in defining and determining market boundaries is the
dynamics of cultural values. This paper examines the value of and rationale for such a
framework, proposes a market boundaries framework that is culture-driven, and suggests an
international product strategy that permits appropriate product classification and positioning.

II. RATIONALE FOR CULTURAL MARKET BOUNDARIES

The global market landscape is inherently multicultural, in spite of some cultural


convergence that may be occurring. Many markets, such as the United States, the European
Union, Australia, and Saudi Arabia, are becoming multicultural due to immigration and
population shifts. This demographic metamorphosis offers businesses opportunities to
redefine their markets through a cultural lens to gain competitive advantage. The strategy is
also valid for traditionally pluralistic societies such as India, South Africa, Malaysia and
Nigeria.

Since cultural values heavily influence consumers to accept or reject products
or services, a culture-driven market classification will enable businesses to assess their
market opportunities from a unique and new perspective. Looking at the market through the
prism of cultural values and redrawing market boundaries using cultural parameters offers

marketers a potent new tool to address global consumer needs, design new products and
develop responsive product strategies.

III. CULTURAL MARKET BOUNDARIES CLASSIFIED

We define cultural market boundary as the potential size and extent of the market that
can be determined by the commonly shared cultural values of a group of people. These values
uniquely differentiate the group from others who do not share the same cultural value patterns.
Such cultural values primarily guide buying habits, motives, and other market-related
decision-making behavior. Four types of market boundaries and areas can be conceptualized
based on culture and their characteristics identified and differentiated for designing consumer
product strategy. These are Culturally Assimilative Market, Culturally Exclusive/Blocked
Market, Culturally Peripheral Market and Neo-Cultural Market.

Culturally Assimilative Market


Mainstream consumers, with nontraditional and flexible lifestyles, form this market
segment. They are open to trying and enjoying products and services distinctly identified with
different cultural groups and their identities. In culturally assimilative markets, the boundary
is open, flexible, and assimilative. Products embodying different cultures exist side by side in
the market and mainstream consumers cross over to buy these products to try something new,
and adapt these products to their own lifestyle. Consumers derive distinct status and
satisfaction in owning and possessing culturally different products. This market’s propensity
to absorb culturally different products is high and any culturally implanted product may
easily become part of it. This market offers firms that sell culturally different products
excellent growth potential. Typically, culturally assimilative market boundaries exist in
multicultural societies.

Culturally Exclusive / Blocked Market


This type of market is created and defined by mandated cultural, religious, and
traditional values and practices that either require or restrict the use of certain products and
services by members of specific cultural groups. The use or avoidance of these products
flows from one’s cultural identity and/or cultural heritage. Hindus and Buddhists prefer
vegetarian food products and abjure consumption of meat. Likewise, Jewish culture mandates
the use of kosher products and Islamic culture prohibits consumption of pork. East Indian
women wear saris and traditional Muslim women wear burkha or chador because of cultural
mandates.

Culturally Peripheral Market


Culturally peripheral markets exist on the threshold of culturally assimilative and
exclusive/blocked market boundaries and mostly go unnoticed by the mainstream market.
The cultural segments of the market are either insignificant or mainstream consumers have
not yet recognized their value. Barriers such as foreign language labels and limitations of
promotional materials prevent mainstream consumers from trying or using these products.
Consumers remain on the periphery of these cultural diasporas except for occasional
incursions. Language and cultural barriers need to be overcome before mainstream
consumers accept these products. Pizza, an Italian product, and bagels, a Jewish specialty,
remained on the peripheral boundary for many years in the United States market.

Neo-Cultural Market Boundary
Culture is a dynamic phenomenon and evolves over a period. Culture shifts can occur
in response to internal and external influences that affect social norms, values, and attitudes
and create changed consumer behavior patterns, preferences, tastes and needs. Raval and
Subramanian (2000) describe the nature of cultural shift:

“Culture shift may be subtle and or act as an undercurrent that may in longer run
erode, hamper, or completely wipe out the demand for goods and services. Culture shift is
frequently subterranean and invisible in nature and may elude detection by normal
environmental scanning techniques. It takes a long time for the subterranean changes to
become manifest and be openly visible”.

Swedish anthropologist Ulf Hannerz (1992) has put forward the concept of cultural
complexity. He has demonstrated how most contemporary societies are currently stratified
into “varied zones of meaning” where multiple cultural communities not only are located
very near each other, but also above all penetrate each other. When this occurs, it is critical
for marketers to examine what market boundaries these changes create. Neo-cultural
market boundaries are created by an amalgam of values and norms that emerges when multiple
cultures and/or subcultures interact and influence each other, creating new value patterns and
lifestyles. Culture shift and cultural complexity become powerful drivers of the emergence of
neo-cultural market boundaries. The direction of this market depends on whether the
pendulum of values swings toward convergence on the fundamental and/or core values of
the society, or toward divergence from them, whether toward modernity or toward an
uncharted future course, as predicted by Hannerz. Culture shift occurs when prevailing
values tilt toward core or fundamental values. The concepts of culture shift and cultural
complexity offer marketers opportunities to redraw or develop new market boundaries.

IV. PRODUCT CLASSIFICATION AND PRODUCT POSITIONING STRATEGIES

The firms in the global market have to examine the nature and type of their products
to decide which market they will fit in and how to position them in that market category.
Raval and Subramanian (2000) have provided the framework for product classification and
positioning that is culture-driven. Their framework of product classification and positioning
is relevant to culture-driven market boundaries. They classify culture-based products into
culturally congruent, culturally blocked, culturally obligatory, culturally peripheral, and
culturally undesirable products. The framework provides cultural product attributes and
relates it to appropriate positioning strategy.

Culturally Assimilative Market


Culturally congruent products are an excellent fit for this market segment. These
products are initially of foreign origin, taste or design that are consistent with the mainstream
cultural requirements and have high potential for widespread market acceptance in foreign
markets. Examples are the popularity of Italian and Mexican foods in the United States and
East Indian Chicken Tikka in the United Kingdom. These products meet the needs of the
mainstream market while maintaining the unique ethnic identity, satisfying the consumer
desires of trying something different. The positioning of these products should follow the

same strategy as for mainstream products, except with additional emphasis on its ethnic
identity.

Culturally Exclusive or Blocked Market


Two types of products are appropriate for these market segments. They are culturally
obligatory and culturally blocked products. Culturally obligatory products are those that
enjoy specialty status and are mandated by religious beliefs, historical traditions, and
linguistic and/or national pride. Examples are kosher and vegetarian products preferred by
Jewish and Hindu consumers respectively. American consumers prefer turkey on
Thanksgiving Day or eggnog and fruitcake during Christmas. Understanding the cultural
salience of products offers marketers unique market opportunities. An enterprising builder
has designed a “Kosher Condo” in a Los Angeles neighborhood that has a host of kosher
restaurants, shuls, and Judaica shops. This project targets the observant and traditional Jewish
population by offering condos equipped with kitchens that include two sinks, two counters, and
two dishwashers to allow for kosher food preparation (Ballon 2004). The other side of this
market segment is culturally blocked products. These are blocked or banned by cultural
norms, religious beliefs and/or strong customs or nationalistic spirits. The prohibition of pork
consumption in Islamic cultures is an example. The question of positioning of these products
is moot, as they cannot even enter the marketplace. Marketing these products and services is
likely to trigger strong backlash.

Culturally Peripheral Market


This market segment is conducive for culturally peripheral products that are
consumed by small segments of the market often ethnic and culturally diverse groups. These
products are supplied by small entrepreneurs who are well conversant with ethnic consumers’
requirements and also with producers and wholesale suppliers. These products have not won
the favor of the mainstream market, as they fall outside their customs, traditions, religious or
linguistic or cultural mindsets. They are highly desirable offerings to the cognoscenti, but the
mainstream market perceives them as highly exotic and disregards them. Some of these may
become mainstream products eventually. They are on the threshold for that breakthrough but
require tremendous resources for promotion. Positioning of these products mainly stresses
their cultural panache. They are promoted by conveying their cultural desirability to make
them attractive to demographic, psychographic, and economic characteristics of the target
market segments.

Neo-Cultural Market
Products conducive to the neo-cultural market fall into two categories: those that respond
to culture shift in markets and those that respond to cultural complexity. Culture can shift
toward a conservative or a modern mode. The conservative mode calls for the consumption of
conservative products. These trends are often exhibited in the clothing and fashion markets.
Their demand fluctuates according to values shift at a particular time. A useful product
category is products that respond to multiple values created by the cultural complexity. These
products attempt to gain favor by representing multiple value combinations in the products.
The product positioning in this market segment depends on the emergence of multiple and
complex values that may be unpredictable sometimes. Strategic market planning may
anticipate emergence of such values and create a new marketing mix designed to attract the
emerging segments.

Impact of Culture on Elasticity of Demand
Economists have related the concept of elasticity mainly to price, income, and degree
of substitutability. Little work has been done on culture as a variable for establishing
elasticity. Culture is critical for determining the degree of elasticity of products in a given
cultural market boundary, as shown in the table below.

Table I: Impact Of Cultural Markets On Demand Elasticity

Market Type Effect on Elasticity


Culturally Assimilative Markets Variability similar to mainstream markets
Culturally Exclusive/ Blocked Highly inelastic
Markets
Culturally Peripheral Markets Elastic for ethnic segments; Inelastic for mainstream
Neocultural Markets Variable from totally elastic to totally inelastic

Products suitable for the assimilative market may have variable degrees of elasticity
because they compete with mainstream products, so consumers are sensitive to price
variations. When prices are lower, people may buy more; when prices are higher, they have
more opportunities to substitute mainstream products. Products sold in
culturally exclusive markets have highly inelastic demand since they are culturally imperative
and there are few suppliers in the market. On the other hand, culturally blocked
products have totally inelastic demand since there is zero demand in that niche market
because they are excluded. Culturally peripheral products may have elastic demand depending
on the marketing effort. Demand is elastic for the ethnic segments of the market and relatively
inelastic for the mainstream segments. The elasticity of demand for products sold in the
neo-cultural market may vary from totally elastic to totally inelastic depending on their
degree of popularity among populations that develop multiple values with varying zones
of meaning.
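One way to make the elasticity comparisons in Table I concrete is the midpoint (arc) elasticity formula; the quantities and prices below are hypothetical illustrations of an elastic ethnic segment and a near-inelastic mainstream segment, not data from the paper:

```python
def arc_elasticity(q0, q1, p0, p1):
    """Midpoint (arc) price elasticity of demand: % change in Q / % change in P."""
    pct_q = (q1 - q0) / ((q0 + q1) / 2)
    pct_p = (p1 - p0) / ((p0 + p1) / 2)
    return pct_q / pct_p

# Hypothetical responses of two segments to the same price rise (10 -> 12).
ethnic = arc_elasticity(q0=100, q1=70, p0=10, p1=12)      # quantity drops sharply
mainstream = arc_elasticity(q0=100, q1=97, p0=10, p1=12)  # quantity barely moves
print(ethnic, mainstream)  # |ethnic| > 1 (elastic), |mainstream| < 1 (inelastic)
```

The midpoint form is used so the elasticity is the same whether the price moves up or down over the interval.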

V. CONCLUSION

Culture-driven consumer market boundaries advance the taxonomy of market
classification, consistent with the emerging global trend of multiculturalization of many
nations’ markets. They make a valuable contribution to the advancement of marketing theory
and concepts and link culturally driven consumer product classification to appropriate market
boundaries. They make the role of culture in marketing direct, in contrast to the background,
contextual role it has played historically.

REFERENCES

Ballon, M. “Cut and Post” Hadassah Magazine, 86(3), 2004, 8


Cvar, M. R “Case Studies in Global Competition: Patterns of Success or Failure”
In Porter, Michael Competition in Global Industries. Boston: Harvard
Business School Press. (1986).
Day, George S. Analysis for Strategic Market Decisions. St. Paul: West Publishing.1986.
Day, George S. and Shocker, A. D. Identifying Competitive Product Market Boundaries.
Cambridge Science Institute. 1976.
Hannerz, Ulf. Cultural Complexity: Studies in the Social Organization of Meaning.
New York: Columbia University Press. 1992.

Kluckhohn, F. and Strodtbeck, F. L. Variations in Value Orientations. Evanston:
Peterson. 1961.
Raval, Dinker and Subramanian, Bala, “Culture Shift Risk analysis for Multinationals”
Journal Global Competitiveness 8(1), 2000, 382.
Raval, Dinker and Subramanian, Bala, “Culture Based Product Classification in Global
Marketing for Competitive Advantage” Journal Global Competitiveness 9(1), 2001,
419-428.

DOMAIN KNOWLEDGE SPECIFICITY AND JOINT NEW PRODUCT
DEVELOPMENT: MEDIATING EFFECT OF RELATIONAL CAPITAL

Pi-Chuan Sun, Tatung University


pcsun@ttu.edu.tw

Yung Sung Wen, Chiang Kai-Shek International Airport Office


davidwen@cksairport.gov.tw

ABSTRACT

This study examined the relationships among supplier’s domain knowledge specificity,
relational capital and joint new product development from the transaction cost analysis and
social exchange perspectives. Data collected from Taiwanese information hardware
manufacturers were used to conduct an empirical test. The results show that supplier’s domain
knowledge specificity positively influences relational capital and joint new product
development, and relational capital also positively influences joint new product development.
Moreover, relational capital partially mediates the relationship between supplier’s domain
knowledge specificity and joint new product development.

I. INTRODUCTION

New product development (NPD) in technology-based industrial markets is increasingly
characterized by close interactions between sellers and buyers during the development
process (Sioukas, 1995). Although the transaction cost perspective recommends a variety of
contractual mechanisms to guard against partner opportunism, Dyer and Singh (1998)
propose alternatives of self-enforcing agreements. This study argues that a supplier can use a
particular bilateral governance tool in the seller-buyer relationship: joint new product
development (JNPD) arrangements. Moreover, the supplier can also build relational capital to
protect its proprietary assets (Kale, Singh and Perlmutter, 2000), and the relational capital
mediates the relationship between domain knowledge specificity and joint new product
development. Hence, the objective of this study is to examine the relationships among
supplier’s domain knowledge specificity, relational capital and joint new product
development.

II. DOMAIN KNOWLEDGE SPECIFICITY, JOINT NEW PRODUCT
DEVELOPMENT AND RELATIONAL CAPITAL

Asset specificity has been extensively employed in empirical research on transaction
cost explanations for joint action (Heide & John, 1990; Zaheer & Venkatraman, 1995;
Subramani & Venkatraman, 2003). In the knowledge-driven economy, this research focuses
on domain knowledge specificity (DKS) since it surpasses physical asset specificity as an
important determinant of governance choices. Joint action is one type of governance
mechanism that can be used to safeguard specific investments (Heide & John, 1990; Joshi &
Stump, 1999). From the seller’s perspective, product codevelopment ensures the creation of
innovations that meet buyer’s needs as they are emerging, provides the seller with added
means to monitor the buyer’s behavior and is an appropriate governance mechanism to
address safeguarding problem (Stump, Athaide & Joshi, 2002; Sun, 2005). On the other side,
the customization of knowledge to a specific domain occurs when organizational resources

are applied to understanding patterns and rules particular to a specific context. Expertise
deployment leads to increasingly effective issue diagnosis and problem solving based on
greater levels of familiarity and understanding of the nuances of a particular exchange. While
such domain-specific knowledge is very valuable in the context of a particular relationship,
investments made in creating the knowledge have less value in other relationships. Thus, the
first hypothesis is proposed:
Hypothesis 1: DKS is positively related to JNPD.

As firms work with each other, trust is built among individual members of the
contracting firms because of the close personal ties that develop between them (Macaulay,
1963). Relational capital creates a mutual confidence that no party to an exchange will
exploit others’ vulnerabilities even if there is an opportunity to do so (Sabel, 1993). When
organizational resources are applied to understanding patterns and rules particular to a
specific context, this domain knowledge contributes to trust and closeness between
individuals of the firms based on greater levels of familiarity and understanding of the
nuances of a particular exchange. Therefore, this study hypothesized that there is a positive
relationship between domain knowledge specificity and relational capital.

Hypothesis 2: The level of relational capital is positively associated with the level of
supplier’s domain knowledge specificity.

New product development (NPD) tasks are
intrinsically uncertain about the relationship between inputs and outputs and usually involve a
high risk of failure (Cooper, 1997). NPD processes are plagued by many unforeseen
disruptions and delays. If partners trust each other, constructive dialogue and cooperative
problem solving allow difficulties to be worked out. A supplier’s involvement in new product
development depends on its trust, and the supplier’s trust in the customer is
positively associated with the degree of joint new product development with that customer
(Walter, 2003). Consequently, this study hypothesized that:
Hypothesis 3: The level of joint new product development is positively associated with the
level of relational capital.

Relational capital, as defined, resides in close interaction at the personal level
between alliance partners. When an organization applies resources to understanding patterns
and rules particular to an exchange context, it can learn more about its partner. This
customization of knowledge leads to increasingly effective issue diagnosis and problem
solving based on greater levels of familiarity and understanding of the nuances of a particular
exchange, and thereby develops trust. Trust reduces fears of exploitation and minimizes
feelings of vulnerability (Boon & Holmes, 1991), and facilitates the possibility of joint new
product development. Therefore, this study hypothesizes that the relationship between domain
knowledge specificity and joint new product development is partially mediated by relational
capital.
Hypothesis 4: The relationship between domain knowledge specificity and joint new product
development is partially mediated by relational capital.

III. METHOD

A. Measurement
Measures of the variables were first developed based on previous researches and each
indicator was measured on a 7-point scale from “strongly agree” to “strongly disagree”.
Supplier’s Domain Knowledge Specificity (SKS): Domain knowledge specificity is defined
as the degree to which critical areas of knowledge of a supplier firm are specific to the

requirements of a buyer (Subramani & Venkatraman, 2003). It is measured in terms of five
items that reflected the level of specialized intangible investment in developing an
understanding of the buyer’s requirements and the distinct context of interaction. Items were
drawn from Nooteboom, Berger & Noorderhaven (1997) and Subramani & Venkatraman
(2003) to construct this scale. The Cronbach’s α measure of reliability for this construct is
0.8525.
Relational Capital (RC): Relational capital is defined as mutual trust, respect, and
friendship that reside at the individual level between alliance partners. It is measured in terms
of 5 indicators that reflect the level of mutual trust, respect and friendship between partners.
Items were drawn from Kale, Singh and Perlmutter (2000) to construct this scale. The
Cronbach’s α measure of reliability for this construct is 0.893.
Joint New Product Development (JNPD): Joint new product development is defined as the
extent to which supplier and buyer have jointly developed the product (Stump, Athaide &
Joshi, 2002). It is measured in terms of 5 indicators that reflect the level of new product co-
development relationship between supplier and the focal buyer. Items were drawn from
Stump, Athaide & Joshi (2002) to construct this scale. The Cronbach’s α measure of
reliability for this construct is 0.9069.
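The reliability figures reported for these scales use Cronbach's α, which can be computed directly from an item-score matrix. A minimal sketch follows; the simulated 7-point responses are a hypothetical stand-in for the study's data, not the actual sample:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()    # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of scale totals
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical 7-point responses to a 5-item scale driven by one latent trait.
rng = np.random.default_rng(1)
latent = rng.normal(4, 1, size=(200, 1))
scores = np.clip(np.round(latent + rng.normal(0, 0.7, size=(200, 5))), 1, 7)
print(round(cronbach_alpha(scores), 3))
```

Because all five simulated items share one latent trait, α comes out high, mirroring the 0.85 to 0.91 range reported for the three constructs.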
B. Data collection
The sampling frame of this study includes Taiwanese firms dedicated to
manufacturing hardware products of the information industry, namely systems, peripherals, and
modular components. Multiple data sources were used in preparing the sample list. Six hundred
and seventy questionnaires were mailed and 144 questionnaires were received, representing
an effective response rate of 21.56%. As suggested by Armstrong and Overton (1977),
assessment of nonresponse bias indicated no significant difference between early and late
respondents (P<0.01). This research chose managers in charge of OEM/ODM business as the
only informant. This approach is consistent with the general recommendation to use the most
knowledgeable informant (Huber & Power, 1985).
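The Armstrong and Overton (1977) check compares early with late respondents, treating late respondents as a proxy for nonrespondents. A minimal sketch of such a comparison using Welch's t statistic on made-up construct means (not the study's data):

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances allowed)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical construct means for early vs. late respondents
early = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.1, 4.7]
late = [5.0, 4.9, 5.2, 5.1, 4.8, 5.3, 4.9, 5.0]
t = welch_t(early, late)
# |t| well below ~2 suggests no early/late difference, i.e. no obvious nonresponse bias
print(abs(t) < 2.0)  # True
```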

IV. ANALYSIS

An exploratory factor analysis was performed to examine the validity of these measures,
and no item needed to be deleted. The hypotheses were tested by ordinary least-squares
regression analysis. The VIF values of all independent variables (1.0~1.269) are smaller than
10, indicating that multicollinearity does not unduly influence the least-squares
estimates (Neter, Kutner, Nachtsheim & Wasserman, 1996). Table I exhibits three
regression models, whose dependent variables are JNPD, RC, and JNPD,
respectively. Model 1 in Table I shows that domain knowledge specificity has a positive and
significant effect on joint new product development; hypothesis H1 is supported. Model 2
shows that domain knowledge specificity also has a positive and significant effect on
relational capital, supporting hypothesis H2. As predicted, Model 3 shows that relational
capital has a positive and significant effect on joint new product development, supporting
hypothesis H3. Moreover, Model 3 indicates that relational capital mediates the relationship
between domain knowledge specificity and joint new product development, because the
coefficient of domain knowledge specificity drops from 0.607 in Model 1 to 0.340 in Model
3 (Baron and Kenny, 1986). The mediating effect is partial, since the coefficient of domain
knowledge specificity remains significantly greater than zero in Model 3. As a result,
hypothesis H4 is supported.
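The Baron and Kenny (1986) logic applied above can be sketched numerically. The following is an illustrative reconstruction on synthetic data, not the study's dataset; the names `sks`, `rc`, and `jnpd` merely mirror the constructs, and the effect sizes are invented:

```python
import random

def zscore(xs):
    m = sum(xs) / len(xs)
    s = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - m) / s for x in xs]

def ols_std(y, *preds):
    """Standardized OLS coefficients via the normal equations (tiny k only)."""
    y = zscore(y)
    X = [zscore(p) for p in preds]
    k, n = len(X), len(y)
    A = [[sum(X[i][t] * X[j][t] for t in range(n)) for j in range(k)] for i in range(k)]
    c = [sum(X[i][t] * y[t] for t in range(n)) for i in range(k)]
    for p in range(k):                       # forward elimination
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            for j in range(k):
                A[r][j] -= f * A[p][j]
            c[r] -= f * c[p]
    b = [0.0] * k
    for i in reversed(range(k)):             # back substitution
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

random.seed(1)
n = 200
sks = [random.gauss(0, 1) for _ in range(n)]                     # knowledge specificity
rc = [0.6 * s + random.gauss(0, 0.6) for s in sks]               # mediator: relational capital
jnpd = [0.35 * s + 0.45 * m + random.gauss(0, 0.5) for s, m in zip(sks, rc)]

b_total = ols_std(jnpd, sks)[0]          # Model 1: JNPD ~ SKS
b_path_a = ols_std(rc, sks)[0]           # Model 2: RC ~ SKS
b_direct, b_rc = ols_std(jnpd, sks, rc)  # Model 3: JNPD ~ SKS + RC
# Partial mediation: the SKS coefficient shrinks but stays positive once RC enters
print(b_total > b_direct > 0 and b_path_a > 0 and b_rc > 0)  # True for these effect sizes
```

The coefficient drop from `b_total` to `b_direct`, with `b_direct` still positive, is exactly the partial-mediation pattern the paper reports (0.607 to 0.340).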

496
Table I: Estimated Models, Standardized Coefficients (t values)

Independent variables   Model 1 (JNPD)     Model 2 (RC)       Model 3 (JNPD)
SKS                     0.607 (8.753)**    0.461 (5.940)**    0.340 (4.681)**
RC                                                            0.451 (6.209)**
F test                  76.607**           35.284**           55.376**
R²                      0.369              0.212              0.460
R² (adjusted)           0.364              0.206              0.452
**p value < 0.01; t values in parentheses
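The VIF screen reported above (values between 1.0 and 1.269, well under the conventional cutoff of 10) can be reproduced for the two-predictor case of Model 3, where VIF = 1/(1 − r²) with r the correlation between the two predictors. A small sketch on made-up data:

```python
def vif_two(x1, x2):
    """Variance inflation factor for either of two predictors: 1 / (1 - r^2)."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    v1 = sum((a - m1) ** 2 for a in x1)
    v2 = sum((b - m2) ** 2 for b in x2)
    r2 = cov * cov / (v1 * v2)          # squared correlation between predictors
    return 1 / (1 - r2)

print(vif_two([1, 2, 1, 2], [1, 1, 2, 2]))  # uncorrelated predictors -> 1.0
```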

V. CONCLUSION

Consistent with prior research, this research found support for a positive effect
of supplier's asset specificity on joint new product development (Heide & John, 1990;
Zaheer & Venkatraman, 1995; Joshi & Stump, 1999), and a positive effect of supplier's
asset specificity on relational capital (Nielson, 1998). Furthermore, adding the relational
capital variable significantly increased the explained variance in JNPD (comparing Model 1
with Model 3). In addition, a partial mediating effect of relational capital on the relationship
between SKS and JNPD was demonstrated, confirming the argument that relational capital
partially mediates the effect of SKS on JNPD in the seller-buyer relationship.

A. Theoretical and Managerial Implications


Prior studies have integrated insights from both transaction cost analysis (TCA) and
relational exchange theory (RET) to develop models that explain joint action (Heide & John,
1990; Zaheer & Venkatraman, 1995). This research integrated TCA and relational capital
to develop a model that explains joint new product development in the supplier-buyer relationship.
Asset specificity from TCA concentrates on the internal structure of the transaction, whereas
relational capital from RET exemplifies the dyad's relational climate. Both joint new product
development and building relational capital with the buyer can safeguard the specific assets
invested by the supplier. Prior research has shown that joint action is appropriate when specific
assets are at stake, and that joint new product development reduces the negative effect of product
customization on seller satisfaction and enhances customization's positive effect on
continuity (Stump, Athaide & Joshi, 2002). Building relational capital with the buyer can
also serve as a significant safeguarding mechanism. This research supports these arguments.
Moreover, it was found that the relationship between the supplier's specific
asset investments and joint new product development is partially mediated by relational
capital. Specific asset investments create an economic motivation for the supplier to establish
JNPD; relational capital, on the other hand, provides the psychological motivation
to make the resource commitments necessary to establish this nonmarket mode of
governance. Accordingly, it is essential that suppliers focus not just on the respective levels
of specific asset investments when making governance decisions; they must also make an
effort to build relational capital with the buyer.

B. Limitations and Directions for Future Research


The results and implications of this research are tempered by several limitations
that can be addressed by future research. One important limitation is that this
research collected data from only one side of the supplier-buyer dyad: only the
supplier's perceptions of the extent of RC and JNPD were considered. It would be beneficial
to have these perceptions validated by data from the buyer's perspective. In addition, this
research makes causal arguments yet offers only a cross-sectional test of them. A
longitudinal methodology, whereby the evolution of a supplier-buyer relationship is measured
over time, would be the optimal design to support causal arguments. Finally, this study focused
on a limited set of variables. An avenue for future research would be to include a broader range of
contextual and situational factors that may influence the extent of joint new product
development.

REFERENCES

Armstrong, J. S., and Overton, T. S. "Estimating Nonresponse Bias in Mail Surveys." Journal
of Marketing Research, 14, 1977, 396-402.
Boon, S. D., and Holmes, J. G. "The Dynamics of Interpersonal Trust: Resolving Uncertainty
in the Face of Risk." In Hinde, R. A. and Groebel, J. (eds.), Cooperation and Prosocial
Behavior. Cambridge: Cambridge University Press, 1991, 190-211.
Cooper, R. G. "The Dimensions of Industrial New Product Success and Failure." Journal of
Marketing, 43, 1979, 93-103.
Dyer, J. H. "Specialized Supplier Networks As a Source of Competitive Advantage:
Evidence from the Auto Industry." Strategic Management Journal, 17, 1996, 271-
291.
Dyer, J. H., and Singh, H. "The Relational View: Cooperative Strategy and Sources of
Interorganizational Competitive Advantage." Academy of Management Review, 23,
1998, 660-679.
Heide, J. B., and John, G. "Alliances in Industrial Purchasing: The Determinants of Joint
Action in Buyer-Supplier Relationships." Journal of Marketing Research, 27, 1990,
24-36.
Huber, G., and Power, D. "Retrospective Reports of Strategic-Level Managers." Strategic
Management Journal, 6, 1985, 174-180.
Joshi, A. W., and Stump, R. L. "The Contingent Effect of Specific Asset Investments on Joint
Action in Manufacturer-Supplier Relationships: An Empirical Test of the Moderating
Role of Reciprocal Asset Investments, Uncertainty, and Trust." Journal of the
Academy of Marketing Science, 27, (3), 1999, 291-305.
Kale, P., Singh, H., and Perlmutter, H. "Learning and Protection of Proprietary Assets in
Strategic Alliances: Building Relational Capital." Strategic Management Journal, 21,
2000, 217-237.
Macaulay, S. "Non-Contractual Relations in Business: A Preliminary Study." American
Sociological Review, 28, 1963, 55-67.
Nielson, C. C. "An Empirical Examination of the Role of 'Closeness' in Industrial Buyer-
Seller Relationships." European Journal of Marketing, 32, (5/6), 1998, 441-463.
Neter, J., Kutner, M. H., Nachtsheim, C. J., and Wasserman, W. Applied Linear Statistical
Models, 4th ed. Times Mirror Higher Education Group, Inc., 1996.
Nooteboom, B., Berger, H., and Noorderhaven, N. G. "Effects of Trust and Governance on
Relational Risk." Academy of Management Journal, 40, (2), 1997, 308-338.
Sabel, C. "Studied Trust: Building New Forms of Cooperation in a Volatile Economy."
Human Relations, 46, (9), 1993, 1133-1170.
Sioukas, A. V. "User Involvement for Effective Customization: An Empirical Study on Voice
Networks." IEEE Transactions on Engineering Management, 42, (1), 1995, 39-49.
Stump, R. L., Athaide, G. A., and Joshi, A. W. "Managing Seller-Buyer New Product
Development Relationships for Customized Products: A Contingency Model Based
on Transaction Cost Analysis and Empirical Test." The Journal of Product Innovation
Management, 19, 2002, 439-454.

CUSTOMER SATISFACTION FOR TELECOMMUNICATION SERVICES: A
STUDY AMONG ASIA PACIFIC BUSINESS CUSTOMERS

Avvari V. Mohan, Cyberjaya Multimedia University, Malaysia

Satya Narayana, Cyberjaya Multimedia University, Malaysia

ABSTRACT

This paper is an exploratory study to further the understanding of the various
attributes of customer satisfaction in a business-to-business environment. It examines the
perceptions of managers in telecommunication operators who are customers of a
telecommunication vendor in the Asia-Pacific region. The results show that customers value
some key processes over others, and that the combination of process importance varies
from one region to another to some extent. The satisfaction levels perceived by customers in
these regions also differ considerably, with implications for managers operating in the Asia-
Pacific region.

I. INTRODUCTION

Telecommunication services in Asia Pacific have been going through a phenomenal
period of reform in the last decade. This period brought a tremendous amount of
competition into the sector. Many telecommunication firms conduct customer satisfaction
surveys to learn how satisfied their customers are with their products or services, for
continued growth. There is no single exact definition of customer satisfaction, but for this
paper we take Kotler's (1998) definition of customer satisfaction as the value perceived by
the customer when acquiring a product or service. In a world full of competition,
achieving or maximizing customer satisfaction is a central mission of any kind of
business or organization. Satisfaction ratings are viewed as means to
strategic ends, such as repurchase behavior and customer retention (Mittal et al., 1999), that
directly affect a firm's profits and overall performance. Most companies do recognize that
satisfying customers' needs and wants is critical to their success, and developing the
understanding to achieve that goal is becoming increasingly difficult in today's global arena.
Verhage, Yavas, and Green (1990) warn that "global marketers need to be very cautious in
accepting theories or techniques that are proven to be successful in their home markets." As
firms reach over national borders, they are challenged to establish a marketing orientation
effectively across a complex of national cultures (Nakata and Sivakumar, 2001).

Several researchers have examined how to measure customer satisfaction (Zeithaml et al.,
1996). Wayland and Cole (1997) claim that the value proposition is the most common way that
companies can increase the value offered to customers. A vendor can increase the value of its
relationship with its customers by serving a larger part of the customers’ value chain
(Stenberg 1997). This can be done by means of a core product, extended offer, or a total
solution. Value-added role (Wayland and Cole, 1997) relates to how the vendor can create
value for the customer and how it is delivered to the customer. Value-added roles can be
divided into product, process, and network management related ones. The product manager
creates value by combining inputs into a product. The process manager's role involves
maintaining several contacts continuously, and the vendor’s aim is to supply a part of the
customer's value chain. The network manager takes a central position, as the vendor acts as
an intermediary that manages the flow of the value chain by matching buyers and sellers in
order to achieve specific objectives. Selecting a balance between risk and reward addresses
the ways buyers and sellers create and share value (Wayland and Cole, 1997). Many high-
tech products and services require face-to-face contacts (Wayland and Cole, 1997).
According to Dijksterhuis et al. (1999), co-evolutionary effects of organisations can take
place both within a company and between companies, and therefore interacting with only
certain customer persons is not enough. But most ‘satisfaction’ research has used U.S.
subjects to develop and test satisfaction theory (Spreng and Chiou 2003); thus, such measures
of quality and satisfaction may be less applicable and less meaningful in other countries,
thereby leading to less-than-optimal results. This is no less true for services than for products;
the service sector is taking on increasing importance in the global economy, particularly in
most advanced countries, such as those in the European Union, USA as well as Canada,
Japan, and countries in the Asia-Pacific region. Thus the rationale for this study: a study of
customer satisfaction in the business-to-business services arena in the Asia Pacific region.

This study aims to improve the understanding of the customer satisfaction drivers
among telecommunication operators (customers) in the Asia Pacific region. The customers
are mostly large telecommunication operators who provide services to consumers, such as
mobile networks, fixed lines, internet and broadband access. The main objective of this
research is to increase the understanding of how a vendor can successfully improve and
maintain its customer satisfaction leading to customer loyalty in a telecommunication market.
The following research questions guide this study:

1) What perceptions do customers have of the performance of the products or services
provided by a telecommunication vendor, and how important are these to them?
2) Is there a difference in how customers perceive the key processes in the different sub-
regions of the Asia Pacific?

II. OVERVIEW OF METHODOLOGY

Eighteen processes generic to a typical supply chain cycle of a telecom operator
were identified as the attributes with which to study customer satisfaction. They range from
pre-sales to operations, extending into warranty and post-warranty maintenance of the
product or service being sold and deployed to the customer. A questionnaire was developed
using the 18 attributes identified; two 10-point rating scales were provided for each
question, one for respondents to indicate their level of satisfaction with the attribute
and another for its perceived importance. Simple means were calculated, yielding a
score for both the importance and satisfaction levels of each attribute. These data
were then analyzed using the importance-performance analysis developed by Martilla and
James (1977). To answer the second question, the data were further sliced into the sub-regions
of China and the "rest of AP". The data were collected through a survey conducted via the web,
since the customers are widespread around the region. Questionnaires were sent to the 300+
customers of the telecom company in Asia Pacific, and a total of 247 completed
questionnaires were returned (a response rate of 66%).
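The importance-performance analysis of Martilla and James (1977) classifies each attribute by comparing its mean importance and mean satisfaction scores against crosshair values (here, grand means). A minimal sketch with invented scores; the attribute names echo the study, but the numbers are illustrative only:

```python
# (mean importance, mean satisfaction) on a 10-point scale - invented numbers
ratings = {
    "Pricing": (9.2, 7.1),
    "Billing and Payment": (7.8, 8.4),
    "Product Quality": (9.0, 8.7),
    "Technical Support": (8.9, 8.0),
}

# Grand means serve as the quadrant crosshairs
imp_cut = sum(i for i, _ in ratings.values()) / len(ratings)
sat_cut = sum(s for _, s in ratings.values()) / len(ratings)

def quadrant(imp, sat):
    """Assign an attribute to one of the four IPA quadrants."""
    if imp >= imp_cut:
        return "strength to leverage" if sat >= sat_cut else "area for improvement"
    return "possible over-investment" if sat >= sat_cut else "needs least attention"

for name, (imp, sat) in ratings.items():
    print(f"{name}: {quadrant(imp, sat)}")
```

With these invented numbers, Pricing lands in "area for improvement" and Billing and Payment in "possible over-investment", the same pattern the study reports.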

III. RESULTS OF THE STUDY

Overall, importance was rated higher than satisfaction on all attributes in the three groups,
except for the attribute "Billing and Payment". When comparing the importance and
satisfaction scores, the largest gap is for Pricing, where the importance score is high while the
satisfaction score is significantly lower compared to the other attributes. This is followed by
the operational and product functions, that is (in order of magnitude) Technical Support, Repair
Services, Exchange Services, Service Quality and Product Quality. The attribute with the least
importance, and the only positive gap, is Billing and Payment. By absolute satisfaction level,
Product Quality, Installation and Sales/Relationship are the three most satisfied process
areas; all are above 8.5 on a 10-point scale. They are closely followed by Service Quality.

Importance-Satisfaction Analysis

Although the absolute mean scores of importance and satisfaction provide some indication
of the strengths and weaknesses of the organization, a matrix presentation provides a clearer
view. Importance and satisfaction are plotted on the y and x axes respectively. The chart can
then be grouped into 4 quadrants.

Figure 1: Importance-Satisfaction Matrix

                      Low Importance              High Importance
High Satisfaction     Indicates over-investment   Strength to leverage
Low Satisfaction      Needs least attention       Area for improvement

These 4 quadrants are illustrated above. The area to focus on is High Satisfaction / High
Importance, which serves as a "selling point" in the company's marketing efforts. Another
quadrant that is equally important is the Low Satisfaction / High Importance area; process
areas falling in this quadrant are candidates for improvement initiatives. When the data were
plotted on the chart, all 18 process areas fell into the High Satisfaction / High Importance
area. This is likely because the processes are mature, so customer demands and satisfaction
levels tend to be higher. For this reason the High-High quadrant was further segmented into
sub-quadrants to analyze the areas of strength and the areas to improve.

Figure 2: Importance-Satisfaction Matrix – High-High sub-quadrants (strength to leverage
vs. area for improvement)
From Figure 3 below, Billing and Payment clearly sits in the high-satisfaction, low-
importance area, denoting "over-investment". The areas of strength for the company are
Product Quality, Installation Services, Sales and Relationship Management and Service
Quality. Pricing stands out as the one area that needs the most serious attention for
improvement.

Figure 3: Importance-Satisfaction Analysis of the 18 Attributes (satisfaction plotted against
importance)

To understand whether the strength and improvement areas differ by sub-region,
such as between Greater China and the AP region other than China, the data were sliced
further into China results and rest-of-AP results. The importance-satisfaction matrices were
plotted and analyzed as below.

Importance-Satisfaction Analysis – Customers from the China Region

In China (Figure 4), the over-investment continues to be Billing and Payment, while the
area for improvement remains Pricing. A noted difference in China compared to the overall
results is that Program Management and Professional Services appear to be areas of greater
strength, along with all the other strengths identified earlier, i.e. Sales and Relationship
Management, Installation Services, Product Quality and Service Quality.

Figure 4: Importance-Satisfaction Analysis of the 18 Process Attributes – China Customers
(satisfaction plotted against importance)

Importance-Satisfaction Analysis – Customers from the Asia Pacific (AP) Region Other
Than China
Analyzing the AP region other than China (Figure 5), the satisfaction level is
generally lower than in China. While the over-investment and improvement areas are the
same as for China (Billing and Payment, and Pricing, respectively), Sales and Relationship
Management satisfaction appears markedly lower than in China, and Program Management
and Professional Services satisfaction is lower relative to its importance rating.

IV. CONCLUSIONS

Results of the study indicate that customers in the AP region value price much
more than most other processes, and the vendor needs to put extra effort into improving its
pricing strategies to win or maintain business in the Asia Pacific region. This is
understandable because, as an emerging market, AP used to be a high-margin region where
vendor financing was common: vendors provided financing to expand customers' networks
and the customers paid as they grew. The results further highlight that the processes valued
differ across sub-regions. For example, customers in China value basic processes such as
Installation, compared to the rest of Asia Pacific, where customers value enhancement
processes such as Professional Services. This reflects the fact that each market segment is
unique, owing to the culture of the people, government and politics, so a global vendor,
especially one from the West, needs to be sensitive to the differences and approach each
segment with unique marketing strategies in order to capture and sustain business in this very
diversified region. Thus, this study can provide a platform for global marketers on how
customer satisfaction data can be used effectively to improve customer loyalty, and for new
entrants on how to customize the marketing strategies they offer in a new region.

Figure 5: Importance-Satisfaction Analysis of the 18 Process Attributes – Customers from
the AP Region Other Than China (satisfaction plotted against importance)

REFERENCES

Dijksterhuis, M., Van den Bosch, F., and Volberda, H. "Where Do New Organizational
Forms Come From? Management Logics as a Source of Coevolution." Organization
Science, 10, (5), 1999.
Kotler, P. Marketing Management: Analysis, Planning, Implementation, and Control, 9th
ed. Prentice-Hall, New Jersey, USA, 1998.
Martilla, J. A., and James, J. C. "Importance-Performance Analysis." Journal of Marketing,
41, (1), 1977, 77-79.
Mittal, V., Kumar, P., and Tsiros, M. "Attribute-Level Performance, Satisfaction, and
Behavioral Intentions over Time: A Consumption-System Approach." Journal of
Marketing, 63, (2), 1999, 88-101.

PORTRAYAL OF GENDER ROLES
IN INDIAN MAGAZINE ADVERTISEMENTS

Durriya H. Z. Khairullah, Saint Bonaventure University


Zahid Y. Khairullah, Saint Bonaventure University

ABSTRACT

In this study, the authors examine and discuss the portrayal of gender roles in English
language Indian magazine advertisements of five durable and non-durable products.

I. INTRODUCTION

Consumers generally respond favorably to advertisements that are compatible with


their culture and patronize advertisers who understand their culture and customize their
advertisements to depict its values (Zhang and Gelb, 1996). Advertising acts not only as a
marketing tool but also as a social force that transforms cultural symbols and ideas and bonds
together images of individuals and products (Leiss et al., 1990). One of the major differences
between cultures is the traditional notion of male/female roles, which, governed by the rules
of social organization, varies from culture to culture and hence from nation to nation
(Cateora, 1993). The purpose of the present study is to examine the roles of men and women
depicted in advertisements appearing in the national English magazines in India. The study
is of interest since it sheds light on how women and men are portrayed by marketers in
different societies.

II. METHODOLOGY

In all, 184 advertisements appearing in popular national English-language Indian
magazines were reviewed. English is one of the official languages used in India to
conduct official business and commercial transactions (CIA-The World Factbook, 2004). The
advertisements selected were limited to five durable and non-durable product categories, namely
airlines, cars, computers, cigarettes, and hotels. These products were selected because they were
the most frequently advertised in the Indian magazines that were reviewed, and they are also
products that could be used by both male and female consumers. Of the 184 advertisements
selected, only 86 had one or more models shown, while the remaining 98 did not have any
model(s) present. The contents of the 86 advertisements that had models in them were
analyzed, recorded, and coded by the two authors of this study on an instrument consisting of
variables associated with differences in gender role portrayal adapted from past studies (e.g.,
Courtney and Lockeretz, 1971; Gilly, 1988; Ford, et al., 1998; Pingree, et al., 1976; and Soley
and Kurzbad, 1986).

III. RESULTS AND DISCUSSION

(A) Exhibit I: Characteristics of The Advertisements Taken as a Whole


In the case of all advertisements (ads) for the five products aggregated together, there
were a combined total of 132 male models and 60 female models portrayed in the ads. So for
products that are used by both male and female consumers, it appears that male models are
shown in a significantly larger number of cases than female models in Indian magazine
advertisements. Also, 42 (49%) of the 86 ads had no female models in them at all, while only
10 (12%) of the ads had no male models. These findings perhaps reinforce the strong cultural
bias towards men in Indian society. They also suggest that women are seen as basically
restricted to household activities rather than going out and acting as breadwinners, while men
are considered the providers of economic security. The settings of the ads for all five products
taken together were mainly outdoors (46 of the 86 ads, 54%), with only 6 ads (7%) showing
a home or private residence. A store/restaurant or an occupational setting was each seen in
10 (12%) of the ads, while 14 ads (16%) had settings that were not clear.

Exhibit I
Characteristics Related to the Advertisements as a Whole      Number    Percent (%)

1 Setting:
  Outdoors                                                     46        53.5
  Other/Not Clear                                              14        16.3
  Store/Restaurant                                             10        11.6
  Occupational                                                 10        11.6
  Private Residence                                             6         7.0

2 Prominence of Models Shown In Advertisements:


Prominent 34 39.5
Not Prominent 32 37.2
Somewhat Prominent 20 23.3

3 Total Number of Models Shown: 192 100


Number of Male Models Shown 132 68.8
Number of Female Models Shown 60 31.2
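The claim that male models appear in a "significantly larger number of cases" can be checked with a one-proportion z-test against an even 50/50 split. No such test is reported in the paper, so the following sketch is only a plausibility check using the counts from Exhibit I:

```python
import math

males, females = 132, 60                   # model counts from Exhibit I
n = males + females
p_hat = males / n                          # observed share of male models (0.6875)
z = (p_hat - 0.5) / math.sqrt(0.25 / n)    # z statistic against a 50/50 split
print(round(z, 1))                         # 5.2, far beyond the 1.96 cutoff at the 5% level
```

A z value of about 5.2 means the male/female imbalance would be extremely unlikely if models of both genders were equally likely to appear.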
These findings are not surprising, because four of the five products selected for the
present study (airlines, cigarettes, cars, hotels) are such that their functional value justifies
advertising them in an outdoor environment. The majority of the ads for the five products,
34 (40%), showed the models in a prominent way, but an almost equal number of ads, 32
(37%), did not show models prominently, while 20 ads (23%) showed the models in a
somewhat prominent way. Although there is no clear-cut explanation for these findings, we
can assume that in Indian print advertisements the emphasis is on the image or picture of the
product being advertised rather than on the models being used.

(B) Exhibit II: Characteristics of the Individual Models Shown in the Advertisements
The majority of the models appeared to be between the ages of 15 and 30 years: 74
(56%) of male models and 38 (63%) of female models were in this age group. The marital
status of the majority of models was not clear from the ads. Although most of the models
were not shown as actually working, both the male models, 104 (79%), and the female
models, 40 (67%), appeared to be well dressed and were shown in situations that would lead
one to believe that they were well-to-do and gainfully employed. The male models, 78 (59%),
and the female models, 32 (53%), also seemed to be semi-professionals or holding mid-level
business positions. Almost none of the models (88% of male models and 87% of female
models) were shown as spokespersons for the product being advertised, and almost none of
them (96% of male models and 97% of female models) were shown to be either seeking or
giving advice or help. These findings reflect the target audience and readers of Indian
magazines: educated middle-class and upper-class urban Indians who are seen as being
successful in their careers. The products advertised are rather expensive (airlines, cigarettes,
cars, computers, and hotels) and only well-to-do Indians can afford to use them. Although the
traditional roles of Indian women are those of dutiful housewives and mothers, in recent
years, as urban women become more educated, they are going outside the house to work
(Blackwell, 2004; Reddy, 2003). The majority of the male models, 72 (55%), appeared
employed in jobs outside the home, and 36 (27%) were shown in the role of a spouse or
boyfriend. The majority of the female models, 30 (50%), were shown in the role of a spouse
or girlfriend. The majority of both male, 68 (52%), and female, 38 (63%), models were also
shown in a recreational mode. The models were shown in non-physical activities in a
majority of the cases: 70 (53%) of male models and 34 (57%) of female models. These
results reinforce the importance given to marriage for both genders in Indian society.
Furthermore, for both males and females, free association with the opposite sex and dating
are not looked upon favorably. None of the male models (100%) and almost none of the
female models (93%) were shown as frustrated. The dress of almost all the male models
(97%) and most of the female models (73%) was western, while 27% of the female models
wore Indian clothes. The dress styles of almost all male models (99%) and of all female
models (100%) were demure and non-provocative. These results are not surprising: in urban
areas most Indian men wear western attire (Culturegrams, 1998; Tyabji, 1985). India was
under British rule for 200 years, and the western influence, particularly in the case of men's
attire, has persisted.

Exhibit II
Characteristics of Models in the Advertisements      Male Models %    Female Models %
                                                     (N=132)          (N=60)

1 Age:
Under 15 years 4.5 13.3
15 years to 30 years 56.1 63.3
30 years to 50 years 37.9 23.3
Over 50 years 1.5 0

2 Marital Status:
Married 13.6 33.3
Not Married 7.6 20.0
Not Clear 78.8 46.7

3 Employment:
Shown In Work Situation 15.2 6.7
Non-Work Situation Shown But Appears 78.8 66.7
Employed
Appears Unemployed 6.1 26.6

4 Occupation:
Professional / High Level Executive 22.7 6.7
Entertainer / Professional Athlete 7.6 0
Semi Professional / Mid-Level Business 59.1 53.3

Non-Professional / White Collar 1.5 6.7
Other / Not Clear 9.1 33.3

5 Whether Spokesperson In The Advertisement:


Yes 12.1 13.3
No 87.9 86.7

6 Model(s) Seeking Or Giving Help Or Advice:


Provider Of Help Or Advice 4.5 3.3
Seeker Of Help Or Advice 0 0
Neither Provider Nor Seeker Of Help Or Advice 95.5 96.7

7 Apparent Role In Life:


Spouse / Boyfriend / Girlfriend 27.3 50.0
Parent 3.0 3.3
Homemaker 0 3.3
Worker 54.5 23.3
Celebrity 3.0 0
Child 4.5 13.3
Not Clear 7.6 6.7

8 Role Portrayed For The Advertisement:


Decorative 4.5 10.0
Recreational 51.5 63.3
Working At Home 3.0 0
Outside Work Related 27.3 16.7
Not Clear 13.6 10.0

9 Activity:
Physical – Sport 25.8 13.3
Non-Physical 53.0 56.7
Inactive 21.2 30.0

10 Frustration Shown:
Frustrated 0 3.3
Not Frustrated 100 93.3
Not Clear 0 3.3

11 Dress Type:
Western Dress 97.0 73.3
Indian Dress 1.5 26.7
Neither 1.5 0

12 Dress Style:
Seductive 1.5 0
Demure 98.5 100

13 Level Of Sexism
Provocative 1.5 0
Non-Provocative 98.5 100

Although a majority of urban Indian women typically wear Indian dress, the sari and
shalwar-kameez (Culturegrams, 1998; Tyabji, 1985), it is not uncommon to find some
females wearing western-type clothing to their corporate jobs (Cox and Daniel, 2000).
These young urban women, like the men, seem to be influenced by western multinational
corporations successfully promoting western brands in the Indian markets. A comparative
study of gender role portrayals in United States and Indian magazine advertisements by
Griffin et al. (1994) found that some Indian magazine advertisements portrayed women in
their traditional roles, but that because of the western influence in India this trend was
changing, with more advertisements depicting women in career-oriented roles and as
outgoing and enjoying an active life. Modesty in the dress styles of both males and females
in the Indian advertisements mirrors the cultural values of Indian society.

IV. CONCLUSION

In conclusion, the findings of the present study must be interpreted keeping in mind
its limitations in terms of sample size, the selection of product categories, and the English-
medium national magazines that target upper- and middle-class urban Indians.
Nevertheless, since advertising reflects the social values of a given society, the overall
findings of the present study suggest to international advertisers that, in developing their
advertising campaigns for urban Indian markets, it is important to reflect the traditional
cultural values of Indian society, where men can be shown as providers of economic means:
successful in their careers, educated, well-to-do, married, younger individuals. Although the
typical role of the Indian woman is that of a housewife, depending upon the product being
advertised women can be shown as being employed as semi-professionals and in western
clothes. For example, in the cigarette advertisements where women were shown along with
male models, all the females wore western attire and were shown as young, educated,
employed, and having a good time in an outdoor setting. In most of the car advertisements
where women were shown with their male counterparts, they wore sarees and appeared to be
young, educated, well-to-do Indian housewives in non-working roles. Interestingly, women
were neither shown as sex objects nor in demeaning roles in the advertisements sampled in
the present study. Further research on different product categories and examination of other
magazines are needed to gain additional insights into the portrayal of gender roles in Indian
magazine advertisements.

REFERENCES

Blackwell, F. India: A Global Studies Handbook. Santa Barbara, CA: ABC-CLIO Publishers,
	2004.
Courtney, A.E., and S.W. Lockeretz. “A Woman’s Place: An Analysis of the Roles Portrayed
	by Women in Magazine Advertisements,” Journal of Marketing Research, 8 (February),
	1971, 92-95.
Ford, J.B., P.K. Voli, E.D. Honeycutt Jr., and S.L. Casey. “Gender Role Portrayal in Japanese
	Advertising: A Magazine Content Analysis,” Journal of Advertising, XXVII (1),
	1998, 113-124.
Gilly, M. “Sex Roles in Advertising: A Comparison of Television Advertisements in
	Australia, Mexico, and the United States,” Journal of Marketing, 52 (2), 1988, 75-85.
Griffin, M., K. Viswanath, and D. Schwartz. “Gender Advertising in US and India: Exporting
	Cultural Stereotypes,” Media, Culture & Society, 16, 1994, 487-507.
Nayar, S.J. “Dreams, Dharma, And Mrs. Doubtfire,” Journal of Popular Films &
	Television, 31 (2, Summer), 2003, 73-82.
Pingree, S., R. Hawkins, M. Butler, and W. Paisley. “A Scale for Sexism,” Journal of
	Communication, 26 (4), 1976, 193-200.

CHAPTER 16

LEADERSHIP

STUDENT LEADERSHIP AT THE LOCAL, NATIONAL AND GLOBAL LEVEL:
ENGAGING THE PUBLIC AND MAKING A DIFFERENCE

J. Gregory Payne, Emerson College


gregory_payne@emerson.edu

David Twomey, Emerson College


david_twomey@emerson.edu

ABSTRACT

Teaching, mentoring and encouraging student leadership is a challenge for all
professors who seek to prepare students for success and fulfillment in their careers upon
graduation. A large number of colleges and universities have experienced increased
interest in the academic study of leadership within the curriculum, as well as heightened
awareness of opportunities within the local, national and global communities for students
to engage in extracurricular activities that provide a practical application of such leadership
principles (Wren, 25). This paper provides readers a descriptive synopsis of two
extracurricular leadership opportunities, treated as case studies in how professors and students
can extend theory learned in the classroom to valuable practice in the real world. In both
cases, the practical application also demonstrated leadership by students in enhancing the
final collaborative effort. The authors briefly describe the historical context of the events of
each case study, the opportunities and challenges faced by the students involved, and the
overall assessment, both by students and by external audiences, of such involvement. The
cases examined include the press and leadership strategies of: a) the 2005 National
Communication Association convention in Boston; b) UNICEF New England’s World AIDS
Day activities.

I. NATIONAL COMMUNICATION ASSOCIATION’S ANNUAL CONVENTION IN BOSTON

The National Communication Association (NCA) is the largest professional communication
organization in the world, with 8,000 members. NCA comprises over twenty interest
groups, and its mission and activities can be found on its web site (www.natcom.org). Each
year the NCA holds its annual convention in a notable city in the United States. These
conventions are coordinated by the national NCA office in Washington, D.C., working
closely with a local arrangements committee. Each convention has a local committee, and
one of its members is designated to be in charge of Press and Promotion for the convention.
The purpose of this paper is to outline the public relations strategies of the 2005 National
Communication Association Annual Convention in Boston, November 17-20, which
witnessed the highest membership turnout ever, with nearly 5,500 in attendance. The Press
and Promotion efforts of NCA Boston marked an innovative, hands-on approach that differed
from past convention press operations. The result was an effort praised by NCA Executive
Director Roger Smitter as “one of the most successful that I have encountered in over 20
years of my affiliation with the NCA”. The Boston effort was also praised by NCA President
Martha Watson and Vice President Dan O’Hair, specifically for moving beyond the traditional
approach of relying solely on faculty in the public relations effort. The 2005 Press and
Promotion effort featured students at the graduate and undergraduate level in the
planning and operation of highlighting, in real time, the events and activities of NCA Boston.
Prof. Watson described this effort as “exemplary of the talents we have in the communication
programs,” while Prof. O’Hair highlighted the leadership of faculty and students in
providing real-time information and coverage of the convention via a specially designed web
site (www.ncapress.com).

Specifically, the objective of this descriptive analysis is to highlight how the
Boston NCA press and promotion effort evolved, and to outline its strategies for
promotion. Initially, a brief synopsis of past press and promotion efforts is provided.
Following these descriptive and detailed synopses, which include the operations and
strategies employed at the NCA Boston convention, are recommendations to enhance
future conventions, not only of NCA but of other similar organizations.

II. NCA PRESS AND PROMOTIONS

Past NCA press and promotion efforts included the national office working closely with the
local arrangements committee to send out press releases concerning noteworthy guest
speakers at the annual conference, as well as seminar research and awards to be presented
at the convention. Another traditional effort was to inform local high
schools, junior colleges, and colleges and universities that might not be aware of NCA’s
mission and purpose about the convention, in an effort to attract publicity as well as
potential new members for the organization. An additional objective was to make experts,
as well as NCA personnel, available for any questions local media might have regarding the
convention or external events. For example, during political campaigns many NCA scholars
and practitioners are interviewed at the convention for their insights on pertinent issues, such
as candidate debates and current events. In addition, local politicians and academic, business
and government leaders frequently speak at NCA conventions, which has generated
press regarding the annual event. At the conventions, news releases are often generated by
the national office, but local press and promotion teams have also sent out information,
especially during the course of the convention, to highlight winners of major
awards and other honors. Coverage of the convention for those in attendance, as
well as for members who do not attend, has traditionally been provided in a special issue of
Spectra, the newsletter of NCA, published in January following the November convention.

The local events coordinator for press and promotion for the Boston convention was
Dr. J. Gregory Payne of Emerson College, who met several times during the summer with
the local NCA committee headed by the local convention chairs, Drs. Anne Mattina of Stonehill
College and Sarah Weintraub of Regis College. Classes involved in the Emerson Press and
Promotion effort included Dr. Payne’s Advocacy and Argument, which consisted primarily
of freshmen and sophomores, and upper-class members of a senior seminar course in political
communication, The Public Affairs Matrix. The new/old media technical abilities of the
student effort were enhanced by the fact that many of the Advocacy and Argument students
were journalism majors with at least an introductory journalism course, which included the
use of podcasting and other novel means of communication and promotion. Parts
of each class were devoted to strategizing on potential approaches to the NCA project. In an
effort to utilize the talents of various departments at Emerson, the Press and Promotion team
included EmComm, Emerson’s award-winning internal marketing communication student
organization, supervised by Prof. Douglas Quintal. The merging of student and faculty
talents enhanced the creative possibilities, but this new approach of inter-departmental
cooperation also created tensions, some positive and others less so. Emerging issues
of this new collaborative approach included the following: who was directly in
charge of the operation; distinguishing and identifying the reporting and supervisory lines
among the students from different departments involved; and the time-line
expectations of the project. From a learning perspective, such dynamics are more often than
not characteristic of creative talent in the workplace.

While the strategic planning by Emerson College students, faculty and staff for the
2005 NCA convention was long in the making, the NCA Press & Promotions website
developed almost overnight. Throughout the day preceding the convention, EmComm and
graduate student David Twomey worked collaboratively to establish the website. By the start
of the convention the next morning, a fully functional website was up and running with letters
from the chairs of the local arrangements committee and faculty at Emerson College, a
schedule of convention events, press releases, contact information and links to
background on Boston in general.

Emerson students also began immediately reporting on a wide range of panel
discussions, lectures and some of the lighter moments of the convention, such as the evening
receptions and celebrations, through an online blog set up on the website. With blogs
becoming commonplace in today’s society, understanding their usefulness in quickly
reaching a large audience is critical for all communications professionals.
Emerson students quickly demonstrated their mastery of blogging by reporting daily on
their experiences, thoughts and opinions at the convention. Perhaps the
most groundbreaking development of the website was the addition of audio webcasts and
podcasts. As communications and entertainment professionals in training, Emerson students
are regularly exposed to new, state-of-the-art technology. It is not every day, however, that
students are able to showcase their skills and talents to such a large audience. With the
assistance of the Department of Journalism, students used flash recorders to record audio
interviews with panelists and convention attendees, and even entire presentations. Students
were first put through a brief training program in how to use the equipment needed at the
convention. They were then given assignments on specific panels, events or individuals to
cover for the day. Once an assignment was completed, the audio was brought back to the
college on the flash recorder’s memory cards, loaded into lab computers and posted first as
a downloadable audio clip, and finally as a full podcast for users to download to their iPods
or other MP3 devices.
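The audio-publishing workflow just described (record on flash media, load into lab computers, post clips, then publish a podcast feed) can be sketched in code. The following Python fragment is a hypothetical illustration of the final step, generating the RSS 2.0 feed that podcast clients consume; the feed builder, episode titles, and URLs are illustrative assumptions, not taken from the actual ncapress.com implementation.

```python
# Minimal sketch of a podcast publishing step: building an RSS 2.0 feed
# from a list of posted audio clips. All titles and URLs are illustrative.
from xml.etree import ElementTree as ET
from email.utils import formatdate

def build_podcast_feed(title, site_url, episodes):
    """episodes: list of (episode_title, mp3_url, size_bytes) tuples."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = site_url
    ET.SubElement(channel, "description").text = "Convention audio coverage"
    for ep_title, mp3_url, size in episodes:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = ep_title
        ET.SubElement(item, "pubDate").text = formatdate()  # RFC 2822 date
        # The enclosure element is what podcast clients actually download.
        ET.SubElement(item, "enclosure",
                      url=mp3_url, length=str(size), type="audio/mpeg")
    return ET.tostring(rss, encoding="unicode")

feed = build_podcast_feed(
    "Convention Coverage",
    "https://example.edu/press",
    [("Opening Panel", "https://example.edu/audio/panel1.mp3", 1048576)],
)
```

Podcast clients discover new episodes through each item's enclosure, which is why every entry carries the audio file's URL, byte size, and MIME type.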

Emerson student William Luken took charge of digital photography at the convention
and, in partnership with David Twomey, posted the pictures online each evening for the
enjoyment of all at the convention. The pictures included captions detailing the names of
those pictured and the events featured in each. The website for this effort can be
viewed at www.ncapress.com.

As a result of the entire experience, the students gained real-world experience in
promoting a large event with cutting-edge technology that in turn fascinated and impressed
those attending the convention. Through the hands-on work of all the Emersonians involved,
Emerson College left an extraordinary, lasting impression on many professors and students
from other colleges across the nation.

III. UNICEF/WORLD AIDS DAY

Similar opportunities arose when UNICEF New England drew on Emerson students’
reputation in communications and marketing to plan, promote and co-sponsor the first World
AIDS Day observed on a large scale in Boston. This invitation was due to Dr. Gregory
Payne’s membership on the board of UNICEF New England, a relationship that provided the
opportunity for this learning experience.

Much of the team from the NCA convention was reassembled. EmComm was
charged with creating a marketable identity for the event and producing print promotional
and informational materials for distribution. What ultimately distinguished this event from
the NCA convention was that students were also deeply involved in the planning and
organization of many events throughout the day. Planning for these events began in October
with the formation of a student committee that worked in association with other student
groups, such as student government, comedy troupes and the Model UN. Through this
collaboration, led by undergraduate student Michael Hawkes and graduate student David
Twomey, events were planned throughout much of the day focusing on UNICEF’s work on
the AIDS epidemic in third-world nations, but also on the local impact of AIDS, specifically
among younger people.

Students led by Dr. Payne, again with assistance from the Department of Journalism,
used flash audio recorders to record interviews with UNICEF staff who had visited African
nations torn by the horrible effects of AIDS on all ages in their communities. They also
interviewed students, faculty and staff who worked tirelessly to make the day’s events a
success, and conducted candid interviews with Emerson students about their thoughts on
AIDS in general. As with the NCA convention, students returned to the college’s computer
labs to upload and post their audio interviews to the event’s website. Graduate student and
assistant to Professor Payne David Twomey established the Boston World AIDS Day
website. It includes the audio files and podcasts of the interviews taped by students, photos
taken by student William Luken, blog posts by students college-wide on the day’s events and
their personal thoughts on AIDS, a schedule of events for the day, press releases and contact
information.

The creative talents of the students and their brainstorming of ways to highlight the
event produced some noteworthy publicity. For instance, students obtained a proclamation
from the legislature of the Commonwealth of Massachusetts recognizing World AIDS Day
as well as UNICEF’s and Emerson College’s efforts to bring attention to the day. The event
was a marked success, gaining the recognition of the Emerson College president, state and
local officials, and UNICEF staff nationally, who were looking for ways to involve college
students in such projects.

From a student perspective, evaluations provided by those who participated reveal
such events to have been the highlight of the learning experience. Nathan Bailey, a film
major at Emerson, identified the experience as one that committed him not only to public
service but also to a minor in political communication. Michael Hawkes, a political
communication student who coordinated the undergraduate event, viewed his experience as
one in leadership training and in realizing that one should give back to the community in
which one lives.

IV. CONCLUSION

In summary, the two case studies above demonstrate that today’s world is one where
classrooms have no walls and where education can help make the difference in engaging
students and citizens to work together in the pursuit of the common good. It is the hope of
the authors that such opportunities will be further pursued in the effort to deepen the feeling
of community in a world that desperately needs a global commitment to public diplomacy
and a belief that each of us can make a difference.

REFERENCES

Hanlon, John. “Live from Boston… it’s NCAPress.com!” Spectra, National Communication
Association, Vol. 42, No. 1, 2006, 7.
Heifetz, Ronald. Leadership without Easy Answers. Harvard University Press, Cambridge,
MA. 1995.
Kenny, Maureen, and Lou Anna Simon. Learning to Serve: Promoting Civil Society Through
Service Learning. Kluwer Academic Press, Norwell, MA. 2002.
Pinto, Amanda. “College Observes World AIDS Day.” The Berkeley Beacon, Emerson
College, Vol. 59, Issue 13, 2005, 1 & 5.
Peltak, Jennifer. “National Communication Association, The.” Official website. Washington,
DC. January 2006.
<http://www.natcom.org>
Smitter, Roger. Personal interview. Executive Director, National Communication
Association. Boston, MA. 2 Dec 2005.
Twomey, David. “NCA Convention 2005 Press & Promotions.” Official website. Boston,
MA. January 2006.
<http://www.ncapress.com>
Twomey, David. “World AIDS Day Boston.” Official website. Boston, MA. January 2006.
<http://www.aidsdayboston.com>
Watson, Martha. Personal interview. President, National Communication Association.
Boston, MA. 20 Nov 2005.
Watkins, Jason. “2005 Convention Draws Nearly 5,500.” Spectra, National Communication
Association, Vol. 42, No. 1, 2006, 1 & 19.
Wren, J. Thomas. The Leader’s Companion: Insights on Leadership through the Ages. Basic
	Books, New York, NY. 1995.

CHAPTER 17

MANAGEMENT OF DIVERSITY

ORGANIZATIONAL CULTURE AND CUSTOMER SATISFACTION: A PUBLIC
AND BUSINESS ADMINISTRATION PERSPECTIVE

Shelia R. Ward, Texas Southern University


shelia_ward@co.harris.tx.us

Gbolahan S. Osho, Texas Southern University


oshogs@tsu.edu

ABSTRACT

This study examines the relationship between organizational culture and customer
satisfaction. Organizational culture shapes an organization at all levels and has a strong
effect on customer satisfaction ratings. It is suggested that the relationship between
organizational culture and customer satisfaction is stronger when employees and
management share the cultural values of the organization.

I. INTRODUCTION

For many years, organizational culture has been an important component of an
organization’s success. Organizational culture was used to explain the economic success of
foreign firms over American firms through the development of a highly motivated
workforce committed to a common set of core values, beliefs, and ideas. More
satisfied customers have been linked to higher profits, higher employee satisfaction, and
greater retention. An understanding of organizational culture is therefore essential to building
an effective employee foundation and to obtaining customer satisfaction within an
organization. Many studies and articles point to the importance of management concentration
on customer satisfaction as a necessary means of increasing and sustaining positive outcomes
(Benko, 2001; Bailey, 1995; Bliss, 1999; Folan, 1998). Organizational culture is one of those
terms that is difficult to express clearly, but everyone knows it when they see it. For example,
the culture of a large, for-profit organization is quite different from that of a non-profit, which
is different from that of a government organization. An organizational culture is not created
by any single person or event, but by a complex combination of forces that includes the
visions of the leaders, the contributions of the members, and the way the organization has
historically responded to problems. You can tell the culture of an organization by looking at
the arrangement of furniture, what people brag about, what members wear, and other such
characteristics, much as you can use similar cues to get a feeling for someone’s personality.
Organizational culture is the personality of the organization; it comprises the attitudes,
values, beliefs, norms, and customs of the organization.

The members of an organization bring with them their own personal cultures that
come from their families, their communities, their religions, any professional associations to
which they belong, and their nationalities (Hofstede, 1980). In an effort to understand
organizational culture, researchers have explored how various internal processes, such as
individual selection, socialization, and the characteristics of powerful members such as the
founders of the organization or group members, can influence organizational values and
outcomes. It has also been suggested that increased employee empowerment increases
customer satisfaction. That is one of the reasons that many human resource managers now
place as much emphasis on identifying organizational culture as they do on mission and
vision.

II. OBJECTIVE

Many experts believe that developing a strong organizational culture is essential for
successful customer satisfaction. While the link between organizational culture and
organizational effectiveness is far from certain, there is no denying that each organization has
a unique social structure and that these social structures drive much of the individual behavior
observed in organizations (Frost, 1985). The aim of this paper is to examine the impact of
organizational culture on customer satisfaction in an organization. Factors that influence
employee behavior play an important role in determining how the organization will act, how
it will accomplish its goals, and how it will treat its customers (Figure I).

Figure I
Factors of Organizational Climate and Customer Satisfaction

Internal Factors: Leadership; Planning; Customer Focus; Information and Analysis;
	Human Resource Focus; Mission Statement; Economic Conditions; Language; Training
Service Factors: Involvement; Reliability; Responsiveness; Assurance; Empathy
Performance Factors: Monetary Gain; Value Gain; Self Satisfaction; Acknowledgement

Even today, organizational culture is one key that points to the success of such
organizations as Southwest Airlines, Microsoft and Oracle, and to the failure of leadership in
other organizations such as Hewlett-Packard and Walt Disney. It is widely recognized that
cultural difference is one of the most common reasons for failure in mergers (AOL and Time
Warner). In 1989, less than a decade after the term corporate culture (Kotter and Heskett,
1989) came into general use, Time, Inc. blocked a hostile bid by Paramount by arguing that
its culture would be destroyed or changed by the takeover, to the detriment of its customers,
its shareholders and society. The judge ruled in Time’s favor. A recent electronic search of
the topic on the Internet suggests that in the past five (5) years alone, authors have published
53,500 articles and reports attempting to examine the effect of culture in the
organizational arena.

III. LITERATURE REVIEW

In “Why Is Corporate Culture Important?” (Workforce, February 1999), William G. Bliss
writes that culture is critical to the success of any organization. He states that many call it
that “soft HR type thing,” but corporate culture is important because it sums up an
organization’s total values, virtues, and accepted behavior (both good and not so good). He
describes an organizational culture as, for example, aggressive, customer-focused,
innovative, honest, or research-driven, and goes on to say that the impact of culture on the
bottom line can be quite high. In “Employee Satisfaction + Customer Satisfaction +
Sustained Profitability: Digital Equipment Corporation’s Strategic Quality Efforts” (Center
for Quality of Management Journal, Fall 1995, Volume 4, Employee Involvement Special
Issue), Betty Bailey and Robert Dandrade discuss the experiences and successes of
companies including Banc One, Taco Bell and others. They stress that the correlation of
putting employees and customers first with profits has necessitated new ways of managing
and measuring success. In Corporate Culture and Performance, John P. Kotter and James L.
Heskett state that culture is more important to bottom-line performance than strategy,
structure, financial analysis and management systems. Their studies show the following:
1. Corporate culture can have a significant impact on a firm’s long-term economic
performance.
2. Corporate culture will probably be an even more important factor in determining the
success or failure of firms in the next decade.
3. Corporate culture that inhibits strong, long-term financial performance is not rare; it
develops easily, even in firms that are full of reasonable and intelligent people.
4. Although tough to change, corporate culture can be made more performance
enhancing.

In Organizational Culture, Peter J. Frost, Larry F. Moore, Meryl Reis Louis, Craig
C. Lundberg and Joanne Martin state that the structuring of an organization into work roles
in turn influences the patterns of interaction found in the organization. Collective
understanding may, therefore, shape conceptions of self as well as opinions of others and, in
essence, provide an interpretive system that employees can use to make sense of ongoing
events and customer satisfaction. In her cover story “A Winning Culture Beats the
Competition” (Communication World, August-September 1998, Vol. 15, No. 7, p. 50), Jill
Langendorff Folan describes how a corporate culture can differentiate an organization from
its competitors. A winning culture, she writes, encourages trust, learning, growth, and
courage, and expects out-of-the-box thinking, creative problem solving, quality customer
service and excellent performance on a daily basis. She states that the only sustainable
competitive advantage a company has is its people; culture, leadership, and commitment are
critical to the success of an organization.

This study has one purpose: to examine the impact of organizational culture on
customer satisfaction. The study utilizes data gathered from two organizations to assess
how organizational culture affects customer satisfaction. It was predicted that organizational
culture would be a significant differentiator when the focal point of the organization was the
customer. Moore and Kelly (1996) proposed that having a service-oriented culture allows
human service organizations to meet customer expectations in a way that a more technically
focused culture could not. The authors further claim that it is not possible to understand
service quality in a human service industry without viewing the service from the client’s
point of view. Additionally, providing employees with the information and knowledge they
need to solve customer problems immediately, without managerial intervention, has further
increased customer satisfaction levels (Benko, 2001).

Hypothesis: If an organization has a strong culture and demonstrates an integrated and
effective set of values, beliefs and behaviors, then it will perform at a higher level of
productivity.

IV. METHODOLOGY OF RESEARCH

The study presented in this research utilized a survey instrument to measure employee
and customer satisfaction. The survey measured satisfaction with several areas including
autonomy in resolving problems, a supportive work environment, cultural diversity, training,
courtesy, initial services, and overall experience. The survey instrument was self-
administered.

Denison’s adaptability culture (Denison, 1984) measures an organization’s customer
focus, organizational learning, and ability to create innovation and change. Employee
empowerment, employee development, and training are indicative of what Denison refers to
as a high-involvement culture. It has also been suggested that increased employee
empowerment increases customer satisfaction, and other research has emphasized the
importance of employee training and development as tools to enhance customer satisfaction.

Procedure: Participants received the satisfaction survey from the management teams of their
respective organizations. Responses were collected from employees across all levels of the
organization, and participants were instructed to place completed surveys in designated
survey boxes located throughout the organizations. Customer satisfaction surveys were
handed to customers by the staff providing the services; customers were asked to complete
a survey soon after they received the service and to place it in the same designated survey
boxes.
Participants: One hundred and seventy-seven (177) employees and four hundred and sixty-
five (465) customers received the surveys. Surveys were completed by fifty (50) employees
at Employer A’s location and twenty (20) employees at Employer B’s location, for a total of
seventy (70) employee responses from both locations. Surveys were completed by two
hundred (200) customers at Employer A’s location and fifteen (15) customers at Employer
B’s location, for a total of two hundred and fifteen (215) customer responses. Overall, 100%
of the employees at Employer A’s location were satisfied with the organization, while 25%
of the employees at Employer B’s location were satisfied. 70.57% of the customers at
Employer A’s location were satisfied with the service received, and 13% of the customers at
Employer B’s location were satisfied with the service received. The research reflects the
data used to compile the results from both organizations.

V. DATA ANALYSIS

The data were analyzed based on the number of responses received from each
organization. Surveys were tallied and compared to create a chart reflecting satisfaction
levels. The satisfaction scores are reported as the percentage of customers who responded in
the various areas. An abbreviated summary of the results and the survey instrument appears
below (Figures II & III).
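The tallying step described above can be illustrated with a short, hypothetical Python sketch. The scoring rule (counting "excellent" and "good" responses as satisfied) is an assumption for illustration, since the paper does not state how satisfaction was scored; the sample data reuses the item 2 tallies reported in Figure III.

```python
# Hypothetical sketch of tallying survey responses into a percentage-satisfied
# score. The "satisfied" categories are an assumption, not the study's rule.
from collections import Counter

def satisfaction_rate(responses, satisfied=("excellent", "good")):
    """Return the percentage of responses falling into 'satisfied' categories."""
    counts = Counter(responses)
    total = sum(counts.values())
    hits = sum(n for category, n in counts.items() if category in satisfied)
    return round(100.0 * hits / total, 2)

# 10 "excellent", 26 "good", and 12 "fair" responses (48 total), as in Figure III.
sample = ["excellent"] * 10 + ["good"] * 26 + ["fair"] * 12
rate = satisfaction_rate(sample)  # 36 of 48 responses → 75.0
```

Comparing such rates across the two employers is all the chart in Figure II does; the substantive question is which response categories an organization chooses to treat as "satisfied."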

Figure II
Survey Results: Abbreviated Summary for Employers A & B
[Bar chart comparing response counts for Employer A and Employer B; vertical axis 0-300.]

Figure III
Abbreviated Employee Satisfaction Survey Results

In order to measure staff satisfaction with the quality of management support and direction
being provided, the department instituted this survey instrument to gather feedback. Change
occurs when validated data are presented to demonstrate the need for change. Respondents
were asked to place the completed survey in designated boxes, to be candid and frank, and to
circle the response that best addressed each question or statement.

1. How long have you been an employee of Employer A or B?
	1-2 years: 20 (41%); 2-3 years: 14 (29%); 5-10 years: 5 (10%);
	More than 10 years: 9 (18.7%)
2. Employer A or B is?
	An excellent place to work: 10 (20.8%); A good place to work: 26 (54.1%);
	A fair place to work: 12 (25%)
3. Within our program, management gives staff autonomy in resolving problems
	involving clients and other staff.
	Always: 8 (16.6%); Most of the time: 19 (39.5%); Sometimes: 17 (35.4%);
	Not very often: 2 (4%); Never: 1 (2%)
[Remaining survey items are illegible in the source.]

VI. RESULTS

Culture is an extremely powerful business performance tool. “The way we do things
around here” (Deal and Kennedy, 1982) generally has more impact than the formally
written policies and procedures found in every organization. Many organizations have found
that when cultural goals and customer service conflict, time and again culture wins out.
Cultural barriers such as resistance to change, lack of shared values, or lack of teamwork can
undermine the structure needed to manage a successful organization and provide excellent
customer service. The survey results reflect that when employees are satisfied with the work
environment, there is a greater chance of the organization achieving a higher
customer-satisfaction rating.

VII. CONCLUSION

In summary, these results show the impact of organizational culture on customer
satisfaction and indicate that culture is associated with changes in productivity. The current
results show that the correlation between organizational culture and customer satisfaction is
stronger when there are shared beliefs and values. It could be argued that culture helps to
shape organizational climate, and that climate, in turn, may influence productivity and
customer satisfaction. Ultimately, what makes it possible for people to function comfortably
with each other and to concentrate on their primary task is a high degree of agreement on
issues. The culture that eventually evolves in a particular organization is thus a complex
outcome of external forces, internal potential, and responses to events. For businesses to
distinguish themselves from other organizations, it is critical that management create a
culture that is conducive to success in light of the external environment and that it treat
customer satisfaction as a key factor.

REFERENCES

A. BOOKS:
Hofstede, G. Culture’s Consequences: International Differences in Work-Related Values.
Beverly Hills, CA: Sage Publications, 1980.
Deal, T. E., and A. A. Kennedy. Corporate Cultures: The Rites and Rituals of Corporate Life.
Harmondsworth, England: Penguin Books, 1982.
Frost, Peter J., Larry F. Moore, Meryl Reis Louis, Craig C. Lundberg, and Joanne Martin.
Organizational Culture. Beverly Hills, CA: Sage Publications, 1985.
Kotter, John P., and James L. Heskett. Corporate Culture and Performance. New York, NY:
The Free Press, 1992.
B. JOURNAL ARTICLES:
Bailey, Betty, and Robert Dandrade. “Employee Satisfaction + Customer Satisfaction =
Sustained Profitability: Digital Equipment Corporation’s Strategic Quality Efforts.”
Center for Quality of Management Journal 4, no. 3 (1995).
Benko, L. B. “Getting the Royal Treatment.” Modern Healthcare 39 (2001): 28-32.
Bliss, William G. “Why Is Corporate Culture Important?” Workforce 78 (1999): 12.
Denison, D. “Bringing Corporate Culture to the Bottom Line.” Organizational Dynamics 13,
no. 2 (1984): 4-22.
Folan, Jill Langendorff. “A Winning Culture Beats the Competition (Corporate Culture).”
Communication World 15, no. 7 (1998): 50.
Moore, S. T., and M. J. Kelly. “Quality Now: Moving Human Service Organizations Toward
a Customer Orientation to Service Quality.” Social Work 41 (1996): 33-40.

DIVERSITY IN THE WORKPLACE

Carolyn Ashe, University of Houston-Downtown


ashec@uhd.edu

Chynette Nealy, University of Houston-Downtown


nealyc@uhd.edu

ABSTRACT

This exploratory research analyzes diversity in today’s workplace at two different
Fortune 500 companies and describes the support groups and diversity methods each uses.
The two selected organizations are not only Fortune 500 companies but also two of the
largest and most profitable corporations in the United States. Given their leadership roles in
their respective market segments, each of these organizations is capable of making a
significant contribution to changing the way the business world acts.

I. INTRODUCTION

Diversity is one of the most important attributes a company can maintain in today’s
business environment. Diversity is defined as, but not limited to, those observed and inferred
differences among an organization’s employees that are rooted in their distinctive culture,
gender, age, geographic region, ethnic group affiliation, and related characteristics
(Rhea 2003).

Building a diverse environment allows a company to gain a competitive advantage in
any industry. By making diversity one of its most important goals, management and team
leaders are able to provide customers excellent service and understand customers’ needs
at a whole new level. Companies need not only to strive for a diverse makeup of
supervisors, management, and team leaders but also to support the training, promotion, and
planning of a diverse infrastructure.

II. PURPOSE

The purpose of this exploratory research was to provide information on the status of
diversity initiatives at two of the most successful companies (Exxon Mobil and Coca Cola) in
the United States and how they are committed to addressing diversity in the workplace. The
goal is to analyze their vision and goals pertaining to company diversity and examine what
they have accomplished as well as where they can improve. A review of these companies
illustrates how companies in general are viewing diversity and the importance of it, whether
they are succeeding in their goals of diversity, and future diversity management plans.

III. EXXON MOBIL

In 1999, Exxon and Mobil merged to become today’s Exxon Mobil, the world's
second-largest integrated oil company. Both Exxon and Mobil began their businesses in the
late 19th century when the petroleum industry was booming (From Kerosene to Gasoline
2005). The merger enhanced the ability of both companies to become global competitors.

Exxon Mobil is now a major competitor in the production of petrochemicals as well as oil
and gas exploration, production, supply, and transportation around the world (Williams
2005).

Exxon Mobil's 45 refineries in 25 countries have the capacity to produce 6.3 million
barrels of oil per day. The company supplies refined products to 42,000 service stations in
more than 100 countries that operate under the Exxon name. Having service stations and
refineries worldwide ensures that Exxon Mobil will have a diverse workforce and deal with
a diverse customer base on a daily basis.

Views On Diversity: Lee R. Raymond, the CEO of Exxon Mobil, states that Exxon
Mobil’s diverse workforce is a key competitive advantage for a worldwide enterprise
(A Letter from CEO, 1). Exxon Mobil understands that business is about people. Employees
are more productive working in an environment where everyone gets equal opportunities to
grow and excel. Exxon Mobil has built a diverse worldwide workforce by hiring employees
with a shared focus on attaining superior business results. Raymond points out that in this
increasingly global environment, Exxon Mobil will continue its efforts to find diversely
talented people worldwide to join its business.

Hiring Policies: Exxon Mobil relies primarily on college recruiting conducted
worldwide to hire people ranging from upper management to skilled laborers (Global
Diversity – Recruiting, 1). Campus recruiting and hiring programs have also allowed Exxon
Mobil to develop an excellent workforce of diverse, intelligent employees who have
significantly helped build Exxon Mobil into one of the strongest petroleum companies in the
world.

Truman Bell, the education program officer, stated that the company has a variety of
jobs open to anyone wanting to become an employee at Exxon Mobil. There are also many
organizations associated with Exxon Mobil (including the Society of Women Engineers, the
National Society of Black Engineers, the Society of Hispanic Professional Engineers, and the
American Indian Science and Engineering Society) that help support and train future
employees for Exxon Mobil in diversity and communication.

Awareness Training: Exxon Mobil uses formal training classes, informal group
sessions, newsletters, team-building activities, and brown-bag lunches with guest speakers as
diversity initiatives around the world (Career Development 2005). Through these programs
and activities the company has effectively increased employees’ awareness of diverse work
environments and helped them adjust to changing work environments.

Impact On Business: Exxon Mobil's retail marketers worldwide tailor their products
and promotions to meet their customers' needs. Hiring local employees and sponsoring
events shows that Exxon Mobil is attentive to ethnic and cultural differences, which leads to
improved sales and stronger brand loyalty. In the United States, for example, Exxon and
Mobil stores promote Black History Month to celebrate the achievements and contributions
of African-Americans.

An international workforce built Exxon Mobil’s $2 billion Singapore Chemical
Complex. Employees hired locally and from around the world operate this facility on a
daily basis (Global Diversity - Essential to Success 2005). The success of this project
symbolizes Exxon Mobil’s success in managing and utilizing diversity.

Accomplishments In Diversity: In 2003, Exxon Mobil hired more than 1,200 new
professional employees worldwide; 40% of these were women, and 64% were hired outside
the United States. Notably, many of these new employees came from the world's developing
countries (Diversity/Career 2005).

IV. COCA COLA

On May 8, 1886, the Coca-Cola product was first produced. Dr. Pemberton, a local
pharmacist in Atlanta, Georgia, created the formula through a combination of syrup and
carbonated water. Prior to his death in 1888, Dr. Pemberton sold the remaining portions of
his business to Asa G. Candler. Mr. Candler bought the additional rights and acquired
complete control of the company, purchasing all of its shares in 1891 for $2,300. Mr.
Candler, John S. Candler, Frank Robinson, and two other associates then formed The
Coca-Cola Company as a Georgia corporation. In 1895, Mr. Candler presented the annual
shareholders’ report, which announced that every state and territory in the United States of
America now enjoyed the refreshing Coca-Cola beverage (Heritage 2005).

In 1919, the Candler group sold The Coca-Cola Company to Atlanta banker Ernest
Woodruff and an investor group for $25 million. Four years later, Robert Winship Woodruff,
Ernest Woodruff’s son, was elected president. Robert Woodruff led the company for more
than six decades, placing the emphasis on the quality of the beverage (Heritage 2005).

At this time, the Coca-Cola product was bottled and distributed internationally.
Through the early 1900s, there were bottling operations in Cuba, Panama, Canada, Puerto
Rico, the Philippines, Guam, and France. The Coca-Cola Company now operates in more
than 200 countries and produces nearly 400 brands of drinks. For more than 115 years, The
Coca-Cola Company has continued to produce drinks for individuals around the world
(Heritage 2005).

Views On Diversity: Since Coca-Cola products extend across more than 200 countries
speaking more than 100 languages, The Coca-Cola Company strives to be a special part of
people’s lives. The company feels that with this much market presence comes responsibility,
and it has chosen to take a leadership role regarding diversity. The company believes that
individual differences make it stronger in the business market and that diversity (whether in
race, gender, sexual orientation, ideas, ways of living, cultures, or business practices)
provides the creativity and innovation essential to the company’s economic well-being
(Our Company: Diversity 2005).

Awareness And Training: The Coca-Cola Company values diversity and has many
programs and forums to help promote diversity in the workplace. The company believes that
the heart and soul of the enterprise has always been the people who work there. Over the
past century, Coca-Cola employees have succeeded by living and working with a consistent
set of values. The Coca-Cola Company understands that the business world is constantly
changing, but respecting these values will continue to be essential to its long-term success.
As the company has expanded over the decades, it has benefited from the various cultures
and experiences of the societies that are a part of the business. The company believes that its
future success will depend on its ability to develop a company that is rich in diversity of
people, cultures, and ideas. The company is also determined to build a diverse culture from
top to bottom that benefits from the perspectives of individual workers (Our Company:
Diversity at Work 2005).

In addition to being aware of and understanding the need for diversity, the Coca-Cola
Company went a step further by forming a unique partnership with the American Institute for
Managing Diversity to help create the Diversity Leadership Academy (DLA). The academy
receives funds totaling $1.5 million through grant money contributed by The Coca-Cola
Company. Formed in 2001, the DLA is an innovative experiential learning program for
leaders from all sectors of our society, including individuals from education, government,
religion, non-profit, and for-profit groups. The program brings leaders together monthly
over a five-month period to learn the principles of diversity management, benefit from the
knowledge of others, and work collectively to address the diversity issues of the community
(Our Company: Diversity Community Support 2005).

Awards: The Coca-Cola Company has received numerous awards regarding the
diversity of the enterprise. Some of the most recent awards are listed below (Our Company:
Awards 2005):
• LATINA Style 50, LATINA Style Magazine (2005)
• 50 Best Companies for Minorities, Fortune (2004)
• Top 50 Companies for Minorities, Diversity Inc. (2004)
• Top 50 Companies for Diversity, Diversity Inc. (2003/2004)
• Corporate Commitment to Minority Business Entrepreneurs Award, Houston
Minority Business Council (2003)
• Corporate Commitment Award, Houston Minority Business Development Council
(2003)

V. CONCLUSIONS

In future years, a key focus of all businesses will be diversity in the workplace.
Diversity means that a company has a wide variety along certain dimensions: age, race,
ethnicity, physical ability, gender, and sexual orientation. With many businesses deciding to
go global, it will be crucial for companies to hire a diverse staff that can help achieve
business objectives more efficiently. The biggest concern for top management is how to
manage all these diverse employees and still remain a successful business. These are some
of the issues that Exxon Mobil and Coca-Cola face every day.

After the merger in 1999, Exxon Mobil became the second-largest integrated oil
company. The company does a great deal of work globally and recognized the need for
diverse employees to help with oil and gas exploration, production, supply, transportation,
and petrochemical production, as well as marketing around the world. Exxon Mobil
understands that business is all about people and that a diverse staff can help employees look
at new challenges with a wide range of ideas and perspectives while teaching each other at
the same time. To show its dedication to diversity, Exxon Mobil recruits new employees at
over 100 colleges and universities in the United States and at more than 100 in other
countries. By doing this, the company is able to keep its workforce diverse and highly
qualified for the work it must do to help the company succeed.

Coca-Cola also realized the importance of having a diverse company. The company
started in 1886 and has been a part of many people’s lives since then. Coca-Cola makes
over 400 products and distributes them to over 200 countries speaking more than 100
different languages. Since it deals with so many different cultures, it is crucial for
Coca-Cola to have a very diverse workforce that can take care of all of its customers’ needs
and wants. Employees must be able to tell the company the best way to market its products
and the best way to get those products to customers. Coca-Cola must work together as a
whole to reach the goals it has set.

VI. RECOMMENDATIONS

These are only two of the many companies that feel a diverse workplace gives
them a competitive advantage over other companies in the same line of work. Diversity is
essential for all businesses that plan to go global and should be an important area of
concern as we move into the future. A look at the United States Bureau of Labor Statistics
shows that business trends are changing in favor of a more diverse workplace. The Hispanic
labor force will continue to grow at a steady pace, though not as fast as the Asian workforce,
for which projections indicate growth of over 45% in a single decade, with no signs of
slowing down (U.S. Department of Labor 2005).

Another large part of diversity concerns gender. Top management was once
dominated by men, but women now occupy almost half of the workforce in the United
States. Expect business trends concerning gender diversity to change at a rapid pace as
companies realize its importance. In the past, many companies used terms such as
affirmative action and understanding different cultures as key words to emphasize the
importance of diversity. A better term for the future is diversity management, which deals
with making sure that everyone understands the cultural differences of all employees.
Diversity management can be used advantageously when looking to the future.

REFERENCES

Career Development. Retrieved September 23, 2005, from
http://www.exxonmobil.com/Corporate/Newsroom/Publications/diversity/c_career.html.
Diversity/Careers. Retrieved October 12, 2005, from http://www.diversitycareers.com.
From Kerosene to Gasoline. Retrieved October 12, 2005, from
http://exxonmobil.com/Corporate/About/History/Corp_A_H_Kerosene.asp.
Global Diversity - A Letter from CEO. Retrieved October 12, 2005, from
http://www.exxonmobil.com/Corporate/Newsroom/Publications/diversity/c_letter_CEO.html.
Global Diversity – Recruiting. Retrieved October 11, 2005, from
http://www.exxonmobil.com/Corporate/Newsroom/Publications/diversity/c_recruiting.html.
Global Diversity – The 1990s: A Decade of Progress. Retrieved October 21, 2005, from
http://www.exxonmobil.com/Corporate/Newsroom/Publications/diversity/c_whyfocus.html.
Global Diversity - Essential to Success. Retrieved October 22, 2005, from
http://www.exxonmobil.com/Corporate/Newsroom/Publications/diversity/c_whyfocus.html.

CHAPTER 18
MANUFACTURING AND SERVICE

THE ROLE OF ELECTRONIC DATA INTERCHANGE IN
SUPPLY CHAIN MANAGEMENT

Mohammad Z. Bsat, Jackson State University


Mohammad.z.bsat@jsums.edu

Astrid M. Beckers, Jackson State University


Cultures_etc@excite.com

ABSTRACT

Ten years ago, even the most sophisticated retailers were just mastering EDI
(electronic data interchange) communication with vendors. Today, hyper-efficient supply
chains differentiate the world's leading retailers from the average retailers with merely super-
efficient supply chains. Regardless of the phase your business is in, an effective strategy for
measuring your supply chain effectiveness is a more complex proposition than ever.

I. INTRODUCTION

Supply chain measurement is essential for an effective c-commerce strategy. While
the rewards are large, packaged application vendors are still scrambling to build solutions,
and metrics are emerging. SCM performance management is a key component of
collaborative commerce (c-commerce), enabling an enterprise to see into existing operations
and thus be more flexible and responsive to changing business conditions. Through
collaborative Internet-enabled technologies and performance management, enterprises can
improve agility, monitor effectiveness and drive competitiveness within the extended supply
chain. Many vendors claim to offer applications that support enterprise or collaborative
supply chain performance management, but there is no clear leader in the market, leaving
users to choose between four classes of immature solutions.

The Necessity and the Opportunity: While implementations of SCM analytics can be
difficult (determining the right metrics and gathering and cleaning data), the need for a
supply chain performance management system is growing. Outsourced operations and
strategic sourcing projects are accelerating the need for enterprise performance management
and continuous improvement. Through 2004, enterprises that implement interenterprise
metrics to measure the value of c-commerce initiatives will increase their ROI over a five-
year period (0.7 probability).

II. SUPPLY CHAIN MEASUREMENTS

As supply chains change from linear to nonlinear customer delivery models for both
products and services, performance management is key to guaranteeing that each enterprise
can not only measure its own performance but also monitor and evaluate outsourced
dependent operations. In the face of this opportunity, many enterprises are rushing into
extended supply chain collaborative performance metrics without rationalizing the processes
of collaborative performance measurement. To be successful, enterprises must rethink
existing processes to design effective outward-facing processes and determine extended
supply chain metrics to enable differentiation. While future B2B relationships will use
common metrics to manage risk, determine entry, identify preferred trading partners, and
monitor and correct behavior, users today must design and develop their own metrics,
performance management solutions and processes. Because of the complexity of this
undertaking, progress on a common standard is not expected until 2003, with fewer than five
vendors expected to deliver solutions that measure multidimensional supply chain
effectiveness for multiparty processes (see Figure 1).

Figure 1. Successful Collaborative Performance Measurement
The Metrics: Although enterprise metrics have been defined through industry consortia with
available historical benchmarking (see Note 1), the right measurements for multiparty,
outward-facing processes are still emerging. These are being developed by supply chain
innovators and e-marketplace providers — an interenterprise model is not expected before the
second half of 2002. Users interested in developing performance measurement systems for
collaborative, extended supply chain performance measurement projects should use the
following guidelines:

1. Enterprise performance management should precede collaborative supply chain
measurement. Successful efforts in measuring extended supply chain performance start with
enterprise performance measurement to develop an understanding of performance
measurement and to build the data sources. Since the data for supply chain performance
measurement is often not readily available, these processes can take 12 to 18 months. It is
critical to understand internal performance before engaging a trading partner in measurement
activities.

In planning for a supply chain analytics project, extra time should be allowed to identify,
harvest and cleanse data to ensure that KPIs have a meaningful tie to business objectives.
Industry consortia and benchmarking sources are useful in determining industry specific KPIs
and benchmarks.

2. Focus on KPI Definition — The Devil Is in the Detail: Common enterprise KPIs include
supplier performance, supply chain response time, forecast accuracy, inventory balances,
manufacturing cycle time and delivery performance. However, users find that even the
three-level SCOR Model (see Note 2) must be used with caution to ensure a consistent
definition. The SCOR Model is a general standard that allows enterprises to define KPIs
differently (e.g., how is on-time delivery defined — is it the date the goods were shipped, the
date of arrival, or the time they were dropped in the customer's trailer yard?). To be
successful and to benchmark accurately against the competition, trading partner communities
must rationalize and define metrics rather than allow participants to define their own
"version of truth."
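
The definitional pitfall just described can be made concrete with a small sketch. The shipment records and field names below are hypothetical, invented purely for illustration; the point is that the same data yields very different KPI values depending on which event counts as "delivery":

```python
# The same three shipments scored against the same due dates, once by ship
# date and once by arrival date -- the KPI definition changes the answer.
from datetime import date

shipments = [
    {"due": date(2006, 3, 10), "shipped": date(2006, 3, 9),  "arrived": date(2006, 3, 12)},
    {"due": date(2006, 3, 15), "shipped": date(2006, 3, 15), "arrived": date(2006, 3, 15)},
    {"due": date(2006, 3, 20), "shipped": date(2006, 3, 19), "arrived": date(2006, 3, 21)},
]

def on_time_pct(shipments, event):
    """Percentage of shipments whose chosen event met or beat the due date."""
    hits = sum(1 for s in shipments if s[event] <= s["due"])
    return round(100.0 * hits / len(shipments), 1)

print(on_time_pct(shipments, "shipped"))  # 100.0 -- every order shipped on time
print(on_time_pct(shipments, "arrived"))  # 33.3  -- but only one arrived on time
```

Unless trading partners agree on the event being measured, each participant can truthfully report a different "on-time delivery" figure from identical data.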

3. Start and End With the Business Agreement: Successful performance management of
extended trading relationships starts with the business agreement. While many metrics are
"nice to have," evaluation of collaboration pilots supports the assertion that sustainable
performance measurement systems are closely tied to the agreement.

4. Plan for Complexity: Complexities include the inconsistency of measurement definition,
semantic reconciliation, information accuracy and data latency. By definition, supply chain
data is complex, with large data sets to cleanse and normalize.

5. Where Possible, Take Advantage of ABC Analysis: Supply chain performance
measurement is converging with activity-based costing and activity-based budgeting (see
Note 3). Rather than measuring past performance, these projects aim to create value-based
investment by answering the following questions:

• What is the true cost of customer service by customer across the extended supply
chain?
• What are the true costs of outsourcing goods and services?
• How well is the extended supply chain performing and where does performance need
to be improved?

Current State of SCM BI Applications


Most ERP and SCM vendors are investing in marketing and building new BI applications.
However, even well-known software vendors' solutions — such as i2 Technologies or SAP
— are immature with few (or no) live customers (see Note 4). Users must evaluate four
classes of solution:

1. Additional modules offered by SCM suite vendors: SCM and SCE vendors that have
included supply chain analytic solutions as part of their offering.
2. Emerging SCM performance management specialists: Best-of-breed vendors that
have packaged supply chain performance management and industry specific KPIs for
heterogeneous application environments.
3. Solutions offered as an additional module by ERP expansionists: As an extension of
ERP, ERP vendors in the late 1990s marketed application suites that included
operational, executional and analytical capabilities. As these processes became more
collaborative, ERP II developed as a set of business practices optimizing enterprise
and interenterprise processes for collaboration, operational and financial processes.
4. New modules being developed by BI suites and platforms: Enterprise BI suites offer
multiple styles of BI functionality, including ad hoc query, reporting, charting,
multidimensional viewing and light analysis (such as trending). Enterprise BI suites
focus on scalability, usability and manageability. BI Platforms are a more complete BI
offering, with a complete set of tools for the creation, deployment, support and
maintenance of BI applications. These are data-rich applications, with custom end-
user interfaces, organized around specific business problems, with targeted analyses
and models.

Selection should be made by a cross-functional team of IT and business professionals
focusing on the trade-offs of these four classes of solution, as illustrated in Figure 2.

Conclusions

Bottom Line: While enterprises should embrace supply chain performance initiatives, the
initial focus must be on enterprise performance management. Due to the complexity of
collaborative performance measurement, users must be conservative in project expectations
and resist market hype, as vendors are offering BI tools, but are struggling with product
delivery.


THE EFFECT OF GENDER ON APOLOGY STRATEGIES

Astrid M. Beckers, Jackson State University


Cultures_etc@excite.com

Mohammad Z. Bsat, Jackson State University


Mohammad.z.bsat@jsums.edu

ABSTRACT

This study investigates potential gender effects in American university students’ use
of apologies within the framework of the two-culture theory, which claims that
men and women are so different that they comprise strikingly different cultures. Our findings
revealed that male and female respondents used the primary apology strategies of statement
of remorse, accounts, compensation, and reparation. They also resorted to the use of non-
apology strategies such as blaming the victim and brushing off the incident as not important
to exonerate themselves from blame. The findings further revealed that male and female
respondents used the same primary strategies but in different frequencies. In addition, female
respondents used fewer non-apology strategies than their male counterparts and more
manifestations of the statement of remorse.

I. INTRODUCTION

This study is mainly concerned with potential gender effects in American university
students’ use of apology strategies. The researchers adopted the controversial and much-criticized (Cameron, McAlister and O'Leary, 1989; Troemel-Plotz, 1991), yet partially evidenced (Michaud and Warner, 1997; Basow and Rubenfeld, 2003), views of the two-
culture theory which claims that men and women exist in different cultural worlds (as
opposed to the dominance theory which claims that men and women exist in the same
cultural world in which power and status are distributed unequally). Proponents of the two-
culture view claim that due to the striking differences between them, men and women belong
to different ‘communication cultures’ (Maltz and Borker, 1982; Tannen, 1990; Gray, 1992;
Schloff and Yudkin, 1993) or ‘speech communities’ (Wood, 2000; 2002).

II. LITERATURE REVIEW

The two-culture theory has mainly focused on gender differences in ‘troubles talk’,
intimacy, and emotion (Jefferson, 1988; Tannen, 1990). Bate and Bowker (1997: 166) claim
that "caring seems to be the principal category that differentiates one sex from the other".
Proponents of this theory claim that girls are taught that talk is the primary vehicle to
establish and maintain intimacy and connectedness (Maltz and Borker, 1982), while boys are
socialized to view talk as a mechanism for getting things done, accomplishing instrumental
tasks, conveying information, and maintaining status and autonomy (Wood and Inman,
1993).

The two-culture hypothesis “maintains that gender-specific socialization of boys and girls leads to different masculine and feminine speech communities. These communities represent different cultures - people who have different ways of speaking, acting, and interpreting, as well as different values, priorities, and agendas” (MacGeorge, Graves, Feng, Gillihan, and Burleson, 2004: 1).

The relationship between language and gender during childhood has been widely addressed in the literature (Maltz and Borker, 1982; Huston, 1985; Tannen, 1990; Leaper, 1991; 1994; Swann, 1992; Maccoby, 1998; Wood, 2001), which suggests that girls are more likely to use language to form and maintain connections with others, whereas boys are more likely to use language to assert their independence, establish dominance, and achieve goals. Tannen's (1990; 1994; 1995) research suggests that men and women have different modes of communication and, thus, that communication between them ought to be viewed as intercultural communication. She (1990: 85) further argues that girls are socialized as children to believe that "talk is the glue that holds relationships together", which is later reflected in their perceptions of conversations as "negotiations for closeness in which people try to seek and give confirmation and support, and to reach consensus". Boys, on the other hand, are taught to maintain relationships through their activities, which later colors a man’s perception of conversations as contests ‘in which he [is] either one-up or one-down’. Along the same lines, Wood claims that “much of the misunderstanding that plagues communication between women and men results from the fact that they are typically socialized in discrete speech communities” (2000: 207).

The findings of much of the research on gender differences in supportive communication have been inconsistent with the two-culture theory. In their detailed critique of Tannen (1990), Goldsmith and Fulfs (1999) concluded that virtually none of the empirical generalizations forwarded by Tannen were adequately evidenced. Similarly, Thorne (1993) critiqued Maltz and Borker’s (1982) qualitative research, while Burleson (1997) and Vangelisti (1997) discussed the pitfalls in much of Wood’s work (Wood, 1993; 1997; 2000). However, in a study of 145 men and 239 women, Michaud and Warner (1997) claim to find support for Tannen's (1990) analysis of gender differences in ‘troubles talk’. They presented participants with six ‘troubles talk’ situations and asked them to rate the likelihood of using six communication strategies corresponding to those described by Tannen (1990). They found statistically significant gender differences for three of six message strategies used to provide support, for all seven emotional responses to advice, and for three of seven emotional responses to sympathy. Michaud and Warner (1997: 537) concluded that "many statistically significant differences were found in this study, and all were in the direction predicted by Tannen's work", although they noted that "the effect sizes were very small, even for the differences that were statistically significant" (1997: 538).

To draw conclusions that pertain to the major question of the research, the researchers first attempt to identify apology strategies and cross-reference them against those presented by Sugimoto (1997). More specifically, the study aims to answer the following questions, of which the second is the central focus of the research:

1. What are the apology strategies used by American male and female undergraduate students?
2. What are the potential differences in the use of apology strategies between male and female respondents?
3. Do Sugimoto’s findings on American respondents’ use of apology strategies hold true for this group of respondents?

In her study of Japanese and American apology strategies, Sugimoto (1997) put forth primary strategies, which include statement of remorse, accounts, description of damage, and reparation; secondary strategies, which include compensation and the promise not to repeat the offense; and seldom-used strategies, which include explicit assessment of responsibility, contextualization, self-castigation, and gratitude. While remaining open to other strategies, the present researchers use these as the basis of their data analysis.

III. METHODOLOGY

The population of the study consisted of American undergraduate students at the University of Georgia. The sample consisted of a randomly selected group of one hundred 16-30-year-old American male and female undergraduate students from various areas of specialization in the undergraduate program in the spring semester of 2005.

The instrument is an adaptation of the questionnaire used by Sugimoto (1997) to compare the apology strategies used by American and Japanese students. Since the instrument had already been piloted and checked for validity, the present researchers did not repeat these steps. The questionnaire consists of three parts:

1. a short introduction of the study and instructions for answering the questions,
2. a short section aiming at collecting demographic information about the respondents, and
3. ten scenarios, each of which involves a situation requiring an apology (see the Appendix).

One of the researchers personally visited classes and oversaw the data collection
process. She distributed the questionnaire, offered explanations and answered questions, and
collected the completed questionnaires in the course of one class session. The data were then
tallied to identify any potential differences which could be attributed to gender. To discover
the potential effect of gender, the researchers tallied the percentages of the apology strategies
used by male and female respondents.
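The tallying step described above can be illustrated with a short sketch. The responses below are invented stand-ins (the study's coded data are not reproduced here); only the strategy labels follow the paper.

```python
from collections import Counter

# Hypothetical coded responses as (gender, strategy) pairs - invented for illustration
responses = [
    ("F", "statement of remorse"), ("F", "accounts"), ("F", "reparation"),
    ("F", "statement of remorse"), ("M", "accounts"), ("M", "blame the victim"),
    ("M", "statement of remorse"), ("M", "brush off"),
]

def strategy_percentages(responses, gender):
    """Percentage of each apology strategy within one gender group."""
    used = [strategy for g, strategy in responses if g == gender]
    counts = Counter(used)
    return {strategy: 100 * c / len(used) for strategy, c in counts.items()}

for gender in ("F", "M"):
    print(gender, strategy_percentages(responses, gender))
```

Comparing the two resulting dictionaries side by side is exactly the kind of frequency comparison reported in the findings.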

In order to identify the apology strategies used by the sample, the researchers used two types of tables: the first to clarify the method used by each student to show his or her remorse (viz., the statement of remorse), and the second to show the other apology strategies employed in each situation. The statement of remorse was manifested in different realizations, including one expression, two expressions, three expressions, one expression with one or more intensifiers, and two expressions with one or more intensifiers.

The researchers list the apology strategies used by the students, including those which do not imply an apology. One such strategy, not addressed in previous research including Sugimoto’s (1997), is that in which the wrongdoer exonerates him- or herself and instead blames the victim for what happened.

IV. CONCLUSIONS

Gender played an important role in the use of apology strategies, a finding which coincides with reports in previous research. Male and female respondents differed in their use of apology
strategies. Female respondents' tendency to apologize more than their male counterparts was
reflected in their overt use of the statement of remorse. Female respondents also used more
manifestations of the statement of remorse than their male counterparts. Although both male
and female respondents used the same primary strategies of accounts, reparation,
compensation, and self-castigation, female respondents used them more than their male
counterparts. Furthermore, female respondents used slightly fewer non-apology strategies
than male respondents. To remedy the potential ‘intercultural’ misunderstanding between
men and women, proponents of the two-culture theory call on educators to develop programs
that foster ‘multicultural awareness’ of stylistically different, albeit functionally equivalent,
approaches to communication events (Wood, 1993).

REFERENCES

Basow, S.A. and Rubenfeld, K. (2003). "Troubles talk": Effects of gender and gender-typing.
Sex Roles: A Journal of Research, 48, 183-7.
Bate, B. and Bowker, J. (1997). Communication and the sexes (2nd ed.). Prospect Heights,
Illinois: Waveland Press.
Burleson, B. R. (1997). A different voice on different cultures: Illusion and reality in the
study of sex differences in personal relationships. Personal Relationships, 4, 229-41.
Cameron, D. (1992). Naming of parts: Gender, culture, and terms for the penis among
American college students. American Speech, 67, 367-82.
Cameron, D., McAlister, F., and O'Leary, K. (1989). Lakoff in context: The social and
linguistic functions of the tag questions. In J. Coates and D. Cameron (Eds.), Women
in their speech communications. London: Longman.
Goldsmith, D.J. and Fulfs, P.A. (1999). "You just don't have the evidence": An analysis of
claims and evidence in Deborah Tannen's You Just Don't Understand. In M.E. Roloff
(Ed.), Communication yearbook 22. Thousand Oaks, California: Sage.
Goodwin, M.H. (1980). Directive-response speech sequences in girls’ and boys’ task
activities. In S. McConnell-Ginet, R. Borker, and N. Furman (Eds.), Women and
language in literature and society. New York: Praeger.
Gray, J. (1992). Men are from Mars, women are from Venus. New York: Harper Collins.
Huston, A.C. (1985). The development of sex typing: Themes from recent research.
Developmental Review, 5, 1–17.
Jefferson, G. (1988). On the sequential organization of troubles-talk in ordinary conversation.
Social Problems, 35, 418-41.
Lakoff, R.T. (1975). Language and woman’s place. New York: Harper and Row.
Leaper, C. (1991). Influence and involvement in children’s discourse: Age, gender, and
partner effects. Child Development, 62, 797–811.
Leaper, C. (1994). Exploring the consequences of gender segregation on social relationships.
In C. Leaper (Ed.), Childhood gender segregation. San Francisco: Jossey-Bass.
Maccoby, E.E. (1998). The two sexes: Growing up apart, coming together. Cambridge,
Massachusetts: Belknap Press/Harvard University Press.
MacGeorge, E.L, Graves, A.R., Feng, B., Gillihan, S.J., and Burleson, B.R. (2004). The myth
of gender cultures: Similarities outweigh differences in men's and women's provision
of and responses to supportive communication. Sex Roles: A Journal of Research, 50,
143-75.

Maltz, D.N. and Borker, R.A. (1982). A cultural approach to male-female mis-
communication. In J.J. Gumperz (Ed.), Language and social identity. Cambridge:
Cambridge University Press.
Michaud, S.L. and Warner, R.M. (1997). Gender differences in self-reported response to
troubles talk. Sex Roles: A Journal of Research, 37, 527-40.
Porter, R. and Samovar, L. (1985). Approaching intercultural communication. In L. Samovar
and R. Porter (Eds.), Intercultural communication (4th ed). Belmont, California:
Wadsworth.
Schloff, L. and Yudkin, M. (1993). He and she talk: How to communicate with the opposite
sex. New York: Plume Books.
Sugimoto, N. (1997). A Japan-U.S. comparison of apology styles. Communication Research,
24, 4, 349-70.
Swann, J. (1992). Girls, boys, and language. New York: Blackwell.
Tannen, D. (1990). You just don't understand: Women and men in conversation. New York:
William Morrow.
Tannen, D. (1994). Talking from 9 to 5: How women’s and men’s conversational styles
affect who gets heard, who gets credit, and what gets done at work. New York:
William Morrow and Company, Inc.
Tannen, D. (1995). Gender and discourse. New York: Oxford University Press.
Thorne, B. (1993). Gender play: Girls and boys in school. New Brunswick, New Jersey:
Rutgers University Press.
Troemel-Plotz, S. (1991). Selling the apolitical. In J. Coates (Ed.), (1998). Language and
gender: A reader. Oxford: Blackwell.
Vangelisti, A.L. (1997). Gender differences, similarities, and interdependencies: Some
problems with the different cultures perspective. Personal Relationships, 4, 243-53.
Wood, J.T. (1993). Engendered relations: Interaction, caring, power, and responsibility in
intimacy. In S. Duck (Ed.), Social context and relationships. Newbury Park,
California: Sage.
Wood, J.T. (1997). Clarifying the issues. Personal Relationships, 4, 221-8.
Wood, J.T. (2000). Relational communication (2nd ed.). Belmont, California: Wadsworth.
Wood, J.T. (2001). Gendered lives: Communication, gender, and culture. (4th ed.). Belmont,
California: Wadsworth.
Wood, J.T. (2002). A critical response to John Gray's Mars and Venus portrayals of men and
women. Southern Communication Journal, 67, 201-10.

CHAPTER 19

MARKETING

SALESPEOPLE’S PERSONAL VALUES:
THE CASE OF WESTERN PENNSYLVANIA

Tijen Harcar, Penn State University at Beaver


Tuh10@psu.edu

Mahmut Paksoy, Istanbul University


mpaksoy@istanbul.edu.tr

ABSTRACT

Shared personal values among the members of an organization are crucial for long-term success and employee satisfaction. A key contribution of this study is the measurement of the personal values of salespeople in Western Pennsylvania. Schwartz’s Value Inventory was used for the measurement.

I. INTRODUCTION

Values play an important role in understanding the behavior of individuals at work because individuals assess events and take actions using values as their criteria (Kluckhohn 1951, Rokeach 1973, Williams 1968). Values are also central to cultural differences (Hofstede 1980); managers from different cultures are anticipated to vary in the importance that they place on personal values and, since values influence behavior, differences in the importance of values will be reflected in differences in managerial behavior (Bigoness/Blakeley 1996, England/Lee 1974, Ralston et al. 1992). Researchers have analyzed the effect of differences in national culture on a wide range of management issues, including a firm's preference for market entry (Kogut/Singh 1988), the likelihood of joint venture survival (Barkema/Vermeulen 1997), cross-border acquisition performance (Morosini/Shane/Singh 1998), product development (Nakata/Sivakumar 1996), strategic management and decision-making (Newman/Nollen 1996, Geletkanycz 1997, Schuler et al. 1996), the transfer of technology (Kedia/Bhagat 1988), human resource management (Schuler et al. 1996, Schuler/Rogovsky 1998), and managerial performance (Neelankavil/Mathur/Zhang 2000). In spite of the important role that values play in human motivation (Gutman 1982), the personal selling literature is almost devoid of efforts to understand the role of values in the motivation, performance, and satisfaction of sales forces (Apasu et al. 1987). In this study, as a first step toward understanding the role of personal values in the sales force, Schwartz’s Value Inventory was applied to the target population.

Schwartz’s Value Inventory presents a theory of potentially universal aspects in the content of human values. Ten types of values (power, achievement, hedonism, stimulation, self-direction, universalism, benevolence, tradition, conformity, and security) are distinguished by their motivational goals. Schwartz also organizes these values along two basic dimensions: openness to change vs. conservation, and self-transcendence vs. self-enhancement. The first dimension contrasts values that express conservation of the status quo with those that express openness to change; the second contrasts self-transcendence values, which promote the interests of other people and the natural world, with self-enhancement values, which promote one’s own interests regardless of others’ (Schwartz and Bilsky, 1987; Schwartz, 1992, 1994, 1996, 1999; Schwartz and Sagiv, 1995; Schwartz and Huismans, 1995; Schwartz and Sagie, 2000).

II. METHODOLOGY

We examined the values of salespeople from different companies in Western Pennsylvania. The respondents selected for this study were salespeople of small, locally owned independent companies. Three major rationales support the selection of these subjects. First, this sample structure allowed for control of the possible effects of organizational culture, given that these companies are independently owned. Second, managers of large multinational companies risk being contaminated by the national culture of their headquarters (Selmer/De Leon 1996, Dunphy 1987, Hofstede et al. 1990), while our respondents were free from this influence. Third, since the data to be collected were of a highly personal nature, the sample frame had to be one that could easily be targeted for personal interviews (Reichel/Flynn 1983).

Table 1. Characteristics of Salespeople

Variable          Frequency   Percentage   Characteristic
Gender            108         52.7         Female
                  97          47.3         Male
Education         20          9.8          Some HS
                  45          22.0         HS Grad
                  56          27.4         2-yr College
                  65          31.8         College Gr.
                  18          8.8          Post Grad.
Age               3.4         4.3          <25
                  1.2         12.2         25-34
                  12.0        19.4         35-44
                  23.8        37.2         45-54
                  13.1        12.2         >55
Income            27          13.1         <30K
                  46          22.4         30-39K
                  59          28.7         40-49K
                  39          19.0         50-59K
                  34          16.6         >60K
Marital Status    65          31.7         Single
                  87          42.4         Married
                  51          24.8         Divorced
                  2           0.0          Widowed
Tenure with the   39          19.0         <1 Yr
company           58          28.2         1-2 Yrs
                  49          23.9         2-5 Yrs
                  44          21.4         5-10 Yrs
                  15          7.3          >10 Yrs

As with any personal data collection, there is a risk of interviewer response bias. To reduce this possibility, the interview was structured so that the interviewer's role was limited to: 1) asking screening questions, 2) delivering the questionnaire, 3) clarifying, as necessary, any misunderstandings about the values, and 4) collecting and returning the completed questionnaire. The questionnaire contained background questions about the characteristics of the salespeople and their values. As shown in Table 1 above, the sample consisted of 52.7 percent females and 47.3 percent males. The sample members were highly educated (40.6 percent had college or post-college degrees), about 37.2 percent were between 45 and 54 years of age, and about 35.6 percent earned incomes above $50,000. One of the most striking characteristics of the sample was that 52.1 percent of the salespeople had a tenure of between 1 and 5 years.

III. RESULTS

Schwartz’s Value Survey was administered to the salespeople, who were asked to indicate on a five-point Likert scale the importance of each value as a guiding principle in their lives. Table 2 shows the results for the fifty-seven statements concerning the importance salespeople attach to these values.

Table 2: Salespeople’s Given Importance to Different Values

The value data were analyzed using the factor analysis module in SPSS. The principal components method was applied for initial factor extraction with the criterion of eigenvalues greater than 1, together with the Varimax method of rotation. Sample size is one element that can affect the adequacy of factor models.
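The authors ran this procedure in SPSS. As a rough sketch of the same extraction rule, the code below applies the eigenvalue-greater-than-1 (Kaiser) criterion to a correlation matrix and then performs a standard Varimax rotation. Since the survey responses are not available, it uses random stand-in data; the sample and item counts mirror the study, but everything else is invented for illustration.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Varimax rotation of a factor-loading matrix (standard SVD-based algorithm)."""
    L = loadings.copy()
    p, k = L.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        rotated = L @ R
        # Gradient of the varimax criterion
        grad = rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p
        u, s, vt = np.linalg.svd(L.T @ grad)
        R = u @ vt
        var_new = s.sum()
        if var_new < var_old * (1 + tol):
            break
        var_old = var_new
    return L @ R

rng = np.random.default_rng(0)
# Stand-in for the 48 retained survey items from 205 respondents (random, not real data)
X = rng.normal(size=(205, 48))
corr = np.corrcoef(X, rowvar=False)

eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = int(np.sum(eigvals > 1.0))                      # Kaiser criterion: eigenvalue > 1
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])    # unrotated principal-component loadings
rotated = varimax(loadings)

explained = eigvals[:k].sum() / eigvals.sum() * 100
print(f"{k} factors retained, explaining {explained:.1f}% of variance")
```

With the study's actual responses this would reproduce the retained-factor count and the explained-variance figure reported above; on random data the numbers are of course meaningless.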

All the items were first factor analyzed, and rotated factor loadings were examined assuming different numbers of factors for extraction. After deleting nine values (respect for tradition, social recognition, wisdom, humble, healthy, preserving public image, responsible, clean, and self-indulgent; items 18, 23, 26, 36, 42, 46, 52, 56, and 57), all the salespeople's responses could be incorporated into the analysis. The results showed considerable improvement over the initial attempt, as meaningful patterns emerged. Table 3 depicts the sorted rotated factor loadings for the items based on a nine-factor extraction. The nine factors together explain 77.37% of the variance of the 48 retained items.

Table 3: Factors of the Salespeople’s Given Importance to Different Values

Nine different factors emerged relating to the values that guide salespeople’s lives:

Factor 1: Power-Competition-Achievement
Factor 2: Universalism
Factor 3: Benevolence
Factor 4: Conformity
Factor 5: Tradition
Factor 6: Hedonism
Factor 7: Self-Direction
Factor 8: Security
Factor 9: Stimulation

IV. CONCLUSION AND FUTURE RESEARCH

The purpose of this study was to investigate the personal values that guide salespeople's lives. The data confirm the widespread presence of the 10 value types of Schwartz’s Value Inventory. Future studies should combine information about both salespeople and their managers, with sales managers asked to evaluate the salespeople's performance. With data from both salespeople and their managers, the causal relationship between salespeople's values and their performance can be investigated. Future research could also focus on how organizational and individual factors, including individual salespeople's values, affect sales force commitment, job satisfaction, and turnover in sales organizations.

REFERENCES

Apasu, Yao, Ichikawa, Shigeru, and Graham, John L., “Corporate Culture and Sales Force Management in Japan and America”, The Journal of Personal Selling & Sales Management, Vol. 7, Iss. 3, Nov 1987, 51-63.
Gutman, J., “A Means-End Chain Model Based on Consumer Categorization Processes”, Journal of Marketing, 46, Spring 1982, 60-72.
Schwartz, Shalom H. and Sagie, Galit, “Value Consensus and Importance: A
Cross-National Study”, Journal of Cross-Cultural Psychology, Vol.31,
Iss.4, Jul 2000, 465-497.
Schwartz, Shalom H., “Cultural Value Differences: Some Implications for Work”,
Applied Psychology: An International Review, 48, 1999, 23-48.
Schwartz, Shalom H., “Value Priorities and Behavior: Applying a Theory of Integrated Value Systems”, in C. Seligman, J. M. Olson, and M. P. Zanna (Eds.), The Psychology of Values: The Ontario Symposium, No. 8, Hillsdale, NJ: Lawrence Erlbaum, 1996, 1-24.
Schwartz, Shalom H. and Huismans, Sipke, “Value Priorities and Religiosity in
Four Western Religions”, Social Psychology Quarterly, Vol.58, Iss.2,
June 1995, 88-107.
Schwartz, S.H. and Sagiv, L., “Identifying Culture-Specifics in the Content and
Structure of Values”, Journal of Cross-Cultural Psychology, 26, 1995,
92-116.
Schwartz, Shalom H., “Are There Universal Aspects in the Structure and Content
of Human Values?”, Journal of Social Issues, 56, 1994, 19-45.
Schwartz, Shalom H., “Universals in the Content and Structure of Values: Theoretical Advances and Empirical Tests in 20 Countries”, Advances in Experimental Social Psychology, 25, 1992, 1-65.
Schwartz, S.H., and Bilsky, W., “Towards a Psychological Structure of Human
Values”, Journal of Personality and Social Psychology, 53, 1987, 550-
562.

BEHAVIORAL AND ATTITUDINAL DIFFERENCES BETWEEN ONLINE
SHOPPERS VS NON-ONLINE SHOPPERS

Ugur Yucelt, Penn State-Harrisburg


uqy@psu.edu

ABSTRACT

The Internet is a growing market, and more research on shopping experiences is needed. This study investigates and compares the attitudinal and behavioral characteristics of online purchasers vs. non-purchasers. Demographic characteristics of online purchasers indicated that the majority of them are younger than 24, moderate or high risk takers, single, college educated, and earn an average income of $35,000. The majority of non-purchasers, on the other hand, are younger than 35, low risk takers, married or divorced, college educated, and earn an average income of $15,000.

I. INTRODUCTION

Consumers, as the decision makers for their own shopping needs and wants, are in a position to make the best choices in terms of product types and brand names. The Internet now offers a shopping environment emphasizing convenience, lower prices, quality, a large variety of product choices, brand names, and a global shopping network. Online shoppers enjoy a 24-hours-a-day, 7-days-a-week shopping environment and avoid unnecessary driving to shopping malls. Online non-shoppers still believe that online shopping is a risky decision and that it is inconvenient (Rowley, 2000; Warrington, Abgrab, and Caldwell, 2000; Bhatnagar, Misra and Rao, 2000).

Characteristics of Online Shoppers

Online shoppers are a heterogeneous group of consumers who do not like to go shopping away from home. They do not like to drive to malls or shopping centers and wait in cashier lines to complete their purchases. They compare prices on the Internet and therefore use it for their purchases. The Internet also helps them find a variety of merchants for their needs and wants. Online shoppers are younger, wealthier, and better educated; are convenience and variety seekers; are innovative and impulsive; and have better computer and Internet literacy (Warrington, Abgrab, and Caldwell, 2000; Swinyard and Smith, 2003).

Online Shopping Behaviors

As an environment for consumption, the Internet can provide opportunities for obtaining information and needed resources (Coupey, 2001). Consumers may find almost everything on the Internet if they spend enough time with it. Although the time-consuming nature of online shopping is an obstacle to its growth, faster search engines would contribute positively to shopping online. In this respect, Rowley (2000) identified five different types of Web visitors: directed information seekers, undirected information seekers, bargain hunters, entertainment seekers, and directed buyers. Each group of Web visitors is willing to spend a different amount of time on the Internet depending upon the purpose of the search.

Characteristics of Online Non-shoppers

Online non-shoppers are a group of consumers who enjoy shopping away from home. They do not mind driving far for shopping or spending time in cashier lines during busy periods. In addition, they would like to see, touch, and inspect their choices instead of looking at pictures on Web sites. They use their own experience or retail salespersons for information about pricing, product quality, and colors. Online non-shoppers are not interested in technology or computers and have the highest fears of credit card theft and monetary loss. They belong to the high-risk-averse group and do not trust online ordering (Swinyard and Smith, 2003).

II. PURPOSE OF THE STUDY

Research on online shopping behavior has become a popular topic, and e-retailers have become interested in determining who buys on the Internet, what they think about online shopping, what they purchase online, how often they shop online, what their experiences on the Internet are, where they use the Internet for their shopping, how much time they spend on the Internet, and how perceived risk-taking behaviors differ between online shoppers and non-shoppers. Therefore, this study investigates the attitudinal and behavioral characteristics of online vs. non-online shoppers and examines various research questions, such as the reasons behind online and non-online shopping, satisfaction levels and product categories of online purchases, Internet usage among online vs. non-online shoppers, and the demographic profiles of online vs. non-online shoppers.

III. METHODOLOGY

The data for this study were gathered in the Central Pennsylvania region and analyzed with SPSS using frequency statistics, stepwise regression, and Pearson correlation analysis. The stepwise regression technique was used for its power to identify the significant variables, namely the reasons for and factors in online and non-online shopping. Pearson correlation was selected for its simplicity and its strength in serving the purpose of this study. The Pearson correlation coefficient provides a measure of the degree of linear association between a dependent and an independent variable (Smith and Albaum, 2005). In this study, dependent (Y) and independent (X) variables were used to identify the reasons and factors affecting online and non-online shopping.
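The statistic described above can be sketched directly from its definition. The two small samples below are invented purely to illustrate the computation and have no connection to the study's data; the variable names are hypothetical.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical example: perceived-security rating (X) vs. number of online purchases (Y)
security = [1, 2, 2, 3, 4, 4, 5, 5]
purchases = [0, 1, 0, 2, 2, 3, 3, 4]
r = pearson_r(security, purchases)
print(f"r = {r:.3f}")
```

A positive r indicates that the two variables move together, which is how the coefficients in Table 2 below are read.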

IV. FINDINGS

Demographic characteristics of the online shoppers indicated that they are female (54.0%) or male (46.0%), 18-24 years old (51.3%) or 25-35 years old (27.3%), single (58.0%) or married (24.0%), without children (71.3%) or with one or more children (28.6%), earning less than $15,000 (30.7%), $15,000-$30,000 (29.3%), or more than $31,000 (38.6%), living in urban areas (17.3%), rural areas (29.3%), or suburbs (53.3%), having some college (56.0%), a college degree (24.0%), or a high school education (12.7%), and working as white-collar professionals (50.6%) or blue-collar workers (12.0%). In addition, the data showed that the majority of online shoppers own a PC (94.0%) and have access to the Internet (100.0%).

Demographic characteristics of online non-shoppers, on the other hand, indicated that they are female (69.0%) or male (31.0%), 18-24 (35.7%), 25-35 (21.4%), or 36-50 years old (23.8%), or 51 years or older (19.0%), single (33.3%), married (31.0%), or divorced (23.8%), without children (50.0%) or with one or more children (50.0%), earning less than $15,000 (31.0%), $15,000-$30,000 (40.5%), or more than $31,000 (28.6%), living in urban areas (23.8%), rural areas (33.3%), or suburbs (42.9%), having some college (47.6%), a college degree (11.9%), or a high school education (35.7%), and working as white-collar professionals (40.4%) or blue-collar workers (23.8%). In addition, the data showed that only 72.2% of online non-shoppers own a PC, while 88.1% have access to the Internet.

Behavioral characteristics of online shoppers showed that the most important factors influencing their decision to shop online were convenience (43.5%), price (23.9%), and relatives and friends (26.1%). The majority of them indicated that they were very satisfied (8.7%), satisfied (44.6%), or somewhat satisfied (10.9%) with their online experiences. They are either moderate (62.0%) or high (15.3%) risk takers, and the majority of their purchases were electronics (23.9%), apparel (18.5%), music/CDs (29.3%), toys (10.9%), books (22.8%), and travel (21.7%). They shop online for convenience (82.0%) and low prices (42.7%). However, they feel that the security issue is somewhat (42.0%) or definitely (44.0%) important and that more security on the Internet is either somewhat (41.3%) or definitely (36.0%) needed.

Behavioral characteristics of online non-shoppers, on the other hand, demonstrated that the most important factors influencing their decision are security (61.9%) and risk (52.4%), and they believe that more security (59.5%) is needed. They are either low (47.6%) or moderate (47.6%) risk takers and do not think that online shopping is safe (64.3%), convenient (61.9%), or cheaper (66.7%).

According to the stepwise regression analysis, the regression model has a significant (α = .01) and high F-value. This outcome reveals the power of the model and shows significant relationships (α = .01) between online shopping and the independent variables of frequency of shopping, satisfaction with online shopping, Internet usage, convenience, and low prices. In addition, the high R-square value shows that 72.8 percent of the variation in the dependent variable is explained by the independent variables. Table 1 shows online purchasing behaviors.

Table 1: Stepwise Regression Analysis of Online vs. Non-Online Purchasing Behaviors

                  B        t      Sig.    R-Square   Sig.    F-Value    Sig.

Constant        .203    1.982    .114       .728     .001    86.849    .000
How Often       .221    6.574    .000
Satisfaction    .109    5.955    .000
Internet        .337    4.444    .000
Convenience    -.190   -4.366    .000
Low prices     -.123   -3.504    .001
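The forward-selection logic behind a stepwise model like the one in Table 1 can be sketched in a few lines. The sketch below is illustrative only: the data are synthetic, the variable names are hypothetical, and production stepwise routines use F-to-enter tests rather than the simple R-square-gain criterion used here.

```python
import numpy as np

def forward_stepwise(X, y, names, r2_gain=0.01):
    """Greedy forward selection: repeatedly add the predictor that most improves R^2."""
    n = len(y)
    selected, remaining = [], list(range(X.shape[1]))
    best_r2 = 0.0
    total_ss = (y - y.mean()) @ (y - y.mean())
    while remaining:
        gains = []
        for j in remaining:
            A = np.column_stack([np.ones(n), X[:, selected + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            gains.append((1.0 - resid @ resid / total_ss, j))
        r2, j = max(gains)
        if r2 - best_r2 < r2_gain:   # stop when the best candidate adds too little
            break
        best_r2 = r2
        selected.append(j)
        remaining.remove(j)
    return [names[j] for j in selected], best_r2

# Synthetic data: two predictors actually drive y, one is pure noise.
rng = np.random.default_rng(0)
n = 200
freq = rng.normal(size=n)
internet = rng.normal(size=n)
noise_var = rng.normal(size=n)
y = 0.5 * freq + 0.8 * internet + rng.normal(scale=0.3, size=n)
X = np.column_stack([freq, internet, noise_var])

kept, r2 = forward_stepwise(X, y, ["frequency", "internet_use", "noise"])
print(kept, round(r2, 2))
```

The noise predictor is correctly excluded because its R-square gain falls below the entry threshold, which is the behavior the significant entries in Table 1 reflect.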

Pearson correlation analysis of online shoppers, on the other hand, showed evidence
of a significant (α = .05) correlation between online shopping behaviors and the security of
the Web site, level of risk, low prices of online products, and education level of online
shoppers. Respondents indicated that they buy toys, music and CDs, books, and travel. Table 2
shows the factors affecting online shopping.

549
Table 2: Online vs. Online Non-Shopping Behaviors
(Pearson correlations; p-values in parentheses)

Online shoppers                  Online non-shoppers

Security       .248 (.002)       Access         -.306 (.046)
Risk Taking    .189 (.021)       Convenience    -.540 (.003)
Education      .530 (.000)       Safety         -.812 (.000)
Toys           .232 (.004)       Risk Taking     .414 (.025)
CD             .185 (.023)       Satisfaction   -.415 (.025)
Books          .197 (.018)       Security/More  -.415 (.023)
Travel         .162 (.048)

According to the Pearson correlation coefficients shown in Table 2, the negative sign
of the risk factor indicates that the high risk of online shopping decreases interest in online
shopping. On the other hand, the positive Pearson correlation coefficients of the variables
security on the Internet, lower prices, and education level suggest that these factors have a
positive effect on the online shopping behavior of respondents.
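The entries in Table 2 are plain Pearson coefficients with two-sided significance tests. A minimal sketch of that computation follows; the data are synthetic and the variable names hypothetical, not the study's survey responses.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation of two 1-D samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

def significant(r, n, t_crit=1.96):
    """Rough two-sided test of r at alpha ~ .05 for reasonably large n."""
    t = r * np.sqrt((n - 2) / (1.0 - r * r))
    return abs(t) > t_crit

# Synthetic example: education mildly drives an online-shopping score.
rng = np.random.default_rng(1)
n = 150
education = rng.normal(size=n)
shopping = 0.5 * education + rng.normal(scale=1.0, size=n)

r = pearson_r(education, shopping)
print(round(r, 2), significant(r, n))
```

An exact p-value would use the t distribution with n - 2 degrees of freedom; the ±1.96 cutoff is the large-sample approximation for α = .05.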

V. CONCLUSION

The findings of this study are consistent with previous research findings: respondents
shared some of the same concerns regarding security and safety on the Internet, which still
need to be resolved. They stated that they would shop online for convenience and for prices
lower than offline. On the basis of the findings of this study, online Internet marketers must
focus on Internet security and safety. Consequently, they should try to introduce new
technologies to make the Internet market a safer environment for every shopper. In addition,
they should be competitive with offline brick-and-mortar retailers by offering lower prices
and a convenient and more secure shopping environment in terms of service, speed of
navigation, ordering process, and delivery time and cost.

Online marketers should be receptive to suggestions, improvements, and
technological advances, so that consumers and online marketers can share their experiences,
technical training, and concerns. The goal is to develop an alternative marketing
environment that is open to everyone, without boundaries.

550
REFERENCES

Bhatnagar, A., S. Misra and H. R. Rao. "On Risk, Convenience, and Internet Shopping
Behavior", Communications of the ACM, Vol. 43, No. 11, November 2000, 98-105.
Coupey, Eloise. Marketing and the Internet. New Jersey: Prentice-Hall, 2001.
Rowley, Jennifer. "Product Search in E-Shopping: A Review and Research
Propositions", Journal of Consumer Marketing, Vol. 17, No. 1, 2000, 20-35.
Smith, S. M. and G. S. Albaum. Fundamentals of Marketing Research. Sage, 2005.
Swinyard, W. and S. M. Smith. "Why People (Don't) Shop Online: A Lifestyle
Study of the Internet Consumer", Psychology & Marketing, Vol. 20, July 2003,
567-597.
Warrington, Traci B., Nadia J. Abgrab and Helen M. Caldwell. "Building Trust to
Develop Competitive Advantage in E-Business Relationships", Competitiveness
Review, 10 (2), 2000, 160-168.

551
AN EXPLORATORY MODEL FOR TURKISH HEALTH CARE CONSUMERS

Talha Harcar, Penn State-Beaver
tdh13@psu.edu

Karen C. Barr, Penn State-Beaver
kcb10@psu.edu

Tijen Harcar, Penn State-Beaver
tuh10@psu.edu

ABSTRACT

Individuals and societies have become increasingly aware of the importance of
preventive health care and are increasingly exposed to the marketing of health care services.
Very little research has been done in developing countries on preventive health care
behaviors, especially as they relate to the parties involved in providing the information as
well as those involved in receiving it. Our research purpose is to build an exploratory
model of Turkish consumers' attitudes toward preventive health care, wellness, and health
care marketing, and also to find the level of knowledge consumers have about health care.
Results demonstrate a significant positive relationship between wellness orientation and
both agreement with marketing efforts and preventive health care behavior. The relationship
linking preventive behavior and agreement with marketing efforts is not found to be
significant.

I. INTRODUCTION

The introduction and development of preventive health care programs, especially in
developed countries, show that patients are no longer passive beings. They are taking a more
active role in their demand for health care. They are more likely to question doctors, discuss
possible results of treatment, and shop around for doctors and health care institutions
(Sanchez 1984; Boscarino and Stelber 1982).

There are several challenges facing providers of preventative health care. One
challenge deals with getting the at-risk person to realize they are at risk. Generally speaking,
people tend to think they are immune to the consequences of health threats. They may ignore
warning labels and routinely indulge themselves in unhealthy activities thinking the resulting
negative outcomes only happen to “other” people. This sense of denial can be very
frustrating for preventative health care providers. A second challenge for providers is the
high rate of failure to comply, over the long term, with the strict programs these at-risk
patients must incorporate into their lives. Patients with disease processes such as
hypertension, ulcers, and diabetes often fail to adhere routinely to their daily regimen,
which, when not followed strictly, may provide less than the desired results (Jayanti, 1997).

Due to the widespread crisis of escalating health care costs, efforts to educate the
public about health-promoting lifestyles, or wellness, have been praised as a partial solution
to this growing concern (Elias and Murphy 1986; De Arellano 1990). One strategy many
organizations have taken to deal with rapidly escalating health insurance premiums has been
the development and implementation of health promotion or wellness programs designed to

552
improve employee health, in an attempt to decrease the costs associated with health-related
employee benefits (Busbin and Campbell 1990).

Traditionally, hospitals rarely provided wellness-related programs. However, they are
now beginning to realize the revenue potential this new and untapped market segment could
provide. This has been a major motivation for hospitals to offer health promotion programs
(Longe and Ardell 1981). Based on the above trends, it is in the best interest of businesses
wishing to promote wellness-related activities to be able to identify the individuals who
would have the greatest likelihood of participating. Because the identification of these
individuals represents a market opportunity, a market-oriented approach is suggested (Kraft
and Goodell, 1993).

Our research purpose is to build an exploratory model of the Turkish consumers'
attitudes toward preventive health care, wellness and health care marketing, and also to find
the level of knowledge consumers have about health care.

II. METHODOLOGY

Wellness orientation measures individual characteristics, behavioral tendencies,
cognitions, and affective orientations that will ultimately influence a person's state of
well-being. To measure it, a scale adapted from Jayanti (1997) and Kraft and Goodell (1993)
was used. Respondents indicated their level of agreement with six questions on a five-point
Likert-type scale (1 = strongly disagree). A Cronbach's alpha of 0.79 was obtained (see
Table 2 for construct and item content).

Health knowledge measures the extent to which health care customers (in most cases,
patients) understand health problems and the ways to solve them. This construct is measured
by knowledge scales developed by Jayanti (1997) and Goud (1988). Respondents indicated
their level of knowledge with six questions concerning familiarity with health care issues
and provided answers on a five-point Likert-type scale (1 = strongly disagree). A Cronbach's
alpha reliability of 0.80 was found for the scale. To measure preventive health care
behavior, a scale from the preventive health care behavior literature was used that covered
three subgroups of preventive behavior: diet, life style, and preventive medicine.
Respondents indicated their agreement about each preventive behavior's role in keeping their
good health on a five-point scale (1 = strongly disagree). This scale yielded an alpha of
0.71. Agreement with marketing efforts in terms of advertising was captured using a
three-item scale from the health care marketing literature (Kraft and Goodell, 1993; Goud,
1988). Respondents indicated their agreement with each statement on a five-point scale
(1 = strongly disagree, 5 = strongly agree). This scale had a coefficient
alpha of 0.76.
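Each of the reliability figures above is a Cronbach's alpha. For reference, alpha can be computed directly from an item-response matrix; the sketch below uses simulated Likert-style data driven by a single latent trait, not the study's survey responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (respondents x items) matrix of scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

# Simulate six correlated 1-5 Likert items driven by one latent trait.
rng = np.random.default_rng(2)
latent = rng.normal(size=300)
items = np.clip(
    np.round(3 + latent[:, None] + rng.normal(scale=0.8, size=(300, 6))), 1, 5)

print(round(cronbach_alpha(items), 2))
```

Items sharing a common latent driver, as in the scales above, produce an alpha well over the 0.70 convention; independent items would drive it toward zero.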

The data used in this study were derived from a survey of 565 randomly selected
individuals above 21 years old in several Istanbul metropolitan areas. The data collection
instrument was a self-administered questionnaire consisting of closed-ended questions
carefully prepared with the assistance of faculty from the health management department at
Istanbul University. The questionnaire consisted of two sections, with questions on the
sample's demographic characteristics and on attitudes, behaviors, and knowledge about
health care issues. As part of a larger health-related consumer survey, data were gathered by

553
research assistants of the marketing department of Istanbul University. The questionnaires
were verified by calling back some of the respondents and by debriefing the interviewers.

III. DATA ANALYSIS

Confirmatory factor analysis was used to test the measurement of the seven latent
constructs (see Figure 1). After seven statements (statements 3, 5, 8, 9, 15, 16, 18) were
deleted, all the remaining responses could be incorporated into the analysis. The results
showed considerable improvement over the previous attempt, as some meaningful patterns
emerged. The final model contains 19 observed variables and seven underlying constructs.
Health knowledge, wellness orientation, and agreement with marketing efforts are first-order
factors. Preventive health care behavior is a second-order factor, composed of the three
dimensions of diet, life style, and preventive medicine. The internal consistency of each
construct in the model was assessed using measures of composite reliability, Cronbach's
alpha, and variance extracted. Table 1 presents these figures for the measurement model. All
constructs exceed the 0.7 recommendation for composite reliability and Cronbach's
coefficient alpha, providing evidence of internal consistency. Evidence of convergent
validity is provided when the coefficient of each indicator (variables with a V label in
Figure 1) on its construct is significant. The parameter estimate for every indicator in the
tested measurement model is significant. Table 1 also contains the overall results of the
revised measurement model. The chi-square value for the model is 162.58, based on 137
degrees of freedom, with a probability of 0.078, indicating a good fit. The chi-square value
is an absolute measure of fit and can be sensitive to sample size and the number of
indicators in the model. The number of indicators and constructs in this model is high, so
fit indices other than the chi-square should be examined. Incremental fit measures compare
the fit of a tested model to a null baseline model. EQS provides two incremental fit
measures. The first, the Bentler-Bonett normed fit index (NFI), is 0.979 for the model.
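The NFI reported here, and the NNFI and CFI reported below, are all simple functions of the tested model's chi-square and the chi-square of a null (independence) baseline. The helper below shows the standard formulas. Note that the null-model values passed in are hypothetical, chosen only for illustration, since the paper does not report them.

```python
def fit_indices(chi2_m, df_m, chi2_0, df_0):
    """NFI, NNFI and CFI from the tested model's and null model's chi-squares."""
    nfi = (chi2_0 - chi2_m) / chi2_0
    nnfi = (chi2_0 / df_0 - chi2_m / df_m) / (chi2_0 / df_0 - 1.0)
    d_m = max(chi2_m - df_m, 0.0)   # noncentrality of the tested model
    d_0 = max(chi2_0 - df_0, 0.0)   # noncentrality of the null model
    cfi = 1.0 - d_m / max(d_m, d_0)
    return nfi, nnfi, cfi

# Model chi-square/df from the paper; null-model figures (7700 on 171 df) are
# hypothetical placeholders for illustration only.
nfi, nnfi, cfi = fit_indices(162.58, 137, 7700.0, 171)
print(round(nfi, 3), round(nnfi, 3), round(cfi, 3))
```

All three indices approach 1.0 as the tested model's chi-square shrinks relative to the null model's, which is why values near 0.98-0.99, as reported here, indicate good fit.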

Table 1: Measurement Model Results


Internal Consistency
Composite Variance
Construct Reliability Alpha Extracted
Wellness Orientation 0.74 0.79 0.45
Health Knowledge 0.75 0.80 0.54
Diet 0.77 0.80 0.51
Life Style 0.74 0.78 0.53
Preventive Medicine 0.76 0.71 0.52
Agreement with Marketing
Efforts 0.71 0.76 0.45

Parameter Estimates

Parameter
Construct Path Labels Estimate t-Value
Wellness Orientation V20 0.557 6.312
V21 0.536 6.682
V22 0.740 9.106
V23 0.688 8.889
Health Knowledge V14 0.572 8.100
V17 0.633 8.701
V19 0.800 12.196

554
Diet V1 0.479 7.847
V2 0.602 10.042
V4 0.464 9.406
Life Style V6 0.559 5.088
V7 0.738 9.919
V10 0.789 9.986
Preventive Medicine V11 0.812 8.979
V12 0.673 7.979
V13 0.782 9.604
Agreement with Marketing
Efforts V24 0.651 7.209
V25 0.726 8.517
V26 0.649 7.232

Overall Fit Chi Square df P


162.58 137.000 0.078
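The composite reliability and variance extracted columns in Table 1 follow the standard formulas over standardized loadings: CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE = Σλ² / k. As an illustrative check (not the authors' computation), applying them to the four wellness-orientation loadings reported above gives values in the neighborhood of the tabled 0.74 and 0.45; exact agreement is not expected from rounded published loadings.

```python
def composite_reliability(loadings):
    """(sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    error = sum(1.0 - l * l for l in loadings)
    return s * s / (s * s + error)

def variance_extracted(loadings):
    """Average of the squared standardized loadings."""
    return sum(l * l for l in loadings) / len(loadings)

# Wellness-orientation indicator loadings, as reported in Table 1.
wellness = [0.557, 0.536, 0.740, 0.688]
print(round(composite_reliability(wellness), 2),
      round(variance_extracted(wellness), 2))
```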

Figure 1: Exploratory Model of Consumers' Attitudes toward Health Care Services

[Path diagram not reproducible in plain text. The model links seven latent constructs to
their observed indicators: Factor 1, Importance Attributed to Preventive Health Care (a
second-order factor); Factor 2, Importance Attributed to Wellness Orientation (V20-V23);
Factor 3, Agreement with Marketing Efforts (V24-V26); Factor 4, Health Knowledge (V14, V17,
V19); Factor 5, Diet (V1, V2, V4); Factor 6, Life Style (V6, V7, V10); and Factor 7,
Preventive Medicine (V11, V12, V13).]

The second, the Bentler-Bonett nonnormed fit index (NNFI), takes into account the degrees
of freedom in a model. This index is 0.986. Parsimonious fit measures are more precise than
absolute fit and incremental fit indices because they evaluate the fit of a model in relation
to its degrees of freedom and sample size. The parsimonious fit index calculated in EQS is
the comparative fit index (CFI). The comparative fit index for the proposed measurement
model is 0.988. The various fit indices support the revised measurement model.

Estimation of the structural parameters is the second step in the structural equation
modeling technique (Anderson and Gerbing 1988). The overall fit of the model is good, with a
chi-square of 12.58, 8 degrees of freedom, p = 0.134, NFI = 0.989, NNFI = 0.991, CFI =
0.997. Preventive behavior is a second-order factor in the structural model, and the three
first-order factors (diet, life style and preventive medicine) loading onto it were all
significant. The loadings are included in Figure 1.

555

IV. CONCLUSIONS

Results show a significant positive relationship between wellness orientation and
agreement with marketing efforts (F2→F3 = 0.143; p < 0.05). This suggests that the more a
person is wellness oriented, the more likely he or she is to perceive marketing efforts for
health care as normal. The significant link (F1→F2 = 0.339; p < 0.01) between wellness
orientation and preventive behavior shows a positive relationship between the importance
given to preventive behavior and wellness orientation. The relationship between preventive
behavior and agreement with marketing efforts is not significant, according to the link
between F1 and F3. This finding suggests that even though a person may place importance on
preventive health care, they do not necessarily favor health care marketing efforts. A likely
explanation for this finding is that individuals giving importance to preventive health care
behavior believe that doctors and other health care providers should not be advertising
their services under any circumstances. Another finding surprisingly suggests a
non-significant link between preventive health care behaviors and health knowledge,
according to the link between F1 and F4. A possible reason for this could be that healthy
behaviors are performed because of personal preference, not because individuals know these
behaviors are "good" for them. For example, a person may not have knowledge of the benefits
of exercise, but may exercise routinely simply because they enjoy it and have been
physically active since they were younger.

REFERENCES

Anderson, James C. and David W. Gerbing. "Structural Equation Modeling in Practice:
A Review and Recommended Two-Step Approach", Psychological Bulletin, 103 (3),
1988, 411-423.
Boscarino, Joseph and Steven R. Stelber. "Hospital Shopping and Consumer Choice",
Journal of Health Care Marketing, 2 (Spring), 1982, 15-23.
De Arellano, Annette B. Ramirez. "Specialists in Educating the Public", Geriatrics,
45 (2), February 1990, 71-72.
Elias, Walter S. and Robert J. Murphy. "The Case for Health Promotion Programs
Containing Health Costs: A Review of the Literature", The American Journal of
Occupational Therapy, 40 (November), 1986, 759-763.
Jayanti, Rama K. "Preventive Maintenance", Marketing Health Services, 17 (1),
Spring 1997, 36-45.
Kraft, Frederic B. and Phillips Goodell. "Identifying the Health Conscious
Consumer", Journal of Health Care Marketing, 13 (3), Fall 1993, 18.
Sanchez, Peter M. "Health Care Marketing at the Crossroads", Journal of Health Care
Marketing, 4 (2), Spring 1984, 37-43.

556
BENEFIT SEGMENTATION BY FACTOR ANALYSIS: AN EMPIRICAL STUDY
TARGETING THE SHAMPOO MARKET IN TURKEY

Talha Harcar, Penn State-Beaver
tdh13@psu.edu

Selim Zaim, Fatih University
szaim@fatih.edu.tr

ABSTRACT

Market segmentation is one of the most neglected tools of marketing in most
industries. Despite the substantial amount of investment made in research and development
and technology by many industrial companies, little investment so far has been made in
marketing strategy development. This article demonstrates the application of benefit
segmentation to the soap, cleaning compound, and toilet preparation manufacturing industry,
using shampoo as a case example. Study results indicate that there are three factors
underlying benefit segmentation: manageability, maintenance, and cleanliness.

I. INTRODUCTION

In recent years, the micromarketing (relationship marketing, person-to-person
marketing) approach has been used substantially. According to this approach, there are four
market levels: segments (a group of customers who share a similar set of needs and wants),
niches (a more narrowly defined group of customers seeking a distinctive mix of benefits),
local areas (target marketing that leads to marketing programs tailored to the needs and
wants of local customer groups), and individual customers (the ultimate level of
segmentation, which leads to one-to-one marketing, also simply called relationship
marketing) (Kotler 2003, pp. 279-282).

Market segmentation is based on a simple idea; not everyone wants the same things.
This idea was first introduced by Wendell Smith in 1956 (Smith, 1956). The crucial
advantage offered by market segmentation is that it provides a structured way of presenting
the marketplace facing the company (Wilkie, 1994). In today’s competitive markets,
segmentation has become an extremely important strategy for all companies.

Market segmentation is essential to marketing strategy since different consumer
clusters lead to a need for different marketing mixes (Doyle, 1987). The technique of
segmenting a market reveals profit opportunities and strategic windows (Abell, 1978) for new
competitors to challenge established market leaders. As a market develops, new segments
open up and older ones tend to decline. For example, the deepening of the product range has
seen the introduction of a greater variety of services for several market segments:
first-time home buyers, insurance services, and personal loans for the young; equity release
and high-interest savings products for the elderly, etc. (Speed and Smith, 1992).

The concept of segmentation in marketing recognizes that consumers differ not only
in the price they will pay, but also in a wide range of benefits they expect from the product or
service and its method of delivery (Doyle, 1987). In this regard, buyers are divided according
to the different benefits they seek from the product. Behavioral segmentation is important

557
since segmentation only has value if it is related to consumer-product relationships
(Arnould, Price and Zinkhan, 2004). Because different people seek different benefits from
the same product or service, it is possible for marketers to use benefit segmentation
(Russel, 1995). Marketing managers constantly face the challenge of identifying the single
most important benefit of their product or service that will be most significant to the
consumer. Marketers try to discover what sorts of benefits consumers desire and to determine
which of these benefits lead to the greatest amount of customer approval. Benefits can take
a variety of forms. Some consumers derive benefits from the functions a product performs,
while others derive social benefits, for instance from anonymous disposition behaviors such
as charitable giving to social service agencies (Arnould, Price and Zinkhan, 2004).

Market segmentation studies are also used to guide the repositioning of a product or
the addition of a new market segment. Nintendo, for example, has been very successful in
capturing a large share of the children's market for its electronic games, but now seeks to
attract adult users (Schiffman and Kanuk 1999).

Geographic, demographic, and psychological segmentations are useful for locating and
describing target segments. However, they suffer from the underlying disadvantage that all
are based on an ex post facto analysis of the kinds of people who make up specific segments
of a market. These methods cannot reveal what causes the segments to develop, nor does
buying behavior determine membership of a segment (Minhas and Jacobs, 1996). Most of the
time, marketing managers first identify the segments and then look at the segment members'
behavior, instead of first identifying a certain kind of behavior and afterward discovering
what kind of consumers fall into the segment. Obviously, the way marketing managers go
about the task will determine the characteristics of the segments identified.

The disadvantages of geographic, demographic, and psychological segmentation bases
can be overcome by using benefit segmentation, a form of behavioral segmentation. Its
proponents argue that the benefits that people seek constitute the basic reason for
purchase, and therefore form the proper basis for market segmentation (Assael, 1995;
Haley, 1968).

The major power of benefit segmentation is that the benefits sought have a causal
relationship to future behavior. However, complexities can occur in deciding the exact
benefits to be highlighted and in making certain that customers' stated motives are their
real motives. Failure to understand the benefits which consumers may be seeking can prevent
market success (Young et al., 1978). Keeping these limitations in mind, this research has
focused on the task of applying benefit segmentation to the Turkish shampoo market.

II. METHODOLOGY

The development of the shampoo features was achieved by interviewing a focus
group. There were fifteen participants and the interviews were conducted by two moderators.
The group was made up of a cross-representational component of the community, including
homemakers, students, professionals, retirees, and clerical workers.

The questioning included areas such as shampoo use, frequency of use, buying
criteria, brand loyalty, and the buying decision process for shampoo. Major results of the
research also included the in-depth reasoning behind the features of shampoos.

558
The focus group interview revealed that there were seventeen features associated with
shampoo buying. They were: price, brand name, fragrance, vitamins, naturalness, prevents
eye burn, prevents dandruff, softens hair, provides brightness, avoids hair loss, easy to
foam, easy to rinse, packaging, ergonomics, provides volume, avoids stickiness, and
appropriate for hair. The focus group results were incorporated into the design of the
questionnaire.

III. FINDINGS

Using a structured questionnaire, 240 customers were asked to rate the importance of
the shampoo features identified and to compare the performance of many shampoos with
their "ideal shampoo." In this way it was possible to see which quality characteristics are
more important for meeting or exceeding customers' expectations.

The importance ratings were given by customers on a scale of 1 to 5, where 5 denotes
most important and 1 denotes relatively low importance. The means and standard deviations
of the attributes are depicted in Table 1.

Table 1. Means and standard deviations of the attributes

Variables                 Mean    Standard Deviation

Price of shampoo          3.05          1.23
Brand of shampoo          3.70          1.20
Fragrance of shampoo      3.90          1.06
Vitamins                  4.25          0.97
Naturalness               4.05          1.02
Prevents eye burn         3.09          1.36
Prevents dandruff         4.25          1.05
Softens hair              4.33          0.92
Provides brightness       4.35          0.91
Avoids hair loss          4.63          0.81
Easy to foam              3.85          0.99
Easy to rinse             3.96          1.02
Packaging                 2.71          1.25
Ergonomics                3.09          1.33
Provides volume           4.25          0.99
Avoids stickiness         4.55          0.78
Appropriate for hair      4.50          0.64

Considering the mean values, "Avoids Hair Loss" has the highest importance, with
"Avoids Stickiness" and "Appropriate for Hair" ranked next in priority. According to
Table 1, "Packaging" had the lowest priority. To determine whether or not all attributes
are important, an exploratory factor analysis is needed.

Exploratory Factor Analysis


Exploratory factor analysis with varimax rotation was performed on the importance of
the attributes in order to extract the dimensions underlying the construct. The factor analysis
of the 17 attributes yielded 3 factors explaining 62.8% of the total variance.

559
After deleting 5 attributes (Price, Brand Name, Prevents Eye Burn, Packaging, and
Ergonomics), all the remaining responses could be incorporated into the analysis. The
results showed considerable improvement over the previous attempt, as some meaningful
patterns emerged. Three factors were identified: a "Manageability" factor (Factor 1), a
"Maintenance" factor (Factor 2), and a "Cleanliness" factor (Factor 3). The twelve remaining
items are shown in Table 2. The total figure of 68.37% represents the
percentage of variance of all 12 items explained by the three factors.

Table 2: Factor Loadings

Attributes               Factor 1   Factor 2   Factor 3

Provides brightness        0.800
Provides volume            0.758
Softens hair               0.657
Fragrance of shampoo       0.572
Avoids stickiness          0.567
Prevents dandruff          0.548
Naturalness                           0.851
Vitamins                              0.747
Appropriate for hair                  0.703
Avoids hair loss                      0.632
Easy to foam                                     0.817
Easy to rinse                                    0.803

Factor analysis, a multivariate technique, links the six attributes in Factor 1, the
four attributes in Factor 2, and the two attributes in Factor 3 in such a way that only the
unique contribution of each of the twelve attributes is considered for each factor. Thus,
using factor analysis avoids potential problems of multicollinearity. The Cronbach's alpha
measures of reliability for the three factors were 0.80 for Factor 1, 0.79 for Factor 2, and
0.74 for Factor 3. All three values are above the traditionally acceptable value of 0.70 in
research (Raju, 1995).

IV. CONCLUSION

Wills (1985) argues that a major condition for successful segmentation is that the
segmentation criteria must be appropriate to the purchase criteria of customers. It is
specifically the grouping of consumers by the benefits they look for that makes benefit
segmentation such a helpful and practical marketing technique. Seventeen customer demands
were obtained through the focus group study. These customer demands were grouped under
three factors, labeled the manageability factor, the maintenance factor, and the cleanliness
factor. These three factors include only twelve of the seventeen attributes. Five attributes
were eliminated because of their low relationship with the corresponding factors.

By examining their strengths, companies can pinpoint those benefit markets they are
most likely to appeal to. By noting and, if so desired, overcoming their weaknesses, they
can develop benefits to appeal to previously unreachable markets. By operating in this
manner, companies should be able to market more effectively and efficiently to one or more
groups of customers than is possible using more traditional methods of market segmentation
(Minhas and Jacobs, 1996).

560
REFERENCES

Arnould, Eric, Linda Price and George Zinkhan. Consumers, 2nd edition, McGraw-Hill
Irwin, Boston, 2004.
Clarke, Darral G. "Johnson Wax", Harvard Business School Review, August 2, 1999.
Doyle, P. "Managing the Marketing Mix", in Baker, M. (Ed.), The Marketing Book,
Heinemann, London, Ch. 12, 1987.
Haley, R. I. "Benefit Segmentation: A Decision-Oriented Research Tool", Journal of
Marketing, Vol. 32, No. 3, July 1968, pp. 30-35.
Haley, Russel. "Benefit Segmentation: A Decision-Oriented Research Tool",
Marketing Management, No. 1, Summer 1995, pp. 59-63.
Kotler, Philip. Marketing Management, Eleventh Edition, Prentice Hall, Englewood
Cliffs, NJ, 2003.
Minhas, Raj Singh and Everett M. Jacobs. "Benefit Segmentation by Factor Analysis:
An Improved Method of Targeting Customers for Financial Services", The
International Journal of Bank Marketing, Vol. 14, No. 3, 1996.
Schiffman, Leon G. and Leslie Lazar Kanuk. Consumer Behavior, Eighth Edition,
Prentice Hall, New Jersey, 2000.
Smith, W. R. "Product Differentiation and Market Segmentation as Alternative
Marketing Strategies", Journal of Marketing, Vol. 20, No. 3, 1956.
Wilkie, W. Consumer Behavior, 3rd edition, John Wiley & Sons, New York, NY, 1994.
Wills, G. "Dividing and Conquering: Strategies for Segmentation", International
Journal of Bank Marketing, 3 (4), 1985, pp. 36-46.
Young, S., L. Ott and B. Feigin. "Some Practical Considerations in Market
Segmentation", Journal of Marketing Research, Vol. 15, No. 3, August 1978,
pp. 405-412.

561
CHAPTER 20

ORGANIZATIONAL BEHAVIOR
AND
ORGANIZATIONAL THEORY

562
IMPACT OF PERSONALITY FACTORS ON PERCEIVED IMPORTANCE OF
CAREER ATTRIBUTES

Keith L. Whittingham, Rollins College
kwhittingham@rollins.edu

ABSTRACT

Beyond the vocational types that Holland’s theory ascribes to individuals and job
functions, there are values or attributes of a career that can impact an individual’s choice of
one career over another. In this study, preferences for certain attributes load together onto
factors that represent different measures of career attribute preference. For a sample of 458
individuals, these factors are shown to have mild, but significant, regression relationships to
personality factors as measured using the RightPath6 assessment. R2 values are in the .05 to
.15 range, but for each career attribute factor, significant regression models exist with either
the personality factors or subfactors. For some individuals, these relationships to personality
could have a significant impact on career choice.

I. INTRODUCTION

While studies on the relationship between personality and vocational choice have
been numerous in recent years (e.g. Nordvik, 1996; Pietrzak and Page, 2001; and Bozionelos,
2004), fewer studies (e.g. Halaby, 2003; Johnson, 2001; and Karl and Sutton, 1998) have
investigated career values (also called job values or career attributes), and fewer still have
looked at the impact of personality on these value or attribute measures (Nordvik, 1996). The
choice of one job over another will typically involve a number of factors. These include
classifications of the specific type of work to be done, an inventory of the skills required for
the job and those possessed by the applicant, and an assessment of the benefits that may
accrue to the applicant as a result of engaging in a specific type of work. These often
intangible benefits are of critical importance, as they will likely contribute to the overall
level of fulfillment experienced by the employee, and are related directly to job satisfaction
(Locke, 1976). In this study we seek to confirm the existence of significant relationships
between personality and choice of career attributes using different scales for personality and
career attributes than those used by Nordvik, thus broadening construct validity.

IV. METHODOLOGY

The sample under study in this investigation consisted of 458 individuals, including a
mix of working professionals and college students, both male and female. For each individual
in the study, two instruments were administered, the RightPath6 profile to assess personality
traits, and a simple Work Values Inventory. The assessment instruments were administered as
part of a series of career counseling seminars, delivered in corporate settings, or through the
Career Management offices on college campuses. The RightPath6 profile is a forced-response
version of the Career Direct Personality Inventory (CDPI), developed for career counseling
(Toth, Stokes, Garnett, Ellis & Noble, 1995). In this instrument, usually delivered online,
personality is measured on six scales, and further defined by sixteen subscales. The scales of
personality are Dominance, Extroversion, Compassion, Conscientiousness, Adventurousness
and Innovation. These, along with the subscales are assessed by analysis of a subject’s
preferential selection of self-descriptive adjectives from presented lists (RightPath
Resources, 2002). The Work Values Inventory consisted of a list of 8 career attributes. Each
subject was required to assign a rank order to the items, indicating the item’s perceived
importance to the subject in his/her choice of an ideal career. The 8 career attributes are listed
in Table I along with the mean and standard deviation of the rankings for each item. Due to
the forced response nature of this assessment, the scales are ipsative, i.e. a high score on one
item is only obtainable by awarding lower scores to other items. As such, the items tend to be
negatively correlated to each other, particularly for relatively small numbers of items – in the
extreme case of only two items the correlation would be -1.00. Nordvik (1996) presents a
clear discussion of this issue in relation to the scales of Schein’s career anchors.
TABLE I. LIST OF ATTRIBUTES IN THE WORK VALUES INVENTORY

Item #  Name                                    Rank Order Mean   Rank Order Std. Dev.
1 Career progression 3.286 2.018
2 Help others grow and develop 4.688 2.305
3 High achievement 2.910 1.718
4 High income 4.177 2.032
5 High leadership position 5.312 1.949
6 Intellectual development 3.954 2.018
7 Prominence (well known in my field) 6.544 1.777
8 Security and benefits 5.129 2.199
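The forced trade-off in ipsative rankings described above can be demonstrated with simulated data (hypothetical rankings, not the study's sample):

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_items = 500, 8

# Each simulated subject rank-orders the 8 items (1 = most important),
# mimicking a forced rank-order (ipsative) instrument.
ranks = np.array([rng.permutation(n_items) + 1 for _ in range(n_subjects)])

# Every row sums to the same constant (36), so a high rank on one item
# forces lower ranks elsewhere.
assert np.all(ranks.sum(axis=1) == n_items * (n_items + 1) // 2)

# The constant row sum drives the average inter-item correlation negative,
# toward -1/(k-1) for k items.
corr = np.corrcoef(ranks, rowvar=False)
off_diag = corr[~np.eye(n_items, dtype=bool)]
print(round(off_diag.mean(), 3))  # near -1/7 ≈ -0.143
```

Because the variance of the (constant) row total is zero, the item covariances must sum to minus the item variances, which is why the items cannot all correlate positively.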

Correlation analysis of the career attribute (CA) scores is confounded by their ipsative
nature as described above. Therefore, to assess any relationship among the items’ rank
ordered scores, principal component factor analysis was carried out on the 8 item scores.
Varimax rotation resulted in 4 factors, with eigenvalues above 1.00. These are shown in Table
II. Standardized regression scores were saved for these factors, and became the response
variables in the subsequent analysis of the relationships to personality. Based on the loaded
items, the factors were assigned intuitive descriptors as shown in Table II. These are
interpreted as follows:
CA factor 1: Success (pursuing leadership and achievement) versus Security (pursuing job
security and benefits). CA factor 2: Reward (pursuing high income) versus Service (seeking
to help others develop). CA factor 3: Intellectual Development (pursuing same). CA factor 4:
Self-oriented (pursuing own status on individual terms) versus Organization-oriented
(pursuing success as defined by organization).
TABLE II. ROTATED FACTOR MATRIX FOR CAREER ATTRIBUTES

Items                                 CA factor 1  CA factor 2  CA factor 3  CA factor 4  Factor Descriptor
Security and benefits                    -0.850        0.067       -0.211        0.037
High leadership position                  0.687        0.345       -0.231       -0.058    Success vs Security
High achievement                          0.680       -0.179       -0.129        0.265
Help others grow and develop             -0.024       -0.932       -0.119        0.009    Reward vs Service
High income                              -0.142        0.606       -0.585        0.217
Intellectual development                 -0.105        0.019        0.812        0.031    Intellectual Development
Career progression                       -0.076        0.016        0.046       -0.919    Self- vs Org- Orientation
Prominence (well known in my field)       0.038        0.187        0.488        0.509
Eigenvalues                               1.762        1.596        1.332        1.027
Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization.
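The extraction-plus-rotation procedure can be sketched with plain numpy. The varimax routine below is a standard implementation of the rotation criterion; the data are random stand-ins for the 458 x 8 score matrix, which is not reproduced here:

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a (variables x factors) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # SVD step of the standard varimax iteration
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0)))
        )
        rotation = u @ vt
        d_new = s.sum()
        if d_old != 0 and d_new / d_old < 1 + tol:
            break
        d_old = d_new
    return loadings @ rotation

# Random stand-in for the 458 x 8 matrix of standardized item scores.
rng = np.random.default_rng(1)
X = rng.standard_normal((458, 8))

# Principal components of the correlation matrix.
eigval, eigvec = np.linalg.eigh(np.corrcoef(X, rowvar=False))
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

keep = eigval > 1.0                                # Kaiser criterion
loadings = eigvec[:, keep] * np.sqrt(eigval[keep])
rotated = varimax(loadings)
print(rotated.shape)  # (8, number of factors with eigenvalue > 1)
```

Because varimax is an orthogonal rotation, each variable's communality (its row sum of squared loadings) is unchanged; only the distribution of loading across factors is simplified.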

The personality trait scales of the RightPath6 profiles were the intended potential
explanatory variables in our regression. To reduce effects of correlation between these
personality scales, they too were factor analyzed, resulting in 3 rotated factors with
eigenvalues above 1.00 as shown in Table III. Once again, standardized regression scores
were saved for use as the explanatory variables in the subsequent analysis. The descriptors,
shown in Table III, were intuitively developed based on the loaded factors and would apply
to a subject with high scores on the respective factors. Cronbach’s alphas for the first two
factors only (the third had only one loaded trait) were 0.828 and 0.713 respectively, showing
acceptable internal consistency.
TABLE III. ROTATED FACTOR MATRIX FOR PERSONALITY TRAITS

RightPath6 Scales     PT factor 1  PT factor 2  PT factor 3  Factor Descriptor
Compassion               -0.908       -0.103       -0.154
Adventurousness           0.853       -0.143        0.048    Driven-Competitive
Dominance                 0.817        0.188       -0.222
Conscientiousness        -0.064        0.911       -0.168    Inward-Structured
Extroversion             -0.182       -0.830       -0.331
Innovation               -0.013        0.049        0.983    Innovative
Eigenvalues               2.331        1.557        1.137
Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization.
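Cronbach's alpha, used above to check the internal consistency of the two multi-trait factors, can be computed directly from a score matrix (the data below are synthetic and purely illustrative):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Three hypothetical items sharing a common signal plus independent noise.
rng = np.random.default_rng(2)
signal = rng.standard_normal(1000)
items = np.column_stack([signal + 0.5 * rng.standard_normal(1000) for _ in range(3)])
print(round(cronbach_alpha(items), 2))  # roughly 0.92 for this noise level
```

Alpha rises as items share more common variance; values around 0.7 or above are conventionally treated as acceptable internal consistency, which is the standard the text applies to the 0.828 and 0.713 figures.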

V. EMPIRICAL MODELS AND ESTIMATED RESULTS

Four unique regression models were created, one for each of the factors of the career
attributes, using the standardized factor scores as the response variable in each case. For each
of the models, the factor scores for the three factors of personality traits were used as the
explanatory variables. The general form of the model is given by:
CAfactor_n = α + (β1 × PTfactor1) + (β2 × PTfactor2) + (β3 × PTfactor3),
where “CA” stands for “career attribute”, and “PT” stands for “personality trait”, and n=1, 2,
3, or 4, for each of the four career attribute factors.
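As a sketch of these regressions (synthetic data; the coefficients loosely echo the CA factor 1 results reported below, and the PT factor scores are generated here as independent standard normals):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 458                                      # sample size as in the study

# Standardized PT factor scores (orthogonal by construction here).
PT = rng.standard_normal((n, 3))
beta_true = np.array([0.34, 0.0, 0.13])      # illustrative effect sizes
CA = PT @ beta_true + rng.standard_normal(n)

# OLS fit of CAfactor = alpha + b1*PT1 + b2*PT2 + b3*PT3
X = np.column_stack([np.ones(n), PT])
coef, *_ = np.linalg.lstsq(X, CA, rcond=None)

fitted = X @ coef
r2 = 1.0 - ((CA - fitted) ** 2).sum() / ((CA - CA.mean()) ** 2).sum()
print(np.round(coef, 3), round(r2, 3))
```

With standardized response and orthogonal standardized predictors, the fitted constant is essentially zero and the R-squared is approximately the sum of the squared slope coefficients, which matches the pattern reported in the paper.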
Results of the regression analysis are shown in Table IV. Significant regression
models were found to exist for CA factors 1, 2 and 3, with ANOVA F-values of 21.785,
9.392 and 8.100 respectively, all with p-values significant below the 0.01 level. The
corresponding R2 values for these models were low, as expected, with 12.6%, 5.8% and 5.1%
respectively, of the variance in the CA factor attributable to the model. In all cases the
constant term was statistically 0.000, due to the fact that all the variables, response and
explanatory, are standardized. All the PT factor coefficients were significant below the 0.01
level.
TABLE IV. OLS REGRESSION COEFFICIENTS

                       Response Variable in the Model
Model Coefficients   CA factor 1   CA factor 2   CA factor 3   CA factor 4
Constant                0.000         0.000         0.000
PT factor 1             0.341*        0.184*
                       (7.765)       (4.040)
PT factor 2
PT factor 3                           0.130*        0.216*
                                     (2.845)       (4.734)
Model Parameters
ANOVA F value          21.758*       9.392*        8.100*        1.154†
R2                      0.126         0.058         0.051
Coefficients with p > 0.05 not shown; t-values in parentheses. * Coefficient significant below 0.01 level. † ANOVA F-value not significant for this model.

CA factor 4 failed to yield a significant model with these factors as explanatory
variables. To further investigate possible personality relationships with this CA factor, a fifth
regression model (not listed in Table IV) was created with the six individual personality traits
as explanatory variables. This resulted in a significant model (ANOVA F-value = 4.492) with
three of the personality traits (Adventurousness, Compassion and Conscientiousness) yielding
coefficients that were negative and significant below the 0.05 level (the first and last of these
three were significant below the 0.01 level). The coefficient for a fourth trait, Extroversion,
was also negative, but was just barely insignificant at the 0.05 level (p=0.066).

VI. CONCLUSION

The list of 8 career attributes in the Work Values Inventory is somewhat smaller than
those used in some other reported studies. Johnson (2001) used an inventory of 14 items that
loaded onto 4 job value scales, Extrinsic, Intrinsic, Altruistic and Social. Marini (1996) used
the same 14 items, plus another 8 that loaded onto scales of Influence, Leisure and Security
and 1 additional subscale item that loaded onto the Intrinsic value scale. The 8 items in the
current study are descriptively related to 8 of the 14 subscale items common to the above
studies (Johnson, 2001; Marini, 1996), spanning all four of the scales in Marini’s work
(1996). Karl and Sutton (1998) and Halaby (2003) use shorter inventories of 10 and 9 items
respectively, of which 4 and 6 items are descriptively related to items in the Work Values
Inventory used here.
Nordvik (1996) used Schein’s career anchor scales which consist of 9 items that
loaded onto four factors. Nordvik interprets these factors as being related to concern for: 1)
stimulation versus comfort, 2) skill development, 3) self-direction versus belongingness and
4) lifestyle versus helping others (Nordvik, 1996, pp. 268-269). These are conceptually very
similar to the interpretations of the 4 factor scales developed in the current study: 1) success
versus security (like Nordvik’s factor 1), 2) reward versus service (like Nordvik’s factor 4),
3) intellectual development (like Nordvik’s factor 2) and 4) self-orientation vs. organization-
orientation. The similarity of these two sets of career value (or attribute) scales adds
additional construct validity to the inventory used in this study. CA factor 1 (Success vs.
Security) was found to be positively related to PT factor 1 (Driven-Competitive). According
to the RightPath6 profile developers (RightPath Resources, 2002), the subscales aligned with
this PT factor (based on the loaded trait scales) would include “ambitious” and “daring”,
among others. These characteristics would appear to be associated with desire for success, in
results and positional advancement, as well as a risk tolerance that would not require high job
security.
CA factor 2 (Reward vs. Service) was found to be positively related to both PT factors
1 and 3 (Driven-Competitive and Innovative). A person with this combination of personality
traits would score low on the subscales of “sympathy”, “support” and “tolerance”, primarily
due to the negative loading of the Compassion scale onto the PT factor 1. In conjunction with
high scores on the subscales of “ambition” and “independence”, these would support the
pursuit of an individual’s own gain, without the desire, need or perhaps the innate ability to
identify with the plight of others and/or render assistance. CA factor 3 (Intellectual
Development) was found to be related to PT factor 3 (Innovative). This trait is made up of the
“imaginative” and “resourceful” subscales. These characteristics would suggest one who
pursues new knowledge and creative ways of applying it. The established relationships
among CA factors 1, 2 and 3 and the respective personality traits, thus are consistent with the
conceptual understanding of the personality traits themselves. CA factor 4 is intuitively
challenging, with positive loading of the desire for individual prominence (status and
reputation) and negative loading for desire for career progression. This counter-intuitive
inverse relationship seems resolvable by the given interpretation, which makes the distinction
between a self-oriented career trajectory, versus one that aligns with the organization’s
structures and norms.
The secondary regression analysis of CA factor 4, while not as robust for prediction
purposes due to the potential correlation issues among the explanatory variables, also has
some conceptual consistency. A high score on this factor would be associated with low scores
for adventurousness, conscientiousness and compassion, with low extroversion scores also
being related, but with less significance. The low extroversion scores would suggest an
inward focus that is consistent with the CA factor 4. The low compassion and
conscientiousness scores indicate “detachment” and “rejection of structure” respectively,
according to the RightPath6 developers (RightPath Resources, 2002). These could be
interpreted to support a “make it on my own” approach to career success. Note that the
negative coefficient for conscientiousness corresponds with the results of Nordvik (1996),
where a negative relationship was established between the Judgment trait (from MBTI) and
the “self-direction versus belongingness” career anchor factor. Judgment from the MBTI and
Conscientiousness on the Big Five inventory have been shown to be related (McCrae and
Costa, 1990). Further direct comparison to Nordvik’s coefficients is not possible because,
in the primary models in this study, loaded PT factors are used which have no direct
counterpart in the Nordvik study.
The fact that the primary model for CA factor 4, with the loaded PT factors as
explanatory variables, was not significant may be explained by the signs of the significant
coefficients in the secondary model. In the secondary model, Adventurousness and
Compassion both had negative coefficients, whereas in PT factor 1 these traits load with
opposite sign. The same holds for the Conscientiousness-Extroversion pair; they have the
same sign in the secondary model, but load with opposite sign onto PT factor 2. These
offsetting effects on the PT factors likely rendered them insignificant in the primary model.

REFERENCES

Bozionelos, Nikos. “The Relationship between Disposition and Career Success: A British
Study.” Journal of Occupational and Organizational Psychology, 77, 2004, 403-420.
Halaby, Charles N. “Where Job Values Come From: Family and Schooling Background,
Cognitive Ability, and Gender.” American Sociological Review, 68(2), 2003, 251-278.
Johnson, Monica K. “Job Values in the Young Adult Transition: Change and Stability.”
Social Psychology Quarterly, 64(4), 2001, 297-317.
Karl, Katherine A., and Sutton, Cynthia L. “Job Values in Today’s Workforce: A
Comparison of Public and Private Sector Employees.” Public Personnel
Management, 27(4), 1998, 515-527.
Locke, Edwin A. “The Nature and Causes of Job Satisfaction.” In Dunnette, Marvin D., ed.,
Handbook of Industrial/Organizational Psychology. Chicago, IL: Rand McNally,
1976, 901-969.
Marini, Margaret M., Fan, Pi-Ling, Finley, Erica, and Beutel, Ann M. “Gender and Job
Values.” Sociology of Education, 69(1), 1996, 49-65.

THE DETERMINANTS OF OWNERSHIP
IN SPANISH FRANCHISED CHAINS

Rosa Mª Mariz-Pérez, University of A Coruna, Spain, rmariz@udc.es
Rafael Mª García Rodríguez, University of A Coruna, Spain, rgarcia@udc.es
Mª Teresa García-Álvarez, University of A Coruna, Spain, mtgarcia@udc.es

ABSTRACT

In this paper, we analyze the evolution pattern of ownership in Spanish franchised
chains and we study some of the key factors or characteristics that can determine the
proportion of franchised units. With this double aim, data are drawn for the period 1997-2003
from published annual guidebooks. First, we have represented how the percentage of
franchised units varies with time for the 316 chains included in the sample. In order to detect
existing differences, we have also divided chains into two basic groups –service and product
chains-. Second, taking into account data for 2003, we try to identify the key variables that
significantly determine the propensity to franchise.

I. INTRODUCTION

The existing literature has employed different theoretical perspectives to justify the
existence of franchising. Specifically, agency and resource-based theories have been applied
to explain why, in some cases, the franchisor chooses to invest directly in a new outlet of the
chain and, in others, he decides to franchise it. In this sense, many empirical studies have
established that franchised units are more efficient than franchisor-owned outlets. This
may be due to the fact that franchising enables increased chain growth, while it reduces
monitoring costs, especially when units are located in a disperse manner. Geographic
dispersion increases, in this sense, difficulties and costs associated with control of managers
of franchisor-owned stores (Brickley, Dark & Weisback, 1991; Jensen & Meckling, 1976;
Shane, 1996, 1998). More specifically, franchising increases unit performance through the
allocation of ownership and control rights to the same person, the franchisee, and this reduces
agency hazards. Another stream of the franchising literature, far from establishing the
superiority of one of these alternative forms of governance, has highlighted that the presence
of both type of units in the same chain gives rise to relevant synergetic effects (Bradach &
Eccles, 1989; Lafontaine &amp; Kaufmann, 1994; Pénard et al., 2002; Yin &amp; Zajac, 2004).
Specifically, franchisor-owned stores are most useful for maintaining and developing brand
name quality and homogeneity, exploiting certain economies of scale. On the other hand,
franchisees are best in supplying the chain with new ideas and adaptations to local markets.
Therefore, the so-called “plural form” or “dual form” is an efficient solution to mitigate
asymmetrical information, limited rationality and incomplete contractual hazards. However,
many studies have found that as chains reach maturity, they open fewer franchised units and,
therefore, choose to grow, to a greater extent, through franchisor-owned establishments
(Oxenfeldt & Kelly, 1968-1969).

II. METHODOLOGY

Because no ready-to-use database on franchising in Spain exists,
data employed has been collected, in collaboration with a working group of the University of
Oviedo (Spain), from the annual franchise guidebooks published for the period 1997-2003.
First, to represent the evolution pattern of ownership for Spanish indigenous chains, the
proportion of franchised units was calculated as the quotient between number of chain
franchised outlets in Spain and total number of chain outlets in Spain. However, from more
than 1300 existing Spanish franchised chains only about 500 fulfilled the necessary condition
of having started to franchise in 1997 or before and continue to do so in 2003. Moreover, it
was only possible to collect sufficient data for 316 of these. Second, for ordinary least
squares (OLS) regression, we introduced other key variables. This analysis was conducted for
data corresponding to year 2003. After plotting pattern for outlet ownership evolution, we
conducted an OLS regression in SPSS. The dependent variable –proportion of franchised
outlets- is modeled as the natural log of the ratio of the percent franchised by the percent
company-owned. This transformation has been used in many other empirical studies (see, for
example, Shane, 1998 or Michael, 1996) as a more robust measure of the distribution for both
OLS and Tobit regressions. The independent variables employed are:
• SECTOR. It is equal to one when the chain is basically dedicated to the distribution of
products and equal to zero when it mainly provides services.
• AGE. This variable represents the number of years since the franchisor opened the first
unit. It is quite evident that, in general, the longer this period is, the greater the proportion of
franchised units will be. This is not only due to the simple fact that time just goes by. AGE
has been used as a proxy for franchisor experience, brand name value or reputation and for
franchisor accumulated resources (González & López, 2003; Lafontaine, 1992). Given that
the number of franchisees willing to join the chain increases as chain perceived value does, a
positive relation over the proportion of franchised units is expected.
• YNOTF. It represents the number of years the chain initially remained without franchising
any outlets at all. We expect YNOTF to have a negative influence over the proportion of
franchised units because this period of time can reflect, in a certain manner, franchisor
difficulties to adequately design and develop the complete franchise package (González-Díaz
& López, 2003). Moreover, during that period the franchisor has installed a totally centralized
organizational form, and it can be difficult for him to let in quasi-independent
businesspeople who will make their own decisions.
• INTERN. This independent variable will be equal to one when the chain has some sort of
international presence and equal to zero when it has outlets only in the domestic Spanish
market. If the franchisor has chosen to expand activities overseas, we should find a higher
proportion of franchised units because, in most cases, he will not have the sufficient local
market knowledge to undertake unit opening by himself. Local franchisees will have much
better and complete information about demand conditions, governmental procedures, etc.
• SIZE. To measure the size of the chain we use the total number of outlets. Chain size has
been used as a proxy for geographical dispersion and, in this sense, for monitoring difficulties
(Agrawal & Lal, 1995; Brickley, Dark & Weisbach 1991). So, geographical distance
affecting the new outlet would make monitoring difficult. However, if the decision is to
franchise the new outlet, monitoring needs are reduced. From another point of view, chain
size has also been used as a proxy for brand name value. In this sense, greater chain size will
increase the number of potential consumers attracted and served (Lafontaine, 1992).
Therefore, chain size is likely to favor franchising.
• ININVEST. The initial investment is the amount, in euros, the franchisee must invest in his
outlet. However, we have not taken into account here the initial lump sum entry fee paid to
the franchisor; this amount is included in FIXED PAYM. Therefore, ININVEST reflects the
amount the franchisee must pay to adequately lay out and decorate his premises.
• TOTAL VAR. To calculate the value of this variable, we have added the percentages of
royalties and advertising fees for each chain. Royalty rates contribute to the alignment of both
parties’ interests because both franchisee and franchisor will be interested in increasing sales.
High royalty payments would serve as a powerful incentive to franchisors to control or
monitor activities in order to increase brand name value but would reduce franchisee
motivation to be efficient.
• DUR. Longer contract duration contributes, from a transaction costs view, to reducing the
advantages of hierarchy compared to those of the market. Transaction costs associated with this
last option will be reduced and, as an intermediate case, the same will occur in the case of
franchising. Besides, longer contractual duration also reduces agency costs for various
reasons (Shane, 1998). Therefore, we expect a positive relation over the dependent variable.
• SURFACE. It is the minimum surface, in square meters, fixed by the franchisor in order to
permit the opening of a new unit of the chain. Therefore, to some extent it can reflect effort
required from the franchisee and, in this manner, reduce the number of potential franchisees
willing to join the chain. The negative relation between SURFACE and the dependent
variable can also be due to the fact that the franchisor is usually the owner of the larger
outlets located in big cities, while the franchisee is left with the smaller more disperse units.
• POPUL. We already made reference to the high probability of large units to be owned by
the franchisor. Smaller units located in little villages or towns –where only one outlet of the
chain usually exists- are more commonly franchised. Therefore, the minimum population
fixed by the franchisor to open a new store in a given location should have a negative
influence over the dependent variable.
• FIXED PAYM. It is the sum of the necessary initial lump sum entry fee and the present
value of fixed periodical payments.
We expect to find a negative influence over the proportion of franchised units. However,
initial entry fee compensates the franchisor for selection and initial training costs while the
remaining periodical payments are justified as being remuneration for brand value and on-
going support (Lafontaine, 1992). It is for this reason that we may find a positive relation
between FIXED PAYM and the proportion of franchised units, given that the latter may be
willing to make larger payments in exchange for greater support and intangible resource
transfer from the franchisor.
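The dependent-variable transformation described above, the natural log of the ratio of percent franchised to percent company-owned, can be sketched as follows (the function name is illustrative, not from the paper):

```python
import math

def franchise_logratio(pct_franchised):
    """ln(% franchised / % company-owned), the transformed dependent variable."""
    if not 0.0 < pct_franchised < 1.0:
        raise ValueError("proportion must be strictly between 0 and 1")
    return math.log(pct_franchised / (1.0 - pct_franchised))

# A chain with 80% of outlets franchised (the service-chain average reported):
print(round(franchise_logratio(0.80), 3))  # ln(4) ≈ 1.386
```

This log-odds form maps the bounded proportion onto the whole real line, which is why it is preferred over the raw percentage as a response variable for OLS and Tobit regressions.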

IV. CONCLUSION

Because we expect to detect different ownership evolution patterns with respect to the
type of activity, chains were divided into two groups, namely, service and product chains.
Figure II shows that service chains, on average, present a higher proportion of franchised
outlets compared to product distribution chains. This is in line with results obtained by
Pénard et al. (2002), Lafontaine &amp; Shaw (2001) and López et al. (2000), even though the
percentage of units franchised is slightly higher in our case – nearly 80% for service chains
and 70% for product chains.

[Line chart: percentage of franchised units (y-axis, 0 to 1) against years franchising (x-axis, 0 to 8 or more), with separate series for service chains and product chains.]
Figure II: The evolution of the proportion of franchised units for Spanish service and product
distribution chains (1997-2003).

Next, Table I displays OLS regression results. The dependent variable measures the
proportion of franchised units of chains. Within the independent variables, SECTOR, AGE,
YNOTF, INTERN, SIZE, ININVEST and POPUL are found to have a significant influence
over the proportion of franchised outlets. All of the former present the expected signs.
Therefore, we can say that service chains choose to franchise a higher proportion of outlets.
This seems to confirm that when necessary local activities are of more relevance and more
labor-intensive, the given incentive system makes franchising the best choice. Second,
chains that have been in business for a longer time - AGE - tend to present larger proportions
of franchised establishments. Naturally, the more time that has passed since the franchisor
opened the first outlet of the firm, the greater the value of the dependent variable. Moreover, this
effect can also be due to increased franchisor experience and brand name value as chain age
is longer; this would surely have a positive influence over the number of franchisees willing
to join the chain. Variable YNOTF has a significant negative influence over the proportion of
franchised units. Therefore, as the number of initial years during which all outlets are
franchisor-owned and no franchised units are opened increases, franchisors seem to be
reluctant to let franchisees in. They get used to a centralized organizational form where all
decisions are made by central offices and this situation reduces future franchising activity.
Another significant independent variable is the presence of chain outlets in foreign markets.
INTERN has a positive influence over the dependent variable. This means that when the
chain has outlets abroad, it chooses to grow, more intensively, through franchised units.
Geographical dispersion and reduced franchisor local market knowledge reduce the
proportion of franchisor-owned units. SIZE has a significant positive relation over the
percentage of franchised outlets. Therefore, larger chains exhibit a higher proportion of the
latter, probably because these chains are subject to increased geographical dispersion. The
quantity of initial investment –ININVEST- reduces the proportion of franchised outlets of
chains. Franchisee risk aversion and the impossibility for these to diversify investment
adequately seem to reduce the number of potential franchisees willing to join the chain. We
do not, therefore, find any empirical evidence to prove that franchising exists due to resource
restraints of franchisors. It is the franchisor who directly invests in the opening of new outlets
when the necessary investment is higher.

Independent variables        Standardized Coefficients (Beta)
(R-squared = 0.39)
(Constant)                    2.338
SECTOR                       -0.214 (***)
AGE                           0.059 (*)
YNOTF                        -0.109 (**)
INTERN                        0.106 (**)
SIZE                          0.336 (***)
ININVEST                     -0.062 (*)
TOTAL VAR.                   -0.042
DUR                           0.071
SURFACE                      -0.038
POPUL                        -0.092 (*)
FIXED PAYM                    0.020
Table I: Regression results (n=316).
(*) Significant at 0.1. (**) Significant at 0.05. (***) Significant at 0.01.

The last significant independent variable to explain the proportion of franchised units is
POPUL. It has a negative relation over the dependent variable so we can say that the
franchisor tends to be the direct owner of units located in the larger cities, while outlets
situated in smaller towns are chosen to be franchised.
The remaining variables included in the analysis –TOTAL VAR., DUR, SURFACE and
FIXED PAYM- do not help to explain, in a statistically significant manner, variations in the
proportion of franchised units. However, the signs displayed by the first three of these are as
expected. On the contrary, FIXED PAYM presents a positive relation; this seems to indicate
that larger periodical fixed payments do not reduce franchisee interest to join the chain.
This may be because they are willing to pay more in exchange for greater intangible
resource transfer and support from the franchisor.

REFERENCES

Agrawal, D. &amp; Lal, R. “Contractual Arrangements in Franchising: An Empirical
Investigation.” Journal of Marketing Research, 32, (May), 1995, 213-221.
Bradach, J. &amp; Eccles, R. “Price, Authority and Trust: From Ideal Form to Plural Forms.”
Annual Review of Sociology, 15, 1989, 97-118.
Brickley, J., Dark, F. &amp; Weisbach, M. “An Agency Perspective on Franchising.” Financial
Management, 20, (spring), 1991, 27-35.

IN A GLOBAL ECONOMY, EFFECTIVELY MANAGED DIVERSITY
CAN BE A SOURCE OF COMPETITIVE ADVANTAGE

Kayong L. Holston, Ottawa University, kayong.holston@ottawa.edu

ABSTRACT

This paper briefly reviews the previous research findings to identify the advantages of
effectively managing diversity. Managing diversity means taking advantage of diversity’s
assets while minimizing the potential barriers, such as prejudices and biases. Diversity is
something that is not going away. Workplace diversity is urgently needed, as today’s
competitive environment demands the deployment of a wide range of skills to meet the needs
of the marketplace. Effective 21st century diversity efforts require leadership that seeks to
empower individuals through diversity training. Successfully managing diversity is a
challenging process, but with a clear vision, careful planning, strong leadership, and a
willingness and commitment to change, organizations can develop a competitive advantage
as an employer and a producer of services to the people.

I. INTRODUCTION

“Diversity Management” has become the new business buzzword for the 21st century.
It has been known to enhance workforce and customer satisfaction, to improve
communication among members of the workforce, and to improve organizational
performance (Cox & Dreachslin, 1996). Everywhere you turn, research papers, case studies,
and statistical data appear touting the advantages of successfully managing a diverse
workforce. The 1998 summer issue of Public Personnel Management presented a diversity
symposium that included theories, case studies, and examples of diversity management that
support the vision that if managed well, diversity can help improve organizational
effectiveness. Broadly defined, the term diversity management refers to the systematic and
planned commitment of organizations to recruit, retain, reward, and promote a heterogeneous
mix of employees. Soni (2000) defines managing diversity as developing organizational
structures and processes that effectively utilize diversity, and creating an equitable and fair
work environment for employees of all racial/ethnic and gender groups (p. 396). She adds
that workforce diversity “refers to differences among people based on gender, race/ethnicity, age,
religion, physical or mental disability, sexual orientation, and socioeconomic class.”

II. IMPORTANCE OF MANAGING DIVERSITY EFFECTIVELY

Businesses today have come to realize the many benefits of a diverse approach, which
is facilitated by a diverse workforce. A recent survey by the Business Higher Education
Forum (BHEF), a collaboration between the American Council on Education (ACE) and the
National Alliance of Business (NAB), found that a majority of Americans believe that
workplace and educational diversity is important and even vital to the success of our future
economy. Managing diversity has become an asset for some companies and a liability for
others. The most successful companies will be those that recognize the power of diversity in
their workforces and in their product mix, and effectively create products and services that
appeal to their increasingly diverse customer bases. These companies know that diversity will
become even more important in the coming years, and that the leading companies will be
those best reflecting the increasingly diverse marketplaces they serve (New York Times, 2005).

Diversity is not simple, not easy to grasp, and not easy to manage. Managing diversity
is important because the differences that exist in the workforce can actually be a hindrance to
productivity if not managed properly. The successful management of diversity can be the
single most important issue executives must address as globalization becomes the order of the
day. Diversity and its implications for effective management have become increasingly
important over the decades (Duchatelet, 2001), and global trends indicate that managing
diversity has become a business imperative (Cox & Beale, 1997). Their opinion is that
managing diversity “consists of taking proactive steps to create and sustain an organizational
climate in which the potential for diversity related dynamics to hinder performance is
minimized and the potential for diversity to enhance performance is maximized”.
Organizations willing to make this type of environment a reality will profit from the many
benefits of diversity. Indeed, companies have gradually come to understand how diversity in
the workplace affects the management system and, thereby the performance of groups and of
the organization. The literature suggests that organizations that engage in proactive diversity
management strategies are more likely to experience positive organizational outcomes than
those that shun or ignore diversity. Svehla (1994) describes diversity management as a
strategically driven process whose emphasis is on building skills, creating policies that bring
out the best in every employee, and assessing marketing issues as a result of changing
workforce and customer demographics. The objective of managing a diverse workforce
should be to create an environment in which members with any possible diversity profile and
from any background are both able and willing to contribute their full potential towards
achieving their common vision. Organizational reality, however, indicates that diverse
workforces lack a common base; even words may have different meanings and interpretations
(Rijamampianina, 1999). R. Roosevelt Thomas, president of the American Institute for
Managing Diversity Inc., proposed the term “managing diversity,” which presents a way to
maximize the contributions of each member of the workforce: “The goal of managing
diversity is to access the talents of 100 percent of the people in an organization.”

III. PERSPECTIVES ON THE DIVERSITY DIMENSION

A diversity perspective provides the cognitive frames within which group members
interpret and act upon their experience of cultural identity differences in the group. The
diversity of management groups should be studied in light of relevant contextual factors
(Chatman, Polzer, Barsade, & Neale, 1998). Firm strategy (Richard, 2000) and strategy
process variables are particularly relevant to the study of management diversity, since
strategy formulation and implementation involve individuals at all levels and across all
functional areas of management (Burgelman, 1983). People’s perceptions of how their
cultural identity group memberships influenced their ability to work effectively and exert
influence in their work groups have changed. As one white woman attorney explained,
“Diversity means differences in terms of how you see the issues, who you can work with,
how effective you are, how much you understand what’s going on…there’s not a sense of
‘you’re just like me.’” Groups in organizations around the world are experiencing changes in
the cultural composition of their membership, and the trend is toward even more change as
countries continue to undergo changes in the cultural composition of their general
populations (Erez & Somech, 1996; Hambrick, Canney-Davison, Snell, & Snow, 1998;
Johnston, 1991; Wentling & Palma-Rivas, 2000).

The concept of cultural recomposition refers to an event in which individuals from diverse
cultures are added to or replace members of an existing group. Cultural recomposition may
occur in homogeneous groups, where existing members share the same culture with one
another but not with incoming members, or it may occur in heterogeneous groups, where
incoming members may share the same culture as some existing members but, for the most
part, incoming and existing members are culturally distinct.

Diverse work groups are competitive when they are characterized by a high degree of trust,
risk taking, and psychological safety, conditions that create greater opportunities for
competency-enhancing cross-cultural learning (Argyris & Schon, 1978; Edmondson, 1999).
Research suggests that a work group’s success often hinges on members’ ability to engage
differences in knowledge bases and perspectives (Bailyn, 1993; Jehn, Northcraft, & Neale,
1999) and to embrace, experience, and manage, rather than avoid, the disagreements that
arise (Gruenfeld et al., 1996; Jehn, 1997). The trend toward teams in organizations is also
increasing (Milliken & Martins, 1996), and employees are compelled to work together in a
variety of ways. When the workplace is diverse, the different talents, skills, interests, needs,
and backgrounds, as well as differences in power and opportunity, can be harnessed to
benefit all. However, this very diversity can also hamper productivity and teamwork by
manifesting as the lack of a common way of thinking and acting. To maximize effectiveness,
managers and team leaders must support the group in co-creating, developing, and agreeing
on a vision that transcends individual differences. When people work together toward a
shared vision, they hold themselves responsible and accountable, both as individual members
of the group and as the group as a whole.

According to the teaching division of the American Psychological Association, the goals of
diversity content are to heighten sensitivity and awareness, broaden understanding of human
conditions, increase tolerance, enhance psychological mindedness, expose students to
personal perspectives, and increase students’ political action (Simon et al., 1992: 92). In an
article documenting the team learning process and performance of four diverse groups in a
high-tech manufacturing company, Brooks (1994) also found that unless status differentials
between group members were consciously managed, at best lower-status employees were left
out of the learning process and, at worst, the teams were dysfunctional. Morrison and
Milliken (2000) hypothesize that demographic differences between top management and
lower-level employees decrease management’s ability to hear and respond to criticism from
subordinates, as well as subordinates’ willingness to voice such views. They suggest, in fact,
that “the negative effects of silence on organizational decision making and change process
will be intensified as the level of diversity increases.”

In the integration-and-learning perspective, cultural diversity is a potentially valuable
resource that the organization can use, not only at its margins, to gain entry into previously
inaccessible niche markets, but at its core, to rethink and reconfigure its primary tasks as
well. Ely and Thomas (2001) found that the common element among high-performing
diverse groups was the integration of that diversity. In a qualitative study examining whether
different perspectives on diversity affect organizational performance, employee satisfaction,
and identification with social groups, they identified three underlying views of diversity:
integration and learning, access and legitimacy, and discrimination and fairness. Each of
these views governed how members of work groups created and responded to diversity.
Treating each employee’s diversity as a resource on which all members can draw in learning
from each other reflects a central premise of the integration-and-learning perspective: while
there may be certain activities at certain times that are best performed by particular people
because of their cultural identities, the competitive advantage of a multicultural workforce
lies in the capacity of its members to learn from each other and to develop in each other a
range of cultural competencies that they can all then bring to bear on their work. Diversity
initiatives, however, focus primarily on dealing with these “Others,” who are
“strange, different, maladapted, unusual, requiring additional work, additional effort,
additional qualifications, and additional fit” (Cavanaugh, 1997).

IV. LEADERSHIP ROLE

The roles and responsibilities of a leader are always changing, but one thing remains
the same: behind every success there is a leader who is willing to embrace and conquer the
challenge. Leadership and diversity are inevitably connected when the target organization is
demographically and culturally diverse. The meaning of effective leadership in a changing
world is a question plaguing people in all kinds of organizations, in all sectors and areas of
the world. Fisher and Ellis (1990) said that effective leaders exhibit flexible communicative
behaviors and interpersonal relationships according to the situation and the nature of the
people they work with: the followers.
Increasing diversity among managers and employees is one of the most critical adaptive
challenges organizations are facing today. The relationship between the manager and
employee is crucial to the success of an organization. Diversity awareness and promotion is a
multifaceted phenomenon that could be positively affected by various leadership perspectives
and organizational strategies (Richard, 2000). Rather than positioning diversity solely within
the social and moral realm, 21st century diversity leaders must reflect bottom line issues of
profitability and enhanced productivity (Owens, 1997). As workplace diversity increases,
conflict among different cultures (rather than blacks and whites) will become more complex
and broadened. Therefore, diversity initiatives must be designed to reflect the contemporary
reality of multiple cultures interacting simultaneously in organizations. American
management literature, both popular (e.g., Thomas, 1991; Morrison, 1992) and scholarly
(e.g., Jackson et al., 1992; Cox, 1993) is rife with advice that managers should increase
workforce diversity to enhance work group effectiveness. The functional multicultural
workplace provides wide possibilities for finding new ideas and practices and cultural
learning. Workforce diversity management has become one of the most pressing business
issues that managers must address. The growth in the U.S. labor force now and for the
foreseeable future will be largely composed of women, minorities, and immigrants. They will
constitute about 85 percent of the new entrants in the workforce, according to the landmark
Hudson Institute study, Workforce 2000. Research has documented perceptions of ill
treatment in the workplace due to race, ethnicity, gender, and age differences (Cox & Nkomo,
1991; Ohlott, Ruderman & McCauley, 1994; Talley-Ross, 1995; Grossman, 2000), as well as
concrete advancement ceilings for white women and women of color (Catalyst, 1999).
Grossman (2000) suggests that in spite of organizational efforts to manage diversity,
very little has changed in the experiences of culture, ethnicity, race, and gender groups. Often
employees who strive to effectively change and adapt work behaviors find that they must do
so in spite of peer reactions and organizational structures (Cox & Beale, 1997). This context
suggests that leadership in diversity management must realize that individual determination
and perseverance in working to promote a positive and proactive diversity posture are salient
in the change process. Organizational leaders implement diversity initiatives in efforts to
motivate and encourage each individual to work effectively with others so that organizational
goals are achieved (Davidson, 1999). The literature suggests that diversity can have an
important impact on organizational performance and perceptions of organizational
effectiveness (Wright, Ferris, Hiller & Kroll, 1995; Gilbert & Ivancevich, 2000; Richard,
2000). Organizational leaders wishing to enhance employees’ abilities to embrace and
actively promote inclusionary practices must do more than simply describe behavioral
expectations for employees with respect to diversity.

V. CONCLUSION

Cox (1993) says that to develop and implement successful diversity management
programs, it is important to systematically identify and document key considerations that
must be taken into account by organizations attempting to enhance their diversity
management efforts. Organizations that have already recognized the value of a diverse
workforce and made a sincere effort to maximize its contributions have learned that
changing hiring policies will not, in and of itself, ensure success. Diversity is not going
away. It must be dealt with wisely, carefully, and continuously. In a country
seeking competitive advantage in a global economy, the goal of managing diversity is to
develop our capacity to accept, incorporate and empower the diverse human talents of the
most diverse nation on earth (R. Roosevelt Thomas). This marketplace, whether in construction,
manufacturing, or the service sector, has become increasingly demanding as the metaphorical
performance bar is moved gradually, yet inexorably, upward. This continuous improvement
philosophy is enhanced by managing diversity in such a way as to derive the best
performance from a workforce that, with each passing year, is becoming less homogeneous
and more geographically dispersed. Diversity, if effectively managed, can be a source of
competitive advantage for the group or organization. Successfully managing diversity is a
challenging process, but with a clear vision, careful planning, strong leadership, and a
willingness and commitment to change, an organization can develop a competitive advantage
as an employer and a producer of services to the people. With increasing business
globalization and the many different cultures in the world, managing cultural differences
has become a challenge for managers and supervisors in the twenty-first century. Because
one’s own culture plays an important role in the way one manages, one must strive to learn
not only about the different cultures that exist in the countries where one wants to do
business, but also how to see one’s own culture in an objective manner.

REFERENCES

Bailyn, L. (1993). Breaking the Mold: Women, Men, and Time in the New Corporate World.
New York: Free Press.
Cavanaugh, J. M. (1997). (In)corporating the Other? Managing the Politics of Workplace
Difference. In Managing the Organizational Melting Pot. Thousand Oaks: Sage.
Forbes, L. H. (2002, October). Improving Quality Through Diversity: More Critical Now
Than Ever Before. Leadership & Management in Engineering, 2(4), 49.
Grossman, R. J. (2000, March). Race in the Workplace. HR Magazine, 41-45.
Hambrick, D. C., Cho, T., & Chen, M. J. (1996). The Influence of Top Management Team
Heterogeneity on Firms’ Competitive Moves. Administrative Science Quarterly, 41,
659-684.

OVERCOMING BUSINESS SCHOOL FACULTY DEMOTIVATION

Robert A. Page, Jr., Southern Connecticut State University


pager1@southernct.edu

Ellen R. Beatty, Southern Connecticut State University


beattye1@southernct.edu

ABSTRACT

With a variety of constituencies demanding curricular reform in higher education, the
academic community has risen to the challenge with improved goals and new strategies.
However, strategies are ineffective unless they are implemented, and successful
implementation depends, in part, on the commitment of faculty members. This paper argues
that the probability of successful implementation is increased when administrative reforms
meet the motivational needs of the faculty groups who must use them. By focusing on areas
of common interest, such curricular reforms generate the kind of excitement and sense of
importance and meaning that motivate those involved in them. Examples of these “high
motivation” reforms are presented.

I. ENVIRONMENTAL TRENDS IN THE ACADEMY

Institutions of higher education face a “perfect storm” of public criticism due to the
convergence of three environmental trends: (1) the ever growing demand for college degrees
as professional career credentials since the U.S. continues to shift away from an
industrial/goods-based society towards a knowledge/service-based society, (2) the ever
increasing costs of providing near universal access to college education, and (3) the
problematic nature of student learning outcomes in light of the magnitude of the societal
investment involved (AACU National Panel, 2002; Barker, 2000; Seybolt, 2004). By any
measure business school programs are exploding on both the undergraduate and graduate
levels. Business programs have steadily increased in popularity, accounting for
approximately 25% of all undergraduate degrees in the U.S. (Clegg & Ross-Smith, 2003).
Similarly, Master of Business Administration (MBA) degrees have become legitimized as a
credential for managerial careers in large organizations, and now constitute over 23% of all
graduate degrees (Friga, Bettis & Sullivan, 2003). However, there is some evidence that the
market has become saturated and the industry is entering a shakeout phase. Student
applications for MBA programs are down, as is the overall potential applicant pool of
GMAT registrants, by 15 to 25 percent (Zupan, 2005). While some of this trend can be
explained by growing national and international competition for students among an ever
increasing number of business schools, analysts note an increasing skepticism about the value
of management education (Grey, 2004; Hayes & Abernathy, 1980; Pfeffer & Fong, 2002;
Porter & McKibbin, 1988). Given the escalating costs associated with most MBA programs,
non-traditional educational institutions, particularly corporate universities and consultants,
have successfully challenged the basic premise that hard, quantitative analysis skills require
an intensive, two-year program of study (Pfeffer & Fong, 2002). Business Week concludes,
“The drop in B-school enrollments may be signaling that people think they will receive better
training inside Corporate America than out” (Editorial, 2005, 112).

To its credit, the academy has developed an effective response to these emerging
trends and pressures. There is an increasing consensus among academic leaders such as the
Association to Advance Collegiate Schools of Business (AACSB), the Association of
American Colleges and Universities (AACU) and the Carnegie Foundation concerning
expectations and goals for business schools and Master of Business Administration (MBA)
programs. The AACSB codified these emerging trends through an international taskforce,
and through its accreditation standards. The AACSB’s Committee on Management
Education summarized the lifelong student learning outcomes of a quality business program,
particularly on the graduate level. The MBA should (a) prepare students with the knowledge
and skills they will need to make meaningful contributions in a wide variety of organizations
and activities (in both the private and public sectors); (b) instill an ethical foundation and
moral base so that MBA students are well-rounded and socially responsible contributors to
their communities and societies; and (c) provide lifelong advantages in personal wealth, self-
sufficiency, and entrepreneurial ability to create wealth (AACSB, 2005, 2003). This new
vision fundamentally expands and transforms the college curriculum across a variety of
dimensions. If these efforts are successful, professionalism as a college professor will be
redefined to include, at a minimum, the following major educational themes in teaching and
curricular development:
• Integrative Learning. Faculty and students will pursue extra-curricular activities, to
the point of collaborating on research and service activities.
• Learning Styles. Professors will adapt classroom pedagogy to address the needs of a
wide variety of student learning styles, embracing the emerging range of cultural
contexts found in increasingly diverse student populations.
• Interdisciplinarity. Professors will move away from presenting discrete knowledge
packets and skill sets towards making explicit linkages and connections between and
among academic specializations (Bisoux, 2005; Smith, 2004). The emphasis will be
on developing students’ abilities to systematically analyze and evaluate whatever
knowledge and experiences they later encounter in life and work.
• Assessment. Professors will be committed to systematically assessing how effective
their choice of content and method is in sparking student learning, and will take that
feedback and make continuous improvements. The course-based assessment
activities of single faculty will be expanded to program level and institutional
outcomes as well. These assessments can elicit collective reflection, guide the
reformulation of learning goals, promote sustained improvement and articulation of
the curriculum, and the design/selection of more appropriate assessments (AACSB,
2005; Smith, 2004).

II. FACULTY MOTIVATORS

Faculty at teaching institutions already complained of heavy teaching, research, and
service loads before they were asked to shoulder the additional effort-intensive activities
involved in student-centered, outcome-focused educational approaches. “Faculty are
challenged to teach more, to collaborate more, and to engage in activities for which the
traditional faculty reward structures have had little regard” (O’Meara, Kaufman & Kuntz,
2003, 19; see also Ghoshal, 2005). So how does an institution of higher education ensure
faculty support and participation in curricular reform? The first challenge is to understand
faculty motivations (Cohen, 2003; Jayaratna, 2005). McClelland (1971) suggested that
people are primarily motivated by a need for affiliation, power, or achievement.
Faculty motivational needs will be linked with the major academic responsibilities of
research, teaching and service.

Need for Affiliation. Faculty members with a high need for affiliation tend to prioritize
personal relationships and the development of communities to facilitate the social interactions
they find so rewarding. These faculty members often find that curricular reform is a natural
fit, since it advocates developing close relationships with students both inside the classroom
and in extracurricular learning communities as well.
Need for Power. Faculty members or administrators with a high need for power will
enthusiastically support reforms that extend their power and influence, especially when those
reforms impress external powerholders, such as legislatures who control resource allocation,
and accrediting associations who confer status and prestige through their approval or their
ranking of the business school. In the political arena, the trend is clear--resource allocation
will be increasingly linked with meaningful educational reforms that improve student
learning outcomes and with assessment efforts to document program effectiveness,
particularly when this results in improved b-school rankings (AACU National Panel, 2002;
Danko & Anderson, 2005). For faculty members with a high need for power who are not on
the administration track, power is associated with internal resource allocation, external
funding and grants, and leadership positions in academic and professional organizations. In
most universities and academic associations the path to prestige, status and resources is
linked with research, not teaching accomplishments (AACU National Panel, 2002; Ghoshal,
2005). To the extent that educational reforms do not add to this power base, or worse, drain
time and effort away from building further power and influence, such reforms are likely to
receive lackluster support from this group.

Need for Achievement. Many faculty have a high and dominant need for achievement, but
they are not a homogeneous group in defining what the nature of true academic achievement
is. While some define achievement as some combination of teaching and scholarship, others
focus almost exclusively on research. A minority focus on service, which presents
motivational permutations so varied and complex they will be left for future analysis. The
true “teacher/scholars” are as dedicated to quality teaching and service as they are to
research, and will champion educational reforms advancing student-based learning outcomes
as a matter of academic integrity and professional pride (AACU National Panel, 2002;
Ghoshal, 2005). However, the same research notes that the weight of this burden is heavy.
Given the teaching and service loads typical of teaching institutions, most faculty feel
stretched to the limit even before assuming the additional duties of assessment, adding
interdisciplinary material, or systematically assessing learning outcomes. Those involved in
such efforts tend to burn out over time, and the reforms implode when their champions can
no longer shoulder the stresses inherent in their efforts (AACU National Panel, 2002;
O’Meara, Kaufman & Kuntz, 2003). In contrast, those who buy in to a more exclusive
research orientation are rewarded for it, as high output research faculty (AACU National
Panel, 2002; Barnett, 2005; Ghoshal, 2005):
• Promotion and tenure reviews usually prioritize research over teaching
accomplishments.
• Public recognition, status, and institutional reputation are linked with research output.
• Gaining and maintaining accreditation by prestigious academic associations is often
linked with consistent research accomplishment.
• Resource allocation from state legislatures, private foundations and federal grant
sources is usually tied to research productivity.
Consequently such faculty are likely to be resistant to the educational reform agenda to the
degree it increases the amount of time devoted to teaching at the expense of the amount of
time left available for research. They are likely to be actively hostile if this agenda appears
to force them to invest large amounts of time in learning new teaching methodologies they
find unfamiliar, uncomfortable and unrewarding (Barnett, 2005; O’Meara, Kaufman &
Kuntz, 2003).

III. HIGH MOTIVATION REFORMS

This paper argues that the perceived meaningfulness of educational reforms increases
to the degree that a reform addresses the interests and needs of multiple faculty groups.
Discretionary effort is likely to increase when a reform is perceived as a mutually beneficial
behavior which secures a common and worthwhile goal. These goals allow faculty to pursue
their individual interests (including research) while increasing the quality of the students’
educational experience and improving student learning outcomes for accreditors and funding
agencies. When an activity is linked with that level of perceived meaningfulness, it increases
the likelihood that (a) even an exhausted teacher/scholar will summon up the energy reserves
needed to engage in it, and (b) even a hesitant researcher will divert time to pursue it. In
general, high motivation reforms that meet the following criteria should be prioritized:
1. Prioritize student-centered learning activities that generate research and publication
opportunities for faculty. Instead of advancing reforms which threaten to compromise
research output, why not prioritize reforms which incorporate it? To advance publication
efforts, students can be integrated into field research data collection and analysis. “For our
new curriculum development model, we wanted our teaching and research to be inextricably
linked, and our faculty’s research used as a learning tool. In addition we linked our teaching
and research to resources off campus – including industry, publishers, professional institutes,
and employers” (Jayaratna, 2005).
2. Prioritize interdisciplinary efforts that generate research and publication
opportunities for faculty. The benefits of interdisciplinary learning experiences (in course
content, linked courses, interdisciplinary modules, etc.) are well documented, but so are the
costs. Unless carefully supported, such reforms negatively impact faculty quality of work life
– they simply require too much time and effort (Smith, 2004). Interdisciplinary efforts which
lead not only to classroom innovations, but to conference presentations and journal
publications as well, are more likely to be supported and implemented by a broader range of
faculty. The moral of the story: faculty who collaborate in research and publication are more
likely to teach together as well, and vice versa. For administrators, interdisciplinary activities
represent the kind of curricular innovations which command respect from students and
parents, from accrediting agencies, and from funding sources. This type of high motivation
reform can take the form of entire new programs engaging more than one department or
school. Such is the case for Central Connecticut State University's masters and certificate
programs delivered through OnlineCSU (CCSU, 2004). Under the scope of the university's
mission, and using a timely infusion of external financial support, the multidisciplinary
interests of faculty in the mathematical sciences and computer science departments resulted
in the creation of the first fully online Master’s and Certificate Data Mining programs in the
world. The faculty need for mutual affiliation, the potential for prestige generated by the first
program of its kind in a growing field relevant to all, the possibilities of reaching a student
audience worldwide, and the interest of the university in creating programs responding to
present and prospective workforce needs, became critical in the establishment of the program.
Faculty appreciate the advantages this type of program offers in pursuing both research and
grant opportunities. For students, online delivery requires continuous attention to the
achievement of learning goals, with program learning objectives assessed through a
capstone experience.
3. Prioritize reforms that do not overburden faculty. Given the heavy teaching, research
and service loads of faculty at teaching institutions, reforms which add to that burden are
likely to be resisted, not embraced. Consequently reforms should be prioritized when they
are coupled with implementation strategies which minimize their intrusion and demands on
faculty life. For example, the SCSU Faculty Technology Resource Lab (FTRL) sponsors a
variety of initiatives to support faculty utilization of technology with assessment, teaching
and research applications. Their largest initiative is called STARS (Student Technology
Assistant Representatives). This innovative program allows technologically literate graduate
students to mentor and assist faculty, staff and other students in the use of computer
technology (FTRL, 2004). After extensive training with various kinds of hardware and
software, STARS provide desktop support for faculty and staff, network administration, web
development, programming, instructional technology, and a faculty technology walk-in
service center. STARS come from a variety of majors, including business. They are
recruited as students who have learned how to integrate the latest computer technologies in
their field. Upon graduation they receive a certificate as a professional credential. STARS are paid for their work, in keeping with the service-learning goal to "learn while working" in the Office of Information Technology.

IV. CONCLUSION

Unless faculty feel motivated to embrace change, they can find innumerable ways to
resist it. Consequently, researchers warn that most curricular reforms attempted in large universities have failed (AACU National Panel, 2002; Barker, 2000). The collapse of well-meaning efforts usually results from a discouraging combination of inadequate resources and
perceived opportunity costs. Given resource scarcity, the perceived costs and risks of shifting
time and effort commitments away from other activities towards teaching are enormous.
However, high motivation reforms appeal to everyone involved by addressing common and
complementary needs and interests.

REFERENCES

Bisoux, Tricia. “The Extreme MBA Makeover.” BizEd, 4(4), 2005, 527-533.
Editorial. “B-Schools for the 21st Century.” Business Week, April 15, 2005, 112.
Danko, James M. and Anderson, Bethanie L. “In Defense of the MBA.” BizEd, 5(1), 2005,
24-29.
Ghoshal, Sumantra. “Bad Management Theories Are Destroying Good Management
Practices.” Academy of Management Learning and Education. 4(1), 2005, 75-91.
McClelland, David. Motivational Trends in Society. New York: General Learning Press,
1971.
Mintzberg, Henry. Managers Not MBAs: A Hard Look at the Soft Practice of Managing and Management Development. San Francisco: Berrett-Koehler, 2004.

CHAPTER 21

POLITICAL COMMUNICATION
AND
PUBLIC AFFAIRS

MEDIA FRAME: THE WAR IN IRAQ

María J. Pestalardo, East Tennessee State University


majupesta@yahoo.com

ABSTRACT

This study analyzes the framing of the war in Iraq in the coverage of four principal American newspapers over two and a half years. It shows how the media framing of the war shifted with time and with the political climate of the moment. Quantitative content analysis confirmed the study's hypothesis that American newspapers change their framing of the war in Iraq according to the development of the conflict: the longer the war lasted, the weaker their support became. On the exploratory question of which newspapers supported the U.S. position in the war, the New York Times and USA Today were more critical of the U.S. position than the San Francisco Chronicle and the Washington Post.

I. INTRODUCTION

Media significantly shape the way we view and understand present issues. By
focusing attention on selected issues while ignoring others, media framing influences public
opinion. This study analyzes the different framing of the war in Iraq according to the media
coverage of four principal American newspapers: the New York Times, the Washington Post,
USA Today, and the San Francisco Chronicle, and it exposes how the media framing of the
war changed throughout time and the political climate of the moment.

II. LITERATURE REVIEW

Framing News
Framing occurs when, in the course of describing an issue or event, the media’s
emphasis on a subset of potentially relevant considerations causes individuals to focus on
these considerations when constructing their opinion, instead of on others. Fuyuan Shen
(2004) says that media frames can have significant consequences on how audiences perceive
and understand issues and can alter public opinions on ambivalent and controversial issues.
Among studies about media framing, Adam Simon (2001) says that to fully understand how
framing works, it is important to know how human memory works. Memory associates
concepts to create ideas. Therefore, choosing the right concepts in a story can evoke the right
idea by association of those concepts. Matthew Nisbet, Dominique Brossard, and Adrianna Kroepsch (2003) analyze framing and the role of journalists in constructing drama
stories. They support the idea that war times are one of the most profitable times for media
business, since there are a lot of stories that reflect human drama. On the other hand, Jack
Lule (2003) analyzed the metaphors used in the coverage of the war in Iraq. According to
Lule, metaphors frame the news and are used by media and politicians in the conception and
construction of war. During the Iraq war, media adopted the metaphors used by the Bush
administration. These metaphors provided a means to understand how the prelude to war was
framed and portrayed by news media that anticipated rather than debated the prospect of war.
Ray Eldon Hiebert (2003) analyzes how the Bush administration frames the war in Iraq
through public relations and propaganda strategies. He says that the biggest and most
important public relations innovation of the Iraq War was the embedding of about 600 journalists with the troops doing the actual fighting.

Cascade Model And Framing Process


Robert Entman (2003) analyzes the cascade model during 9/11 when the Bush
administration framed the attacks and reinforced this strategy through all the lines of
communication, reaching leaders, the media and the general public. According to Entman,
words and images stimulate support of or opposition to the sides in a political conflict.
Journalists use words and images highly salient in the culture, which is to say noticeable,
understandable, memorable, and emotionally charged. Through the recurring use of words such as evil and war in framing September 11, he says, the Bush administration was leading the public to war in advance.

International Framing
Through an experimental study, Paul Brewer, Joseph Graf and Lars Willnat (2003)
examined how media affect the standards by which people evaluate foreign countries. News
stories present a frame linking an issue to a foreign nation in a way that suggests a particular
implication shaping how audience members judge that nation. Ilija Tomanic Trivundza
(2004) analyzes how media shape our knowledge of the world. She indicates that in the
international media coverage of the Iraq war, the ideological framing depends primarily on
culturally specific patterns of self-identification with the nations or cultures involved in the
conflict. According to her, media frame nations based on antagonism (the good-the bad, the
inferior-the superior, etc). In Israel, Tamar Liebes (1992) compared the coverage of the Gulf
War and the War in Israel by American and Israeli media. In this study, she analyzes how the
media cover war differently depending on who is involved. Journalists have to deal with
their patriotic fervor and their instinctive loyalty to their own country and their professional
duty of morale building that presides over their careers. After two years of analysis, she concluded that the ideology of objectivity, neutrality, and balance is reserved for reporting other
people’s troubles, rather than one’s own.
Wilhelm Haumann and Thomas Petersen (2004) studied German public opinion
towards the U.S. position in the war in Iraq. America and Germany framed the news of the war differently: while the U.S. media concentrated on the actions of the governments involved in the conflict and news from the battlefield, the German media focused on the civilian population in Afghanistan.

War Times and The Media


Lars Nord and Jesper Stromback (2003) analyze the role of professional journalism
during war times. They stress that war reporting relies on political and military resources,
whose incentives are often to cover up the truth and manipulate media reporting. According
to the researchers, there are some factors that influence the media coverage of a war. First, if
the country of the journalist is involved in the war, and second, if there are political and/or international disagreements about that conflict. Michael Pfau, Michel Haigh, Mitchell Gettle, Michael Donnelly, Gregory Scott, Dana Warr, and Elaine Wittenberg (2004) analyze the coverage of the war in Iraq by journalists reporting from military combat units of Operation “Iraqi Freedom” (2003) in contrast to the journalists’ role in the conflicts “Enduring Freedom” (2001) and “Desert Storm” (1991). According to the researchers,
embedded coverage of Operation “Iraqi Freedom” was more favorable in tone, both toward
the military generally and toward its personnel than during the other two military conflicts.

III. HYPOTHESIS AND EXPLORATORY QUESTION

The hypothesis of this study was: American newspapers tend to change their framing
of the war in Iraq according to the development of the conflict; the longer the war took, the weaker their support would be. The exploratory question of this study was: According to
their news frame, which newspapers support the U.S. position in the war in Iraq?

Through quantitative content analysis the researcher tested the hypothesis and
exploratory question. Using a research randomizer table, the researcher built 10 constructed
weeks in order to compare the newspapers’ framing. The unit of analysis was each title and
each lead of the selected stories. The independent variable was the newspaper’s name, and
the dependent variable was the nature of the framing of each story (favorable, unfavorable,
balanced or factual framing toward the U.S. position). The newspapers were selected on the
basis that they typify the population and because of their importance and influence in their
respective locations. The sample size of the study was 1,243 stories. A story was coded as
favorable if it reflected positively on the U.S. position in the Iraq war; highlighted the talents of the U.S. military or the Coalition of the Willing in the war (Coalition of the Willing: a group of 38 nations acting collectively, and often militarily, outside the jurisdiction of United Nations mandates and administration); or associated them with positive characteristics or actions. A story was coded as unfavorable if it reflected negatively on the U.S. position in the Iraq war; associated the U.S. or the Coalition of the Willing with unethical, illegal, or immoral behavior; suggested the U.S. or any of its coalition countries was a source of problems; or associated them with a negative experience of failure. A story was coded as balanced if the
story provided nearly equal amounts of positive and negative information. A story was coded
as factual if the facts transmitted in the news information did not reflect any of the
characteristics of the favorable, unfavorable or balanced stories toward the U.S. position in
the war.
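The coding-and-comparison procedure above culminates in a chi-square test of independence on the constructed-week-by-framing counts. The Python sketch below illustrates that kind of test; the counts are invented for demonstration and are not the study's data, and the function is a generic illustration rather than the researcher's actual analysis tool:

```python
# Chi-square test of independence on a contingency table, as used to
# compare framing categories across time periods. Counts are invented.

def chi_square(table):
    """Return the chi-square statistic and degrees of freedom for a
    contingency table given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df

# Two hypothetical constructed weeks, four framing categories
# (favorable, unfavorable, balanced, factual):
observed_counts = [
    [44, 30, 14, 34],  # an early constructed week
    [18, 31, 6, 39],   # a late constructed week
]
stat, df = chi_square(observed_counts)
print(round(stat, 2), df)
```

A large statistic relative to the degrees of freedom is what licenses a p < .01 conclusion like the one reported in Table I; in practice a library routine such as SciPy's `chi2_contingency` would also return the p-value directly.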

IV. RESULTS

Over the period from the beginning of the war through two and a half years of the conflict, a chi-square analysis revealed that out of 1,243 stories that covered the Iraq war in these four newspapers, 27.0% were favorable, 26.9% unfavorable, 8.7% balanced, and 37.4% factual towards the U.S. position (Table 1). During the first constructed week a
chi-square confirmed that 36.1% of the stories were favorable toward the U.S. position on the
war and during the last constructed week the favorable stories were 19.1%. In between, the
favorable coverage tended to go down month after month. The unfavorable stories toward the
U.S. position on the war went from 24.6% during the first constructed week to 33.0% during
the last constructed week. During the 10 constructed weeks, the level of unfavorable stories
fluctuated constantly, as well, but in general, the unfavorable coverage increased month after
month. Most of the news stories were factually framed (37.4%). The number of factual stories rose over the time frame, while the number of balanced stories declined.

Table I: Constructed Week by Framing

CONSTRUCTED WEEK              Favorable  Unfavorable  Balanced  Factual   TOTAL
From 03/19/03 to 06/19/03       36.1%      24.6%       11.7%     27.5%   100.0%
From 06/20/03 to 09/20/03       28.7%      28.7%        7.4%     35.1%   100.0%
From 09/21/03 to 12/21/03       20.9%      29.7%        7.7%     41.8%   100.0%
From 12/22/03 to 03/22/04       24.1%      35.4%        7.6%     32.9%   100.0%
From 03/23/04 to 06/23/04       20.8%      29.6%        8.2%     41.5%   100.0%
From 06/24/04 to 09/24/04       34.5%      15.0%        3.5%     46.9%   100.0%
From 09/25/04 to 12/25/04       25.0%      27.1%       10.4%     37.5%   100.0%
From 12/26/04 to 03/26/05       12.5%      30.0%       11.3%     46.3%   100.0%
From 03/27/05 to 06/27/05       23.9%      23.9%        5.7%     46.6%   100.0%
From 06/28/05 to 09/28/05       19.1%      33.0%        6.4%     41.5%   100.0%
TOTAL                           27.0%      26.9%        8.7%     37.4%   100.0%
Note. N = 1243; Chi-square = 64.93; df = 27; p < .01

Table II: Newspaper by Framing

Newspaper   Favorable  Unfavorable  Balanced  Factual    Total
WP            39.7%      30.0%       21.1%      9.2%    100.0%
NYT           19.1%      22.8%        2.5%     55.6%    100.0%
SFC           41.7%      33.8%        9.9%     14.6%    100.0%
USA           10.1%      28.7%        0.0%     61.2%    100.0%
Total         27.0%      26.9%        8.7%     37.4%    100.0%
Note. N = 1243; Chi-square = 344.61; df = 9; p < .01. Percentages are within newspaper.

The Washington Post had one of the highest levels of favorable (39.7%) and
unfavorable (30.0%) coverage towards the U.S. position on the war (Table 2). This same
pattern was repeated in the San Francisco Chronicle’s coverage (41.7% and 33.8%). However, the coverage in both newspapers tended to frame the stories more favorably than
unfavorably. The New York Times had one of the highest proportions of factual coverage (55.6%), along with USA Today (61.2%), and reported almost equal amounts of favorable and unfavorable stories (19.1% and 22.8%). USA Today showed a marked difference between favorable and unfavorable stories: a gap of 18.6 percentage points, since 28.7% of its stories were unfavorable and 10.1% were favorable.

V. CONCLUSION

Consistent with the studies in the literature, a large proportion (53.9%) of the stories on the Iraq war in these American newspapers was framed either favorably or unfavorably toward the U.S. position in the conflict. The hypothesis was confirmed

since the newspapers changed their framing of the war in Iraq according to the development
of the conflict. During the first week of the conflict, the coverage of the war was more
favorable than unfavorable with 11.5 percentage points difference (36.1% vs. 24.6%). After
two and a half years, the unfavorable stories were 13.9 percentage points higher than the
favorable stories (33.0% vs. 19.1%). After the first six months of the conflict, the favorable
news dropped dramatically from 36.1% to 20.9% and did not reach the original percentage again. The coverage of those weeks that had a large amount of favorable news was mainly
related to issues such as: the end of Saddam Hussein’s regime, Iraq conquered, assassination
of Saddam Hussein’s sons, Saddam Hussein’s capture, U.S. commitment to democracy, Iraqi
free government, democratic elections, and Saddam Hussein’s trial, among others. The
coverage of those weeks that had a large amount of unfavorable news was mainly related to
issues such as: attacks against U.S. troops, unproven weapons of mass destruction, international disagreements (UN) about the war, mishandling of the Iraqi reconstruction, economic and oil issues, international terrorist attacks related to the Iraq war, the mutilation of U.S. civilians in Falluja, photos of Abu Ghraib prison, and the American presidential elections,
among others. The exploratory question investigated which newspapers supported the U.S.
position on the war in Iraq according to their news frame. The Washington Post had a more
favorable framing of the war than an unfavorable one. The San Francisco Chronicle mainly
balanced the amount of favorable and unfavorable news, taking a more neutral position. The
New York Times and USA Today published more factually framed stories. The New York
Times maintained a fairly well-balanced framing of favorable and unfavorable stories (19.1%
and 22.8%), as well. The only newspaper that showed more openly its disagreement with the
conflict and the decisions taken by the Bush Administration was USA Today.

REFERENCES

Brewer, Paul; Graf, Joseph; Willnat, Lars. “Priming or framing: Media influence on attitudes toward foreign countries.” Gazette, 65(6), 2003, 493-508.
Entman, Robert. “Cascading activation: Contesting the White House's frame after 9/11.” Political Communication, 20(4), 2003, 415-432.
Haumann, Wilhelm & Petersen, Thomas. “German public opinion on the Iraq conflict: A passing crisis with the U.S.A. or a lasting departure?” International Journal of Public Opinion Research, 16(3), 2004, 311-330.
Hiebert, Ray. “Public relations and propaganda in framing the Iraq war: A preliminary review.” Public Relations Review, 29(3), 2003, 243-255.

WOMEN’S IMAGE AND ISSUES: A COMPARISON OF
ARAB AND AMERICAN NEWSPAPERS

Don Love, American University of Sharjah


dlove@aus.edu

ABSTRACT

This study examines the differences in reporting women’s issue news in Arab and
American newspapers. Results reveal the American press does not report on women’s issues
any more or less than the Arab press. Arab newspapers, however, have a more positive slant
on women while American newspapers run more negative stories about women. Results are
discussed in relation to the contrast between Arab and American law and social customs.

I. INTRODUCTION

Feminist and political leaders throughout the Arab region have long argued that one
of the most harmful exports of American media is an inaccurate image of Muslim women
(Darraj, 2002). Egypt’s first lady, Suzanne Mubarak, president of the Arab Women’s
Summit, called the image of Arab women “distorted and unfair” (quoted in Hanley, 2002).
Jabar Asfour of Egypt’s National Council for Women warned that there is a new urgency in
combating Western stereotypes of Muslim women saying “Many Westerners mistakenly
view Taliban women as quintessential Muslim women” (quoted in Hanley, 2002, p. 53). And
Fatima Zayed, wife of the late president of the United Arab Emirates, suggested that Islamic
nations face a hostile media campaign in the West that “often focuses on Muslim women’s
rights, implying they have none” (quoted in Chu & Radwan, 2004, p. 39).

II. ARAB MEDIA

Critics claim a more accurate image of Middle Eastern women can be found in media
produced in areas of the world with large Arab populations such as countries along the
Persian Gulf or in northern Africa (Abernethy & Franke, 1996; Al-Olayan & Karande, 2000).
Since the most effective way to learn about a culture is to live immersed in it (Love &
Powers, 2002), reading media publications originating in the Arab world provides a more
precise picture of Middle Eastern society than the western media exported around the world.
Challenges in gaining access to Arab communities due to travel restrictions, inadequate
language skills, or potentially threatening political environments have prevented western
reporters from effectively reexamining these media stereotypes (Feghali, 1997) and, as a
result, most western ideas about women in Arab culture are “more fiction than fact” (Khatib,
1994, p. 58).

Although there are conflicting views about the degree of male oppression over women
in Arab society, scholars agree that there is a considerable difference in the status of women’s
rights in Arab countries and western countries (Darraj, 2002; Fernea, 2000; Fox, 2002;
Hymowitz, 2003; Ray & Korteweg, 1999; Sakr, 2002; Saliba, 2000). These differences in
status are a significant influencing factor in determining what is reported about women and
women’s issues in both Arab and American media (Sakr, 2002). The purpose of this paper is
to determine whether American and Arab newspapers are emphasizing news about women
that perpetuates international stereotypes. Specifically, this study examines (a) to what extent

international editions of American and Persian Gulf newspapers carry news about women and
issues relevant to women; and (b) to what extent the news, whether about Arab or non-Arab
women, is treated positively or negatively.

III. LITERATURE REVIEW

In most cultures the schema of “feminine” traditionally includes the stereotype of


women as mother, housewife, and sex object (Lemish & Tidhar, 1999). Women are thought
of as emotional, nurturing, and, often times, passive. In contrast, men are viewed as leaders in
their society possessing strengths such as independence, ambition, and aggression (Sapiro,
1993; Chang & Hitchon, 1997). The degree of acceptance of these sex roles is an indication
of the power and autonomy women have in any individual culture (Ferree & Hall, 1996).
This power is then reflected in the amount of news coverage women get in a country’s
newspapers and other media sources. Media gatekeepers in countries where men are given
high status are likely to prioritize stories about men and male issues while media
organizations in countries with greater gender equity will have a more proportional
distribution of men’s and women’s news.
Throughout the twentieth century the feminist movement in America has been characterized
by efforts to reduce acceptance of traditional sexual bias (Schnittker, Freese, & Powell,
2003). Women have been encouraged to voice their opinions about social inequities and
demand media coverage of their actions (Immerman & Mackey, 2003). Women are now
leaders in media, business, and politics, and, although sex role discrimination has clearly not
disappeared in America, a woman’s right to gender equality is firmly entrenched in American
cultural identity (El-Ghannam, 2003).

In Arab society, however, cultural values have traditionally restricted women’s


behavior (Feghali, 1997; Fox, 2002; Love & Powers, 2004). To safeguard family honor, men
are accorded authority over women and dictate extremely restrictive codes of behavior
including what women can wear, with whom they can talk, and how many children they can
have (Khalid, 1977; Moore, 1998). Although interpretation of what actions a woman must
avoid to prevent shame upon a family’s name varies considerably between Arab countries
(Nydell, 1996), honor has been a prominent principle in the development of women’s roles
throughout Arab society (Feghali, 1997).

Arab women, therefore, have traditionally had fewer opportunities than Americans to
participate in newsworthy activities (El-Ghannam, 2003; Ray & Korteweg, 1999). Interests
were limited to environments in which there was little or no interaction with men, other than
family members (Feghali, 1997). Educational opportunities were reserved for male children
while business and property rights were afforded to women only under the direction of a
husband, father, or brother. Child raising, household management, and involvement with
religious or charitable organizations were encouraged but closely monitored by other family
members (Fernea, 2000). Although some scholars suggest that this paradigm of men in the
public sphere controlling women in the private sphere (i.e., the home) is no longer valid
(Fernea, 2000), many admit that the so-called “cultural evolution of Arab society”
(Immerman & Mackey, 2003, p. 217) has not significantly permeated Arab life (Fox, 2002;
Hymowitz, 2003; Ray & Korteweg, 1999; Sakr, 2002; Saliba, 2000). Activist women’s
groups are pushing for change through both governmental and nongovernmental initiatives
(Sakr, 2002) but many media outlets in the Arab world fail to cover these events due to
government censorship or societal pressure (Darwiche, 2000).

IV. HYPOTHESES

Since proponents of women’s rights charge that coverage of women’s issues is lacking in
the Arab media, the following hypotheses are proposed:
H1 There should be a difference in the proportion of women’s news carried by
American and Arab newspapers.
H2 American newspapers will carry more hard news, business news, and editorials about
women and women’s issues than will Arab newspapers.
H3 American newspapers will carry more positive women’s news than will Arab
newspapers.
H4 American newspapers will carry more negative news about Arab women and their
issues than about American women’s issues.
H5 American newspapers will carry more negative news about Arab women than will
Arab newspapers.

V. METHOD

International editions of two American and three Arab newspapers were used for this
study. The newspapers examined were the International Herald Tribune (United States), USA Today, the Saudi Gazette, the Gulf News (United Arab Emirates), and the Arab Times
(Kuwait). The American newspapers were selected based upon their availability throughout
the 22 Arab nations in the Middle East and northern Africa. The English-language Arab
newspapers represented the spectrum of freedom for expression in the region ranging from
the most limited (Saudi Arabia), to more moderate (UAE), to the most open (Kuwait) (Rugh,
2004). All sections of the newspapers were coded except for classified advertising. Two
coders were trained to examine the data. The unit of analysis was the news story, which
included hard news, features, sports, business, editorials, and accompanying pictures. The
articles and pictures selected for analysis were judged to be about women or relevant to
women’s issues. Examples include Arab women’s groups asking the West for help with
women’s rights issues, Israeli attacks on Palestinian families, Chinese women protesting an
extension of the time required for marriage before they can apply for Taiwanese citizenship,
kidnappings in the Philippines, and an American woman biting off the tongue of a would-be rapist. Coders examined articles published over a one-month period. Intercoder reliability for
all variables combined was 94.25%. Relevant items were then rated according to whether the
slant on the news was positive, negative, or neutral. Positive article/pictures covered events
beneficial to women while negative articles/pictures portrayed women as victims or acting in
harmful ways to themselves or others. When the effect or image of women could not be determined, the slant of the news was considered neutral. Intercoder reliability was 91.05%.
When ratings differed between coders, the differences were discussed until a consensus was
achieved.
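The intercoder reliability figures reported above are consistent with simple percent agreement between two coders. A minimal Python sketch of that computation, using invented ratings rather than the study's data:

```python
# Simple percent agreement between two coders rating the same stories.
# The ratings below are invented for illustration only.

def percent_agreement(coder_a, coder_b):
    """Share of items (as a percentage) on which two coders agree."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must rate the same number of items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

coder_a = ["positive", "negative", "neutral", "positive", "negative"]
coder_b = ["positive", "negative", "neutral", "negative", "negative"]
print(percent_agreement(coder_a, coder_b))  # 4 of 5 codes match: prints 80.0
```

Percent agreement is the simplest reliability index; chance-corrected measures such as Cohen's kappa are stricter, and the method section does not specify which index was used here.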

VI. RESULTS

To determine the news coverage of women’s issues a total of 17,780 stories were
content analyzed in 152 issues of American and Arab newspapers. Women’s news was covered in 2,040 stories: 10.8% of the American articles (N = 564) and 11.8% of the Arab articles (N = 1,476) (see Table 1).

Table 1: Women’s Issue Coverage
________________________________________________________________
Newspaper              Women’s Articles   Total Articles   Percentage
Herald Tribune                172              2616            6.6
USA Today                     392              2576           15.2
Saudi Gazette                 280              2532           11.2
Gulf News (UAE)               624              6000           10.4
Arab Times (Kuwait)           572              4056           14.1
________________________________________________________________

Hypothesis 1: The proportion of women’s issue news in American newspapers was 10.8% whereas the proportion of women’s news in Arab newspapers was 11.8%. The difference in proportions was not significant, X2(1, N = 2040) = 0.54, p > .05.
Hypothesis 2: The proportion of hard news, editorials, and business news about women’s issues was 69.5% in American newspapers whereas the proportion of similar women’s news in Arab papers was 69.3%. The difference in proportions was not significant, X2(1, N = 1416) = 0.00, p > .05.
Hypothesis 3: The proportion of positive women’s news in American newspapers was 45.4% while the proportion of positive women’s news in Arab papers was 59.9%. The difference in proportions was significant, X2(1, N = 1140) = 10.30, p < .01. Post hoc analysis of editorials, business, and hard news about women’s issues showed the proportion in the American media was 39.8% and the proportion of similar stories in the Arab media was 56.5%. The difference in proportions of these types of stories was also significant, X2(1, N = 660) = 10.23, p < .01.
Hypothesis 4: In American newspapers the proportion of negative news about Arab women’s issues was 38.8% while the proportion of negative news about American women’s issues was 19.1%. The difference in the proportions was significant, X2(1, N = 122) = 4.72, p < .05.
Hypothesis 5: The proportion of negative news about Arab women’s issues in American newspapers was 38.8% whereas the proportion of negative news about Arab women’s issues in Arab newspapers was 15.0%. The difference in the proportions was significant, X2(1, N = 28) = 5.17, p < .025.
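Each hypothesis above compares two proportions with a chi-square statistic at df = 1, i.e. a 2x2 contingency table of relevant versus other stories. The following hedged Python sketch illustrates that comparison with hypothetical counts (not the study's data):

```python
# Two-proportion comparison via a 2x2 chi-square (df = 1), without
# continuity correction. Counts below are hypothetical.

def two_proportion_chi_square(hits1, n1, hits2, n2):
    """Chi-square statistic comparing proportions hits1/n1 and hits2/n2."""
    table = [[hits1, n1 - hits1], [hits2, n2 - hits2]]
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    grand_total = row_totals[0] + row_totals[1]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# e.g. 60 positive stories out of 120 vs. 90 positive out of 150:
print(round(two_proportion_chi_square(60, 120, 90, 150), 2))  # prints 2.7
```

With df = 1, a statistic above 3.84 corresponds to p < .05, which is the decision rule implicit in the significance claims above.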

VII. DISCUSSION

The intent of this paper was to examine differences in coverage of women’s news by
American and Arab newspapers. The results of this study failed to support the claim that newspapers in America give women’s issues more coverage. The American press covers neither hard news
nor general women’s news any more or less than the Arab press. Arab newspapers, however,
have a more positive slant on women while American newspapers run more negative stories
about Arab women. In addition to providing entertainment and advertising, the role of the
Arab media is to convey news, provide opinions, and reinforce social norms and cultural
awareness (Rugh, 2004). In this way, Arab mass media performs the same basic functions as
media in America and throughout the world. Unlike the libertarian system of the American
press, however, Arab media outlets work under a more authoritarian system, supporting and
advancing the policies of the government (Siebert, 1953).

The results of this study indicate that Arab newspapers reflect an image of women
that is more similar to American journalism than the restrictive, sexist society commonly
associated with Arab culture. Perhaps this can be accounted for by differences between the laws and social customs of many Arab states. Throughout the Middle East, laws require
equality between the sexes. These laws reflect the tradition of Islamic teachings that women

have the right to inherit property, own and operate businesses, and be educated (Darraj,
2002).

Although parts of the Qur’an, like other religious writings, have been interpreted to
support male oppression over women, Islamic teaching is supportive of a woman’s right to free expression (Saliba, 2000). Due to the close relationship between government and religion
in most Arab nations, the legal systems also support equality between the sexes. The Arab
press, having developed under this traditional, authoritarian structure, reflects these
government policies and, therefore, provides women’s news significant coverage. Social
customs in Arab society, however, often do not reflect government laws supporting women’s
rights. Arab women may be prohibited from exercising their full range of choice in employment and education, or even the option of expanding beyond their traditional role in the home (Chu & Radwan, 2004). News stories about the plight of Arab women are underreported or omitted entirely, especially in more traditional countries such as Saudi Arabia, Kuwait, and the Sudan (Rugh, 2004). This is an area that significantly differentiates
Arab and American societies. The results of this study indicate American media will
highlight these “negative” stories while Arab newspapers will not.

VIII. CONCLUSIONS

Critics of American media claim that the press presents a negative stereotype of Arab
women and society. Critics of Arab media argue that the press does not give full coverage to
important women’s issues, with the result being an “Arab public that has to resort to foreign
media outlets…to get something resembling the full picture” (El-Affendi, 1993, p. 187). The
results of this study suggest that, perhaps, both viewpoints are right. It is hoped that
understanding the differences between Arab and American newspapers will lead to greater
insight into the issues that affect women in both countries and ultimately improve cultural
relations between Arab and Western societies.

REFERENCES

Acker, J. “Hierarchies, jobs, bodies: A theory of gendered organizations.” Gender and
Society, 3, 1990, 160-186.
Chang, C., & Hitchon, I. “Mass media impact on voter response to women candidates:
Theoretical development.” Communication Theory, 7, 1997, 29-52.
Chu, J., & Radwan, A. “Raising their voices.” Time, 163, 2004, February 23, 38-42.
Darraj, S. M. “Understanding the other sister: the case of Arab feminism.” Monthly Review,
53(10), 2002, 15-26.
Dodd, P. “Family honor and the forces of change in Arab society.” International Journal of
Middle East Studies, 21, 1973, 40-54.
El-Affendi, A. “Eclipse of reason: The media in the Muslim world.” Journal of International
Studies, 47, 1993, 163-194.
El-Ghannam, A. R. “Analytical study of women's participation in political life in Arab
societies.” Equal Opportunities International, 22, 2003, 38-54.
Feghali, E. “Arab cultural communication patterns.” International Journal of Intercultural
Relations, 21, 1997, 345-387.
Fernea, E. “The challenges for Middle Eastern women in the 21st century.” The Middle East
Journal, 54, 2000, 185-193.

DOES CHARITY TRULY BEGIN AT HOME?

Louis K. Falk, University of Texas at Brownsville
Louis.Falk@utb.edu

Hy Sockel, Youngstown State University
hysockel@cc.ysu.edu

John A. Cook, University of Texas at Brownsville
John.A.Cook@utb.edu

ABSTRACT

As a result of Hurricane Katrina, many agencies, both governmental and private, are
restructuring planning and business strategies. The failure in leadership has led to finger
pointing, resignations, and a tremendous amount of media coverage. A call for charitable
donations to help the people of this region has reached unprecedented proportions. This
paper investigates the relationship between a person’s distance from the epicenter of a
natural disaster and the motivation to respond to that tragedy. The authors conducted a
survey in Texas, Louisiana, and Ohio. The results suggest that the closer a person was to the
event, the more information they craved. The results also suggest that there is no correlation
between celebrity endorsement and the motivation to give. Many obstacles arose in
conducting this study. A discussion of those obstacles and an overview of charitable giving
are also included in the paper.

I. INTRODUCTION

The destruction in New Orleans and surrounding areas during Hurricane Katrina and
the subsequent strike by Hurricane Rita will, according to the Worldwatch Institute, produce
the “Most Costly Weather-Related Disasters in History”
(www.worldwatch.org/press/news/2005/09/02/). The following day, U.S. Homeland Security
Secretary Michael Chertoff declared the devastation “probably the worst catastrophe, or set
of catastrophes” in the country’s history (www.ceca-
raznatovic.com/public.html/hurricanekatrina.htm).

Prior to the storm, New Orleans was a world-famous tourist destination, a city known
for its historic atmosphere. The 2000 U.S. census indicated that in excess of 1.3 million
people lived in the greater New Orleans area, with nearly a half million people in the city
alone. The aftermath, according to Wikipedia, left the city with under 150,000 people
(http://en.wikipedia.org/wiki/New_Orleans).

It is no wonder that the devastation of New Orleans and the surrounding areas caused
by the hurricanes resulted in an outpouring of donations from across the nation and the
world. The media was saturated with messages asking for charitable assistance. Luminaries
ranging from musician Aaron Neville to former presidents Bill Clinton and George H.W.
Bush helped bolster the cause.

II. BACKGROUND

U.S. News and World Report (2003) indicated that some two thirds of Americans
donate to charities. Individual American citizens in 2002 accounted for more than 80% of
the $241 billion donated to charity. Of those donations (an average of $2,499 per person),
35% goes to religious institutions, while 13% goes to education. With over 1.6 million non-
profits in the U.S. taking contributions under normal circumstances, many non-profit
executive directors have expressed concern that the extra demands of the hurricane disasters
will affect overall donations. Some observers expect no drop in traditional charitable giving
despite the extra demands for contributions sparked by Hurricanes Katrina and Rita. The
charitable organization Star of Hope in Houston, TX reported in late December a “bleakly
empty warehouse.” This warehouse usually serves approximately 1,000 homeless people
during the winter holiday season. Marilyn Fountain, Star of Hope spokeswoman, stated there
was a 1.4 million dollar shortfall for December (Holiday Donations, 2005). Sandra Miniutti,
a spokeswoman for Charity Navigator, a non-profit group in New Jersey that evaluates
philanthropic groups, stated that last year charitable giving was about $250 billion and that
no reduction was expected. However, there is believed to be some “donor fatigue” that could
create problems for small charities and food banks. About half the donations for small
charities and food banks are made between Thanksgiving and New Year’s Day, according to
Miniutti (Holiday Donations, 2005).

This led the authors to ask: what are the patterns of giving, and what factors might
determine the likelihood and magnitude of donations? Philanthropic patterns have been
studied, and there is some data regarding who gives what. Much of the research is
demographically oriented, as in the following.

Economic status seems to be one factor. According to The Chronicle of Philanthropy
(April, 2004), middle-range wealthy Americans, those with an annual income of $200,000
to $10 million, tend to give less to charity than both people who are richer and those
with less income. Gender is another factor that affects the likelihood of giving to charitable
causes. Single women are more likely to be donors. They also give larger gifts than their
male counterparts “with comparable education and income levels,” according to Patrick
Rooney, research director at the Center on Philanthropy at Indiana University (Giving ‘til It
Hurts, 2005). Age appears to be a factor that affects the size of donations. While charities are
seeing an increase in giving from younger people, Gardner (2005) reports that older people
tend to give larger gifts. The beneficiaries of minority gifts can also be divided by age.
According to The Chronicle of Philanthropy (Oct., 2004), those born after the passage of the
civil rights legislation of the mid-1960s tend to support charities and educational causes that
help people of all ethnicities. However, people who are older are more likely to support
causes that help members of their own minority group.

Another factor that weighs in on the magnitude of donations is the Internet. Not only
does it increase the speed at which donations are given, it also increases the size of those
gifts. At the Salvation Army, the average mail-in gift for Hurricane Katrina ranged from $20
to $50, about one-fourth the average online gift of $185 (Gardner, 2005).

III. METHODOLOGY

A self-reporting, two-page survey was administered to a convenience sample of
undergraduate students. The students represent three diverse areas of the country: Texas,
Louisiana, and Ohio. The instrument was designed to address fourteen features commonly
associated with charitable contributions. The scale items were drawn from general
publications. The features include a person’s normal charitable generosity, the impact of the
recent natural disasters, whether there was an effect from user involvement, and the
impact of different communication channels. Respondents were also asked demographic
questions.

An individual’s distance from the epicenter (New Orleans) was calculated in
hundreds of miles. Thus, respondents in northern Ohio (slightly over a thousand miles
away) received a distance score of 10, while individuals in Baton Rouge (about 60+ miles
away) received a 1 for distance. Additional demographic information was requested but was
not used in this research. The data was explored with factor analysis, reliability tests, and then
multiple regressions. In total, approximately 250 survey instruments were distributed and
166 surveys were returned; 8 were rejected because of missing data. If the number of
questions answered was less than half the number of items for a section, the individual survey
was rejected. In total, 158 usable questionnaires were returned. The usable response rate was
63%, which is consistent with research designs of this nature.
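The distance coding described above can be sketched as a small helper function. This is an illustrative assumption: the paper states only that distance was expressed in hundreds of miles (northern Ohio coded 10, Baton Rouge coded 1), so the exact rounding rule and the function name below are hypothetical.

```python
# Hypothetical sketch of the distance coding described above: distance from
# New Orleans, in miles, expressed in hundreds of miles. The rounding rule
# and floor of 1 are assumptions; the paper does not state them.
def distance_code(miles):
    """Return the distance score in hundreds of miles, with a floor of 1."""
    return max(1, round(miles / 100))

# The paper's two anchor cases:
print(distance_code(1050))  # northern Ohio, slightly over a thousand miles -> 10
print(distance_code(65))    # Baton Rouge, about 60+ miles -> 1
```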

AGE of respondents:
    Number of items: Valid 154, Missing 4
    Mean 22.32 (13 respondents in category, 8%); Median 21.00 (14 in category, 8%);
    Mode 20 (29 in category, 18%)
    Number in each age range bin: 17: 2; 18-20: 69; 21-23: 43; 24-29: 24; 30-35: 7; 37-45: 3

As suggested by Doll and Torkzadeh (1989), validity was assessed using factor
analysis and correlation between the items. Factor analysis reduced the component items to
four factors explaining 63% of the variance. Reliability was measured by Cronbach’s alpha.
Alpha scores above 0.70 are considered satisfactory, and those above 0.80 are considered
excellent (Nunnally, 1978). The first factor, dealing with “charitable giving,” contained six
items explaining 33% of the variance; those six items returned an alpha of 0.815. An
additional factor dealt with “charitable motivations” and had a low alpha of 0.602, which
would normally be considered acceptable for exploratory purposes only. All items were
significant at the .01 level.
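As an illustration of the reliability statistic used above, Cronbach’s alpha for a set of items can be computed from the item variances and the variance of respondents’ total scores. The sketch below uses invented Likert-scale responses; the study’s actual items and data are not reproduced in the paper.

```python
# Cronbach's alpha: alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(totals))
# Illustrative only -- the responses below are hypothetical, not the study's data.
from statistics import variance

def cronbach_alpha(items):
    """items: one list of scores per survey item (columns of the data matrix)."""
    k = len(items)
    item_var_sum = sum(variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical 5-point Likert responses: three items, four respondents.
responses = [
    [5, 4, 3, 4],  # item 1
    [4, 4, 3, 5],  # item 2
    [5, 3, 2, 4],  # item 3
]
print(round(cronbach_alpha(responses), 3))  # -> 0.857
```

By the 0.70/0.80 thresholds cited from Nunnally (1978), this hypothetical scale would be rated satisfactory; the study’s first factor, at 0.815, would be rated excellent.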

IV. CONCLUSION

This study empirically tested the relationship between charitable giving and the
recent natural disasters. We were specifically interested in the August devastation of New
Orleans, Louisiana (considered a U.S. treasure) by Hurricane Katrina. To try to limit the halo
effect, the survey was conducted several months after the Katrina disaster. This research was
not meant to provide an overall relationship between philanthropy and natural disasters; it
investigated the attitudes of college students in relation to various factors. The initial
premise was that a relationship existed between an individual’s distance from the
epicenter of the disaster and his or her charitable intentions. This was not supported. The only
relationship supported by the distance criterion was people’s attitudes toward the
level of news coverage, and that relationship was inverse: the closer people were to the
event, the more they craved information. The survey also explored the contribution of
celebrity endorsements to giving. This study found no correlation between the intent to
give and individual celebrity endorsement. The level of contribution was
found to be significantly (p < .05) related to a person’s natural propensity to donate and the
emotional impact of the event. A person’s emotional impact was found to be correlated to
the level of news received from cable news networks such as Fox, CNN, and MSNBC.

A person’s willingness to volunteer was found to be significantly (p < .01) related to a
willingness to volunteer time in general and a willingness to donate funds.

Part of the problem with measuring the amount of charitable intention, as well as the
amount of actual giving, is the multitude of natural and man-made disasters. While each
charitable cause has its own virtues, the average person has only a limited amount of time and
money. If a new disaster arises, the conventional contributor may not have the resources to
donate again. The number of charitable causes in today’s society has led to an increasingly
competitive environment among charitable organizations. Everywhere you look, in every
conceivable environment, some group or organization is looking for a donation. This
oversaturated industry is calling on normal everyday people to support an overwhelming
number of causes. But how does one choose which charity to give to? As the competition
for donations increases, many groups use the media to show horror stories. This leads to
desensitization, the process of seeing so many shocking images that they no longer have an
effect on the intended recipient. The graphic images portrayed in the media (both paid and
unpaid) could very well corrupt the intended result. As images from the world’s disasters hit
the general public, a sense of “I have seen that before” and “what makes this different” from
any other disaster could tend to lower donations.

Another factor that could play into donations is the perception of the disaster. With
the recent increase in weather-related disasters, the public may not be inclined to donate as
much time or money to areas that are prone to these conditions. It also did not help that
predictions (from private and government officials) of a New Orleans area hurricane-related
disaster were expressed in biblical terms. These predictions have left some of the general
public with a feeling that this area deserves what it gets for not preparing ahead of time. In
addition, the U.S. government’s (local and national) failure in dealing with the aftermath of
the hurricane could also have affected donations. While the criticism of the Federal
government is high, and deservedly so, there were also foul-ups at the local level, as the
mayor failed to execute the disaster plan. Charges were exchanged. Some said the mayor did
not activate the city’s school buses for evacuation. Louisiana Governor Blanco claimed
that FEMA told them not to use the buses because they were not air conditioned and FEMA
feared heat stroke. The governor also claimed that FEMA promised suitable buses that never
arrived. In addition to the bus issue, an Amtrak train that offered to carry hundreds of
passengers out of the city was declined by the city, and so the train left without passengers.
Failure to preposition food and water at the Superdome is also considered a city government
blunder. These sorts of failures left a lot of Americans discouraged about preparation. Also,
many non-profit charitable organizations have argued that a disaster this big is a Federal
obligation. Unfortunately for Katrina and Rita victims, government priorities shift. With
issues like the “war on terror,” the level of deficit spending, and the call for re-prioritization
of Katrina relief by Congresspersons and Senators from unaffected areas, the Federal relief
will likely be inadequate. This series of events could have led to a “why should we
donate” attitude, especially if the government is not going to.

Publicity could also have been an issue. Because the New Orleans disaster was one of
the first of the recent devastating U.S. natural disasters, it received an obscene amount of
media coverage. This continual coverage increased awareness and may have contributed to
the amount of overall donations. The additional coverage, while intended to raise donations,
may have had the opposite effect. Many people give one-time contributions to charities.
Extending coverage will most likely not get people to give a second time. A byproduct of
continually publicizing the after-effects of the hurricane is that many allegations
(racism, misappropriation of funds, etc…) have turned people off from donating to any cause
related to the area.

Many causes do not do well because they are perceived to be long term; the
consensus is that no matter how much money you throw at the problem, it will never be
fixed. Other causes don’t receive money because they are deemed unworthy: the participants
involved are faulted because they did nothing to correct the problem. Still others are not
funded because of mismanagement. All of these issues could be applied to the New Orleans
disaster, thus leading to lower donations. This study specifically looked at proximity to New
Orleans and donation patterns geographically. The areas surveyed were at varying
geographic distances, but only one was inland. Although the authors posited that geographic
proximity would be predictive of giving, it was not a determinant. While no real correlation
was found, this could be because the authors failed to take into account the coastal
location of the majority of the survey respondents. Most coastal region residents have
experienced windstorms and accompanying floods. These coastal respondents have the
potential to identify with others who live in coastal regions. From Burke (1950) to
Strong & Cook (1996), persuasion scholars have written about the power of identification.
The images and the language of the mediated messages about Katrina in the news may have
allowed donors and refugees to share a common experience of hurricanes and
floods coming close to their own homes. This level of identification may be a factor that is
more important than the region of the country from which one comes.

V. LIMITATIONS AND FUTURE RESEARCH

The current study provides a preliminary investigation of the impact of natural
disasters on the charitable nature of individuals. The study suffered from several limitations.
The first was that, while the intent of the study was to gather information from diverse areas
of the country, in retrospect this did not happen. One of the problems is that areas like Texas
and Florida are routinely battered by natural disasters, thus causing a distortion in the data
analysis. Another obstacle was that in the time between when the Gulf Coast was
hammered by Katrina and when the survey was taken, New Orleans was constantly in the
news. Thus, while a delay was built into the study to reduce the halo effect, the constant
coverage probably tainted the results.

REFERENCES

Burke, K. A Rhetoric of Motives. New York: Prentice-Hall. 1950.
“Causes Supported by Minorities Differ by Age of Donor”. Chronicle of Philanthropy, 17,
October 14, 2004, 1, 12.
Doll, W.J and Torkzadeh, G. “A Discrepancy Model of End-User Computing Involvement”
Management Science, 35, (10), October 1989, 1151-1171.
Gardner, M. “Hurricane Katrina Changes the Pace and Face of Giving.” Christian Science
Monitor, 97, (209), September 21, 2005, 13-14.
“Giving Lessons.” U.S. News & World Report, 135, (20), December 8, 2003, 46-56.
“Giving ‘til It Hurts Not To.” New York Times, 154, (53087), B1, January 7, 2005.
“Holiday Donations Lag After Historic Giving Following Hurricanes.” Valley Morning Star,
December 19, 2005, A5, Col 1.
“‘Middle Rich’ Give Away Smaller Share of Income than Others, Study Finds.”
Chronicle of Philanthropy, 16, (14), April 29, 2004, 10.

Nunnally, J. C. Psychometric Theory. (2nd Edition), New York: McGraw-Hill 1978.
Strong, W.F. & Cook, J.A. Persuasion: Strategies for Social Influence. (4th Edition),
Dubuque: Kendall-Hunt. 1996.

IN THE PROCESS OF DECOLONIZATION:
THE RE─CREATION OF CULTURAL IDENTITY IN TAIWAN

Pei-Ling Lee, Bowling Green State University
peilee@bgsu.edu

ABSTRACT

Taiwan, a previous colony of Japan, has been ruled by different regimes over
hundreds of years. In the process of colonization, the word “Taiwan” and the sense of being
Taiwanese were absolutely banned by colonizers. Taiwanese people, however, never
abandoned their resistance to the authorities. This paper takes Gramsci’s concept of
hegemony to discuss how the Taiwanese resist an authoritarian government’s hegemony and
how they construct (or reconstruct) the cultural identity of being Taiwanese. By analyzing
two Taiwanese songs, the author of this essay discusses what type of local consciousness is
revealed in these songs and how it reinforces the development of the Taiwanese identity in
the process of decolonization.

I. INTRODUCTION

Taiwan, a previous colony of Japan, has been ruled by different regimes over
hundreds of years. In the process of colonization, the word “Taiwan” and the sense of being
Taiwanese were absolutely banned by colonizers. After the government of the Republic of
China (ROC) retreated to Taiwan in 1949, the oppression of the authorities provoked
Taiwanese people’s resistance. The ROC government worked on forcing people to accept the
Chinese identity, whether culturally or nationally, for over 40 years. This kind of
propaganda, however, awakened people’s local consciousness. The result of the 2000
presidential election in Taiwan can be treated as a political victory of the people’s resistance.
The newly formed Taiwanese identity has also replaced the old Chinese identity. However,
the new cultural identity causes a new round of conflicts and resistance in Taiwanese society
because of its mutual relationship with the new national identity.

The major argument of this paper is that people's cultural identity is developed from
daily rhetorical activities (e.g. songs); but, according to Gramsci’s claims of hegemony,
people’s attitudes, values, and beliefs are controlled, or at least influenced, by the state’s
mechanism. A government can purposely lead the construction of cultural identity in order to
develop a specific national identity or to achieve particular political goals. The main
purpose of this paper is to analyze how Taiwanese people have constructed their identities
and faced their history in the process of decolonization. By analyzing two Taiwanese songs,
the author of this essay will discuss what type of local consciousness is revealed in these
songs and how it reinforces the development of the Taiwanese identity. This paper will also
mention how and why the newly-formed Taiwanese identity has replaced the Chinese identity
and its influence afterwards. The discussion in this paper points out the circular activity of
diffusing hegemony. In a word, when a government makes mistakes of utilizing national
mechanism, people’s attitude, beliefs, values, identities, etc. may exceed the government’s
expectations. People’s resistance over an old ideology may be recognized. However, there is
always a new ideology replacing the old one and predominating over the subordinate class or
other minor groups in a society. Therefore, the activity of diffusing hegemony keeps

600
repeating, even though the meaning and type of ideology might be different, and would never
stop.

II. THEORETICAL FRAMEWORK

Gramsci, an Italian Marxist theorist, brought up the concept of hegemony to describe
and analyze “how modern capitalist societies were organized, or armed to be organized, in
the past and present” (Bocock, 1986, page 27). Many Marxists before Gramsci emphasized,
perhaps overemphasized, the function of the economy and treated Marxism as a mechanical
formula for analyzing the class struggle between the working class and the ruling class in a
capitalist society. Gramsci, however, overthrew some traditional Marxist thought; his work
on hegemony gave equal consideration to cultural, religious, philosophical, and moral factors
as well as economic and political issues. In his analysis of hegemony, Gramsci clarified two
important concepts: civil society and political society. According to him, “These two levels
[civil society and political society] correspond on the one hand to the function of ‘hegemony’
which the dominant group exercises throughout society and on the other hand to that of
‘direct domination’ or command exercised through the State and ‘juridical’ government”
(Gramsci, 1971, page 12).

Gramsci’s definition of political society refers to the mechanism of a state that is
directly controlled by dominant social groups and a political government. These dominant
groups deputize intellectuals to exercise hegemony. Civil society, according to Femia (1987),
is “the ideological superstructure, the institutions and technical instruments that create and
diffuse modes of thought” (page 26). The social or “private” organizations in civil society,
such as the church, school, labor union, family, and community, construct and shape
people’s attitudes, beliefs, values, and identities through daily activities. In addition, they
may serve to support the dominant groups in political society in maintaining an existing
social order or satisfying particular dominant interests. Hegemony therefore can be described
as “an ‘organizing principle’ or world view (or combination of such world-views), that is
diffused by agencies of ideological control and socialization into every area of daily life”
(Boggs, 1976, page 39). From this description, one can see the relationship between
hegemony and rhetoric. According to Makay (1980), rhetoric is “a way of thinking and
expressing feelings and ideas for purposes of generating knowledge, influencing values,
beliefs, attitudes, and actions (persuasion) and achieving mutual understanding” (page 185).
Therefore, the ideology that the superstructure purposely diffuses is constructed and then
reinforced by people’s daily rhetorical activities.

III. MUSIC IN TAIWAN: POLITICS AND IDENTITY

The power and influence of music has been discussed for centuries. In his book The
Republic, Plato (n.d.) indicated that “any musical innovation is full of danger to the whole
state, and ought to be prohibited” (page 135). In Chinese history, philosophers, scholars, and
politicians also discussed the relationship between music and politics. Ancient Chinese
believed that music was an instrument for an imperial government to educate people, to
influence social customs, and to reform abuses. For instance, one Confucian book stated that
“The music in the piping times of peace is harmonious and joyful, demonstrating the
harmony of politics; the music in troubled times is sad and angry, demonstrating the turmoil
of politics; the music in a subjugated nation is grieved, demonstrating the people’s
predicament. Music and politics have a direct relationship” (Li Ji Zhu Shu, 1993, page 663).
Thus, it can be seen that both Western and Chinese philosophers noticed the power of music
in a society and treated music as a tool serving political purposes. This idea can be linked to
Gramsci’s concept of hegemony. According to Gitlin (1980), “Hegemony is a ruling class’s
(or alliance’s) domination of subordinate classes and groups through the elaboration and
penetration of ideology (ideas and assumptions) into their common sense and every day
practice” (page 253).

One function of music is to gather collective consensus. Music also plays an intimate
role in shaping and developing people’s identity. As Frith (1997) states,
“identity is…an experiential process which is most vividly grasped as music. Music seems to
be a key to identity because it offers, so intensely, a sense of both self and others; of the
subjective in the collective” (page 110). On one hand, music is easily utilized by
politicians or propagandists to achieve political goals. On the other hand, the words, lyrics,
and repetitions of songs can reflect people’s feelings, emotions, or even resistance to the
authorities.

When Taiwan was ruled under martial law, the ruling party (KMT) of the ROC set a
strict standard to rule and prohibit publications and music. In one of his articles, Chiang
Kai-shek (ROC president from 1947 to 1975) revealed the necessity for the government to
“train national righteousness, encourage fighting spirit… concentrate on the music and song
in order to correct the decadent music and excessive song that waste” (Chiang, 1953, page
73). The government therefore diffused a series of patriotic songs, including Chinese folk
songs, military songs, anti-Japanese songs, and anti-communism songs.

All of the patriotic songs under martial law represent a sense of “Great China,” the
memory of the Chinese mainland, or enmity toward communism. Also, all of them were
written and sung in Mandarin. On one hand, by diffusing these patriotic songs, the KMT
government efficiently controlled the national ideology and created (or recreated) Taiwanese
people’s identities. By admiring Chinese culture, history, and places, the government wanted
people to identify themselves as Chinese, culturally and nationally. On the other hand,
however, this kind of patriotic song suppressed people’s consciousness of being Taiwanese.
The word “Taiwan” was a taboo and never appeared in any patriotic song during the period
of martial law.

In the late 1970s, when the government’s control loosened, Taiwanese people
finally had the chance to express the anger that had been repressed for hundreds of years.
The resistance of Taiwanese people can be separated into two parts: politically, many people
in Taiwan organized a series of demonstrations against the authoritarian regime and for
Taiwan’s independence; culturally, Taiwanese people’s resistance against the government’s
ideology of Chinese identity reinforced the awareness of Taiwanese local consciousness. In
other words, more and more Taiwanese have re-learned the history of Taiwan that had been
purposely ignored by different colonizers, emphasized the education of dialects, and changed
their cultural identity from Chinese to Taiwanese. This political and cultural change is
reflected in the results of the 2000 and 2004 presidential elections in Taiwan as well as in
modern Taiwanese literature and music. In this paper, I select two Taiwanese songs that are
usually sung at events advocating Taiwan’s independence in order to discuss the change in
people’s cultural identity and the development of local consciousness in Taiwan.

IV. CONCLUSION

In this paper, “Mother’s name is Taiwan” and “Do Not Annoy Taiwan” are chosen
to analyze the re-creation of cultural identity in Taiwan. These two songs share several
common characteristics. First, both of them encourage Taiwanese people to recognize that their
homeland is Taiwan rather than other places. The first song indicates directly that mother’s
(motherland’s) name is Taiwan, and the second song asks people to love Taiwan. “Mother’s
name is Taiwan” starts with describing Taiwan’s geographical features; “Do Not Annoy
Taiwan” also mentions Taiwan’s geography, but it pays more attention to Taiwan’s plentiful
produce. Taiwan’s other name is Formosa, which means “beautiful island.” Taiwan is
famous for its abundance of landforms. This little island is surrounded by sea and consists of
mountains, hills, plateaus, and coasts. All types of climate can be found in Taiwan; hence, its
produce is ample. Both Taiwanese songs indicate this characteristic and imply
that Taiwanese people should be proud of their land. In addition, they both represent a painful
memory of being colonized. Historically, under Qing dynasty rule, Taiwan was a borderland
of China and never received any notice from the imperial government; under Japanese
rule, Taiwan was a colony and lacked independence; and under KMT government rule,
Taiwan was a base for the ruling party to defeat the Chinese Communist Party and recover
the Chinese mainland. The word “Taiwan” and its meaning have been prohibited, ignored,
or even distorted. These two Taiwanese songs describe this
situation; “Mother’s name is Taiwan” uses the words “dumb person” and “no voice” to
describe people’s silence when they were colonized, and then it raises several questions
asking Taiwanese people why they would feel negative about recognizing that they are
Taiwanese and that their motherland is Taiwan. “Do Not Annoy Taiwan” describes a situation
that being Taiwanese was a shame when Taiwan was colonized and consequently asks people
to abandon the old ideology.

In terms of cultural identity, both songs present positive features of Taiwanese culture
and adopt historical elements to reinforce Taiwanese people's local consciousness. In
“Mother's name is Taiwan,” the lyrics describe Taiwanese culture with direct, positive
words, such as intuitive truth, justice, and spring. Both songs reveal a fortitudinous
Taiwanese spirit. “Do Not Annoy Taiwan” presents it directly; at the end of the
song, it encourages people to face a difficult life bravely and insists that Taiwanese people are no
worse than others. By contrast, the lyrics of “Mother's name is Taiwan” use “the sweet
potato son” to represent Taiwanese people. The sweet potato is a plant native to Taiwan
and is easy to grow. From the 1940s to the 1950s, many Taiwanese were poor and had no
money to buy rice and food. They therefore dug sweet potatoes from their farms and ate them
as a staple food every day. In the late 1970s, when Taiwanese organized a series of political
activities to resist the authorities, they started to call themselves “sweet potato sons.” In
“Mother's name is Taiwan,” the phrase “sweet potato sons” achieves at least two
goals: first, it reinforces people's local consciousness; and second, it recalls that period of
poverty and consequently asks people not to repeat the same mistakes. “Do
Not Annoy Taiwan” also contains historical elements to emphasize people's cultural identity
and local consciousness. However, this song chooses a positive description; the purpose of
mentioning “ancestor” and “generation” is to tell Taiwanese people that their families
moved to Taiwan hundreds of years ago and that their children and descendants will continue to
live on this land. This kind of depiction asks people to review their family histories and not to
forget their identities as Taiwanese.

Indeed, these Taiwanese songs reflect Taiwanese people's desire to
construct their own cultural identity in the process of decolonization. By
constructing a Taiwanese cultural identity, Taiwanese people also express their anger at, and
resistance to, the KMT government and the other colonizers throughout Taiwanese history.
The way people develop a Taiwanese identity is to depict their love for the land of Taiwan. On
one hand, this kind of linkage can reinforce Taiwanese people's local consciousness; on the
other hand, the link between land and identity explains the failure of the old ideology that
the KMT government emphasized and diffused for over 40 years. Because of Taiwan's
geography and special historical experience, the relationship between Taiwan and the Chinese
mainland is actually not as strong as the KMT government thought. While the authorities
devoted great attention to changing people's Japanese identity, Taiwanese people were in fact
confused in the process of reconstructing a new cultural identity because they lacked
sympathy for, and common memory of, the Chinese mainland. Besides, the more the authoritarian
government oppressed them, the stronger the desire Taiwanese people had to
develop a Taiwanese cultural identity and local consciousness.

However, this kind of development is not entirely positive and healthy. On one hand,
Taiwanese people finally have the chance to face Taiwanese history and language squarely
and to show their love for the land where they live and grow up. On the other hand, the
development of the Taiwanese identity and local consciousness has produced a newly
formed ideology. The two songs chosen in this paper are written and sung in so-called
“Taiwanese.” The meaning of “Taiwanese” here is actually a popular dialect called “Min
Nan.” Taiwanese society consists of different sub-cultural groups. Ethnically, people in
Taiwan can be divided into aborigines and Han people, with Han people the majority. Han
people, depending on the regions their ancestors originally came from and the time they
immigrated to Taiwan, can be divided into several sub-cultural groups: Holo,
Hakka, and non-native groups. Among these sub-cultural groups of Han, Holo is the majority
(over 70%), and the dialect they speak is Min Nan. According to Shih (1997), the broad
definition of being Taiwanese means resident Taiwanese; in other words,
people who identify themselves as Taiwanese can be called Taiwanese. However, the current
Taiwanese identity, culturally, reflects only the image of the Holo group, and the language
“Taiwanese” is the dialect of this group. Even though this new Taiwanese identity has
successfully resisted the old ideology of the Chinese identity, it represents a new ideology
and predominates over other sub-cultural groups in Taiwan.

In short, from analyzing these two Taiwanese songs, one can see how
Taiwanese people express their resistance to previous colonizers and how they construct (or
reconstruct) the Taiwanese cultural identity. Because of Taiwan's history and the political
resistance, the construction of the Taiwanese cultural identity and the development of
Taiwanese people's local consciousness mutually influence each other in the process of
decolonization. The 2000 presidential election can be treated as a victory against the old
authoritarian regime and its hegemony. As the new Taiwanese identity replaces the old
Chinese identity and is accepted by more and more people in today's Taiwan, it actually
starts a new round of diffusing hegemony. In the processes of colonization and decolonization,
the constructions of Taiwanese people's cultural identities were led by political forces and
served political purposes. When the old identity was overthrown by Taiwanese people's
resistance, it should have given the society a good opportunity to review the history they were
forced to forget and to encourage integration among different sub-cultural groups in Taiwan.
However, the image of the majority group continues the route of the old ideology because it
cannot avoid manipulation by politicians. This kind of political exploitation has given the
newly formed Taiwanese identity a hegemonic meaning. What one can expect is
resistance from the other, minor sub-cultural groups in Taiwan, and it has already begun. Today,
when Taiwanese people say loudly and proudly that they are Taiwanese, they should
carefully consider one question: what does “Taiwanese” mean?

REFERENCES

Bocock, Robert. Hegemony. New York: Tavistock Publications and Ellis Horwood Limited,
1986.
Boggs, Carl. Gramsci's Marxism. London: Pluto Press.
Chiang, Kai-shek. Min Sheng Zhu Yi Yu Le Liang Pian Bu Shu [The supplements for the
chapters of education and amusements in the doctrine of People's livelihood]. Taipei,
Taiwan: Central Wen Wu Publication, 1953.
Femia, Joseph V. Gramsci's Political Thought. New York: Oxford University Press, 1987.
Frith, Simon. “Music and Identity.” In Hall, Stuart, and Paul du Gay, eds., Questions of
Cultural Identity. London: Sage, 1997, 108-126.
Gitlin, Todd. The Whole World Is Watching: Mass Media in the Making and Unmaking of
the New Left. Berkeley, CA: University of California Press, 1980.
Gramsci, Antonio. Selections from the Prison Notebooks of Antonio Gramsci. Translated by
Quintin Hoare and Geoffrey N. Smith. New York: International Publishers, 1971.
Li Ji Zhu Shu [The explanations of Li Ji]. Taipei, Taiwan: Yi Wen Publications, 1993.
Makay, John J. “Psychotherapy as a Rhetoric for Secular Grace.” Central States Speech
Journal, 31, 1980, 184-196.
Plato. The Republic, Book IV. Translated by Benjamin Jowett. New York: Modern Library,
n.d.
Shih, C. F. “Taiwan Di Zu Qun Zheng Zhi” [The politics of sub-cultural groups in Taiwan].
Jian Shou Lun Tan Zhuan Kan [The journal of professors forum], 4, 1997, 73-108.

CHAPTER 22

PUBLIC RELATIONS
AND
CORPORATE COMMUNICATIONS

NEWSPAPER ENDORSEMENTS AND ELECTION RESULT HEADLINES IN
THE 2004 U.S. PRESIDENTIAL ELECTION

John Mark King, East Tennessee State University
johnking@etsu.edu

Adriane Dishner Flanary, East Tennessee State University
flanarya@etsu.edu

ABSTRACT

A content analysis of election headlines in 36 daily newspapers covering the results of
the 2004 U.S. presidential election revealed that headlines more frequently showed a
conservative trend at a time when the election result itself was not conclusive. Newspapers
that endorsed President George W. Bush, the Republican candidate, more frequently used
headlines leaning toward him as the winner than did newspapers that endorsed John Kerry,
the Democratic candidate. Newspapers in the Northeast, South, Midwest and West used
headlines that favored Bush.

I. INTRODUCTION

On election night of the 2004 United States presidential election, newspaper headline
writers faced an electoral count that left the final decision up in the air as many of
them went to press that night and into the next day. As in the 2000 United States presidential
election, the final decision about who would win the election, Republican incumbent
president George W. Bush of Texas or Democratic candidate Senator John Kerry of
Massachusetts, rested on the voters of one state, this time Ohio. In what was widely
regarded as another highly polarized election year, and amid the ever-present cries of liberal
media bias among some pundits and conservative politicians, research about presidential
endorsements and election result headlines seemed particularly appropriate. If a liberal
media trend existed, it might have been manifested, among papers that endorsed Kerry, in
election result headlines that leaned toward the Democratic candidate winning, declared
Kerry the winner, or called the election too close to call while the outcome was unclear.
Conversely, a conservative media trend might have shown up in election result
headlines among papers that endorsed Bush that leaned toward a Bush win or declared Bush
the winner.

II. LITERATURE REVIEW

Research on newspaper endorsements in elections has primarily focused on
endorsement patterns and their effects on voting decisions. St. Dizier (1986) found that
newspaper publishers, who commonly have some influence over editorial board political
decisions, support conservative candidates more often than newspaper editors do.
Historically, newspaper endorsements have tended to support the Republican candidate because of
labor relations in the media industry (Devereaux, 1999). However, editorial decision makers
are not necessarily members of the endorsee's political party; instead, they tend to be long-
time acquaintances (Ragland, 1987). Editors reported satisfaction with the newspaper
endorsement process (St. Dizier, 1986). Editorial decision makers report that they believe
newspaper endorsements influence voter decisions (Ragland, 1987). Editorial views are
often affected by popular support of candidates (Schaefer, 1997). Research shows that
community leaders and newspaper editorial endorsements influence local and state election
outcomes (Lariscy, Tinkham, Edwards and Jones, 2004; Fedler, Smith and Counts, 1985;
McLenegham, 1983).

Voter demographics are locally and nationally consistent. Independent voters have
higher education and family income levels and tend to base voting decisions more on
newspaper editorials and endorsements (Smith, 1985; Hurd and Singletary, 1984).
Endorsements from group-owned and independently-owned newspapers are also consistent
with statewide averages for local and national elections (Rystrom, 1987). However, voters
rely more heavily on newspapers in local rather than national elections (Fedler, Smith and
Counts, 1985).

Research on the 1948 presidential election concluded that newspaper endorsements led
to an increase in voter turnout but affected voting only to a small degree (Counts, 1989). A
comparison of the 1948 and 1960 presidential elections found that editorial newspaper
endorsements and presidential election outcomes supported previous findings that newspaper
endorsements have little influence on voting (Counts, 1989). Examining the 1976, 1980 and
1984 presidential elections, Busterna and Hansen (1990) found that newspaper endorsements
heavily favored Republican candidates. Independent newspapers were more likely than chain
newspapers to favor the Republican candidate (Gaziano, 1989). Emig (1991) concluded that
past newspaper endorsements were predictors of 1988 presidential endorsements.

In the 1988 Bush versus Dukakis presidential election, newspapers appealed to liberal and
conservative positions that were not predicted by the newspapers' endorsements (Boeyink,
1992/1993). Newspaper candidate endorsements were independent of the coverage of
candidates (Dalton, Beck, Huckfeldt and Koetzle, 1998). Democratic candidates received more
favorable coverage early in the campaign, while Republicans
received more favorable coverage late in the campaign (Stempel and Windhauser, 1989).

Fedler, Counts and Stephens (1982) concluded that in the 1980 presidential election,
voters were more likely to vote for the candidate endorsed by the newspaper published in the
city in which they lived. In that election, St. Dizier (1985) determined that endorsements had
a strong effect on voters' decisions, which tended to remain consistent throughout the
election. There was a shift of newspaper endorsements from Republican Party to
Democratic Party candidates during the 1964, 1992 and 1996 presidential elections.
Devereaux (1999) concluded that the shift was due to two factors: first, organizational and
business support was apparent in the media; second, Democratic Party candidates developed
close relationships with media decision makers.

In the 2004 election, Kerry won the newspaper endorsement race: 213
newspapers endorsed him, with a total of 20,882,889 in daily circulation, while 205 endorsed
Bush, with a total of 15,743,799 in daily circulation (Mitchell, 2004). Bush won the election
itself when final vote totals were tallied in Ohio.

III. HYPOTHESES

H1: Newspapers that endorsed Kerry will more frequently use election result
headlines leaning toward Kerry as the winner or declaring Kerry the winner of the 2004 U.S.
presidential election than will newspapers that endorsed Bush.

H2: Newspapers that endorsed Bush will more frequently use election result headlines
leaning toward Bush as the winner or declaring Bush the winner of the 2004 U.S. presidential
election than will newspapers that endorsed Kerry.
H3: Newspapers that endorsed Kerry will more frequently use election result
headlines concluding the result is too close to call in the 2004 U.S. presidential election than
will newspapers that endorsed Bush.
H4: Newspapers published in the northeastern U.S. region will more frequently use
election result headlines favoring Kerry rather than Bush.
H5: Newspapers published in the western U.S. region will more frequently use
election result headlines favoring Kerry rather than Bush.
H6: Newspapers published in the southeast U.S. region will more frequently use
election result headlines favoring Bush rather than Kerry.
H7: Newspapers published in the mid-western U.S. region will more frequently use
election result headlines favoring Bush rather than Kerry.

IV. METHODOLOGY

Researchers conducted a content analysis of U.S. newspaper articles, using the Lexis-
Nexis database, published Nov. 3, 2004, the day after the 2004 United States presidential
election. The unit of analysis was any headline published concerning the pending outcome of
the 2004 presidential election. Independent variables were which candidate was endorsed
(Bush or Kerry) and the region in which the paper was published (Northeast, Southeast, West
or Midwest). The dependent variable was headline outcome (leaning toward Bush win,
Bush wins, leaning toward Kerry win, Kerry wins, too close to call/winner uncertain). A
total of 36 metro daily newspapers from across the nation were included in the study; 19
newspapers endorsed Bush, and 17 endorsed Kerry. Two coders achieved 100 percent
agreement in an intercoder reliability test on all variables except headline outcome, on which
they achieved 90 percent agreement after two rounds.
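The intercoder agreement figures above can be computed in a few lines of code. The paper does not state its exact formula, so the sketch below assumes simple percent agreement (indices such as Scott's pi or Cohen's kappa would additionally correct for chance agreement); the coder data shown are hypothetical, not the study's actual codes.

```python
# Simple percent agreement between two coders: the share of coded units on
# which both coders assigned the same category. Coder data are illustrative.

def percent_agreement(coder_a, coder_b):
    """Return the proportion of units on which two coders agree."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must rate the same set of units")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical headline-outcome codes for ten headlines:
# L = leaning Bush, B = Bush wins, T = too close, K = leaning Kerry
coder_1 = ["L", "L", "B", "T", "T", "K", "L", "B", "T", "L"]
coder_2 = ["L", "L", "B", "T", "B", "K", "L", "B", "T", "L"]

print(percent_agreement(coder_1, coder_2))  # 9 of 10 codes match -> 0.9
```

A figure like the 90 percent reported for headline outcome would correspond to the two coders matching on 9 of every 10 headlines before reconciliation rounds.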

V. RESULTS

Table I displays the results of H1, H2 and H3.

H1 was not supported. Newspapers that endorsed John Kerry did not more
frequently use headlines that leaned toward Kerry or headlines that declared him the winner
of the 2004 United States presidential election than did newspapers that endorsed George W.
Bush. In fact, no newspapers that endorsed Kerry used headlines that leaned toward Kerry as
the winner or headlines that declared Kerry the winner in election night coverage; indeed,
no newspapers at all declared Kerry the winner.
H2 had mixed support. Newspapers that endorsed Bush did, as predicted, more
frequently use headlines leaning toward Bush as the winner (60.5 percent) than did
newspapers that endorsed Kerry (35.5 percent). However, newspapers that endorsed Bush
less frequently used headlines that declared Bush the winner (14 percent) than did
newspapers that endorsed Kerry (32.3 percent). These differences were statistically
significant at less than .05.
H3 was supported. Newspapers that endorsed Kerry more frequently used headlines
that concluded the outcome was too close to call (32.3 percent) than did newspapers that
endorsed Bush (18.6 percent). This difference was statistically significant at less than .05.
Overall, 50 percent of the headlines leaned toward Bush as the winner; 21.6 percent
declared Bush the winner; 24.3 percent characterized the election outcome as too close to
call; 4.1 percent leaned toward Kerry as the winner and zero declared Kerry the winner.

Table I Candidate Endorsed By Headline Tone
Candidate    Leaning      Bush        Too Close   Leaning
Endorsed     to Bush      Wins        To Call     To Kerry
Bush         26/ 60.5%    6/ 14.0%    8/ 18.6%    3/ 7.0%
Kerry        11/ 35.5%    10/ 32.3%   10/ 32.3%   0/ 0%
Totals       37/ 50.0%    16/ 21.6%   18/ 24.3%   3/ 4.1%
Note: N=74; Chi-Square= 8.58; df= 3; p= <.05. No newspapers declared Kerry the winner.
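The chi-square statistic reported for Table I can be recomputed directly from the cell counts. The sketch below is a standard Pearson chi-square in plain Python, not the authors' original software; note that the Kerry "too close to call" cell must be 10 (32.3 percent of the 31 Kerry-endorsing headlines, consistent with the column total of 18 and N=74) for the reported statistic of 8.58 to reproduce.

```python
# Pure-Python recomputation of the Pearson chi-square for Table I.
# Counts follow Table I, with the Kerry "too close" cell at 10 (the value
# implied by the reported percentages and column totals).

observed = {
    "Bush":  [26, 6, 8, 3],   # leaning Bush, Bush wins, too close, leaning Kerry
    "Kerry": [11, 10, 10, 0],
}

rows = list(observed.values())
row_totals = [sum(row) for row in rows]        # 43 Bush, 31 Kerry
col_totals = [sum(col) for col in zip(*rows)]  # 37, 16, 18, 3
n = sum(row_totals)                            # 74 headlines in all

chi_square = 0.0
for row, row_total in zip(rows, row_totals):
    for obs, col_total in zip(row, col_totals):
        expected = row_total * col_total / n   # expected count under independence
        chi_square += (obs - expected) ** 2 / expected

print(round(chi_square, 2))  # 8.58, matching the statistic reported in Table I
```

With 2 rows and 4 columns, the degrees of freedom are (2 - 1)(4 - 1) = 3, matching the df = 3 in the table note.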

H4, H5, H6 and H7 were designed to test regional influences on headline tone.
Results are seen in Table II. Categories were collapsed so that headlines favoring Bush
included headlines leaning toward Bush as the winner and headlines declaring Bush the
winner. Headlines favoring Kerry included headlines leaning toward Kerry as the winner and
headlines concluding the outcome of the election was too close to call.

H4 was not supported. Newspapers published in the northeast U.S. region did not
more frequently use election result headlines favoring Kerry rather than Bush, but just the
opposite: 60 percent of the headlines favored Bush, while 40 percent favored Kerry. H5
was not supported. Newspapers published in the western U.S. region did not more frequently
favor Kerry; they favored Bush by 79.2 percent. H6 was supported at less than the .05
level. Newspapers published in the southeastern U.S. region favored Bush (60 percent) over
Kerry (40 percent). H7 was also supported at less than the .05 level. Bush received
overwhelmingly favorable headlines in newspapers published in the mid-western U.S. region,
where 100 percent of the headlines favored him.

Overall, 71.6 percent of all headlines analyzed favored Bush, the incumbent president
and Republican candidate, while 28.4 percent favored Kerry, the Democratic candidate.

Table II Newspaper Region By Headline Tone

Region Where Newspaper    Headlines        Headlines
was Published             Favoring Bush    Favoring Kerry
Northeast                 18/ 60.0%        12/ 40.0%
Southeast                 6/ 60.0%         4/ 40.0%
West                      19/ 79.2%        5/ 20.8%
Midwest                   10/ 100%         0/ 0%
Totals                    53/ 71.6%        21/ 28.4%
Note: N=74; Chi-Square= 7.29; df= 3; p= <.05
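The category collapsing used for H4 through H7 can be sketched as a simple lookup. The code below is illustrative only: the per-headline codes shown are hypothetical, constructed so the counts match the Midwest row of Table II (10 favoring Bush, 0 favoring Kerry), since the study's raw coding sheets are not reproduced here.

```python
# Fold the four headline-outcome codes into the two-way Bush/Kerry split used
# for the regional hypotheses. The collapsing rule follows the text above.

COLLAPSE = {
    "leaning Bush": "Bush",
    "Bush wins": "Bush",
    "leaning Kerry": "Kerry",
    "too close to call": "Kerry",  # treated as (weakly) favoring Kerry
}

def collapse_counts(headline_codes):
    """Count headlines favoring each candidate after collapsing categories."""
    totals = {"Bush": 0, "Kerry": 0}
    for code in headline_codes:
        totals[COLLAPSE[code]] += 1
    return totals

# Hypothetical Midwest codes; Table II reports 10 favoring Bush, 0 Kerry there.
midwest = ["leaning Bush"] * 7 + ["Bush wins"] * 3
print(collapse_counts(midwest))  # {'Bush': 10, 'Kerry': 0}
```

Applying the same fold to each region's headlines yields the two-column counts in Table II, on which the reported chi-square of 7.29 (df = 3) is then computed.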

VI. CONCLUSION

Results pointed toward a conservative trend more than a liberal trend. Newspapers
that endorsed Kerry did not more frequently use headlines that leaned toward him as the
winner, and no newspapers declared Kerry the winner. On the other hand, newspapers that
endorsed Bush did more frequently lean toward him as the winner, but they did not more
frequently declare Bush the winner. Newspapers that endorsed Kerry more frequently
declared Bush the winner than did newspapers that endorsed Bush. This does not indicate a
liberal trend.

The only suggestion of any possible liberal trend was that newspapers endorsing
Kerry more frequently concluded the outcome was too close to call than did newspapers
endorsing Bush. This is weak support for Kerry, since these newspapers did not declare
Kerry the winner or lean toward him.

However, Bush won the headline race with ease, with 50 percent of the headlines in
the sample leaning toward him as the winner and 21.6 percent declaring Bush the winner.
Only 4.1 percent leaned toward Kerry as the winner, and none declared Kerry the winner. It
might be assumed that headlines characterizing the election as too close to call (24.3 percent)
could have been seen as favoring Kerry, but even including this weak support, only 28.4
percent of the headlines favored Kerry.

Regional influence, irrespective of newspaper endorsements, also leaned heavily
toward Bush and may also suggest a conservative trend. In the Midwest, 100 percent of the
headlines favored Bush; in the South, 60 percent favored Bush. Even in the Northeast, an
area of the nation where Kerry captured the majority of the electoral votes, 60 percent of the
headlines favored Bush. Even though Kerry captured the electoral votes in the Pacific coast
states, in the West overall, 79.2 percent of headlines favored Bush.

At a time when the outcome of the 2004 presidential election was inconclusive,
newspaper headlines appeared to trend toward the conservative candidate. An alternative
explanation is that headline writers may have based their headline decisions on their own
conclusions about the probable election outcome. Some afternoon newspapers, especially
those published in the Pacific time zone, may have had more of an indication of the outcome
of the election before they went to press than did morning newspapers.

Future research could examine newspaper endorsements and presidential elections by
analyzing entire stories rather than just headlines, for more depth. Qualitative research
might also more directly determine the impact of newspaper endorsement decisions on the
tone of headlines in close elections.

REFERENCES

Boeyink, D. “Analyzing Newspaper Editorials: Are the Arguments Consistent?” Newspaper
Research Journal, 13/14, 1992/1993, 28-39.
Busterna, J., and Hansen, K. “Presidential Endorsement Patterns by Chain-Owned Papers,
1976-84.” Journalism Quarterly, 67, 1990, 286-294.
Counts, T. “Editorial Influence on GOP Vote in 1948 Presidential Election.” Journalism
Quarterly, 66, 1989, 177-181.
Counts, T. “Effect of Endorsements on Presidential Vote.” Journalism Quarterly, 62, 1985,
644-647.
Dalton, R., Beck, P., Huckfeldt, R., and Koetzle, W. “A Test of Media-Centered Agenda
Setting: Newspaper Content and Public Interests in a Presidential Election.” Political
Communication, 15, 1998, 463-481.
Devereaux, E. “Newspapers, Organized Interests and Party Competition in the 1964
Election.” Media History, 5, 1999, 33-64.
Emig, A. “Partisanship in Editorial Endorsements.” Newspaper Research Journal, 12, 1991,
108-119.
Fedler, F., Counts, T., and Stephens, L. “Newspaper Endorsements and Voter Behavior in the
1980 Presidential Election.” Newspaper Research Journal, 4, 1982, 3-11.
Gaziano, C. “Chain Newspaper Homogeneity and Presidential Endorsements, 1972-1988.”
Journalism Quarterly, 66, 1989, 836-845.

CHAPTER 23

QUALITY, PRODUCTIVITY
AND
MANUFACTURING

THE EFFECT OF AMBIGUOUS UNDERSTANDING OF PROBLEM
AND INSTRUCTIONS ON SERVICE QUALITY AND PRODUCTIVITY

Palaniappan Thiagarajan, Jackson State University
Palaniappan.thiagarajan@jsums.edu

Yegammai Thiagarajan, Esq.
mthiagarajan@pswslaw.com

Sheila C. Porterfield, Jackson State University
Sheila.c.porterfield@jsums.edu

ABSTRACT

Many services require the service provider and the service consumer to work together to
create the service product. As co-producers, the service provider must understand the
consumer's problem, and the consumer must understand the procedures and instructions of
service production, in order to cooperate fully in producing the service product. This article
proposes to examine the effects on service quality and productivity of the provider's
ambiguous understanding of the consumer's problem and the consumer's ambiguous
understanding of procedures and instructions.

I. INTRODUCTION

Leonard L. Berry (1980) defines a service as “an act or performance offered by one
party to another. Although the process may be tied to a physical product, the performance is
transitory, often intangible in nature, and does not normally result in ownership of any of the
factors of production.” Lovelock and Wirtz (2004) modify the definition: “a service is an
economic activity that creates value and provides benefits for customers at specific times and
places by bringing about a desired change in, or on behalf of, the recipient of the service.”
Basically, these definitions differentiate products from services. Products provide benefits
to consumers by endowing them with ownership of devices or physical objects, whereas
services provide benefits through action or performance.

Manufacturing processes and marketing concepts were developed in the past based on
the manufacturing sector. Therefore, those processes and concepts cannot be transferred
directly from the manufacturing sector to the service sector. Lovelock and
Wirtz (2004) list nine basic differences between products and services, and they further
caution against generalizing these differences to all services. One of these differences, that
customers may be involved in the production process, is important to this paper.

Producing a service product often requires customers to participate. In some
services, like automobile repair, the entire production might be done by the service provider.
Kotler (1997) states that customer involvement in helping to create a service product may
take two forms: self-service, like using ATMs, and cooperation with the service provider, like
getting a haircut in a barber shop. This means that both the service provider and the service
consumer are involved in the production of the service product.

Therefore, to produce a service product that solves the consumer's problem, the
service provider must understand the consumer's problem, and the consumer must
understand the service provider's instructions in order to cooperate fully in the production.
An ambiguous understanding by either of the two will result in a substandard service product
that does not completely solve the consumer's problem. The dearth of research in this area
requires immediate attention; hence, this paper.

II. LITERATURE REVIEW

Several researchers have studied service quality and productivity. Mary Jo Bitner,
Bernard H. Booms, and Mary Stanfield Tetreault (1990) studied critical incidents in airline,
hotel and restaurant businesses, focusing on the experience of service consumers. Following
this, Susan M. Keaveney (1995) studied critical incidents that led service consumers to
switch to competitors. In 1994, Mary Jo Bitner, Bernard H. Booms, and Lois A. Mohr
studied critical service incidents, this time from the employee's, or service provider's, point
of view. Jagdip Singh (2000) studied service productivity and quality; Singh's work, too,
focused only on the service provider.

III. CRITICAL SERVICE ENCOUNTERS – CONSUMERS POINT OF VIEW

Mary Jo Bitner, Bernard H. Booms, and Mary Stanfield Tetreault (1990) studied
critical incidents in airline, hotel and restaurant businesses. In this study of critical incidents,
a sample of customers was asked to recall a service encounter that was particularly
satisfying or dissatisfying. The respondents were asked to answer the following questions:
(1) When did the incident happen? (2) What specific circumstances led up to the situation?
(3) Exactly what did the employee say or do? (4) What resulted that made the consumer feel
the interaction was satisfying or dissatisfying? A total of 699 incidents were recorded; fifty
percent were satisfying incidents and the rest dissatisfying. The incidents were then
categorized into three groups: (1) employee response to service failures, (2) employee
response to requests for customized service, and (3) unprompted and unsolicited employee
actions. The results show that when a service failed, customers were about twice as likely to
be dissatisfied as satisfied; when employees met the consumer's need in a customized way,
they were about twice as likely to be satisfied; and when unprompted or unsolicited employee
actions occurred, satisfaction and dissatisfaction were evenly split.

IV. THE CONSUMERS’ PERSPECTIVE

When a service provider satisfactorily resolves a negative critical incident, there is
great potential for enhancing consumer loyalty. By the same token, if the incident is not
resolved, consumer loyalty will vanish and the customer will switch to competitors. Susan M.
Keaveney (1995) studied 838 critical incidents that led customers to switch to competitors.
The reasons ascribed for switching were: (1) core service failures, 44%; (2) dissatisfactory
encounters, 34%; (3) unfair pricing, 30%; (4) inconvenience, 21%; and (5) poor response to
service failures, 17%. Many respondents described a decision to switch to competitors as
resulting from interrelated incidents, such as a service failure followed by an unsatisfactory
response to resolving the problem.

V. CRITICAL SERVICE ENCOUNTERS – EMPLOYEES POINT OF VIEW

Mary Jo Bitner, Bernard H. Booms, and Lois A. Mohr (1994) studied the service
providers' viewpoint of critical service encounters. In service settings, satisfaction is
often influenced by interactions between the service provider and the service consumer. This
study examined 774 critical service incidents reported by hotel, restaurant, and airline
employees. The research found that many service providers do have a true customer
orientation and do identify with and understand customer needs. They have respect for
customers and a desire to deliver excellent service. According to them, the inability to
provide excellent service stems from inadequate or poorly designed systems, poor or
nonexistent recovery strategies, or lack of knowledge. Further, this study found from the
employees that customers can be the source of their own dissatisfaction through
inappropriate behavior or unreasonable demands.

VI. PRODUCTIVITY AND SERVICE QUALITY

Jagdip Singh (2000) sought answers to the following questions: (1) What
mechanisms govern productivity and quality for service providers? (2) Does the tension of
competing demands from consumers and management have dysfunctional consequences?
(3) What resources help counter these dysfunctional effects? Singh found that service
providers' productivity was unaffected by burnout tendencies but negatively impacted by
conflict between resources and demands and by role ambiguity relative to customers. He
believed that employees seek to maintain their productivity, even in the face of burnout,
because the relevant indicators are visible and relate to pay and job retention. By contrast,
the quality of service, which is less quantifiable and less visible, is likely to be damaged
directly as employees burn out on service consumers. Singh found an unexpected negative
correlation between organizational commitment and service quality, indicating that service
providers who are more committed to the organization may be less committed to service
consumers, and vice versa. Providing greater task control and boss support helps to shield
service providers from role stress, burnout, and thoughts of quitting, while also enhancing
positive attitudes.

VII. PURPOSE OF THE STUDY

As shown above, some studies have examined service encounters through the lens of the service consumer and others through the lens of the service provider. No study, however, has examined service encounters simultaneously from the viewpoints of both the service consumer and the service provider. Further, no study has examined the effects of unambiguous understanding between the service consumer and the service provider. Hence, in this paper we study the effect of unambiguous understanding on service quality and productivity.

The purpose of this study is to examine (1) the effect of the service provider's unambiguous or ambiguous understanding of the consumer's problem and (2) the effect of the service consumer's unambiguous or ambiguous understanding of the provider's instructions and guidelines.

VIII. PROPOSITIONS

Proposition 1: Acting on an unambiguous understanding of the consumer's problem by the service provider, and co-producing with an unambiguous understanding of the service provider's instructions and guidelines, will result in a service product that satisfies the service consumer (Quadrant 1).
Proposition 2: Acting with an ambiguous understanding of the consumer's problem by the service provider, and co-producing with an unambiguous understanding of the service provider's instructions and guidelines, will result in a service product that makes the service consumer lose confidence in the service provider (Quadrant 2).
Proposition 3: Acting on an unambiguous understanding of the consumer's problem by the service provider, and co-producing with an ambiguous understanding of the service provider's instructions and guidelines, will result in a service process and a service product that frustrate the service consumer (Quadrant 3).
Proposition 4: Acting with an ambiguous understanding of the consumer's problem by the service provider, and co-producing with an ambiguous understanding of the service provider's instructions and guidelines, will result in a service product that dissatisfies the service consumer (Quadrant 4).

IX. METHOD AND ANALYSIS


                                        SERVICE PROVIDER'S UNDERSTANDING
                                        UNAMBIGUOUS                AMBIGUOUS

  SERVICE CONSUMER'S    UNAMBIGUOUS     (Q1) Satisfiable           (Q2) Provider loses
  UNDERSTANDING                              service product            credibility

                        AMBIGUOUS       (Q3) Customer gets         (Q4) Dissatisfiable
                                             frustrated                 service product
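The typology above amounts to a two-way lookup from the two understanding states to a predicted outcome. The following sketch is purely illustrative; the function name and labels are ours, not part of the study's instrument:

```python
# Illustrative mapping of the typology: (provider understanding,
# consumer understanding) -> predicted service-product outcome.
OUTCOMES = {
    ("unambiguous", "unambiguous"): "Q1: satisfiable service product",
    ("ambiguous",   "unambiguous"): "Q2: provider loses credibility",
    ("unambiguous", "ambiguous"):   "Q3: customer gets frustrated",
    ("ambiguous",   "ambiguous"):   "Q4: dissatisfiable service product",
}

def predict_outcome(provider: str, consumer: str) -> str:
    """Return the predicted quadrant for a service encounter."""
    return OUTCOMES[(provider, consumer)]

print(predict_outcome("unambiguous", "ambiguous"))  # Q3: customer gets frustrated
```

Each reported critical incident would be coded on the two understanding dimensions and placed in the quadrant this lookup predicts.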

Data Collection: Data will be collected using the critical incident technique (CIT), a systematic method for recording events and behaviors that are observed to lead to success or failure on a specific task (Ronan and Latham, 1974). Here, success will be indicated by the production of a quality service product that solves the consumer's problem, and failure will be indicated by a substandard service product that does not solve the consumer's problem. Using the CIT, data will be collected through open-ended questions and the results will be content analyzed. In the first study, respondents will be asked to report specific events from their stay in the institution within the past two to four years of their course work. In the second study, clients of a law firm will be studied. Because the respondents are asked about specific events rather than generalities, interpretations, or conclusions, this procedure meets the criteria established by Ericsson and Simon (1980) for providing valuable, reliable information about cognitive processes. Researchers have found that this method yields reliable results in determining the success or failure of the task in question (Ronan and Latham, 1974; Flanagan, 1954; White and Locke, 1981).

Figure I. Typology of Service Product Outcomes

STUDY 1:
The students of a Southeastern university will be the service consumers and the teachers will be the service providers. The instructions to the students/teachers being interviewed will be as follows: Think of a time when you or a fellow student/teacher had a particularly satisfying (dissatisfying) interaction with a teacher/student at your school. They will then be asked the following questions:
1. When did the incident happen?
2. What specific circumstances led up to this situation?
3. Did you understand the need of the consumer/the instructions of the provider?
4. Exactly what did you or your fellow student/teacher say or do?
5. What resulted that made you feel the interaction was satisfying (dissatisfying)?
6. What should you or your fellow student/teacher have said or done?

STUDY 2:
The clients of a Midwestern law firm will be the service consumers and the attorneys will be the service providers. The instructions to the clients/attorneys being interviewed will be as follows: Think of a time when you or a fellow client/attorney had a particularly satisfying (dissatisfying) interaction with an attorney/client at your law firm. They will then be asked the following questions:
1. When did the incident happen?
2. What specific circumstances led up to this situation?
3. Did you understand the need of the consumer/the instructions of the provider?
4. Exactly what did you or your fellow attorney/client say or do?
5. What resulted that made you feel the interaction was satisfying (dissatisfying)?
6. What should you or your fellow attorney/client have said or done?

X. CONCLUSION

The incidents reported by service consumers will be classified under the three major groups of provider behaviors that account for all satisfactory and dissatisfactory incidents: (1) provider response to service delivery system failure, (2) provider response to consumer needs and requests, and (3) unprompted and unsolicited provider actions. The incidents reported by service providers will be classified as follows: for satisfactory incidents, (1) extra work done and (2) model behavior; for dissatisfactory incidents, (1) misbehavior, (2) verbal and/or physical abuse, (3) breaking company/institutional policies, and (4) uncooperative behavior. A comparison of provider and consumer responses will then indicate the type of incident outcome. The outcomes will be used to test the validity of the typology provided earlier.

REFERENCES

Berry, Leonard L. (1980), "Service Marketing is Different," Business, (May-June).
Bitner, Mary Jo, Bernard H. Booms, and Lois A. Mohr (1994), "Critical Service Encounters: The Employee's Viewpoint," Journal of Marketing, 58 (October), 95-106.
Ericsson, K. Anders and Herbert A. Simon (1980), "Verbal Reports as Data," Psychological Review, 87 (May), 215-250.
Lovelock, Christopher and Jochen Wirtz (2004), Services Marketing: People, Technology, Strategy, 5th ed. Upper Saddle River, NJ: Pearson Prentice Hall.
Singh, Jagdip (2000), "Performance Productivity and Quality of Frontline Employees in Service Organizations," Journal of Marketing, 64 (April), 15-24.

IT PROJECT MANAGEMENT AND SOFTWARE
EVALUATION AND QUALITY

Jagan Iyengar, North Carolina Central University

ABSTRACT

Project management is an essential tool for guiding a system from cradle to grave. Projects sometimes fail because of bad system design, or suffer manufacturing defects caused by bad process design or poor quality checks and inspection. A further major factor in project success, however, is time management.

I. INTRODUCTION

New and improved system and technology projects involve many elements which can be very complex in nature. Technology is always changing and requires constant research to stay ahead of the competition. Most often, new technology requires moving into unfamiliar territory. Compatibility is a huge issue, since most new systems projects must integrate with existing systems and sometimes incompatible technologies. In some cases it may be simpler to drop the old system altogether and install a totally new one. Because technology changes so rapidly, most projects must deal with such changes in the middle of the project, raising the question of whether to stay with the original plan, change direction and incorporate the new changes, or scrap the old project and start over. A detailed analysis later in this paper explains how knowing the technology can lead to a quicker decision. Affected business units, information systems, and management all have to be accommodated when implementing a new system.

IT projects are like investing in high-risk growth stocks: it is usually feast or famine. IS projects can either deliver a huge return on investment or result in losses in the millions of dollars. Technology projects are highly technology dependent and have a cumulative effect on other projects. Employee understanding of the new technology is a very important issue, as some employees may be set in their ways and reluctant to change; that can slow down a project quickly. Systems projects replace slow, manual tasks. They allow quick data transfer, easier access to data for qualified individuals, and the consolidation of scattered data into one centralized place via data warehousing. Among the trends in systems are software package tools that help automate jobs and eliminate manual work. With so much competition, tools can be built to help validate which vendor to use when purchasing equipment or deciding to whom to outsource manufacturing. Software tools can help pull out the relevant data needed to make decisions as well as monitor productivity; they can, for example, help investment representatives dig up relevant mutual fund, stock, or bond information.

The challenge of the project is to design, test, debug, redesign, retest, and debug again, in a constant loop, until the prototype is finished from an engineering standpoint. Throughout the process, market research, return-on-investment analysis, and cost analysis must also be done. This paper explores failures and successes of projects, the process of cradle-to-grave project completion, and the need for top management participation in strategy formulation.

II. STATISTICS

Project management skills are more important in the workplace than most people care to realize. The old way of doing projects involved single, stand-alone projects with minimal benefits and limited resources; project-specific profits, as opposed to company-wide profits, were the main emphasis. Companies in earlier days did not realize that a project cannot be built in a day and simply released, only to fail considerably. The new way of thinking in project management is to have more integrated projects with shared resources. Expectations and benefits have been raised to a higher level, as has the connection to business processes. The business environment wants to get the most out of all available resources, which means not only buying new resources but also recognizing what is already in-house and putting it to its best use. The most effective project management provides greater benefits to the business because the purpose and scope of the project are clearly defined to provide tangible business benefits.

III. MORE REASONS PROJECT MANAGEMENT SKILLS ARE IMPORTANT

Approximately 60% of companies today have no standard method to evaluate ROI from potential IT projects (Nucleus Research, May 2003). Close to 60% of firms surveyed had little or no formal training as recently as 2002 (Organizational Project Management Baseline Study, Interthink Consulting, September 2002). This is partly because employees tend to get set in their ways even though technology changes so frequently. Companies also fear spending the money if they are not convinced it will enhance decision making and automation, shifting routine work from people to the computer. Project management, however, has shown its ability to directly and positively impact the company's bottom line and boost the return on investment in project management. One study shows an average improvement of 36% in customer satisfaction, 30% in employee satisfaction, and 54% in financial performance (Center for Business Practices, "Value of Project Management" survey, February 2002). Another study shows a 21.7% improvement in time to market for project management in IT firms (Center for Business Practices, "Value of Project Management in IT Organizations" survey, February 2002). The outplacement firm Drake Beam Morin recently reported that 57% of the 367 large corporations surveyed had replaced their CEOs within three years, in some cases because projects did not deliver results fast enough to satisfy the board of directors (USA Today, April 8, 2002). It is predicted that by 2005, 70% of IS organizations will have adopted project portfolio management application services for team collaboration, resource collaboration, and utilization and cost tracking (0.6 probability) (Gartner SPA-18-9677). A study done in 1998 by International Data Corp. (IDC) detailed how organizations can recoup the cost of getting an employee certified within four months through increased productivity and financial savings (Certification Magazine, December 1999). Standish Group research shows that 52.7% of IT projects cost 189% of their original estimates, only 9% of projects are delivered on time and on budget, and approximately 31% of projects are cancelled before they are completed (The Standish Group, June 30, 1999). The domino effect of this includes all the employee time lost on a project, customers lost because promises were broken, and resources that must be scrapped. In all, 74% of projects fail, run over budget, or miss their deadlines, while $75 billion is being spent on these projects, as reported by the Standish Group. According to a Robbins-Gioia survey in 2001 on ERP systems, 46% of participants noted that they felt their organization did not understand how to use the system to improve the way they conduct business.

IV. THE PERTINENT QUESTIONS FOR IT PROJECT MANAGEMENT

Reengineering tools: When is it feasible to modify existing, familiar systems, and when should a major change with new technology be implemented? A big push many companies are making today is toward the ultimate information system that allows all information to be shared throughout the company. Even though it is important for employees to know their direct effect on the bottom line by evaluating costs and revenues, allowing too much access can also hurt a project. The sharing of information has to be monitored tightly: if information gets into the wrong hands, influences can be exerted to alter project flow without the right authorization. The right system will have the necessary filters and blocks so that only the pertinent employees can gain access to the information that helps them perform their specific job function. Customers need to be given the information they need for self-service, and the information systems provider must also help them with their unique needs. An example comes from the Swedish bank SkandiaBanken. Among the many services it provides is fund and stock trading for its customers. Its CEO, Goran Lenkel, explained that the key to his business is providing customers with the relevant information, not all the information. This project management issue is a huge challenge because of the pressure to release the system prematurely in order to be first to market; time still needs to be taken to ensure the system does not overload customers with information that makes no sense to them or drives them to a competitor. There are over 1,400 mutual funds available in Sweden, yet SkandiaBanken has specifically chosen the best 80 to 90 mutual funds to offer its customers (based on customer feedback surveys). Lenkel believes his employees should not only learn about customers but learn from customers. There is an ongoing battle between staying in the comfort zone and going into the danger zone (proceed at your own risk!). Can people be trained, is there time for training, and are people willing to be trained? Should the existing system be tweaked or scrapped? How old or outdated is the current system? How is the ROI of the project tied to IT project management? All managers want to see how a new project will make and save the company money, but it is not all about how well the widgets are built, since many others are most likely building those same widgets.

V. BUSINESS PROCESS REENGINEERING

Speed, service, cost, and quality always seem to come up in conversations about how a systems process can be improved using information technology. Reengineering is the analysis, simplification, and redesign of a process. If companies choose not to reengineer when new system planning is on the horizon, that most likely implies that they want to keep the existing system in place and modify it rather than drastically changing the system altogether. Reengineering reorganizes workflows, combining steps to cut highly repetitive, manual, paper-intensive tasks and move toward automation. This is a very important tool in project management, since time is a critical component of success: time spent on research-oriented tasks is better than too much time spent on busywork that can be handled by automation. It also requires much more than tweaking a lot of existing procedures; the reengineering process requires a new vision of how the process is to be organized. An important step in reengineering is for management to understand and measure the performance of existing processes as a baseline. In other words, existing standards have to be understood and enforced in order for change to take place.

Management has to make sure they fully understand the existing process from cover to cover, including its positives and negatives, so the same mistakes will not be made twice; by understanding the positives, those concepts can be carried over to the new system. Another key to success is to think not only of what the company wants to achieve but of what the customer wants to achieve. An important point some companies miss is that they do not factor in the customer when deciding on the next wave of systems to implement. Customers are the ones spending the money that keeps the company afloat, so without keeping up with customer trends, companies cannot survive. In the case of the Swedish bank SkandiaBanken, CEO Goran Lenkel believes that the customer, not the top executives, leads his company and its initiatives. Customers may not know how to manipulate the technology to get the systems working, but they certainly know what they want to do with the technology. Before any reengineering gets done, the customer has to be heavily involved in the brainstorming stages via customer surveys and constant follow-up and feedback with existing customers.

The process can be improved or redesigned, but the paybacks and the risks have to be thought through carefully. Studies have shown that about 70% of reengineering projects fail to deliver the expected benefits. The reasons are fear, anxiety, and resistance. There will be instances where people believe, "if it ain't broke, don't fix it." Some people do not like change or getting out of the "comfort zone" of the existing system. Some feel threatened that their job security will suffer because of the new implementation, so they fight to keep the old systems in place. Other issues are the amount of training required to get up to speed on the new system and the general growing pains of any change that takes place. Customer concerns are very important; however, employee concerns also have to be properly taken into account. All employees should be kept informed of what is going on so that their concerns can be accurately considered. If employees do not like the system or feel uncomfortable with it, they will not support it, which can slow the release of the project. Redesigning new systems is an ongoing challenge, but if companies keep doing things the same way, how will growth and continued success happen? Putting band-aids on the sore spots does not solve the complete problem. This leads to the reason why ROI has to be measured accurately before the project even starts, to ease the transition into the project.

VI. MEASURING ROI

Finding the connection between technology spending and corporate performance is an ongoing challenge for most companies. Before management buys into a huge change, a measure of comfort has to be felt about the money being invested. Consultants have been working on many tools that profess to measure return on investment. Calculations can be done on self-service websites based on a few inputs, and there are complex software programs for ROI measurement that run up to $200,000. Everyone has their own system, but which system measures ROI accurately? There are items that companies treat as immeasurable but that most likely can be measured. The time that managers spend putting out fires and dealing with workplace drama, as opposed to doing research and improvement tasks for their respective departments, also needs to be measured. The Investrics system assigns a probability-weighted range of values that reflects the inherent risk factors versus the benefits. Instead of a single value that does not cover all the details, a range of outcomes is determined so that managers have more information on which to base their decisions. The system may predict a 30% chance of a 70% ROI on a new system based on certain inputs; taking other negative factors into consideration, it may add a 20% chance that the product will not produce a positive ROI. Depending on their circumstances, clients of this system will have different tolerances for how much risk they are willing to take with a project. A project may offer a 60% ROI with a 20% risk of a negative return: a green light for those willing to take the 20% risk, a red light for those who are not. The project manager has the tough task of ensuring that the IT and finance groups work together when evaluating ROI. These two groups have totally different responsibilities, but when it comes to project ROI they need to be firing on all cylinders together, which also means understanding each other's responsibilities so decisions can be made more clearly. If both sides are on the same page, the company will move twice as fast; if not, the project will be at a standstill.
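The probability-weighted evaluation and the risk-tolerance "green light / red light" rule described above can be sketched in a few lines. This is a generic illustration of the idea, not the actual logic of any vendor's tool; the outcome distribution and threshold are hypothetical inputs:

```python
# Sketch of a probability-weighted ROI evaluation with a risk-tolerance
# gate, in the spirit of the approach described in the text.

def expected_roi(outcomes):
    """outcomes: list of (probability, roi) pairs covering all scenarios."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9  # must be a full distribution
    return sum(p * roi for p, roi in outcomes)

def downside_risk(outcomes):
    """Probability that the project fails to produce a positive ROI."""
    return sum(p for p, roi in outcomes if roi <= 0)

def decide(outcomes, risk_tolerance):
    """Green light only if the chance of a negative return is acceptable."""
    return "green light" if downside_risk(outcomes) <= risk_tolerance else "red light"

# Example mirroring the text: a 60% ROI scenario with a 20% chance
# of a (here, -10%) negative return, judged against a 20% tolerance.
scenarios = [(0.20, -0.10), (0.80, 0.60)]
print(round(expected_roi(scenarios), 2))       # 0.46
print(decide(scenarios, risk_tolerance=0.20))  # green light
```

A client with a tolerance below 20% would see a red light on the same numbers, which is exactly the point of reporting a range of outcomes rather than a single value.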

VII. CONCLUSION

A project office is a useful mechanism to help achieve a measurable ROI as business issues multiply and resources remain limited. Its purpose is to evaluate, measure, and essentially enforce the performance and interaction of IT processes across a company's business units. The key players and their roles in each project are known, as is the person responsible for communicating and setting policy regarding the project's performance. The project office reports ongoing project performance to the heads of the business units involved as well as to senior management. A project office should have a specialist and an analyst: the specialist has the skills to oversee those charged with delivering the projects, while the analyst gathers, compiles, and reports project performance to senior management. They also have to take into account the other projects under way elsewhere in the organization, since no IT project stands alone from the rest. The project office can draw a clear picture of how the project falls in line with company strategy and objectives. Actual versus projected ROI also comes into play: it is one thing to compile a projected ROI at the beginning of a project, but measuring actual ROI when the project is complete is just as important, because actual ROI can serve as a measuring stick for future, similar projects.

REFERENCES

Alster, Norm (2002), "ROI: Results Often Immeasurable?"
http://www.cfo.com/article/1,5309,7880|0|||,00.html
Davis, G. (2003), "Time Tracking as a Core Business Process: Companies Win with Time-Based Business Strategies," Bitpipe Technical White Paper.
http://www.journyx.com/pdf/timecore.pdf
Elkins, J. William (2003), "Maximize ROI with a Project Office."
http://www.computerworld.com/managementtopics/roi/story/0,10801,78517,00.html
Krishna, S. (2003), "Why Are Project Management Skills So Important?"
http://www.barrett-korth.com/pmstatistics.htm
Laudon, Kenneth C. and Jane P. Laudon (2003), Essentials of Management Information Systems, 5th ed. Upper Saddle River, NJ: Prentice Hall.
O'Brien, James A. (2004), Management Information Systems, 6th ed. New York: McGraw-Hill.

IMPROVING PRODUCTIVITY WITH ENTERPRISE RESOURCE PLANNING

Hooshang M. Beheshti, Radford University, hbehesht@radford.edu

Cyrus M. Beheshti, Deloitte & Touche, cbeheshti@deloitte.com

ABSTRACT

The term productivity has an inherent meaning for most people, and many consider it a measure of efficiency. In organizations, however, productivity should be viewed from both an efficiency and an effectiveness (performance) point of view; focusing on efficiency alone can be harmful to the organization's long-term success and competitiveness. Over the years, corporations have adopted new technology to integrate business activities in order to achieve both effectiveness and efficiency in their operations. In recent years, many firms have invested in enterprise resource planning in order to integrate all business activities into a uniform system. The implementation of enterprise resource planning enables the firm to reduce the transaction costs of the business and improve its productivity and profitability.

I. INTRODUCTION

In today’s global business managers are increasingly under pressure to improve the
financial performance and the profitability of their companies. One method of improving
profitability is to focus on a strategy that improves productivity by reducing costs of business
activities in the firm. To reduce costs and to improve productivity many organizations
consider the adoption of new technology. This approached is based on the assumption that
the company is operating as productive as possible with the existing technology and therefore
one way to improve efficiency is to upgrade or change the current technology. In recent
years, one such technology adopted by many companies is the enterprise resource planning
(ERP).

Enterprise resource planning systems are designed to improve productivity by upgrading an organization's ability to generate timely and accurate information throughout the enterprise and its supply chain. Successful implementation of ERP systems can lead to reduced product development cycles, lower inventories, improved customer service, and enhanced coordination of global operations. At the same time, ERP systems are expensive, complex, and difficult to implement.

An ERP system, also called an enterprise system, is a set of business applications, or modules, that links financial, accounting, manufacturing, human resources, supply chain, and customer relationship management systems into a tightly integrated single system with shared data and visibility across the entire business.

A review of the literature suggests that ERP systems are used by small, medium, and large corporations as well as government agencies and nonprofit organizations. In recent years a growing stream of research has focused on the competitive advantage of ERP and stressed the importance of considering the organization's business models and core competencies when making decisions for or against ERP implementation (Lengnick-Hall et al., 2004; Davenport, 1998; Prahalad and Krishnan, 1999; Holland and Light, 1999).

To successfully implement an ERP system and to avoid failure, the firm must conduct a careful preliminary analysis and develop a plan for ERP acquisition and implementation. The most important success factors for ERP implementation include top management support, effective project management, extensive user training, and viewing ERP as a business solution. Factors such as inadequate technology planning, insufficient user involvement and training, budget and schedule overruns, and the unavailability of adequate skills are considered reasons for ERP failures (Sumner, 2000; Umble and Umble, 2002; Wright and Wright, 2002).

II. COSTS OF ERP SYSTEMS

ERP systems are very expensive to implement. The costs of an ERP system come in various forms, including software, hardware, and network investments, and often consulting costs. These costs vary from one company to another and depend on the degree of system integration and the applications desired by the firm: the larger the company and the more advanced and complicated the ERP system, the greater the cost.

The cost of the ERP package varies among vendors, types of packages, and degrees of customization. Some companies require more customized versions of ERP systems than others, and the more customization a company desires, the greater the cost. Changes in the operating environment, implementation costs, and integration costs are also among the fixed costs of an ERP system.

Adequate training of personnel is imperative for the success of an ERP system. Training is a continuous process: it must take place during the entire implementation of the system, as well as whenever upgrades and other aspects of the ERP system are changed, updated, or added. Seminars as well as hands-on experiential training are important for a successful implementation. As in every area of information technology, ERP systems are and will continue to be constantly changing, and keeping up with these changes is essential for an organization to make successful use of an ERP system and get the most out of this expensive endeavor. Training also requires a massive amount of time, and time is associated with every stage of an ERP system, including choosing, implementing, updating, and maintaining it.

Company time, during which other functions could be performed, is an opportunity cost of ERP systems. Along with the time it takes to train employees and keep them up to date with the newest innovations in their present system, it takes a great deal of time to determine which system to choose, implement it, and keep it running. It usually takes at least a year to implement an ERP system. In some instances it can be shorter than that, possibly in smaller companies, but that is extremely rare; in many cases it takes much longer than a year for full implementation, sometimes as much as two to three years (Leitch, 2002).

III. BENEFITS OF ERP SYSTEMS

ERP systems can provide an organization with many benefits, and it is important that these benefits outweigh the costs of the system; they should, as long as the correct system for the organization is chosen and the system is implemented properly. These systems can, in the long run, save millions of dollars, improve the quality of information, and increase workers' productivity by reducing the amount of time needed to do a job. ERP systems can virtually eliminate the redundancies that arise from the outdated and separate systems that may be present in each department of an organization.

One benefit of ERP systems is that information has to be entered into the system only once. Various employees can access data simultaneously in ERP systems, whereas in outdated and separate legacy systems this was much less likely. With the integration of departments in ERP systems, personnel from the finance department can obtain information about a customer, for example, just as personnel from the human resources department can.

A successful ERP system will provide real-time and up-to-date information to all of a
company’s decision makers, from executives to front-line employees. Customer service is
improved by the rapid release of information. Labor, production and inventory costs are
reduced by having timely information.

Process improvement is another important benefit of ERP. Process improvement
saves time and reduces redundancies. The city of Pasadena was one of the first cities to
implement an ERP system, and it has greatly improved the time it takes to develop various
reports. Before ERP implementation it took approximately 10 days per month to produce
reports; after ERP was implemented, the same reports take one day to generate, leaving nine
days per month for other important tasks (Ferrando, 2000). The process improvements
brought by ERP systems can also help organizations reduce the amount of inventory that they
hold, thereby reducing storage and warehousing costs.
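
The Pasadena time savings can be restated as a quick back-of-the-envelope calculation (the figures are from the Ferrando citation above; the variable names are ours, for illustration only):

```python
# Monthly report-production time in Pasadena, per Ferrando (2000).
DAYS_BEFORE_ERP = 10  # days per month to produce reports before ERP
DAYS_AFTER_ERP = 1    # days per month after ERP implementation

days_freed_per_month = DAYS_BEFORE_ERP - DAYS_AFTER_ERP
days_freed_per_year = days_freed_per_month * 12

print(days_freed_per_month)  # 9
print(days_freed_per_year)   # 108
```

On these figures, ERP frees roughly nine working days per month, or over a hundred per year, for other tasks.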

Unfortunately, determining the benefits of an ERP system is much harder than
determining its costs. Although a company cannot be precise in estimating the cost of a
system, measuring the benefits is more difficult still. Often companies will not be able to
determine the monetary benefits and cost savings that an ERP solution offers until several
years after its implementation.

IV. FACTORS TO CONSIDER WHEN IMPLEMENTING ERP

Organizations have to carefully consider various factors before embarking on an
ERP initiative. The most important consideration is that companies should treat ERP
implementation as a strategic initiative rather than a technical or IT solution. In addition, the
analysis of ERP projects should include tactical factors, such as technical software
configuration and project management variables, together with broader strategic influences,
such as whether the new system will provide a competitive advantage or minimize the firm's
competitive weaknesses.

Strategic evaluation requires the firm to take a careful look at its existing legacy
systems. Legacy systems encompass the existing business processes, organizational structure,
culture, and information technology. Thus, they determine the organizational change required
to successfully implement an ERP system. For example, if the existing legacy systems are
complex, with multiple technology platforms and a variety of procedures to manage business
processes, then the amount of technical and organizational change required is high. If the
organization already has common business processes and simple technical architecture,
change requirements should be low.

The organization’s readiness for change influences the ERP strategy to a great extent.
Based on that, different approaches can be adopted. For example, the company can
implement a skeleton version of a software package initially, and then gradually add extra
functionality once the system is operating and the users are familiar with it. The main
advantages of this approach are speed and simplicity. By adopting a skeleton approach,
implementation of an ERP system across multiple sites can be achieved in a much shorter
timeframe. A more ambitious approach is to implement a system with complete functionality
in a single effort. This approach requires less time for the entire system to be integrated;
however, it demands a higher level of expertise and stronger organizational commitment, and
the risk of failure is greater.

ERP implementation raises important questions for global corporations. Companies
need to decide how much uniformity they need in the way they do business in different
regions of the world and how much customization they can allow. Organizations can use their
enterprise systems to install consistent operating practices across their units, which enables
them to achieve tight coordination throughout the enterprise resulting in better efficiencies.
However, if differences in regional markets are significant, process uniformity would be
counterproductive. For these companies, a different approach to ERP is more appropriate.
Instead of a uniform system, different versions tailored to local needs in each unit with a core
of common information that all divisions share should be installed.

Apart from strategic issues, many tactical factors such as vendor selection,
outsourcing, hiring of consultants and personnel should be considered as well. Companies
should consider vendor selection by determining the appropriate functions that are desired of
the system first. If a vendor offers a solution that is close to what the company needs, this
vendor should be selected to reduce the need for customization. If a single vendor cannot
satisfy all the company’s needs then a multi-vendor solution could be considered, but the
implementation is complex.

Instead of buying, some companies can lease ERP modules or suites, renting only the
modules they need. Leased software usually is accessed over the Internet rather than installed
on the company’s hardware. With this option, the ERP vendor assumes responsibility for
maintaining and upgrading the system. However, this approach requires a high level of trust
and security, since the company that is leasing the system places all of its data in the hands of
the online software vendor.

Often companies hire outside consultants to assist them with ERP system selection
and implementation. Many consultants have considerable experience in specific industries or
comprehensive knowledge about certain software. Thus, they are often better able to
determine which ERP module will work best for a given company. In selecting consultants,
companies should look carefully at the consultant’s resume and inquire about that
consultant’s financial ties to the software vendor to ensure objectivity.

Information technology personnel capable of installing and maintaining an ERP
system should be available. A successful IT team must know both the company’s business
and the ERP package, and the company personnel and outside consultants should be
comfortable working with each other.

V. CONCLUSIONS

ERP systems offer significant benefits for today’s businesses; they are based
on a value-chain view of the business in which functional departments coordinate their work.
An ERP system forces the firm to standardize and integrate its processes across the enterprise
and to eliminate or re-engineer redundant or non-value-added processes, which leads to
significant reductions in operational costs and increases in productivity in the long run.

The investment that is required to implement and maintain an ERP system is
significant, sometimes running into hundreds of millions of dollars. The decision to purchase
and implement an ERP system is one of the most important decisions a manager will have to
make. Careful analyses of strategic and tactical issues are necessary to ensure the success of
the project.

REFERENCES

Brickley, P. “Defunct outfit firm blames IT firm; in liquidation, firm alleges Andersen
Consulting added to its troubles.” Philadelphia Business Journal, 17, (40), 1998, 3-4.
Davenport, T. “Putting the enterprise into the enterprise system.” Harvard
Business Review, 76, (4), 1998, 121-131.
Ferrando, T. “ERP systems help with integration.” American City & County, 115, (11), 2000,
12.
Holland, C. P., and Light, B. “A critical success factors model for ERP Implementation.”
IEEE Software, 16, (3), 1999, 30-35.
Leitch, J. “The cost of cutting costs.” Contract Journal, March 6 2002, 8-10.
Lengnick-Hall, C., Lengnick-Hall, M. and Abdinnour-Helm, S. “The role of social and
intellectual capital in achieving competitive advantage through enterprise resource
planning (ERP) systems.” Journal of Engineering and Technology Management, 21,
(4), 2004, 307-330.
Prahalad, C.K. & Krishnan, M.S. “The new meaning of quality in the information age.”
Harvard Business Review, 77, (5), 1999, 109-118.
Sumner, M. “Risk factors in enterprise-wide/ERP projects.” Journal of Information
Technology, 15, (4), 2000, 317-327.
Umble, E.J., and Umble, M.M. “Avoiding ERP implementation failure.” Industrial
Management, 44, (1), 2002, 25-33.
Wright, S. and Wright, A. M. “Information systems assurance for enterprise resource
planning system: implementation and unique risk considerations.” Journal of
Information Systems, 16, (supplemental), 2002, 99-113.

CHAPTER 24

SPIRITUALITY IN ORGANIZATIONS

REFLECTIONS ON ISLAM AND GLOBALIZATION IN
SUB SAHARA AFRICA

David L. McKee, Kent State University


dmckee@bsa3.kent.edu

Yosra A. McKee, Kent State University


ymckee@bsa3.kent.edu

Don E. Garner, California State University, Stanislaus


Garner@toto.csustan.edu

ABSTRACT

The present paper is one of several overviews intended to introduce a larger study
of economic and developmental problems in Sub Sahara Africa. This paper is concerned with
how religion functions with respect to the integration of underdeveloped nations with the
global economy. In Sub Sahara Africa there are three major religious groups: native
religions, Islam, and Christianity. This paper shows how nations in the region where Islam is
strong can use that religion in a positive manner to strengthen global linkages. Beyond that
the message seems to be that those concerned with global linkages must regard existing
religions as givens as they pursue their objectives.

I. INTRODUCTION

The recent meetings of the Group of Eight (G8) in Scotland focused some attention upon
one of the poorest regions of the world: Sub Sahara Africa. As a result of those meetings,
Tony Blair, the Prime Minister of Great Britain, was partially successful in getting at least a
portion of the monies owed to the G8 nations by governments in the region forgiven. Even
this partial success will be somewhat helpful to the governments concerned. Whether or not
it raises positive possibilities for more assistance for the region remains to be seen.

Since the breakup of the European empires in the period following World War II, little
positive attention from what is sometimes characterized as the Western World has been
evident in the region. The result is that at this writing (2005) many of the nations of
the region are listed amongst the poorest in the world. Evidence of that can be seen in the
Human Development Report issued by the United Nations (2005), in which no Sub Saharan
mainland nations were listed in the top 100 as measured by the Human Development Index
(134-142).

A closer look at the component factors in the index adds weight to the issue. In Sub
Sahara Africa life expectancy stood at 46.3 years, as compared to 78.3 years in high income
nations. The literacy rate for those aged 15 and above in 2002 stood at 63.2 per cent in the
region. There are various reasons why the nations of the region are doing poorly compared to
other parts of the globe. European colonial powers left behind them former colonies ill-equipped
to govern themselves. Beyond that, in many cases European settlers retained the
best land, and European business interests continued in control of valuable resource exports.
Foreign business interests were also slow to invest in facilities in nations controlled by
questionable regimes.

II. THE IMPORTANCE OF EXTERNAL LINKAGES

There is much that could be discussed concerning the poor conditions prevalent in Sub
Sahara Africa. The current discussion is aimed at how better linkages to globalization
and the global economy may improve economic prospects for the jurisdictions concerned,
and at some specifics on how that can be accomplished.

To begin with, the diversity of African cultures, belief systems and languages makes
foreign linkages rather difficult. Beyond that, diversity amongst former colonial powers
regarding languages, governance measures and imposed cultural and religious imperatives
seems to have been of questionable assistance. Indeed, the former colonial regimes established
geographical boundaries that separated tribal groupings, extended families and other
functional groupings. Thus many current nations in the region appear to be based upon
artificial boundaries. Any prospective global business aspirations must face many or all of
these regional peculiarities.

Thus, from the point of view of individual nations in the region, global integration and/or
linkages must appear daunting. Indeed, in many cases natural developmental ambitions may
be thwarted by religious and cultural differences between residents. Sub Sahara Africa,
divided as it was by European powers, is having some difficulty in developing the national
identities required of functional nations. It is the position of the current investigators that
these national identities can be fostered to some extent through the development of sound
business climates and international business linkages in the nations concerned. Such
elements must be mastered without incursions into existing cultures and religious mores.
This may appear difficult if not impossible. Hopefully the current discussion will begin to
lay a foundation for how this task can be accomplished.

Africa has long been a place in which various religions have flourished. Today its
population espouses three religious traditions which enjoy the adherence of the bulk of the
population. The first of these traditions covers the native religions which predated foreign
influence and are still popular today. The second is Islam, the first foreign religious tradition
to gain a foothold in the region. The third is Christianity, embracing a number of subsets.
Of course these traditions are only the most visible and hardly imply the non-existence of
other belief systems.

III. THE LINKAGE POTENTIAL OF ISLAM

The present investigators are of the opinion that the successful linkage of the nations of
the region to the global economy should not represent the eclipse of religious and cultural
traditions. Indeed success can only come if local populations are encouraged to retain their
customs and beliefs. In general it should be possible to bring that about.

The Islamic faith has had a significant impact in the region under discussion here. For
example, in the nations of Mali and Somalia the overwhelming majority of the population is
Muslim (Central Intelligence Agency, 2004). The same source identifies the Comoros as 98
per cent Muslim and The Gambia as 90 per cent Muslim. Djibouti and Senegal stood at 94
per cent. In Sub Sahara Africa nearly 245 million people were identified as Muslim.

Clearly such numbers suggest that the Islamic faith is a force to be reckoned with in Sub
Sahara Africa. If the region’s material status is to be improved through closer ties to the
global economy, this can only be accomplished if Islam retains a position of respect both
religiously and culturally.

One problem that most of Sub Sahara Africa shares with many nations that were
formerly classified as members of the Third World is the lack of employment opportunities
for young people. This problem causes unrest and violence in poor nations. The frustrations
of young populations can quickly translate into anarchy and even revolution. In today’s
world this issue has become very significant among young Islamic populations. Such
populations can become recruiting grounds for international terror groups.

The present authors see nothing in this which justifies labeling Islam as a violent or
anti-Western religion. Anyone modestly knowledgeable about Islam would dismiss such a
label as an oversimplification.

Any disenfranchised population of young adults is a fertile field for trouble. The youth
of Sub Sahara Africa are no exception. In jurisdictions where Islam is significant, solving the
problem of youth unemployment would go a long way toward resolving youth-related
issues.

For instance, a closer alignment with the global economy should create employment
opportunities and help in drying up the pools of disenfranchised youth. The youth who are
employed have a more hopeful future and are less susceptible to international adventures of
any sort. In the current context the question becomes whether or
not Islam can contribute to forming and supporting an economy largely dependent upon
globalization. The current authors assume that it can. Of course such a view requires an
answer to the question of how.

In an ideological paper, David McCormack (2005) suggests that Sub Sahara Africa is facing
an infusion of recruits for radical Islam. Of course disenfranchised young people in any
poor country represent a pool to be recruited for various questionable transnational causes,
not to mention domestic groups whether criminal or ideological.

The present authors are of the opinion that Islamic nations where that religion has
successfully established a closer relationship with global interests are in a strong position
vis-à-vis material betterment. This can be seen among various Middle Eastern nations (McKee,
Garner and McKee, 1999).

The general case for cooperation between Islam and the global economy has been
presented more recently (McKee, McKee and Garner, 2004, 2005). Specifically, the coming
together of Islam and global interests was seen as benefiting from the actions of multinational
business service firms. Such firms are capable of facilitating and linking national
interests with those operating internationally.

To elaborate, various business services, such as those offered by major international
accounting firms, banks, law firms and a wide range of consultants, facilitate the meshing of
local interests with those of the global economy. All that is required is that local branches of
the multinational service firms employ two types of specialized personnel. These two
sub-groups are composed of foreign experts familiar with the machinations of the global economy
and foreign interests generally, and local professionals familiar with both the peculiarities of
the local economy and Islamic traditions. Such a mix of service personnel should be able to
assist business with both international and domestic needs. In the practical world there seems
to be an overlapping of interests which should allow for a successful foundation for
businesses both foreign and domestic which should have no need to compromise local beliefs
and traditions.

Within Islam, practices and traditions exist which should actually be conducive to
successful business transactions within a global system of markets. Obvious among such
practices are the well known Islamic prohibition of interest and the practice of alms giving
known as the zakat.

In the case of interest, it is known that there are financial interests in Islamic nations
where interest charges are accepted. This is not a needed adjustment, nor is it well thought of
by religious Muslims. It hardly seems to be an advantageous policy if the nations concerned
are seeking a stable relationship to the global economy. In the eyes of the current authors, the
prohibition of interest may be a positive element vis-à-vis the recruitment of modern business
for the global economy. Venture capitalists in search of positive investment opportunities
may be more easily recruited in the absence of interest opportunities. Certainly investment
for profit is an excellent hedge against inflation.

The zakat is one of the five pillars of Islam and certainly central in the Muslim belief
system. Under the zakat each Muslim is required to provide 2.5 per cent of his or her wealth
to the poor on an annual basis. There is little need to dwell upon how such persons are
impacted by inflation. Together with the prohibition of interest the zakat is a powerful
argument for Muslim participation in free market activities.
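
As a simple numerical illustration of the 2.5 per cent rate described above (a sketch only: the function name is ours, and the nisab, the minimum wealth threshold below which no zakat is due, is omitted for brevity):

```python
ZAKAT_RATE = 0.025  # 2.5 per cent of wealth, paid annually

def zakat_due(wealth: float) -> float:
    """Annual zakat owed on a given amount of wealth, rounded to two
    decimal places. Illustrative only: the nisab threshold is omitted."""
    return round(wealth * ZAKAT_RATE, 2)

# A Muslim holding 100,000 units of zakatable wealth owes 2,500 per year.
print(zakat_due(100_000))  # 2500.0
```

The modest, fixed rate applied annually to accumulated wealth is what makes holding idle, inflation-eroded balances less attractive than productive investment.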

If such participation causes the domestic economies of Islamic nations to expand, the
creation of employment opportunities appears obvious. This, of course, should reduce the
cadres of unemployed youth popularly referred to in some circles.

IV. CONCLUSION

Clearly it appears as though a more extensive participation in the global economy by
Islamic countries is a very desirable phenomenon. The Islamic religion provides an
environment well suited to international market seeking activities. Certainly, Islam is far
from an obstacle to development; indeed it seems quite compatible with development.
Beyond that, there is no reason for foreign business interests involved in ongoing international
expansion to feel that in the Islamic world there are religious road blocks to surmount or
circumvent. They would be well advised to accept Islam as a given. In various nations in
Sub Sahara Africa Islam should be a help rather than a hindrance to globalization.

Christianity with its myriad of subsets may appear even more problematic vis-à-vis
globalization, since Christianity has been a frequently vocal critic of materialism.
Nonetheless Christianity has long been known to be compatible with the free enterprise
system (Worland 1967, Gutierrez 1973). If religions such as Islam and Christianity are
given their due they should be found to be quite compatible with globalization. Indeed
international business interests should treat such belief systems with respect as potential sites
for their operations. In the same vein, traditional religious interests must be treated
respectfully as accomplished facts by those seeking to do business in Sub Sahara Africa.

REFERENCES

Buckley, Peter J., and Jeremy Clegg, (eds.), (1991) Multinational Enterprises in Less
Developed Countries. New York: St. Martins Press.
Central Intelligence Agency (2004), The World Factbook.
Gutierrez, Gustavo (1973), A Theology of Liberation. Maryknoll, New York: Orbis Books.
Kuran, Timur (2004), “Why the Middle East is Economically Underdeveloped: Historical
Mechanism of Institutional Stagnation”, Journal of Economic Perspectives,
Volume 18, Number 3, 71-90.
McCormack, David (2005), An African Vortex: Islamism in Sub-Saharan Africa. Washington
DC: Center for Security Policy, Occasional Paper Number 4.
McKee, David L., Don E. Garner, and Yosra A. McKee (1999), Accounting Services, The
Islamic Middle East and the Global Economy. Westport CT, Quorum Books.
McKee, David L., Yosra A. McKee and Don E. Garner (2002), “Multinational Consultants
as Contributors to Business Education and Economic Sophistication in Emerging
Markets”, in Alon, Ilan and John R. McIntyre (eds.), Business Education and
Emerging Market Economies, pp. 15-26. New York: Kluwer Academic Publishers.

KARMA-YOGA AND ITS IMPLICATIONS FOR
MANAGEMENT THOUGHT AND INSTITUTIONAL REFORM

Rashmi Prasad, University of Alaska Anchorage


afrp2@uaa.alaska.edu

Irfan Ahmed, Sam Houston State University


irfanahmed@shsu.edu

ABSTRACT

Karma-Yoga is an ancient Indian philosophy concerned with work as a means to
spiritual advancement. This paper provides a primer on the main tenets of the Karma-Yoga
philosophical system and its normative implications for working life. The Karma-Yoga
perspective is then applied to critically interpret selected relevant branches of managerial and
organizational thought. Potential contributions of the Karma-Yoga perspective to
mainstream management ideas are assessed. The paper also examines the potential of the
Karma-Yoga philosophy to serve as a foundation of institutional reform in India, especially in
the area of the civil service and government agencies.

I. INTRODUCTION

Karma-Yoga, an ancient Indian philosophy concerned with work as a means of
spiritual advancement, is far removed in space and time from contemporary U.S.
management students. When these students first encounter ideas derived from the philosophy
in the form of Larry Brilliant’s aphorism: “Live your life without ambition. But live as those
who are ambitious” (Cameron & Whetten 2005), they express puzzlement at the seemingly
paradoxical statement, prompting a more thorough discussion of the philosophy and its
implications for management thought and practices. This paper serves as an extension of the
discourse through a presentation of the key elements of Karma-Yoga philosophy, an
examination of its implications for management thought and practice in the U.S., and an
analysis of its potential in contributing to institutional change in India.

II. KARMA-YOGA: SPIRITUAL PROGRESS THROUGH WORK

The Karma-Yoga philosophy is most strongly advocated in the Bhagvad Gita, one of
Hinduism’s principal scriptures, which unfolds as a discourse on the battlefield between
Arjuna, a warrior-prince, and Krishna, an incarnation of God (and Arjuna’s charioteer in the
impending battle). Arjuna, a great warrior but suddenly reticent on the eve of battle, desires
to flee the battlefield and return to a life of exile and mendicancy in the forest. Krishna
counsels Arjuna that his reticence results from cowardice, not compassion for relatives in the
opposing ranks, and that a Ksatriya pursues spiritual progress through acquitting duty in the
proper fashion:
Looking at thine own Dharma, also, thou oughtest not to waver, for there is nothing higher
for a Ksatriya than a righteous war. Fortunate certainly are the Ksatriyas, O son of Prtha,
who are called to fight in such a battle that comes unsought as an open gate to heaven.
(Swarupananda 1996, pages 31-32)

The relative efficacy of an active or contemplative life in the pursuit of spiritual
progress has been a perennial subject of debate within Hindu society. The Bhagvad Gita
addresses the debate by asserting that work and activity are unavoidable in human life, and
that the spirit in which this work is carried out can either enmesh the individual by forming
new karma (good or bad), or liberate the individual by working out the effects of old karma
and avoiding the accumulation of new karmas. The principal elements involved in
transforming one’s karma into a form of yoga include non-expectation of rewards, emotional
detachment from context and consequences of the work, full concentration of mind on the
work itself, regarding the work and associated rewards as an offering to God, and seeing
agency in the Divine while regarding oneself as an instrument. According to the Gita, work
actuated by desire for rewards ultimately produces misery:
Work with desire is verily far inferior to that performed with mind undisturbed by
thoughts of results…The wise, possessed of this evenness of mind, abandoning the fruits of
their actions, freed for ever from the fetters of birth, go to that state which is beyond all evil.
(Swarupananda 1996, pages 49-51)

In addition to concern for rewards, all other work-related attachments should be
abandoned, including concern for hardships involved in the work, fear of failure, excitement
at the prospect of success, or dispute over credit-apportionment and free-riding. Attaining
this state of detachment in work is aided by identifying God as the ‘doer’, the agent, and
oneself as an instrument, thus avoiding ego-identification with tasks performed. As stated in
the Gita:
He who is everywhere unattached, not pleased at receiving good, nor vexed at evil, his
wisdom is fixed…He whose mind is not shaken by adversity, who does not hanker after
happiness, who has become free from affection, fear, and wrath, is indeed the Muni of steady
wisdom (Swarupananda 1996, page 56).

Despite the attitude towards rewards and attachment contained in Karma-Yoga, work
is not to be carried out in a shoddy, indifferent, haphazard manner. Work and associated
rewards are to be considered as offerings to God, and thus should be carried out with a
dedication bolstered by a detachment from self-consideration:
By endowing karma with a spiritual outlook Karma-yoga teaches the worker efficiency in
work. He is not to consider the kind of work as much as the spirit in which the work is done.
The humblest work done earnestly with the right attitude produces the greatest good. Having
no ulterior motive, a Karma-yogi can devote his whole attention to the work itself. He is
unconcerned with the pleasures and pains entailed in the work…A Karma-yogi does his work
neither mechanically nor in a mood of abstraction but with full attention and devotion
(Satprakashananda 1977, page 221).

Given that the core of Karma-yoga lies in detachment, how does it deal with the
ethics of work? Does it discriminate between good and bad works? Karma-yoga is not the
same as good works, if such works are performed in a spirit of attachment and a desire for
rewards. The result of such works is the production of good karmas and samskaras, a form of
payment that is eventually exhausted. However, the moral development that emerges from the
practice of good works, combined with the disillusionment that arises from attachment to
such works may lead to the path of Karma-yoga. Though unattached, the Karma-yogi does
discriminate between works of divergent ethical qualities:
(the Karma-yogi) knows that certain acts are right and promote happiness, while certain acts
are wrong and bring about misery. He finds the cause of bondage in both. But he is not blind
to their relative worth. As he has to work, he must choose the right work. There is another

636
reason why a Karma-yogi cannot work indiscriminately. It is selfishness that impels a man to
wrong deeds. No one will harm others unselfishly. A Karma-yogi, having no selfish motive,
cannot do misdeeds. His very nature directs him to the right path (Satprakashananda 1977,
page 224).

Karma-yoga philosophy also maintains a dispassionate view of human and historical
progress. Remedying all the ills of the world is likened to manually straightening the curly
hair of a dog. Attachment to ideas such as progress or utopia is regarded as the creation of
another sort of bondage for oneself.

III. MANAGEMENT THOUGHT IN THE U.S. AS SEEN FROM THE PERSPECTIVE OF KARMA-YOGA
Complex organizations have come to pervade most aspects of American life to an
extent that one cannot properly consider critical issues of culture, commerce, or public policy
without reference to its dynamics (e.g. Perrow 1986, Pfeffer 1997). The perspective of
Karma-yoga cannot be brought to bear on all facets of this ‘organizational society,’ however
it does offer insights, critical and complementary, on many key issues in the realm of
managerial ideas. The Karma-yoga perspective can be applied most readily to individual-
level variables and issues rather than those at the organizational-level; the concepts originated
in a society with states, but not much other complex organization. However, the Bhagvad
Gita was composed in the midst of a complex, hierarchical society and has much to say about
fundamental questions of maintaining a complex role-order. The Karma-yoga perspective is
decidedly non-managerialist. In eschewing rewards, the Karma-yogi negates a primary tool
of management. The perspective could also be labeled ‘anti-Expectancy theory’ as it also
espouses the rejection of ‘instrumentality’ (performance to outcome expectancies) and effort
to performance expectancies, in addition to not expecting rewards. The perspective aligns
more with the humanistic (i.e. worker-centered) strains in management thought, which focus
on variables such as job-satisfaction, self-efficacy, and task-significance (epitomized by
Hackman & Oldham’s Job Characteristics model). Karma-yoga perspective would regard
the pleasure and satisfaction resulting from good person-job fit as ephemeral. Note that the
scale of time informing the perspective is very long indeed, spanning multiple life-times, thus
the judgment of what is ephemeral is quite relative. An individual may regard the balance of
their working career as positively satisfactory, but the ultimate result of ego-centered work
(karma) is sorrow and disillusionment. But the perspective is not prescriptive, regarding
reward-focused periods of working-life as a necessary phase, preceding spiritual awakening.
In considering the literature on work-stress, we identify an area where the Karma-yoga
perspective may be adopted in a more immediate and heuristic fashion than in the literature
discussed above. A growing interest in the management of stress is evidenced by its
increasing prominence in management texts and by the popularity of time-management and
self-management training, prominent among which is the work of Stephen Covey and the
Effectiveness movement (Covey 1989, Jackson 2001). Perlow’s (1998, 1999) influential
studies on time-utilization (and the resultant ‘time-famine’ phenomenon) in software firms
have updated fundamental work-stress and work-life boundary issues. Covey’s Effectiveness
movement has some affinity with the Karma-yoga perspective, as his approach is largely
derived from his Mormon spiritual beliefs and from the ‘character ethic’ deeply embedded in
the American ‘Success’ literature (Fort 1997, Jackson 2001). While Covey’s ideas place far
greater emphasis on this-worldly success than Karma-yoga does, we must keep in mind that
his movement represents a selective adaptation and synthesis of Christian ideas and
American values. Christianity has historically been seen as ambivalent or even hostile
towards the accumulation of wealth and temporal success, hence Weber’s (1958) singling out
the importance of Calvinist doctrine in promoting the ‘worldly asceticism’ that enabled
Western Europe to make the institutional breakthroughs resulting in modern capitalism. Likewise,
Karma-yoga philosophy would require a parallel sort of contemporary interpretation to
influence management thought and practices in the United States. Covey’s Effectiveness
movement has managed to provide Christianity-derived ideas that resonate with a broad
American public, while speaking the secular language of business and not requiring the overt
acceptance of religious doctrine by secular audiences. Similarly, Karma-yoga has been
interpreted in the modern age (by Vivekananda, for example) as a heuristic,
experimental technique requiring neither the acceptance of religious doctrine nor self-conscious
religious avocation. However, the diffusion and influence of Karma-yoga based management
ideas would be facilitated by a potential audience for whom ideas from Indian and Eastern
philosophy resonate. Religious movements derived from eastern philosophies have a
significant presence in the United States, not least among the burgeoning immigrant
communities from Asia. Would the Karma-yoga philosophy hinder or facilitate the
integration of an organization? Complex organizations have been seen as having to
overcome basic problems of integration and cooperation, variously labeled as ‘collective
action’, ‘free-rider’, and ‘obedience to authority’ problems. The basic problem of integrating
complex organizations has been dealt with by solutions as varied as developing strong
organizational cultures and devising reward systems in accordance with principal agent
theory. The Karma-yogi would not be a self-interested agent, a free-rider, or a loyalist who
regards the organization as his very own. At the same time, there is a conservatism in
Karma-yoga, which encourages one to be tolerant of circumstances and not to be contentious
or rebellious. This conservatism of the Bhagavad Gita’s message has been criticized by some
Indian leftists as reinforcing the injustices of the caste-system (Dirks 2001).

IV. KARMA-YOGA AND INSTITUTIONAL CHANGE IN INDIA

Karma-yoga philosophy’s greatest cultural resonance should, logically, be in India.


Social mobilization in modern India has often been most successful when action has been
framed in language derived from the Hindu lexicon. Moreover, India has a vibrant civil
society which provides fertile soil for the social movements which are often the developers
and engines of the process transforming ideas into practices and institutional change.
However, the country has such an evenly poised balance of institutional strengths and
weaknesses that it defies easy optimism or pessimism. The Swadhyaya movement is an
exemplar of social and institutional change founded on ideas derived from Hindu philosophy.
The movement was founded by Pandurang Shastri Athavale in western India; ‘Swadhyaya’
means ‘self-study’, indicating to movement members that they should discover the ‘higher
self’ or divinity that is within them and in their fellow human beings. This cardinal idea of
Hindu philosophy was
mobilized by Athavale to promulgate the notion of human equality, and to motivate members
to build social capital (in the form of cross-caste friendships, inter-village associations,
programs of social welfare, and educational institutions). Athavale applied a social-action
frame to religious ideals by stating that his movement’s purpose was “to create a new man
who pursues the divine mission in which God is at the center…Since God is with us and
within us, he is a partner in all our transactions. Naturally, he has his share…”. This logic
serves as the basis for organizing Yogeshwar Krishi (divine farming) in which farming
communities set aside land and labor to produce for the needy, and Matsya Gandha (floating
temple) in which fishing communities collectively purchase boats to serve charitable
purposes. Government service is another institution in great need of reform. The Indian
Administrative Service, a direct legatee of the Indian Civil Service (the famed ‘steel frame’
of the British Raj), is a highly elitist corps designed and suited to the maintenance of an
imperial system, more than public service in a democracy. In fact, many government
structures in India represent vestiges and legacies not only of the British Raj, but also of the
Mughal Imperium that preceded it, amounting to over four centuries of imperial rule.
Governance suffers as a result of this chasm between government officials and the people.
Administrative reforms in India, as in other countries, would surely involve steps such
as opening up human resources and staffing practices beyond the monopoly of an elite
civil service cadre, but an even more fundamental task is to transform the ideological and
cultural foundations of government service. Swami Ranganathananda, a leading spokesman
of the Ramakrishna Mission, attempted to re-think the foundations of democratic
administration in India in his lectures to public administrators, collected and published
under the title Democratic Administration in the Light of Practical Vedanta (1996). The
Karma-yoga philosophy is likely to serve as a pillar in the ideological rethinking of public
service in India.

REFERENCES

Swarupananda, S. (translator). Bhagavad Gita. Mayavati: Advaita Ashrama, 1996.
Cameron, Kim and David Whetten. Developing Management Skills, 6th ed. Upper Saddle
River, New Jersey: Prentice-Hall, 2005.
Dirks, Nicholas. Castes of Mind. Princeton, New Jersey: Princeton University Press, 2001.
Fort, T. “Religion and Business Ethics: The Lessons from Political Morality”, Journal of
Business Ethics, 16 (3), 1997, 263-270.
Hackman, J.R. and G. Oldham. Work Redesign. Reading, MA: Addison-Wesley, 1980.
Jackson, B. Management Gurus and Management Fashions. London: Routledge, 2001.
Kapur, D. “The Causes and Consequences of India's IT Boom”, India Review, 1 (2),
2002, 91-110.
Perlow, L. “Boundary Control: The Social Ordering of Work and Family Time in a
High-Tech Corporation”, Administrative Science Quarterly, 43, 1998, 328-357.
Perlow, L. “The Time-Famine: Toward a Sociology of Work-Time”, Administrative Science
Quarterly, 44, 1999, 57-81.
Perrow, Charles. Complex Organizations. New York: Random House, 1986.
Pfeffer, Jeffrey. New Directions for Organization Theory: Problems and Prospects. New
York: Oxford University Press, 1997.
Ranganathananda, S. Democratic Administration in the Light of Practical Vedanta. Madras:
Sri Ramakrishna Math, 1996.
Satprakashananda, S. The Universe, God, and God-Realization: From the Viewpoint of
Vedanta. St. Louis, MO: The Vedanta Society of St. Louis, 1977.
Vivekananda, S. The Yogas and Other Works. New York: Ramakrishna-Vivekananda
Center, 1953.

CHAPTER 25

SPORT MARKETING

GOOD GAME, GOOD GAME: APPLYING SERVQUAL TO AND ASSESSING AN
NFL CONCESSION’S SERVICE QUALITY

Brian V. Larson, Widener University


brian.v.larson@widener.edu

Doug Seymour, Widener University


dsseymour@mail.widener.edu

ABSTRACT

Assessing quality has been vital for conventional organizations for years and now
sport organizations have begun to focus on it too. Rising ticket prices for fans, skyrocketing
team costs for owners, and increasing competition from other entertainment entities make
quality control central to many organizations. The most accepted technique for assessing
quality, SERVQUAL, is discussed and then applied to an NFL team’s concession experience.
The results deliver a first set of averages of how fans rate the key dimensions of service.
Results are reported and conclusions and recommendations are drawn.

I. INTRODUCTION

Every year millions of fans flock to their favorite sporting events, and the
way they assess the quality of their game day experience is becoming increasingly important
to venue managers, fans, and concession vendors. As ticket prices continue to increase, so do
fan expectations. In addition to the event itself, which is out of the marketer’s control, the
concession experience is one of the most influential elements affecting the fan’s experience.
Customers expect first-class service and selection to match the premium ticket prices. The
average fan in regular seats will spend nearly $20 at each NFL event on standard concessions
(Team Marketing Report 2005). In short, it has become apparent that a regular hotdog was
not going to do the trick (Buzalka 2000). Food vending is an enormous business within
sport: an estimated $9 billion is spent on foodservices at sporting events annually (King,
2004), of which $2 billion comes from the NFL’s suite / club seating alone (Cameron 2004),
a relatively new revenue stream. Suite holders are typically charged between $145 and $250
per person.

Moreover, the venue and its concession service provider have only limited opportunities to
establish a relationship of high-quality exchanges because of the nature of sport. “You only
have 10 games to make an impression on your guests” says Hans Williamson, president of
the sports and entertainment group for Levy Restaurants (Cameron, 2004). Professional
sporting events are also becoming increasingly costly for owners as the expenses of the game
(e.g., player salaries, equipment, maintenance, and new venues) continue to escalate. For
sport managers, increasing the value of the game day experience is a primary concern and critical
for the organization's survival. For team marketing professionals, understanding the variables
that affect the service quality perception is a key input into their resource allocation and
strategic marketing decisions. Moreover, for the providers who have been outsourced to
create the service, it is vital to continually improve the service because their business
customer (the team or venue) demands it and has the luxury of seeking contracts with other
providers if service quality isn’t good enough. The number of qualified vendors capable of
serving at major venues has intensified the competition for stadium and arena contracts. If a
vendor fails to satisfy the team’s fans with valued food experiences, then the team can readily
choose another food service provider. Good suppliers must provide outstanding service. In
this sense then, service quality is important to the fan as a valued part of the game day
experience, to the team as an important attribute of the total sport product sold to the fan, and
to the outsourced supplier as a business-to-business differentiation tool. This paper discusses
service quality perception as it applies to an NFL team’s concession experience, using the
RATER model of service quality. Following a literature review of sport service quality
assessment, we report an empirical study in which the dimensions of service are explored and
assessed. The paper concludes with a presentation of the results, discussion, and suggested
implications.

II. SERVICE QUALITY ASSESSMENT IN SPORT

As in other service industries, it is not enough in the sport industry to produce
adequate service encounters; it is crucial for a company to hire, train, and motivate employees
to consistently provide quality service. To do that, a company must listen to what exceptional
service means to customers and incorporate that feedback into its vision and training
programs. In a service, customer perceptions of service quality are the measures used. The
accepted way to measure customer perceptions is to use the SERVQUAL model to identify
and understand customer expectations. SERVQUAL is a
service quality assessment tool that was created by Parasuraman, Zeithaml and Berry (1988)
to measure how customers perceive the quality of the service being provided. It has been
shown that consumers tend to use the same basic criteria no matter what type of service is
being provided (Parasuraman et al., 1988). The original SERVQUAL model contains 22 questions that
measure the expectations consumers have about service quality and the perceptions of what is
actually delivered during their experience. These 22 questions are broken down into five
dimensions, easily remembered with the acronym RATER: Reliability, Assurance, Tangibles,
Empathy, and Responsiveness. Using a Likert scale ranging from “strongly disagree” to
“strongly agree”, customers’ perceptions can be gauged by asking them items related to the
five dimensions (Hudson, 2004). Each of these dimensions is discussed below as it applies to
sport.
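The expectation-versus-perception logic behind SERVQUAL can be sketched in a few lines. The following is a minimal illustration only; the dimension scores are invented for the example and do not come from the study:

```python
import numpy as np

# Hypothetical mean 1-5 Likert scores per RATER dimension:
# what customers expected vs. what they perceived was delivered.
dimensions   = ["Reliability", "Assurance", "Tangibles", "Empathy", "Responsiveness"]
expectations = np.array([4.8, 4.5, 4.2, 4.0, 4.6])
perceptions  = np.array([4.2, 3.7, 4.1, 4.0, 4.1])

# SERVQUAL gap score: perception minus expectation per dimension.
# A negative gap means the service falls short of what was expected.
gaps = perceptions - expectations
for name, gap in zip(dimensions, gaps):
    print(f"{name:<15s} gap = {gap:+.1f}")
```

In the full instrument each dimension's score is the mean of its three to five items, but the gap arithmetic is the same.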

III. DIMENSIONS OF SERVICE QUALITY IN SPORT

Reliability is the service quality dimension that measures the ability to perform the
service dependably and accurately (Parasuraman et al., 1988). It has been called the most
important dimension. Training an employee for a specific job includes making sure that
customer satisfaction is the top priority: the proper way to greet a customer, how to provide
helpful information, and how to accurately address questions the customer may have
(Czaplewski, 2002). For rating the service’s reliability in the NFL venue environment, items
such as “The line moved quickly” were adapted (see Figure I). Assurance is the service
quality dimension that
measures the knowledge and courtesy of employees and their ability to convey trust and
confidence (Parasuraman et al., 1988). Training an employee to perform their job function
should also include providing a skill set that empowers them to make the right decisions.
This not only shows that the company hiring the employee has faith in them, but that they are
an important part of the organization. This also has positive benefits for the customer
receiving the service, such as making corrections instantly when services rendered do not
meet the customer’s expectations. The tangibles dimension takes into consideration the appearance
of physical facilities, equipment, personnel, and communication materials. Training the
employee the correct way to handle the job also plays a part in this dimension. It is important
the product and services are executed so that it is appealing to the consumer and done in a
clean, well lit, and comfortable setting. Facilities should also be designed to appeal to the
customer’s senses. A question might be “The employees were neat in appearance.” Empathy
is the service quality dimension that measures the perceived caring, individualized attention
the employees provide to each customer. Providing service that goes above and beyond the
expected service levels occurs when an employee displays empathetic qualities. Empathy is
difficult to instill in an employee because of its intimate nature; it manifests itself in smiles,
personal attention, and clear communications. Finally, responsiveness is the dimension that
captures the willingness to help customers and provide prompt service: customers feel a high
quality service provider is able and eager to give prompt and satisfactory service.

IV. METHODOLOGY

The study utilized a mall-intercept technique at an NFL team’s stadium during a 2005
regular season afternoon game (beginning at 1:00 EST). Twenty field researchers were
divided into five teams to cover the stadium systematically and approached attending fans at
random who had recently exited a concession stand within a research team’s assigned area.
Respondents from all areas of the stadium were solicited (upper concourse, club level, and
main concourse). Data were collected beginning three hours before kickoff (when
concession areas opened) until just after halftime, when the flow of fans visiting the
concession stands slowed. Respondents were approached by field researchers, invited to
participate in the survey, and offered a food coupon to promote participation. A total of 269
usable surveys were collected. Measures - The survey used to measure customer service
includes all five dimensions of the RATER model, with approximately three to five questions
per dimension (totaling 17 items). Five-point Likert scales ranging from Strongly Disagree to
Strongly Agree were used, which allowed for the measurement of the difference between
customers’ expectations and perceptions of the actual service they received (Brown,
Churchill, and Peter 1993). The single-page survey finished with multiple standard
demographic questions (see Figure I).

Figure I. Service Quality Questionnaire

Thank you for your help by completing this survey. All answers are confidential.
Circle the number that best reflects your concession stand experience.

(Scale: 1 = Strongly Disagree, 5 = Strongly Agree)
The line moved quickly. R1 1 2 3 4 5
I received exactly what I ordered. R1 1 2 3 4 5
I received what I ordered quickly. R1 1 2 3 4 5
Staff appeared well trained to handle the job. A2 1 2 3 4 5
The staff greeted me in a friendly manner. A2 1 2 3 4 5
Staff recommended additional items to purchase. A2 1 2 3 4 5
The concession stand was clean. T3 1 2 3 4 5
The condiment area was clean. T3 1 2 3 4 5
The condiment area was well stocked. T3 1 2 3 4 5
The employees were neat in appearance. T3 1 2 3 4 5
The menu was easy to read. T3 1 2 3 4 5
The food presentation met my expectations. T3 1 2 3 4 5
The quality of the food met my expectations. T3 1 2 3 4 5
The drinks were worth the price. T3 1 2 3 4 5
The employee greeted me with a smile. E4 1 2 3 4 5
The staff seemed happy to provide service. E4 1 2 3 4 5
The staff seemed thankful for my patronage. E4 1 2 3 4 5
The staff displayed willingness to help. R5 1 2 3 4 5
The staff provided prompt service. R5 1 2 3 4 5
My overall concession stand experience was positive. R5 1 2 3 4 5
I will return to this concession again. 1 2 3 4 5

Age ______ Gender________


Racial group: O White O Black O Hispanic O Asian O Other __________
Annual Household Income $______________
Highest education level completed: O Some high school O High school O Some
trade school O Trade school O Some college O College O Some graduate
school O Grad. School
Number of times you have frequented concession stands at XXX this year: _______
I frequent concession stands: O Before the game O Halftime O During the
game
How many events will you attend this year at XXX? _________
R1 – Reliability, A2 – Assurance, T3 – Tangibles, E4 – Empathy, R5 – Responsiveness

Questions were developed based on the original RATER model (Parasuraman et al. 1988)
and in cooperation with the host management team to capture issues crucial to their specific
business environment. It is not uncommon to adapt service quality assessment items to
accommodate specific industry needs, and it may even be necessary in order to collect more
pertinent information (Eastwood 2005).

V. RESULTS

Participants - The final sample comprised fewer females (22%) than males (78%), and
ages ranged from 13 to 81 years, although the majority (92.0%) fell between 18 and 69 years
of age (mean = 36.84; SD = 13.86). Most of the respondents were white (88.1%); the next
largest racial group represented was African-American (17%). The mean annual household
income reported by the solicited fans was $102,118; however, 70% of the respondents
reported making less than that, and the median annual household income was $85,000. Most
respondents reported higher education levels: fewer than 20% reported having less than
“some college” as the highest education level completed.
Behaviors of the fans were also collected. On average, fans reported having visited
the concession area 3.6 times during the game. The largest share of those visits (24%)
occurred before the game began. The respondents attended, or expected to attend, about six
games (5.58) over the season. Most (72.3%) planned to attend at least eight games.
Service Quality Assessment - The 17 items used to assess the quality perceptions held by
fans were first organized into the five dimensions and tested for reliability using Cronbach’s
alpha (see Table I). Each scored .70 or above, indicating acceptability. Next, the dimension
averages were computed.
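The two computations in this step — Cronbach's alpha per dimension and the dimension mean — can be sketched as follows. The Likert responses below are fabricated for illustration; the study's raw data are not reproduced here:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 Likert responses from five fans to a three-item dimension
responses = np.array([[4, 5, 4],
                      [3, 3, 4],
                      [5, 5, 5],
                      [4, 4, 3],
                      [2, 3, 2]])

alpha = cronbach_alpha(responses)   # internal-consistency reliability
mean_rating = responses.mean()      # the dimension's mean quality rating
```

An alpha of .70 or above is the conventional acceptability threshold applied in the paragraph above.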

Table I
Dimension        Cronbach α    Mean Quality Rating
Reliability        .7434            4.229
Assurance          .7496            3.697
Tangibles          .8574            4.149
Empathy            .8839            3.961
Responsiveness     .7800            4.089

As the results above indicate, the Reliability dimension was rated the most positively
by the NFL team’s fans, with a 4.2 on a scale of 1 to 5. In descending order of performance,
the remaining SERVQUAL dimensions were Tangibles, Responsiveness, Empathy, and
Assurance. According to Berry et al. (2003), reliability is the most important dimension.

VI. CONCLUSION

Gauging customers’ perceptions of the quality of service being offered is important to
companies to help retain customers. Returning customers will lead to additional sales to help
an organization to grow. Companies must focus on hearing customers’ concerns in order to
change and modify employee training. To ensure quality service, every employee who
comes in contact with a customer must possess the right skills to respond quickly and
effectively to all needs. Therefore, organizations must train each employee how to provide
great service to customers and how that service plays an important role in customer retention
(Keele, 1994). Our study is an early effort to apply the RATER model to a professional sport
service setting. In doing so, we adapted a scale to work within a sport service environment
and tested it. The results establish a baseline of averages for the five dimensions of
SERVQUAL. In accordance with previous literature, Reliability was rated the highest.

REFERENCES

Black, B. (2000, August). The application of SERVQUAL in a district nursing service.
Retrieved March 15, 2005, from www.touchmedia.co.uk
Brown, T., Churchill, G., & Peter, J.P. (1993, Spring). Research note: Improving the
measurement of service quality. Journal of Retailing, Vol. 69, Iss. 1, p. 127
Buzalka, M. (2000). Catering to the suite life. Food Management, Vol. 35, Iss. 7, pp. 54-58
Cameron, S. (2004). The Frill of it All. Amusement Business, Vol. 116, Iss. 26, pp. 14-15
Czaplewski, A.J., Olson, E., & Slater, S. (2002, Jan/Feb). Applying the RATER model for
service success. Marketing Management, Vol. 11, Iss. 1, pp. 14-18
Dolezalek, H. (2004, July). Boot Camp Brewhaha. Training, Vol. 41, Iss. 7, p. 17
Eastwood, D., Brooker, J., & Smith, J. (2005). Developing marketing strategies for green
grocers: An application of SERVQUAL. Agribusiness, Vol. 21, Iss. 1, p. 81
“How to create a Service Quality Survey / How to build a Service Quality Survey.” Retrieved
March 15, 2005, from http://www.surveyz.com/howto
/how%20to%20build%20service%20quality%20surveys.html
Hudson, S. & Graham, A. (2004). The measurement of service quality in the tour operating
sector: a methodological comparison. Journal of Travel Research, Vol. 42, pp. 305-312
Keele (1994). Keeping the Customer Satisfied. Health Manpower Management, Vol. 20,
Iss. 4, p. 11
King, P. (2004). Home Park Advantage. Nation’s Restaurant News, Vol. 38, Iss. 13, p. 17
Nitecki, D. SERVQUAL: measuring service quality in academic libraries. Retrieved March
15, 2005, from http://www.arl.org/newsltr/191.servqual.html
Parasuraman, A., Zeithaml, V., & Berry, L. (1988). SERVQUAL: a Multiple-Item Scale for
Measuring Consumer Perception. Journal of Retailing, Vol. 64, Iss. 1, pp. 12-29
Parasuraman, A., & Zeithaml, V. (2003). Ten Lessons for Improving Service Quality. MSI
Reports Working Paper Series, No. 03-001, pp. 61-82
“The Rater Model” (2002). Monash University. Retrieved March 15, 2005, from
www.adm.monash.edu.au/cheq/support/rater_model.html
Team Marketing Report. Retrieved January 10, 2006, from
http://www.teammarketing.com/fci.cfm?page=fci_nfl_05.cfm

CONVERGENCE IN MISSISSIPPI: A SPATIAL APPROACH.

Mihai Nica, Jackson State University


Mihai.P.Nica@jsums.edu

Ziad Swaidan, University of Houston Victoria


swaidanz@uhv.edu

ABSTRACT

This study analyzes the convergence process in Mississippi at the county level, from both
a descriptive and a general test perspective, applying a spatial statistics framework. Mississippi
makes an interesting case study for analyzing the income convergence process because of
several characteristics, such as its fairly large number of counties, its relatively homogeneous
economy, and its low income compared with the rest of the U.S. The study finds evidence of
low but significant spatial correlation, suggesting an almost pattern-free spatial distribution of
per capita income growth. It also finds significant evidence of β convergence, albeit at a low
speed (less than one percent).

I. INTRODUCTION

One of the most intriguing research topics, designated by some researchers as the
“regional scientist’s art” (Plane 2003, p. 105), is the permanent growth and change at the
regional level. Indeed, the question of whether inequalities between different regions (countries
and their subdivisions) tend to decrease over time, and whether the process is endogenous, has
always preoccupied economists. But although research in the economic growth area is common,
the econometric (and other) issues underlying the topic are still highly debated. Thus some
scholars plead for moving from general tests to “statistical descriptions of what is happening
coupled with a forecasting mechanism” (Carvalho and Harvey 2002).

This study analyzes β convergence for real income within a U.S. state, namely
Mississippi, over the 1969 – 2001 period, combining both a descriptive and a general test
perspective. Mississippi has a mix of characteristics that makes such a study interesting. First,
the absence of trade barriers of any kind (including the less important interstate barriers)
allows for an absolute convergence approach. Second, the problem of different standards and
imperfect conversions amongst the data, which may lead to biases (Dowrick and Nguyen
1989, Dowrick and Quiggin 1997), is avoided; indeed, there should be little reason to
distinguish between conditional and absolute convergence in this case (Barro and
Sala-i-Martin 2004, Carvalho and Harvey 2002). Third, the low per capita income compared
with the U.S. also makes Mississippi an interesting case, since extremes are known to behave
unpredictably.

The study finds a relatively low, albeit significant, level of spatial correlation within
the area. Tests of the OLS versus the spatial error model as the best specification for a general
convergence test seem inconclusive, although they appear to favor the spatial model.
However, relatively strong support for absolute convergence is found, even if at a low speed
(less than one percent). The next section reintroduces the reader to some basic convergence
concepts and the most common model specifications, as well as their interpretations. After
presenting and commenting on the estimation results, the study ends with conclusions and
suggestions for further research.

II. CLASSICAL CONVERGENCE

Theory. The concepts used in classical convergence studies can be illustrated
mathematically as follows. The average growth rate of variable Y, corresponding to the time
interval [t, T], may be expressed as (Barro and SM 2004):

\[
\frac{\ln y_{t+T} - \ln y_t}{T} = g + \frac{1 - e^{-\beta T}}{T}\left(\ln \hat{y}^{*} - \ln \hat{y}(0)\right) \tag{1}
\]

where y represents the per capita (or per effective worker, or per hour worked) level of
income (or other analogous indicator) Y, g represents the exogenous rate of growth of
technological progress, $\hat{y}^{*}$ represents the steady state of the economy, and
$\hat{y}(0)$ its initial value, both in intensive form. It is easy to see that in this model y’s
average rate of growth depends on both β and the distance between the initial and the steady
state of the economy. Indeed, the larger β and the distance between the initial level and the
steady state level, the higher the average growth rate.

It is said that absolute β convergence exists when poor economies tend to grow faster
than the rich ones while all possible factors that govern the phenomenon are endogenous
(Barro and SM 2004). To model such a situation one would assume that $\hat{y}^{*}$ has a
common value for all economies under study, so that the growth rate depends only on
$\hat{y}(0)$, as suggested by Baumol (1986). Then, if the coefficient of $\hat{y}(0)$ is
statistically significant, one may conclude that the sample exhibits absolute convergence.

On the other hand, conditional convergence exists when there are other variables
influencing the speed of convergence, and these variables differ between economies, being
therefore area specific. Such variables may lead to different steady states $\hat{y}^{*}$, and
therefore the growth rate for each economy would depend not only on the initial conditions
but also on these variables. Finally, σ convergence occurs when the dispersion of the real
income (or other measure of economic relevance) of a group of economies tends to decrease
over time. It can be demonstrated that β convergence is a necessary but not sufficient
condition for σ convergence (Barro and SM 2004).

Model specification. One of the simplest empirical specifications for a model allowing testing
for convergence was proposed by Baumol (1986) and is the starting point for many
contemporary studies. While the theory behind it might have been less formal (from both
an economic and an econometric point of view), it is simple and provides a robust study
framework. Moreover, it was demonstrated later that the model is in line with economic
theory. Ignoring the economy subscript (i), the model is:

\[
\frac{\ln y_{t+T} - \ln y_t}{T} = \alpha - \beta^{*}\ln(y_t) + \varepsilon,
\qquad \text{where } y_t = \frac{Y_t}{L_t} \tag{2}
\]

Here Y represents income, L labor, and T the time interval under analysis. The norm in the
literature is to compute the growth rate over the entire time period for which data are
available and annualize it, and to standardize income by dividing by population, number of
active workers, hours worked, or another indicator. Obtaining a statistically significant β* is
then interpreted as evidence that areas with lower income at the beginning of the period (time
t) grow faster. Since the average growth rate depends only on the initial y, such evidence
would indicate absolute convergence. The underlying convergence speed is obtained from
the following formula:

\[
\beta = -\frac{\ln(1 - \beta^{*} T)}{T} \tag{3}
\]

The convergence speed was found to be somewhere between 1.5 and 3.0 percent by several
previous studies (for various regions and time intervals).
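Under specification (2), the estimation reduces to an OLS regression of the annualized growth rate on initial log income, with the speed recovered from equation (3). A minimal sketch follows; the five county observations are invented for illustration, not drawn from the Mississippi data:

```python
import numpy as np

# Hypothetical log real per capita income for five counties
# at the start (1969) and end (2001) of the period.
ln_y0 = np.array([8.60, 8.80, 9.00, 9.20, 9.40])
ln_yT = np.array([9.42, 9.55, 9.70, 9.80, 9.92])
T = 32  # years

growth = (ln_yT - ln_y0) / T  # left-hand side of equation (2)

# OLS of equation (2): growth = alpha - beta* . ln(y_t) + eps
X = np.column_stack([np.ones_like(ln_y0), ln_y0])
coef, residuals, rank, sv = np.linalg.lstsq(X, growth, rcond=None)
alpha_hat, slope = coef
beta_star = -slope  # the model's coefficient enters with a minus sign

# Implied convergence speed from equation (3)
beta = -np.log(1 - beta_star * T) / T
```

With these invented numbers β* is about 0.0117 and the implied speed β is roughly 1.5 percent per year, the lower end of the range reported above; a statistically significant positive β* would be read as evidence of absolute convergence.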

But even if the economy under scrutiny is homogeneous enough to be analyzed under an
absolute convergence assumption, an important effect may be introduced by possible spatial
dependence in the data. While spatial dependence has only relatively recently begun to play
an increasingly explicit role in economics and econometrics, several scholars have pleaded
for shifting the focus of research from treating areas of interest as “islands” to taking into
consideration the spatial dimension of the phenomena (Quah 1996). Consequently, the
classical convergence tests might need to be augmented to take into consideration possible
spatial dependencies. Such “spatial” models may be (but are not limited to) a spatial Durbin
or a spatial error model. The decision on the best specification relies, as usual, on theory and
econometric tests, and several recent papers describe such approaches (Anselin 2002).

III. DATA AND MODEL ESTIMATION

Data. The data used in this study is compiled from the REIS system of the Bureau of
Economic Analysis (Bureau of Economic Analysis 2003), and consists of yearly realizations
of “Personal income” and “Population” for the 1969 – 2001 interval. The data and
methodology are described in the CD-ROM notes and on the web, and therefore need not be
discussed here. The “Personal consumer expenditure: Chain-type price index” was used as a
deflator for calculating real per capita income. The data is relatively well known, having been
used in several other studies (Boasson 2002; Higgins et al. 2003). The first step is to visually
assess the strength of the relationship between per capita income growth and initial per capita
income.

Figure 1. Growth scatter plot, 1969-2001.

Figure 1 suggests a negative relationship between the initial LRPI and the real per capita
income growth (LRPIGR), and therefore possible β convergence.

A second step is to understand the patterns of spatial correlation in the data and to
identify possible spatial clusters. Figure 2 reveals the map of the LRPIGR values, which
helps one visualize the counties with the highest and lowest growth, as well as possible
spatial patterns in the data. The county with the largest average annual growth (3.8%) is
Madison, situated in the middle of the state. It is interesting to observe that the counties
where gambling became an important part of the economy (gambling was established in the
late 1990s in Tunica, Coahoma and Bolivar, and the growth of the industry was astonishing)
seem to have benefited, since they are in the group of counties with relatively high growth.
As expected, the counties with the lowest growth are situated mostly in the Mississippi Delta.
Figure 2. Spatial distribution of real income growth.

The degree and significance of the global spatial correlation in the data is assessed
with the help of the Moran’s I statistic. Mississippi’s spatial neighborhood structure is
characterized by an average of 5.48 neighbors per county (based on the queen neighborhood
definition). The computed Moran’s I is .1768, with a p value of about .002 after 999
Monte-Carlo randomizations. Corresponding to the above Moran statistic, the standard
deviation of the LISA statistics is .4551. Table 1 shows the locations that qualify as possible
outliers in the spatial distribution of the LISA statistics, as well as the associated p values.
The outliers were established with the two-times-standard-deviation rule. There are
three such locations, of which only Scott County has a negative Local Moran value
(indicating negative spatial correlation). Naturally, all outliers also appear as possible clusters
in the analysis.

Table 1. LRPIGR spatial distribution outliers.

FIPS     County     ST     LISA       z-score     p-value
28045    Hancock    MS     1.1983     2.4803      0.0131
28047    Harrison   MS     1.1050     2.2885      0.0221
28123    Scott      MS    -0.7975    -2.1733      0.0298

Figure 3 shows the map of the clusters (as suggested by the LISA statistics, at a
significance level of .05, after 999 randomizations) based on the per capita income growth for
Mississippi as well as for the surrounding counties. It appears that, while the overall spatial
correlation is significant, there are relatively few clusters. Indeed, there are two high-high
clusters (Leake and Marshall counties, H-H in the legend), two high-low (Warren and Itawamba
counties, H-L in the legend), three low-high (Scott, Lauderdale, and Winston counties, L-H in
the legend), and one large low-low cluster (Pearl River, Hancock, Harrison, Jackson, George and
Stone counties, L-L in the legend). The latter is the clearest spatial cluster, situated in the
southern region of the state and composed of six low-growth counties (three of which are
adjacent to the Gulf of Mexico). As can be observed from the maps, the counties
surrounding the state border are maintained in the sample throughout the analysis. This
approach assures correct statistics for all calculations where a first-degree neighborhood
matrix is involved.

Figure 3. Cluster map, real per capita income growth.

Estimation. There are several examples of studies that looked at different regions and
time periods to assess the degree to which the classical convergence framework holds. They
employed different methodologies and their findings are often contradictory, but in the case
of the U.S. many studies reported convergence at a speed of around two percent (for a review
of such studies see Barro and Sala-i-Martin 2004). However, researchers found that
divergence may not be ruled out in certain cases, and suggested that the possibility of the
formation of “clubs” should also be considered (the term “clubs” was coined by Quah 1996,
who suggested that the distribution of growth patterns may be bimodal, or even multimodal).
Moreover, they also found that, for certain time periods at least, divergence may appear even
within countries, suggesting that relatively similar economies do not necessarily converge
(Evans and Karras 1996). Examples of regions exhibiting very weak or no support at all for
convergence are Austria (Hofer and Worgotter 1997) and Greece (Siriopoulos and Asteriou
1998).

Several convergence studies that tested the assumption of spatial autocorrelation in
their data found a spatial econometrics approach to be appropriate. For example, Rey and
Montouri (1999) analyzed U.S. state-level per capita income data for the 1929 – 1994 period
and found significant spatial autocorrelation, suggesting that any estimation overlooking it
may be misspecified. They estimated a cross-regressive model similar to the one in this
study and found a convergence speed of about 1.9 percent, a value very close to the well-
known two percent. Moreover, they suggest that taking the spatial correlation into
consideration improves the specification. The path for deciding whether a model should be
augmented to take possible spatial dependence into consideration is again similar to time
series methodologies. In the first step a classical OLS regression is performed and the
residuals are analyzed with the help of tests such as Moran’s I, the Lagrange multiplier (LM),
and the robust LM against alternate specifications. Based on the results, a spatial model is then
considered.
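The first diagnostic step, computing Moran's I on a set of values with a Monte-Carlo pseudo p-value, can be sketched in a few lines of numpy. The function names and the toy contiguity matrix below are my own illustration, not the software used in the study:

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I of values x under a symmetric binary spatial
    weights matrix W with zero diagonal."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()                       # deviations from the mean
    return (len(x) / W.sum()) * (z @ W @ z) / (z @ z)

def moran_pvalue(x, W, n_perm=999, seed=0):
    """Pseudo p-value from n_perm random permutations, mirroring the
    999 Monte-Carlo randomizations reported in the text."""
    rng = np.random.default_rng(seed)
    observed = morans_i(x, W)
    hits = sum(morans_i(rng.permutation(x), W) >= observed
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

# Toy example: four regions on a line, values trending along the chain,
# so positive spatial autocorrelation is expected.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
values = np.array([1.0, 2.0, 3.0, 4.0])
I = morans_i(values, W)   # 1/3 for this configuration
```

The same statistic applied to OLS residuals (rather than raw values) is what motivates the move to a spatial error or spatial Durbin specification.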

Table 2 reveals the estimation results for the “classical” OLS regression as well as for the
spatial model believed to fit the data best. The OLS results suggest an acceptable fit for this
type of model and data, while the maximum likelihood model brings no significant changes.
Moreover, although the diagnostic statistics for the OLS estimation suggest weak
heteroscedasticity (White test marginally significant, with a p value of .0422) and the spatial
diagnostic tests suggest a spatial error model (the Moran’s I p value is .0171 and the LM test
for the error model has a p value of .0330), the Akaike and Schwarz criteria are both
slightly larger for the ML model, and the LR test for spatial dependence is only marginally
significant (p value .0478). However, for the spatial model, the Breusch-Pagan test does not
indicate significant heteroscedasticity (p value .0563), suggesting a better specification.

Table 2. Estimation results.

Variable           OLS Coefficient   t-stat.      ML Coefficient   t-stat.
Constant                0.1132       7.1436            0.1109      6.7094
LRPI 1969              -0.0098      -5.5894           -0.0095     -5.2165
Lambda                    -             -              0.2654      2.0782
R2                      0.2066                         0.2434
F statistics           31.2414                            -
AIC                  -1010.49                       -1014.54
SIC                  -1004.88                       -1008.93
Note: As in (7), lambda stands for the coefficient of the lagged error.
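The OLS specification in Table 2 (annualized growth regressed on a constant and initial log income) can be illustrated on synthetic data; all numbers below, including the coefficients and the seed, are illustrative and are not taken from the study's dataset:

```python
import numpy as np

# Synthetic counties: annualized growth declines with initial log income,
# mimicking absolute beta-convergence (illustrative numbers only).
rng = np.random.default_rng(42)
n = 82                                    # Mississippi has 82 counties
ln_y0 = rng.uniform(8.6, 9.6, n)          # initial log real per capita income
growth = 0.113 - 0.0098 * ln_y0 + rng.normal(0.0, 0.004, n)

# OLS regression of annualized growth on a constant and initial log income
X = np.column_stack([np.ones(n), ln_y0])
alpha_hat, beta_star_hat = np.linalg.lstsq(X, growth, rcond=None)[0]
# A negative slope (beta_star_hat < 0) is the evidence for beta convergence;
# the residuals would then be checked for spatial dependence (e.g. Moran's I).
```

In practice the residual diagnostics decide whether this plain OLS column or a spatial error specification (the ML column) is retained.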

In both cases β* is highly significant and has a fairly close value, which is taken as evidence
that real per capita income in Mississippi converges. In both cases the speed of
convergence is about .8 percent, much less than the two percent that part of the literature
suggests as a universal constant. More research is needed to understand why the convergence
speed is so low. This result is unexpected especially since, at the county level, many people
work in a different county than the one in which they live, a movement that would tend to
equalize income growth.

IV. CONCLUSION

This study investigates the convergence process at the county level for Mississippi,
for the 1969 – 2001 interval, from both a descriptive and a general test approach. It finds
indications of low but significant global spatial correlation, but a relatively low number of
spatial clusters, suggesting a spatially unorganized economy. Applying both a classical and a
spatial approach, the study finds significant evidence of real per capita income convergence
amongst the counties in Mississippi. The convergence speed of about .8 percent, however, is
lower than the two percent speed suggested by other authors as “standard”.

REFERENCES

Baumol W. J. “Productivity Growth, Convergence, and Welfare: What the Long-run Data
Show.” American Economic Review. 76, 1986, 1072-1085.
Boasson E. The Development and Dispersion of Industries at the County Scale in the United
States 1969 – 1996: An Integration of Geographic Information Systems (GIS),
Location Quotient and Spatial Statistics. Ph.D. dissertation, State University of New
York, Buffalo, 2002.
Dowrick S. and Nguyen D. “OECD Comparative Economic Growth 1950-85: Catch-up and
Convergence.” American Economic Review, 79, 1989, 1001-1020.
Dowrick S. and Quiggin J. “True Measures of GDP and Convergence.” American Economic
Review. 87, 1997, 41-64.
Evans P. and Karras G. “Do Economies Converge? Evidence from a Panel of U.S. States.”
Review of Economics and Statistics, 78, 1996, 384-388.
Hofer H. and Worgotter A. “Regional Income Convergence in Austria.” Regional Studies,
31, 1997, 1-12.

Plane D. A. “Perplexity, Complexity, Metroplexity, Microplexity: Perspectives for Future
Research on Regional Growth and Change.” Review of Regional Studies, 33, 2003,
104-120.
Quah, D. T. “Twin Peaks: Growth and Convergence in Models of Distribution Dynamics.”
Economic Journal, 106, 1996, 1045-1055.
Rey S. J. and Montouri B. D. “US Regional Income Convergence: A Spatial Econometric
Perspective.” Regional Studies, 33 (2), 1999, 143 – 156.
Siriopoulos C. and Asteriou D. “Testing for Convergence across the Greek Regions.” Regional
Studies, 32 (6), 1998, 537-546.

CHAPTER 26

STRATEGIC MANAGEMENT AND MARKETING

EXPLORING CRITICAL STRATEGIC MANAGEMENT

Kok Leong Choo, University of Wales Institute, Cardiff


lchoo@uwic.ac.uk

ABSTRACT

This article examines the tenets of Critical Strategic Management based on the latest
ideas and debates amongst critical strategic management scholars. The perspective subsumes
wider cultural, political, and moral ramifications into the strategy process to reflect the reality
of the wider social world, and argues against a technocratic approach that is preoccupied with
instrumental rationality and preserves the sectional interests of the elite managers who run
corporations only to improve corporate profitability. Critical Strategic Management resonates
with the paradigm of the power and political school perspective and draws on postmodern
and critical theories to question the neutrality of the strategy process embedded in an
organisational context. The perspective suggests that a conception of emancipation be
introduced in the content, process and context of strategising to improve corporate
performance for the betterment of wider society.

I. INTRODUCTION

Many of those engaged in Critical Management Studies, who have come to be known
as critical management scholars, myself included, feel impelled to do more than simply
communicate our critical thoughts to each other. For many of us, some form of practical
engagement as activists is an essential part of our identities as critical management scholars,
in other words, changing the management world as we find it and in so doing affirming
ourselves as active agents. This article is an attempt to move the new paradigm of critical
strategic management beyond the language of critiques into practice and perspective. In other
words, to make a contribution to the development of a critical understanding of strategic
management by examining the tenets and providing a clearer practical foundation for further
research in the field. The first section of this article examines the minefield of orthodox
strategic management in its current form. The second section examines the latest
contributions to the development in the strategy field by critical scholars who examine
strategic management from a broader critical perspective i.e. one that has significant cultural,
social and political ramifications within organizations and in the wider society. The third
section critiques these latest critical ideas, thinking and contributions to the strategy field.
The final section concludes by suggesting caution about some of the embedded problematic
premises, presumptions and presuppositions of the critical approach.

II. MINEFIELD OF ORTHODOX STRATEGIC MANAGEMENT

For better or worse strategic management has become an academic discipline in its own
right, with its own academic journals, conferences and common body of knowledge embedded
in more than fifty books under the title of ‘Strategic Management’. For the most part, the field
has evolved into a set of taken-for-granted assumptions that the development of strategy in
contemporary organizations is a relatively straightforward rational process, based upon the
simplification or dichotomy of management subject disciplines. There is also a positivist
assumption that there are prescriptive techniques which are able to determine an organization's
strategic direction and long-term performance and to integrate the entire scope of decision-making
activity within an organization by simply performing the five common and fundamental
technocratic tasks of environmental analysis, strategic formulation, strategic implementation and
strategic control. However, the procedural school of strategy (Mintzberg, 2004) provides a
skeptical critique of such an approach and argues that this simplification imperative manifests
itself in the subject being trapped in what Weber calls ‘technical rationality’ (Weber, 1978),
maintaining concerns for prescriptions, linearity and order, in other words, packaging
management knowledge into formalized information (Watson, 1996). Whittington
(2004) argues that technocratic strategic management is trapped in a positivist epistemological
strand of its own making. This author argues that such a simplified technocratic approach takes
for granted the historical and political conditions under which stakeholders’ priorities and
interests are determined and enacted. In other words, such a technocratic perspective can easily
overlook the non-linearity of the broader issues of power and politics and the concern for ethics,
domination and managerial assumptions which may have profound impacts on the corporation’s
long-term performance and wider society in general. The dilemma posed by the technocratic
approach stems from the fact that the embedded positivist thought seems to deny uncertainty and
the complexity of contemporary organizations and their business environment. This author
contends that the reality of the business environment of the twenty-first century is much more
complex than the technocratic strategic management model played out in most Business
Schools’ management education programs, or by voluminous strategic management text books.
The world has changed and there is very little fresh reliable or comprehensive empirical
evidence to date that convincingly proves a relationship between technocratic strategic
management and corporate successful performance (Eden and Ackermann 1998). There is also
very little fresh empirical evidence to suggest that technocratic strategic management contributes
to the development of a healthier, better and fairer industrialized society. According to one view,
the technocratic mode of strategy development has actually created a circle of sociological
problems attributed largely to the ‘taken for granted’ historical and political conditions under
which strategic decisions are determined and enacted by corporations (Whittington, 2004).
There is also empirical evidence to suggest that the problems are attributed to the grip of
industrial positivism and attempts made by corporations to rationalize the political and social
world. Levy, Alvesson and Willmott (2003) argue that the technocratic perspective of
strategizing in organizations is colored by a preoccupation with the instrumental values of
corporate performance and profitability, at the expense of the wider political and sociological
problems of the 21st century.

It can be reasonably argued that there is a need for an alternative approach to strategic
management that is not only relevant but also desirable and reflects the changing needs of wider
society and seeks to explore strategizing as a discursive process, one that has significant social
and political ramifications both within the corporation and in wider society. The key point is to
have a critical approach that moves away from certainty, towards an appreciation for pluralism
and diversity, towards an acceptance of social and political ambiguity and the paradox of
complexity rather than rationality. In other words strategic management should not only be
confined to a managerial perspective which helps corporate elitist managers just to improve
profitability, but also helps managers develop a richer conceptualization of the complexity of the
social and political world, and prepare them for the complicated understanding of value conflicts
found in contemporary organizations and in broader society. The next section explores the tenets
of critical strategic management based on the latest thinking and ideas of critical management
scholars.

III. THE TENETS OF CRITICAL STRATEGIC MANAGEMENT

Despite 30 years of academic teaching and research, the field of strategy still lacks
direction, respect, roles and contributions, and is replete with various competing fashions,
perspectives and directives (McKiernan and Carter, 2004). The relevance and desirability of the
strategy field to contemporary organizations and wider society has been widely challenged by
critical scholars over the last two decades (Knights and Morgan, 1991; Whittington, 1992; Bower
and Doz, 1997; Whipp, 1999; Levy, Alvesson and Willmott, 2003; Wilson and Jarzabkowski,
2004; Clegg, Carter and Kornberger, 2004; Knights and Muller, 2004; Ezzamel and Willmott,
2004; Starkey and Tempest, 2004). They are calling for a more critical approach to revive the
subject field, one that is examined as discourse and practice and has significant political and
social ramifications within corporations and wider society. They challenge the relevance and
desirability of orthodox strategic management that is embedded with positivism and governed by
managerial ideology and values. A set of identifiable tenets or beliefs of Critical Strategic
Management can be extracted from their work. It appears that a Critical Strategic Management
perspective is anchored by:
• a wider strategic context that subsumes significant political and social ramifications
within corporations and in wider society. The context is expected to go well beyond
managerial efforts to harness social and political knowledge and commitment to reflect a
system of values that are democratically acceptable within the corporation and wider
society. In other words, strategy-as-practice is expected to go well beyond the business
context to include charity organizations, non-profit making organizations and semi-
private organizations, ranging from regional economic development to the development
of social and political economy.
• a strategic content that subsumes liberal, cultural, social and political cognitive
discourses or ideology and does not just encapsulate positivist ideas where the values or
interests are being trapped in what Weber (1978) calls ‘instrumental rationality’. In other
words a strategic content is expected to include an awareness of corporate social capital
and network relationships that is less technocratic i.e. less rigid, closed and exclusive, but
based on good principles of hegemonic alignment of ideological, political and
economic issues.
• a strategic process that subsumes a cognitive frame of critical reflective learning that
questions the hidden positivist premises and presuppositions that are commonly
embedded as received knowledge and practice found in orthodox strategic management.
The critical reflective learning is expected to include opportunities to discuss what
Argyris (1996) calls ‘the undiscussable’, that is, asking questions that are usually not
asked. It is important to distinguish between apparently coherent sets of values, beliefs
and practices that are constructed and disseminated by strategists to explain and sustain
their legitimate position, and the assumptions that are concealed during practice. In other
words, the critical reflective learning is expected to include an opportunity to question
the neutrality of strategy, political knowledge and power in relation to different
stakeholders’ vested values and interests. The Habermasian concept of emancipation
(1972) is to be contained within such a process to foster pluralistic decision-making and
include stakeholders whose voices have been previously marginalized. In other words it
is expected to allow less privileged and powerless minorities to identify and contest
sources of inequality and unfair treatment, and question the implicit managerial
assumptions of the existing hierarchies and the strategic decision outcomes in relation to
the vested interests and values of the organization.

IV. CRITIQUES AND CONTRIBUTIONS OF CRITICAL STRATEGIC
MANAGEMENT

Critical Strategic Management resonates with the political and power strategy school
perspectives as described by Mintzberg, Ahlstrand and Lampel (1998). Its tenets draw
upon the premises of the power and political strategy school. For example, the strategic
process is shaped by power and politics, and strategizing is seen as the interplay of persuasion,
bargaining and sometimes confrontation in the form of political games. The engagement in
power and political games is to ensure that the strategic content embedded in the context is taken
seriously, and to see the organization as promoting its own welfare through maneuvering
as well as through collective means of combating domination and exploitation by elitist
managers. The political process also helps to open up space for resistance by labour,
environmentalists, and other forces challenging the practices or status quo of corporations
environmentalists, and other forces challenging practices or the status quo of corporations
and their elitist managers. One of the weaknesses of Critical Strategic Management is that it
overstates the interplay of power and politics and downplays the importance of leadership of
the entrepreneurial school perspective. By concentrating too much attention on the content
concerning divisiveness and politics, the strategic process may miss fruitful dialogue and
preclude or undermine a visionary leader’s ability to develop clear vision which may be
beneficial to wider society, even in some conflictive fashions. This author argues that the
significance of power and politics as played out in the critical strategic management
perspective may risk an impetuous, overconfident, dogmatic identification of dominant and
subordinate groups and their interests, which can preclude a critical reflection on wider social
goals and virtues. For example, while it is true to claim that political and power dimensions
can play a positive role in strategy development, particularly challenging the established and
legitimate form of domination, divisiveness and unethical practices of corporation elitist
managers as contended by critical scholars, it can also be a source of wastage and distortion
viewed from a wider societal perspective. This is highly prevalent in a situation when
corporations face severe pressure from environmental uncertainty, and when political
activists who have most power and inclination may cloud other critical issues to further their
own interests. This usually happens when ailing corporations are subjected to
intense market pressure and are unable to establish any clear direction or turnaround
strategies, when decision-making tends to become a free-for-all and powerful stakeholders
make claims to expertise, insight and authority that often reproduce or reinforce legitimate
organizational inequalities and unethical practices. Yet such significant issues are hardly
addressed in critical strategic management literature.

On the positive side, critical management scholars have introduced some useful
postmodernist and critical ideas to enrich the field, for example, concepts of domination,
coalition, emancipation, hegemony, exploitation and pluralism. These radical ideas hold out the
promise of revealing the taken-for-granted assumptions and ideologies embedded in the
discourse and practice of strategy and challenge its self-understanding as a politically neutral
tool to improve the organization’s long-term performance. It also highlights the need for
stakeholders to question the universality of managerial interests and bring to the surface latent
conflicts. It brings strategic content, process and context into closer scrutiny by fostering more
participative decision-making, from which the voices of the powerless were previously
excluded. Overall, the strategy development process is more likely to be of participative form,
negotiated through persuasion, bargaining or direct confrontation. Strategy that emerges from
such processes is more likely to embrace critical reflective learning. Finally, Critical Strategic
Management breaks new ground and frees us from the comfort provided by rational and linear
thinking, and relocates us in the modern environment where history, politics, power and culture
are the driving forces of change for the betterment of wider society.

V. CONCLUSION

This article examines the tenets of critical strategic management and its underpinning
theoretical perspective based on the latest ideas and debates among critical scholars. The
approach is consonant with the power and political school perspective. It challenges the
positivist thoughts of orthodox strategic management and argues that strategic management
should embrace a critical examination and understanding of the cultural, social, political, moral
and ethical issues, the fundamentals upon which any contemporary organizational reality rests.
The approach encourages the development of a system that is less coloured by the narrow
interests of top management and their preoccupation with instrumental values, but views the
strategic management process as having significant social and political ramifications. The
approach encourages critical reflective learning that draws upon postmodernist and critical
theories to conceptualize strategic management as a discourse and discursive practice. It helps
us to understand the importance of politics and power in promoting strategic change for the
betterment of wider society. However, the approach, like the power and political school
perspective, is based on some problematic premises, presumptions and presuppositions. It
assumes that politics is always dysfunctional, nothing more than a mechanism of control
used by the corporation to serve the profit-making interests of elitist managers, and that the
perspective can identify dominant and subordinated groups and their strategic interests and
intents in absolute terms, in a way that promotes divisiveness and excludes wider societal
goals and virtues. In many ways, the paradigm of Critical Strategic Management complements
the power strategy school perspective by infusing discursive critical ideas to stimulate strategic
change that is blocked by legitimate corporate systems and procedures and to ensure that all
stakeholders’ issues are fully and democratically debated. However, further empirical research
is needed to examine how and to what extent such a critical approach might benefit
organizations and wider society.

REFERENCES

Argyris, C. On Organisational Learning. Oxford: Blackwell, 1996.


Bower, J.L. and Doz, Y. “Strategy formulation: A social and political process.” In D.E.
Schendel and C.W. Hofer, eds., Strategic Management. Boston, MA: Little Brown,
1997.
Clegg, S., Carter, C., and Kornberger, M. “Get up, I feel like being a strategy
machine.” European Management Review, (1), 2004, 21-28.
Eden, C. and Ackermann, F. Making Strategy. London: Sage, 1998, 13.
Ezzamel, M. and Willmott, H. “Rethinking strategy: contemporary
perspectives and debates.” European Management Review, (1), 2004, 43-48.
Habermas, J. Knowledge and Human Interests. London: Heinemann, 1972.
Knights, D. and Morgan, G. “Corporate strategy, organizations, and
subjectivity: A critique.” Organization Studies, 12(2), 1991, 55-61.

TOWARD AN UNDERSTANDING OF RELEVANT STRATEGIC
ORGANIZATIONS: A FUZZY LOGIC APPROACH

Jean-Michel Quentier, ESCPAU School of Business, France


jean-michel.quentier@escpau.fr

ABSTRACT

The purpose of this paper is to attempt a preliminary development of a new model that
integrates strategic organizational analysis and the fuzzy logic approach. We suggest that in
today’s business environment, and depending on the stage of the business life cycle that firms
are facing, managers must pay attention to how important it can be to focus on either
efficiency or effectiveness when designing their firm’s organizational structure. The
preliminary conclusion of the study is that in a competitive environment with a short business
life cycle the question is no longer how efficient the firm’s organization might be, but how
relevant are the decisions made in terms of strategic alignment between the strategic
organization and the business strategy. In other words, how relevant is the organization to its
competitive environment and position?

I. INTRODUCTION

In today’s uncertain and global business environment, companies are facing several
organizational and structural challenges. To explore these new ways and approaches, we
suggest a multi-field research program crossing the fuzzy logic approach with that of
organizational development (OD). We believe that the findings of this multi-field research
can help managers respond to key questions such as: how to manage a company’s
organization in a global context; how to design or restructure a new corporate organizational
architecture; and how to build an organization that is strategically relevant to the firm’s
business life cycle. The central question of our research proposal is: how can firms’ managers
move from an efficient organization to a relevant strategic one? We make the assumptions that,
first, being an efficient organization is no longer enough today to compete in a business world
characterized by uncertainty, short business cycles, and a very dynamic competitive
environment that requires firms to be more flexible and adaptable. Second, at the early stage
of the business life cycle, decision-making about organizational structure might look for
effectiveness instead of efficiency. Third, at the later stages of the life cycle, say maturity and
decline, decision-making about organizational structure might look for efficiency instead of
effectiveness.

II. LITERATURE REVIEW

The organizational strategy is a framework of how an organization relates to its


environment. At this point, we can ask two questions: to what extent does the organizational
strategy determine the business strategy, and to what extent is the business strategy
constrained by the organizational strategy? The organizational strategy of the firm is what you see when
you stand at the organizational boundary and look inwards (Andrews, 1980). The
implications of this organizational strategy can result in a new way to measure the firm’s
organizational performance by using indicators such as customer’s share instead of market
share, customer’s image instead of brand image, and customers’ profitability instead of
market revenue.

It was Adam Smith who built the premise of what we call today “organization
theory” when he demonstrated the greater efficiency that could be gained through division and
specialization of labor (Stigler, 1957). This work laid the foundation for later organizational
and industrial theorists such as Max Weber and Frederick Taylor who advocated narrowing
the scope of workers’ jobs so that specialization could be developed and efficiency enhanced
(Wren, 1972). Another important contribution to organizational theory was that of Chester
Barnard and the Human Relations School. This approach explored the role of groups
and social processes in organizations. The most notable work is the Hawthorne studies at
Western Electric by Roethlisberger and Dickson and works by Elton Mayo (Roethlisberger
and Dickson, 1939; Mayo, 1945). These studies questioned the rational, efficiency-oriented
scientific management views of work. The works of contingency theorists have a decidedly
rational overtone and have resulted in extensive investigations of organizational technology,
the external environment, goals, organizational size, and how these contextual factors are
related to organizational structure. Contingency theorists reject the one-best-way model of
firm’s organization proposed by earlier theorists (Donaldson, 2001). To conclude this brief
review of organizational theory, we focus on two theories based upon industrial and
organizational economics, namely transaction cost economics and agency theory. Although
subtle differences distinguish these two approaches, their central focus is similar (Fama,
1980). Owners seek to maximize their return on investment by the most efficient use of the
organization. Agents, on the other hand, seek to minimize their efforts and maximize their
remuneration. We argue that these different approaches are more focused on the search for
operational efficiency than strategic relevance of the firm’s organization.

III. THE CONCEPTS OF EFFICIENCY AND EFFECTIVENESS

The term effectiveness is itself unclear. An organization may be more or less effective
in a variety of different ways. Is effectiveness simply the amount of profits earned? Or is it
the number of units produced or customers served? What about worker satisfaction? In
addition, what about definitions of effectiveness proposed by the organization’s stakeholders?
Are customers satisfied with the organization’s products or services? Is the broad community
satisfied with the manner in which the organization has conducted itself? Has the company
polluted the air and water? Has the company provided some value to the community? All
these questions help us to define the firm’s organizational effectiveness as the way managers
structure their companies so that they are able to satisfy not only shareholders but also
stakeholders’ interests even though these interests can be somewhat different. Concisely, we
define organizational effectiveness as “the right organization that takes into account not only
the need to create value for stockholders and customers but also to create wealth in an ethical
and societal manner for employees and the community as a whole.” On the other hand,
efficiency is defined as the way the firm uses its resources to maximize its outputs. In this,
the efficiency approach is more a legacy of the industrial engineering and time-motion
studies of Frederick W. Taylor.

IV. DESCRIPTION OF THE FUZZY RSOI INFERENCE SYSTEM

The fuzzy logic literature offers a large variety of fuzzy inference systems. However, to
model the relevant strategic organization index (RSOI), we chose to use a fuzzy logic
controller (Zadeh, 1965, 1978, 1983; Mamdani, 1975), which is schematized as:
crisp inputs, fuzzy inference system, crisp outputs.
Definition of linguistic variables:

A linguistic variable can be defined as a triplet (V, X, TV), where V is the name of the
variable; X is the reference set and TV is a collection of normalized fuzzy subsets of X. We
first define the linguistic variable business life cycle, which is specially defined in order to
model its four periods or stages.
X = [0,100], a set of real numbers between 0 and 100,
TV = {embryonic, growth, maturity, decline}; we visualize the four periods of the life
cycle on a scale between 0 and 100: between 0 and 10, we are in the embryonic period,
from 10 to 20 we pass from embryonic to growth, and then from 20 to 40 we go from
growth to maturity, and so forth.
The linguistic variables efficiency, effectiveness and relevant strategic organization index
are defined in the same way.
X = [0,100], set of real numbers between 0 and 100
TV = {very low, low, medium, high, very high}
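The stage definitions above can be encoded as trapezoidal membership functions. Below is a minimal Python sketch; note that the paper specifies the breakpoints only up to the growth-to-maturity transition (20 to 40) and says “and so forth,” so the maturity and decline breakpoints used here are our assumptions.

```python
def trap(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], equals 1 on [b, c], falls on [c, d]."""
    if b <= x <= c:
        return 1.0
    if a < x < b:
        return (x - a) / (b - a)
    if c < x < d:
        return (d - x) / (d - c)
    return 0.0

# Life-cycle terms on X = [0, 100]; maturity and decline breakpoints are assumed.
life_cycle = {
    "embryonic": lambda x: trap(x, 0, 0, 10, 20),
    "growth":    lambda x: trap(x, 10, 20, 20, 40),
    "maturity":  lambda x: trap(x, 20, 40, 60, 80),    # assumption
    "decline":   lambda x: trap(x, 60, 80, 100, 100),  # assumption
}
```

With these shapes, an input of 15 is half embryonic and half growth, matching the description of the 10-to-20 transition, and an input of 50 is fully in the maturity stage, consistent with the worked example later in the text.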

Definition of the rules of the inference engine. The next step is to define the fuzzy rules for
the inference engine. For our model, we have defined four sets of rules. Note that
the expertise of the system depends on the quality of the rules. Therefore, building the model
requires the expertise of people who know the universe of the firm’s strategic
environment and are aware of the firm’s strategic intent. Figure 1 shows the fuzzy values for
a life cycle at maturity stage.

Figure 1. Definition of the set of rules when the life cycle is at the maturity stage.
Inference process: The fuzzy inference process contains the following three steps:
(1) Application of rules:
Consider the example where x1 = 50, x2 = 75 and x3 = 65; the value of the variable life cycle
is maturity at level 1; the value of the variable efficiency is high at level 1 and the value of
the variable effectiveness is medium at level 0.4 and high at level 0.6.
The output variable is denoted Y; the result of the application of a rule, called consequent, is:
μ_B'(y) = min( min( μ_A1(x1), μ_A2(x2), μ_A3(x3) ), μ_Bi(y) )
Now, if we take an example in applying two rules, the result can be as follows:
If efficiency is high and effectiveness is medium then the RSOI is low.
If efficiency is high and effectiveness is high then the RSOI is medium.
(2) Aggregation of consequents:
μ_A(B'1,...,B'n)(y) = max( μ_B'1(y), μ_B'2(y), ..., μ_B'n(y) )
Therefore, we obtain a polygonal structure.

(3) Defuzzification:
By this means, we can calculate the value of RSOI by the centroidal method;

y0 = ∫ μ_A(B'1,...,B'n)(y) · y dy / ∫ μ_A(B'1,...,B'n)(y) dy
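The three inference steps can be traced numerically for the two-rule example above (efficiency high at level 1.0; effectiveness medium at 0.4 and high at 0.6). The sketch below is a pure-Python illustration: the triangular shapes assumed for the “low” and “medium” RSOI output terms (feet and peaks at 0/25/50 and 25/50/75) are our assumptions, not the paper’s calibration, so the resulting crisp value is illustrative only.

```python
def tri(yv, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    return max(min((yv - a) / (b - a), (c - yv) / (c - b)), 0.0)

# Discretized output universe Y = [0, 100]
ys = [i * 0.05 for i in range(2001)]

# Rule application: consequent strength = min of antecedent levels.
w1 = min(1.0, 0.4)  # efficiency high AND effectiveness medium -> RSOI low
w2 = min(1.0, 0.6)  # efficiency high AND effectiveness high   -> RSOI medium

# Aggregation: max over the clipped consequents (the polygonal shape).
aggregate = [max(min(w1, tri(yv, 0, 25, 50)),
                 min(w2, tri(yv, 25, 50, 75))) for yv in ys]

# Defuzzification: centroid of the aggregated set.
y0 = sum(m * yv for m, yv in zip(aggregate, ys)) / sum(aggregate)
```

With these assumed shapes, the centroid lands between the centers of “low” (25) and “medium” (50), giving a crisp RSOI of roughly 40.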

V. SIMULATIONS AND COMMENTS

Simulation when the business life cycle is at the embryonic stage


Consider the case where the life cycle value is 10, the efficiency value is 60, and the
effectiveness value is 60; the RSOI value, once the fuzzy rules are applied, is equal to 46,
close to a medium fuzzy value (see Figure 2). What can a firm’s managers do to improve their
strategic organization? To answer this question, we simulated a list of RSOI values as
efficiency and effectiveness vary from 50 to 70 in increments of 2.

Figure 2. Crisp RSOI values when the life cycle value is 10 and efficiency and effectiveness
vary from 60 to 70.
We can infer that managers must decide to improve first their effectiveness and
second their efficiency if they want to improve their firm’s RSOI value. Consequently, the
RSOI value goes from 46 to 54 and then to 61, values that can still be read as medium.
The conclusion we can draw from this first simulation is that at the embryonic stage of the
life cycle, the relevant decision is to look for effectiveness instead of efficiency.
Simulation when the business life cycle is at the maturity stage
For input variables where the life cycle is 50, efficiency 60, and effectiveness 60, the value
of RSOI falls to 32, rather lower than medium.

Figure 3. Crisp RSOI values when the life cycle value is 50 and efficiency and effectiveness
vary from 60 to 70.
In this case, managers should decide to improve efficiency and effectiveness simultaneously
if they want to improve their firm’s RSOI value. If they do not, their firm’s RSOI will
fall dramatically (e.g., when the values of efficiency and effectiveness are 50, the RSOI
falls to 8). Therefore, the conclusion we can draw here is that at the maturity stage of
the business life cycle, firms willing to maintain their competitive advantage must
seek efficiency instead of effectiveness.

VI. CONCLUSION

A general conclusion we can draw from this analysis is that, on the one hand, the closer
the business life cycle is to its beginning (the embryonic stage), the more important the
role of effectiveness becomes in setting up the firm’s organization. On the other hand, the
closer the business life cycle is to its final (decline) stage, the more crucial the role of
efficiency becomes in adjusting the firm’s organization to the competitive environment the
firm is facing. We trust that the Relevant Strategic Organization framework we are suggesting
could be an interesting and useful approach and tool to help managers align their firm’s
organizational structure with their firm’s competitive strategy. Further studies and
applications must be done to determine how consistent and relevant our framework can be,
not only to organizational development theory but also to the business decision-making
process.

REFERENCES

A. BOOKS
Andrews, Kenneth, R. The Concept of Corporate Strategy. New York: Richard D. Irwin, Inc.,
1980.
Cyert, Richard M. & James G. March. A Behavioral Theory of the Firm (2nd Ed.). Malden:
Blackwell Publishing, 1992.
Child, John. Organizational Structure, Environment, and Performance: The Role of Strategic
Choice, in Complex Organizations. Richard H. Hall (ed). Aldershot: Dartmouth
Publishing Company, 1972.
Donaldson, Lex. The Contingency Theory of Organizations. Thousand Oaks: Sage
Publications, 2001.

Miles E. Raymond and Charles C. Snow. Organizational Strategy, Structure and Process.
New York: McGraw-Hill, 1978.
Stigler, George J. (Ed.). Adam Smith: Selections From the Wealth of Nations. New York:
Appleton-Century-Crofts, 1957.
B. JOURNAL ARTICLES
Bojadziev, G. & Bojadziev, M. Fuzzy Logic for Business, Finance and Management. World
Scientific, 1997, 25-37.
Sharfman, Mark P. & James W. Dean Jr. Conceptualizing and Measuring the Organizational
Environment: A Multidimensional Approach. Journal of Management, 17, 1991, 681-
700.
Zadeh, L. The role of fuzzy logic in the management of uncertainty in expert systems. Fuzzy
sets and systems, 11, 1983, 199-227.

TOTAL QUALITY MANAGEMENT ACCEPTANCE AND APPLICATIONS IN
MULTINATIONAL COMPANIES: AN EMPIRICAL EXAMINATION

Abbass Alkhafaji, Slippery Rock University

Nail Khanfar, Nova Southeastern University

ABSTRACT

In today's global economy, companies have been constantly revising their


management systems in order to have and maintain a strategic competitive advantage. In this
global economy, remaining competitive is a condition to financial success. Today’s
customers demand high quality, outstanding service, and reasonable prices. Therefore,
companies are looking for new approaches to improve productivity, profitability and obtain a
greater market share. Strategic competitive advantage requires innovations in technology,
accurate and timely information to achieve better product quality and improve services. The
purpose of this study is to examine Total Quality Management acceptance and application in
multinational companies based in the Arab Gulf.

I. INTRODUCTION

Total Quality Management refers to the systematic improvement of quality and


cultural transformation in management techniques through the involvement of everyone in
the organization and in all aspects of the business operation. In today's global economy,
companies have been frequently revising their management style. Today’s customers demand
high quality, outstanding service, and reasonable prices. Therefore, companies are looking for
new approaches to improve productivity, profitability and obtain a greater market share.
Strategic competitive advantage requires innovations in technology, accurate and timely
information to achieve better product quality and improve services. Any company that is
seriously uncompetitive because of poor service, low quality or high costs should explore
total quality management (Jabnoon & Alkhafaji 2005). Restructuring and transformation are
requirements for companies to remain competitive and achieve financial success. Companies
adopting TQM successfully will be in a position to improve their product mix, service and
overall performance. Senior managers of Fortune 500 firms were surveyed using a mail
questionnaire. Responses were received from 173 individuals accounting for a 35% response
rate. It was found that knowledge of total quality management (TQM) influenced the
managers of major industrial firms to a) believe in and commit their organizations to TQM
strategies, and b) perceive that they used a participative style of management. The purpose of
this study is to examine Total Quality Management concept acceptance and application in
multinational companies based in the Arab Gulf.

The Importance of The Process


Writers agree that the successful implementation of TQM requires a strong
commitment from, and involvement of, top managers (Deming, 1972; Juran, 1986; Costin,
1994; Roberts, 1992). Costin (1994), for example, writes that a company's senior managers
must create visible quality values and high expectations. Reinforcing these quality values
and expectations requires their substantial personal commitment and involvement.

Without management involvement and commitment to TQM, a company can never
become successful at TQM. Deming (1972) argued that simply teaching people to use
statistical tools aimed at achieving quality improvement was insufficient. He stressed that
only management has the power to change a firm’s processes affecting the quality of its
products and services. Deming observed that "no permanent impact has ever been
accomplished in quality control without [the] understanding and nurture of top management."

TQM recognizes that the quality and cost of product/services produced are activity-
related and that the product being manufactured consumes these activities at a given rate. It
recognizes that quality costs are not driven by volume or direct labor alone but by activities
such as product design, production processes, engineering, storage, shipping, and other
related services (Componation & Farrington, 2000). Today, labor costs represent a
small percentage of the production process because of automation. However, labor cost in
providing the services to loyal customers is high. It soon became obvious to managers that if
their companies were allocating the costs improperly, they might be making strategic
decisions concerning activities related to product mix, pricing and marketing that were
inaccurate. Therefore, companies adopting TQM concern themselves with the continuous
improvement of the quality of the products and services they provide. The company will focus on the
efficiency and effectiveness of the entire process (Gitlow, Howard S., Einspruch, Norman G.,
Loredo, Elvira N., and Percival, Mary McKenry, 1994).

Implementing TQM does not guarantee success and full realization of its benefits unless
everyone is involved in the process and a clear commitment from the top is granted. If the
implementation process is not properly carried out, and facilitation steps are not taken to gain
workforce acceptance and usage of TQM, then success becomes questionable (Coyle and
Alkhafaji, 2005).

II. METHODOLOGY

This study surveyed managers of about 100 companies located in the Gulf (United
Arab Emirates and Oman). A questionnaire (see respondents and their industries, Table 1) was
designed to assess the experiences of those who have implemented TQM in their operations.

Table 1: What type of industry does your company belong to?


Type of industry Number of respondents Percentage
Agriculture 06 4.5
Industrial 11 8.3
Minerals and Oils 12 9.2
Financial 20 15
Shipping 14 10.5
Service 17 12.7
Chemical 07 5.3
Computer and Technology 22 16.5
Health Care 16 12
Other 07 5.3
Total 133 100%

The research instrument consisted of both objective questions and subjective (open-
ended) questions. The questionnaire contained twenty objective questions and consisted of

two types of questions. One set of objective questions attempted to address system issues and
is summarized in Table 2.

Table 2: Questions on System and Demographic Issues


Questions addressing system issues:
-Reasons for adopting the TQM approach.
-Driving forces behind TQM implementation.
-Role played by outside consultants.
-Benefits realized from TQM implementation.
-Problems associated with TQM implementation.
-Relationships between TQM and strategic management.

Questions addressing demographic issues:
-In what industry is the company classified?
-How many employees does the company have?
-Level of employee training.
-Other issues related to decision-making processes.

The subjective questions were open-ended in nature and allowed for any suggestions
the respondent might have for others embarking on any TQM implementation efforts.

III. PRESENTATION OF FINDINGS

A preliminary analysis of the data is given in this section. However, this paper
represents the first part of this research; the author intends to write the second part in the
next few months.
In regards to the most important reasons for implementing TQM (Table 3), about
45.5% of the respondents indicated that determining the quality of the product is crucial.
Another 30% indicated that competitiveness is behind the implementation of TQM. The
remaining 24.5% gave another reason as being most important.

Table 3: Why Did Your Company Implement Total Quality Management


Reason Value Percent
Determining product/service quality is crucial 1 45.5
To stay competitive is the reason behind the 2 30
implementation of TQM
To position the business more strategically 3 16
To better understand critical activities 4 5
Other, such as reducing cost, adopting a new 5 3.5
method in management,…etc
Total 100%

About 40% of the respondents indicated that their TQM system and the
application of ISO 9000-2000 were explained to all employees. They indicated that
TQM helped them in obtaining the ISO certificates. About 64.5% of the respondents
hired outside trainers and consultants to educate their employees. In the open-ended questions
they indicated the role of the outsiders in the process. Those outside consultants were hired
to serve in different capacities, including the application of TQM statistical tools, assisting

management and employees in the change required, and providing other expertise. See Table
4.

Table 4: If Your Company Hired Outside TQM Consultants,


Why Were Those Consultants Hired?
Reasons Percentage of
Respondents
Including the application of TQM statistical tools 28.4
Assisting management and employees in the change 14
required
Lack of in-house TQM expertise 9.1
Top management unable to devote sufficient time to 5.0
project
Consultants provided additional credibility 4
Providing other expertise such as: Provides foundation for 4
quality efforts

In terms of the length of time for TQM implementation, 30% of respondents indicated
less than three years, 20% more than three years and less than five years, and 50% of
respondents indicated that the project was still in process. The majority of respondents (56%)
indicated that TQM implementation cost was less than $100,000, while 24% of the sample
indicated the cost to be between $100,000-$150,000, 20% between $150,000-$200,000. In
addition, 35.5% reported that TQM implementation was successful, and 21.5% indicated that
it was mildly successful. About 43% of the respondents indicated that it was a failure.

Three major benefits (Improve quality of product and services, improved management
information, and greater cost awareness) were stated as the most common benefits derived
from TQM. Improved strategic decision-making and eventual cost reduction were also given
as potential benefits. However, about 38% of those who responded indicated that there were no
obvious improvements or benefits. Respondents indicated that they encountered numerous
problems during the implementation of their TQM systems. Examples of these problems
include problems with:
1. Gathering relevant data (27%)
2. Lack of cooperation from departments (19%)
3. Lack of employee awareness (16%)
4. Employees not well-informed about TQM implementation (24%)
5. Other problems (14%)
Forty managers or about 31% indicated that the process will enhance their
competitiveness in the global market. About 34% of the respondents indicated that it was too
soon to know if TQM implementation will result in reducing cost and increasing overall
profit. Only 38% indicated that TQM implementation resulted in an overall net benefit, while
about 37% reported no change. About 45% of the companies surveyed had more than1000
employees, while 34% of the companies had less than 500 employees, and 21% of the
companies have less than 250 employees. The questionnaire revealed that the respondents
classified themselves into the following categories: See Table 5.
Table 5: Respondents’ positions within their companies.

Please indicate what level of management you consider yourself:
Number Percentage
Top-level management 15 11.3
Middle level management 42 31.6
Operating level management 29 21.8
Administrative department manager 16 12
Administrative department staff 24 18
Others 7 5.3
Total 133 100%

IV. CONCLUSION
An organization can be efficient when its people, processes, systems and structure are
effectively integrated. Based upon the findings, the following conclusions emerged.
1. Total quality management provides many advantages to companies that choose to
implement the concept.
2. A good number of managers indicated that TQM and ISO 9000-2000 are connected
and one will lead to the other.
3. The process of implementation is long term and requires outside help to assist
management and employees in the required change.
4. Still, about 40% of the companies adopting TQM are not successful. Further
research is needed to find out why this is so.
5. Overall, managers of various businesses differ regarding the extent to which TQM
implementation in their organization will improve competitiveness and reduce cost.
6. A strategic approach to implementing TQM will improve its implementation.

REFERENCES

Alkhafaji, A. (2001), Corporate Transformation and Restructuring: A Strategic Approach,


p.38 (Westport: Quorum Books).
Jabnoun, N. and Alkhafaji, A. (2005) National Cultures For Quality Assurance And Total
Quality Management, Journal of Transnational Management, Volume 10, Issue 3.
Barczyk, Casimir C. and Falk, G. (1996) Does Knowledge of TQM Influence Top Managers’
Beliefs, Commitment, and Management Style? Business Research Yearbook, Volume 3,
pp. 463-467.
Anderson, J.C., Rungtusanatham, M., & Schroeder, R.G. (1994) A Theory of Quality
Management Underlying the Deming Management Method, Academy of Management
Review, Vol. 19, No. 3, pp. 472-509.
Costin, H. (1994) Readings in Total Quality Management (Fort Worth, Texas: Dryden Press).
Denes, S. (2003) All in a Day’s Work, Rural Telecommunications, Vol. 22, Issue 3.
Deming, W.E. (1986) Out of the Crisis (Cambridge, MA: MIT Press); and Latzko, W.J. &
Saunders, D.M. (1995) Four Days With Dr. Deming, Addison-Wesley Publishing
Company, Massachusetts, p. 12.

THE VALUE RELEVANCE OF HOSPITAL INTEGRATION STRATEGIES,
OWNERSHIP CONTROL CHARACTERISTICS AND DIVESTITURE DECISIONS

Richard P. Silkoff, Eastern Connecticut State University


silkoffr@easternct.edu

ABSTRACT

Recent research has been devoted to examining the types of organizational change
associated with hospital divestitures. The increased acceptance of divestiture as a strategy
may reflect recent patterns of consolidation in the health care field that require health systems
to cut back certain subsidiaries by removing assets that do not contribute to the core business
and organizational mission of the system. Using a sample of 362 system hospitals, an
examination of the effects of integration and ownership control on health system divestiture
decisions, and the interaction of these factors on hospital financial performance, was
conducted. Employing data from archival sources (American Hospital Association, Health
Care Financing Administration), discrete-time probit regression, a method appropriate
for analyzing longitudinal data with a dichotomous dependent variable and both
dichotomous and continuous independent variables, was used to test three hypotheses.
Findings support the argument that hospital divestitures remove activities that generate
negative value, and that both integration activities and ownership control provide strong
incentives to improve operations following the divestiture.

I. INTRODUCTION

The introduction of the Medicare prospective payment system (PPS), and the resulting
increase in competition among health care providers during the early to mid 1990s,
compelled United States hospitals to engage in more affiliation and consolidation
behavior (Shortell 1999). Fundamentally, four options have existed for
hospital systems to improve the fit between the goals and actions of physicians, hospitals and
administrators (Rich 2000). Although they are presented here as distinct choices, they may
also be taken in combination in establishing a course to improve a hospital’s financial
performance and its contribution to the health care system. The choices are to: (1) improve
and find processes for management to reduce costs, maximize revenue, and selectively
consolidate hospitals and practices; (2) transform or close hospitals with significant economic
problems and then restructure the remaining hospitals within new entities which are
strategically aligned with the health system; (3) privatize or sell hospital assets and physician
contracts to a commercial physician practice management company (PPM) which then
establishes a strategic relationship with the parent hospital or health system; and (4) divest or
sell/transfer hospital assets to another hospital system.

II. CONCEPTUAL FRAMEWORK AND HYPOTHESES

Divestiture occurs when a business unit loses its value to the parent firm (Kaplan and
Weisbach 1992). A decrease in financial performance is a strong indicator of a decline in the
value of a business unit. In the hospital industry, factors other than divestiture may affect
financial performance. To identify such factors, this study draws from the literature on
contingency theory and interorganizational relations theory.
Impact of hospital ownership control on divestiture

Contingency theory illustrates that an affiliated hospital's value to its health system is
determined by its ability to adapt to the uncertainty and instability of the environment. It is a
systems model based upon a framework of factors that have a generally important influence
on strategic choice and also have performance implications. Contingency theory emphasizes
the importance of ownership control in determining the fate of organizations (Ginsberg and
Venkatraman 1985). Hospitals possessing assets that are valuable and not easily accessible by
other hospital organizations achieve advantages over others in the system. A major reason
that hospitals join systems is to help secure needed resources and gain greater bargaining
power with purchasers and health plans. Through these actions, an individual hospital's
dependence on its environment is reduced, and thereby its prospects for survival and growth
increase (Lin and Wan 1999). Thus, the control of critical resources relative to nearby
hospitals may support a system's competitive position and thus reduce an affiliated hospital's
chance of divestiture. On the basis of these assumptions, it is hypothesized as follows:
Hypothesis 1: System-affiliated hospitals that possess more ownership control over assets
by their management are less likely to be divested by their parent health system.
Impact of hospital integration on divestiture.
Interorganizational theory suggests that firms integrate to compensate for an
incomplete market for resources, such as brand names, management expertise, or referrals. In
the case of hospital integration, both acquirers and targets may hold critical resources for
which markets are incomplete. Through integration, the acquirer might gain access to the
target’s resource of a close attachment to local patients and physicians; the target might gain
access to specialized technology, the quality reputation of the acquirer, and potentially
valuable contracts with managed care payers (Coddington, Fischer and Moore 2000).
Functional integration determines the long-term allocation of existing resources and the
development of new ones essential to assure the success of health systems (Oliver 1990). To
guarantee the efficient use of resources in meeting their own objectives and to add value,
health systems need to achieve functional integration (Shortell et al. 1996). Thus, the
following hypothesis is postulated:
Hypothesis 2: System-affiliated hospitals that are more integrated with their parent health
system are less likely to be divested by their parent health system.
Effects of hospital divestiture on financial performance.
The literature on divestiture in non-health care industries has highlighted the
importance of poor financial performance as a determinant of divestiture (Duhaime and Grant
1984). Health systems may consider divestiture of poor-performing hospitals as a way to
avoid further financial losses. A testable hypothesis can be stated as follows:
Hypothesis 3: Hospitals that are less likely to be divested by the system are more likely to
enhance a health system's financial performance.

III. METHODS

Study design and data sources.


This study utilized a correlational and longitudinal design using archival data sources
(American Hospital Association, Health Care Financing Administration). The model used
discrete-time probit regression, a method appropriate for analyzing longitudinal data with a
dichotomous dependent variable and both dichotomous and continuous independent
variables. The sample consisted of 402 community hospitals that were affiliated with a health
system between 1997 and 2003 with the exception of sole community providers and contract-
managed hospitals. The resulting number of hospital observations was 362. Three sources of
1997 and 2003 data were employed: (1) the American Hospital Association (AHA) Hospital
Guide, Part B, (2) AHA Annual Surveys of Hospitals files, and (3) the Health Care Financing

Administration (HCFA) data files, which include the Cost Reports. The AHA Hospital
Guide, Part B and Annual Surveys of Hospitals files contain characteristics such as
ownership, services, and bed size. HCFA cost reports include hospital financial records and
case-mix data.
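The discrete-time probit model described above can be illustrated on synthetic data. The sketch below fits a probit by maximum likelihood using plain gradient ascent on the log-likelihood; the covariates (an integration score and a for-profit dummy) and the "true" coefficients are fabricated placeholders for illustration, not estimates from the AHA/HCFA data.

```python
import random
from statistics import NormalDist

# Synthetic hospital-year data (purely illustrative; variable names are assumptions).
random.seed(42)
N = NormalDist()
n = 300
rows = []
for _ in range(n):
    integration = random.gauss(0.0, 1.0)   # functional-integration score
    for_profit = random.randint(0, 1)      # ownership-control dummy
    # Assumed "true" effects: more integration -> lower divestiture probability.
    p = N.cdf(-0.5 - 1.0 * integration + 0.8 * for_profit)
    divested = 1 if random.random() < p else 0
    rows.append(((1.0, integration, for_profit), divested))

# Probit maximum likelihood by gradient ascent on the log-likelihood.
beta = [0.0, 0.0, 0.0]
lr = 0.002
for _ in range(1500):
    grad = [0.0, 0.0, 0.0]
    for x, yv in rows:
        xb = sum(b * xi for b, xi in zip(beta, x))
        cdf = min(max(N.cdf(xb), 1e-9), 1 - 1e-9)
        w = N.pdf(xb) / cdf if yv else -N.pdf(xb) / (1 - cdf)
        for j in range(3):
            grad[j] += w * x[j]
    beta = [b + lr * g for b, g in zip(beta, grad)]
```

On this synthetic sample, the fitted coefficients recover the assumed signs: a negative effect of integration on the divestiture probability and a positive effect of the for-profit dummy.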

Measurement of variables.
The variables in this study were divided into four categories. The first category has
two exogenous constructs consisting of ownership control and integration strategies. The
second category is the endogenous construct, divestiture. The third variable, also an
endogenous construct, represents financial performance. The last group is a set of control
variables representing the common but significant hospital characteristics of size and
nonprofit ownership.
Ownership control was measured by two independent variables: profit status and
system type. Profit status includes two dummy variables used to differentiate for-profit and
non-profit organizations. System type consists of two dummy variables used to distinguish
centralized and decentralized types of operations. Integration strategies were determined by
six independent variables representing three different dimensions of integration. Integration
based on service type was measured by the number of inpatient beds used, number of
outpatient visits made, and the number of physicians associated with the hospital. Integration
based on physician participation in the management of the hospital includes two dummy
variables used to differentiate open physician hospital organization and closed physician
hospital organization (Goes and Zahn 1995). Integration based on managed care contracting
was represented by the number of managed care contracts provided in the system. Hospital
Divestiture is the dependent variable defined as the transfer or sale of the assets of an
associated hospital from one system to another or as the termination of the hospital
relationship with the system whereby the hospital divested is converted to independent
(freestanding) status. In this study, a hospital was considered divested if the hospital name
was removed from the member list of one health system and appeared on the list of another
health system (AHA Hospital Guide). A hospital was also considered divested if the hospital
assumed freestanding status. Financial Performance was measured by cash flow from
operations, defined as the ratio of the change in working capital plus depreciation to total
assets. This measure is a more effective and timely indicator of profits earned from hospital
cash-based activities than financial measures based on reported profits alone (McCue 1991).
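The two operational definitions above translate into simple checks. The helpers below are an illustrative sketch with made-up names, not the authors' code:

```python
def is_divested(hospital, system_members_before, system_members_after):
    """True if the hospital left its system's member list, whether it joined
    another system or assumed freestanding status."""
    return hospital in system_members_before and hospital not in system_members_after

def cash_flow_performance(delta_working_capital, depreciation, total_assets):
    """Cash flow from operations: (change in working capital + depreciation)
    divided by total assets."""
    return (delta_working_capital + depreciation) / total_assets
```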

IV. RESULTS

Sixty-four of the sample's system hospitals were divested during the study period.
There were significant differences between the for-profit and non-profit groups across
integration strategies and centralized ownership control. Non-profit hospitals were more
likely to utilize system integration strategies and centralized control, but for-profit hospitals
were more likely to divest, as indicated in Table 1. As non-profit hospitals clearly behaved
differently, the study analysis controls for the effects of non-profit ownership and hospital
size.

673
Table 1. Hospital Integration Strategies and Centralized Ownership Control
by Profit Status, 2003 (N = 362)

                                         Non-profit (N=179)   For-profit (N=183)
                                         Frequency      %     Frequency      %
Physician management integration              23       13%        13         7%
Managed care integration                     122       68%        87        48%
Centralized ownership control                136       76%        17         9%
Divestiture                                   22       12%        42        23%
Hospital staff physician integration         105       59%        90        49%

Descriptive statistics and correlations for the study variables are presented in Table 2. None
of the correlations between independent variables exceeds .50, suggesting that
multicollinearity is of little concern.

Table 2. Descriptive Statistics and Pearson Correlation Matrix of the Independent Variables, 2003

Variables                          Mean     S.D.     2      3      4      5      6      7      8      9     10
Dependent Variable
1. Financial performance         7611.99  2764.64

Predictor Variables
2. Inpatient beds used/day        214.86   293.68    -
3. Outpatient visits/day          144.15   195.51   0.45    -
4. Physician management            68.48    24.18   0.18   0.34    -
   integration
5. Managed care integration         9.93    12.38   0.20   0.01  -0.03    -
6. For-profit ownership             0.23     0.26   0.01   0.07  -0.02  -0.01    -
7. Centralized ownership            0.48     0.14  -0.10  -0.26  -0.10   0.03  -0.36    -
8. Divestiture                      0.17     0.33  -0.39  -0.28   0.19   0.14  -0.21   0.13    -
9. Hospital MDs on staff           76.50    20.84  -0.12  -0.10  -0.07  -0.09   0.12  -0.02  -0.07    -

Control Variables
10. NFP ownership                   0.58     0.09   0.19   0.35   0.17  -0.03   0.15  -0.27  -0.10   0.07    -
11. Hospital size (beds           342.17     0.03  -0.07   0.26  -0.13   0.16  -0.18   0.07   0.16  -0.17  -0.43
    available/day)
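The multicollinearity screen applied to Table 2 (flagging any pair of predictors whose absolute correlation exceeds .50) can be sketched as follows; the data below are simulated, not the study's:

```python
import numpy as np

def high_correlation_pairs(X, names, threshold=0.50):
    """Return (name_i, name_j, r) for predictor pairs with |Pearson r| > threshold."""
    r = np.corrcoef(X, rowvar=False)  # column-wise correlation matrix
    return [(names[i], names[j], float(r[i, j]))
            for i in range(len(names))
            for j in range(i + 1, len(names))
            if abs(r[i, j]) > threshold]

rng = np.random.default_rng(42)
x = rng.standard_normal(200)
data = np.column_stack([x,                                   # e.g. inpatient beds
                        x + 0.1 * rng.standard_normal(200),  # near-duplicate predictor
                        rng.standard_normal(200)])           # unrelated predictor
flagged = high_correlation_pairs(data, ["beds", "visits", "contracts"])
```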

Results indicate a high negative association between the hospital service integration
variables (inpatient and outpatient services) and divestiture, and a high positive association
between the hospital management integration variables (physician management and managed
care) and divestiture. Results also indicate a high negative association between for-profit
ownership control and divestiture, and a high positive association between centralized
ownership control and divestiture. Table 3 presents the results of the probit regression model
used for testing the hypotheses. The results of the study show a significantly negative
relationship between divestiture and hospital financial performance.

Hypothesis 1 predicted that system-affiliated hospitals that possess more ownership control
over assets by their management were less likely to be divested by their parent health system.
This hypothesis was partially supported. For-profit health systems are less likely to divest
hospitals from the system but only when they maintain less centralized control over their
assets.
Hypothesis 2 predicted that system-affiliated hospitals that are more integrated with their
parent health system were less likely to be divested by their parent health system. This
hypothesis was partially supported. For-profit health systems are less likely to divest
hospitals that provide more inpatient services, managed care products, physician participation
in management and when they provide less complex (riskier) medical and surgical
treatments.
Hypothesis 3 predicted that hospitals that are less likely to be divested by the system are
more likely to enhance a health system's financial performance. This hypothesis was fully
supported.
Table 3. Results From Probit Regression Modeling: Analysis of the Effects of Divestiture
on Changes in Hospital Financial Performance (Cash Flow From Operations Per
Hospital Admission), 1997-2003

                                   Model A            Model B            Model C
                                  β      S.E.        β      S.E.        β      S.E.
Hospital Ownership Control
1. For-profit ownership          .118    0.22       .245    0.44 *     .345    0.34 **
2. Centralized ownership        -.157    0.13      -.142    0.18       .122    0.16 *

Hospital System Integration
3. Inpatient services                               .352    0.10 **    .345    0.09 **
4. Outpatient services                              .314    0.01 *     .116    0.01
5. Physician management                             .149    0.08 *    -.146    0.07 *
6. Managed care integration                         .108    0.15       .113    0.11 *
7. Hospital MDs                                    -.011    0.09      -.008    0.07
8. Casemix                                          .032    0.11      -.075    0.12 *
9. Divestiture                                                         .059    0.51 **

Control Variables
10. Nonprofit ownership         -.163    0.12      -.427    0.55 *    -.264    0.38
11. Hospital size                .260    0.53       .355    0.77       .273    0.71

Model F                         1.734              5.462 *           10.146 **
R²                              0.017              0.285              0.389
N                                 362                362                362

* p < .10   ** p < .05

V. CONCLUSION

This study investigated the financial performance of system hospitals that divested
assets in order to produce gains. Hospitals that sell or transfer assets that cause negative
synergies should experience improved financial performance, but hospitals that divest assets
to raise capital or in response to economic declines may not see improved financial gains.
Results supported the notion that hospital divestitures improve the health system's operations,
perhaps by removing non-performing assets and improving services.

REFERENCES

Coddington, D., Fischer, E., and Moore, K. "Characteristics of Successful Health Care
        Systems." Health Forum Journal 43(6) (2000): 40-46.
Conklin, M.S. "Thorough System Integration Results in Better Financial Performance."
        Health Care Strategic Management 12(7) (1994): 16-22.
Duhaime, I.M. and J.H. Grant. "Factors Influencing Divestment Decision-Making: Evidence
        from a Field Study." Strategic Management Journal 5(2) (1984): 301-318.
Ginsberg, A. and N. Venkatraman. "Contingency Perspectives of Organizational Strategy: A
        Critical Review of the Empirical Research." Academy of Management Review 10(3)
        (1985): 421-434.

MANAGING AND MEASURING INDUSTRY ANALYST RELATIONS

A. Abbott Ikeler, Emerson College


abbott_ikeler@emerson.edu

ABSTRACT

The first section of this paper examines case studies of agency-driven and in-house
managed AR programs from Europe and the U.S., benchmarking the most successful
strategies. From these and the direct testimony of analysts, a checklist or best-practices metric
for analyst relations management is derived. The paper's second section extrapolates from
this model a template for measuring the results of analyst relations efforts. The template
includes tools and services recently available through a new breed of agencies and consultants
that specialize in evaluating analyst programs. It also, for the first time, broadens the tracking
metrics to include the wider range of publics now influenced by industry analysts.

I. INTRODUCTION

Over and above their obvious influence on customers, investors, and the media,
industry analysts provide business intelligence and strategic advice to manufacturers and
vendors. Such counsel, whether delivered for free in the give-and-take of pre-announcement
briefings, or for a contracted fee to marketing and product managers, often includes vital
positioning, timing and competitive insights. It may even include qualified customer leads.
Between product or service announcements, industry analysts can also prove a rich source of
industry knowledge and trending patterns that generate the material for client initiatives and
“soft news” PR campaigns. With many analyst firms, you can, for a supplementary fee, take
the relationship even further and enlist their senior people as event partners and podium-
sharers.

In all, analysts speak to four critical concerns of the corporate communicator—three


external (corporate valuation, sales generation, and PR or marketing endorsements) and one
internal (the need for market intelligence and expert industry counsel)—comprising a wider
circle of influence and a greater number of stakeholders than is commonly understood.

II. MANAGEMENT STRATEGIES: CASE STUDIES

For most corporate communication programs, especially in the business-to-business


sphere, industry analysts are an active and broadly influential public that must be tightly
managed and carefully evaluated. In this section we’ll look at case studies in successful
analyst relations management from Europe and the U.S.—first at those driven by agencies for
their B2B clients, and then at in-house programs managed across multiple regions and product
divisions. From those takeaways and the direct commentary of several current and former
analysts, we’ll then derive a checklist of best-practices strategies and tactics.
Brodeur for WRQ and Nortel Networks. WRQ, an integration software provider to U.S. and
EMEA customers, asked the Brodeur agency to consolidate the company’s uncoordinated
worldwide analyst relations activities. The agency located 12 to 15 key U.S. and European
analysts in the client’s market, developed a consistent global outreach plan, persuaded and
trained local WRQ spokespeople to meet with those in their region, and arranged 1-on-1s in
every country where WRQ had a significant presence. (http://www.brodeur.com)

Nortel had minimal visibility in the networking marketplace, and was ranked 16th
according to industry analysts covering their product category—well below chief competitors
Lucent and Cisco. Brodeur advised Nortel to make direct contact with report authors at key
firms well ahead of their deadlines, and to pre-brief and test messages with senior analysts
before announcements. These strategies resulted in better report coverage and a leap to 6th
place in their target audience’s collective opinion. (http://www.brodeur.com)

Weber Shandwick for Microsoft Pocket PC. At the third annual GSM World
Congress in 2001 in Cannes, Microsoft’s wireless division ranked fourth in share of media
and analyst voice, well behind powerhouses like Nokia. Weber took a two-pronged approach,
pre-briefing both analysts attending the 2002 show, and their more senior colleagues at home
in the U.K. The effort resulted in an improvement to second place in share of voice and a full
90% of the analyst coverage noting Microsoft’s importance and correctly positioning the
company’s strategy. Ben Wood of Gartner Group offered a typical encomium: “I arrived at
the 2002 event skeptical about Microsoft’s progress in the wireless space, and my view has
certainly changed.” (http://www.webershandwick.com)

Burson-Marsteller for Alcatel. Alcatel had no analyst relations prior to 1998, and
looked to Burson to boost its worldwide
awareness and brand equity among opinion leaders. The agency conducted a perception audit
among analysts and, from that, designed the format and content for Alcatel’s first annual
industry analyst conference. Burson also provided a target list of invitees for the event,
oversaw all preparations, and monitored 1-on-1s, breakfast sessions and exclusive site visits
for senior analysts. In addition, they launched a bi-annual analyst tour for Alcatel’s CTO and
a bi-weekly media watch for analyst opinions. Hundreds attended the conference and senior
Alcatel management responded enthusiastically. Based on the event, the audit findings and
the media watch quotes, Alcatel’s leadership has funded an annual analyst relations program
ever since. (http://www.bm.com)
Motorola. Motorola had no industry analyst relations program at the corporate or Internet-
division level before 1998. Financial analysts were treated like favored customers, while their
industry counterparts were neither pre-briefed, nor electronically informed, nor even invited
to networking product announcements. A newly hired communications director for the
Internet and Networking division replaced his existing agency with a more analyst-relations
savvy support arm, conducted an immediate analyst phone audit, trained spokespeople for
analyst 1-on-1s, toured the top firms on background, and instituted announcement pre-
briefings with all first-rank analyst houses. In addition he lobbied for a dedicated corporate-
level AR director. To drive home the need, he presented benchmarking evidence of strong
AR funding and resource allocation among Motorola’s chief competitors, and submitted to
top management a Kensington Group report ranking Motorola’s analyst relations program
27th of the 27 top companies in the IT and telecom industries. These efforts led to the
appointment of a Corporate Analyst Relations Director, annual corporate-level forums for IAs
in London and Chicago, a $200K AR program budget, and partially dedicated AR managers
in the more technical divisions of the company. By 2001, Motorola had moved up in
Kensington’s analyst relations rankings from 27th to 16th place (Mike Doheny interview,
12/20/04).
Oracle. This worldwide leader in software, having recently absorbed PeopleSoft for
an estimated $35 billion, boasts the best analyst relations program in the high-tech field.
Nothing since Digital Equipment Corporation’s late-1980s model (which enjoyed a then-
record industry analyst budget of $31 million and a 13-member corporate team) comes close.
Today Oracle fields a corporate AR team of 15 to 20, reporting directly to the CCO and
dotted line to the CMO. They in turn are supported by an even larger number of product-
specific AR managers who report directly into the corporate group. Moreover, each corporate

manager owns a specific relationship with a top firm and the group as a whole owns
consulting contracts with all top tier analyst firms. Understanding the circle of analyst
influence, Corporate AR also interfaces directly with the heads of Sales, PR, IR and partner
programs. Year on year, Kensington Group’s tally of industry analyst opinion has given
Oracle top or near top ranking among all software companies.

III. MANAGEMENT STRATEGIES: ANALYST PERSPECTIVES

What do analysts themselves say they want from vendors? At the strategic level,
former Ovum analyst Duncan Chapple reminds relationship managers to maintain a dialogue
with analysts in the market where the sale is being negotiated, since it is here that expert word
of mouth often makes or breaks a multi-million dollar deal (Chapple,
http://www.brodeur.com/insights). By the same token, Laurie McCabe, a senior analyst at
Summit Strategies, underscores the reach of IA influence beyond customers and press: be
aware, she advises, that they also affect how “prospects, partners, financial analysts and
competitors perceive a vendor and its standing in the market” (McCabe, quoted in
http://www.kensingtongroup.com). At the tactical level, IAs are equally forthcoming in their
recommendations. Analysts’ in-depth, fact-checking standards, says Kathy Quirk of Nucleus
Research, mean they want from a communication department “not just a link to a press
release, but to background materials, presentations, white papers, customer statements, and an
overview of what’s going on.” (http://www.prnews.blogspot.com) William Hopkins, CEO of
Knowledge Capital Group and a former analyst, warns against a glut of electronic updates,
however. Analysts want greater depth than reporters, but “they don’t want an endless stream
of information pushed out to them. They want to know what they need to know.”
(http://www.prnews.blogspot.com) Laurie McCabe seconds the point bluntly: “Don’t spam
industry analysts.” (http://www.kensingtongroup.com) She follows with a short list of
practical do’s-and-don’ts that includes calling analysts early (ahead of the press), sending
briefing materials in advance of meetings, and exploiting the analyst mindshare guaranteed in
your consulting contracts before you announce products. (http://www.kensingtongroup.com)

IV. MANAGEMENT STRATEGIES: A BEST PRACTICES CHECKLIST

If you’re a B2B communications manager set on building an analyst relations program


from scratch, you will first need to secure the buy-in of your company’s dominant coalition.
It’s necessary not only because you need their funding, but also because you’ll need their
active support to act as external spokespeople and help you interface with other internal
functions. Begin the process with benchmarking research—analyst opinions of your company
and its chief competitors expressed in media coverage and published reports, and a summary
of your own findings from a baseline telephone audit of top-tier analysts. When confronting
dyed-in-the-wool bottomliners among your top management, it also helps to argue the
importance of analyst influence on the investment community and on sales. In landing high-
tech B2B contracts, for example, research indicates analyst endorsements are crucial 40% to
60% of the time.

What should you ask for? A large multinational IT or telecom company will need a
dedicated, director-level corporate manager (analysts expect as much), and at least one full-
time analyst relations manager per division. Program funding, exclusive of salaries, should be
comparable to your media relations budget. In an SME or a large firm that wants to pilot
before it commits, it’s reasonable to propose a joint AR and PR charter for one manager per
division, program dollars that allow for regular outreach, and an annual industry analyst

conference event, as well as the tactical support of a PR agency with analyst relations
experience and expertise.

Be forewarned: it may not be an easy sell. The traditional dominance of other


communication disciplines and current AR budget averages are against you, even in a B2B
environment. Despite the impact of analyst opinion on sales and investment, the typical
expenditure on analyst relations is, on average, only 2% of the marketing program budget.

On the other hand, if your AR program is up and running, developing a checklist of


several dozen questions will help you estimate the program’s structural soundness and likely
success. Here are a few of the most critical questions you should ask yourself:

• Do your goals include driving sales, bolstering corporate valuation, increasing the
quality of market intelligence, and supporting PR and marketing campaigns?
• Do you have a dedicated corporate director to plan and manage the worldwide
program, advise top management, and offer a consistent, company-wide perspective to
senior analysts?
• Is each AR manager conversant with the detailed features and benefits of the products
and services he/she covers for the analysts? (IAs expect all contacts to be technically
savvy.)
• Have you or your agency trained designated marketing and technical spokespeople for
the more rigorous encounters they’ll have with analysts? (Press training is not
enough.)
• Do you brief all key firms under NDA one or two weeks ahead of press
announcements? Do you also invite them to the press event?
• Do you solicit endorsement quotes from top analysts to include in major releases?
• Do you practice inbound as well as outbound analyst relations to gain market
intelligence and test strategies and messages?
• Do you employ an analyst-savvy agency to support your efforts? Does the agency
have existing consulting contracts with analyst firms they can exploit for your benefit?
Do they have regional subs or affiliates with similar strengths?
• Between major announcements and IA conferences, do you update individual analysts
by phone and in-person? (They want to be the first to know new developments in
their space.)
• Are you also prepared to discuss competitive, pricing, business, product, and channel
strategies if asked?
• Do your CEO, CMO and CTO meet regularly with industry analysts?

• Have you enlisted analysts as event speakers, or commissioned them to write white
papers on your behalf?

V. THE METRICS OF PROGRAM SUCCESS

In most cases, accurate evaluation of an AR program requires the right mix of internal
research and outsourced measurement expertise. I’ll conclude with a look at the best of those
external and internal resources and suggest an integrated approach to evaluating your analyst
relations success. Most full-service PR agencies can provide you with program assessments
via custom audits or secondary data from reports and press coverage. A variety of Internet
sites also offers generalized how-to-evaluate advice. More important, in the last decade or so,

a small but significant industry has grown up that focuses exclusively on analyzing the
analysts. They offer help not only to overwhelmed communication departments, but to CIOs
and investors as well. Some—Outsell, SageCircle, and the recently announced Tekrati—boil
down information on leading firms, individual analysts, upcoming events, and AR
management methods (http://www.tekrati.com). Others, chiefly Kensington Group and
Knowledge Capital Group, provide detailed evaluation tools. Kensington Group is the largest
and oldest, and the only one with a primary focus on measuring analyst response to the AR
programs of major corporations. Today they publish several reports annually: if you’re
among the top 25 companies in the hardware, software, networking, services or security
business, you’re reviewed and rated in Kensington. For a fee, you can buy the book and
discover what 80 to 100 key analysts think of your program and those of your competitors.
The categories include product positioning, strategies, access to staff, central contact point,
briefings, and forums. You’re also ranked for responsiveness, credibility, candor and
relationship “comfort level.” In addition, each company’s program is plotted against its
competitors numerically for quality of information content, information channels, attitude and
program concepts. There are even quantitative and qualitative sections that compare your
European and North American efforts (http://www.kensingtongroup.com). In an interview
with Norma LaRosa, the CEO of Kensington, I learned they have recently expanded their
industry-standard services to include packaged analysis for SMEs and customized reports on
product niches and the chief competitors in each.

A Paradigm for Measuring Analyst Relations Success


To gauge the effectiveness of a decently funded, well-staffed and competently
managed analyst relations program, you’ll need to adopt many of the methods common to
press relations measurement: baseline and follow-up phone audits, a media watch for reports
and embedded quotes, worldwide clip counts and clip-quality analysis, a count of releases that
include analyst endorsements, third-party research that ranks your program against top
competitors, and a database of positive references for major announcements and strategy
initiatives. At a more granular level, you may want to include other standard metrics, such as
pie charts of positive, negative or neutral analyst coverage, bar graphs of how many and
which senior analysts have commented on your company versus top competitors year-over-
year, or even the relative visibility each of your spokespeople is getting from the major IA
firms.
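The tone breakdown mentioned above (shares of positive, negative and neutral analyst coverage behind the pie-chart metric) reduces to a simple tally. The codings below are hypothetical:

```python
from collections import Counter

# Hypothetical tone codings, one per analyst clip or report mention.
codings = ["positive", "neutral", "positive", "negative", "positive",
           "neutral", "positive", "negative", "positive", "positive"]
counts = Counter(codings)
shares = {tone: counts[tone] / len(codings)
          for tone in ("positive", "negative", "neutral")}
```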
The crucial difference between gauging analyst and press success is rooted in the
analyst’s more decisive influence on audiences outside the immediate control of the AR
manager. You can for a price ask third parties such as Kensington to poll these audiences.
Even if you do so, I recommend developing an integrated and comprehensive measurement
program with your marketing, sales and IR colleagues that allows you to stay in closer touch
with all stakeholders. At the practical level, such an internal program might include IA-
specific questions added to customer satisfaction surveys, to questionnaires at annual
distributor conferences, to IR phone audits of financial analysts and investors—even, if
appropriate, to CEO and CFO interactions with venture capitalists. Since every fully
conceived analyst relations program must have at its core four major objectives—to drive
sales, improve strategy and intelligence, increase corporate value, and support PR and
marketing—it must of necessity measure its results both internally and externally, within and
without the immediate circle of communication disciplines. Only through the adoption of
such a 360-degree paradigm can analyst relations managers hope to comprehend the impact
of what they do, and communicate it to the people who govern their professional lives.

REFERENCES

Chapple, Duncan. Why Do Analyst Relations in Every Region Where You Want
to Sell?, April 2002. (www.brodeur.com/insights).
Doheny, Mike. (Director of Global Industry Analyst Relations at Motorola.)
Marketing is Under More Pressure to Deliver Results. Fall 2004 Presentation
to Motorola senior management.
Doheny, Mike. Telephone interview with author, December 20, 2004.
Gartner, IT Service Strategy, Winter 1996 (www.gartner.com).
Forrester, CIO/MIS Receptiveness to the VoIP Trend, Spring 2000.
(www.forrester.com).
Larkin, Douglas. “Good Relations with Industry Analysts a Credible Benefit,”
The Washington Business Journal, May 2, 2003.
LaRosa, Norma. (CEO of Kensington Group) Telephone interview with author,
December 17, 2004.
McCabe, Laurie. Seven Ingredients for a Winning Analyst Relations Program.
2000 (www.kensingtongroup.com).
Paul, Laurie Gibbons. How to Analyse the Analysts, August 9, 2001.
(www.cio.com).
PR News (article not bylined). Analyst Newsletters Present Risky Investment
(www.prnews.blogspot.com)
Reynolds, Joshua. (Vice President and Director of US Analyst Relations)
Boston: Blanc and Otus Presentation, 2004.
Schatt, Stan. Securing the Campus Network, Forrester Research, Inc., September 2004.
Tekrati press release. (www.tekrati.com).

CHAPTER 27

TEAMS AND TEAMWORK

A COMPARISON OF STUDENT PERCEPTIONS OF TEAMWORK IN THE
ACADEMIC AND WORKPLACE ENVIRONMENTS

Nathan K. Austin, Morgan State University


naustin@jewel.morgan.edu

Felix Abeson, Coppin State University


fabeson@coppin.edu

Michael Callow, Morgan State University


mcallow@moac.morgan.edu

ABSTRACT

This study compares student experiences of teamwork in the workplace and
academic environments, looking for critical factors that might contribute to the effectiveness
of teamwork. The data were collected over a two-week period at an educational institution in
Maryland and analyzed using paired t-tests. The results suggest that subjects tend to
appreciate similarities and differences in team dynamics between the work and academic
environments. Implications of the results are discussed.

I. INTRODUCTION

Individuals working together as a team are believed to be an effective medium for


harnessing the creativity and ideas of all participants (Teare et al., 1997). Teamwork is
therefore widely used in industry (Lawler, 1998). Though resisted by some faculty (Baker
and Campbell, 2005), teamwork in the form of group projects, i.e., course-related group tasks
in which members collaborate to complete assigned work (Tang, 1993), is now a common
feature of course outlines on many college campuses (King and Behnke, 2005). This is in
large part due to the assumed role of educational institutions in the training of future
employees of organizations and institutional accreditation requirements (Ulloa and Adams,
2004). Many of today’s college students work either full or part time during semester breaks
or even throughout the year as permanent employees. Considering that educational
researchers already look to industry as a source for understanding the effectiveness of teams
(Ulloa and Adams, 2004), the authors propose that instructor understanding of student
experiences of teamwork in the workplace environment can significantly inform how group
projects are designed, incorporated in course outlines and managed as a learning tool. The
need for continuing investigation of how group projects are utilized as part of the education
process and students’ perception of the process and outcomes to enhance its effectiveness as a
learning tool, is indeed a worthwhile area of pedagogical research (Lizzio and Wilson, 2005).
The purpose of our study therefore is to compare student experiences with teams in both the
workplace and academic environments and to identify critical factors that might contribute to
teamwork effectiveness among students within an academic setting. Following Ettington et
al. (2002), we consider work teams to be synonymous with work groups.

II. TEAMS AND TEAM EFFECTIVENESS

A team consists of a group of individuals who, being aware of their group


membership and thus their interdependence, are committed to a specific performance

objective or recognizable goal to be attained, interacting with and influencing each other
(Katzenbach and Smith, 1993). Membership of teams may be voluntary or compulsory, and
teams can make horrible decisions that alienate members (Hackman, 1990); but when they
function effectively, they exhibit a unitary behavior characterized by morale, cohesion,
confluence and synergy (Ingram and Desombre, 1999). Hackman (1990) defines team
effectiveness as the degree to which a group's output meets requirements in terms of
quantity, quality and timeliness. Effective teams are generally characterized by regular
feedback to the team members, extensive collaboration and communication (Ancona and
Caldwell, 1992). Bateman et al. (2002) suggest that effective teams are characterized by
extensive team synergy, performance objectives, skilled members, efficient use of resources,
innovation and the provision of quality. Ettington et al. (2002), drawing on previous studies,
also suggest interdependence, group composition, group development, motivational job
design, organizational support and effective leadership as being critical to effective teams in
general. The above brief review identifies a number of critical factors that enable team
effectiveness and therefore provides a basis for comparing student perceptions of team
effectiveness in both the academic and workplace environments.

III. METHODOLOGY

A multi-section questionnaire on team effectiveness in general and respondent


experiences of teamwork in both the academic and workplace environments was used in the
collection of data. The first part of the questionnaire focused on respondent perceptions of
team effectiveness (e.g. the nature and extent of communication, collaboration,
encouragement, and leadership). These variables were measured on a 5-point Likert scale.
The second section assessed respondent experiences of teams in the workplace environment
while the third section focused on team work, exemplified by coursework related group
assignments in an academic environment. The fourth section included open-ended questions
eliciting further information on group work in the academic environment. The final section
dealt with respondent demographics. Data collection took place over a two-week period
(late October to early November 2005) at an educational institution in Maryland. The
participants were undergraduate business students taking a minimum of twelve (12) credit
hours and thus classified by the University as full-time students. Students who had not
held either a full-time or part-time job in the last six months were excluded from the study due to
potential recall difficulties.

IV. RESPONDENT PROFILE

A total of 103 questionnaires out of the 200 distributed were returned, reflecting a
response rate of 51.5%. Of these, 13 respondents had held no employment in the last six
months and were excluded from the study; a further 18 questionnaires were incomplete and
were also excluded, leaving 72 usable responses. There were 18 males (25%) and 54
females (75%). Two-thirds reported holding full-time jobs. Positions held were mainly non-
managerial and almost all the respondents worked in the service industry. Over 50% of the
respondents had been working for their current employer for at least a year.

V. DATA ANALYSIS AND INTERPRETATION

The data were analyzed using paired t-tests (see Table 1). Overall, the results suggest that
subjects tend to appreciate similarities in team dynamics between the work and academic
environments. Of the 24 mean comparisons, 11 were not statistically significant (p≥0.10), 6
were marginally significant (p<0.10), and 7 were statistically significant (p<0.05). The
statistically significant mean disparities occurred in the following instances: respondents were
more likely to recognize the potential of other members of their team at work than within an
academic environment (t=3.12, df = 72, p = 0.003); respondents were more likely to be
informed when they did something that made another member’s job easier/harder at work than within
an academic environment (t=4.36, df = 72, p = 0.040); respondents were more likely to let
another team member know when he/she did something that made their job easier/harder at
work than within an academic environment (t=2.60, df = 72, p = 0.011); respondents were
more likely to experience effective leadership within the team at work than within an
academic environment (t=2.16, df = 72, p = 0.034); respondents were more likely to identify
clear work-related activity targets established for the team at work than within an academic
environment (t=2.08, df = 72, p = 0.041); respondents were more likely to feel that the team's
standards are monitored on a regular basis at work than within an academic environment
(t=2.93, df = 72, p = 0.005); and respondents were more likely to expect regular feedback on
the team's performance at work than within an academic environment (t=2.26, df = 72, p =
0.041). It is also interesting to note that, of the 24 comparisons, it was only in 4 that the mean
score for team work in the academic environment was directionally greater than the mean
score for the workplace environment. Overall, there were no statistical differences between
males and females in their evaluation of the team dynamic in the work or academic
environment. However, female respondents were more willing to help finish the assigned
work of other members of the team in an academic setting compared to their male
counterparts (xw = 4.04, xm = 3.50, F=4.72, df=65, p=0.034), even though there was no
statistical difference between male and female respondents on this issue in the work
environment (xw=4.00, xm=3.69, F=1.74, df=65, p=0.191). Also, there were no statistically
significant differences between respondents holding part-time jobs and those holding full-time jobs.
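The paired t-test procedure applied here can be sketched as follows. This is a minimal illustration with hypothetical 5-point Likert ratings, not the study's data: each respondent rates the same item for both settings, and the statistic is computed on the within-subject differences.

```python
import math
import statistics

# Hypothetical 5-point Likert ratings from the same ten respondents,
# one rating per setting (work vs. academic) for a single item.
work     = [4, 5, 4, 4, 3, 5, 4, 4, 5, 4]
academic = [3, 4, 4, 3, 3, 4, 4, 3, 4, 4]

diffs = [w - a for w, a in zip(work, academic)]
n = len(diffs)
mean_d = statistics.mean(diffs)        # average work-minus-academic gap
sd_d = statistics.stdev(diffs)         # sample SD of the differences
t = mean_d / (sd_d / math.sqrt(n))     # paired t statistic, df = n - 1
print(f"mean difference = {mean_d:.2f}, t({n - 1}) = {t:.2f}")
```

A positive t indicates a higher mean rating in the work setting, mirroring the direction of most comparisons reported in Table 1; in practice a statistics package would also report the associated p-value.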

Applying content analysis to the narrative texts generated on group work in the
academic environment revealed a number of additional insights. Although respondents
understood the importance and purpose of working in teams, they disliked engaging in an
exercise whose outcome depended on each member's willingness to participate actively, and
the uncertainties that entails. The primary reason for continued participation was the need to
attain a good grade. Though not always an unpleasant experience, group work was considered
a time-consuming and difficult task that was shared both to minimize the time and effort
required of members and to

Table 1: Paired T-Test Comparisons (means for Work and Academic settings, with t-value)

I often make suggestions about better work methods to other team members. 4.12 3.96 1.76*
Other members of my team usually let me know when I do something that makes their jobs easier (or harder). 3.89 3.71 4.63
I often let other team members know when they do something that makes my job easier (or harder). 4.16 3.89 2.60**
Other members of my team recognize my potential. 4.16 3.97 1.39
I recognize the potential of other members of my team. 4.26 3.99 3.12***
I am willing to help finish the assigned work of other members of the team. 3.96 3.92 0.45
Other members of my team are willing to help finish work that was assigned to me. 3.60 3.52 0.67
There is effective leadership within the team. 4.01 3.75 2.16**
There is a common understanding of the purpose of the team among team members. 4.04 4.05 -0.13
Members know their roles within the team. 4.00 4.01 -0.13
There is effective communication within the team. 3.97 3.86 0.92
Individual team members perform tasks assigned within the team, to the best of their ability. 4.08 3.96 1.32
There are clear work-related activity targets established for the team. 4.14 3.91 2.08**
The team's assigned work-related activity targets are reasonable. 3.97 3.94 0.28
The team is involved in agreeing on how its work-related activity targets are set. 3.64 3.86 -1.73*
The team meets its work-related activity objectives. 3.98 3.82 1.69*
Individual team members are competent in executing tasks assigned to them. 3.97 3.86 1.18
Team members are competent to perform a range of jobs within the team. 4.07 3.89 1.81*
Team members make efficient use of resources available for the team's tasks. 3.99 3.80 1.85*
Team members see problem solving as an opportunity. 3.56 3.41 1.37
Team members are not encouraged to try new work methods. 2.79 2.78 0.08
Team members regularly review complaints and the lessons learned are used to improve team performance. 3.42 3.67 0.42
The team's standards are monitored on a regular basis. 3.86 3.45 2.93**
Feedback on the team's performance is regularly given to the team. 3.74 3.42 2.26**
*** p<0.01, ** p<0.05, * p<0.10

ensure that everybody contributes to the task at hand. Respondents wanted instructors to
participate actively in the group work process and to show concern for student needs,
especially in setting submission deadlines, recognizing the reality of students holding
full-time jobs, and enforcing member participation in the group task. In-group operational
challenges identified included the lack of credible leadership within the group, minimal
communication between group members, and the difficulty of refining individual
contributions into a coherent whole. Other issues included the tendency of individuals to
procrastinate or to seek maximum benefit with as little input as possible.

VI. CONCLUSIONS

The data suggest that students perceive the dynamics of teamwork in the academic
environment as generally similar to those of the work environment. This confirms the
appropriateness of the intent underlying the use of group work within the academic
environment: a means of exposing students to the nature of teamwork in organizations.
However, respondents felt that the academic group project dynamic was less encouraging,
tended to suffer from less effective team leadership, and was less likely to recognize
individual effort and be monitored by superiors (i.e. faculty) compared to workplace teams.
Considering that teambuilding takes time and that most university group projects last no
longer than a semester, it is perhaps not surprising that there is less collaboration and
encouragement among students within the same group. The implication here is that
instructors need to find ways of extending student awareness of the interdependency in group
projects beyond the current short-term focus. Also, rather than wait until the end of the
semester to determine the success or failure of the group project, instructors can learn from
teamwork in the work environment to be more involved in the team dynamic. They should be
well informed about student attitudes to group work, support the development and
strengthening of leadership within teams, encourage members to support one another, and
provide regular performance checks and feedback throughout the duration of the group work.
This suggests a greater level of intervention than is currently practiced. This finding is
consistent with efforts being made in the development of teaching aids, e.g. the Team
Learning Assistant (Boston University), which enhance instructor ability to monitor and
intervene in team learning experiences. Moreover, managers in the workplace
environment regularly utilize rewards and sanctions in managing individual and group effort.
As a result, individual members of the team are more willing to take up leadership roles to
increase the likelihood that their individual effort within the team will be recognized and
rewarded accordingly. This is unlike the nature of group projects in academic environments,
where leadership qualities and individual success seem to be of less concern to members,
especially when the team is on schedule with its assigned tasks or when the grading system
is focused on group effort as opposed to individual input within the group. There is a strong
case for creating team skills learning assignments where students are evaluated on both the
process of working as a member of a team as well as the collective final output of the group.
This dual form of evaluation is likely to increase co-operation among team members,
minimizing some of the challenges of group project assignments.

REFERENCES

Ancona, D. G. and Caldwell, D. F. (1992) Bridging the boundary: External activity and
performance in teams, Administrative Services Quarterly, Vol. 37, No. 4, pp. 634-665
Baker, D. F. and Campbell, C. M. (2005) When is there strength in numbers?: A study of
undergraduate task groups, College Teaching, Vol. 53, No. 1, pp. 14-18

Bateman, B., Wilson, F. C. and Bingham, D. (2002) Team effectiveness – Development of an
audit questionnaire, Journal of Management Development, Vol. 21, No. 3, pp. 212-
226
Ettington, D. R. and Camp, R. R. (2002) Facilitating transfer of skills between group projects
and work teams, Journal of Management Education, Vol. 26, No. 4, pp. 356-379
Hackman, J. R. (Ed.), (1990) Groups that work (and those that don’t), San Francisco, CA:
Jossey-Bass
Ingram, H. and Desombre, T. (1999) Teamwork: Comparing academic and practitioners’
perceptions, Team Performance Management, Vol. 5, No. 1, pp. 16-21
Katzenbach, J. R. and Smith, D. K. (1993) The discipline of teams, Harvard Business
Review, Vol. 71, pp. 111-120
King, P. E. and Behnke, R. R. (2005) Problems associated with evaluating student
performance in groups, College Teaching, Vol. 53, No. 2, pp. 57-61
Lizzio, A. and Wilson, K. (2005) Self-managed learning groups in higher education:
Students’ perceptions of process and outcomes, British Journal of Educational
Psychology, Vol. 75, No. 3, pp. 373-390
Lawler, E. E. III (1998) Strategies for high performance organizations, San Francisco:
Jossey-Bass
Tang, K. C. C. (1993) Spontaneous collaborative learning: A new dimension in student
learning experience?, Higher Education Research and Development, Vol. 12, pp. 115-
128
Teare, R., Ingram, H., Scheuing, E. and Armistead, C. (1997) Organizational teamworking
frameworks: Evidence from UK and USA-based firms, International Journal of
Service Industry Management, Vol. 8, No. 3, pp. 250-256
Ulloa, B. C. R. and Adams, S. G. (2004) Attitude toward teamwork and effective teaming,
Team Performance Management, Vol. 10, No. 7/8, pp. 145-151

AN EXAMINATION OF THE RELATIONSHIP AMONG SELF-MONITORING,
PROACTIVITY, AND STRATEGIC INTENTIONS FOR HANDLING CONFLICT

Gerard A. Callanan, West Chester University
gcallanan@wcupa.edu

David F. Perri, West Chester University
dperri@wcupa.edu

Roberta L. Schini, West Chester University
rschini@wcupa.edu

ABSTRACT

This study examines the relationship between the five modes of handling
organizational conflict (as measured by the Thomas-Kilmann Conflict Mode Instrument) and
the two personality factors of self-monitoring and proactivity. Participants in this study were
a mix of 157 undergraduate and graduate students from a large public university located in
the mid-Atlantic region of the United States. Results show that self-monitoring was not
significantly correlated with any of the five conflict handling strategies. Proactivity did show
a significant positive association with the competing and collaborating styles, and a
significant negative correlation with avoiding and accommodating styles. Implications for
future research are discussed.

I. INTRODUCTION

An inescapable fact of life in organizations is that interpersonal conflict occurs on a
routine basis (Amason, 1996; Jameson, 1999). Further, the presence of this conflict can have
positive benefits for an organization if it is managed properly (Jameson, 1999; Rahim, 2002;
Rahim, Magner, & Shapiro, 2000). In its positive form, conflict can stimulate organizational
members to action, can make individuals and organizations more creative and innovative, and
can be a source of feedback regarding critical relationships and the distribution of power.
Interpersonal conflict can also bring focus to problem areas within an organization, which can
then lead to improvements (Amason, 1996). This study adds to the growing body of
knowledge on individual conflict handling by assessing two personality factors, self-
monitoring and proactivity, which have received only limited attention for their relationship
with conflict management. Specifically, the intent of this study is to examine the linkages
between strategic conflict handling intentions and individual self-monitoring and proactive
personality (proactivity).

Individual choices for responding to interpersonal conflict have generally been
viewed as representing five fairly distinct categories. As originally developed by Blake and
Mouton (1964), and later extended by other researchers (Rahim, 1983; Sorenson, Thomas &
Kilmann, 1974), five different orientations or styles are possible for handling conflict:
competing (domination), collaborating (integration), sharing (compromise), avoiding
(neglect), and accommodating (appeasement).

Thomas (1983) proposed dual dimensions that help in understanding the differences
among the five orientations. One dimension measures an individual’s desire to satisfy his or
her own needs, also referred to as the degree of assertiveness. The second dimension
indicates a person’s desire to satisfy the needs of the other party, also referred to as the degree
of cooperativeness.

In essence, each of the five conflict orientations represents a unique combination of
the dual dimensions: competing represents assertive and uncooperative, collaborating
represents assertive and cooperative, avoidance is defined as unassertive and uncooperative,
accommodation represents unassertive and cooperative, and sharing represents a moderate
amount of both assertiveness and cooperation (Jameson, 1999; Rahim, Magner & Shapiro,
2000; Rahim, 2002).

II. PREVIOUS RESEARCH

The inevitability of interpersonal conflict and the potentially positive outcomes
associated with it have prompted researchers to study the underlying factors that dictate how
individuals respond to conflict when it arises. Over the past three decades researchers have
looked at the relationship between the five conflict handling intentions and a wide variety of
personality-related variables. Kilmann and Thomas (1975) examined the relationship
between Jungian personality dimensions and one’s predominant form of conflict handling
behavior, finding that individual differences in psychological tendencies toward conflict
processes were likely to be influential in the conflict-handling mode that the individual
chooses to use in a given situation. Similarly, other researchers suggest that basic
psychological predispositions and differences in personality dimensions influence individual
preferences for approaching and managing conflict (Antonioni, 1998; Moberg, 2001).
Examples of personality variables studied include the “Big Five,” Machiavellianism,
dogmatism, and emotional intelligence.

III. HYPOTHESES

High self-monitors tend to monitor and control the images they present to better fit
with their perception of the social climate, whereas low self-monitors tend to be true to
themselves, exhibiting more consistent behavior across various social contexts (Day,
Schleicher, Unckless, & Hiller, 2002). High self-monitors, with their chameleon-like response
to social context, can vary their behavioral response depending on the situation and the
potential outcomes. Accordingly, research has found that high self-monitors are more likely
to emerge in leadership roles (Day, Schleicher, Unckless, & Hiller, 2002), display
organizational citizenship behaviors (Blakely, Andrews, & Fuller, 2003), and be promoted
within the corporate hierarchy (Kilduff & Day, 1994). Other researchers have suggested that
high self-monitors might not represent the most appropriate leaders, given that high self-
monitors might not display the full portfolio of needed leadership skills (Bedeian & Day,
2004) or might put their own career success and self-preservation above the interests of the
organization (Callanan, 2003).

By its very nature, self-monitoring indicates the degree to which an individual is able
to adapt behavioral responses to meet situational demands. High self-monitors would tailor
their response to the particulars of each conflict episode rather than default to a single
style, while low self-monitors would likely respond to a conflict episode in line with their
primary type, given their proclivity to “reflect their own inner attitudes, emotions, and
dispositions” (Premeaux & Bedeian, 2003, p. 1542). In either case, there would not be a
clear-cut linkage between self-monitoring and any one conflict handling strategy. Given this
expectation, the first hypothesis to be tested is:
H1: Self-monitoring shows no overall association with any of the conflict handling styles.

Proactivity has received considerable research attention as a personality characteristic
that can influence individual behaviors. People with a proactive personality display
aggressive, action-oriented behaviors that allow them to be agents of change who can
transform an organization (Callanan, 2003). Given its desirability within various
organizational contexts (Seibert, Kraimer, & Crant, 2001), it would be of interest to know the
linkage, if any, between proactivity and the various strategic options for handling conflict.

Given the nature of the competing and collaborating styles, where concern for
oneself is manifest in the degree of assertiveness used in conflict situations, it could be
expected that proactivity would have a significant positive association with both. Conversely,
because the avoiding and accommodating styles are unassertive, proactivity should be
negatively associated with them. Given these expectations, the second hypothesis to be tested is:
H2: Proactivity shows a significant positive association with the competing and
collaborating conflict handling styles, and a significant negative association with the avoiding
and accommodating styles.

IV. METHODOLOGY

Research Participants
Subjects for this study (N=157) were a mix of undergraduate and graduate business students
from a large state university located in the mid-Atlantic region of the United States.
Participation in the research was voluntary and was part of normal coursework and
instruction in various management courses. Students were not given incentives to participate
and all responses were anonymous. Further, subjects had not been exposed to coursework in
conflict management prior to participation in the research. Demographic information is
included in Table I.

Table I: Participant Characteristics and TKI Conflict Management Styles

N=157: 137 undergraduate and 20 graduate students
Gender: 58% male and 42% female
Average Age: 24.4 years
Conflict Management Style (from the TKI)
Number Percentage
Competing 36 22.9
Collaborating 16 10.2
Compromising 35 22.3
Avoiding 41 26.1
Accommodating 29 18.5
Total 157 100.0

Assessment Materials
A survey was used to collect data for the study. Participants completed the survey on
their own and at their own pace. Strategic intentions for handling conflict were measured
using the Thomas-Kilmann Conflict Mode Instrument (the TKI). The TKI is based on Blake and
Mouton’s (1964) conceptual model and reports scores for each of the five modes or styles.
The TKI is viewed as easy to administer and is relatively uncontaminated by social
desirability effects (Womack, 1988). The TKI has been used extensively both in research and

692
in training, and it is the most widely used instrument for determining conflict resolution style.
Table I shows the overall pattern for dominant conflict handling style as given by results
from the TKI.

For measurement of the proactive personality variable, participants responded to a
shortened (10-item) version of Bateman and Crant’s (1993) original Proactive Personality
Scale, as designed by Seibert, Crant and Kraimer (1999).

Self-monitoring was measured by a revised 18-item true-false version of the original
Self-Monitoring Scale (Snyder, 1974).

V. DATA ANALYSIS

Correlations were calculated based upon the total self-monitoring and proactivity
scores for each participant along with scores in each of the five conflict handling styles.
Table II shows the mean scores, standard deviations, and Pearson correlations for all of the
main variables included in this research.

TABLE II. DESCRIPTIVE STATISTICS AND INTERCORRELATIONS AMONG
CONFLICT-HANDLING INTENTIONS AND PERSONALITY TYPES
Variable M SD 1 2 3 4 5
1. COMPETING 5.49 3.110
2. COLLABORATING 5.76 2.228 .038
3. COMPROMISING 6.87 2.021 -.273** -.131
4. AVOIDING 6.01 2.717 -.500** -.505** -.137
5. ACCOMMODATING 5.78 2.426 -.546** -.311** -.166* .115

6. PROACTIVITY 39.05 6.005 .325** .184* .019 -.287** -.284**
7. SELF-MONITORING 25.96 3.442 -.113 .025 -.023 .099 .017
Note: N = 157. ** p < .01 * p < .05
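The intercorrelations in Table II are standard Pearson coefficients. As a minimal sketch of the computation behind the table, using hypothetical proactivity and competing-style scores rather than the study's data:

```python
import math
import statistics

# Hypothetical per-participant scores (not the study's data).
proactivity = [42, 38, 45, 33, 40, 36, 44, 31, 39, 47]
competing   = [8, 5, 9, 3, 6, 4, 8, 2, 5, 10]

mx = statistics.mean(proactivity)
my = statistics.mean(competing)
# Pearson r: centered cross-products over the product of the two norms.
cov = sum((x - mx) * (y - my) for x, y in zip(proactivity, competing))
r = cov / math.sqrt(sum((x - mx) ** 2 for x in proactivity)
                    * sum((y - my) ** 2 for y in competing))
print(f"r = {r:.3f}")
```

The fabricated scores above move together almost perfectly, so r comes out near +1; the study's actual correlation between proactivity and the competing style is far more modest (r = .325), and the sketch only illustrates the computation.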

Table III summarizes the information on the five styles and includes the average proactivity
and self-monitoring scores for each of the dominant conflict handling modes.

Table III: Mean Personality and Conflict Handling Scores by Conflict Management Style
Designated Conflict Management Style
Category Competing Collaborating Compromising Avoiding Accommodating
Competing 9.55 (1.63) 6.11 (1.74) 6.11 (1.94) 4.03 (2.13) 4.06 (1.49)
Collaborating 5.37 (1.93) 9.19 (1.28) 5.56 (2.03) 4.63 (1.71) 5.06 (2.08)
Compromising 4.40 (1.94) 6.06 (1.94) 9.29 (0.99) 5.11 (2.11) 5.17 (1.95)
Avoiding 3.98 (2.57) 4.44 (1.72) 6.34 (1.39) 9.27 (1.34) 5.93 (2.20)
Accommodating 3.97 (2.46) 4.97 (2.01) 6.38 (1.50) 5.72 (1.77) 8.86 (1.25)

Proactivity 41.61 (4.47) 39.31 (8.90) 39.76 (5.53) 37.29 (6.13) 37.28 (5.06)
Self-Monitoring 25.61 (3.12) 26.81 (3.62) 25.88 (3.60) 26.39 (3.73) 25.52 (3.18)
N 36 16 35 41 29
Note: Standard deviations are in parentheses.

VI. RESULTS

In line with Hypothesis 1, Table II shows that self-monitoring was not significantly
correlated with any of the five conflict handling styles. In support of Hypothesis 2, Table II
shows the proactivity variable with a significant positive correlation with the competing and
collaborating styles, and a significant negative correlation with the avoiding and
accommodating modes.

VII. CONCLUSION

Future research should continue to examine the extent to which personality influences
not only the strategic dispositions for responding to conflict, but also whether personality
influences or moderates the choice of conflict response when distinct contextual factors are
apparent in the conflict episode. For example, one possible stream of research could test
whether individuals who are relatively higher in self-monitoring, given their supposed ability
to read social cues and adjust their behaviors, are better able to choose an appropriate
response to a conflict episode regardless of their primary conflict handling strategy. In
addition, future research should assess whether the present findings would be different with
an older, more experienced sample. The participants in this study were relatively young
with limited work experience, which might have influenced the overall results.

REFERENCES

Antonioni, D. “Relationship between the Big Five personality factors and conflict
management styles.” The International Journal of Conflict Management., 9, 1998,
336-355.
Amason, A. C. “Distinguishing the effects of functional and dysfunctional conflict on
strategic decision making: Resolving a paradox for top management teams.”
Academy of Management Journal., 39, 1996, 123-148.
Bateman, T. S., and Crant J. M. “The proactive component of organizational behavior.”
Journal of Organizational Behavior., 14, 1993, 103-118.
Bedeian, A. G., and Day, D. V. “Can chameleons lead?” Leadership Quarterly., 15, 2004,
687-718.
Blake, R. R., and Mouton, J. S. The Managerial Grid. Gulf, Houston, TX., 1964
Blakely, G. L., Andrews, M. C., and Fuller, J. “Are chameleons good citizens? A
longitudinal study of the relationship between self-monitoring and organizational
citizenship behavior.” Journal of Business and Psychology., 18, 2003, 131-144.
Callanan, G. A. “What price career success?” Career Development International., 8, 2003,
126-133.
Day, D. V., Schleicher, D. J., Unckless, A. L., and Hiller, N. J. “Self-monitoring personality
at work: A meta-analytic investigation of construct validity.” Journal of Applied
Psychology., 87, 2002, 390-401.

Jameson, J. K. “Toward a comprehensive model for the assessment and management of
intraorganizational conflict: Developing the framework.” International Journal of
Conflict Management., 10, 1999, 268-294.
Kilduff, M., and Day, D. V. “Do chameleons get ahead? The effects of self-monitoring on
managerial careers.” Academy of Management Journal., 37, 1994, 1047-1061.
Kilmann, R. H., and Thomas, K. W. “Interpersonal conflict-handling behavior as reflections
of Jungian personality dimensions.” Psychological Reports., 37, 1975, 971-980.
Moberg, P. J. “Linking conflict strategy to the five-factor model: Theoretical and empirical
foundations.” International Journal of Conflict Management., 12, 2001, 47-68.
Premeaux, S. F., and Bedeian, A. G. “Breaking the silence: The moderating effects of self-
monitoring in speaking up in the workplace.” Journal of Management Studies., 40,
2003, 1537-1562.
Rahim, M. A. “A measure of styles of handling interpersonal conflict.” Academy of
Management Journal., 26, 1983, 368-376.
Rahim, M. A. “Toward a theory of managing organizational conflict.” International Journal
of Conflict Management., 13, 2002, 206-235.
Rahim, M. A., Magner, N. R., and Shapiro, D. L. “Do justice perceptions influence styles of
handling conflict with supervisors?: What justice perceptions precisely?” The
International Journal of Conflict Management., 11, 2000, 9-31.
Seibert, S. E., Crant, J. M., and Kraimer, M. L. “Proactive personality and career success.”
Journal of Applied Psychology., 84, 1999, 416-427.
Seibert S. E., Kraimer, M. L., and Crant, J. M. “What do proactive people do? A longitudinal
model linking proactive personality and career success.” Personnel Psychology., 54,
2001, 845–874.
Snyder, M. “Self-monitoring and expressive behavior.” Journal of Personality and Social
Psychology., 30, 1974, 526-537.

CHAPTER 28

STUDENT PAPERS

MARTHA STEWART: FROM LEONA HELMSLEY TO FOLK HEROINE

Paula Baldwin, University of Texas at San Antonio
p_baldwin_davis@hotmail.com

ABSTRACT

Martha Stewart, the queen of gracious living, is known as an American success story.
But in 2002, she was confronted with the biggest challenge of her career, an investigation of
her personal ImClone stock trading by the Justice Department and the Securities and
Exchange Commission. Martha maintained her innocence throughout, but was brought to trial early in
2004. The court dismissed the original accusation of insider trading from which other charges
stemmed, but a jury did find Martha guilty of misleading federal investigators and obstructing
an investigation. Although she appealed her conviction, Martha served a five-month prison
sentence. The company she founded survived the scandal and continues to thrive thanks to a
well-orchestrated public relations strategy.

I. INTRODUCTION

To many Americans, Martha Stewart is the epitome of gracious living. Many people
assume that she grew up in the type of home pictured in her books and magazine. The fact is
that Martha was born in Jersey City, New Jersey. Her parents, Martha and Edward
Kostyra, a schoolteacher and a pharmaceuticals salesman, were the heads of a close-knit
Polish-American family. From the age of three, she grew up in Nutley, New Jersey with four
brothers and sisters. Martha’s dad taught her gardening when she was only three; her mother
taught her cooking, baking, and sewing. Martha attended Barnard College in New York City,
working as a model to help pay expenses. At the end of her sophomore year, she married
Andrew Stewart, a law student. It was not until 1967 that Martha began a successful second
career as a stockbroker. When recession hit Wall Street in 1973, Martha left the brokerage.
She and her husband moved to Westport, Connecticut, where they restored the 1805
farmhouse seen in her television programs (Cohen, 2002).

Company History
In 1976, Martha Stewart started a catering business with a friend from college, and
then went out on her own. In ten short years, this basement-run business became a million-dollar
enterprise. 1982 saw the publication of Entertaining, co-written with Elizabeth Hawes, the
first of her many now-signature, lavishly illustrated books. The book was an instant
success, and Martha Stewart, fast becoming a one-woman industry, was soon producing
video tapes, dinner music CDs, television specials and dozens of books on hors d'oeuvres,
pies, weddings, Christmas, gardening and restoring old houses. (NCOE, 2003).

Brand Name Identification


Regular appearances on the Today show made her a household name. Her brand name
recognition was on a meteoric rise. With $5 million from Kmart, $10,000 for a lecture and
$900 per person to attend seminars at her Connecticut farm, Martha was a financial dynamo.
In the 1980s, Martha was a contributing editor to Family Circle magazine before breaking off
to start her own magazine, Martha Stewart Living, which quickly attained a circulation of 1.3
million. After appearing on multiple television specials on cable, public and network
television, in 1993, Martha started a syndicated half-hour TV show called, like her magazine,

697
Martha Stewart Living. (Cohen, 2002). Her enterprises have grown into a large
conglomerate, Martha Stewart Living Omnimedia, Inc. (MSLO), with branches in publishing,
television, merchandising, and Internet/direct commerce, providing products in home,
cooking and entertaining, gardening, crafts, holidays, housekeeping, weddings, and child
care. (MSLO Overview, 2003).

II. PROBLEM STATEMENT

Although not a warm public personality, Martha Stewart has shown patience and good humor in the face of the inevitable criticism and satire directed at public figures in the mass media. But beginning in 2002, she was confronted with the biggest challenge of her career, the ImClone scandal, an investigation of her personal stock trading by the Justice Department and the Securities and Exchange Commission. Martha maintained her
innocence throughout, but she was brought to trial in the first months of 2004. The court
dismissed the original accusation of insider trading from which the other charges stemmed,
but in 2004, a jury found Martha guilty of misleading federal investigators and obstructing an investigation. Although she appealed her conviction, she served a five-month prison
sentence. The company she founded continues to thrive, and after her release, she resumed
her business career. Whatever crime she may or may not have committed, one cannot deny
the influence that Martha Stewart has had on how Americans eat, entertain, and decorate
their homes and gardens.

Public Relations Prior to the Scandal


Although Martha has endured some sniping about her less-than-warm exterior and her penchant for the finer things in life, such as Hermès shoes and handbags, there was no denying
that her product was solid and her stock was on the rise. Prior to the insider trading scandal,
her public relations efforts were limited to promotion and expansion and were certainly not
prepared for any type of crisis. Because Martha is so closely identified with her company,
any misstep on her part was bound to have serious implications for her company.

The First Public Relations Misstep


When information about the ImClone issue began to circulate, Martha took the
position that she did not need to say anything, and so she didn’t. When the media are not fed a steady diet of information about a breaking scandal, they speculate endlessly, and in this case they did. This, in turn, negatively affected Martha’s publics: the media, her consumers, the general public, and her investors. The first example of Martha’s poor public relations preparation occurred during her appearance on CBS’ The Early Show. During her
weekly cooking segment, when asked about the ImClone insider trading allegations, Martha
dodged questions by commenting she wanted to focus on her salad. A media feeding frenzy
ensued.

The Second Public Relations Misstep


Obviously, Stewart had no clear public relations strategy after the ‘salad’ show and began to cancel public appearances. Because Martha is so closely identified with her brand, her first strategy should have been to proactively preserve and reinforce the brand.
Although Martha eventually hired The Brunswick Group to handle damage control and help
create a crisis strategy, she should have engaged a crisis team the day the scandal broke. In a
crisis, time is precious and silence is unwise. Until she brought online her website,
www.marthatalks.com, designed to help communicate with her consumers about the scandal,

all of her publics, consumers, media, investors, and employees alike, were left to conjecture,
assume and assign guilt.

Martha’s Public Relations Gets on the Right Path


The ‘Martha Talks’ website was her first positive step towards any type of effective
crisis communication, demonstrating how quickly and effectively the Internet can be used,
particularly in crisis communication, to reach multiple publics. Because readers could not
blog or comment, clearly the public relations strategy was the use of the two-way asymmetric
model. The website concentrated on telling her publics what they ‘needed’ to know about
Martha’s trials and tribulations. The two-way symmetric model could have been used
effectively to allow her publics, particularly consumers, to blog and post, thereby garnering
public opinion support. The website tells Stewart’s side of the story in order to generate
support and most importantly, present Martha as a normal person—not the perfect home diva
demonstrated on her shows. The site was well written, emphasizing her innocence in a
humble, subtle manner. The website became a news source containing timely trial updates,
statements from her legal team, and positive articles on her behalf, all good examples of crisis
communication principles. The website enabled Martha to send this information to her
consumers, media, and to investors, while bolstering her public image in a positive manner.
Of course, the website enabled Martha to limit her interviews, which in view of her lack of
personal warmth and verbal inflexibility, ultimately worked in her favor. However, Martha
did conduct carefully orchestrated, strategically-timed interviews with Larry King and
Barbara Walters. Her website initially received more than 34 million hits and more than
170,000 supportive emails, providing a master stroke by allowing Martha to communicate
with her supporters and garner public support. (Dugan, 2004).

III. MARTHA’S PUBLIC RELATIONS STRATEGY LONG TERM VIEW

Martha’s Internet campaign was not enough to keep her out of jail, and ultimately she served five months in a minimum-security West Virginia prison beginning in October 2004.
However, while Martha was resting in prison, her public relations team was not. Reports of
Martha losing a decorating contest in prison raised eyebrows. Known for regaling her
audiences with countless holiday decorating tips, Martha was unable to lead her team to
victory in a prison decorating contest. (USA Today, 2004). This type of stunt sounds like good public relations strategy. Her heretofore cool public persona was also judiciously cultivated into a much warmer one. The following quote from Martha is a good demonstration
of the result of the cultivation. “The experience of the last five months in Alderson, West
Virginia, has been life altering and life affirming. Someday, I hope to have the chance to talk
more about all that has happened, the extraordinary people I have met here, and all that I have
learned. I can tell you now that I feel very fortunate to have had a family who nurtured me,
the advantage of an excellent education, and the opportunity to pursue the American dream.
You can be sure I will never forget the friends I met here, all that they have done to help me
during these five months, their children, and the stories they have told me. Right now, as you
can imagine, I am thrilled to be returning to my more familiar life. My heart is filled with joy
at the prospect of the warm embraces of my family, friends, and colleagues. Certainly, there
is no place like home.” (Stewart, 2005). Not only is this a clear demonstration of the ‘new’ Martha, but she also alludes to a future in which she will utilize her prison experience, and she pays homage to her family and to America, providing a well-crafted message indeed.

IV. SOCIAL, CULTURAL, POLITICAL AND ECONOMIC PORTENTS

During the public awareness of the trial, MSLO suffered from the negative press,
putting the financial health of the company at risk. Less than one year after the story broke
publicly, MSLO reported its first-ever quarterly losses. (Ritchie, 2003). Clearly, investors
and consumers were losing faith in Martha. It is a tribute to the strength of the public
relations crisis campaign that the recovery of Martha’s company as well as the repair of her
somewhat tarnished image has been so successful. After losing 75 percent of its value during
the insider-trading scandal, the MSLO stock is now hovering around an all-time high.

Clearly, the initial reaction to the breaking scandal was a public relations nightmare.
Underestimating the power of a public figure can be fatal. After the initial misstep and
Martha’s clever employment of a public relations firm, the successful strategy employed
during this time demonstrates that the public relations firm thoroughly researched her
company and worked closely with her executive board, her legal team, and Martha to apply
the existing market research to this crisis. The website was effective in putting Martha’s case
before the public, for all her audiences, including consumers, investors, the general public
and media. Although the website did receive emails, it was not formatted for posting
comments of any kind. Utilizing a two-way symmetric model would have enabled the public
relations firm to measure the posts for audience information levels, attitudes, and public
image of Martha.

As the campaign continues to run, it is necessary for the public relations strategy to
keep measuring and monitoring how well the objectives are being met. At this time, it is not
as crucial whether the public thinks that Martha was innocent or guilty, but that faith in her and her company is restored, emotionally and fiscally. An overview of media clips
should demonstrate a positive trend regarding her public image. The fact that her stock is on
the rise and remains strong indicates that investor confidence has been restored.

V. EVALUATION

Martha has demonstrated a greater respect for the media now than she did prior to her
scandal. Her portrayal of a warmer, more media-friendly persona to the press will go a long
way toward continuing to smooth her way in the media spotlight. As she has done during the
scandal, it is likely that she will continue to court strong relationships with her media.

Even though Martha has the reputation of a diva, she must inspire loyalty among her
staff. Not once during the scandal and trial did a media exposé occur because of a leak
among her staff. Each staff member was obviously very aware of the possible implications of
leaking information and chose not to disclose information. This loyalty communicated itself
well to the public, helping send a strong message of the staff’s belief in Martha’s innocence.

Martha chose to drag out her case and take it to court. The MSLO stock languished at
$10 per share (about half of its pre-scandal price). When Martha was convicted and sentenced
to prison, the stock price began improving. Investors breathed a sigh of relief to see
resolution to the whole issue. Because Martha is brand-identified, it was easy for the
investors to lose faith in her company and indeed, her company did suffer stock losses. But
the mark of a successful public relations campaign is ultimately ensuring the fiscal livelihood
of the company and clearly, as the stock has risen and holds steady and her company
continues to demonstrate a healthy fiscal viability, the campaign did its job successfully.

The restoration of the public’s faith in Martha is evidenced by the growing strength of
her company, the apparent humbleness of Martha during her prison stay, her subsequent
release, and her simple gratitude at being in her own home. Carefully orchestrated footage showed Martha visiting a working-class family to watch them cook their favorite dish and then bringing them on her show to cook the dish in Martha’s own kitchen. This is a flashback to Martha’s working-class roots, reminding America that she is a manifestation of
the American dream. Martha’s diva-like persona, at least for the time being, has been
shelved in favor of a warmer, caring, more personable Martha, continuing to ensure that
Martha as a positive, brand-name celebrity will thrive for years to come.

The execution and distribution of all communication materials linked to the communication strategies are evident. Her website was a powerful media tool for feeding timely information to the media, her consumers, her investors, and the general public.

VI. PROGRAMMING, PLANNING AND EXECUTION

Using the two-way asymmetric model of public relations, the Martha Stewart
campaign worked to persuade all her publics of her innocence and to assure them of her
company’s vital fiscal health. As a public persona, the use of the model continued to assure
her publics that Martha as the brand-name image of the company would continue to be strong
and positive. After her initial couple of missteps, clearly a proactive approach to the scandal
was implemented. The media were not only courted, but given timely updates via her
website. When she was released from prison, a convenient flatbed truck was set up for the
photographers. Not only is she cooperating with the media, but she is also making the job
easy by being selectively accessible to them. She continues to promote her themes of disclosure, trust, quality, and ethics, emphasizing her innocence in spite of her conviction, while at the same time reestablishing her company as a solid, trustworthy investment.

If her publics, investors, or media have any doubt that Martha is back stronger than ever, a quick reread of Martha’s quote shows a woman humbled, but not broken. Now that
Martha is out of jail, her public relations campaign continues full speed ahead. Her release
from prison, complete with a photo op in the snow outside her New England home, got wall-
to-wall coverage by all major media. She has a new television show and her company is
thriving. Everyone makes mistakes. The key is to admit them, ask for forgiveness, and
accept the consequences. Had Martha admitted her mistakes and asked for leniency, her story
might have had a different ending. As it is, her actions could well serve as a public relations
primer for what not to do in a crisis situation. Her categorical denials and defiance only
inflamed the media and investigators. Martha never apologized for what happened. Although an apology is a difficult concept for someone considered to be the epitome of domestic perfection, Martha might have averted a lot of public scrutiny and mitigated her legal consequences with a simple apology. We, as human beings, have a great capacity for forgiveness, but that is
balanced by our equal contempt for deception.

REFERENCES

A. BOOKS
Dezenhall, E. (1999). Nail 'em!: confronting high profile attacks on celebrities & businesses.
1st ed. Amherst, NY: Prometheus Books.

B. INTERNET ARTICLES
Cohen, D. (2002) Biography on Martha Stewart. Retrieved October 31, 2005 from
http://lala.essortment.com/biographyonmar_rino.html
Company Overview. (2003). Martha Stewart Living Omnimedia Company Overview.
Retrieved October 31, 2005 from http://www.corporate_ir.net/ireye/ir_site.html.
CourtTV. (2003) Martha Stewart indicted on nine counts stemming from insider-trading
scandal. Retrieved November 1, 2005 from
http://www.courttv.com/people/2003/0604/marthastewart_ap.html.
Dugan, K. (2004, July 16). The Martha Stewart crisis. Message posted to Global PR Blog
Week 1.0, archived at
http://www.globalprblogweek.com/archives/the_martha_stewart_c.php
National Commission on Entrepreneurship. (2003). Stories of Entrepreneurs. Retrieved
November 1, 2005 from http://www.noce.org/toolkit/stories_stewart.html.
Ritchie, A. (2003). Save Martha timeline. Retrieved October 31, 2005 from
http://www.savemartha.com/timeline.html
Report: Stewart loses decorating contest in prison. (2005). USA Today.
Retrieved November 1, 2005 from http://www.usatoday.com/money/2004-12-31-
stewart-loses_x.html.
Report: Stewart convicted on all charges. (March 5, 2004). CNNMoney.com. Retrieved
November 30, 2005 from
http://money.cnn.com/2004/03/05/news/companies/martha_verdict.
Stewart, M. (2005). News from Martha. Retrieved November 1, 2005 from http://www.marthastewart.com/page.jhtml?type=learn-cat&id=cat20171.
SEC charges Martha Stewart, broker Peter Bacanovic with illegal insider trading. (2003).
Retrieved November 30, 2005 from http://www.sec.gov/news/press/2003-69.html.

CASE STUDY OF TOLL ROAD PROPOSAL FOR LOOP 1604

Sara V. Garcia, University of Texas at San Antonio


sari-garcia@sbcglobal.net

Jessica M. Perez, University of Texas at San Antonio


j_marieperez@yahoo.com

ABSTRACT

Named after former Bexar County Judge, Charles W. Anderson, Loop 1604 was
originally built in the 1960s as a Farm-to-Market and State Loop road and is known today as
the “Death Loop” around the city of San Antonio. Since its original design was a two-lane
rural state highway, the expansion to a four-lane freeway left a rather narrow median.
Speeding, the lack of barriers, and the high volume of traffic on such a narrow stretch of freeway are a cause for immediate disaster in the event of an accident. The Texas Department of
Transportation (TxDOT) has claimed awareness of the problems along Loop 1604 and
proposed the installation of a toll road as a long-term solution. This study proposes a public
relations plan to ensure the passing of the toll road proposal.

I. INTRODUCTION

In 1917, the Texas Highway Department (THD) was established by the Texas
Legislature to administer federal funds for highway construction and maintenance. By the
mid 1970s, the Legislature merged the Texas Mass Transportation Commission with the
THD to form the State Department of Highways and Public Transportation (SDHPT).
Ultimately, the Texas Department of Transportation (TxDOT) was formed by combining the
SDHPT, the Department of Aviation and the Texas Motor Vehicle Commission in 1991
(www.dot.state.tx.us/insdtdot/geninfo.htm?pg=history, 2005). Through its mission to
provide safe, effective and efficient movement of people and goods, the Texas Department of
Transportation is able to ensure the social, political and economic needs of Texas citizens
(www.dot.state.tx.us/insdtdot/geninfo.htm, 2005). Socially, TxDOT is committed to
providing comfortable, safe, durable, cost-effective, environmentally sensitive and
aesthetically appealing transportation systems that work together. It also ensures a desirable
workplace for its employees, which creates a diverse team of all types of people and
professions. Politically, it promotes a higher quality of life through partnerships with the
citizens of Texas and all branches of government by being receptive, responsible and
cooperative. In addition to its social and political commitment, it uses efficient and cost-
effective work methods that encourage innovation and creativity, thereby maintaining its
economic responsibility to the state of Texas.

Since 1990, the northern arc of Loop 1604 has experienced a substantial amount of
growth. The growth has resulted from new subdivisions and apartments, which have
increased the demand for schools, which in turn have attracted businesses to the area. All of these factors have increased the volume of traffic. The Average Annual Daily Traffic
(AADT) recorded traffic growth well above 200% over the entire route of Loop 1604 from
1990 to 2003. Of the three sections, the northern arc near Bandera Road has had the most
growth at 500%. In addition, the AADT also reported that the section west of Highway 281

has an average of 100,000 vehicles per day traveling on Loop 1604
(www.texhwyman.com/l1604.htm, 2005). As of 2005, the accidents along the loop have received a substantial amount of media attention, which may correlate with the 250% increase in fatal accidents from 2003 to 2004 (www.sanantonio.gov/sapd/TrafStats.htm, 2005). The Texas Department of Transportation recently responded to citizens’ concerns
about their safety on the loop by collaborating with the San Antonio Police Department in
setting up speed traps from June 5 to July 6 while temporary concrete barriers were being
placed along the medians to help prevent any further fatalities.

II. OBJECTIVES

To ensure the passage of the toll road proposal, we have identified informational,
attitudinal and behavioral objectives to target our key publics. The key publics are commuters
on Loop 1604, San Antonio residents, business establishments along Loop 1604, local
Chambers of Commerce, San Antonio Police Department (SAPD), City of San Antonio
elected officials and local media outlets.

Informational Objectives
• Educate 50% of Loop 1604 commuters, San Antonio residents, and business
establishments along the loop about the issues regarding the safety of the freeway
including traffic, dangers of speeding, median, barriers and toll roads within one year.
• Educate 100% of local Chambers of Commerce, SAPD, City of San Antonio Elected
Officials, and local media about the issues regarding the safety of the freeway
including traffic, dangers of speeding, median, barriers and toll roads within one year.
• Create awareness of 100% (of the 50% educated) regarding the benefits of the toll
road among Loop 1604 commuters, San Antonio residents, and business
establishments along the loop within one year.
• Create awareness of 100% regarding the benefits of the toll road among local
Chambers of Commerce, SAPD, City of San Antonio Elected Officials, and the local
media within one year.
• Inform 100% (of the 50% educated) of Loop 1604 commuters, San Antonio residents,
and business establishments along the loop of the cost involved with accepting the
proposed toll road (state funding, taxpayers’ dollars, and cost to commuters once the
toll road is in operation) within one year.
• Inform 100% of local Chambers of Commerce, SAPD, City of San Antonio Elected
Officials, and the local media of the cost involved with accepting the proposed toll
road (state funding, taxpayers’ dollars, and cost to commuters once the toll road is in
operation) within one year.
• Increase exposure of the benefits of the toll road by 30% through local media
support within one year.

Attitudinal Objectives
• Convince 25% of Loop 1604 commuters to practice defensive driving techniques
within one year.
• Convince 50% of the local media outlets about the importance of covering safety
issues along Loop 1604 and the developments of the proposed toll road within one
year.
• Create favorable attitudes about the proposed toll road among 30% of Loop 1604
commuters, San Antonio residents, business establishments along the loop, local

Chambers of Commerce, SAPD, City of San Antonio Elected Officials, and the local
media within one year.

Behavioral Objectives
• Persuade 15% of Loop 1604 commuters to drive defensively within one year.
• Increase local media coverage about the importance of the safety issues along Loop
1604 and the developments of the proposed toll road by 25% within one year.
• Encourage at least 20% of San Antonio residents to get out and vote in the toll road
election.
• Encourage at least 50% of the voter turnout to vote in favor of the proposed toll road.
• Have an attendance of at least 200 citizens at each of the three community forums.

III. PLANNING AND EXECUTION

TxDOT should follow the two-way asymmetric public relations model to address the issues regarding safety along Loop 1604. It will help to promote change in attitudes and behaviors through honest feedback. The main theme for the campaign would be “Keeping San Antonians Safe in Every Direction.” The messages that would accompany the theme would be less traffic, fewer accidents, fewer fatalities, faster access across town, and an overall safer commute.

To kick off the campaign, TxDOT will conduct a news conference at the
Drury Inn & Suites on the access road along Loop 1604 between Highway 281 and Stone
Oak Parkway. The conference will provide local media and the community-at-large with
initial and complete access to details about the dangers of Loop 1604 and the proposed
solution. The details will include issues regarding the safety of the freeway, traffic, dangers
of speeding, medians and barriers. It will also highlight the proposal for the toll road
including the benefits and costs involved (state funding, taxpayers’ dollars, and cost to commuters once the toll road is in operation). Attendees will also be able to view a 3-D
model of the proposed toll road. Three months after the campaign launch, TxDOT will have
community forums at three locations along Loop 1604 which will take place on the same day
and at the same time. The locations will include: Live Oak Civic Center, Alzafar Shrine
Temple and the University of Texas at San Antonio Convocation Center (1604 campus). The
community forums will provide complete access to details about the dangers of Loop 1604
and the proposed solution. The details will include issues regarding the safety of the freeway,
traffic, dangers of speeding, medians and barriers. They will also highlight the proposal for
the toll road including the benefits and costs involved (state funding, taxpayers’ dollars, and cost to commuters once the toll road is in operation). In addition, the forums will provide
citizens an opportunity to ask questions, provide feedback and raise any additional concerns.
Attendees will also be able to view a 3-D model of the proposed toll road.

An early voting party will be held at three designated voting sites (to be determined)
along Loop 1604 between Bandera Road and FM 78. Another early voting party will take
place at a central voting site in downtown San Antonio. Light snacks and refreshments will be served throughout the day at each of the events, courtesy of H-E-B. On the first day of voting,
Krispy Kreme will serve free doughnuts, coffee and orange juice at selected voting sites
throughout the city. Uncontrolled media for this proposal include media kits, press releases,
feature stories, interviews, news conference and photo opportunities. Controlled media for
this proposal include informational brochures, informational packets, information video,
flyers, billboards, public service announcements, community forums, website and

PowerPoint presentations. Source credibility for the campaign will come from Hope
Andrade, the Texas Commissioner of Transportation, and Red McCombs, well-respected
local businessman, who will be the official spokesperson for the campaign. Both will
participate in the community forums, news conference and kick-off parties.

Salient information will include facts about the dangers of Loop 1604, and benefits
and costs involved with the proposed toll road. It will also highlight TxDOT’s genuine
commitment to ensure the safety of the travelers on its roads. Verbal and nonverbal cues will
be in a serious tone using key words such as safety, benefits, solution, life-saving, and
priceless solution. Two-way communication will take place at the news conference,
community forums and the voting sites. Opinion leaders will include the president of each
local Chamber of Commerce, City of San Antonio Police Chief, Mayor and Council
Members, Texas Commissioner of Transportation, Hope Andrade, and Red McCombs.

IV. EVALUATION

Impact Objectives
• Obtain a phone list of San Antonio households and conduct a phone survey to
assess if residents received the brochures, are aware of the benefits and costs involved
with the proposal of the toll road, and their attitudes toward defensive driving and the
proposal.
• Ensure that a campaign representative conducts a face-to-face meeting with the
president of each local Chamber of Commerce, City of San Antonio Police Chief,
Mayor, Council Members, and local media representatives.
• Assess if the president of each local Chamber of Commerce, City of San Antonio Police
Chief, Mayor, Council Members, and local media representatives were made aware of
the benefits and costs involved with the proposal of the toll road by having each
representative fill out a Likert-type scale at the end of the face-to-face meeting.
• Obtain traffic accident and fatality statistics on Loop 1604 for the year before, during
and after the campaign to determine if there was a correlation between the message
about the importance of defensive driving techniques and the statistics.
• Obtain the number of San Antonio residents eligible to vote at the time of the election.
After the votes have been tabulated, find out the actual number of voters to determine if
the voter turnout objective was met.
• Obtain the number of San Antonio residents who voted in favor of the proposed toll
road.
• Count attendees at each of the community forums.

Output Objectives
• Media exposure will be determined by using media monitoring and clipping techniques
of the campaign coverage.

V. CONCLUSION

Through the recommended plan of action, we hope that the community-at-large is not only aware of the dangers along Loop 1604 but is also receptive and open-minded toward the proposed toll road, which we feel is the first step to preventing unnecessary accidents and fatalities. This solution will alleviate citizens’ concerns regarding their safety.

REFERENCES

San Antonio Police Department, Traffic Fatality Statistics. (2005). Retrieved July 31, 2005,
from
www.sanantonio.gov/sapd/TrafStats.htm
TxDOT History. (2005). Retrieved August 4, 2005, from
www.dot.state.tx.us/insdtdot/geninfo.htm?pg=history
TxDOT’s Mission & Vision. (2005). Retrieved August 4, 2005, from
www.dot.state.tx.us/insdtdot/geninfo.htm
San Antonio Area Freeway System, State Loop 1604. (2005). Retrieved July 31, 2005, from
www.texhwyman.com/l1604.htm

LESSONS OF OPTIMUM LEADERSHIP FROM SMALL-CITY MAYORS

Michael A. Moodian, Pepperdine University


Michael.Moodian@pepperdine.edu

ABSTRACT

The study of leadership principles and skills in undergraduate and graduate schools is
generally insufficient to prepare those with political career aspirations. The author
interviewed three Southern California mayors, examined their leadership styles, and linked
their political success with organizational effectiveness achieved through the use of specific
leadership models.

Despite their disparate pathways to office and the sizes of their cities, the interviewees
shared many similar leadership qualities. Each mayor worked hard to gain the trust and
respect of his or her electorate, was an excellent communicator, built strong relationships, and
strived for consensus. Their differences were mainly manifested in how they mixed
transformational and transactional leadership styles.

I. INTRODUCTION

Acclaimed physicist Albert Einstein once stated, “Politics is more difficult than
physics” (as cited in Reardon, 2005, page 2). While this statement may be true, political
leaders have been the strongest and most vital visionaries and revolutionaries throughout
history. From the nation’s representatives in Washington to city council members in the small
towns of America’s heartland, political leaders are the voice of the people; they are the
catalysts of change for a better society, who take great strides to ensure the progression of
their citizens’ rights and liberties. The purpose of this article is to take an in-depth look at
three political leaders in an attempt to gain insight into their styles and philosophies. In this
article, the results of interviews with three city mayors will be presented, along with an
analysis of their approaches to leadership.

II. BACKGROUND OF SUBJECTS

In analyzing the leadership styles of three separate mayors, an attempt was made to
select subjects who were diverse—both in personalities and the demographics of the cities
they govern.

On March 16, 2005, the first subject to be interviewed was Dr. Brenda Ross, Mayor of
Laguna Woods, CA. What separates Laguna Woods from most cities is that the average age
of its citizens is 78. A new city, Laguna Woods received its approval for incorporation in
1998 and local voters ratified its proposal in 1999. Mayor Ross, who is 89 years of age, was
reelected in 2004, and her third term expires in 2008. In addition to her mayoral
responsibilities, Ross acts as commissioner on the California Commission on Aging and
serves on multiple other commissions, committees, and boards of directors.

On March 22, 2005, Mayor Randall Bressette of Laguna Hills, CA was interviewed at
the Laguna Hills City Hall. Reporting a 2003 population of 32,875, the City of Laguna Hills
is an affluent community with a low crime rate. A 23-year veteran of the Navy Reserve,
Mayor Bressette was first elected to the city council at the time of the city’s incorporation
in 1991 and has served in a leadership capacity within the city ever since. In addition to his
mayoral duties, Bressette acts as alternate representative to the El Toro Reuse Planning
Authority, a group that works to oppose the creation of an airport on the grounds of the
former El Toro Marine Base.

On March 24, 2005, Mayor Trish Kelley of Mission Viejo, CA was interviewed in her
office at city hall. A volunteer in the community since 1977, Mayor Kelley serves as the
city’s representative for the Orange County Fire Authority Board of Directors, and as the
alternate representative on the board of directors for the San Joaquin Hills Transportation
Corridor Agency and the Orange County Council of Governments General Assembly.

III. SYNOPSIS OF INTERVIEW WITH BRENDA ROSS

The leadership style of Mayor Brenda Ross is as multifaceted and diverse as her
captivating background. “I usually have a leadership position, not because I seek it, but
because I usually get asked to do it, and I do that kind of thing well,” she stated. “I think
being a team leader doesn’t mean that you sit back and let everybody do it, but I listen to
everybody first. I try first to understand and then be understood” (B. Ross, personal
communication, March 16, 2005).

When speaking of the relationship between a leader and a follower, Ross described it as
“one of mutual respect, one of equality. In other words, you’re not in a better position
because you’re a leader. You’re in a worse position because it’s your responsibility to get
everybody working together.” Her deliberate approach to decision making came through when Ross
said, “If somebody comes at me today and they want me to vote on something, I’d like to
sleep on it. It gives me time to think it through. I don’t like to jump to conclusions” (B. Ross,
personal communication, March 16, 2005).

When speaking of what she values in a leader and a follower, Ross stated “listening”
and added, “That’s the single most important (attribute) in either a follower or a leader,
because if you listen to the other person, you hear what he has to say and you can mull that
over in your mind.” Additionally, she said, “A leader needs to have some vision, has to have
some kind of strategic plan, whatever you want to call it. I don’t think you just get to be a
leader and run out in front of a group.” In explaining how she makes decisions, Ross stated “I
like to be sure that I’ve thought of all sides of it if I can. If I can’t, I like to call somebody that
I know will be in opposition, and hear what those worries are per se” (B. Ross, personal
communication, March 16, 2005).

IV. SYNOPSIS OF INTERVIEW WITH RANDALL BRESSETTE

Randall Bressette offered a somewhat different perspective. When asked if he felt individuals
are born or trained to be leaders, he replied “I think there are people, who by nature of their
parents and their extended family and their surroundings, become leaders simply because
they are forced into it.” Then, he added “Most people though I think obtain their leadership
skills in their teenage years, as their parents are a very strong influence, mine were, and they
look at other people who they come to respect” (R. Bressette, personal communication,
March 22, 2005).

When speaking of important leadership skills, Bressette stated “Listening. Patience.
Courage. Integrity.” Then he said, “With that goes ethics, because integrity and ethics aren’t
necessarily the same thing.” When talking about obstacles, he said, “I am rather
straightforward with people, and I don’t believe very much in political correctness as we
define it today. I believe in being polite, but I also believe in being very straightforward.”
When talking about what the relationship between a leader and a follower should be, he
stated “Teamwork. The leader and the follower can be interchangeable. It’s the person who
comes with the plan, who comes with the energy, who will generally turn out to be the
leader” (R. Bressette, personal communication, March 22, 2005).

When asked how he made decisions, Bressette replied “As a member of the city
council, my ultimate question is ‘What would my neighbors want me to do?’” Also,
“Whether it’s from an electorate or from a board of directors that’s simply by consensus, the
guy in charge has got to make a decision that he thinks is right for the group” (R. Bressette,
personal communication, March 22, 2005).

V. SYNOPSIS OF INTERVIEW WITH TRISH KELLEY

Speaking with Trish Kelley proved to be insightful because she comes from a much
different background. When asked how she would deal with a resister, she replied “I try to
just be honest no matter what.” She added “It really doesn’t happen too often. If I have
something that needs to be done and it can’t be done, then they just respectfully tell me that
this won’t work and here are the reasons why” (P. Kelley, personal communication, March
24, 2005).

When talking about the relationship between a leader and a follower, she answered,
“I’ve always believed that the best leader is someone who can complement and bring out the
strengths in the people that work for you or work with you.” Additionally, “I’ve always tried
to have a very positive outlook.” When talking about her values, she stated “I value integrity,
just to know that a person is honest, and trustworthy, and will be able to make decisions for
the right reasons.” She added “Also, an open mind, which that’s been one of the biggest
surprises being what I would call a normal person and working with a bunch of politicians”
(P. Kelley, personal communication, March 24, 2005).

VI. LEADERSHIP ANALYSIS

Based on the thorough and in-depth interviews with each mayor, this section presents a
leadership analysis that describes the styles and philosophies of each, while examining
the factors shared by all.

First and foremost, Brenda Ross and Trish Kelley are both prime examples of
emergent leaders (Northouse, 2004). However, they took different paths to becoming the
most influential members of their respective groups. As Ross stated, she is
actually pursued for leadership positions because of the respect and admiration she has
gained from subordinates throughout her career. Kelley, meanwhile, built a long-term
reputation for reliability through her consistent volunteer work. In other words, she
continually took on volunteer leadership positions that others preferred to avoid. It’s her
emergence and empowerment by others that resulted in her leadership position.

The three leaders differ in their situational approaches to leadership. Ross, through her
demeanor (and possibly her maturity as a leader), epitomizes high directive/high supportive
behavior (coaching). She has the capability to direct a team to institute a new
city, yet, at the same time, she places a large amount of effort in her supportive capabilities.
She offers a nurturing, caring, and compassionate attitude. Meanwhile, Bressette exemplifies
a high directive/low supportive (directing) method. He takes pride in his undeviating
approach to leading his staff. Additionally, through the path-goal method, he gives clear
instructions on what is to be expected and at what times. Kelley displays a low directive/high
supportive (supporting) method, and seems to place a particular emphasis on a supportive
environment (Northouse, 2004).

A notable fact from Bressette’s interview is that he practices situational leadership; he
changes the degree to which he practices directive and supportive behavior by adapting to
given environments (Northouse, 2004). Ross practices this to a small degree; however, it’s
Bressette who spoke of times when it’s necessary to alter his leadership style. This proves
very effective for both, as it enables them to lead teams by making modifications to their
approach when necessary.

Additionally, Bressette demonstrates a high degree of trait approach characteristics.
As Northouse (2004) writes, he exudes intelligence, self-confidence, determination, integrity,
and sociability. His intelligence is displayed through his verbal and reasoning skills, he
exudes a great amount of self-esteem, he has a remarkable amount of initiative, he appears
honest and trustworthy, and he has moderately satisfactory interpersonal skills. Meanwhile,
Ross exudes many of the same traits, yet does so in a supportive way versus a directive way.
Kelley stands somewhere between a trait and a skills approach. In exhibiting the trait
approach, she reveals high degrees of sociability and integrity. Simultaneously, in
demonstrating the skills approach, she displays exceptional human skills.

Bressette shows a strong understanding of the in-group/out-group mentality. Through
the leader-member exchange theory, Bressette is aware of the negative circumstances
surrounding a feeling of access to privileged information among certain members of office.
His attention to this likely prevents possible conflicts and feelings of animosity among staff
members. It’s apparent that, to some degree or another, all three leaders attempt to make
every subordinate a part of the in-group. They have respect for their staff members and prove
that they are leaders who value those around them (Northouse, 2004).

A facet that all three mayors share is their adherence to transformational
leadership. All three tend to raise their followers’ level of moral maturity, convert followers to
leaders, broaden and enlarge the interests of their followers, motivate and entice others to go beyond
their personal interest for the betterment of the organization, and address the sense of self-
worth for each of their followers (Northouse, 2004). All three realize that this is an effective
route and one that leads to success. Additionally, they convey positive emotions about the
future that include faith, trust, and confidence, which leads to better performance on the job
(Seligman, 2002).

Another aspect that all three mayors share is their focus on creating and maintaining
learning organizations. Senge (1990) defines a learning organization as a workplace
characterized by sharing, growth, and adapting. His notion is that a learning organization
doesn’t make the same mistake twice and that the barriers that get in the way are the absence
of time and a reactive versus proactive ideology within the culture of the organization.
Robbins (2005) adds that a learning organization is characterized by a shared vision that
everyone agrees on, people openly communicating without fear of criticism or punishment,
and people sublimating their personal self-interest and fragmented departmental interests to
work together to achieve the organization’s shared vision.

711
Through an analysis of the three leaders who were interviewed, it’s evident that their
styles differ in both degree and kind. Yet, they are successful leaders in that they embrace
challenge with meaning and passion, while seizing the initiative with enthusiasm (Kouzes &
Posner, 2002). The next section will discuss the primary lessons that were gained through this
project.

VII. LEADERSHIP LESSONS

Several leadership lessons were gained over the course of this project. This
section focuses on the primary lessons that arose during the
interviews.

First, all three leaders seem to have mastered what Cashman (1999) refers to as
purpose mastery. “Focusing on how to make a difference” (p. 65) refers to building upon
one’s strengths. The three know their strengths, recognize their
weaknesses, and work to build upon those strengths. Thus, an important lesson relates to
the validity of the findings of Buckingham and Clifton (2001). In Now, Discover Your
Strengths, the two refer to building a “strength-based organization” (p. 40). Seeing these
leaders capitalize on their strengths in action provides insight. The primary lesson is
that focusing on one’s strengths rather than one’s fears and weaknesses is a key aspect of
effective leadership.

The second lesson is the amazing results that each mayor is able to obtain by utilizing
his or her passion, vigor, and drive. For all, their passion is helping others, serving their
communities, and making their cities better places to live. The passion is demonstrated
differently for each. Kelley strives to serve the community as a volunteer with the PTA, Girl
Scouts, church, and various other community activities. Her approach has driven her to excel
as the leader of her city. Ross has had a distinguished career of serving her country, teaching
at Boston University, serving on state commissions, and leading the incorporation of her city.
Today, she stands as an emergent leader of one of America’s most unique cities. Bressette
pursues his passion for driving innovation and thinking outside of the box—in his
military service, business life, and political career. He strives to take on new endeavors and
lead others to achieve the best results. Through this passion, they have been able to live a life
in which they can achieve the intersection of personal greatness, leadership greatness, and
organizational greatness (Covey, 2004).

Finally, the third lesson from this project is that many of Covey’s notions in The
Seven Habits of Highly Effective People hold true. Each leader reinforces the seven-habits
model, affirming the necessity for work to be meaningful, the focus on beliefs and behaviors
that provide peace and spirituality, and the focus on discipline.
Perhaps most important, each leader realizes the importance of relationships that provide
energy (Covey, 1989). Each is compelled to lead from the heart and accomplish dynamic
objectives.

VIII. CONCLUSION

After conducting interviews with Brenda Ross, Randall Bressette, and Trish Kelley, it
can be concluded that each leader demonstrates great care for his or her city. After gaining
insight into their styles and philosophies, it’s encouraging to see that there are genuine
leaders who truly place the best interests of their followers ahead of their own
personal interests. Hopefully, such a trend will extend to other organizations in the years
ahead.

REFERENCES

Buckingham, Marcus, & Donald O. Clifton. Now, Discover Your Strengths. New York, NY:
Free Press, 2001.
Cashman, Kevin. Leadership From the Inside Out: Becoming a Leader for Life. Minneapolis,
MN: Executive Excellence Publishing, 1999.
Covey, Stephen R. The Seven Habits of Highly Effective People. New York, NY: Free Press,
1989.
Covey, Stephen R. The 8th Habit. New York, NY: Free Press, 2004.
Kouzes, James M., & Barry Z. Posner. The Leadership Challenge, 3rd ed. San Francisco, CA:
Jossey-Bass, 2002.
Northouse, Peter G. Leadership Theory and Practice. Thousand Oaks, CA: Sage Publications,
2004.
Reardon, Kathleen K. It’s All Politics: Winning in a World Where Hard Work and Talent
Aren’t Enough. New York, NY: Currency, 2005.
Robbins, Stephen P. Essentials of Organizational Behavior, 8th ed. Upper Saddle, NJ: Pearson
Education, 2005.
Seligman, Martin E.P. Authentic Happiness. New York, NY: Free Press, 2002.
Senge, Peter M. “The Leader’s New Work: Building Learning Organizations.” Sloan
Management Review, 32 (1), 1990, 7-23.

CULTURAL ADAPTATION OF AUSTRIAN AND U.S.-AMERICAN WEBSITES:
A COMPARISON USING HOFSTEDE’S CULTURAL PATTERNS

Wesley McMahon, California State University, Chico
jpwookie@yahoo.com

Dominik Maurer, California State University, Chico
dominik_maurer@gmx.de

ABSTRACT

The purpose of this paper is to explore how cultural values are depicted on Austrian
and US websites, using a cross-cultural approach. Using Hofstede’s cultural dimensions and a
predefined conceptual framework, Austrian and US websites were qualitatively analyzed in
order to measure the prevalence of defined cultural attributes. This study suggests that as the
global market continues to expand, cultural customization of international websites is
becoming less of a choice and more of a necessity.

I. INTRODUCTION

With almost one sixth of the world population online, the Internet has become the
most important marketing medium to date (http://www.c-i-a.com/pr0904.htm). Consumers
can easily shop from the comfort of their homes, without having to depend on business
operating hours. Current customer awareness of global offerings is higher than ever before.

But the website is still a virtual representation of a shop or an office. The visitors are
still people with individual values, norms and beliefs. These values define how one will react
to symbols and sensory inputs, and are ultimately the basis for one’s culture. In the same way
as shops differ from one culture to another, websites have to be culturally adapted to their
audience. Websites should be locally tailored for each specific country in order to make sure
the site communicates meaning properly and serves the needs of the visitor.

II. LITERATURE REVIEW

It is not surprising that many scholars have written about this topic and have tried to
provide a framework for cultural understanding. The starting point for this research is
Hofstede’s early work about culture, in which he (Hofstede, 1991, 5) identifies culture as "the
collective programming of the mind which distinguishes the members of one group or
category of people from another." Hofstede (2001) specifies five cultural dimensions that
allow for the classification of cultures: Power Distance, Uncertainty Avoidance,
Individualism, Masculinity and Long-/Short-Term orientation. He also gives very helpful
index ratings that show how countries and regions score within these dimensions.

Cultural consumer behavior is shaped by the marketing practices in culturally specific


marketplaces (Darling & Taylor, 1996). Darling and Taylor reach the conclusion that people
from different cultures have different perceptions of the same product and/or marketing
practice. Therefore, the ideal, country-specific marketing tactic should be used (Darling &
Taylor, 1996). Cultural differences will prevail over standardization and international
retailers must adapt in order to succeed.

The crucial importance of intercultural competence and knowledge, for businesses, is
shown in an example by Mayo (1991). He examined exporters who entered foreign markets
and failed. The underlying reason for their failure was due to a lack of knowledge concerning
country-specific business practices. Business practices in the private sector are determined by
values, norms and beliefs. The cultural taxonomy that was developed by Hofstede is often
used as a starting point for further investigation. Rawwas (2001) connects this taxonomy to
the ethical beliefs of consumers from the USA, Ireland, Austria, Egypt, Lebanon, Hong Kong,
Indonesia, and Australia. He concludes that marketing strategies need to be developed
bottom-up. The purpose is to communicate those values which are important to the consumer
and to that consumer’s native society.

Since customers are different from one country to the next, sellers must receive
intercultural communication training (Bush & Ingram, 1996) and specific approaches should
be taken in order to attract foreign customers (McDonald, 1994). Bush et al. (2001) make the
statement that the importance of cultural adaptation is understood, but not satisfactorily
executed. The ideal marketer should have “empathy, world mindedness, low ethnocentrism,
and attributional complexity” to excel in the global market. Hofstede’s
Individualism/Collectivism dimension was used by Litvin and Kar (2003) to explore the
relationship between one’s self-perception and the perceived image of a product. It was
concluded that “cultural differences have once again been shown to play a significant role in
consumers' […] attitudes.”

It is obvious that a great deal of work has been completed showing the enduring
significance of intercultural competence for businesses. Going into more detail, some
researchers have studied how this knowledge is being integrated into electronic marketing. The
two most important variables which must be tailored to a specific country, when considering
an e-commerce website, are language and infrastructure for payment and delivery (Bin, Chen &
Sun, 2003).

Junglas and Watson (2004) found that infrastructure and market surroundings play an
important role in website design. Junglas and Watson also explain their findings using
Hofstede’s cultural dimensions. Further studies revealed that most corporations in fact adapt
their websites according to the particular country (Singh, Kumar & Baack 2005).

As shown in many studies, cultural adaptation of websites is not only very important,
but also necessary. It is understood that a successful launch of an e-commerce website in a
foreign country demands more than simply translating the content. Most studies in this field
have focused on comparison between websites from the US and European countries,
including the United Kingdom, Germany, France, Ireland, Spain, and Greece (Darling et al,
1996; Rawwas, 2001; Singh et al 2005).

III. HYPOTHESIS

The intention of this study is to compare websites from both Austria and the United
States in order to find out if Hofstede’s cultural dimensions were used in site construction.
Further analysis was done in order to ascertain the extent to which the dimensions were used
and whether they were incorporated in conformity with Hofstede’s findings.

Austria According to Hofstede:

According to Hofstede (2003), the highest ranking dimension for Austria is
Masculinity at an index of 79. Austria also has a relatively high Uncertainty Avoidance index
of 70 (2003). Austria has an Individualism/Collectivism index score of 55 (2003), which is
considered relatively neutral.

United States According to Hofstede:

The United States’ most defining cultural dimension is its level of individualism.
With a score of 91 the United States ranks as the most individualistic culture Hofstede (2003)
reviewed. The US is comparatively low on Uncertainty Avoidance (UAI), with a ranking of
46 (Hofstede, 2003). It is the blend of these two cultural dimensions, along with a low Power
Distance index and a medium Masculinity index, which accounts for the unique cultural
positioning of the United States.

IV. ANALYSIS

Based on Hofstede's dimensions, two countries were selected that are as culturally
different as possible. The countries selected were the United States and Austria. The selection
was limited by the authors’ language proficiency. The panel chosen consists of 60
websites, 30 websites from each country. The industries chosen were banking, insurance,
business-to-business, ski resorts and football.

Methodology
Each website was tested on the manifestations of Hofstede’s cultural dimensions. To
achieve this, a questionnaire with seven categories, developed and used by Singh,
Zhao and Hu (2003) in an earlier study, was utilized. The categories used were Collectivism,
Individualism, Uncertainty Avoidance, Power Distance, Masculinity, Low-Context and High-
Context. Each category was broken down into features, which by definition, account for the
respective cultural category. The website analysis consisted of the identification of particular
features on a website and measuring the degree to which these features were implemented.

The degree of the incorporation of those features was measured in a five step scale
with the scores 0, 1, 2, 3 and 4. The lowest score was given for the total absence of the
feature; the highest score was awarded for a prominent depiction on the website’s front page
or for consistent appearance. Scores of 1, 2 and 3 were awarded in a relative manner based upon
the degree to which the website incorporated a particular feature.

After scoring of the websites had been completed, the scores were subjected to
statistical analysis. ANOVA tests were applied to the entire data set. These tests allowed
for the comparison of each variable in order to determine whether deviations in each
dimension were statistically significant.
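
As a rough illustration of the analysis just described, the sketch below computes a two-group one-way ANOVA F statistic for a single feature scored on the 0–4 scale. The scores are hypothetical, and the hand-rolled F computation stands in for whatever statistical package was actually used in the study.

```python
# Illustrative sketch only: hypothetical 0-4 feature scores, not the study's data.

def one_way_anova(group_a, group_b):
    """F statistic for a one-way ANOVA with two groups (df = 1, n_a + n_b - 2)."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    grand_mean = (sum(group_a) + sum(group_b)) / (n_a + n_b)
    # Between-group sum of squares (k = 2 groups, so df_between = 1)
    ss_between = n_a * (mean_a - grand_mean) ** 2 + n_b * (mean_b - grand_mean) ** 2
    # Within-group (error) sum of squares, df_within = N - k
    ss_within = (sum((x - mean_a) ** 2 for x in group_a)
                 + sum((x - mean_b) ** 2 for x in group_b))
    return (ss_between / 1) / (ss_within / (n_a + n_b - 2))

# Hypothetical "privacy statement" scores (0 = absent, 4 = prominent) for six
# US and six Austrian websites:
us_scores = [4, 3, 3, 4, 2, 3]
austria_scores = [2, 1, 2, 3, 1, 2]
f_stat = one_way_anova(us_scores, austria_scores)
```

With only two groups, this F equals the square of the two-sample t statistic; a feature is flagged as significantly different between the two countries when F exceeds the critical value at the chosen significance level.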

Results
While the results of the ANOVA test do not show a significant difference in the
cultural dimensions of the US and Austrian websites, there are several subsets in these
categories that do appear to be significantly different. The differences in these subsets lie in
parallel with Hofstede’s cultural analysis of both Austria and the US. The results of the
ANOVA are tabulated below.
Individualism-Collectivism
Privacy: When analyzing to what degree a website provided a statement of privacy,
US websites were significantly higher in their propensity to provide a statement of privacy
than were Austrian websites (mean: US 3.16 and Austria 1.90; F = 18.080, Sig. = .000).

Table I: ANOVA TEST RESULTS (* denotes significance)

Category / Feature        Mean (US)  Mean (Austria)  Sig.

Collectivism
  Community Relations       2.26       2.24          0.96
  Clubs                     0.77       1.1           0.35
  News Letter               2.03       2.24          0.579
  Family Theme              1.9        1.9           0.984
  Symbols                   1.42       1.66          0.435
  Loyalty Program           1.61       1.24          0.198
  Links                     1.45       1.93          0.162

Individualism
  Privacy                   3.16       1.9           0*
  Independence              1.84       1.76          0.676
  Originality               2.23       2.52          0.232
  Personalization           1.06       1.28          0.504

Uncertainty Avoidance
  Customer Service          3.06       3.07          0.981
  Navigation                3.19       3.34          0.528
  Local Stores              2.55       2.72          0.542
  Local Terminology         1.45       1.48          0.892
  Free Trial                0.77       1.52          0.028*
  Testimonial               0.48       0.72          0.309
  Toll Free #               2.45       1.07          0*
  Tradition                 1.58       2.28          0.015*

Power Distance
  Hierarchy Information     1.74       1.69          0.873
  Pictures of VIP           1.58       1.55          0.936
  Awards                    1.71       1.97          0.406
  Vision                    1.57       1.72          0.555
  Pride of Ownership        2.03       2.52          0.048*
  Titles                    1.84       1.69          0.597

Masculinity
  Adventure Theme           1.52       2.07          0.194
  Realism Theme             2.68       2.9           0.283
  Effectiveness             2.87       2.9           0.891
  Gender Roles              0.77       1.31          0.06*

Low Context
  Rank of Position          2.16       2.31          0.554
  Hard Sell                 1.84       1.9           0.822
  Comparatives              1.13       0.86          0.31
  Superlatives              1.97       1.86          0.695
  Terms                     2.58       2.14          0.062

High Context
  Politeness                0.68       1.1           0.029*
  Soft Sell                 1.52       1.24          0.312
  Images                    2.39       2.69          0.228

Uncertainty Avoidance
Free Trial: On the sub-dimension Free Trial, Austrian websites were significantly
higher in their tendency to provide free trials (mean: US .77 and Austria 1.52; F = 5.069, Sig.
= .028).
Toll Free #: When analyzing the degree to which websites provided a toll free phone
number, US websites were significantly higher in their trend to provide toll free phone
numbers (mean: US 2.45 and Austria 1.07; F = 28.644, Sig. = .000).
Tradition: The degree to which the sub-dimension tradition was shown on websites
was significantly higher for Austria than for the US (mean: US 1.58 and Austria 2.14; F =
6.237, Sig. = .015).
Power Distance
Pride of Ownership: When analyzing the sub-dimension Pride of Ownership Austrian
websites were significantly higher in tendency to show pride of ownership on websites
(mean: US 2.03 and Austria 2.52; F = 4.083, Sig. = .048).
Masculinity
Gender Roles: When analyzing the extent to which gender roles were portrayed on
websites, it was found that Austrian websites displayed gender roles at a significantly higher
rate than did US websites (mean: US .77 and Austria 1.31; F = 3.694, Sig. = .060).
High and Low Context
Politeness: Austrian websites used polite language more frequently than did US
websites (mean: US .68 and Austria 1.10; F = 5.013, Sig. = .029)

V. CONCLUSION

The United States is a very individualistic society. When considering the cultural
dimension of Individualism, the US ranks higher than any other country that Hofstede (2003)
analyzed. Personal freedom and achievement are underlying themes in most US cultural
expressions. A major component of personal freedom is privacy. It is no wonder, then, that
US websites provided privacy statements much more often and more clearly than did Austrian
websites. Comparatively speaking, Austria is much higher on Uncertainty Avoidance than is
the United States. This means that Austrians are more apt to get involved in situations which
have been clearly defined. It makes sense that Austrian websites scored higher on both the
sub-dimension of Free Trials and the sub-dimension of Tradition Theme. A free trial allows
uncertain users the chance to experience an offering before its purchase, therefore reducing
uncertainty. Incorporating tradition into web design reduces uncertainty by offering products
and services in familiar and traditional settings. When studying the cultural dimension of
Masculinity, Austria (79) scores much higher than the United States (62), according to
Hofstede (2003). This suggests that Austria is a culture that can be described as masculine,
while the United States is only somewhat masculine. Austrian websites tend to exhibit
distinct gender roles; this is most likely because a major component of displaying
masculinity on a website is through the use of gender roles.

Implications
In observing websites from both of these countries, it is obvious that there is a clear
distinction between U.S. and Austrian websites. However, although there is a clear distinction
between website designs in the two countries, there seems to be a lack of conformity
regarding website design within either country. While it is not necessarily beneficial to create
uniformity in site structure, it may prove beneficial for companies to display their offerings in
a way that local consumers find comfortable. There is evidence in this study that
websites are becoming more sensitive and adapting to the needs of their local customers. This
can be seen when observing the differences between the United States and Austria in their
cultural dimensional subsets. The sub-dimensions free trial, tradition, pride of ownership,
gender roles and politeness were all in conformity with Hofstede’s rankings of Austria and the
United States. This study suggests that as website construction evolves, this trend of
cultural adaptation will continue.

When assessing the methodology of this study, two possible sources of error emerge.
The first involves the sample population used. Five industries were
selected for this study, based upon their availability on the World Wide Web. In retrospect, it
seems that each of these industries displayed certain values which were intrinsic to that
specific industry. These industry-specific values were incorporated into the evaluation
process and could have skewed the statistical results. The second source of error
involved the use of two evaluators, one native to the United States and one native to
Germany. It is possible that, while evaluation techniques were the same, there were
subtle measurement differences due to cultural bias. In conclusion, this study finds that
while there appears to be a trend toward cultural adaptation in both Austria and the United
States, there remains a deficit in offerings that intimately reflect the culture in which they are
offered. Further research in industry-specific areas should be helpful in generalizing cultural
differences in websites, without the conflicts that arise when assessing multiple industries.

REFERENCES

Bin, Qiu, Shu-Jen Chen & Shao Qin Sun. “Cultural differences in e-commerce: A
comparison between the U.S. and China”. Journal of Global Information Management.
2003. Vol. 11, Iss. 2, pg. 48, 8 pgs.
Bush, Victoria D. & Thomas Ingram. “Adapting to diverse customers: A training
matrix for international marketers”. International Marketing Management. 1996. Vol. 25, Iss.
5, pg. 373, 11 pgs.
Bush, Victoria D. et al. “Managing culturally diverse buyer-seller relationships: The
role of intercultural disposition and adaptive selling in developing intercultural
communication competence”. Academy of Marketing Science. 2001. Vol. 29, Iss. 4, pg. 391,
14 pgs.
Darling, John R. & Raymond E. Taylor. “Changing attitudes of consumers towards
the products and associated marketing practices of selected European countries versus the
USA, 1975-95”. European Business Review. Bradford: 1996. Vol. 96, Iss. 3, pg. 13.
Hofstede, Geert. Cultures and Organizations: Software of the Mind. London:
McGraw-Hill, 1991.
Hofstede, Geert. Culture’s Consequences: Comparing Values, Behaviors,
Institutions and Organizations across Nations, 2nd ed. Thousand Oaks, CA: Sage, 2001.
Hofstede, Geert. Geert Hofstede’s Cultural Dimensions. 2003. Retrieved October 10,
2005, from http://www.geert-hofstede.com
Junglas, Iris A. & Richard T. Watson. “National Culture and Electronic Commerce”.
E-Service Journal. 2004. Vol. 3, Iss. 2, pg. 3, 32 pgs.
Litvin, Stephen W. & Goh Hwai Kar. “Individualism/collectivism as a moderating
factor to the self-image congruity concept”. Journal of Vacation Marketing. London: Dec
2003. Vol. 10, Iss. 1, pg. 23, 10 pgs.
Mayo, Michel A. “Ethical Problems Encountered By U.S. Small Businesses In
International Marketing”. Journal of Small Business Management. 1991. pg. 51.
McDonald, William J. “Developing international direct marketing strategies with a
consumer decision-making content analysis”. Journal of Direct Marketing. 1994. Vol. 8, Iss.
4, pg. 18, 10 pgs.

719
VALERO ENERGY CORPORATION AND RISING GAS PRICES

Amber Stanush, University of Texas at San Antonio
amberstanush@satx.rr.com

Courtney Syfert, University of Texas at San Antonio
shaysyfert@yahoo.com

ABSTRACT

This public relations case study takes a closer look at Valero Energy Corporation and
rising gas prices. Valero Energy Corporation, based in San Antonio, Texas, is the largest
refiner in North America, refining approximately 3.3 million barrels a day at refineries
located throughout the Western Hemisphere. The case study examines Valero's history and
the controversy over gas prices. Many residents of San Antonio believe that it is Valero that
is increasing gas prices. The purpose of this case study is to inform the target publics of the
truth behind Valero and the rising gas prices.

I. INTRODUCTION

Valero is concerned about its reputation due to a rise in negative publicity and
desires to inform its publics about the company's values. Valero would like to improve and
maintain local community morale and its positive reputation. The company would also
like to inform the public about the price of gas and circulate the truth concerning the
corporation's profitability during this time. Valero Energy Corporation was founded in 1980
as the corporate successor to LoVaca Gathering Company, a natural gas gathering subsidiary
of the Coastal States Gas Corporation. Valero is a Fortune 500 company based in San
Antonio with approximately 22,000 employees and assets valued at $33 billion (Valero
Energy Corporation, 2005). The corporation is a premier refining and marketing business
that leads in shareholder value growth through innovative, efficient upgrading of low-cost
feedstocks into high-value, high-quality products. Valero also maintains an in-house public
relations office. Mary Rose Brown, Senior Vice-President of Corporate Communication,
works directly with Joanna Weidman, Director of Corporate Communication, along with
other individuals who work hard to maintain Valero's relationship with the public and media.
At present, the public views Valero negatively, and consumer confidence in the company is
diminishing. Valero is concerned about the potential loss of revenue and reputation.

The main reason for conducting the campaign is to reassure the community of
Valero's commitment to creating, supporting and maintaining high standards of excellence.

Valero’s Target Publics


Valero employees, the local community of San Antonio, Valero consumers, non-Valero
consumers, and the neighboring communities are the main target publics. Another important
target public is the local media, including radio stations, TV stations and newspapers.

II. BACKGROUND

Valero is the largest independent refining corporation in North America. It has an
extensive refining system with a throughput capacity of approximately 3.3 million barrels per
day, up from 900,000 barrels per day in 2001. As described on Valero's website, the
company's geographically diverse refining network stretches from Canada to the U.S.
Gulf Coast and West Coast to the Caribbean. In combination with its interest in Valero L.P.,
Valero has 9,150 miles of pipeline, 94 terminal facilities and four crude oil storage facilities
(Valero Energy Corporation, 2005). Many of these terminal and oil storage facilities
complement Valero's refining and marketing assets in the U.S. Southwest and Mid-Continent
regions. According to Valero's website, as a marketing leader, Valero has approximately
4,700 retail sites branded as Valero, Diamond Shamrock, Ultramar, Beacon and Total. The
company markets on a retail and wholesale basis through a bulk and rack marketing network
in 42 U.S. states, Canada, Latin America and the Caribbean. Valero has long been
recognized throughout the industry as a leader in the production of premium, environmentally
clean products, such as reformulated gasoline (Valero Energy Corporation, 2005).

Since its beginning, Valero's commitment to its employees, environment and
communities not only has made it a better corporate citizen, but also a superior refiner.
Valero’s corporate record goes beyond its leadership in producing clean-burning fuels. It
reflects a vision from management and commitment by employees to set standards in every
area of business. “As a result, the company has been recognized as a top-performing public
company, honored as an industry leader, ranked as a top employer, and lauded for its
commitment to community service.” (Valero Energy Corporation, 2005) Gasoline prices are
affected by a wide variety of factors, ranging from the supply and demand balance of refined
products and the price of crude oil to government regulations and taxes – the vast majority of
which are out of Valero’s control.

Valero has added 380,000 barrels per day (BPD) of refining capacity since 1997; this
is the equivalent of building at least three world-scale refineries. It has also announced the
addition of another 400,000 BPD of refining capacity over the next five years at a cost of $5
billion. In addition, Valero has purchased many unreliable plants and invested in them so that
they now run reliably at expanded rates. Because the company has been running all of its
plants at maximum rates, there is little more that Valero can do to positively impact gasoline
prices.
The current tight supply and demand picture is due to strong demand from a booming global
economy and the reduced volumes of refined products that can be produced from each barrel
of crude oil due to the cleaner gasoline specifications in the U.S. today. This should come as
no surprise to anyone. In fact, Valero pointed out these challenges to the U.S. House of
Representatives Health & Environment Subcommittee in sworn testimony four-and-a-half
years ago.

Before Hurricanes Katrina and Rita, U.S. refineries were operating at very high rates
to keep up with market demand. Despite these high operation rates, however, inventories
were already low due to strong demand from a booming global economy and the reduced
volumes of refined products that can be produced from each barrel of crude oil. The
hurricanes then dealt a devastating blow to the Gulf Coast's refining infrastructure,
intensifying an already tight market by knocking out almost 30% of U.S. refining capacity
following Hurricane Rita.
As a result, gasoline prices – which are based on a freely negotiated spot market –
dramatically increased. During the aftermath of these back-to-back hurricanes, Valero chose
not to pass along the full amount of these increases to consumers and its branded jobbers. In
fact, following Hurricane Rita, the company's retail prices in some areas of the country were
$1 per gallon below its cost to replace the gallons Valero was selling in those markets. As a
result, Valero lost $27 million in branded wholesale business and only made $5 million from
its network of more than 1,000 U.S. retail stores during the third quarter (Weidman, 2005).
Valero helped contribute to the decline in gasoline prices by investing significant resources to
restart its impacted refineries in record time, which helped to ease the supply shortages. It
was a monumental effort to provide housing, food, supplies and additional workers from
other Valero refineries immediately after the storms. Despite personal losses, the employees
returned to work immediately and worked around the clock to restore power and repair and
restart the refineries. As a result of these considerable efforts, Valero’s refineries and retail
stores were back online more quickly than neighboring facilities, providing much-needed
fuel to consumers during this difficult time (Weidman, 2005).

III. RESEARCH

Research will be established through client records such as a Valero profile.
Published materials such as news articles will be evaluated. Consumer feedback will be
gained via telephone surveys, which will create two-way communication. Mail surveys
and gas credit card stub questionnaires will be implemented as well. An organized group
such as Valero's Board of Directors will be contacted. A website, created with a link to
Valero's website and enabling two-way communication through a question-and-answer
forum, will be established. In addition, focus group sessions from each target public will be
conducted. Phone surveys will assist in gathering quantitative data. In addition, a content
analysis of media coverage will be conducted.

IV. OBJECTIVES

The following are the impact and output objectives of the campaign for the next year.

Impact Objectives
Valero will set the following impact objectives for its employees:
1. To inform 100% of employees of Valero’s current position on the negative publicity
and its vulnerable reputation in three months.
2. To spread the word about improvements taking place at Valero by 80% within one
year.
3. To increase favorable opinions among employees by 60% within one year.
For Valero’s consumers, the objectives will be:
1. To inform 45% of local consumers of Valero’s current position on the negative
publicity and its vulnerable reputation in three months.
2. To spread the word about improvements taking place at Valero by 50% within one
year.
3. To increase favorable opinions among the local community of consumers by 45%
within one year.
Valero’s local community – non-consumers’ objectives will be:
1. To inform 40% of local non-consumers of Valero’s current position on the negative
publicity and its vulnerable reputation in three months.
2. To spread the word about improvements taking place at Valero by 50% within one
year.
3. To increase favorable opinions among the local community of non-consumers by 30%
within one year.

Valero’s neighboring communities’ objectives will be:
1. To inform 35% of neighboring communities of Valero’s current position on the
negative publicity and its vulnerable reputation in three months.
2. To spread the word about improvements currently taking place at Valero by 50%
within one year.
3. To increase favorable opinions among neighboring communities by 35% within one
year.
Finally, Valero’s media’s objectives will be:
1. To inform 75% of the media of Valero’s current position on this issue in three
months.
2. To spread the word about improvements currently taking place at Valero by 50%
within one year.
3. To increase favorable opinions among the media by 40% within one year.

Output Objectives
Valero will also set the following output objectives:
1. To send press releases to major local and neighboring community media outlets.
2. To establish a cohesive relationship with 50% of local media within one year.
3. To send out postcards to everyone in San Antonio with fast facts about Valero and the
increase in gas prices.
4. To gain media coverage with all local media sources within six months.
5. To gain a favorable attitude among 35% of local media within one year.

Employees are an important part of the campaign. If they are not properly informed,
this may contribute to apathetic feelings about the organization. Thus, memos will be sent to
keep them updated and get them more involved.

V. PLANNING AND EXECUTION

Valero will use the public information model. The model emphasizes the use of
truthful messages to all concerned publics. Valero will implement a proactive program to
avoid any potential problem by making necessary adjustments in the organization to
overcome negative media attention. The campaign slogan will be, “The drive to take you
there & the energy to keep you here” and the theme will be, “The Energy to Inform”. In
addition, the main message will be, “This local energy company puts the interest of the locals
first.”

Special Events
Valero plans to arrange unique special events. At each special event, the media
should not only be contacted and invited, but also treated in a VIP style. As positive
relationships develop with local media outlets, Valero can use this clout to promote itself in
the future. First, Valero believes that a speech by Bill Greehey, the CEO of Valero Energy
Corporation, would be a great way to jump-start the campaign. Following Mr. Greehey’s
speech, Valero would create an event that would be set up like a mock refinery with people
stationed at numerous different locations to provide information and answers to questions. At
this event, Valero will give away free gas cards, finger foods and beverages. To add some
excitement, Valero thinks it would be appropriate to have go-kart races for children of all
ages. In this controlled event, the children who win each race would have their pick from
anything in the “mock” Valero gas station. This gas station would be set up like all Valero
gas stations with candy, gum, drinks, etc. Valero would also hold gas-pumping relay races
for adults. These races would use original manual pumps rather than today's modern pumps.
The winner of the relay races would also have his or her choice of
anything in the “mock” Valero gas station. However, the adults would also be able to win
prizes such as Lotto tickets and additional gas cards.

Valero would also invite visitors on a tour of its buildings. In this event, visitors will
learn such useful information as how the company is run and the extent of the hard work put
forth by all Valero employees. Valero would also display the numerous awards the company
has obtained in its years of operation.

Media
Media coverage can enhance or damage an organization's image. In this case, Valero should
use uncontrolled media to its advantage as often as possible. By sending news releases and
media kits, local media will have the opportunity to cast Valero in a positive light
through articles and feature stories. Valero plans to create an enduring and positive
relationship with the local media throughout the year and keep them informed about its
activities. It is imperative that a letter be sent to the editor of S.A. Life of the Express-News
to create a positive relationship. In addition, Valero would compose a feature story
about the company's history, community work, statistics and the refinery process. Valero
feels it is important to keep the community as up-to-date as possible. Thus, the company will
post its events in local print media within community calendars, repeating the postings one
month, one week and one day prior to each event.

It is important to keep people as informed as possible with factual information. An
e-mailed memo will be sent to all Valero employees, and postings will be available on Valero
websites to inform them of the upcoming events. In addition, it is important to again
emphasize informing the community with truthful information; to that end, a seminar would
be held at the Valero offices. Sending a mass mailing of informational brochures to all the
local and surrounding communities will aid in Valero's endeavor as well.

Effective Communication
Valero will stress the use of effective communication with all target publics. Thus,
the use of credible and updated information, verbal and nonverbal cues, opinion leaders, two-
way communication and audience participation and feedback will be essential to the
campaign implementation.

VI. EVALUATION

Due to the large number of Valero employees and their diverse locations, the most
effective way to measure the awareness level of the employees is through an extensive survey
distributed to all of them via Valero’s Intranet. The survey will also include questions
regarding the current improvements at Valero and the employees’ opinions of the company.
A self-addressed survey will be distributed to consumers along with their gas card statements.
This survey will assess the consumers’ opinions of Valero and their knowledge of the
improvements.

To evaluate non-consumers and neighboring community individuals, a phone or mail
survey will be conducted. Questions will include: how informed is the individual of
Valero's current position on the negative publicity, how informed is the individual on the
improvements currently taking place at Valero, and what is the individual's opinion of
Valero. Furthermore, an evaluation of the number of website hits before, during and after the
campaign will be tabulated. Media coverage will be measured by counting the number of
stories run about Valero and its events in local print and broadcast media. The content and
length of the stories will also be evaluated.

VII. CONCLUSION

This campaign has both strengths and weaknesses. A major strength is that Valero is
dealing with an active audience that is seeking out information on the issues at hand; gas
prices, for example, affect all individuals. A major weakness is that San Antonio is a large
city, and it is difficult to reach every targeted public successfully. It is also difficult to change
pre-existing attitudes. Depending on how the controversy develops, the direction taken by
Valero could change during the implementation of the campaign. Valero should maintain a
proactive role in keeping its employees and the community informed.

REFERENCES

Valero Energy Corporation, (2005). Retrieved Nov. 15, 2005, from Valero Energy
Corporation Web site: http://www.valero.com/.
Valero Energy Corporation, (2005). Retrieved Nov. 14, 2005, from Valero Energy
Corporation Web site: http://www.valero.com/About+Valero.
Weidman, Joanna. Personal interview. 16 Nov 2005.
