
BUSINESS RESEARCH METHOD NOTES

By
Muathe SMA
Business Administration Department
School of Business

2007
ACKNOWLEDGEMENTS

I take this opportunity first to thank God, the creator of the universe and all humanity, for
bringing our life's paths together. Next, I am grateful to Kenyatta University (KU) for giving
me the opportunity to use the School of Business computer lab when I was developing these
notes. I am also indebted to The Kenya Institute of Management (KIM) for allowing me to use
their computer lab and to learn more about research. My special Hongera goes to Charity for
provoking me to take up a career in research even when I doubted myself. Thanks for seeing
the potential in me.

I am indebted to my co-consultant colleague Prof. Peter Kibas for mentoring me and enabling
me to make a giant step in my career in research, training, and project monitoring and
evaluation, or, in summary, consultancy. I cannot forget my son Muathe Junior and my
daughter Faith, with their many questions about why I was away when they needed my
company, not knowing I was in the serious process of developing these notes. I also thank all
my students at both KU and KIM, specifically Ngugi Nkuru F. (KU), Misiani P. (KU) and
Kaguri J. (KIM), as well as my classmates at Master's level, Mugalla Khamati (KIM),
Muthoka R. (KIM), Tinega G. (KIM), Adam G. (St. Paul's University, Limuru), Kirugi J.R.
(St. Paul's University, Limuru) and many others.

Last but not least, I thank my colleagues in my PhD class, especially Evans Mwiti, for their
encouragement.

As I finish writing these notes, I confess that I will definitely be a good ambassador and
advocate of research as a better model for making sound decisions, whether social or business.

Muathe SMA 2007 2


TABLE OF CONTENTS
ACKNOWLEDGEMENTS
OVERVIEW OF RESEARCH METHODS
   Definition of Research
   Science and the Scientific Approach
   Scientific Methods
   Qualities of a Good Research
   Importance of Knowing Research Methods
   Limitations of Research Methods
RESEARCH PROPOSAL
   Types of Proposal
   Components of a Proposal
   Format of a Proposal and a Project
   a) Proposal Format
   b) Project Format
CHAPTER ONE: INTRODUCTION
   1.1 Background of the Study
   1.2 Statement of the Problem
   1.3 Objectives of the Study
   1.3.1 General Objective
   1.3.2 Specific Objectives
   1.4 Research Questions
   1.5 Importance/Significance of the Study
   1.6 Limitations of the Study
   1.7 Scope of the Study
   1.8 Framework of the Study
CHAPTER TWO: LITERATURE REVIEW
   2.1 Definition of Literature Review
   2.2 Steps in Carrying Out a Literature Review
   2.3 Sources of Literature
   2.4 Tips on Good Reviewing of Literature
   2.5 Evaluation of a Literature Review
   2.6 The Scope of a Literature Review
   2.7 Purpose of a Literature Review
   2.8 Types of Referencing
   2.8.1 Referencing within Text: APA Styles of Citation
   2.8.2 End-of-Text References or Bibliography
CHAPTER THREE: RESEARCH METHODOLOGY
   3.1 Introduction
   3.2 Types of Research
   3.2.1 Basic vs Applied Research
   3.2.2 "Evaluation" Research
   3.2.3 "Performance-Monitoring" Research
   3.2.4 Qualitative and Quantitative Research
   3.2.5 Descriptive Research/Survey Research
   3.2.6 Historical Research
   3.2.7 Developmental Studies
   3.2.8 Analytical Research
   3.2.9 Conceptual vs Empirical Research
   3.3 Research Design
   3.3.1 Exploratory Research Design
   3.3.2 Conclusive Research Design
   3.3.3 Case Study
   3.4 The Validity and Reliability of a Questionnaire
   3.4.1 Reliability
   3.4.2 Validity
   3.5 Measurement Concepts
   3.6 Population and Sample
   3.7 Sampling Technique
   3.8 Selecting Your Sample
   3.9 Data Collection Methods and Procedure
   3.9.1 Data Collection Methods
   3.9.2 Research Procedure
   3.10 Data Analysis Methods
CHAPTER FOUR: DATA ANALYSIS AND PRESENTATION OF RESULTS
   4.1 Introduction
   4.2 Analysis of the Response Rate
   4.3 Analysis of the Background Information
   4.4 Quantitative Analysis
   4.5 Qualitative Analysis
   4.6 Chapter Summary
CHAPTER FIVE: SUMMARY, CONCLUSIONS AND RECOMMENDATIONS
   5.1 Introduction
   5.2 Summary of Findings
   5.3 Answers to Research Questions/Discussion
   5.4 Conclusions
   5.5 Recommendations
REFERENCES



OVERVIEW OF RESEARCH METHODS

Definition of Research

a) The word "research" is derived from the Latin word meaning "to know."
b) Research is about answering questions, such as:

 What do I want to know?
 How do I want to gain knowledge?
 Why do I want to know it?

c) Research as a collection of methods, tools, and techniques forms the basis
for most research texts and courses in research; its methods are what set it
apart from other ways of acquiring knowledge.
d) Research means looking again. To research means to take another, more
careful look and to find out more.
e) Technical definitions of research
• Research is systematic enquiry aimed at providing information to solve problems;
research can also be defined as systematic enquiry that provides information to guide
business decisions.
• Business research is the systematic and objective process of gathering, recording, and
analyzing data to investigate specific problems as an aid in making business decisions
(Sekaran 2002; Zikmund 2003, p. 6).
• In marketing, research is defined as the systematic and objective process of identifying
and collecting data, analysing it, and disseminating the resulting information to provide
solutions to marketing/business problems.

Science and the Scientific Approach

Various ways of knowing (fixing beliefs)

a) Method of tenacity:
 Knowledge is based on beliefs
 Holds on to the “truth” of what has always been known to be true even when
there are conflicting facts
 Example: Color/racial/ethnic superiority

b) Method of Authority:
 Knowledge is based on the authority of the source
 Unsound under certain conditions - the authority could be wrong.

c) Method of intuition:
 Knowledge is based on reason rather than experience
 The main question, however, is: “Whose intuition?”
 Example: to be a good accountant you need to be good at mathematics

d) Method of Science:
 The method of science goes beyond our independent (personal) opinions. It is
based on the scientific approach
 There is a need to understand the language and approach of science and
research



 What is science?

• Stereotypes:
 White coat-stethoscope laboratory stereotype
 Brilliant individuals who think and spin complex theories
 Related to engineering and technology

• Characteristic features of science:
 It is concerned with propositions that can be systematically tested and
subjected to public inspection
 It is not concerned with metaphysical explanations, i.e. what cannot
be tested; hence it enhances greater confidence in the findings
 It aims at objectivity

• The basic aim of science is theory, i.e. the explanation of observed phenomena
 Reinforcement theory
 Entrepreneurship theories
 Buyer behaviour theories
• Theory predicts

 Scientific Research:
• Is a systematic, controlled, empirical and critical investigation of
natural phenomena, guided by theory and hypotheses about the
presumed relations among such phenomena.
• Is the activity of investigating some phenomena through practices
that are consistent with the method of science. Scientific
verification of beliefs about real-world phenomena always involves
empirical research. Empirical evidence is something that you can
observe or experience. It is based on the belief that all knowledge
must originate from experience.

Experience-based knowledge is knowledge based on the five senses. Scientific research
therefore implies empirical research that deals with facts having objective reality. How do we
undertake empirical research? It has to be within the aims of science. The aims of science are to:
a) Describe the phenomena: description is applied when one wants to specify the
current state of some system, e.g. we can specify the characteristics of a
population using demographic data
b) Explain the phenomena: this entails showing how a system operates. This
approach could be used to show the relationship between income and level of expenditure
c) Predict the phenomena: prediction is applied when one wants to make probability
statements about the future state of a system. For example, we can predict
the impact of a reduction in income on the purchase of goods.

 Steps in the Scientific Approach:
• Awareness of the problem situation: curiosity as to why something is as
it is. Example: Why are some people more entrepreneurial than others?
• Problem definition: the problem should be defined clearly. Example: What factors
influence entrepreneurship behaviour?
• Development of alternative solutions to the problem (hypotheses)
• Thorough investigation of the alternatives
• Analysis of the results
• Revision of the ideas to reflect the findings



 Justification of scientific research
Why can’t we use common sense instead of research to make business decisions?
a) Common sense limits people to the familiar. When a person’s sources of
information are limited to friends, family members, teachers and the mass media,
that person is limited to what is familiar to those people, and that person’s ideas
can be limited by an unconscious ideology. On the other hand, undertaking scientific
research takes you beyond parochial cultural bounds, because scientific
research entails challenging accepted beliefs by submitting them to scrutiny
through demanding standards.
b) Common sense beliefs are not scrutinized to establish accuracy of those beliefs
and their validity whereas scientific research challenges old beliefs, creates new
ones and then turns the challenges upon the new beliefs.
c) Common sense can result in co-existing contradictory beliefs, e.g. we say
“birds of a feather flock together” but also “opposites attract”. Literally, the first
means people with the same ideas will tend to stay together. Scientific research
expands perception by formulating problems and solutions which go beyond
common sense.
d) Common sense does not formulate theoretical or methodological frameworks, since it
takes its axioms and methods as given, whereas scientific research formulates
theory and methods before reaching conclusions.
In conclusion, it is important to note that avoiding research may perpetuate useless or
incomplete ways of thinking about problems.

Scientific Methods

[Figure: the scientific method as a cycle: real-world facts → theories → predictions about
the real world → back to the facts. Source: Author, 2007]

The scientific method begins with facts, progresses to theories and predictions, and then
returns to the facts; it forms a circle.
a) If from the observed facts we form a theory consistent with those facts, the
process is called induction, which forms the first step in the scientific method
b) When we ask what the consequences of the theory are, this is called deduction,
which comprises the second step in the scientific method
c) The act of collecting new facts, which permits you to decide whether the theories
are to be supported or refuted, is called verification, the third step in the
scientific method
Therefore, the researcher should make observations of real-world phenomena, formulate
explanations of such facts, generate predictions about the real-world phenomena using the
previously formulated explanations, and verify these predictions through systematic,
controlled experimentation or observation. Hence scientific research entails the use of
scientific methods.

Characteristics of Scientific Methods

a) They rely on empirical evidence
b) They utilize relevant concepts
c) They are committed to objective consideration
d) They adhere to ethical standards
e) They result in probabilistic predictions
f) Their methodology is made known to all concerned
g) They aim at formulating the most general axioms, or what can be termed
scientific theories

Difficulties of applying the scientific method in social/business research:
 Great complexity of the subject
 Difficulty in obtaining accurate measurements
 The process of measurement may influence the results
 Difficulty in using experiments to test hypotheses
 Accurate prediction is difficult
 Subjectivity of the investigator

Qualities of a Good Research

• Clear definition of the research purpose
• Consistency of focus throughout the research process
• The research process should be detailed
• The limitations of the research should be revealed
• High ethical standards must be maintained, e.g. it is unethical to interview children
without their parents’ permission, to collect intelligence data for another
company, to make recommendations beyond your study, or to publish the report
if the source of the data says you should not publish it
• Adequate analysis of the decision makers’ needs
• Findings should be logically analyzed
• Findings should be presented unambiguously
• You should have an abstract
• Conclusions should be justified and based on your research findings, not your own
thoughts

Importance of Knowing Research Methods

• Increased effectiveness in solving problems in an analytical context, because you will
have data which you can analyze
• Improved ability to understand and effectively apply the findings of research
• Enhanced ability to assess claims made by others as to the merit of new
practices, tools and equipment
• Increased capacity to evaluate the soundness of theories dealing with various
organizational behaviour phenomena

Limitations of Research Methods

• Research is a costly undertaking
• The research process takes a lot of time
• We have very few professional researchers



RESEARCH PROPOSAL
• This is a work plan, an outline, a draft plan, a statement of intent or a prospectus. A
proposal tells us what, why, how, where and by whom a study will be done. It must also
show the benefit of doing the study
• A research proposal is essentially a road map showing clearly the location from which
the journey begins, the destination to be reached and the method to be used
• Every research project must have a sponsor, and the sponsor must understand the project

Types of sponsors
• Students are responsible to the class lecturer, hence the lecturer is the sponsor of that
research. Other sponsors include firms, universities, governments, etc.
A research proposal allows the sponsor to assess the sincerity of the research purpose, the
clarity of the research design, the extent of the research background materials and the
researcher's fitness to undertake the whole research project; the proposal displays one's
understanding of the project. Other important aspects:
1. A proposal serves as a summary of the major decisions in a research process
2. It serves as a written agreement between the researcher and the sponsor. Two aspects to
be agreed on:
• Research scope, e.g. geographical scope (does the research cover the
whole of Kenya or certain provinces?) and time scope, especially when the
research entails time-series data
• Budget, i.e. how much the research is going to cost
3. It serves as a tool for searching for funds

Note: A proposal is important in helping:


• Thinking through the project.
• Resolve any future misunderstandings in case of a commissioned study.

Characteristic of a Good Proposal


a. Clear/Neat
b. Convincing
c. Detailed
d. Reliable
e. Data based
f. Realistic
g. Systematic in focus
h. Time bound

Types of Proposal

Basically we have four main types of proposal


a. Academic proposal, found in academic institutions
b. Proposal for funding, commonly found in NGOs
c. Proposal for business bidding
d. Business proposal/business plan

Components of a Proposal
Title page
i. Declaration
ii. Table of contents
iii. List of tables
iv. List of figures
v. Definition of terms
vi. Acronyms
vii. Abstract
Chapter One
Chapter Two
Chapter Three
Appendices (Questionnaires, Research Budget, Time frame and references)
Note: The above three chapters (1, 2 and 3), plus the preliminaries of a proposal and the
appendices, form a research proposal

Format of a Proposal and a Project

a) Proposal Format
Title page
i. Declaration
ii. Acknowledgement
iii. Dedication
iv. Table of contents
v. List of tables
vi. List of figures
vii. Definition of terms
viii. Acronyms
ix. Abstract

Note: The above pages are numbered with small Roman numerals at the bottom centre of the
page (i.e. i, ii, iii, iv, etc.). However, the title page is never given any number.

b) Project Format
The front matter or preliminary pages of a research project should be paginated
with small Roman numerals at the bottom centre of the page. The pagination should be as
follows:
i. Title page is counted as i, but not paginated
ii. Student’s declaration is paginated as ii
iii. Copyright page is paginated as iii
iv. Acknowledgement is paginated depending on the Copyright
v. Dedication is paginated depending on the acknowledgement
vi. Table of content is paginated depending on the dedication
vii. List of tables is paginated depending on the table of content
viii. List of figures is paginated depending on the list of tables
ix. Definition of terms is paginated depending on the List of figures
x. Acronyms is paginated depending on the Definition of terms
xi Abstract is paginated depending on the Acronyms

Chapter one: Introduction


Background of the Study
Statement of the Problem
Objectives of the study
General Objective
Specific Objectives
Research questions
Importance of the study
Scope and limitation of the study
Framework of the study

Chapter Two: Literature Review


2.1 Introduction



2.2 Review of the theoretical literature
2.3 Critical review
2.4 Summary and research gaps to be filled

Chapter Three: Research Methodology


3.1 Introduction
3.2 Research design
3.3 Target Population
3.4 Sampling Design
3.4.1 Sampling Technique
3.4.2 Sample size
3.5 Data Collection Methods and procedure
3.6 Data Analysis Methods
Note: The above three chapters (1, 2 and 3), plus the preliminaries of a proposal and the
appendices (References, Questionnaire, Research Budget and Time Frame), form a research
proposal

Chapter Four: Data analysis and Presentation of Results


4.1 Introduction
4.2 Analysis of the response rate
4.3 Analysis of the Background Information
4.4 Quantitative Analysis
4.5 Qualitative Analysis
4.6 Summary of Data Analysis

Chapter Five: Summary, Conclusion and Recommendations


5.1 Introduction
5.2 Summary of Findings
5.3 Answers to the Research Questions/Discussion
5.4 Conclusions
5.5 Recommendations

Note: The preliminaries of a project, the three chapters of a proposal (with the tense changed
to the past tense), the above two chapters (4 and 5) and the appendices (References and
Questionnaire) form a research project

Research Process

Topic → Statement of the problem → Objectives → Research questions → Literature
review → Data collection → Data analysis → Making decisions about the data →
Accepting/rejecting hypotheses → Report writing



CHAPTER ONE
INTRODUCTION

1.1 Background of the study


If the problem has a historical background, provide that history. If there is no historical
background, describe the current status of the problem; for example, low output of workers in
an organization has a historical background. If you are dealing with a case study, e.g. Co-op
Bank of Kenya, you have to describe its mission, vision, etc.

1.2 Statement of the problem


A problem is a felt need, a question thrown forward for a solution, or a deviation between
what is known and what is desired to be known; a research problem can also be looked at as
an opportunity.
Conditions necessary to say that there is a problem:
• There must be a decision maker who has the problem, e.g. an organization, a student,
etc.
• There must be at least two unequally efficient courses of action which have chances of
yielding the desired objectives/results
• There must be a state of doubt in the decision maker as to which course of action is best
• There must be an outcome desired by the decision maker
Symptoms that a problem exists in business:
• Performance is presently not meeting objectives
• It is anticipated that performance will not continue to meet objectives
• Objectives for the future are changed, and present operating procedures, if continued,
would not achieve the required goals
Identification of the problem:
• Ask experts in the particular area of business
• Search the literature, e.g. by reading journals in your area of specialization
• Scan the environment to identify areas of dissatisfaction
• Look for current developments and trends
Selecting the problem:
• Go for topics of significance: consider the originality and the academic and practical
value of the problem
• The problem should be within the researcher’s capability: financial resources, time,
availability of data
• Go for topics of interest to you, but do not confuse interest in a problem with an
overwhelming desire for a particular answer
Stating the problem:
• The problem should be carefully stated in written form before proceeding with the
design of the study. The problem statement should identify the key variables, their
possible interrelationships and the nature of the problem of interest, e.g. a broad
statement of purpose: the purpose of the research is to examine the relationship
between dependency on bhang and the rate of recovery of drug users.
1. Population of interest (bhang-dependent drug patients)
2. Independent variable (dependency level)
3. Dependent variable (rate of recovery)
• Alternatively, a research question stated in interrogative form, e.g. What is the
relationship between the dependency level of bhang users and their rate of recovery?
Or: What is the process by which adult children make decisions regarding placement
of their elderly parents in a nursing home?
Note: ensure you state clearly what the problem is.
A problem for investigation can be:
Topic: Change management in organizations for the improvement of organizational
performance, longevity and sustainable competitive advantage

Statement of the problem


The purpose of the study is to establish whether change management in organizations can
lead to improvements in organizational performance, longevity and sustainable competitive
advantage.

1.3 Objectives of the study

1.3.1 General objective


To establish whether change management in organizations can lead to improvements in
organizational performance, longevity and sustainable competitive advantage.
Note: The general objective is usually related to the statement of the problem, and it tells us
what we need to do. Its purpose is to guide the collection of data.

1.3.2 Specific Objectives


Based on the problem indicated above, the objectives of this study can be:
1. To determine the extent of understanding of the concept of change management in
organizations.
2. To identify and rank the key issues that impact change management within
organizations.
3. To compare the key issues and benefits of managing change successfully in
organizations.

Note: The specific objectives are based on the specific factors assumed to influence the
dependent variable.

1.4 Research Questions


Perry (1992, p. 9) suggests that the term ‘research questions’ rather than ‘research hypotheses’
be used for research that takes a qualitative, exploratory approach. Research questions are
questions that the researcher asks himself/herself such that, if answered, the research
problem will be solved. They are related to the specific objectives:
1. What is the extent of understanding of the concept of change management in
organizations?
2. What are the issues that impact change management within organizations?
3. What are the key issues and benefits of managing change successfully in
organizations?
A theory is a scientifically acceptable principle that is useful in the explanation of
phenomena: a set of statements or principles devised to explain a group of facts or
phenomena, especially one that has been repeatedly tested or is widely accepted and
can be used to make predictions about natural phenomena.

What is a hypothesis? A hypothesis is a statement supposed to be true until it is proved false.
It may be based on previous experience or may be derived theoretically. A hypothesis is a
conjectural statement or a tentative proposition about the relationship between two or more
phenomena or variables.

Types of Hypothesis

1. Null hypothesis (Ho): a statement that no difference exists between the
parameter and the statistic being compared to it (e.g. µ = 8)
2. Alternative hypothesis (Ha)

Note: if we do not accept Ho we “fail to reject” it:


• The null hypothesis cannot be proved and therefore cannot be accepted.
• Research gives only a chance to reject Ho/Fail to reject Ho.
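The note above, that we can only reject Ho or fail to reject it, can be sketched with a
one-sample t test in Python. The sample values, the hypothesised mean µ = 8 and the 5%
significance level are assumptions chosen purely for illustration; the critical value 2.365 is
the standard two-tailed t value for 7 degrees of freedom from t tables.

```python
# Hypothetical sketch of testing Ho: µ = 8 against Ha: µ ≠ 8.
# The sample data and 5% significance level are assumptions for illustration.
from math import sqrt
from statistics import mean, stdev

sample = [7.8, 8.4, 8.1, 7.6, 8.9, 8.2, 7.9, 8.3]
mu0 = 8.0  # hypothesised population mean under Ho

n = len(sample)
# One-sample t statistic: (sample mean - mu0) / (s / sqrt(n))
t = (mean(sample) - mu0) / (stdev(sample) / sqrt(n))

# Two-tailed critical value at the 5% level with n - 1 = 7 degrees of
# freedom, taken from standard t tables.
t_critical = 2.365

if abs(t) > t_critical:
    decision = "reject Ho"
else:
    decision = "fail to reject Ho"  # note: never 'accept Ho'
```

With these invented values the computed t (about 1.05) falls below the critical value, so we
fail to reject Ho. This does not prove Ho true; it only means the data give no grounds to
reject it.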

Characteristics of a good hypothesis

• It must be clearly stated and brief
• It must show a relationship between variables
• It must be based on sound theory, previous research or professional experience
• It must be testable
• It must be consistent with common sense/accepted truth
• It must be related to empirical phenomena
• It must be testable within a reasonable time

Purpose of hypotheses in research

• They provide direction; they bridge the gap between the problem and the evidence
needed for its solution
• They sensitize the investigator to aspects of the situation that are relevant to
the problem at hand
• They permit the researcher to understand the problem with greater clarity and to use
the data collected to find a solution to the problem
• They guide the collection of data and provide the structure for its meaningful
interpretation in relation to the problem under investigation
• They form the framework for the ultimate conclusions and solutions
• They help to determine what is to be studied
• They suggest the most appropriate research design

1.5 Importance/Significance of the study

Here you answer the question of why the research is being done; we look at the benefits that
will be derived from the research. Tell us who will benefit from your study and how, e.g. the
organization, policy makers, customers, other organizations, other researchers and students.

1.6 Limitation of the study


• State the limitations of your study, i.e. factors which are likely to hinder you from
achieving your goals, and how you will overcome those limitations, e.g. uncooperative
respondents, suspicion and poor weather.

Muathe SMA 2007 14


1.7 Scope of the study
• State the scope in terms of geographical and population coverage, i.e. are you covering
the whole country, a whole industry or specific parts? What is the scope in terms of
population categories?

1.8 Framework of the study


This is divided into two: the theoretical framework and the conceptual framework. A theory
establishes a cause and effect relationship between variables with the purpose of explaining
and predicting phenomena (Khan and Best, 1979). A theoretical framework is therefore built
on a theoretical structure that can explain the facts and the relationships between the
variables; it sets the conceptual foundation for reliable knowledge, which helps predict and
explain the phenomena of interest. A conceptual framework, on the other hand, explains
graphically or in narrative form the main dimensions being studied, or the presumed
relationships among the variables under study. It is derived from theory to identify the
concepts involved in complex phenomena and show their relationships, and it is very useful
and vital in research studies as it sets the foundation of how the concepts are related.

By use of a conceptual framework a researcher usually models the relationship between the
main variables (Independent and Dependent variables) in the study making it easier for a
reader to understand. This also enables you to map out your chapter two.

Note
1. A variable: something that varies depending on the kind of treatment
2. Dependent variable: the output or response that results from changes to the
independent variable. Generally speaking, the tester has no control over this
3. Independent/predictor variable: the input or stimulus that is manipulated or
observed. It is controlled by the tester
4. Moderating variable: a variable that affects the direction and/or strength of the
relationship between an independent variable and a dependent variable, i.e. it usually
influences the relationship between the two variables.

Below is an example of conceptual framework.

Topic: Challenges facing the Performance of HMOs in Kenya

Independent variables                        Dependent variable

• Government policies
• Cost of policies          --determine-->   Performance of HMOs
• Competition among HMOs

Source: Author (2007)

You are supposed to briefly explain how the above independent variables relate to the
dependent variable.



CHAPTER TWO
LITERATURE REVIEW

2.1 Definition of Literature Review


• It is the systematic process of identification, location and analysis of documents
containing information related to the proposed research problem being
investigated.
• It involves reviewing empirical studies, historical records, government reports,
newspaper accounts, etc. (Note: do not reference newspaper articles reported by
journalists, but rather those where an organization buys space to publish its
own report.)

2.2 Steps in carrying out literature Review


a) Be very familiar with the library before the literature review.
b) Make a list of keywords or phrases to guide your literature search. For example if
the study deals with family conflict, other phrases that could be used to search the
literature are ‘family violence’ or ‘abuse’, ‘family dissolution’ etc.
c) With the keywords and phrases related to the study, one should go to the sources
of literature. Library staff are generally very helpful in offering guidance.
d) Summarize the references on cards for easy organization of the literature.
e) Once collected the literature should be analyzed, organized and reported in an
orderly manner. Such organization, analysis and reporting represents the hardest
part of literature reviews.
f) Make an outline of the main topics or themes in order of presentation. Decide on
the number of headings and sub-headings required, depending on how detailed
the review is.
g) Analyze each reference in terms of the outline made and establish where it will be
most relevant.
h) Studies contrary to received wisdom should not be ignored when reviewing
literature. Such studies should be analyzed with a view to accounting for the
differences of opinion, and possible explanations for the differences should be
given.
i) The literature should be organized in such a way that the more general is covered
first before the researcher narrows down to that which is more specific to the
research problem. Organizing the research in this way leads to testable
hypotheses.
j) Some researchers prefer to have a brief summary of the literature and its
implications. This is however, optional depending on the length of the literature
under review.

2.3. Sources of Literature


Sources of information can be classified into two broad categories:
a) Primary Sources/Literature-the first occurrence of a piece of work, including
published sources such as Government papers and planning documents and
unpublished manuscript sources such as letters, memos and committee minutes.
b) Secondary Sources/Literature- subsequent publication of primary literature such
as books and journals
Examples of Sources of Information
• Scholarly Journals – properly referenced articles contain the name of the author,
year of publication, title of the article, title of the journal, volume number and
page numbers.



• Theses, Dissertations and student Projects
• Government Documents – National Development Plan, Sessional Papers
• Papers presented at Conferences and Workshops
• Books
• References quoted in books
• International Indices – important sources that list theses and dissertations
which have been written in a particular area of specialization eg Dissertation
Abstracts International, in the USA, which lists dissertations with authors,
titles and universities
• Abstracts – Give a list of journal articles and summaries eg Business
Abstracts, Economics abstracts
• Periodicals – journals, magazines or local newspapers which are published
periodically; each library has this section
• The Africana/American section of the library – contains specialized articles
• Reference section of the library
• Grey Literature – anything written but not published eg lecture notes, papers
presented at conferences
• Inter-library loan
• The British lending library
• Computer search through the search engines (e.g. www.google.com)
• Microfilm

2.4. Tips on good Reviewing of Literature

a) Do not conduct a hurried review; you risk overlooking important studies.


b) Do not rely too heavily on secondary sources.
c) Many people concentrate only on findings from journals when reviewing
literature. One should also read about the methodology used and the measurement
of variables
d) It is important to check daily newspapers as they contain very educative, current
information.
e) It is extremely important to copy references correctly in the first place so as to
avoid the frustration of trying to trace a reference later.

2.5. Evaluation of Literature Review

Several writers (Locke et al., 1987; Davitz, 1977; Goetz, 1967; and Fetterman, 1989) have
suggested a number of questions as criteria for evaluating literature review:
i) Is there an introduction paragraph – outlining the organization of the related literature?
ii) Does the order of the headings and sub-headings represent the relative importance of the
topics and sub-topics? Is the order rational and does it represent the Specific
Objectives/Research Questions/Hypotheses?
iii) Is the relation of the proposed study to the past and current research clearly shown in the
summary paragraphs?
iv) What new answers (extension of the body of knowledge) will the proposed research
provide?
v) What is different about the proposed study compared with previous research? Is the gap
clearly stated?
vi). What are the most relevant articles (Not more than five) that bear on this research? Are
they in the first topical heading?



2.6. The Scope of Literature Review
• The scope in terms of width or length of the review depends on the area of study
and the extent to which research has been done.
• For areas that have been studied for a long time, there will be an extensive body
of literature available to be covered
• For areas that are relatively new or less researched, the researcher will have
little literature to assist them and will hence review only what is available and relevant.

2.7. Purpose of Literature Review


a) At the proposal stage, the Literature Review provides a good knowledge of the field of
inquiry: what the facts are, who the eminent scholars are, and their ideas, theories,
questions and hypotheses.
b) The main purpose is to determine what has already been done related to the
research problem being studied and shares with the reader the results of other
studies that closely relate to the study – to avoid unnecessary and unintentional
duplication [Fraenkel and Wallen, 1990].
c) To demonstrate the researcher’s familiarity with the existing body of knowledge
– hence increasing the readers’ confidence in the researcher’s professional ability.
d) It relates the study to the larger on-going dialogue in the literature about a topic,
filling gaps and extending prior studies (Marshall and Rossman, 1989);
e) It provides a framework for establishing the importance of the study, as well as a
benchmark for comparing the results of a study with other findings.
f) It provides the framework within which the research findings are to be interpreted
g) It will reveal what strategies, procedures and measuring instruments have been
found useful in investigating the problem question – this helps one to avoid mistakes
that have been made by others.
h) It may help the researcher limit the research problem and to define it better.
i) It helps to determine new approaches and stimulate new ideas in areas, which
may have been overlooked.
j) Literature review helps the researcher know what new areas for further research
have been suggested by previous studies.
k) Literature Review pulls together, integrates and summarizes what is known in the
area – it analyzes and synthesizes different results, revealing gaps in information
and areas where major questions still remain.

Theoretical framework and Conceptual Frame work

Theoretical Framework

1. The Theoretical Framework plays an important role in a study – it guides the study as well
as strengthening it.
2. The researcher must therefore produce a concept, or build a theoretical structure, that can
explain the facts and the relationships between them; this sets the conceptual foundation for
reliable knowledge, which helps predict and explain phenomena of interest.
3. Definitions of Theory
3.1 A set of concepts or constructs and the interrelations that are assumed to exist among
them; it provides the basis for establishing the hypotheses to be tested in the study.
3.2 A theory establishes a cause and effect relationship between variables with the purpose of
explaining and predicting phenomena (Khan and Best, 1979).
3.3 Theories have at least three levels (Parsons and Shils, 1962):



• Ad hoc classificatory systems – consist of arbitrary categories constructed to
organize and summarize empirical observations.
• Taxonomies – consist of systems of categories constructed to fit the
empirical observations, so that relationships among categories can be
described.
• Conceptual frameworks – descriptive categories are systematically placed
within a broad structure of explicit as well as assumed propositions.
• Theoretical systems – provide a system of propositions that are
interrelated in a way that permits some to be derived from others; they provide
a structure for the explanation of empirical phenomena and are not limited to a
particular aspect.

Conceptual Framework

• Very useful and vital in research studies as it sets the foundation of how
concepts are related.
• It is derived from theory to identify the concepts included in complex
phenomena and show relationships.
• It explains either graphically or in narrative form, the main dimensions
being studied, or the presumed relationships among them.
• Steps in developing a conceptual framework:
1. Identify the major concepts involved in the phenomenon
under study.
2. Identify the more specific sub-concepts related to the major
ones.
3. Discern how the concepts and sub-concepts are related.
4. Choose a visual representation that will best show how
concepts and sub-concepts are related.
5. Write a clear explanation of the visual representation.

2.8. Types of Referencing

2.8.1 Referencing within Text-APA Styles of Citation

Work by one author:


The name of the author and the year of publication are inserted in the text at the appropriate
point:

• Jones (1999) demonstrated that…


• A recent study of conformity (Jones, 1999) demonstrated that…
• In 1999, Jones found that…

If you use the same study later in the paragraph, you don’t need to include the year of
publication again.

One work by multiple authors:

• When a work has two authors, always cite both names every time the reference is in
the text
• When a work has three to five authors, cite all authors the first time the reference is in
the text; use the first author followed by et al. in all subsequent uses.
• When a work has six or more authors, cite only the last name of the first author and
follow it with et al. The format is the same as with “work by one author”



• Join names in running text with the word and. In tables, captions, parenthetical
material, and in the reference list, use the ampersand symbol (&).
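
As a rough illustration, the citation rules above can be encoded as a small function. This is a simplified sketch of the parenthetical form only (the author names are invented, and real APA usage has further nuances not handled here):

```python
def in_text_citation(surnames, year, first_use=True):
    """Parenthetical APA-style in-text citation (simplified sketch)."""
    n = len(surnames)
    if n == 1:
        names = surnames[0]
    elif n == 2:
        # Two authors: always cite both names
        names = f"{surnames[0]} & {surnames[1]}"
    elif n <= 5 and first_use:
        # Three to five authors: all names the first time only
        names = ", ".join(surnames[:-1]) + f", & {surnames[-1]}"
    else:
        # Six or more authors, or later uses of 3-5 authors
        names = f"{surnames[0]} et al."
    return f"({names}, {year})"

print(in_text_citation(["Jones"], 1999))           # (Jones, 1999)
print(in_text_citation(["Balda", "Kamil", "Pepperberg"], 1990,
                       first_use=False))           # (Balda et al., 1990)
```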

Groups as authors
This rule applies to corporations, associations, government agencies, study groups, etc.
Groups that typically go by an abbreviation (e.g. the American Psychological Association,
APA) are spelled out in the first citation and abbreviated after that. The basic rule is that the
reader should be able to locate the source in the reference section without difficulty.

Works with no author:

• Use the title if it’s an article or chapter. Put the title in double quotation marks and the
year of publication:
(“Study Finds,” 1982)
• Italicize the title if it’s a periodical, book, brochure, or report:
The book College Bound Seniors (1979)

Works with an anonymous author


Cite the word Anonymous followed by the year of publication:(Anonymous, 1987)

Two or More Works within the same parentheses

• Order the citations in the order in which they appear in the reference section. If two
works are by the same author, arrange them by publication date.
• Separate sources with a semicolon, e.g.: Several studies (Balda, 1980; Kamil, 1988;
Pepperberg & Funk, 1990) indicate that…

Specific parts of a source:

• Indicate the page, chapter, figure, table, or equation at appropriate points in the text.
If giving a quotation, always give page numbers. The word “page” is abbreviated “p.”
and the word “chapter” is abbreviated “chap.”

For electronic sources, use the paragraph symbol or the abbreviation “para.” followed by the
corresponding paragraph number

Reference List

Make sure all your references are cited in your text, and all citations in your text are listed in
the reference section!
Order of References:
• Alphabetize by names
• If there are several works by the same author…
• Arrange by year
• One author-entries precede multiple-author entries beginning with the same
surname
• References with the same first author but different subsequent authors are
arranged alphabetically by the second author, or third author if the two first
authors are the same, etc.
• If a work uses an agency as the author, cite the full name of the agency (e.g.
American Psychological Association, not APA)



• If there is no author, the title moves to the author position and the entry is
alphabetized by the first significant word of the title
• If the author is “anonymous,” spell out Anonymous for the author and
alphabetize it as though Anonymous were a true name.
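
The ordering rules above amount to sorting by surname and then by year, which can be sketched as follows (the entries are invented for illustration):

```python
# Hypothetical reference entries: (surname, year, title)
refs = [
    ("Kamil", 1988, "Memory in birds"),
    ("Balda", 1990, "Cache recovery"),
    ("Balda", 1980, "Spatial memory"),
]

# Alphabetize by surname, then by year for works by the same author
ordered = sorted(refs, key=lambda r: (r[0].lower(), r[1]))

for surname, year, title in ordered:
    print(f"{surname} ({year}). {title}.")
# Balda (1980). Spatial memory.
# Balda (1990). Cache recovery.
# Kamil (1988). Memory in birds.
```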

Periodical:
For article titles, capitalize the first letter of the first word and the first letter of the first word after
a semi-colon or colon.

Capitalize significant words in periodical titles. For works with more than seven authors,
abbreviate the seventh author and beyond as et al.

• Author, A.A., Author, B.B., & Author, C.C. (year). Title of article. Title of
Periodical, volume (issue), xxx-xxx.

Book:
For non-periodicals, capitalize only the first word of the title and subtitle. The title should be
italicized.

• Author, A.A., & Author, B.B. (year). Title of book (pp. xxx-xxx). Location:
Publisher.

Book chapter:
For non-periodicals, capitalize only the first word of the title and subtitle. The title should be
italicized.
For book chapters, capitalize only the first word of the title. The editor’s names should not be
inverted (they should read First name Last name). The title of the book is italicized.

• Author, A.A., & Author, B.B. (year). Title of chapter. In A. Editor, B. Editor, & C.
Editor (Eds.), Title of book (pp. xxx-xxx). Location: Publisher.

Online periodical:
For electronic sources, use “retrieved from” if the website you give will take you directly to
the resource cited. Use “available from” if the website you give will take you to information
on how to obtain the cited material. If the information is from a database (such as PsycINFO),
just provide the name of the database.

• Author, A.A., Author, B.B., Author, C.C., (year). Title of article. Title of periodical,
volume (issue), xxx-xxx. Retrieved month day, year, from source.

Online document:
For electronic sources, use “retrieved from” if the website you give will take you directly to
the resource cited. Use “available from” if the website you give will take you to information
on how to obtain the cited material. If the information is from a database (such as PsycINFO),
just provide the name of the database.

• Author, A.A. (year). Title of work. Retrieved month day, year, from source.
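
To see how the reference formats above fit together, here is a simplified sketch that assembles a journal-article reference string. It is illustrative only: the entry is invented, and plain text cannot italicize the periodical title as APA requires:

```python
def apa_article(authors, year, title, journal, volume, issue, pages):
    """Format a journal-article reference in APA style (simplified sketch)."""
    # Join authors as "A. A., & B. B." (ampersand before the last author)
    if len(authors) > 1:
        names = ", ".join(authors[:-1]) + ", & " + authors[-1]
    else:
        names = authors[0]
    return f"{names} ({year}). {title}. {journal}, {volume}({issue}), {pages}."

# Hypothetical entry
print(apa_article(["Jones, A. A.", "Smith, B. B."], 1999,
                  "Conformity in groups", "Journal of Social Research",
                  12, 3, "45-60"))
# Jones, A. A., & Smith, B. B. (1999). Conformity in groups.
# Journal of Social Research, 12(3), 45-60.
```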

Reliable electronic sources: how to make sure that your online source is credible



Notes on…

Publication date: Give in parentheses the year the work was copyrighted. For magazines,
newsletters, and newspapers, give the year followed by the exact date of the publication.
Write “in press” in parentheses for articles that have been accepted for publication but not yet
published. Follow the parentheses with a period.

2.8.2 End of Text References or Bibliography


• References refer only to those sources cited in the study, whereas a Bibliography lists
all the materials that were read, whether they were cited or not.
• Ensure that all cited works in the text are referenced at the end of the proposal or
report.
• Use the appropriate format. The American Psychological Association (APA) format
is the most commonly used by social scientists as compared to other styles.
A further example of a literature review

Topic: Effects of Finance on Rural Development

“In the past decades, some observers have begun to challenge those traditional assumptions
and the policies that surround financial intermediation, especially in rural areas (Adams and
Graham, 1981). Early work by Goldsmith (1969) has documented the growth in financial
activities that occurs with overall growth in an economy. Other works by Gurley and Shaw
(1960) and Patrick (1964) have clarified some of the contributions that finance makes to
development. Shaw’s work was particularly useful in stimulating other to dig more deeply
into new regulations that affect intermediaries. He helped set aside the notion that financial
intermediation was only a thin reel that lightly connected consumers and producers in the
economy and he brought into focus the true nature of firms in the financial sector and the fact
that these firms produced goods and services that were useful...” (Dale W. Adams, 1979)



CHAPTER THREE
RESEARCH METHODOLOGY

3.1 Introduction
Chapter three articulates the methodology for the research. In the previous chapter, the
literature pertaining to the study was reviewed and research gaps identified. This chapter
discusses the criteria for determining the appropriate methodology for a study.

3.2 Types of Research

3.2.1 Basic Vs Applied research


a. Basic or pure/fundamental research is conducted to expand the boundaries of
knowledge itself or to verify the acceptability of a given theory (Zikmund 2004, p.7).
It aims to solve problems of a theoretical nature that have minimal impact on direct
action, performance or policy decisions (Cooper and Schindler 2003). It is found in
institutions of higher learning such as middle-level colleges and universities.
b. Applied research is carried out to solve a current business problem faced by
management in a business setting, needing a timely solution (Sekaran 2000, p.6). It is
action oriented and is found among research scholars, industries, NGOs and
government ministries.

3.2.2 “Evaluation” research


Its objective is the measurement and appraisal of the extent to which a given activity,
project or program has achieved its objectives.

3.2.3 “Performance-monitoring research”


This is a sub-set of evaluation research that regularly provides feedback for the monitoring
and management of a specific business activity (Zikmund 2003, p.10).

3.2.4 Qualitative and quantitative research


Researchers approach inquiry from a particular philosophical stance or world-view, which
determines the purpose, design, and methods used and the interpretation of results
(Blunt 1994). Data can be quantitative (numbers) or qualitative (words), with research using
these two methods being complementary more than competitive (Malhotra 1993; Morgan
1988; Perry 1998). Qualitative research provides insights and understanding, while
quantitative research tries to generalize those insights to a population (Perry 1998).
Qualitative research is a generic term for investigative methodologies described as
ethnographic, naturalistic, anthropological, field, or participant observer research. It
emphasizes the importance of looking at variables in the natural setting in which they are
found. Interaction between variables is important. Detailed data is gathered through open-
ended questions that provide direct quotations. The interviewer is an integral part of the
investigation (Jacob, 1988). This differs from quantitative research, which attempts to gather
data by objective methods to provide information about relations, comparisons, and
predictions and attempts to remove the investigator from the investigation (Smith, 1983).

Advantages

• Produces more in-depth, comprehensive information.


• Uses subjective information and participant observation to describe the context, or
natural setting, of the variables under consideration, as well as the interactions of the
different variables in the context. It seeks a wide understanding of the entire situation



Disadvantages

• The very subjectivity of the inquiry leads to difficulties in establishing the reliability
and validity of the approaches and information
• Its scope is limited due to the in-depth, comprehensive data gathering approaches
required.
• It is very difficult to prevent or detect researcher induced bias

Holistic Description

When conducting qualitative research, the investigator seeks to gain a total or complete
picture. According to Stainback and Stainback (1988), a holistic description of events,
procedures, and philosophies occurring in natural settings is often needed to make accurate
situational decisions. This differs from quantitative research in which selected, pre-defined
variables are studied.

Corroboration

The purpose of corroboration is not to confirm whether people’s perceptions are accurate or
true reflections of a situation but rather to ensure that the research findings accurately reflect
people’s perceptions, whatever they may be. The purpose of corroboration is to help
researchers increase their understanding of the probability that their findings will be seen as
credible or worthy of consideration by others (Stainback and Stainback, 1988).

Maintaining the Validity of Qualitative Research

Be a listener. The subject(s) of qualitative research should provide the majority of the
research input. It is the researcher’s task to properly interpret the responses of the
subject(s).
Record accurately. All records should be maintained in the form of detailed notes or
electronic recordings. These records should also be developed during rather than after the
data gathering session.
Initiate writing early. It is suggested that the researcher make a rough draft of the study
before ever going into the field to collect data. This allows a record to be made when
needed. The researcher is more prepared now to focus the data gathering phase on that
information that will meet the specific identified needs of the project.
Include the primary data in the final report. The inclusion of primary data in the final
report allows the reader to see exactly the basis upon which the researcher’s conclusions
were made. In short, it is better to include too much detail than too little.
Include all data in the final report. The researcher should not leave out pieces of
information from the final report because she/he cannot interpret that data. In these cases,
the reader should be allowed to develop his/her conclusions.
Be candid. The researcher should not spend too much time attempting to keep her/his own
feelings and personal reactions out of the study. If there is relevance in the researcher’s
feelings to the matter at hand, these feelings should be revealed.
Seek feedback. The researcher should allow others to critique the research manuscript
following the developmental process. Professional colleagues and research subjects should
be included in this process to ensure that information is reported accurately and
completely.
Attempt to achieve balance. The researcher should attempt to achieve a balance between
perceived importance and actual importance. Often, the information reveals a difference in
anticipated and real areas of study significance.
Write accurately. Incorrect grammar, misspelled words, statement inconsistency, etc.
jeopardize the validity of an otherwise good study.



Characteristics of Qualitative and Quantitative Research

• Focus of research – Qualitative: quality (nature, essence); Quantitative: quantity (how
much, how many)
• Philosophical roots – Qualitative: phenomenology, symbolic interaction; Quantitative:
positivism, logical empiricism
• Associated phrases – Qualitative: fieldwork, ethnographic, naturalistic, grounded,
subjective; Quantitative: experimental, empirical, statistical
• Goal of investigation – Qualitative: understanding, description, discovery, hypothesis
generating; Quantitative: prediction, control, description, confirmation, hypothesis testing
• Design characteristics – Qualitative: flexible, evolving, emergent; Quantitative:
predetermined, structured
• Setting – Qualitative: natural, familiar; Quantitative: unfamiliar, artificial
• Sample – Qualitative: small, non-random, theoretical; Quantitative: large, random,
representative
• Data collection – Qualitative: researcher as primary instrument, interviews,
observations; Quantitative: inanimate instruments (scales, tests, surveys, questionnaires,
computers)
• Mode of analysis – Qualitative: inductive (by the researcher); Quantitative: deductive
(by statistical methods)
• Findings – Qualitative: comprehensive, holistic, expansive; Quantitative: precise,
narrow, reductionist

3.2.5 Descriptive Research/Survey research


Entails surveys and fact-finding inquiries. Its purpose is to describe the state of affairs as it
exists at that particular time. Survey studies assess the characteristics of whole populations
of people or situations, e.g. Job Analysis (used to gather information to be used in structuring
a training program for a particular job), School Surveys (used to gather data concerned with
the internal or external characteristics of a school system) and Community Surveys (used to
gather data concerned with the internal or external characteristics of a community).

3.2.6 Historical research


Procedures supplementary to observation in which the researcher seeks to test the authenticity
of the reports or observations made by others. Researchers who are interested in reporting
events and/or conditions that occurred in the past employ the historical method. An attempt is
made to establish facts in order to arrive at conclusions concerning past events or predict
future events.

Primary Sources of Information - Direct outcomes of events or the records of eyewitnesses

Secondary Sources of Information - Information provided by a person who did not directly
observe the event, object, or condition



3.2.7 Developmental studies
These are concerned with the existing status and interrelationships of phenomena and the
changes that take place as a function of time, e.g. Growth Studies (either longitudinal or
cross-sectional; the longitudinal technique is the most satisfactory for studying human
development, while the cross-sectional technique is more commonly used because it is less
expensive), Trend Studies (used to make predictions from social trends, economic
conditions, technological advances, etc. to future status) and Model or System Development
(the creative development of a model or system (paradigm) based on a thorough
determination of the present situation or system and the goals sought).

3.2.8 Analytical Research


The researcher analyzes the facts and information available to make a critical evaluation of
phenomena; it explains why things are the way they are. It goes beyond description, e.g.
explaining why the number of students in a university is increasing.

3.2.9 Conceptual Vs Empirical


• Conceptual – relates to abstract ideas or theory and is used by philosophers to develop
new concepts or interpret existing ones.
• Empirical – relies on experience or observation and gives less attention to theories.

3.3 Research design


A research design is a master plan/ framework or blueprint specifying the methods and
procedures for collecting and analyzing the needed information (Zikmund 2003). It specifies
the details of the procedures necessary for obtaining the information needed to structure or
solve the marketing research problems. The research design also specifies the research
methods chosen, determines the information needed, and defines the sampling method,
sample size, measurement and data analysis processes (Kinnear, Taylor, Johnson and
Armstrong, 1993).

Components of a good Research Design:


• Define the information needed
• Determine the type of research to be undertaken: exploratory, descriptive, or
causal
• Specify the collection instruments and scaling procedures
• Specify the sampling process and sample size
• Develop a plan for data analysis
• Specify budgeting and project scheduling

Various methodologies are utilized for business research including exploratory, conclusive
(descriptive and causal) approaches and are considered below.

Figure: Types of Research Designs

    Research design
    ├── Exploratory
    └── Conclusive
        ├── Descriptive
        └── Causal

Source: Author (2007)
Characteristic of a good design
a) Has power to detect relationship among variables
b) Appropriate for the research questions



c) Minimizes bias and maximizes the reliability
d) Has smallest errors
e) Yields maximum information and provide an opportunity for considering many
different aspects of a problem

3.3.1. Exploratory Research design


Exploratory studies are conducted when the nature of the problem is not clear with the
expectation that further research would be necessary to yield conclusive evidence.
Exploratory research helps to crystallize a problem and identify information needs for future
research (Zikmund 1997). Exploratory research is the means of seeking new insights, asking
questions and assessing phenomena in a new light (Robson 2002).
It allows the researcher to familiarize him/herself with the problem or concept to
be studied, and potentially generate hypotheses or research propositions to be
tested. The output is qualitative and may serve as a basis for subsequent
quantitative research (Zikmund 2003).

Justification for Exploratory Research design


According to Perry (1995, p.63) “Time and research constraints, necessitate the choice of only
one major research methodology which is best suited to the research problem”.

Exploratory research is usually conducted in an initial stage of the research process,
primarily to determine:
i) The dimensions of the research problem,
ii) Setting of priorities and follow up actions, selection of the best of alternative
research methods available,
iii) The discovery of new ideas and contribution to the body of knowledge.

3.3.2 Conclusive research design


The objective is to test specific hypotheses and examine specific relationships. It is more
formal and structured than exploratory research. Findings are considered to be conclusive in
nature because they are used as input for managerial decision-making.

Characteristics:

• Information needed is clearly defined


• Research process is formal and structured
• Sample is large and representative
• Data analysis is mainly quantitative
• Findings/Results: Conclusive

Divided into: Descriptive and Causal

3.3.2.1 Descriptive research design

Descriptive design is used to obtain information concerning the current status of the
phenomena to describe, "what exists" with respect to variables or conditions in a situation.
The methods involved range from the survey, which describes the status quo, the correlation
study that investigates the relationship between variables, to developmental studies, which
seek to determine changes over time. “In descriptive research the problem is structured and
well understood” (Ghauri and Gronhaug 2002, p.49). “Descriptive research portrays an
accurate profile of persons, events or situations” (Robson 2002). This statement is echoed by
Zikmund (2003, p.55) who states that, “The major purpose of descriptive research is to
provide information on characteristics of a population or phenomenon”. Accuracy is



particularly important in descriptive research. Descriptive studies are based on some previous
understanding of the research problem (Zikmund, 2003). “A descriptive study tries to
discover answers to who, what, when, where and sometimes how questions” (Cooper and
Schindler 2003, p.10). It also attempts to capture attitudes or patterns of past behavior. Two
types of descriptive research are cross-sectional studies and longitudinal studies.

• Cross-sectional studies- data are collected only once; it is like a snapshot
• Longitudinal studies- data are collected repeatedly from a fixed sample, enabling one
to note the changes as they occur
A sub-set of descriptive research is “diagnostic analysis” where research findings are clarified
by explanations respondents give for a behavior or attitude. (Zikmund, 2003).

Exploratory design                               Descriptive design

1. Objective                                     1. Objective
Provide insight and understanding of             Test specific hypotheses
the problem
2. Characteristics                               2. Characteristics
Uses non-probability sampling                    Uses probability sampling
Information needed is defined loosely            Information needed is clearly defined
Research process is flexible and unstructured    Research process is formal and structured
Uses a small sample                              Uses a large sample
Uses open-ended questionnaires                   Uses closed-ended questionnaires
Analysis of data is qualitative                  Analysis of data is quantitative
Findings are not conclusive                      Findings are conclusive, i.e. used for
                                                 managerial decision-making

3.3.2.2 Causal/ Experimental research


The primary purpose of causal research is the identification of cause-and-effect relationships
between variables. Exploratory and descriptive researches normally precede cause and effect
relationship studies (Zikmund 2003). Hence, in causal research the problems under scrutiny
are structured (Ghauri and Gronhaug 2003) and it is typical to have an expectation of the
relationship to be explained. Causal research attempts to establish that when we do one thing,
another thing will follow and the relationship between the two variables may be symmetrical,
reciprocal or asymmetrical (Zikmund 2003; Cooper 2003).
“When the researcher is interested in delineating the important variables that are associated
with the problem, it is called a correlational study” (Sekaran 2000, p.130). Concomitant
variation occurs when two phenomena vary together. When the criterion of concomitant
variation is not met, then no causal relationship exists (Zikmund 2003).

Appropriate for the following purposes:


• To understand which variables are the cause (independent variables) and which
variables are the effect (dependent variables) of a phenomenon
• To determine the nature of the relationship between the causal variables and the effect
to be predicted

The main method of causal research is experimentation


Table. Types of Business Research

Method        Features                                           Degree of uncertainty of
                                                                 research problem
Exploratory   To gain background information, define terms,      Ambiguous
              clarify problems and hypotheses, and establish
              research priorities
Descriptive   To describe and measure phenomena at a             Aware of the problem
              particular time
Causal        To identify cause-and-effect relationships         Clearly defined problem
              among variables

Source: (Zikmund, 2000)

3.3.3 Case study


Emory (1995) defines a case study as a “study focusing on one organization selected from the
total population of other organizations in the same industry”. Case studies are detailed
investigations of individuals, groups, institutions or other social units. The researcher
conducting a case study attempts to analyze the variables relevant to the subject under study
(Polit and Hungler, 1983). The principal difference between case studies and other research
studies is that the focus of attention is the individual case and not the whole population of
cases. Most studies search for what is common and pervasive. However, in the case study, the
focus may not be on generalization but on understanding the particulars of that case in its
complexity. A case study focuses on a bounded system, usually under natural conditions, so
that the system can be understood in its own habitat (Stake, 1988).

Justification of Case Study design


According to Yin (1994), case study research is complex and multifaceted. Pursuing case
study research requires three ingredients:
a) Capability to deal with a diversity of evidence
b) Ability to articulate research questions and theoretical propositions
c) Production of a research design
Case study design is a logical sequence that connects the empirical data to a study’s initial
research questions and, ultimately, to its conclusions.

Classification of Research Design


The main purpose of any research design is to ensure the study will be relevant to the problem
and will use economical procedures. Research designs can be classified in the following ways
(Emory, 1995):
1. The degree of problem crystallization. i.e. extent to which your problem is well
defined and well understood. Here the concern is with the
degree to which the problem is well defined. A problem that is not well defined is said
to have a low degree of problem crystallization whereas a problem that is well
defined is said to have a high degree of problem crystallization. The degree of
crystallization can help one to determine the type of research design to use. When the
degree is low, go for an exploratory research design; when the degree is high, go for a
descriptive design.
2. The methods of data collection. i.e. which methods are you interested to use. If you
are interested in using surveys, observations then go for descriptive design while if
you are interested in using experiments go for causal research design
3. Researcher’s control of variables. This classification is based on the researcher’s
ability to manipulate the variables. This results in experiment and ex post facto
design. In experiment design, the researcher attempts to control and or manipulate the
variables in the study. In ex post facto, investigators have no control over the
variables in sense of being able to manipulate them. They can only report what has
happened or what is happening. Usually, in this case, you go for descriptive design.
4. Topical scope- here we have statistical and case study designs. In a statistical design,
the emphasis is on breadth rather than depth, and findings can be generalized based on
the sample and the validity of the design. A case study emphasizes a full contextual
analysis of fewer events, all conditions and their interrelations (emphasis on depth
rather than breadth). Findings cannot be generalized.
5. Research environment- does the research occur in its actual environment or under
other conditions? This results in field studies and laboratory studies, i.e. field studies
can be descriptive or exploratory while a laboratory study will be causal/experimental.



6. The purpose of the study- the objective of the study influences the design to use. If
the researcher is concerned with finding out the who, what, when, how etc., the study
will use a descriptive design, but if it is concerned with the why of the phenomena,
the study will use a causal design.
7. Time dimension- this results in longitudinal and cross-sectional studies.

3.4 The Validity and Reliability of a Questionnaire

3.4.1 Reliability

Reliability is the consistency of your measurement, or the degree to which an instrument
measures the same way each time it is used under the same conditions with the same subjects.
A measure is considered reliable if a person's score on the same test given twice is similar. It
is important to remember that reliability is not measured but estimated.

Types of reliability

a) Test/Retest
Test/retest is the more conservative method to estimate reliability. Simply put, the
idea behind test/retest is that you should get the same score on test 1 as you do on test
2. The three main components to this method are as follows:

1. Implement your measurement instrument at two separate times for each subject
2. Compute the correlation between the two separate measurements; and
3. Assume there is no change in the underlying condition (or trait you are trying to
measure) between test 1 and test 2.
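As a rough sketch of step 2, the test/retest estimate is simply the correlation between the two administrations. The scores below are hypothetical, not from the text:

```python
# Illustrative test/retest reliability estimate: the Pearson correlation
# between two administrations of the same instrument to the same subjects.

def pearson(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

test1 = [12, 15, 9, 20, 14, 17]   # scores at time 1 (hypothetical)
test2 = [13, 14, 10, 19, 15, 18]  # same subjects at time 2 (hypothetical)

r = pearson(test1, test2)
print(round(r, 3))  # a value near 1 indicates high test/retest reliability
```

A correlation close to 1 suggests the instrument behaves consistently across the two administrations, provided the underlying trait really did not change between them (assumption 3 above).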

b) Alternative Form Method

Like the retest method, this method also requires two testings with the same people. However,
the same test is not given each time. Each of the two tests must be designed to measure the
same thing and should not differ in any systematic way. One way to help ensure this is to use
random procedures to select items for the different tests.

The alternative form method is viewed as superior to the retest method because a respondent’s
memory of test items is not as likely to play a role in the data received. One drawback of this
method is the practical difficulty in developing test items that are consistent in the
measurement of a specific phenomenon.

c) Split-Halves Method

This method is more practical in that it does not require two administrations of the same or an
alternative form test. In the split-halves method, the total number of items is divided into
halves, and a correlation taken between the two halves. This correlation only estimates the
reliability of each half of the test. It is necessary then to use a statistical correction to estimate
the reliability of the whole test (Cook and Campbell, 1979).
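One commonly used form of the statistical correction mentioned above is the Spearman-Brown formula, which steps the half-test correlation up to an estimate for the full-length test. The item scores below are hypothetical:

```python
# Sketch of the split-halves method: correlate the two halves, then apply
# the Spearman-Brown correction to estimate whole-test reliability.

def pearson(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Rows: respondents; columns: six items of one test (hypothetical data).
items = [
    [4, 5, 4, 5, 3, 4],
    [2, 1, 2, 2, 1, 2],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 2, 3, 3, 2],
    [1, 2, 1, 1, 2, 1],
]

# Split into two halves (here: odd vs even item positions) and sum each half.
half1 = [sum(row[0::2]) for row in items]
half2 = [sum(row[1::2]) for row in items]

r_half = pearson(half1, half2)      # reliability of each half only
r_full = 2 * r_half / (1 + r_half)  # Spearman-Brown estimate for the whole test
print(round(r_half, 3), round(r_full, 3))
```

Note that the corrected estimate is always at least as high as the half-test correlation, reflecting the fact that a longer test of equivalent items is more reliable.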

3.4.2. Validity

Cook and Campbell (1979) define validity as the "best available approximation to the truth or
falsity of a given inference, proposition or conclusion." A test is valid when it measures what



it’s supposed to measure. How valid a test is depends on its purpose—for example, a ruler
may be a valid measuring device for length, but isn’t very valid for measuring volume. If a
test is reliable, it yields consistent results.

Validity determines whether the research truly measures that which it was intended to
measure or how truthful the research results are. In other words, does the research instrument
allow you to hit "the bull’s eye" of your research object? Researchers generally determine
validity by asking a series of questions.

Starting with the research question itself, you need to ask yourself whether you can actually
answer the question you have posed with the research instrument selected.

Questionnaire Validity

The validity of a questionnaire relies first and foremost on reliability. If the questionnaire
cannot be shown to be reliable, then it cannot be valid. The overriding principle of
validity is that it focuses on how a questionnaire or assessment process is used. Reliability is
a characteristic of the instrument itself, but validity comes from the way the instrument is
employed.

The following ideas support this principle:

• As nearly as possible, the data gathering should match the decisions you need to
make. This means if you need to make a priority-focused decision, such as allocating
resources or eliminating programs, your assessment process should be a comparative
one that ranks the programs or alternatives you will be considering.
• Gather data from all the people who can contribute information, even if they are hard
to contact. For example, if you are conducting a survey of customer service, try to
get a sample of all the customers, not just those who are easy to reach, such as those
who have complained or have made suggestions.

Types of Measurement Validity

Face validity: Does it appear to measure what it is supposed to measure? There would be low
face validity when the researcher is disguising intentions. Face validity pertains to whether
the test "looks valid" to the examinees who take it, the administrative personnel who decide
on its use, and other technically untrained observers.

Content Validity: This approach measures the degree to which the test items represent the
domain or universe of the trait or property being measured. In order to establish the content
validity of a measuring instrument, the researcher must identify the overall content to be
represented. Items must then be randomly chosen from this content that will accurately
represent the information in all areas. By using this method the researcher should obtain a
group of items, which is representative of the content of the trait or property to be measured.

Identifying the universe of content is not an easy task. It is, therefore, usually suggested that a
panel of experts in the field to be studied be used to identify a content area. For example, in
the case of researching the knowledge of teachers about a new curriculum, a group of
curriculum and teacher education experts might be asked to identify the content of the test to
be developed.



Criterion Validity: Is the measure consistent with what we already know and what we
expect?

This approach is concerned with detecting the presence or absence of one or more criteria
considered to represent traits or constructs of interest. One of the easiest ways to test for
criterion-related validity is to administer the instrument to a group that is known to exhibit the
trait to be measured. A panel of experts may identify this group. A wide range of items should
be developed for the test with invalid questions culled after the control group has taken the
test. Items should be omitted that are drastically inconsistent with respect to the responses
made among individual members of the group. If the researcher has developed quality items
for the instrument, the culling process should leave only those items that will consistently
measure the trait or construct being studied. For example, suppose one wanted to develop an
instrument that would identify teachers who are good at dealing with abused children. First, a
panel of unbiased experts identifies 100 teachers out of a larger group that they judge to be
best at handling abused children. The researcher develops 400 yes/no items that will be
administered to the whole group of teachers, including those identified by the experts. The
responses are analyzed, and the items to which the expert-identified teachers and the other
teachers respond differently are taken as those questions that will identify teachers who are
good at dealing with abused children.

Predictive validity: Predicts a known association between the construct you’re measuring
and something else. Predictive validity is also a form of criterion validity. If the health status
test can be shown to be able to predict the health of people not only when administered but
also at some later time, then it would be said to have predictive validity. Obviously this is
particularly important for health screening programmes, which attempt to predict later ill
health from some form of present assessment; breast self-examination is an example.

Concurrent validity: Associated with pre-existing indicators; something that already
measures the same concept.

Construct Validity: Shows that the measure relates to a variety of other measures as
specified in a theory. For example, if we’re using an Alcohol Abuse Inventory, even if there’s
no way to measure “abuse” itself, we can predict that serious abuse correlates with health,
family, and legal problems.

Convergent validity is the degree to which multiple attempts to measure the same concept
are in agreement. To test convergent validity, one can evaluate the item-to-total correlation,
that is, the correlation of each item with the sum of the remaining items.
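The item-to-total check can be sketched as follows. The four-item scale and the responses are hypothetical; a weak or negative correlation flags an item that may not measure the same concept as the rest:

```python
# Corrected item-to-total correlations: each item is correlated with the
# sum of the *other* items of the scale. Data are hypothetical.

def pearson(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Rows: respondents; columns: the four items of one scale (hypothetical).
scores = [
    [4, 5, 4, 2],
    [2, 1, 2, 4],
    [5, 4, 5, 1],
    [3, 3, 2, 3],
    [1, 2, 1, 5],
]

item_total = []
for j in range(len(scores[0])):
    item = [row[j] for row in scores]
    rest = [sum(row) - row[j] for row in scores]  # total of the other items
    item_total.append(pearson(item, rest))

for j, r in enumerate(item_total, start=1):
    print(f"item {j}: r = {r:+.2f}")
```

In this made-up data set, item 4 correlates negatively with the rest of the scale, which would suggest dropping or rewording it before claiming convergent validity.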

Discriminant Validity: is the degree to which measures of different concepts are distinct.

a) Threats to Internal validity

Internal validity addresses the "true" causes of the outcomes that you observed in your study
because you are able to rule out extraneous variables, or alternative, often unanticipated,
causes for your dependent variables.

There are many different ways that the internal validity of a study can be threatened or
jeopardized. A list and brief comment of some of the more important ones are given below.

History: Outside events occurring during the course of the experiment or between repeated
measures of the dependent variable may have an influence on the results. This does not make
the test itself any less accurate.



Maturation: Change due to aging or development, either between or within groups: the
process of maturing which takes place in the individual during the experiment, which is
not a result of specific events but of simply growing older, growing more tired, or similar
changes. Example: Subjects become tired after completing a training session, and their
responses on the posttest are affected.

Measuring Instruments - Changes in instruments, calibration of instruments, observers, or
scorers may cause changes in the measurements. Example: Interviewers are very careful with
their first two or three interviews but become fatigued on the 4th, 5th and 6th, and are less
careful and make errors.

Pre-Testing: Experience of taking test has an influence on results. Experience refers either to
mental or physical changes—a participant’s attitude towards a topic may change because of a
survey, which could affect results, or a participant’s physiological response to a test may
change after repeated measures.

Statistical Regression - Groups are chosen because of extreme scores or measurements;
those scores or measurements tend to move toward the mean with repeated measurements
even without an experimental variable. Example: Managers who are performing poorly are
selected for training. Their average posttest scores will be higher than their pretest scores
because of statistical regression, even if no training were given.

Selection bias/Differential Selection - Different individuals or groups may have different
previous knowledge or ability, which would affect the final measurement if not taken into
account. Example: A group of subjects who have viewed a TV program is compared with a
group which has not. There is no way of knowing that the groups would have been
equivalent, since they were not randomly assigned to view the TV program.

Mortality: Participants drop out of the test, making the groups non-equivalent. Also, who
drops out and why? (Often it is the people who did most poorly on the test to begin with.)
Interaction: Two or more threats can interact. For example, a Selection-Maturation
interaction: difference between ages of groups could cause groups to change at different rates.
A group of young people may show more improvement in a test than a group of older people,
but that could be because their brains are developing faster relative to their age.

Experimenter bias: Expectations of an outcome may inadvertently influence participants or
cause the experimenter to view data in a different way.

Placebo Effect: Improvement due to expectation rather than the treatment itself can occur
when participants receive a treatment that they consider likely to be beneficial.

Contamination: When the comparison group is in some way affected by, or affects, the
treatment group, for example causing an increase in effort. Also known as compensatory
rivalry or the John Henry effect.




b) Threats to External validity

External validity addresses the ability to generalize your study to other people and other
situations. These are factors that can affect how well your results apply to the target
population. Can we generalize with confidence that this is true for the target population?

Selection bias: The sample is not representative of the population demographically; the
selection of the subjects determines how the findings can be generalized. Subjects selected
from a small group or one with particular characteristics would limit generalizability.
Randomly chosen subjects from the entire population could be generalized to the entire
population. Example: 11 corporations turn down a researcher's request for permission to
conduct an experiment, but the 12th corporation grants permission. The 12th corporation is
obviously different from the others because it accepted; thus its subjects may be more
accepting of, or sensitive to, the treatment.

Reactive effects of experimental arrangements: Results could be due to the experimental
setting, not the treatment. Therefore, the results might not be true for the target
population. This is something to consider when controlling confounding variables: there is
population. This is something to consider when controlling confounding variables—there is
always a tradeoff between control and external validity. When designing an experiment, you
should always explicitly think of what is most important; this varies for every design.
Reactive effects of testing / pretest sensitization: While the sample gets a pre-test to establish
a baseline of behavior, the target population isn’t getting a pre-test, so might respond
differently to the treatment.

Multiple treatment interference/ Pre-Testing: Giving treatments in the first place means
that second treatments could change the participant. Even if the second treatment is effective,
it might only be because of the interaction with the first treatment. This can be accounted for
by using a Latin square, where all the groups get each treatment, but in different orders.
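The Latin-square counterbalancing just described can be sketched in code. This cyclic construction (each row is the previous order rotated by one) is one simple way to build such a square; treatments "A" to "D" are placeholders:

```python
# Cyclic Latin square for counterbalancing treatment order: with n
# treatments, n groups each receive every treatment, but in a different
# order, so order effects are spread evenly across positions.

def latin_square(treatments):
    """Row i is the treatment list rotated left by i positions."""
    n = len(treatments)
    return [[treatments[(i + j) % n] for j in range(n)] for i in range(n)]

square = latin_square(["A", "B", "C", "D"])
for group, order in enumerate(square, start=1):
    print(f"group {group} receives treatments in order: {order}")
```

By construction, every treatment appears exactly once in each row (each group gets all treatments) and exactly once in each column (each treatment occupies each position in the sequence once).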

Experimental Procedures - The experimental procedures and arrangements have a certain
amount of effect on the subjects in the experimental settings. Generalization to persons not in
the experimental setting may be precluded. Example: Department heads realize they are being
studied, try to guess what the experimenter wants and respond accordingly rather than
respond to the treatment.

Hawthorne Effect: When members of the treatment group change in terms of the dependent
variable because their participation in the study makes them feel special, so they act
differently, regardless of the treatment.

Tools of Experimental Design Used to Control Factors Jeopardizing Validity

Pre-Test - The pre-test, or measurement before the experiment begins, can aid control for
differential selection by determining the presence or knowledge of the experimental variable
before the experiment begins. It can aid control of experimental mortality because the subjects
can be removed from the entire comparison by removing their pre-tests. However, pre-tests
cause problems by their effect on the second measurement and by causing generalizability
problems to a population not pre-tested and those with no experimental arrangements.

Control Group -The use of a matched or similar group, which is not exposed to the
experimental variable, can help reduce the effect of History, Maturation, Instrumentation, and
Interaction of Factors. The control group is exposed to all conditions of the experiment except
the experimental variable.



Randomization - Use of random selection procedures for subjects can aid in control of
Statistical Regression, Differential Selection, and the Interaction of Factors. It greatly
increases generalizability by helping make the groups representative of the populations.
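The randomization procedure above can be sketched as a simple shuffle-and-split. The subject names and the fixed seed are illustrative only; in a real study the seed would not be fixed:

```python
# Random assignment of subjects to treatment and control groups, so the
# groups are comparable on average before the experimental variable is
# introduced. Subjects and seed are hypothetical/illustrative.
import random

subjects = [f"subject_{i}" for i in range(1, 21)]  # 20 hypothetical subjects

rng = random.Random(42)  # seeded only so the example is repeatable
pool = subjects[:]
rng.shuffle(pool)

treatment = pool[:10]  # exposed to the experimental variable
control = pool[10:]    # all conditions except the experimental variable

print(len(treatment), len(control))  # 10 10
```

Because every subject had an equal chance of landing in either group, pre-existing differences (prior knowledge, ability, motivation) are distributed by chance rather than by selection.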

Additional Groups - The effects of Pre-tests and Experimental Procedures can be
partially controlled through the use of groups which were not pre-tested or exposed to
experimental arrangements. They would have to be used in conjunction with other
pre-tested groups, or other factors jeopardizing validity would be present.

3.5 Measurement concepts


 The process of assigning numbers or symbols to characteristics of objects, states
or events according to certain specified rules.
 Measurement is the process of assigning numbers to represent the amount of a variable
(i.e., a characteristic, attribute or trait present in a person, object or situation under study).
In fact, the measurement result is an observed score. Every observed score is made up of
two parts: a true score and an error score. The true score represents the actual amount of
the variable present. The difference between the observed score and the true score is due
to measurement error (non-sampling error), which results solely from the manner in which
the observations are made; the amount of error cannot be measured directly. The
smaller the error score, the more closely the observed score reflects the true score or the
actual amount of the variable present. Measurement results that contain little error are
said to be reliable.
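The observed = true + error decomposition can be illustrated with a small, purely hypothetical simulation: two instruments measure the same true scores, one with a small error component and one with a large one, and the smaller the error, the more closely observed scores track the true scores:

```python
# Illustrative simulation of observed score = true score + error score.
# All values are hypothetical; the fixed seed just makes it repeatable.
import random

rng = random.Random(7)
true_scores = [rng.uniform(40, 90) for _ in range(200)]  # actual amounts of the variable

def observe(true, error_sd):
    """One measurement: observed score = true score + random error."""
    return [t + rng.gauss(0, error_sd) for t in true]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r_small_error = pearson(true_scores, observe(true_scores, error_sd=2))
r_large_error = pearson(true_scores, observe(true_scores, error_sd=20))
print(r_small_error > r_large_error)  # True: less error -> more reliable
```

The instrument with the small error standard deviation produces observed scores almost perfectly correlated with the true scores, while the noisy instrument's correlation is markedly lower, which is exactly what "reliable" means in this framework.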

Sources of measurement error include

• The instrument (eg, improper calibration, variations in use),


• The environment (eg, room temperature, noise level),
• The researcher (eg, fatigue, mood), and
• Data processing (eg, striking the wrong key while entering data into the computer,
mathematical errors in summing scores).

Scaling
 A scale is a device for assigning units of analysis to categories of a variable.
 Scaling involves generation of a continuum upon which measured objects are located.



Scales in Measurement

Nominal scale
 The numbers only serve as labels for identifying and classifying objects, properties or
events.
 Example: Identification of respondents, brands, Attributes etc.
 Permissible statistics: percentages, mode, chi-square, etc.
 Nominal scale measurement consists of assigning items to groups or categories. No
quantitative information is conveyed and no ordering of the items is implied. Nominal
scales are therefore qualitative rather than quantitative. Religious preference, race, and
sex are all examples of nominal scales.

Some researchers actually question whether a nominal scale should be considered a "true"
scale since it only assigns numbers for the purpose of categorizing events, attributes or
characteristics. The nominal scale does not express any values or relationships between
variables. Labeling men as "1" and women as "2" (which is one of the most common ways of
labeling gender for data entry purposes) does not mean women are "twice something or other"
compared to men. Nor does it suggest that 1 is somehow "better" than 2 (as might be the case
in competitive placement).

Consequently, the only mathematical or statistical operation that can be performed on
nominal scales is a frequency run or count. We cannot determine an average, except for the
mode (the value that holds the most responses), nor can we add and subtract numbers.
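The point can be sketched in code using the gender-coding example above (the responses are hypothetical): with nominal codes, only counting and the mode are meaningful.

```python
# Frequency count and mode: the only meaningful summaries of nominal
# codes. The codes 1 (men) and 2 (women) are arbitrary labels, so sums
# or averages of them would be meaningless. Data are hypothetical.
from collections import Counter

gender_codes = [1, 2, 2, 1, 2, 2, 1, 2]

counts = Counter(gender_codes)
mode = counts.most_common(1)[0][0]
print(f"counts: {dict(counts)}, mode: {mode}")
```

The mode here says only that code 2 is the most frequent category; computing a mean of the codes would produce a number with no interpretation.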

Ordinal scale: a scale in which data can be ranked, but in which no arithmetic
transformations are meaningful. For example, wind speed might be measured as high,
medium, or low, but we would not say that the difference between high and medium wind
speed is equal to (or any arithmetic transformation of) the difference between a medium and
low wind speed. In short, the distances between points on an ordinal scale are not meaningful.
Likert scales are a common example of ordinal measurement.

Measurements with ordinal scales are ordered in the sense that higher numbers represent
higher values. However, the intervals between the numbers are not necessarily equal. For
example, on a five-point rating scale measuring attitudes toward gun control, the difference
between a rating of 2 and a rating of 3 may not represent the same difference as the difference
between a rating of 4 and a rating of 5. There is no "true" zero point for ordinal scales since
the zero point is chosen arbitrarily. The lowest point on the rating scale in the example was
arbitrarily chosen to be 1. It could just as well have been 0 or -5.

Note
 The numbers are assigned to objects to indicate the relative extent to which the objects
possess some characteristics. It allows you to determine whether an object has more or less
of a characteristic than some other object, but not how much more or less.
 Example: quality rankings, preferences etc.
 Permissible statistics: percentile, quartile, median, rank-order correlation etc.

Interval Scale: an interval scale has the additional requirement of a series of equal
intervals, indicating the degree of difference when comparing things. An example of
this is the Celsius (centigrade) temperature scale, where the difference between 10°C
and 20°C is exactly the same as the difference between 30°C and 40°C, that is, a
difference of 10°C. Interval measurement allows us to make precise claims about the
measurable differences between observations in amount or size. We cannot, however,
make statements such as "40°C is twice as hot as 20°C", even if it may seem like it on
such a hot day. The reason for this is that there is no absolute value of zero to
indicate complete absence of the variable being measured. In the Celsius scale, 0°C is
the temperature at which water freezes. There are much colder temperatures than 0°C,
which are indicated by a minus sign in front, e.g. the outside temperature in
Antarctica can be -50°C or lower.
Note
 The numbers are used to rank objects and also represent equal increments of the attribute
being measured. The intervals between numbers can be compared but not the absolute
values.
 Example: measurement of temperature, attitudes etc.
 Permissible statistics: measures of central tendency; mode, median, mean; standard
deviation, product moment correlations, t-test and F-test etc.

Ratio Scale: "a ratio scale is an interval scale in which distances are stated with respect to a
rational zero rather than with respect to, for example, the mean". A rational zero is a location
on an interval scale deliberately chosen for reasons other than the current data. You are also
allowed to take ratios among ratio-scaled variables. Physical measurements of height,
weight, and length are typically ratio variables. It is now meaningful to say that 10 m
is twice as long as 5 m. This ratio holds true regardless of which scale the object is
measured in (e.g. metres or yards), because there is a natural zero.

Ratio scales should be used to gather quantitative information, and we see them perhaps most
commonly when respondents are asked for their age, income, years of participation, etc. In
order to respect the notion of equal distance between adjacent points on the scale, you must
make each category the same size.

Note
 This scale possesses all the properties of the nominal, ordinal, and interval scales,
and in addition, an absolute zero point. Thus, in ratio scales we can identify or
classify objects, rank the objects, and compare intervals or differences.
 Example: sales, costs, market share, and number of customers.
 Permissible statistics: All types of statistical operations can be performed on the ratio
scales.

Classification of scaling techniques

a. Comparative scaling techniques

1. Paired comparison scaling


 The objects are presented to the respondent two at a time and the respondent has to
choose between them according to some criterion
 Number of comparisons = n(n − 1)/2

 Limitations:
1. With more than about five objects, the number of comparisons becomes unwieldy.
2. There may be violations of the assumptions of transitivity.
3. The order in which the objects are presented may bias the results.
4. In the marketplace, purchases rarely involve pairwise comparisons.
5. Respondents may prefer one object over others, but they may not like it in an
absolute sense.
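The comparison count grows quickly with the number of objects, which is why the method becomes impractical beyond a handful of stimuli. A minimal Python check, using hypothetical brand labels:

```python
from itertools import combinations

# Hypothetical stimulus objects presented two at a time
brands = ["A", "B", "C", "D", "E"]

pairs = list(combinations(brands, 2))  # every unordered pair of objects
n = len(brands)

print(len(pairs))        # → 10
print(n * (n - 1) // 2)  # → 10, matching the formula n(n-1)/2
```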



2. Rank Order Scaling
 Respondents are presented with several objects simultaneously and asked to rank
order them according to some criterion.
 The technique is commonly used to measure preferences for
brands as well as attributes.
 Note:
1. Closely resembles the shopping environment.
2. Takes less time and eliminates intransitive responses.
3. Relatively easy to administer.
4. Major disadvantages: Results may not be generalizable.

3. Constant Sum Scaling


 Requires the respondents to allocate (divide) a constant sum of units, such as points,
among a set of stimulus objects with respect to some criterion to reflect the relative
preference for each object.
 Widely used to measure the relative importance of attributes.

 Note:
1. The scale allows for fine discrimination among stimulus objects without requiring
too much time.
2. Disadvantages:
 Only a limited number of objects or attributes can be compared on such a
scale.
 Respondents may allocate more or fewer units than those specified
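Since respondents may allocate more or fewer units than specified, a simple validation check is worth running on each response. The attribute names and the constant of 100 points below are hypothetical:

```python
CONSTANT = 100  # hypothetical constant sum of points to be allocated

# Hypothetical allocation across four attributes by one respondent
allocation = {"price": 40, "quality": 30, "service": 20, "brand": 10}

total = sum(allocation.values())
valid = (total == CONSTANT)  # flag responses that over- or under-allocate
print(total, valid)  # → 100 True
```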

b. Non-comparative scaling techniques


 Respondents employ whatever rating standard seems appropriate to them when
evaluating objects.

1. Continuous Rating Scale (Graphic Rating Scale)


 Respondents rate the objects by placing a mark at the appropriate position on a line that runs
from one extreme of the criterion to the other.
 Once the respondent has provided the ratings, the researcher divides the line into as
many categories as desired and assigns scores based on the categories into which the
ratings fall.
 Note:
i. They are easy to construct
ii. However, scoring is cumbersome and unreliable.
iii. Have limited use in marketing research.

2. Itemized Rating Scales


 Respondents are provided with a scale that has a number or brief description
associated with each category.

i. Likert (summated) scale


 Requires a respondent to indicate a degree of agreement or disagreement with a variety of
statements related to the attitude object.
 Consists of two parts:
i. A declarative statement
ii. A list of response categories ranging from “strongly agree” to “strongly
disagree.”
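A sketch of how a summated (Likert) score might be computed, assuming hypothetical five-point responses. Reverse-coding of negatively worded items (6 − score on a 5-point scale) is standard practice, though not stated above:

```python
# Hypothetical responses to a five-item Likert battery,
# coded 1 = strongly disagree ... 5 = strongly agree.
# Items flagged reverse=True are negatively worded and are
# reverse-coded (6 - score on a 5-point scale) before summing.
responses = [
    {"score": 4, "reverse": False},
    {"score": 2, "reverse": True},
    {"score": 5, "reverse": False},
    {"score": 1, "reverse": True},
    {"score": 3, "reverse": False},
]

summated = sum(6 - r["score"] if r["reverse"] else r["score"] for r in responses)
print(summated)  # → 21  (4 + 4 + 5 + 5 + 3)
```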
ii. Semantic Differential Scale



 Widely used to describe the set of beliefs that comprise a person’s image of an
organization or brand.
 An insightful procedure for comparing the images of competing brands, stores, or
services.
 Typically the respondent is asked to describe the organization/product by means of
ratings (seven point rating scales) on a set of bipolar adjectives.

I. Non-comparative scale construction considerations

a. Number of scale categories


 Respondent’s interest in the task and knowledge about the object.
 Mode of data collection
 Data analysis and use
b. Balance versus unbalanced scale
 Balanced scale: favorable categories equal unfavorable categories
 Unbalanced scale: favorable categories unequal to unfavorable categories

c. Odd versus even number of categories


 Odd number of categories: the middle scale position is generally designated as neutral or
impartial.
 Category choice decision depends on whether some of the respondents may be neutral on the
response being measured.
d. Forced versus non-forced choice
 Forced scale: the respondents are forced to express an opinion because a “no opinion”
option is not provided.
 In general, a don’t know category should be provided whenever respondents have
insufficient experience to make a meaningful judgment.

e. Nature and degree of verbal description


 Scale categories may have verbal, numerical or pictorial descriptions.
 Labeling can be done on:
 Every scale category
 Some scale categories, or
 To only extreme scale (polar) categories
 The strength of the adjective used to anchor the scale may influence the distribution of the
responses.

f. Physical form of the scale


 Scales can be presented vertically or horizontally.
 Boxes, discrete lines, can express categories on a continuum and may or may not have
numbers (negative or positive) assigned to them.

3.6 Population and sample


A population is the total collection of elements about which we wish to make some inferences
(Cooper and Schindler 2000). But in some studies you can study a small group instead of the
total population. Cooper and Schindler (2003 p.179) state that the basic idea of sampling is
that by selecting some of the elements in a population, conclusions may be drawn about the
entire population. Sekaran (2000 p.267) concurs with this view, stating that by “studying the
sample, and understanding the characteristics of the sample, it would be possible to generalize
the properties or characteristics to the population elements”.

Sampling has several advantages including lower costs, greater accuracy of results and
faster speed of data collection.



The ultimate test of a sample design is how well it represents the characteristics of the
population it purports to represent. In measurement terms, the sample must be valid; validity
depends on accuracy and precision. Accuracy is defined as the degree to which bias is absent
from the sample. Precision is measured by the standard error of estimate; the smaller the
standard error of estimate, the higher the precision of the sample.
• Sampling frame- a complete list of all elements of a population from which the
sample is drawn
3.7 Sampling techniques
Methods used to arrive at the desired sample size. There are two broad approaches to
sampling: probability and non-probability sampling. Probability sampling is based on the
concept of random selection, while non-probability sampling is arbitrary and subjective.
These are considered in turn.
1. Probability sampling
There are five types of probability sampling:
• Simple random sampling, systematic sampling, stratified sampling, cluster
sampling and double sampling

(a) Simple random sampling


Probability sampling procedure that ensures that each case in the population has an equal
chance of being included in the sample. You start by assigning each item a unique number,
and then pick items, either with or without replacement. This method is best used when you
have an accurate and easily accessible sampling frame that lists the entire population
(Saunders, Lewis and Thornhill, 2003 p. 162). It works well with small samples.
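A minimal sketch of the procedure in Python, assuming a hypothetical numbered frame of 200 elements:

```python
import random

# Hypothetical sampling frame: every element assigned a unique number
frame = list(range(1, 201))  # population of 200 elements

random.seed(42)                    # fixed seed so the draw is reproducible
sample = random.sample(frame, 20)  # without replacement; each case equally likely

print(len(sample))       # → 20
print(len(set(sample)))  # → 20 (all distinct, since sampling is without replacement)
```

`random.choices` would give the with-replacement variant; `random.sample` never repeats an element.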

b) Systematic sampling
Probability sampling in which the initial sampling point is selected at random, and then cases
are selected at regular intervals; you need to calculate the sampling fraction = actual sample
size/total population. It works equally well with small and large samples, and is suitable for
geographically dispersed cases if you do not require face-to-face contact when collecting your
data.
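The steps above (compute the sampling fraction, pick a random initial point, then take every k-th case) can be sketched as follows, with a hypothetical frame of 200 and a target sample of 20:

```python
import random

frame = list(range(1, 201))  # hypothetical frame of 200 cases
sample_size = 20

# Sampling fraction = actual sample size / total population = 20/200 = 1/10,
# so we take every 10th case after a random initial sampling point.
interval = len(frame) // sample_size

random.seed(1)
start = random.randrange(interval)  # random start within the first interval
sample = frame[start::interval]

print(len(sample))  # → 20
```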

c) Stratified random sampling


Probability sampling procedure in which the population is divided into two or more relevant strata
and a random sample (systematic or simple) is drawn from each of the strata.
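A sketch of the procedure with hypothetical strata (firm type), drawing a simple random sample of the same fraction from each stratum:

```python
import random

# Hypothetical population with a relevant stratum recorded for each case
population = ([("manufacturing", i) for i in range(120)]
              + [("services", i) for i in range(80)])

def stratified_sample(pop, frac, seed=0):
    # Group cases by stratum, then draw a simple random sample
    # of the same fraction from each stratum.
    rng = random.Random(seed)
    strata = {}
    for stratum, case in pop:
        strata.setdefault(stratum, []).append((stratum, case))
    sample = []
    for cases in strata.values():
        sample.extend(rng.sample(cases, round(len(cases) * frac)))
    return sample

s = stratified_sample(population, 0.10)
print(len(s))  # → 20 (12 manufacturing + 8 services)
```

Sampling the same fraction from every stratum keeps the sample proportionate to the population; disproportionate designs would vary the fraction per stratum.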

d) Cluster sampling
Cluster sampling is the process by which a population is divided into groups or clusters of
elements, with some groups randomly selected for study. The clusters can be based on any
naturally occurring grouping (Saunders, Lewis and Thornhill, 2003 p. 167), e.g. geographical
area or type of manufacturing firm. This technique yields a sample that represents the
total population less accurately than stratified sampling.

e) Double sampling/Multi stage cluster sampling


Double sampling, sequential or multiphase sampling is the process of collecting some
information by sample and then using this information as the basis for selecting a sub-sample
for further study. It is normally used to overcome problems associated with geographically
dispersed populations when face-to-face contact is needed, or where it is expensive and time
consuming to construct a sampling frame for a large geographical area.



Sampling error is the difference between the sample and the population that is due solely
to the particular units that happened to have been selected, e.g. if a sample of 100 women
happens, purely by chance, to contain only individuals taller than six feet. Sampling error can
be caused by chance or by sampling bias.

Sampling bias is the tendency to favour the selection of units that have particular
characteristics. It results from a poor sampling plan.

2. Non-probability sampling
There are several types of non-probability sampling:
a) Convenience/haphazard sampling: involves haphazardly selecting those cases that are
easiest to obtain for your sample, such as people interviewed at random in a
shopping centre for a television programme. You continue until you reach your
desired sample size.
b) Purposive/judgmental sampling: enables you to use your expert judgment to select
cases that will best enable you to answer your research question or meet your objectives.
It works well with small samples, such as case studies, and when you want to select cases
that are particularly informative (Neuman, 2000). Selection rests on the judgment of the
researcher and can target extreme cases, heterogeneity (maximum variation),
homogeneity (maximum similarity), critical cases or typical cases.
c) Quota sampling: ensures that certain groups/characteristics of the population are
represented by the sample chosen.
d) Snowballing: non-probability sampling procedure in which subsequent respondents
are obtained from information provided by initial respondents.

• Sample size – the number of elements selected for study (a subset of the population). To
ensure that the sample accurately represents the population, the researcher must clearly
define the characteristics of the population, determine the required sample size and
choose the best method for selecting members of the sample from the larger population
(Cooper, 2000).

3.8 Selecting Your Sample


Perhaps the most frequently asked question concerning sampling is, "What size sample do I
need?" The answer to this question is influenced by a number of factors, including the
purpose of the study, population size, the risk of selecting a "bad" sample, time available,
budget and the allowable sampling error.

Strategies for determining sample size


There are several approaches to determining the sample size. These include using a census for
small populations, imitating a sample size of similar studies, using published tables, and
applying formulas to calculate a sample size. Each strategy is discussed below (Glenn D.
1992).

a) Using a Census for Small Populations


One approach is to use the entire population as the sample. Although cost considerations
make this impossible for large populations, a census is attractive for small populations (e.g.,
200 or less). A census eliminates sampling error and provides data on all the individuals in the
population. In addition, some costs such as questionnaire design and developing the sampling
frame are "fixed," that is, they will be the same for samples of 50 or 200. Finally, virtually the
entire population would have to be sampled in small populations to achieve a desirable level
of precision.



b) Using a Sample Size of a Similar Study
Another approach is to use the same sample size as in studies similar to the one you
plan. Without reviewing the procedures employed in those studies, however, you run the risk
of repeating errors made in determining their sample sizes. Even so, a review of the literature
in your discipline can provide guidance about the "typical" sample sizes that are used.

c) Using Published Tables


A third way to determine sample size is to rely on published tables, which provide the sample
size for a given set of criteria.

d) Using Formulas to Calculate a Sample Size


Although tables can provide a useful guide for determining the sample size, you may need to
calculate the necessary sample size for a different combination of levels of precision,
confidence, and variability. In some situations, such as experimental research, it is necessary
for you to calculate the precise minimum sample size you require (Saunders, Lewis and
Thornhill, 2003 p. 466). This is based on: how confident you need to be that the estimate is
accurate (the level of confidence in the estimate), and how accurate the estimate needs to be
(the margin of error that can be tolerated).

It also depends on the proportion of responses you expect to have some particular attribute.
You need to collect a pilot sample of about 30 observations and from this infer the likely
proportion for your survey. Once you have all the information, you substitute it into the formula:

n = p% × q% × (z/e%)²

Where n is the minimum sample size required


p% is the proportion belonging to the specified category
q% is the proportion not belonging to the specified category
z is the z value corresponding to the level of confidence required ( see table below)
e% is the margin of error required

Example
To answer a research question you need to estimate the proportion of a total population of
4000 home care clients who receive a visit from their home care assistant at least once a
week. You have been told that you need to be 95% certain that the estimate is accurate (the
level of confidence in the estimate); this corresponds to a z value of 1.96 (see Table 1 below).
You have also been told that the estimate needs to be accurate to within plus or minus 5% of
the true percentage (the margin of error that can be tolerated).
You still need to estimate the proportion of responses that receive a visit from their home care
assistant at least once a week. From your pilot survey you discover that 12 out of the 30
clients receive a visit at least once a week – in other words, 40% belong to the specified
category. This means that 60% do not. These figures can then be substituted into the formula:

Table 1 Levels of confidence and associated Z values


Level of confidence Z value
90% 1.65
95% 1.96
99% 2.57

n = 40 × 60 × (1.96/5)²
  = 2400 × (0.392)²
  = 2400 × 0.154
  = 369.6

Where your population is less than 10,000 a smaller sample size can be used without affecting
the accuracy. This is called adjusted minimum sample size. Calculated as follows
n1 = n / (1 + (n/N))

Where n1 is the adjusted minimum sample size n is the minimum sample size (as calculated)
and N is the total population

As the total population of home care clients is 4000 the adjusted minimum sample size can
now be calculated.

n1 = 369.6 / (1 + (369.6/4000))
   = 369.6 / (1 + 0.092)
   = 369.6 / 1.092
   = 338.46
Because of the small total population, you need a minimum sample of only 339 (338.46
rounded up). However, this assumes a response rate of 100%.
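The two formulas can be checked with a short Python sketch. Note that keeping full precision for (z/e%)² gives n ≈ 368.8 rather than the 369.6 obtained above by rounding 0.392² to 0.154; the adjusted figure still leads to essentially the same minimum sample.

```python
import math

def minimum_sample_size(p_pct, z, e_pct):
    # n = p% * q% * (z/e%)^2, following the formula in the notes
    q_pct = 100 - p_pct
    return p_pct * q_pct * (z / e_pct) ** 2

def adjusted_sample_size(n, population):
    # n1 = n / (1 + n/N), for total populations under 10,000
    return n / (1 + n / population)

n = minimum_sample_size(p_pct=40, z=1.96, e_pct=5)
n1 = adjusted_sample_size(n, population=4000)

print(round(n, 1))    # → 368.8 (full precision; rounding intermediates gives 369.6)
print(math.ceil(n1))  # → 338 responses needed as a minimum
```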

Other considerations
In completing this discussion of determining sample size, there are three additional issues.
First, the above approaches to determining sample size have assumed that a simple random
sample is the sampling design. More complex designs, e.g., stratified random samples, must
take into account the variances of subpopulations, strata, or clusters before an estimate of the
variability in the population as a whole can be made.

Another consideration with sample size is the number needed for the data analysis. If
descriptive statistics are to be used (e.g., means, frequencies), then nearly any sample size
will suffice. On the other hand, a good-sized sample, e.g., 200–500, is needed for multiple
regression, analysis of covariance, or log-linear analysis, which might be performed for more
rigorous state impact evaluations. The sample size should be appropriate for the analysis that
is planned.

In addition, an adjustment in the sample size may be needed to accommodate a comparative


analysis of subgroups (e.g., such as an evaluation of program participants with non-
participants). Sudman (1976) suggests that a minimum of 100 elements is needed for each
major group or subgroup in the sample and for each minor subgroup, a sample of 20 to 50
elements is necessary. Similarly, Kish (1965) says that 30 to 200 elements are sufficient when
the attribute is present 20 to 80 percent of the time (i.e., the distribution approaches
normality). On the other hand, skewed distributions can result in serious departures from
normality even for moderate size samples (Kish, 1965 p.17). Then a larger sample or a census
is required.

Finally, the sample size formulas provide the number of responses that need to be obtained.
Many researchers commonly add 10% to the sample size to compensate for persons that the
researcher is unable to contact. The sample size also is often increased by 30% to compensate
for non-response. Thus, the number of mailed surveys or planned interviews can be
substantially larger than the number required for a desired level of confidence and precision.
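One way to operationalise those two adjustments is to compound them, as sketched below; compounding (rather than simply adding the percentages) is an assumption, since the text does not specify how the two allowances combine.

```python
import math

required = 339          # minimum responses needed (from the worked example above)
contact_failure = 0.10  # add 10% for persons the researcher cannot contact
non_response = 0.30     # add a further 30% for non-response

# Compound the two allowances and round up to a whole questionnaire
to_distribute = math.ceil(required * (1 + contact_failure) * (1 + non_response))
print(to_distribute)  # → 485 questionnaires to send out
```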

3.9 Data Collection Methods and Procedure

3.9.1 Data Collection Methods


In this section, the researcher should describe the major methods for collecting data from the
subjects. The major methods for obtaining data in a study may include:



• Personal Interviews,
• Questionnaires
• Observation techniques
• Focus Groups
Questionnaire and interview as data-gathering tools

Questionnaire

A questionnaire is a means of eliciting the feelings, beliefs, experiences, perceptions, or attitudes


of some sample of individuals. As a data collecting instrument, it could be structured or
unstructured.

The questionnaire is most frequently a very concise, preplanned set of questions designed to yield
specific information to meet a particular need for research information about a pertinent topic. The
research information is obtained from respondents, normally from a related interest area. A
dictionary definition puts it plainly: a questionnaire is a written or printed form used in
gathering information on some subject or subjects, consisting of a list of questions to be submitted
to one or more persons.

Objectives:
1. Must translate the information needed into a set of specific questions that the
respondents can and will answer.
2. Must uplift, motivate and encourage the respondent to become involved in
the interview, to cooperate, and to complete the interview.
3. Should minimize response error

Advantages

Economy - Expense and time involved in training interviewers and sending them to interview are
reduced by using questionnaires.
Uniformity of questions - Each respondent receives the same set of questions phrased in exactly the
same way. Questionnaires may, therefore, yield data more comparable than information obtained
through an interview.
Standardization - If the questions are highly structured and the conditions under which they are
answered are controlled, then the questionnaire could become standardized.

Disadvantages

Respondent’s motivation is difficult to assess, affecting the validity of response.


Unless a random sampling of returns is obtained, those returned completed may represent biased
samples.

Factors affecting the percentage of returned questionnaires

Length of the questionnaire.


Reputation of the sponsoring agency.
Complexity of the questions asked.
Relative importance of the study as determined by the potential respondent.
Extent to which the respondent believes that his responses are important.
Quality and design of the questionnaire.
Time of year the questionnaires are sent out.



The questionnaire is said to be the most "used and abused" method of gathering information,
favoured by the lazy researcher, because it is often poorly organized, vaguely worded, and
excessively lengthy.

Overcoming unwillingness to answer:


a. Provide a list of answer categories
b. Explain the sponsor of the study
c. Explain why the data are needed.
 Place the sensitive topics at the end of the questionnaire.
 Preface the question with a statement that the behaviour of interest is
common. For example: “Recent studies show that most Kenyans live in
debt.”
 Ask the question using third-person technique.
 Hide the question in a group of other questions that respondents are willing
to answer.

Two types of questionnaires

Structured questions /Closed or restricted form - calls for a "yes" or "no" answer, short
response, or item checking; is fairly easy to interpret, tabulate, and summarize. Specify the set
of response alternatives and the response format.

May be multiple-choice or dichotomous.


a. Multiple-choice questions
 The researcher provides a choice of answer categories.
 The categories should be mutually exclusive and exhaustive.
Advantages:
 Interviewer bias is reduced
 Coding and processing of data is less costly
 Suitable particularly for self-administered questionnaires
Disadvantages:
 Requires considerable effort to design
 It is difficult to obtain information on alternatives not listed.
 Have potential for order bias.

b. Dichotomous questions
 A dichotomous question has only two response alternatives, such as yes or
no, or agree or disagree.

Open or unrestricted form - calls for free response from the respondent; allows for greater depth
of response; is difficult to interpret, tabulate, and summarize.

Advantages:
• Enable the respondents to express general attitudes and opinions that can help the
researcher interpret their responses to structured questions.
• Respondents are free to express any views. Hence, are useful in exploratory research.
Disadvantages:
• Potential for interviewer bias is high. The data depend on the skills of the interviewers.
• Coding of responses is costly and time consuming.
• Not very suitable for self-administered questionnaires, respondents tend to be briefer in
writing than in speaking.



Characteristics of a good questionnaire

Deals with a significant topic, a topic the respondent will recognize as important enough to
justify spending his time in completing. The significance should be clearly stated on the
questionnaire or in the accompanying letter.
Seeks only that information which cannot be obtained from other sources such as census data.
As short as possible, only long enough to get the essential data. Long questionnaires frequently
find their way into wastebaskets.
Attractive in appearance, neatly arranged, and clearly duplicated or printed.
Directions are clear and complete, important terms are defined, each question deals with a single
idea, all questions are worded as simply and clearly as possible, and the categories provide an
opportunity for easy, accurate, and unambiguous responses.
Questions are objective, with no leading suggestions to the desired response.
Questions are presented in good psychological order, proceeding from general to more specific
responses. This order helps the respondent to organize his own thinking, so that his answers are
logical and objective. It may be wise to present questions that create a favorable attitude before
proceeding to those that may be a bit delicate or intimate. If possible, annoying or embarrassing
questions should be avoided.
Easy to tabulate and interpret. It is advisable to pre-construct a tabulation sheet, anticipating how
the data will be tabulated and interpreted, before the final form of the question is decided upon.
Working backward from a visualization of the final analysis of data is an important step in
avoiding ambiguity in questionnaire form. If mechanical tabulating equipment is to be used, it is
important to allow code numbers for all possible responses to permit easy transfer to machine-
tabulation cards.

Questionnaire Design process


i. Specification of the information needed
ii. Specification of the type of interviewing method
iii. Determination of the content of individual questions
iv. Designing of the questions
v. Deciding on the question structure
vi. Determining the question wording
vii. Determining the order of questions
viii. Determining questionnaire form and layout
ix. Pre-testing the questionnaire

i. Specification of the Information Needed


1. Review the components of the problem and the approach, particularly:
a. The research questions
b. Hypotheses
2. Prepare a set of dummy tables describing how the analysis will be
structured once the data have been collected.
3. Identify the target population.

Rules for proper construction of a questionnaire

• Determining the Order of Questions. Opening questions should be interesting,
simple, and not threatening. In some instances, they can be used to filter respondents.
Two issues arise with ordering: how the question and answer-choice order can
encourage people to complete your survey, and how the order of questions or the
order of answer choices could affect the results of your survey.



• Effect on subsequent questions. Questions asked early in a sequence can
influence the responses to subsequent questions. As a rule of thumb, general
questions should precede the specific questions.

• Ideally, the early questions in a survey should be easy and pleasant to answer.
These kinds of questions encourage people to continue the survey. In
telephone or personal interviews they help build rapport with the interviewer.

• Difficult questions include those that are sensitive, embarrassing,
complex, or dull. Whenever possible, leave difficult or sensitive questions until near
the end of your survey. Any rapport that has been built up will make it more likely that
people will answer these questions; even if people quit at that point, at least they
will have answered most of your questions.

• On rating scales, higher numbers should mean a more positive or more agreeing answer.

• Keep the questionnaire as short as possible.

• Reassure your respondent that his or her responses will be kept confidential.
• Type of information. The type of information obtained in a questionnaire may
be classified as:

a) Basic information: relates directly to the research problem.


b) Classification information: consisting of socioeconomic and
demographic characteristics
c) Identification information: includes name, address, and telephone
number.

• Include a cover letter with all questionnaires, which includes your name and
telephone number for the respondent to call if they have any questions. Include
instructions on how to complete the questionnaire. The most effective cover letters
and invitations include the following elements: Ask the recipient to take the survey.
Explain why taking it will improve some aspect of the recipient's life (it will help
improve a product, make an organization better meet their needs, make their opinions
heard). Appeal to the recipient's sense of altruism ("please help"). Ask the recipient
again to take the survey.
• You may want to leave a space for the respondent to add their name and title. Some
people will put in their names, making it possible for you to re-contact them for
clarification or follow-up questions.
• Avoid technical terms and acronyms, unless you are absolutely sure that respondents
know what they mean.
• Make sure your questions accept all the possible answers. A question like "Do you
use regular or premium fuel in your car?" does not cover all possible answers.
• Be aware of cultural factors.
• Reproduction of the Questionnaire. The questionnaire should have a
professional appearance. Splitting a question into separate pages should be
avoided. Vertical columns should be used for individual questions. Avoid
crowding the questions to conserve space. Directions or instructions for individual
questions should be placed as close to the question as possible.
• Leave your demographic questions (age, gender, income, education, etc.) until
the end of the questionnaire. By then the interviewer should have built a
rapport with the interviewee that will allow honest responses to such personal
questions.
• Determining Questionnaire Form and Layout. It is good practice to divide the
questionnaire into several parts. The questions in each part should be
numbered, particularly when branching questions are used. Grouping together
questions on the same topic also makes the questionnaire easier to answer.
The questionnaires themselves should be numbered serially.
• Do not have an interviewer ask a respondent's gender, unless they really have
no idea.
• Determining the Question Wording. Guidelines to reduce wording problems:

i) Define the issue: consider who, what, when, where, why,
and way (the 6 Ws).
ii) Use ordinary words: consider the level of education and
understanding of the respondents.
iii) Use unambiguous words: the words used in a questionnaire
should have a single meaning that is known to the
respondents.
iv) Avoid emotionally charged words and leading or
biasing questions that point towards a certain answer. "What
do you think of the XYZ proposal?" is less biasing than
"Don't you agree the XYZ proposal is okay?"

• Leave a space at the end of a questionnaire entitled "Other Comments."


Sometimes respondents offer casual remarks that are worth their weight in
gold and cover some area you did not think of, but which respondents consider
critical.
• Determination of the Content of Individual Questions Is the question
necessary? Consider the contribution of the question to the information needed
or serve some specific purpose.

Source: http://psych.athabascau.ca/html/aupr/tools.shtml

Personal Interview

An interview is a direct face-to-face attempt to obtain reliable and valid measures in the form
of verbal responses from one or more respondents. It is a conversation in which the roles of
the interviewer and the respondent change continually.

Advantages

Allows the interviewer to clarify questions.


Can be used with young children and illiterates.
Allow the informants to respond in any manner they see fit.
Allows the interviewers to observe verbal and non-verbal behavior of the
respondents.
Means of obtaining personal information, attitudes, perceptions, and beliefs.
Reduces anxiety so that potentially threatening topics can be studied.

Disadvantages



Unstructured interviews often yield data too difficult to summarize or evaluate.
Training interviewers, sending them to meet and interview their informants, and
evaluating their effectiveness all add to the cost of the study.

Structured interviews are rigidly standardized and formal.

The same questions are presented in the same manner and order to each subject.
The choice of alternative answers is restricted to a predetermined list.
The same introductory and concluding remarks are used.
They are more scientific in nature than unstructured interviews.
They introduce controls that permit the formulation of scientific generalizations.

Limitation of the structured interviews - Collecting quantified, comparable data from all
subjects in a uniform manner introduces rigidity into the investigative procedures that may
prevent the investigator from probing in sufficient depth.

Unstructured interviews are flexible.

They have few restrictions.


If preplanned questions are asked, they are altered to suit the situation and subjects.
Subjects are encouraged to express their thoughts freely.
Only a few questions are asked to direct their answers.
In some instances, the information is obtained in such a casual manner that the
respondents are not aware they are being interviewed.
Advantages of the unstructured interview:
One can penetrate behind initial answers.
One can follow up unexpected clues.
One can redirect the inquiry into more fruitful channels.
It is very helpful in the exploratory stage of research.
Disadvantages of the unstructured interview:
Difficult to quantify the accumulated qualitative data.
One usually cannot compare data from various interviews and derive generalizations
that are universally applicable because of the non-uniform tactics employed.
Unstructured interviews are not ordinarily employed when testing and verifying
hypotheses.

Factors to be considered before interviewing

Determine when to interview.


Determine if the respondent is telling the truth.
Consideration for sources of bias.

Four specific sources of error

Errors in asking questions occur whenever an inappropriate question is asked where the
response to the question will not satisfy the objectives of the investigation.
Errors in probing occur when the interviewer does not allow the respondent sufficient
time to respond or when he anticipates what the response will be.
Errors in motivating respondents can be a source of invalidity. Unless respondents are
motivated by interviewers to answer questions to the best of their ability, they are likely
to be uncooperative.



Errors in recording responses occur when the interviewer records the respondent’s
answers inaccurately by omitting information.

Evaluation of a Questionnaire or Interview Script

Is the question necessary? How will it be used? What answers will it provide? How will
it be tabulated, analyzed, and interpreted?
Are several questions needed instead of one?
Do the respondents have the information or experience necessary to answer the
questions?
Is the question clear?
Is the question loaded in one direction? Biased? Emotionally toned?
Will the respondents answer the question honestly?
Will the respondents answer the question?
Is the question misleading because of unstated assumptions?
Is the best type of answer solicited?
Is the wording of the question likely to be objectionable to the respondents?
Is a direct or indirect question best?
If a checklist is used, are the possible answers mutually exclusive, or should they be?
If a checklist is used, are the possible answers "exhaustive"?
Is the answer to a question likely to be influenced by preceding questions?
Are the questions in psychological order?
Is the respondent required to make interpretations of quantities or does the respondent
give data which investigator must interpret?

Direct Observation

Direct observation is a measuring instrument used to measure such traits as self-control,


cooperativeness, truthfulness, and honesty. In many cases, systematic direct observation of
behavior is the most desirable measurement method. An investigator identifies the behavior
of interest and devises a systematic procedure for identifying, categorizing, and recording the
behavior in either a natural or "staged" situation.

Five Types of Participant Observation

External Participation constitutes the lowest degree of involvement in observation.


This type of observation can be done by watching situations on television or videotape.
Passive Participation means the researcher is present at the scene of action but does not
interact or participate. The researcher finds an observation post and assumes the role of a
bystander or spectator.
Balanced Participation means that the researcher maintains a balance between being an
insider and being an outsider. The researcher observes and participates in some activities,
but does not participate fully in all activities.
Active Participation means that the researcher generally does what others in the setting
do. While beginning with observation to learn the rules, as they are learned the researcher
becomes actively engaged in the activities of the setting.
Total Participation means the researcher is a natural participant. This is the highest level
of involvement and usually comes about when the researcher studies something in which
he or she is already a natural participant.

Conference



The Conference technique is a face-to-face discussion of a topic of interest.

Experts are brought together at a common site.


The group brainstorms to generate as many ideas on the problem as possible. The only
rule regarding this step is that there are no negative reactions to any suggestions.
The experts then evaluate and rate the suggestions.
The most popular responses are determined, and an arbitrary number is chosen based on
natural breaks or logic.
Finally, the group discusses the strengths and weaknesses of the top suggestions and
ranks the final choices.

One major drawback to this method of data gathering is the influence of personalities as a
strong factor in determining consensus.

Delphi Technique: the Delphi technique is used in the planning process, especially with
appraising the future political, economic, and social environment, ascertaining the role of the
organization in this environment, and anticipating and perceiving the needs and requirements
of client groups.

The Delphi technique is a means of securing expert convergent opinion without bringing the
experts together in face-to-face confrontation. This opinion of experts is usually gained
through the use of successive questionnaires and feedback with each round of questions being
designed to produce more carefully considered group opinions.

Procedure

A questionnaire is mailed to respondents who remain anonymous to one another. The first
questionnaire may call for a list of opinions involving experienced judgment, a list of
predictions, or a list of recommended activities.
On the second round, each expert receives a copy of the list and is asked to rate or
evaluate each item by some such criterion as importance, probability of success, etc.
The third questionnaire, which includes the list and ratings, indicates the consensus, if
any, and asks the experts either to revise their opinion or specify their reasons for
remaining outside the consensus.
The fourth questionnaire includes lists, ratings, consensus, and minority opinions. It
provides the final chance for revision of opinions.

Advantages

It allows planners to get the views in a broad perspective rather than from an isolated
point of view.
Delphi in combination with other tools is a very potent device for teaching people to
think about the future of education in much more complex ways than they ordinarily
would.
It is a useful instrument even for a general teaching strategy.
It is a planning tool, which may aid in probing priorities held by members and
constituencies of an organization.
Delphi saves time and travel, which are required to bring people together for a
conference.
Delphi prevents personality biases from affecting the results.

Disadvantages



Interpretation of the participants’ responses and the meaning of the importance of the
factors in planning are difficult.
It is unknown how the findings can be generalized to Delphis which cover a 30 year
extension into the future.
Delphi at present can render no rigorous distinction between reasonable judgment and
mere guessing.
It is difficult to determine the amount of bias injected into the results by the person
administering the Delphi.

Nominal Group Technique

The committee chairman reiterates that the role of everyone present is to contribute his
perceptions, expertise, and experience to the identification of priority problems. He
emphasizes that the purpose is to identify and describe priority problems and needs. He
indicates that each member is to work individually without interacting verbally with each
other. The committee chairman distributes the problem or need identification form to the
members and asks them to respond in writing to the questions or statement on the form, giving
an example of the kind of response.

Without discussion, silently and independently, each member lists on the form the
problems and/or needs for approximately 10 minutes. The chairman enforces silence by
requesting that those who have stopped writing not talk with others but instead think more
deeply about other possible items.
The recorder now asks each member to state an item from his or her list. The items are
recorded until all lists have been included. Discussion of items is not allowed and no
concern is given to overlap of items at this time. However, members are encouraged to
generate new ideas on their forms based on items presented by other group members.
The group now discusses for approximately 20 minutes the items listed for the purpose of
clarification, elaboration, combination of items, or addition of new items.
Without discussion and acting independently, each group member should select and
prioritize the items believed to be most critical. The recorder asks for each member to
give the items selected in priority order. The most critical items are determined by the
total amount of interest and votes. Discussion of voting and priority items continues until
a consensus is reached.

Focus Groups Discussion (FGD)

A focus group is an organized discussion session. A panel of people meets for a short duration
to exchange ideas, feelings, and experiences on a specific topic. A trained facilitator, using
group dynamics principles, guides participants through the meeting. Increasingly used in
social and business research, focus group meetings enable a researcher to gain much
information in a relatively short period of time (Morgan & Kruger 1993).

Focus groups have been a mainstay in private sector marketing research for the past three
decades. More recently, public sector organizations are beginning to discover the potential of
these procedures. Educational and nonprofit organizations have traditionally used face-to-face
interviews and questionnaires to get information. Unfortunately, these popular techniques are
sometimes inadequate in meeting information needs of decision makers. The focus group is
unique from these other procedures; it allows for group interaction and greater insight into
why certain opinions are held. Focus groups can improve the planning and design of new
programs, provide means of evaluating existing programs, and produce insights for
developing marketing strategies.

Characteristics



Involve people. It must be small enough for everyone to have opportunity to share
insights and yet large enough to provide diversity of perceptions. Focus groups are
typically composed of 6 to 10 people, but the size can range from as few as 4 to as many
as 12.
Conducted in series. Multiple groups with similar participants are needed to detect
patterns and trends across groups.
Possess certain characteristics. Participants are reasonably homogeneous and unfamiliar
with each other.
Provide data. Focus groups pay attention to the perceptions of the users and consumers of
solutions, products, and services. They are not intended to develop consensus, to arrive at
an agreeable plan, or to make decisions about which course of action to take.
Produce qualitative data.

Advantages

It is a socially oriented research procedure.


The format allows the moderator to probe.
Discussions have high face validity.
Discussions can be relatively low cost.
The format can provide speedy results.
Focus groups enable the researcher to increase the sample size of qualitative studies.

Limitations

The researcher has less control in the group interview as compared to the individual
interview.
Data are more difficult to analyze.
The technique requires carefully trained interviewers.
Groups can vary considerably.
Groups are difficult to assemble
The discussion must be conducted in an environment conducive to conversation.

Types of Focus Group Questions

Opening Question. This is the round robin question that everyone answers at the
beginning of the focus group. It is designed to be answered rather quickly (within 10-20
seconds) and to identify characteristics that the participants have in common. Usually it is
preferable for these questions to be factual as opposed to attitude- or opinion-based
questions.
Introductory Questions. These questions introduce the general topic of discussion and/or
provide participants an opportunity to reflect on past experiences and their connection
with the overall topic. Usually these questions are not critical to the analysis and are
intended to foster conversation and interaction among the participants.
Transition Questions. These move the conversation into the key questions that drive the
study. The transition questions help the participants envision the topic in a broader scope.
They serve as the logical link between the introductory questions and the key questions.
The participants are becoming aware of how others view the topic.
Key Questions. These questions drive the study. Typically, there are two to five questions
in this category. These are usually the first questions to be developed and also the ones
that require the greatest attention in the subsequent analysis.
Ending Questions. These questions bring closure to the discussion, enable participants to
reflect back on previous comments, and are critical to analysis. These questions can be of
three types:

Telephone Surveys- surveying by telephone is the most popular interviewing method.

Advantages

• People can usually be contacted faster over the telephone than with other methods. If
the Interviewers are using CATI (computer-assisted telephone interviewing), the
results can be available minutes after completing the last interview.
• You can dial random telephone numbers when you do not have the actual telephone
numbers of potential respondents.
• CATI software, such as The Survey System, makes complex questionnaires practical
by offering many logic options. It can automatically skip questions, perform
calculations and modify questions based on the answers to earlier questions. It can
check the logical consistency of answers and can present questions or answer
choices in a random order (the last two are sometimes important for reasons described
later).
• Skilled interviewers can often elicit longer or more complete answers than people will
give on their own to mail or email surveys (though some people will give longer
answers to Web page surveys). Interviewers can also ask for clarification of unclear
responses.
• Some software, such as The Survey System, can combine survey answers with pre-
existing information you have about the people being interviewed.
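The automatic skip logic described above can be sketched in a few lines. This is a hypothetical illustration, not how The Survey System actually works; the questions and branching rules are invented:

```python
# A minimal sketch of CATI-style skip logic: each question names a
# follow-up that depends on the answer given, so irrelevant questions
# are skipped automatically. Question texts and rules are hypothetical.

def run_interview(answers):
    """Walk a branching questionnaire; `answers` maps question id -> reply."""
    script = {
        "q1": ("Do you own a car?", lambda a: "q2" if a == "yes" else "q3"),
        "q2": ("How many miles do you drive per week?", lambda a: "q3"),
        "q3": ("What is your age group?", lambda a: None),  # end of script
    }
    asked = []
    current = "q1"
    while current is not None:
        text, next_rule = script[current]
        asked.append(current)
        current = next_rule(answers[current])
    return asked

# A respondent without a car skips the mileage question entirely.
print(run_interview({"q1": "no", "q3": "25-34"}))
print(run_interview({"q1": "yes", "q2": "100", "q3": "25-34"}))
```

The same table-driven idea extends to the calculations and consistency checks mentioned above: each rule is just a function of earlier answers.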

Disadvantages

• Many telemarketers have given legitimate research a bad name by claiming to be


doing research when they start a sales call. Consequently, many people are reluctant
to answer phone interviews and use their answering machines to screen calls.
• The growing number of working women often means that no one is at home during
the day.
• You cannot show or sample products by phone.

Mail Surveys

Advantages

• Mail surveys are among the least expensive.


• This is the only kind of survey you can do if you have the names and addresses of the
target population, but not their telephone numbers.
• The questionnaire can include pictures - something that is not possible over the
phone.
• Mail surveys allow the respondent to answer at their leisure, rather than at the often-
inconvenient moment they are contacted for a phone or personal interview. For this
reason, they are not considered as intrusive as other kinds of interviews.

Disadvantages

• Time: mail surveys take longer than other kinds. You will need to wait several weeks
after mailing out questionnaires before you can be sure that you have gotten most of
the responses.
• In populations of lower educational and literacy levels, response rates to mail surveys
are often too small to be useful.



Computer Direct Interviews

These are interviews in which the Interviewees enter their own answers directly into a
computer. They can be used at malls, trade shows, offices, and so on. The Survey System's
optional Interviewing Module and Interview Stations can easily create computer-direct
interviews. Some researchers set up a Web page survey for this purpose.

Advantages

• The virtual elimination of data entry and editing costs.


• You will get more accurate answers to sensitive questions.
• The elimination of interviewer bias. Different interviewers can ask questions in
different ways, leading to different results. The computer asks the questions the same
way every time.
• Response rates are usually higher. Computer-aided interviewing is still novel enough
that some people will answer a computer interview when they would not have
completed another kind of interview.

Disadvantages

• The Interviewees must have access to a computer or one must be provided for them.
• As with mail surveys, computer direct interviews may have serious response rate
problems in populations of lower educational and literacy levels. This method may
grow in importance as computer use increases.

Email Surveys

Email surveys are both very economical and very fast. More people have email than have full
Internet access. This makes email a better choice than a Web page survey for some
populations. On the other hand, email surveys are limited to simple questionnaires, whereas
Web page surveys can include complex logic.

Advantages

• Speed. An email questionnaire can gather several thousand responses within a day or
two.
• There is practically no cost involved once the set up has been completed.
• You can attach pictures and sound files.
• The novelty element of an email survey often stimulates higher response levels than
ordinary “snail” mail surveys.

Disadvantages

• You must possess (or purchase) a list of email addresses.


• Some people will respond several times or pass questionnaires along to friends to
answer. Many programs have no check to eliminate people responding multiple times
to bias the results.
• Many people dislike unsolicited email even more than unsolicited regular mail. You
may want to send email questionnaires only to people who expect to get email from
you.
• You cannot use email surveys to generalize findings to the whole population. People
who have email are different from those who do not, even when matched on
demographic characteristics, such as age and gender.



• Email surveys cannot automatically skip questions or randomize question or answer
choice order or use other automatic techniques that can enhance surveys the way Web
page surveys can.

Summary of Survey Methods

Your choice of survey method will depend on several factors. These include:

Speed: Email and Web page surveys are the fastest methods, followed by telephone
interviewing. Mail surveys are the slowest.
Cost: Personal interviews are the most expensive, followed by telephone and then mail.
Email and Web page surveys are the least expensive for large samples.
Internet Usage: Web page and email surveys offer significant advantages, but you may not
be able to generalize their results to the population as a whole.
Literacy Levels: Illiterate and less-educated people rarely respond to mail surveys.
Sensitive Questions: People are more likely to answer sensitive questions when interviewed
directly by a computer in one form or another.

3.9.2 Research Procedure


A detailed description of the steps taken in the administration of the data collection tools
should be provided for the purposes of replicability. The researcher should be aware of social
desirability bias [the tendency for people to give answers that they believe (consciously or
unconsciously) will make them look good rather than those that are accurate, e.g. people tend
to under-report their participation in activities that others might disapprove of, such as drunk
driving]. The researcher should also be prepared to deal with people who respond "I do not
know". The researcher should provide a complete account of the research process including
• Pilot testing,
• Scheduling of the subjects or participants,
• Distribution and collection of the instruments
• Timing of interviews or questionnaire

3.10 Data Analysis Methods


The researcher should identify and describe appropriate data analysis methods for the study.

• Quantitative approaches in terms of descriptive statistics or inferential statistics


should be described.
• Descriptive statistics include frequencies, measures of central tendency (mean,
median or mode) and measures of dispersion (standard deviation, range or variance).
• Inferential statistics involve measurement of relationships and differences between or
among the variables. Inferential statistics include correlation, regression and analysis
of variance, among others.
• Data presentation methods in terms of tables, graphs or charts should also be
described in this section.
• Qualitative data should be summarized and categorized according to common themes
and presented in frequency distribution tables.
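As a rough sketch of the descriptive statistics listed above, Python's standard library can compute each measure directly; the sample data here are invented:

```python
import statistics

# Hypothetical sample data, e.g. weekly sales figures from a pilot study.
data = [12, 15, 15, 18, 20, 22, 22, 22, 25, 30]

mean = statistics.mean(data)       # measures of central tendency
median = statistics.median(data)
mode = statistics.mode(data)
stdev = statistics.stdev(data)     # measures of dispersion
variance = statistics.variance(data)
data_range = max(data) - min(data)

print(f"mean={mean}, median={median}, mode={mode}")
print(f"stdev={stdev:.2f}, variance={variance:.2f}, range={data_range}")
```

Frequencies for the frequency distribution tables mentioned above can be obtained the same way with `collections.Counter(data)`.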

The word "statistics" is used in several different senses. In the broadest sense, "statistics"
refers to a range of techniques and procedures for analyzing data, interpreting data, displaying
data, and making decisions based on data. In a second usage, a "statistic" is defined as a
numerical quantity (such as the mean) calculated in a sample. Such statistics are used to
estimate parameters.

What is data Analysis?

Data Analysis is defined as the whole process which starts immediately after data collection
and ends at the point of interpretation of the results (Obure 2002).

The whole process includes data sorting, data editing, data coding, data entry, data cleaning,
data processing and interpretation of the results.

1. Data Sorting-involves the rearrangement of the collected data/questionnaires to bring


some order allowing systematic handling. It’s the beginning of detection, correction
and avoidance of errors occurring as a result of mix-ups.
2. Data Editing-Involves reading through the filled questionnaires (primary data),
records to spot any inconsistencies and/or errors, which occurred during data
collection. The objective is to identify problems with the instruments owing to
apparent misunderstanding by the enumerators or the respondents, and to detect any
questionnaires that may have been faked by the enumerators. It also helps to correct
mistakes that may have occurred due to a slip of the pen. It makes the task of
coding easier and helps in achieving reliable results.
3. Data Coding - the process of creating dummy variable names (short names assigned to
each study variable). These dummy variables are in turn assigned numeric values that
can be processed or understood by SPSS for Windows. The codes allow the researcher
to minimize errors during data entry and processing and provide easy interpretation
of results.
4. Data Entry-the actual keying of data according to the assigned codes. It requires a
high degree of keenness and patience. You need to keep the principle of Garbage In,
Garbage Out (GIGO) in mind.
5. Data Cleaning - involves conducting a final check on the data file for accuracy,
erroneous data, completeness and consistency. It enables you to avoid going back to
the original questionnaires too many times to correct errors when you are in the
middle of analysis.
6. Data Processing - the prepared data are subjected to the SPSS processor, which then
manipulates/computes/processes the data and outputs results. You must decide on the
best statistical tool to be used in your hypothesis testing.
7. Interpretation of Results - deriving some understanding from the output relative to
the subject matter of the research. It is from the derived understanding that
conclusions are made.
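Steps 3 to 5 above can be illustrated with a small sketch. The codebook and responses are hypothetical, and SPSS would normally do this work; the point is how coding and cleaning catch errors before analysis:

```python
# A small sketch of data coding and cleaning for a hypothetical
# questionnaire item "level of education", coded 1-4. Answers that do
# not match the codebook are flagged for cleaning, not silently kept.

CODEBOOK = {"primary": 1, "secondary": 2, "diploma": 3, "degree": 4}

def code_responses(raw_responses):
    """Turn verbatim answers into numeric codes; unknown answers become None."""
    return [CODEBOOK.get(r.strip().lower()) for r in raw_responses]

def clean(coded):
    """Report positions needing a check against the original questionnaire."""
    return [i for i, v in enumerate(coded) if v is None]

raw = ["Degree", "secondary", "colege", "primary"]   # "colege" is a slip of the pen
coded = code_responses(raw)
print(coded)          # [4, 2, None, 1]
print(clean(coded))   # [2] -> go back to serially numbered questionnaire 3
```

Because the questionnaires are serially numbered (as recommended earlier), the flagged position leads straight back to the paper record that needs re-checking.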

Hypothesis testing is a statistical inferential procedure in which a statement based on some


experimental or observational study is formulated, tested, and then put through a decision
process. The decision process either accepts or rejects the statement. It is the null hypothesis
that is actually tested, not the research hypothesis. The object of the test is to see whether the
null hypothesis should be rejected or accepted. If the null hypothesis is rejected, that is taken
as evidence in favor of the research hypothesis, which is called the alternative hypothesis
(denoted by Ha). In usual practice we do not say that the research hypothesis has been
"proved", only that it has been supported.

More precisely, the hypothesis testing procedure can be broken down into five steps:

1. Formulation/specification of a (hypothetical) statement. The hypothetical statement
formed is called the null hypothesis (Ho). Accompanying the null hypothesis is
the alternative hypothesis (Ha).
2. An appropriate statistical technique is selected to test the hypothesis, e.g. correlation
studies, regression analysis, the chi-square test, the T-test or the F-test.
3. Calculating the value of the test statistic and performing the test. This is usually the
most mathematical part of the procedure: an appropriate test statistic is applied using
values obtained from the study. There are many test statistics, depending on the
nature of the study.
4. Deciding whether to accept or reject the statement. Once the value of the test statistic
is obtained, it is used to find the corresponding probability of obtaining such a statistic
given that Ho is true. When a statement (whether the null hypothesis or the
alternative hypothesis) is accepted, it merely says that, statistically, there is not
enough evidence to reject the statement. Acceptance of a hypothetical statement does
not prove that the underlying statement is true.
5. Stating your conclusion from the perspective of the original research problem or
questions.
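The five steps can be walked through for a one-sample t-test using only the standard library. The IQ scores are invented, and 2.262 is the standard two-tailed 5% critical t-value for 9 degrees of freedom:

```python
import math
import statistics

# A sketch of the five hypothesis-testing steps for a one-sample t-test.
scores = [102, 98, 110, 105, 95, 108, 101, 99, 112, 104]  # hypothetical sample

# Step 1: Ho: mean IQ = 100; Ha: mean IQ != 100
mu0 = 100

# Step 2: technique chosen - the one-sample t-test

# Step 3: calculate the test statistic t = (xbar - mu0) / (s / sqrt(n))
n = len(scores)
xbar = statistics.mean(scores)
s = statistics.stdev(scores)
t = (xbar - mu0) / (s / math.sqrt(n))

# Step 4: decide - reject Ho if |t| exceeds the critical value
t_critical = 2.262  # two-tailed, alpha = .05, df = 9
reject = abs(t) > t_critical

# Step 5: state the conclusion in terms of the research question
print(f"t = {t:.3f}, reject Ho: {reject}")
```

Here |t| falls short of the critical value, so by step 4 there is not enough evidence to reject Ho; note that this does not prove the mean is 100.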

Level of significance- In normal English, "significant" means important, while in Statistics


"significant" means probably true (not due to chance). A research finding may be true without
being important. When statisticians say a result is "highly significant" they mean it is very
probably true. They do not (necessarily) mean it is highly important.

Significance levels show you how likely a result is due to chance. The most common level,
used to mean something is good enough to be believed, is .95. This means that the finding has
a 95% chance of being true. However, this value is also used in a misleading way. No
statistical package will show you "95%" or ".95" to indicate this level. Instead it will show
you ".05," meaning that the finding has a five percent (.05) chance of not being true, which is
the converse of a 95% chance of being true. To find the significance level, subtract the
number shown from one. For example, a value of ".01" means that there is a 99% (1 - .01 = .99)
chance of it being true.

Confidence limits: The limits (or range) within which the hypothesis should lie with
specified probabilities are called the confidence limits or fiducial limits. It is customary to
take these limits as 5% or 1% levels of significance. If the sample value lies between the
confidence limits, the hypothesis is accepted; if it does not, the hypothesis is rejected at the
specified level of significance.



Errors in Testing Of Hypothesis

In testing any hypothesis, we get only two results: either we accept or we reject it. We do not
know whether it is true or false. Hence four possibilities may arise.

1. The hypothesis is true but test rejects it (Type I error)


2. The hypothesis is false but test accepts it (Type II error)
3. The hypothesis is true and test accepts it (correct decision)
4. The hypothesis is false and test rejects it (correct decision)

In a statistical hypothesis testing experiment, a Type I error (alpha) is committed when the
null hypothesis is rejected though it is true.

A Type II error (beta) is committed by not rejecting (i.e. accepting) the null hypothesis, when
it is false.
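The meaning of a Type I error can be checked by simulation: when Ho is true, a test at the 5% level should wrongly reject it in about 5% of repeated samples. A sketch using only the standard library (the test and data are illustrative):

```python
import random
import statistics

# Simulate the Type I error rate: Ho "mean = 0" is true by construction,
# so a two-tailed 5% level z-test should reject in roughly 5% of samples.
random.seed(42)

def z_test_rejects(sample, mu0=0.0, z_critical=1.96):
    """Two-tailed z-test at the 5% level (population sd known to be 1)."""
    n = len(sample)
    z = (statistics.mean(sample) - mu0) / (1 / n ** 0.5)
    return abs(z) > z_critical

trials = 5000
rejections = sum(
    z_test_rejects([random.gauss(0, 1) for _ in range(30)])
    for _ in range(trials)
)
print(f"Type I error rate ~ {rejections / trials:.3f}")  # close to alpha = .05
```

The observed rejection rate hovers around .05, which is exactly what "alpha" promises; a Type II error rate would instead be estimated by simulating samples where Ho is false.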

Tests for testing of Hypothesis

1. Correlation studies- to determine the extent of the relationship between two or


more variables. The coefficient of correlation can be positive or negative.
Correlation analysis is a statistical technique that evaluates the
relationship between two variables; i.e., how closely they match each other in
terms of their individual mathematical change. The question addressed is: if one
variable (X) moves or changes in a certain direction does the second variable (Y)
also move or change in a similar or complementary direction? Correlation
Coefficient- is a measure of the strength of the linear relationship between two
variables x and y.
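The correlation coefficient can be computed by hand from its definition; the x and y values below are invented to show a strong positive relationship:

```python
import math

# Pearson's correlation coefficient computed from its definition for two
# hypothetical variables: hours of training (x) and units sold (y).
x = [2, 4, 5, 7, 8, 10]
y = [20, 30, 32, 45, 50, 60]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sx = math.sqrt(sum((xi - mean_x) ** 2 for xi in x))
sy = math.sqrt(sum((yi - mean_y) ** 2 for yi in y))

r = cov / (sx * sy)   # r near +1: strong positive linear relationship
print(f"r = {r:.3f}")
```

An r near +1 means Y moves in the same direction as X almost perfectly linearly; a value near -1 would mean a complementary (opposite) movement, and a value near 0 little linear relationship at all.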

2. Regression Analysis- this is the process of estimating the coefficients of a linear
equation involving one or more independent variables that best predict the value
of the dependent variable. For example, you can try to predict a salesperson's total
yearly sales (the dependent variable) from independent variables such as age,
education and years of experience. The goal of regression analysis is to determine
the values of the parameters of a function that cause the function to best fit a set
of data observations that you provide. In linear regression, the function is a linear
(straight-line) equation. For example, if we assume the value of an automobile
decreases by a constant amount each year after its purchase, and by a constant
amount for each mile it is driven, the following linear function would predict its
value (the dependent variable on the left side of the equals sign) as a function of
the two independent variables, age and miles:

Value = price + dep_age*age + dep_miles*miles

where Value, the dependent variable, is the value of the car, age is the age of the
car, and miles is the number of miles the car has been driven. A regression
analysis (performed, for instance, with a package such as NLREG) will determine
the best values of the three parameters: price, the estimated value when age is 0
(i.e. when the car was new); dep_age, the depreciation that takes place each year;
and dep_miles, the depreciation for each mile driven. The values of dep_age and
dep_miles will be negative because the car loses value as age and miles increase.
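The car-depreciation regression can be sketched in Python using ordinary least squares (numpy.linalg.lstsq) rather than the NLREG package mentioned in the text; the data are synthetic, constructed so that the fitted parameters match the assumed ones:

```python
import numpy as np

# Synthetic data following Value = price + dep_age*age + dep_miles*miles
age   = np.array([1, 2, 3, 4, 5], dtype=float)       # years since purchase
miles = np.array([12, 25, 30, 48, 60], dtype=float)  # thousands of miles driven
value = 20000 - 1500 * age - 100 * miles             # "true" car values

# Design matrix: a column of ones (for price) plus the two predictors.
X = np.column_stack([np.ones_like(age), age, miles])
coef, *_ = np.linalg.lstsq(X, value, rcond=None)
price, dep_age, dep_miles = coef
print(price, dep_age, dep_miles)
```

Because the synthetic data are exactly linear, the estimates recover price = 20000, dep_age = -1500 and dep_miles = -100; with real data the fit would only approximate such values, and dep_age and dep_miles would still come out negative.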
3. Chi-square test- compares the observed and expected frequencies in each
category to test either that all categories contain the same proportion of values or
that each category contains a user-specified proportion of values. For example,
chi-square can be used to test whether a bag of jelly beans contains equal
proportions of blue, brown, green and red beans. You could also test whether the
bag contains 50% blue, 20% brown, 20% green and 10% red beans.
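The jelly-bean example can be sketched with scipy.stats.chisquare; the observed counts below are made up for illustration:

```python
from scipy import stats

# Observed colour counts in a hypothetical bag of 100 jelly beans, tested
# against the hypothesised proportions 50% blue, 20% brown, 20% green, 10% red.
observed = [48, 22, 19, 11]
expected = [50, 20, 20, 10]   # counts implied by the proportions

chi2, p_value = stats.chisquare(observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
```

A large p-value means the observed counts are consistent with the hypothesised proportions; a small one means they are not.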



4. One sample T-test- tests whether the mean of a single variable differs from a
specified constant. For example, a researcher might want to test whether the
average I.Q. score for a group of students differs from 100.
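A sketch of the I.Q. example using scipy.stats.ttest_1samp, with hypothetical scores:

```python
from scipy import stats

# Hypothetical I.Q. scores for a group of ten students;
# H0: the population mean is 100.
iq_scores = [102, 98, 110, 105, 95, 108, 112, 99, 104, 107]

t_stat, p_value = stats.ttest_1samp(iq_scores, popmean=100)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

A small p-value would lead the researcher to reject the hypothesis that the group's mean I.Q. is 100.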
5. Independent samples T-test- compares the means of two groups of cases. Ideally,
for this test, the subjects should be randomly assigned to the two groups, so that
any difference is due to the treatment (or lack of treatment) and not to other
factors. This is not the case if you compare average income for males and
females, because persons are not randomly assigned to be male or female.
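A sketch of the independent samples test with scipy.stats.ttest_ind, using made-up scores for subjects randomly assigned to a treatment or a control group:

```python
from scipy import stats

# Hypothetical scores for randomly assigned treatment and control groups.
treatment = [23, 25, 28, 30, 27, 26]
control   = [18, 20, 19, 22, 21, 17]

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```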
6. Paired samples T-test- compares the means of two variables for a single group. It
computes the difference between the values of the two variables for each case and
tests whether the average difference differs from zero. For example, in a study of
high blood pressure, all patients are measured at the beginning of the study, given
a treatment, and measured again.
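The blood pressure example can be sketched with scipy.stats.ttest_rel; the before/after readings below are hypothetical:

```python
from scipy import stats

# Blood pressure for the same six patients before and after treatment;
# H0: the mean difference is zero.
before = [150, 160, 145, 155, 170, 165]
after  = [142, 151, 140, 148, 160, 158]

t_stat, p_value = stats.ttest_rel(before, after)
print(f"t = {t_stat:.3f}, p = {p_value:.5f}")
```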
7. F-Test- an F-test (Snedecor and Cochran, 1983) is used to test whether the
standard deviations of two populations are equal, i.e. whether two samples drawn
from different populations have the same standard deviation, at a specified
confidence level. The samples may be of different sizes. The test can be two-tailed
or one-tailed. The two-tailed version tests against the alternative that the
standard deviations are not equal. The one-tailed version tests in only one
direction, that is, that the standard deviation of the first population is either
greater than or less than (but not both) the standard deviation of the second
population. The choice is determined by the problem. For example, if we are
testing a new process, we may only be interested in knowing whether the new
process is less variable than the old process.
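A sketch of the one-tailed F-test for the new-process example, computed directly as the ratio of sample variances and compared against the F distribution via scipy.stats.f; the measurements are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical measurements from an old and a new production process.
old_process = [10.2, 9.8, 11.5, 8.9, 10.7, 9.4, 11.1, 10.0]
new_process = [10.1, 10.0, 10.3, 9.9, 10.2, 10.0, 9.8, 10.1]

# F statistic: ratio of the two sample variances.
s1_sq = np.var(old_process, ddof=1)
s2_sq = np.var(new_process, ddof=1)
F = s1_sq / s2_sq
df1, df2 = len(old_process) - 1, len(new_process) - 1

# One-tailed p-value: is the new process LESS variable than the old one?
p_value = stats.f.sf(F, df1, df2)
print(f"F = {F:.2f}, p = {p_value:.4f}")
```

A small p-value here supports the conclusion that the new process is less variable than the old one.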



CHAPTER FOUR
DATA ANALYSIS AND PRESENTATION OF RESULTS

4.1 Introduction

Introduce us to what we expect to find in the chapter. In addition, tell us how you have
organized the chapter.

4.2 Analysis of the Response Rate


Tell us how many questionnaires you issued, how many were returned, and what the
response rate is (also insert a table).

4.3 Analysis of the Background Information


This section will enable you to analyze the background information collected in the
questionnaire, i.e. gender, level of education, marital status, etc.

4.4 Quantitative Analysis


This section is used to analyze and present the results of the closed-ended questions in
the questionnaire; organize it based on your objectives in chapter one.

4.5 Qualitative Analysis


This section is used to analyze and present the results of the open-ended questions in
the questionnaire by capturing their common answers through content analysis. Also
organize this section based on your objectives in chapter one.

4.6 Chapter Summary


Bring out a summary of the chapter, pointing out the key issues from your data
analysis.



CHAPTER FIVE
SUMMARY, CONCLUSIONS AND RECOMMENDATIONS

5.1 Introduction

Introduce us to the chapter. For example: this chapter presents a summary of the findings,
answers to the research questions, and conclusions drawn from the data analysis performed in
chapter four. Implications drawn from the study and its impact on policy and professional
practice are then discussed, and the study ends with the recommendations reached.

5.2 Summary of Findings

You are supposed to provide a preview of the main purpose (i.e. the general objective or the
problem) of the study and the specific objectives that directed the study, then provide a
summary of the findings from chapter four, supporting them with statistics from the data
analysis. Organize this section based on your specific objectives, such that each objective
forms a paragraph of its own.

5.3 Answers to Research Questions/ Discussion

This section serves to present the specific discussion about each of the research
questions as a first step in addressing the research problem. The research questions
first developed in chapter one are re-stated and answers provided from chapter four
(Data Analysis and Presentation of Results). The answers are discussed by comparing
them with the literature reviewed in chapter two at the proposal stage.

5.4 Conclusions

Conclusions should be drawn about each of the specific objectives, based on the answers
provided above to the research questions, i.e. on the research findings.

5.5 Recommendations
In this section, the implications of the study for policy, professional practice and future
research are considered in turn. The recommendations are based on the conclusions and are
divided into two:
1. Recommendations for policy and practice. Provide a recommendation for each specific
objective.
2. Recommendations for theory development and future research.



REFERENCES

Adcock, R. & Collier, D. (2001). Measurement validity: A shared standard for qualitative and
quantitative research. American Political Science Review 95: 529-546.

Campbell, D. T. & Stanley, J. C. (1963a). Experimental and quasi-experimental designs for research
on teaching. Chicago: Rand McNally, 171-246. (The seminal article on types of validity.)

Campbell, D. T. & Stanley, J. C. (1963b). Experimental and quasi-experimental designs for research.
Chicago: Rand McNally.

Campbell, D.T. & Fiske, D.W. (1959). Convergent and discriminant validation by the multi-trait-multi-
method matrix. Psychological Bulletin, 56, 81-105.

Carmines, E. G. & Zeller, R. A. (1979): Reliability and validity assessment. Quantitative Applications
in the Social Sciences series 07-017, Newbury Park, CA: Sage Publications.

Cook, T. D. & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field
settings. Boston: Houghton Mifflin. (Chapter 2 is a classic statement of types of validity.)


Duffy, M. E. (1987). Methodological triangulation: A vehicle for merging quantitative and
qualitative research methods. IMAGE: Journal of Nursing Scholarship, 19(3), 130-133.

http://www.georgetown.edu/departments/psychology/researchmethods/writing/webresearch.htm

Krefting, L. (1991). Rigor in qualitative research: The assessment of trustworthiness. The American
Journal of Occupational Therapy, 45(3), 214-222.

Mady, D. (1982). Benefits of Qualitative and Quantitative methods in program evaluation, with
illustrations. Educational Evaluation and Policy Analysis, 4, 223-236.

Merriam, S.B. (1988). Case study research in education: A qualitative approach. San Francisco: Jossey-
Bass, p. 18.

Patten, M. L. (2002). Understanding research methods: An overview of the essentials (3rd ed.).
Los Angeles: Pyrczak Publishing.

Pyrczak, F. (2002). Success at statistics: A worktext with humor (2nd ed.). Los Angeles: Pyrczak
Publishing.

Shadish, W. R., Cook, T. D. & Leviton, L. C. (1991). Foundations of program evaluation: Theories of
practice. Newbury Park, CA: Sage Publications.

Solso, R. L., Johnson, H. H. & Beal, M. K. (1998). Experimental psychology: A case approach
(6th ed.). New York: Longman.

Trochim, W. (1989). An Introduction to Concept Mapping for Planning and Evaluation. Evaluation and
Program Planning, 12, 1-16.

Wolcott, H.R. (1990). Qualitative inquiry in education: The continuing debate.

