
UNIT I

1. MARKETING RESEARCH

Def: Marketing research is a key element within the total field of marketing information. It links
the consumer, customer and public to the marketer through information which is used to identify
and define marketing opportunities and problems; to generate, refine and evaluate marketing
actions; and to improve understanding of marketing as a process and of the ways in which
specific marketing activities can be made more effective.

Marketing Research Process:


2. MARKETING INFORMATION SYSTEM

A. Components of a marketing information system

A marketing information system (MIS) is intended to bring together disparate items of data into
a coherent body of information. An MIS is, as will shortly be seen, more than raw data or
information suitable for the purposes of decision making. An MIS also provides methods for
interpreting the information the MIS provides. Moreover, as Kotler's definition says, an MIS is
more than a system of data collection or a set of information technologies:

"A marketing information system is a continuing and interacting structure of people, equipment
and procedures to gather, sort, analyse, evaluate, and distribute pertinent, timely and accurate
information for use by marketing decision makers to improve their marketing planning,
implementation, and control".

Figure 9.1 illustrates the major components of an MIS, the environmental factors monitored by
the system and the types of marketing decision which the MIS seeks to underpin.

Figure 9.1 The marketing information system and its subsystems

The explanation of this model of an MIS begins with a description of each of its four main
constituent parts: the internal reporting systems, marketing research system, marketing
intelligence system and marketing models. It is suggested that whilst the MIS varies in its degree
of sophistication - with many in the industrialised countries being computerised and few in the
developing countries being so - a fully fledged MIS should have these components, whatever the
methods (and technologies) used for collecting, storing, retrieving and processing data.

B. Internal reporting systems: All enterprises which have been in operation for any period
of time have a wealth of information. However, this information often remains under-
utilised because it is compartmentalised, either in the form of an individual entrepreneur
or in the functional departments of larger businesses. That is, information is usually
categorised according to its nature so that there are, for example, financial, production,
manpower, marketing, stockholding and logistical data. Often the entrepreneur, or
various personnel working in the functional departments holding these pieces of data, do
not see how it could help decision makers in other functional areas. Similarly, decision
makers can fail to appreciate how information from other functional areas might help
them and therefore do not request it.

The internal records that are of immediate value to marketing decisions are: orders received,
stockholdings and sales invoices. These are but a few of the internal records that can be used by
marketing managers, but even this small set of records is capable of generating a great deal of
information. Below is a list of some of the information that can be derived from sales invoices.

Product type, size and pack type by territory


Product type, size and pack type by type of account
Product type, size and pack type by industry
Product type, size and pack type by customer
Average value and/or volume of sale by territory
Average value and/or volume of sale by type of account
Average value and/or volume of sale by industry
Average value and/or volume of sale by sales person

By comparing orders received with invoices an enterprise can establish the extent to which it is
providing an acceptable level of customer service. In the same way, comparing stockholding
records with orders received helps an enterprise ascertain whether its stocks are in line with
current demand patterns.
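A minimal sketch of how the invoice-based information listed above might be derived, assuming the invoices are held as simple records with territory, account-type and value fields (the field names and figures are illustrative, not a prescribed format):

from collections import defaultdict

# Illustrative invoice records; the fields are assumptions, not a prescribed format.
invoices = [
    {"territory": "North", "product": "Tea 250g", "account_type": "Retailer", "value": 1200.0},
    {"territory": "North", "product": "Tea 500g", "account_type": "Wholesaler", "value": 5400.0},
    {"territory": "South", "product": "Tea 250g", "account_type": "Retailer", "value": 800.0},
]

def average_sale_value_by(invoices, key):
    """Average invoice value grouped by any field, e.g. territory or account_type."""
    totals, counts = defaultdict(float), defaultdict(int)
    for inv in invoices:
        totals[inv[key]] += inv["value"]
        counts[inv[key]] += 1
    return {k: totals[k] / counts[k] for k in totals}

print(average_sale_value_by(invoices, "territory"))      # average value of sale by territory
print(average_sale_value_by(invoices, "account_type"))   # average value of sale by type of account

The same grouping function can be pointed at product type, industry or sales person to produce the other breakdowns in the list above.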

C. Marketing research systems: The general topic of marketing research has been the
prime subject of the textbook and only a little more needs to be added here. Marketing
research is a proactive search for information. That is, the enterprise which commissions
these studies does so to solve a perceived marketing problem. In many cases, data is
collected in a purposeful way to address a well-defined problem (or a problem which can
be defined and solved within the course of the study). The other form of marketing
research centres not around a specific marketing problem but is an attempt to
continuously monitor the marketing environment. These monitoring or tracking exercises
are continuous marketing research studies, often involving panels of farmers, consumers
or distributors from which the same data is collected at regular intervals. Whilst the ad
hoc study and continuous marketing research differ in orientation, they are both
proactive.
D. Marketing intelligence systems: Whereas marketing research is focused, market
intelligence is not. A marketing intelligence system is a set of procedures and data
sources used by marketing managers to sift information from the environment that they
can use in their decision making. This scanning of the economic and business
environment can be undertaken in a variety of ways, including
Unfocused scanning: The manager, by virtue of what he/she reads, hears and watches, exposes
him/herself to information that may prove useful. Whilst the behaviour is unfocused and the
manager has no specific purpose in mind, it is not unintentional.

Semi-focused scanning: Again, the manager is not searching for particular pieces of information,
but does narrow the range of media that is scanned. For instance, the manager may focus more on
economic and business publications, broadcasts etc. and pay less attention to political, scientific
or technological media.

Informal search: This describes the situation where a fairly limited and unstructured attempt is
made to obtain information for a specific purpose. For example, the marketing manager of a firm
considering entering the business of importing frozen fish from a neighbouring country may make
informal inquiries as to prices and demand levels of frozen and fresh fish. There would be little
structure to this search, with the manager making inquiries with traders he/she happens to
encounter as well as with other ad hoc contacts in ministries, international aid agencies, trade
associations, importers/exporters etc.

Formal search: This is a purposeful search for information in some systematic way. The
information will be required to address a specific issue. Whilst this sort of activity may seem to
share the characteristics of marketing research, it is carried out by the manager him/herself rather
than a professional researcher. Moreover, the search is likely to be narrower in scope and far less
intensive than marketing research.

Marketing intelligence is the province of entrepreneurs and senior managers within an
agribusiness. It involves them in scanning newspapers, trade magazines, business journals and
reports, economic forecasts and other media. In addition it involves management in talking to
producers, suppliers and customers, as well as to competitors. Nonetheless, it is a largely
informal process of observing and conversing.

Some enterprises will approach marketing intelligence gathering in a more deliberate fashion and
will train their sales force, after-sales personnel and district/area managers to take cognisance of
competitors' actions, customer complaints and requests and distributor problems. Enterprises
with vision will also encourage intermediaries, such as collectors, retailers, traders and other
middlemen to be proactive in conveying market intelligence back to them.

E. Marketing models: Within the MIS there has to be the means of interpreting information
in order to give direction to decisions. These models may or may not be computerised.
Typical tools are:
Time series sales models
Brand switching models
Linear programming
Elasticity models (price, incomes, demand, supply, etc.)
Regression and correlation models
Analysis of Variance (ANOVA) models
Sensitivity analysis
Discounted cash flow
Spreadsheet 'what if models

These and similar mathematical, statistical, econometric and financial models are the analytical
subsystem of the MIS. A relatively modest investment in a desktop computer is enough to allow
an enterprise to automate the analysis of its data. Some of the models used are stochastic, i.e.
those containing a probabilistic element whereas others are deterministic models where chance
plays no part. Brand switching models are stochastic since these express brand choices in
probabilities whereas linear programming is deterministic in that the relationships between
variables are expressed in exact mathematical terms.
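As a sketch of the stochastic case, a simple brand-switching (Markov) model can be written as a matrix of switching probabilities; the brands, probabilities and market shares below are purely illustrative assumptions:

# Brand-switching model: rows are the brand bought this period,
# columns the probability of buying each brand next period.
brands = ["A", "B", "C"]
switch = [
    [0.70, 0.20, 0.10],   # current buyers of A
    [0.15, 0.75, 0.10],   # current buyers of B
    [0.10, 0.30, 0.60],   # current buyers of C
]
shares = [0.40, 0.35, 0.25]  # current market shares (assumed)

def next_period_shares(shares, switch):
    """Project market shares one period ahead from the switching probabilities."""
    return [sum(shares[i] * switch[i][j] for i in range(len(shares)))
            for j in range(len(shares))]

print(dict(zip(brands, next_period_shares(shares, switch))))

Applying the function repeatedly projects shares further ahead, which is how such models are typically used to explore the longer-run effect of switching behaviour.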

3. MARKETING DECISION SUPPORT SYSTEM

A marketing decision support system (sometimes abbreviated MKDSS) is a decision support
system for marketing activity. It consists of information technology, marketing data and
modeling capabilities that enable the system to provide predicted outcomes from different
scenarios and marketing strategies, so answering "what if?" questions.

An MKDSS is used to support a software vendor's planning strategy for marketing products. It
can help to identify advantageous levels of pricing, advertising spending and advertising copy
for the firm's products. This helps determine the firm's marketing mix for its software products.
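A minimal "what if?" sketch in this spirit, assuming a simple hypothetical response model in which sales depend on price and advertising spend; the functional form and coefficients are invented for illustration, not estimated from data:

def predicted_sales(price, ad_spend, base=10_000, price_elasticity=-1.5, ad_effect=0.08):
    """Toy response model: sales fall as price rises and rise (with diminishing
    returns) as advertising spend rises. All parameters are assumptions."""
    reference_price = 100.0
    return base * (price / reference_price) ** price_elasticity + ad_effect * ad_spend ** 0.5 * 100

# Answer "what if?" questions by comparing scenarios.
for price, ad in [(100, 50_000), (90, 50_000), (100, 100_000)]:
    print(f"price={price}, ad_spend={ad} -> predicted sales {predicted_sales(price, ad):,.0f}")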

Components of a Marketing Decision Support System

Figure 3.3: Components of MDSS


As shown in Figure 3.3, the components of a decision support system include:

· Database

· Report and display

· Analysis

· Models

Database is a collection of data organized to service many applications at the same time by
storing and managing data so that they appear to be in one location.

A marketing database contains information on customers and their characteristics. A database
helps the marketer predict the future preferences of the customers from the past data.

Contents of database
Contents of databases are:
· Identification of each customer through code.

· Name of the organisation.

· Address and postal code, e-mail ID.

· Time-period when the transaction was carried out.

· Amount in rupees (volume of transaction).

Benefits of database marketing

· Retention of the customer: It should be remembered that an organization needs to spend five
times more to acquire a new customer than to retain an existing one. Generally, it is an accepted
fact that 20% of the customers are responsible for 80% of the business. Therefore, maintaining an
excellent relationship with customers becomes imperative.

· Estimate the value for lifetime of a customer: Each customer, when valued, is an asset to the
organisation. If a cell phone subscriber pays Rs. 300 per month, he is worth well over a lakh of
rupees (roughly Rs 1.1 to 1.4 lakh before any discounting) assuming he continues with the same
service provider for 30 to 40 years; a small computational sketch follows below.
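A small sketch of this lifetime-value arithmetic, assuming a flat monthly bill and an optional annual discount rate (all figures are illustrative):

def lifetime_value(monthly_payment, years, annual_discount_rate=0.0):
    """Undiscounted or discounted value of a customer paying a flat monthly amount.
    The figures used below are illustrative, not taken from the text."""
    value = 0.0
    for year in range(years):
        value += (monthly_payment * 12) / ((1 + annual_discount_rate) ** year)
    return value

print(lifetime_value(300, 30))          # Rs 108,000 undiscounted over 30 years
print(lifetime_value(300, 40))          # Rs 144,000 undiscounted over 40 years
print(lifetime_value(300, 40, 0.08))    # considerably less once future rupees are discounted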

Report and display: A report consists of tables, charts, graphs and other graphic displays. It also
presents important inferences about a particular product, company, market, etc.

Analysis: Calculations such as averages, percentage changes, seasonal changes and the statistical
procedures used are all part of analysis.

Models: Models represent assumptions about how the market really works, for example how
brand sales respond to changes in the marketing mix. Strategy models are used to test alternative
marketing programmes. Models also help in setting objectives.

Characteristics of a Marketing Decision Support System


A good MDSS should have the following characteristics:

· Interactive: The process of interaction with the MDSS should be simple and direct. With just a
few commands the user should be able to obtain the results immediately. There should be no
need for a programmer in between.

· Flexible: A good MDSS should be flexible. It should be able to present the available data in
either discrete or aggregate form. It should satisfy the information needs of the managers in
different hierarchical levels and functions.

· Discovery oriented: The MDSS should not only assist managers in solving the existing
problems but should also help them to probe for trends and ask new questions. The managers
should be able to discover new patterns and be able to act on them using the MDSS.

· User friendly: The MDSS should be user friendly. It should be easy for the managers to learn
and use the system. It should not take hours just to figure out what is going on. Most MDSS
packages are menu driven and are easy to operate.

Levels of Marketing Decision Support System


There are three levels of MDSS:

1. Level 1: Data Management: Data management is a process of data acquisition, storage and
retrieval. This consists of Tools, Database, Database Management System (DBMS), Query
facilities, Report writers, Document and image management system.

2. Level 2: Data Analysis: Data analysis involves finding and analyzing relationships between
variables. It comprises basic data analysis tools, spreadsheets, what-if analysis, goal-seeking
analysis, graphical tools, statistical tools, etc.

3. Level 3: Decision Analysis: Decision analysis is a procedure of prioritization and choice
among various alternatives. It addresses both qualitative and quantitative issues, for example
sales growth, market share, market position, customer satisfaction, etc.

Types of Marketing Decision Support System


The various types of MDSS are:

· Model-driven MDSS
· Data-driven MDSS
· Communications-driven MDSS
· Document-driven MDSS
A model-driven MDSS emphasizes access to and manipulation of financial, optimization and/or
simulation models. It supports what-if analysis, goal-seeking analysis and sensitivity analysis.
In general, a data-driven MDSS emphasizes access to and manipulation of a time-series of
internal company data and sometimes external and real-time data. Simple file systems accessed
by query and retrieval tools provide the most elementary level of functionality.

Communications-driven MDSS use network and communications technologies to facilitate
decision-relevant collaboration and communication. In these systems, communication
technologies are the dominant architectural component. Tools used include groupware, video
conferencing and computer-based bulletin boards.

A document-driven MDSS uses computer storage and processing technologies to provide
document retrieval and analysis.

Knowledge-driven DSS can suggest or recommend actions to managers. These DSS are person-
computer systems with specialized problem-solving expertise. The “expertise” consists of
knowledge about a particular domain, understanding of problems within that domain, and “skill”
at solving some of these problems. These systems have been called suggestion DSS and
knowledge-based DSS.

Apart from these, the World Wide Web and the Internet have provided a technology platform for
further extending the capabilities and deployment of computerized decision support. Power (1998)
defined a Web-based decision support system as a computerized system that delivers decision
support information or decision support tools to a manager or business analyst using a Web
browser.

Activity 2:
If you were a research manager in a telecom organization, what would be
your sources for collecting information from the market?

Self Assessment Questions


6. MDSS is a coordinated collection of ____________, __________, ___________ and
__________ with supporting software and hardware.

7. _________, ___________, __________ and __________ are the components of a decision
support system.

8. Database _________ and ____________ customer information.

9. A model-driven MDSS emphasizes access to and manipulation of __________, ___________
and/or simulation models.

10. A data-driven MDSS emphasizes __________ and _______ a time-series of data.


UNIT II

Research Design
A research design is a framework or blueprint for conducting a marketing research project. It
details the procedures necessary for obtaining the information needed to structure or solve
marketing research problems. Although a broad approach to the problem has already been
developed, the research design specifies the details – the practical aspects – of implementing that
approach. A research design lays the foundation for conducting the project. A good research
design will ensure that the marketing research project is conducted effectively and efficiently.

Classification of Research Designs:

1) Exploratory designs
2) Conclusive designs
3) Causal Research Design or Causative Research Design
Causal research is used to obtain evidence of cause-and-effect (causal) relationships. Marketing
managers continually make decisions based on assumed causal relationships. These assumptions
may not be justifiable, and the validity of the causal relationships should be examined via formal
research. For example, the common assumption that a decrease in price will lead to increased
sales and market share does not hold in certain competitive environments.

Causal research is appropriate for the following purposes:


1 To understand which variables are the cause (independent variables) and which variables
are the effect (dependent variables) of marketing phenomena.
2 To determine the nature of the relationship between the causal variables and the effect
to be predicted.
3 To test hypotheses.
4) Experimental Research Design
Experimental research designs are used for the controlled testing of causal processes. The general
procedure is that one or more independent variables are manipulated to determine their effect on
a dependent variable. These designs can be used where:
There is time priority in a causal relationship (cause precedes effect),
There is consistency in a causal relationship (a cause will always lead to the same effect), and
The magnitude of the correlation is great.
The most common applications of these designs in marketing research and experimental
economics are test markets and purchase labs. The techniques are commonly used in other social
sciences including sociology, psychology, and social work.

Types of Experimental Research Design


In an attempt to control for extraneous factors, several experimental research designs have been
developed, including:

 Classical pretest-posttest - The total population of participants is randomly divided into
two samples: the control sample and the experimental sample. Only the experimental sample
is exposed to the manipulated variable. The researcher compares the pretest results with the
posttest results for both samples. Any divergence between the two samples is assumed to be
a result of the experiment (see the sketch after this list).
 Solomon four-group design - The sample is randomly divided into four groups. Two of the
groups are experimental samples. Two groups experience no experimental manipulation of
variables. Two groups receive a pretest and a posttest. Two groups receive only a posttest.
This is an improvement over the classical design because it controls for the effect of the
pretest.
 Factorial design - This is similar to a classical design except that additional samples are used.
Each group is exposed to a different experimental manipulation.
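A minimal sketch of how the classical pretest-posttest comparison might be analysed, assuming the pretest and posttest scores for the control and experimental samples are already available; the scores are invented, and comparing the two mean changes (a difference-in-changes) is one simple way to estimate the effect:

# Mean pretest-to-posttest change in each sample; the experimental effect is
# estimated as the difference between the two changes.
control_pre,   control_post   = [50, 52, 48, 51, 49], [51, 53, 49, 52, 50]
treatment_pre, treatment_post = [50, 49, 51, 52, 48], [57, 55, 58, 60, 54]

def mean(xs):
    return sum(xs) / len(xs)

control_change   = mean(control_post)   - mean(control_pre)
treatment_change = mean(treatment_post) - mean(treatment_pre)
print("estimated experimental effect:", treatment_change - control_change)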

Sources and Methods of gathering Marketing Information

The search for answers to research questions calls for the collection of data. Data are facts, figures
and other relevant materials, past and present, serving as bases for study and analysis.

Types of Data

The data needed for a social science research may be broadly classified into (a) Data pertaining
to human beings, (b) Data relating to organisations, and (c) Data pertaining to territorial areas.

Personal data or data related to human beings consist of Demographic and socio-economic
characteristics of individuals like age, sex, race, social class, religion, marital status, education,
occupation, income, family size, location of the household, life style, etc. and Behavioural
variables like attitudes, opinions, awareness, knowledge, practice, intentions, etc.
Organisational data consist of data relating to an organisation’s origin, ownership, objectives,
resources, functions, performance and growth.

Territorial data relate to the geophysical characteristics, resource endowment, population,
occupational pattern, infrastructure, economic structure, degree of development, etc. of spatial
divisions like villages, cities, Tabias, Woredas, states/regions and the nation.

Importance of data

The data serve as the bases or raw materials for analysis. Without an analysis of factual data, no
specific inferences can be drawn on the questions under study. Inferences based on imagination
or guesswork cannot provide correct answers to research questions. The relevance, adequacy and
reliability of data determine the quality of the findings of a study.

Data form the basis for testing the hypotheses formulated in a study. Data also provide the facts
and figures required for constructing measurement scales and tables, which are analysed with
statistical techniques. Inferences on the results of statistical analysis and tests of significance
provide the answers to research questions. Thus the scientific process of measurement, analysis,
testing and inference depends on the availability of relevant data and their accuracy. Hence the
importance of data for any research study.

SOURCES OF DATA

The sources of data may be classified into (a) primary sources and (b) secondary sources.

Primary Sources

Primary sources are original sources from which the researcher directly collects data that have
not been previously collected, e.g., collection of data directly by the researcher on brand
awareness, brand preference, brand loyalty and other aspects of consumer behaviour from a
sample of consumers by interviewing them. Primary data are first-hand information collected
through various methods such as observation, interviewing, mailing etc.

Secondary Sources

These are sources containing data that have been collected and compiled for another purpose.
The secondary sources consist of readily available compendia and already compiled statistical
statements and reports whose data may be used by researchers for their studies, e.g., census
reports, annual reports and financial statements of companies, statistical statements, reports of
government departments, annual reports on currency and finance published by the National
Bank of Ethiopia, statistical statements relating to cooperatives, the Federal Cooperative
Commission, commercial banks and microfinance credit institutions published by the
National Bank of Ethiopia, reports of the National Sample Survey Organisation, reports of
trade associations, publications of international organisations such as the UNO, IMF, World Bank,
ILO, WHO, etc., trade and financial journals, newspapers, etc.
Secondary sources consist of not only published records and reports, but also unpublished
records. The latter category includes various records and registers maintained by firms and
organisations, e.g., accounting and financial records, personnel records, register of members,
minutes of meetings, inventory records, etc.

Features of Secondary Sources: Though secondary sources are diverse and consist of all sorts of
materials, they have certain common characteristics.

First, they are readymade and readily available, and do not require the trouble of constructing
tools and administering them.

Second, they consist of data over which the researcher has no control over the original collection
and classification. Others shape both the form and the content of secondary sources. Clearly, this
is a feature which can limit the research value of secondary sources.

Finally, secondary sources are not limited in time and space. That is, the researcher using them
need not have been present when and where they were gathered.

USE OF SECONDARY DATA

Uses

The secondary data may be used in three ways by a researcher. First, some specific information
from secondary sources may be used for reference purposes.

Second, secondary data may be used as benchmarks against which the findings of a research
study may be tested.

Finally, secondary data may be used as the sole source of information for a research project.
Such studies as Securities Market Behaviour, Financial Analysis of Companies, and Trends in
credit allocation in commercial banks, Sociological Studies on crimes, historical studies, and the
like depend primarily on secondary data. Year books, Statistical reports of government
departments, reports of public organisations like Bureau of Public Enterprises, Census Reports
etc. serve as major data sources for such research studies.

Advantages

1. Secondary data, if available, can be secured quickly and cheaply.


2. Wider geographical area and longer reference period may be covered without much cost.
Thus the use of secondary data extends the researcher's space and time reach.
3. The use of secondary data broadens the database from which scientific generalizations
can be made.
4. The use of secondary data enables a researcher to verify the findings based on primary
data.

Disadvantages/limitations
1. The most important limitation is that the available data may not meet our specific research
needs.
2. The available data may not be as accurate as desired.
3. The secondary data may not be up to date and may become obsolete by the time they appear
in print, because of the time lag in producing them.
4. Finally, information about the whereabouts of sources may not be available to all social
scientists.

METHODS OF COLLECTING PRIMARY DATA: GENERAL

The researcher directly collects primary data from their original sources. In this case, the
researcher can collect the required data precisely according to his research needs; he can collect
them when he wants them and in the form he needs them. But the collection of primary data is
costly and time consuming. Yet, for several types of social science research such as socio-
economic surveys, social anthropological studies of rural communities and tribal communities,
sociological studies of social problems and social institutions, marketing research, leadership
studies, opinion polls, attitudinal surveys, readership, radio listening and T.V. viewing surveys,
knowledge-awareness practice (KAP) studies, farm management studies, business management
studies, etc., required data are not available from secondary sources and they have to be directly
gathered from the primary sources.

In all cases where the available data are inappropriate, inadequate or obsolete, primary data have
to be gathered.

Methods of Primary Data Collection

There are various methods of data collection. A ‘Method’ is different from a ‘Tool’. While a
method refers to the way or mode of gathering data, a tool is an instrument used for the method.
For example, a schedule is used for interviewing. The important methods are (a) observation, (b)
interviewing, (c) mail survey, (d) experimentation, (e) simulation, and (f) projective technique.

Observation involves gathering data relating to the selected research topic by viewing and/or
listening. Interviewing involves face-to-face conversation between the investigator and the
respondent. Mailing is used for collecting data by getting questionnaires completed by
respondents. Experimentation involves a study of independent variables under controlled
conditions. Experiments may be conducted in a laboratory or in the field in a natural setting.
Simulation involves creation of an artificial situation similar to the actual life situation.
Projective methods aim at drawing inferences about the characteristics of respondents by
presenting stimuli to them. Each method has its advantages and disadvantages.

Choice of Methods of Data Collection

Which of the above methods of data collection should be selected for a proposed research
project? This is one of the questions to be considered while designing the research plan. One or
more methods have to be chosen. No method is universal. Each method's unique features
should be compared with the needs and conditions of the study, and the choice of methods
decided accordingly.

OBSERVATION

Meaning and Importance

Observation means viewing or seeing. We go on observing something or other while we are
awake. Most of such observations are just casual and have no specific purpose. But observation
as a method of data collection is different from such casual viewing.

Observation may be defined as a systematic viewing of a specific phenomenon in its proper
setting for the specific purpose of gathering data for a particular study. Observation as a method
includes both 'seeing' and 'hearing'. It is accompanied by perceiving as well.

Observation also plays a major role in formulating and testing hypotheses in the social sciences.
Behavioural scientists observe interactions in small groups; anthropologists observe simple
societies and small communities; political scientists observe the behaviour of political leaders
and political institutions.

Types of Observation

Observation may be classified in different ways. With reference to the investigator’s role, it may
be classified into (a) participant observation, and (b) non-participant observation. In terms of
mode of observation, it may be classified into (c) direct observation, and (d) indirect observation.
With reference to the rigour of the system adopted, observation is classified into (e) controlled
observation, and (f) uncontrolled observation.

EXPERIMENTATION

Experimentation is a research 'process' used to study the causal relationships between variables.
It aims at studying the effect of an independent variable on a dependent variable, by keeping the
other independent variables constant through some type of control. For example, a social
scientist may use experimentation to study the effect of a method of family planning
publicity on people's awareness of family planning techniques.

Why Experiment?

Experimentation requires special efforts. It is often extremely difficult to design, and it is also a
time-consuming process. Why then should one take such trouble? Why not simply
observe/survey the phenomenon? The fundamental weakness of any non-experimental study is
its inability to specify cause and effect. It can show only correlations between variables, but
correlations alone never prove causation. The experiment is the only method which can show the
effect of an independent variable on a dependent variable. In experimentation, the researcher can
manipulate the independent variable and measure its effect on the dependent variable. For
example, the effect of various types of promotional strategies on the sale of a given product can
be studied by using different advertising media such as TV, radio and newspapers. Moreover,
the experiment provides "the opportunity to vary the treatment (experimental variable) in a
systematic manner, thus allowing for the isolation and precise specification of important
differences."

Applications

The applications of the experimental method are the 'laboratory experiment' and the 'field experiment'.

SIMULATION

Meaning

Simulation is one of the forms of observational methods. It is a process of conducting
experiments on a symbolic model representing a phenomenon. Abelson defines simulation as
"the exercise of a flexible imitation of processes and outcomes for the purpose of clarifying or
explaining the underlying mechanisms involved." It is a symbolic abstraction, simplification
and substitution for some referent system. In other words, simulation is a theoretical model of the
elements, relations and processes which symbolize some referent system, e.g., the flow of money
in the economic system may be simulated in an operating model consisting of a set of pipes
through which liquid moves. Simulation is thus a technique of performing sampling
experiments on a model of the system. The experiments are done on the model instead of on
the real system, because the latter would be too inconvenient and expensive.

Simulation is a recent research technique; but it has deep roots in history. Chess has often been
considered a simulation of medieval warfare.

INTERVIEWING

Definition

Interviewing is one of the major methods of data collection. It may be defined as a two-way
systematic conversation between an investigator and an informant, initiated for obtaining
information relevant to a specific study.

It involves not only conversation, but also learning from the respondents’ gestures, facial
expressions and pauses, and his environment. Interviewing requires face-to-face contact or
contact over telephone and calls for interviewing skills. It is done by using a structured schedule
or an unstructured guide.

Importance

Interviewing may be used either as a main method or as a supplementary one in studies of persons.


Interviewing is the only suitable method for gathering information from illiterate or less educated
respondents. It is useful for collecting a wide range of data, from factual demographic data to
highly personal and intimate information relating to a person's opinions, attitudes, values, beliefs,
past experience and future intentions. When qualitative information is required, or probing is
necessary to draw out responses fully, interviewing is required. Where the area covered by the
survey is compact, or where a sufficient number of qualified interviewers is available, personal
interviews are feasible.

Interview is often superior to other data-gathering methods. People are usually more willing to
talk than to write. Once rapport is established, even confidential information may be obtained. It
permits probing into the context and reasons for answers to questions.

The interview can add flesh to statistical information. It enables the investigator to grasp the
behavioural context of the data furnished by the respondents. It permits the investigator to seek
clarifications and brings to the forefront those questions that, for one reason or another,
respondents do not want to answer.

Types of Interviews

The interviews may be classified into: (a) structured or directive interview, (b) unstructured or
non-directive interview, (c) focused interview, (d) clinical interview and (e) depth
interview.

Telephone Interviewing

Telephone interviewing is a non-personal method of data collection.

Group Interviews

Group interview may be defined as a method of collecting primary data in which a number of
individuals with a common interest interact with each other. Unlike in a personal interview, the
flow of information here is multidimensional.

Interviewing Process

The interviewing process consists of the following stages:

 Preparation.
 Introduction
 Developing rapport
 Carrying the interview forward
 Recording the interview, and
 Closing the interview

PANEL METHOD

The panel method is a method of data collection by which data is collected from the same
sample respondents at intervals, either by mail or by personal interview. This is used for
longitudinal studies on economic conditions, expenditure patterns, consumer behaviour,
recreational patterns, effectiveness of advertising, voting behaviour, and so on. The period over
which the panel members are contacted for information may spread over several months or
years. The time interval at which they are contacted repeatedly may be 10 or 15 days, or one or
two months depending on the nature of the study and the memory span of the respondents.

Characteristics

The basic characteristic of the panel method is successive collection of data on the same items
from the same persons over a period of time. The type of information to be collected should be
such facts that can be accurately and completely furnished by the respondent without any
reservation. The number of items should be as few as possible so that they can be furnished
within a few minutes, especially when a mail survey is adopted. The average amount of time that a
panel member has to spend each time for reporting can be determined in a pilot study. The panel
method requires carefully selected and well-trained field workers and effective supervision over
their work.

Types of Panels

The panel may be static or dynamic. A static or continuous panel is one in which the membership
remains the same throughout the life of the panel, except for the members who drop out. The
dropouts are not replaced.

MAIL SURVEY

Definition

The mail survey is another method of collecting primary data. This method involves sending
questionnaires to the respondents with a request to complete them and return them by post. This
can be used in the case of educated respondents only. The mail questionnaire should be simple so
that the respondents can easily understand the questions and answer them. It should preferably
contain mostly closed-ended and multiple-choice questions so that it can be completed within a
few minutes.

The distinctive feature of the mail survey is that the questionnaire is self-administered by the
respondents themselves and the responses are recorded by them, and not by the investigator as in
the case of personal interview method. It does not involve face-to-face conversation between the
investigator and the respondent. Communication is carried out only in writing and this requires
more cooperation from the respondents than does verbal communication.

Alternative modes of sending questionnaires

There are some alternative methods of distributing questionnaires to the respondents. They are:
(1) personal delivery, (2) attaching the questionnaire to a product, (3) advertising the questionnaire
in a newspaper or magazine, and (4) newsstand inserts.

PROJECTIVE TECHNIQUES
The direct methods of data collection, viz., personal interview, telephone interview and mail
survey rely on respondents' own report of their behaviour, beliefs, attitudes, etc. But respondents
may be unwilling to discuss controversial issues or to reveal intimate information about
themselves or may be reluctant to express their true views fearing that they are generally
disapproved. In order to overcome these limitations, indirect methods have been developed.
Projective Techniques are such indirect methods. They became popular during the 1950s as a part
of motivation research.

Meaning

Projective techniques involve the presentation of ambiguous stimuli to the respondents for
interpretation. In doing so, the respondents reveal their inner characteristics. The stimuli may be
a picture, a photograph, an inkblot or an incomplete sentence. The basic assumption of projective
techniques is that a person projects his own thoughts, ideas and attributes when he perceives and
responds to ambiguous or unstructured stimulus materials. Thus a person's unconscious
operations of the mind are brought to a conscious level in a disguised and projected form, and the
person projects his inner characteristics.

Types of Projective Techniques

Projective Techniques may be divided into three broad categories: (a) visual projective
techniques (b) verbal projective techniques, and (c) Expressive techniques.

SOCIOMETRY

Sociometry is "a method for discovering, describing and evaluating social status, structure, and
development through measuring the extent of acceptance or rejection between individuals in
groups." Franz defines sociometry as "a method used for the discovery and manipulation of
social configurations by measuring the attractions and repulsions between individuals in a
group." It is a means for studying the choice, communication and interaction patterns of
individuals in a group. It is concerned with attractions and repulsions between individuals in a
group. In this method, a person is asked to choose one or more persons according to specified
criteria, in order to find out the person or persons with whom he would like to associate.

Sociometry Test

The basic technique in sociometry is the “sociometric test.” This is a test under which each
member of a group is asked to choose from all other members those with whom he prefers to
associate in a specific situation. The situation must be a real one to the group under study, e.g.,
'group study', 'play', 'class room seating' for students of a public school.

A specific number of choices to be allowed, say two or three, is determined with reference to the
size of the group, and different levels of preference are designated for each choice.

Suppose we desire to find out the likes and dislikes of persons in a work group consisting of 8
persons. Each person is asked to select 3 persons, in order of preference, with whom he would like
to work on a group assignment. The levels of choices are designated as: the first choice by the
number 1, the second by 2, and the third by 3.
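A small sketch of how such sociometric choices might be tabulated, assuming each of the 8 persons names three colleagues ranked 1 (first choice) to 3; the names and choices are invented:

# Each person's ranked choices (1 = first choice, 2 = second, 3 = third).
choices = {
    "P1": {"P2": 1, "P4": 2, "P7": 3},
    "P2": {"P1": 1, "P3": 2, "P4": 3},
    "P3": {"P2": 1, "P4": 2, "P5": 3},
    "P4": {"P2": 1, "P1": 2, "P6": 3},
    "P5": {"P3": 1, "P4": 2, "P8": 3},
    "P6": {"P4": 1, "P2": 2, "P7": 3},
    "P7": {"P1": 1, "P2": 2, "P6": 3},
    "P8": {"P5": 1, "P4": 2, "P3": 3},
}

# Count how often each person is chosen (a simple index of sociometric status).
times_chosen = {p: 0 for p in choices}
for chooser, ranked in choices.items():
    for chosen in ranked:
        times_chosen[chosen] += 1

for person, n in sorted(times_chosen.items(), key=lambda kv: -kv[1]):
    print(person, "chosen by", n, "of the other", len(choices) - 1, "members")

The counts give a simple index of sociometric status; the ranks themselves could be weighted if finer distinctions between first, second and third choices were wanted.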

Sampling Concepts

Population:
The aggregate of all the elements, sharing some common set of characteristics, that comprise the
universe for the purpose of the marketing research problem.

Census:
A complete enumeration of the elements of a population or study objects.

Sample:
A subgroup of the elements of the population selected for participation in the study.

Sampling Design Process


Sampling design begins by defining the target population in terms of elements, sampling units,
extent and time. Then the sampling frame should be determined. A sampling frame is a
representation of the elements of the target population. It consists of a list of directions for
identifying the target population. At this stage, it is important to recognise any sampling frame
errors that may exist. The next step involves selecting a sampling technique and determining the
sample size. In addition to quantitative analysis, several qualitative considerations should be
taken into account in determining the sample size. Execution of the sampling process requires
detailed specifications for each step in the sampling process. Finally, the selected sample should
be validated by comparing characteristics of the sample with known characteristics of the target
population.
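As a small sketch of the execution step, assuming a simple random sampling technique and a sampling frame held as a list of customer identifiers (both are assumptions for illustration; the text does not prescribe a particular technique):

import random

sampling_frame = [f"CUST{i:04d}" for i in range(1, 1001)]  # hypothetical frame of 1,000 elements
sample_size = 50

random.seed(42)                                        # fixed seed so the draw is reproducible
sample = random.sample(sampling_frame, sample_size)    # simple random sampling without replacement
print(len(sample), sample[:5])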
Sample size:
It refers to the number of elements to be included in the study. Determining the sample size
involves several qualitative and quantitative considerations.

Sample Size Determination

Qualitative Factors
Important qualitative factors to be considered in determining the sample size include
(1) the importance of the decision, (2) the nature of the research, (3) the number of variables,
(4) the nature of the analysis, (5) sample sizes used in similar studies, (6) incidence rates, (7)
completion rates, and (8) resource constraints.

Quantitative Factors
The statistical approaches to determining sample size are based on confidence intervals.
These approaches may involve the estimation of the mean or proportion. When estimating the
mean, determination of sample size using the confidence interval approach requires the
specification of precision level, confidence level and population standard deviation. In the case
of proportion, the precision level, confidence level and an estimate of the population proportion
must be specified. The sample size determined statistically represents the final or net sample size
that must be achieved. To achieve this final sample size, a much greater number of potential
respondents have to be contacted to account for reduction in response due to incidence rates and
completion rates.
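A minimal sketch of the two confidence-interval calculations described above, one for estimating a mean and one for a proportion; the precision levels, confidence level (z = 1.96 for 95%) and population values plugged in are illustrative:

import math

def sample_size_for_mean(sigma, precision, z=1.96):
    """n = (z * sigma / D)^2, where D is the desired precision (half-width of the interval)."""
    return math.ceil((z * sigma / precision) ** 2)

def sample_size_for_proportion(p_estimate, precision, z=1.96):
    """n = z^2 * p * (1 - p) / D^2, using a prior estimate of the population proportion."""
    return math.ceil(z ** 2 * p_estimate * (1 - p_estimate) / precision ** 2)

print(sample_size_for_mean(sigma=55, precision=5))               # e.g. monthly spend, precision +/- 5
print(sample_size_for_proportion(p_estimate=0.64, precision=0.05))

The resulting figure is the net sample size; it would then be inflated by the expected incidence and completion rates to arrive at the number of potential respondents to contact.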

MEASUREMENT AND SCALING

1. Nominal scale
A nominal scale is a figurative labelling scheme in which the numbers serve only as labels or
tags for identifying and classifying objects. For example, the numbers assigned to the
respondents in a study constitute a nominal scale; thus a female respondent may be assigned a
number 1 and a male respondent 2.

When a nominal scale is used for the purpose of identification, there is a strict one-to-one
correspondence between the numbers and the objects. Each number is assigned to only one
object, and each object has only one number assigned to it. Common examples include student
registration numbers at their college or university and numbers assigned to football players or
jockeys in a horse race. In marketing research, nominal scales are used for identifying
respondents, brands, attributes, banks and other objects.

2. Ordinal scale
An ordinal scale is a ranking scale in which numbers are assigned to objects to indicate the
relative extent to which the objects possess some characteristic. An ordinal scale allows you to
determine whether an object has more or less of a characteristic than some other object, but not
how much more or less. Thus, an ordinal scale indicates relative position, not the magnitude of
the differences between the objects. The object ranked first has more of the characteristic as
compared with the object ranked second, but whether the object ranked second is a close second
or a poor second is not known. Common examples of ordinal scales include quality rankings,
rankings of teams in a tournament and occupational status. In marketing research, ordinal scales
are used to measure relative attitudes, opinions, perceptions and preferences. Measurements of
this type include ‘greater than’ or ‘less than’ judgements from respondents.

3. Interval scale
In an interval scale, numerically equal distances on the scale represent equal values in the
characteristic being measured. An interval scale contains all the information of an ordinal scale,
but it also allows you to compare the differences between objects. The difference between any
two scale values is identical to the difference between any other two adjacent values of an
interval scale. There is a constant or equal interval between scale values. The difference between
1 and 2 is the same as the difference between 2 and 3, which is the same as the difference
between 5 and 6. A common example in everyday life is a temperature scale. In marketing
research, attitudinal data obtained from rating scales are often treated as interval data.

4. Ratio scale
A ratio scale possesses all the properties of the nominal, ordinal and interval scales, and, in
addition, an absolute zero point. Thus, in ratio scales we can identify or classify objects, rank the
objects, and compare intervals or differences. It is also meaningful to compute ratios of scale
values. Not only is the difference between 2 and 5 the same as the difference between 14 and 17,
but also 14 is seven times as large as 2 in an absolute sense. Common examples of ratio scales
include height, weight, age and money. In marketing, sales, costs, market share and number of
customers are variables measured on a ratio scale.
5. Likert scale
Named after its developer, Rensis Likert, the Likert scale is a widely used rating scale that
requires the respondents to indicate a degree of agreement or disagreement with each of a series
of statements about the stimulus objects. Typically, each scale item has five response
categories, ranging from ‘strongly disagree’ to ‘strongly agree’. We illustrate with a Likert scale
for evaluating attitudes towards Renault cars.

6. Semantic differential scale


The semantic differential is a seven-point rating scale with end points associated with bipolar
labels that have semantic meaning. In a typical application, respondents rate objects on a number
of itemised, seven-point rating scales bounded at each end by one of two bipolar adjectives, such
as ‘boring’ and ‘exciting’. We illustrate this scale in Figure 12.8 by presenting a respondent’s
evaluation of Formula One racing on five attributes.
7. Thurstone Scaling

For this type of survey item, your goal is to formulate a series of sequential statements about
some target attribute. You would attempt to 'space' or scale those statements such that they
represent equal intervals of increasing or decreasing intensity of the attribute.

Subjects would only be given TWO CHOICES for each statement: "agree" and "disagree." The
researcher then looks at which items 'triggered' agreement. Because the original items were
phrased to represent equal intervals, it is then hoped that the agreements reveal "how much of"
that attribute the respondent has, or agrees with. This, then, would make the scores "behave" like
interval data.

Again, an actual example might help clarify the above ideas! It's contained in Figure 3, below:

Figure 3. An Example of a Thurstone Scale (target attribute: "measuring parents' aspirations for
their children's educational & career attainments")

As you can imagine, the process of writing 'successively equally spaced interval gradations-of-
the-attitude' items can be subjective and tricky! For these reasons of reliability and validity, as
well as other problems arising from their use and application, Thurstone items should in most
cases not be the preferred method of choice.

8. Guttman Scales

Guttman scaling was developed by Louis Guttman (1944, 1950) and was first used as part of the
classic work on the American Soldier. Guttman scaling is applied to a set of binary questions
answered by a set of subjects. The goal of the analysis is to derive a single dimension that can be
used to position both the questions and the subjects. The position of the questions and subjects
on the dimension can then be used to give them a numerical value. Guttman scaling is used in
social psychology and in education.

Figure4. An Example of a Guttman Scale (target attribute: "measuring parents' aspirations for
their children's educational & career attainments")
With the Thurstone scale, there was a 'switch in directionality,' while with the Guttman there is a
progression in the same direction. Thus, for the Thurstone, we look for the occurrence of a single
affirmative response, while in the Guttman we look for the point of transition from affirmative to
negative responses.
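A small sketch of the Guttman logic described above, assuming a respondent's answers to items ordered from least to most demanding are coded 1 (agree) and 0 (disagree); the response patterns are invented:

def guttman_scale_score(responses):
    """For a perfectly cumulative (Guttman) pattern, the scale score is simply the
    number of agreements; the transition point is the first disagreement."""
    score = sum(responses)
    errors = sum(1 for i, r in enumerate(responses) if (i < score) != bool(r))
    return score, errors   # errors > 0 means the pattern is not perfectly cumulative

print(guttman_scale_score([1, 1, 1, 0, 0]))   # clean pattern: score 3, no errors
print(guttman_scale_score([1, 0, 1, 0, 0]))   # agreement 'out of order': score 2, 2 errors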

One problem with the Guttman, in addition to the subjectivity issue raised with the Thurstone, is
that it implies that if you respond positively to a given item, you would also be assumed to respond
positively to those items "below it in the hierarchy." Whether or not this 'order effect' will always
be true depends on the phenomenon or attribute in question. For instance, in Figure 4 (Guttman
example), a subject could conceivably agree with Item #3 but disagree with Item #4. This could
be the case if, for example, the respondent perceived "success" as a rather complex & multi-
faceted variable, one that could possibly both help and hinder happiness in various ways.

Reliability of Rating Scales

Reliability refers to the extent to which a scale produces consistent results if repeated
measurements are made. In the true score model, the observed score XO is the sum of the true
score XT, systematic error XS and random error XR (XO = XT + XS + XR). Systematic sources of
error do not have an adverse impact on reliability, because they affect the measurement in a
constant way and do not lead to inconsistency. In contrast, random error produces inconsistency,
leading to lower reliability. Reliability can thus be defined as the extent to which measures are
free from random error, XR. If XR = 0, the measure is perfectly reliable.

Reliability is assessed by determining the proportion of systematic variation in a scale. This is
done by determining the association between scores obtained from different administrations of
the scale. If the association is high, the scale yields consistent results and is therefore reliable.
Approaches for assessing reliability include the test–retest, alternative-forms and internal
consistency methods.
In test–retest reliability, respondents are administered identical sets of scale items at two
different times, under as nearly equivalent conditions as possible. The time interval between tests
or administrations is typically two to four weeks. The degree of similarity between the two
measurements is determined by computing a correlation coefficient (see Chapter 20). The higher
the correlation coefficient, the greater the reliability. Several problems are associated with the
test–retest approach to determining reliability. First, it is sensitive to the time interval between
testing. Other things being equal, the longer the time interval, the lower the reliability. Second,
the initial measurement may alter the characteristic being measured. For example, measuring
respondents’ attitude towards low-alcohol beer may cause them to become more health
conscious and to develop a more positive attitude towards low-alcohol beer. Third, it may be
impossible to make repeated measurements (e.g. the research topic may be the respondent’s
initial reaction to a new product). Fourth, the first measurement may have a carryover effect to
the second or subsequent measurements. Respondents may attempt to remember answers they
gave the first time. Fifth, the characteristic being measured may change between measurements.
For example, favourable information about an object between measurements may make a
respondent’s attitude more positive. Finally, the test–retest reliability coefficient can be inflated
by the correlation of each item with itself. These correlations tend to be higher than correlations
between different scale items across administrations. Hence, it is possible to have high test–retest
correlations because of the high correlations between the same scale items measured at different
times even though the correlations between different scale items are quite low. Because of these
problems, a test–retest approach is best applied in conjunction with other approaches, such as
alternative-forms reliability.
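A minimal sketch of the test–retest calculation, assuming the same respondents' scale scores from the two administrations have been paired up (the scores are invented):

def pearson_r(x, y):
    """Pearson correlation coefficient between two equally long lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

time1 = [4, 5, 3, 4, 2, 5, 3, 4]   # scores at the first administration
time2 = [4, 4, 3, 5, 2, 5, 3, 4]   # scores two to four weeks later
print("test-retest reliability:", round(pearson_r(time1, time2), 2))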

In alternative-forms reliability, two equivalent forms of the scale are constructed. The same
respondents are measured at two different times, usually two to four weeks apart (e.g. by initially
using Likert scaled items and then using Stapel scaled items). The scores from the
administrations of the alternative scale forms are correlated to assess reliability.
The two forms should be equivalent with respect to content, i.e. each scale item should attempt
to measure the same items. The main problems with this approach are that it is difficult, time
consuming and expensive to construct an equivalent form of the scale. In a strict sense, it is
required that the alternative sets of scale items should have the same means, variances and
intercorrelations. Even if these conditions are satisfied, the two forms may not be equivalent in
content. Thus, a low correlation may reflect either an unreliable scale or non-equivalent forms.

Internal consistency reliability is used to assess the reliability of a summated scale where
several items are summed to form a total score. In a scale of this type, each item measures some
aspect of the construct measured by the entire scale, and the items should be consistent in what
they indicate about the construct. This measure of reliability focuses on the internal consistency
of the set of items forming the scale.

The simplest measure of internal consistency is split-half reliability. The items on the scale are
divided into two halves and the resulting half scores are correlated. High correlations between
the halves indicate high internal consistency. The scale items can be split into halves based on
odd- and even-numbered items or randomly. The problem is that the results will depend on how
the scale items are split. A popular approach to overcoming this problem is to use the coefficient
alpha.
The coefficient alpha, or Cronbach’s alpha, is the average of all possible split-half coefficients
resulting from different ways of splitting the scale items. This coefficient varies from 0 to 1, and
a value of 0.6 or less generally indicates unsatisfactory internal consistency reliability. An
important property of coefficient alpha is that its value tends to increase with an increase in the
number of scale items. Therefore, coefficient alpha may be artificially, and inappropriately,
inflated by including several redundant scale items. Another coefficient that can be employed in
conjunction with coefficient alpha is coefficient beta. Coefficient beta assists in determining
whether the averaging process used in calculating coefficient alpha is masking any inconsistent
items. Some multi-item scales include several sets of items designed to measure different aspects
of a multidimensional construct. For example, car manufacturer image is a multidimensional
construct that includes country of origin, range of cars, quality of cars, car performance, service
of car dealers, credit terms, dealer location and physical layout of dealerships. Hence, a scale
designed to measure car manufacturer image could contain items measuring each of these
dimensions. Because these dimensions are somewhat independent, a measure of internal
consistency computed across dimensions would be inappropriate. If several items are used to
measure each dimension, however, internal consistency reliability can be computed for each
dimension.
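A minimal sketch of the coefficient alpha calculation for a summated scale, using the usual formula alpha = (k / (k - 1)) x (1 - sum of item variances / variance of total scores); the item scores are invented:

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list of scores per scale item, all over the same respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]           # each respondent's summed score
    item_var_sum = sum(variance(scores) for scores in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Four Likert items answered by six respondents (illustrative data).
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 5],
    [3, 5, 2, 4, 3, 4],
    [4, 4, 3, 4, 2, 5],
]
print("coefficient alpha:", round(cronbach_alpha(items), 2))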

Validity of Rating Scales

The validity of a scale may be considered as the extent to which differences in observed scale
scores reflect true differences among objects on the characteristic being measured, rather than
systematic or random error. Perfect validity requires that there be no measurement error

(XO = XT, XR = 0, XS = 0).

Researchers may assess content validity, criterion validity or construct validity.

Content validity, sometimes called face validity, is a subjective but systematic evaluation of
how well the content of a scale represents the measurement task at hand. The researcher or
someone else examines whether the scale items adequately cover the entire domain of the
construct being measured. Thus, a scale designed to measure car manufacturer image would be
considered inadequate if it omitted any of the major dimensions (country of origin, range of cars,
quality of cars, car performance, etc.). Given its subjective nature, content validity alone is not a
sufficient measure of the validity of a scale, yet it aids in a common-sense interpretation of the
scale scores. A more formal evaluation can be obtained by examining criterion validity.

Criterion validity reflects whether a scale performs as expected in relation to other selected
variables (criterion variables) as meaningful criteria. If, for example, a scale is designed to
measure loyalty in customers, criterion validity might be determined by comparing the results
generated by this scale with results generated by observing the extent of repeat purchasing.
Based on the time period involved, criterion validity can take two forms, concurrent validity and
predictive validity.

Concurrent validity is assessed when the data on the scale being evaluated (e.g. loyalty scale)
and the criterion variables (e.g. repeat purchasing) are collected at the same time. The scale being
developed and the alternative means of encapsulating the criterion variables would be
administered simultaneously and the results compared.

Predictive validity is concerned with how well a scale can forecast a future criterion. To assess
predictive validity, the researcher collects data on the scale at one point in time and data on the
criterion variables at a future time. For example, attitudes towards how loyal customers feel to a
particular brand could be used to predict future repeat purchases of that brand. The predicted and
actual purchases are compared to assess the predictive validity of the attitudinal scale.

Construct validity addresses the question of what construct or characteristic the scale is, in fact,
measuring. When assessing construct validity, the researcher attempts to answer theoretical
questions about why the scale works and what deductions can be made concerning the
underlying theory. Thus, construct validity requires a sound theory of the nature of the construct
being measured and how it relates to other constructs. Construct validity is the most
sophisticated and difficult type of validity to establish. As Figure 12.14 shows, construct validity
includes convergent, discriminant and nomological validity.

Convergent validity is the extent to which the scale correlates positively with other
measurements of the same construct. It is not necessary that all these measurements be obtained
by using conventional scaling techniques. Discriminant validity is the extent to which a
measure does not correlate with other constructs from which it is supposed to differ. It involves
demonstrating a lack of correlation among differing constructs.

Nomological validity is the extent to which the scale correlates in theoretically predicted ways
with measures of different but related constructs. A theoretical model is formulated that leads to
further deductions, tests and inferences. An instance of construct validity can be evaluated in the
following example. A researcher seeks to provide evidence of construct validity in a multi-item
scale, designed to measure the concept of ‘self-image’. These findings would be sought:
 High correlations with other scales designed to measure self-concepts and with reported
classifications by friends (convergent validity).
 Low correlations with unrelated constructs of brand loyalty and variety seeking
(discriminant validity).
 Brands that are congruent with the individual’s self-concept are more preferred, as
postulated by the theory (nomological validity).
 A high level of reliability.
Note that a high level of reliability was included as evidence of construct validity in this
example. This illustrates the relationship between reliability and validity.

Relationship between reliability and validity

The relationship between reliability and validity can be understood in terms of the true score
model. If a measure is perfectly valid, it is also perfectly reliable. In this case,

XO = XT, XR = 0 and XS = 0.
Thus, perfect validity implies perfect reliability. If a measure is unreliable, it cannot be perfectly
valid, since at a minimum XO = XT + XR.

Furthermore, systematic error may also be present, i.e., XS ≠ 0. Thus, unreliability implies
invalidity. If a measure is perfectly reliable, it may or may not be perfectly valid, because
systematic error may still be present (XO = XT + XS). In other words, a reliable scale can be
constructed to measure ‘customer loyalty’ but it may not necessarily be a valid measurement of
‘customer loyalty’. Conversely, a valid measurement of ‘customer loyalty’ has to be reliable.
Reliability is a necessary, but not sufficient, condition for validity.

Questionnaire

A questionnaire, whether it is called a schedule, interview form or measuring instrument, is a
formalised set of questions for obtaining information from respondents. Typically, a
questionnaire is only one element of a data collection package that might also include (1)
questionnaire is only one element of a data collection package that might also include (1)
fieldwork procedures, such as instructions for selecting, approaching and questioning
respondents (see Chapter 16); (2) some reward, gift or payment offered to respondents; and (3)
communication aids, such as maps, pictures, advertisements and products (as in personal
interviews) and return envelopes (in mail surveys).

Questionnaire Design Process
