
Ans. 1 (a) "Marketing research is the function that links the consumer, customer, and
public to the marketer through information - information used to identify and define
marketing opportunities and problems; generate, refine, and evaluate marketing actions;
monitor marketing performance; and improve understanding of marketing as a process.
Marketing research specifies the information required to address these issues, designs the
methods for collecting information, manages and implements the data collection process,
analyzes, and communicates the findings and their implications." - American Marketing
Association (AMA) - Official Definition of Marketing Research

Market research is the collection and analysis of information about consumers, competitors and
the effectiveness of marketing programs.
Small business owners use market research to determine the feasibility of a new business, test
interest in new products or services, improve aspects of their businesses, such as customer
service or distribution channels, and develop competitive strategies.
In other words, market research allows businesses to make decisions that make them more
responsive to customers' needs and increase profits.
While market research is crucial for a business start-up, it's also essential for established
businesses. It's accurate information about customers and competitors that allows the
development of a successful marketing plan.
Nature, Scope and Importance of Marketing Research
I. THE ROLE OF MARKETING RESEARCH IN STRATEGIC PLANNING AND
DECISION MAKING
Our primary focus is on the role of marketing research in marketing strategy decisions.
A. Identifying Marketing Opportunities and Constraints (a logical starting point for
developing marketing strategies)
Marketing research to understand customers is growing as more firms embrace the
marketing concept.
Marketing Concept (Opportunities)
 Philosophy of customer orientation
 Firms uncover customer needs first
 Firms coordinate all their activities to satisfy those needs
Competitive Environment (Constraints)
 Marketing research is vital to maintaining and improving a company's overall
competitiveness
 Understanding the external environment helps firms to intelligently plan for the future
 Many firms continually collect and evaluate environmental information to identify
future market opportunities and threats

B. Role of Marketing Research in Developing and Implementing Marketing Strategies
Marketing research can help a firm understand:
 The nature of its product
 Ways to promote the product
 The price charged to potential customers
 The means used to make the product available to them
It will also identify whether the marketing mix is effective enough to maximize the benefits to
the firm from available opportunities:
o sales
o profits
o customer satisfaction
o value
Many successful new-product launches were preceded by extensive marketing research.

C. Evaluating the Effectiveness of Marketing Plans
Control Function
 Getting feedback from the marketplace
 Taking corrective action for elements of products or services that need fixing
Controlling
 An important component of the planning and decision-making process
 Another area for marketing research to provide solutions
Control-Related Questions
 What is the market share of our product?
 Is its share increasing, decreasing, or staying the same?
 Who are its users?
 Are the nature of the users and the volume of their purchases consistent with our
expectations (goals)? If not, why not?

Only marketing research, not marketers' intuition, can yield accurate answers to such
questions.

II. BASIC MARKETING RESEARCH PRINCIPLES


Principle #1: Attend to the Timeliness and Relevance of Research
1. Marketing research can lead to erroneous decisions if not done on a timely basis.
2. The research must be relevant to the current situation.

Principle #2: Define Research Objectives Carefully and Clearly
1. Careful and clear definition of research objectives is a key requirement
   a. For accuracy
   b. For beneficial research outcomes
2. Researchers must pay careful attention to research objectives.

Principle #3: Do Not Conduct Research to Support Decisions Already Made
Conducting marketing research when potential research users have already made up their
minds is not a productive use of scarce resources.
Ans. 1 (b) The marketing research process is a set of six steps which defines the tasks to be
accomplished in conducting a marketing research study. These include problem definition,
developing an approach to the problem, research design formulation, field work, data preparation
and analysis, and report generation and presentation.

The research process provides a systematic, planned approach to the research project and
ensures that all aspects of the research project are consistent with each other. It is especially
important that the research design and implementation be consistent with the research purpose
and objectives.

Research studies evolve through a series of steps, each representing the answer to a key question.

1. Why should we do research? This establishes the research purpose as seen by the
management team that will be using the results. This step requires understanding the
decisions to be made and the problems or opportunities to be diagnosed.

2. What research should be done? Here the management purpose is translated into objectives
that tell the managers exactly what questions need to be answered by the research study or
project.

3. Is it worth doing the research? The decision has to be made here about whether the value of
the information that will likely be obtained is going to be greater than the cost of collecting it.

4. How should the research be designed to achieve the research objectives? Design issues
include the choice of research approach—reliance on secondary data versus conducting a
survey or experiment—and the specifics of how to collect the data.

5. What will we do with the research? Once the data have been collected, how will it be
analyzed, interpreted, and used to make recommendations for action?
Major Steps in the Marketing Research Process
o Dividing the marketing research process into a series of chronological steps is
convenient.
o In reality the steps are highly interrelated
o Each step may have an impact on the step(s) preceding or following it.

Step 1. Justify the Need for Market Research


o Potential usefulness – KFC in Brazil: in a major push to become a prominent
player in the Brazilian market, KFC researched the Brazilian market and decided to open
operations in Sao Paulo. Although the research identified potential customers, it did
not include thorough research on possible competition. KFC failed to ask potential
customers to compare its product with locally available chicken, and Brazilians found
local chicken to be tastier than the Colonel's recipe. The information generated was
therefore not complete enough to be useful.
o Management attitudes – Whirlpool: the company conducted market research for washing
machines in Europe. The research found diverse consumer preferences, but Whirlpool
management ignored the findings and launched a common product. Even today the
management continues to believe that this strategy will pay off in a united Europe, while
in the meantime the established European players are revamping their products and
operations to give Whirlpool tougher competition.
o Resources available to implement results. Eg. X-Disk Corporation
 Operating at 95% Capacity
 Marketing research determines there is a market for new product
 Capacity is not enough to meet new product demand
 Back to square one, as the marketing research was useless without the
resources to implement
o Research Cost versus Research Benefits determines whether or not to proceed.
 Determining Research Costs :
 Quantify the necessary research steps
 Determine the Benefits:
 Nature of the uncertainty it will alleviate
 Projected financial benefits
 Decision maker intuition
o Marketing research budgets are extremely limited. Use them prudently!
o Research on A Small Budget

Step 2. Define the Research Objective


o What do we want to find out?
o Why do we need to know this?
o Accurately defining the research objective is the key
 To determining whether to conduct research
 To determining what the nature of the research should be
o Effective Communication is a Must
 Dialogue is critical for diagnosing any potential research situation
 Especially important when the purpose of research is to explore
opportunities
 The chances that a wrong or nonexistent problem will be researched are
greatly increased when there is no healthy dialogue
o Pack n' Sac
o McDonalds Arch Deluxe

Step 3. Identify Data Needs


o Scrutinize the research purpose
o Listing the kinds of data required to accomplish that purpose.
Step 4. Identify Data Sources
o Ease or difficulty of locating data sources will depend on the nature of the
information.
o Secondary Data.
 Can obtain much factual information
 If the objective can be accomplished through secondary data then
subsequent stages may not be relevant.
o Primary Data
 Not all data may be readily available.
o Is a source available for
 the kinds of data needed
 in a particular project
 for a specific time frame

Step 5. Choose an Appropriate Research Design (and Method)


o The research design serves as a blueprint for the execution of the project
o Typically cast in the form of a research proposal.
o The research design will influence the other tasks to be performed

Exploratory research
o Gain some initial insights
o May pave the way for further research

Conclusive research
o Verifies insights
o Points to the appropriate course of action
o Can be either descriptive or experimental

Descriptive research design
o A formal, structured survey of individual customers

Experimental (Causal) studies
o Investigation of the effects of one variable on another

Step 6. Design the Data Collection Instrument (or Form)


o Data collection instrument or form is relevant in primary-data collection.
o Primary data collection methods
 Interviews
 Observation
o Some instrument must be designed to record the data being collected.
o Designing a data collection form
 May appear easy
 Certain aspects of the form can seriously affect the quality and nature of
the data
Step 7. Identify the Sample

o Clearly specify who, or what units will provide the needed data.
o This step may offer some general guidance for designing the sample.
o For example, the method of choosing individuals depends on whether a
probability or a non-probability sampling method is used.

Probability sample: each element in the population has a known, nonzero chance of
inclusion.
Non-probability sample: any sample that does not fit the definition of a probability sample.
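A minimal Python sketch of this distinction, using a hypothetical list of customer records (the names and sample size are illustrative only):

import random

# Hypothetical sampling frame of customer IDs (illustrative only).
population = [f"customer_{i}" for i in range(1, 1001)]

# Probability sample: simple random sampling gives every element a known,
# nonzero (here equal) chance of inclusion.
random.seed(42)
probability_sample = random.sample(population, k=50)

# Non-probability sample: e.g., a convenience sample of whoever is easiest
# to reach (here simply the first 50 records); inclusion chances are unknown.
convenience_sample = population[:50]

print(len(probability_sample), len(convenience_sample))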
Step 8. Collect the Data
o Check Data For
 completeness
 consistency
 Adherence to pre-specified instructions.

Editing
o Examining the responses
o Taking corrective action to ensure high quality

Coding
o Transforming data into a form that is ready for analysis

Step 9. Analyze the Data and Interpret the Results


o The types of analyses used may depend on
 nature of the data
 Type of data collection method used.
Step 10. Present Research Findings to Decision Makers
o Prepare a report that communicates the results of the research.
o Only through a clear and convincing report can the findings and conclusions
reached be implemented.

II. Interdependence of Process Steps


A. New Product Launches at Burger King
o Any research project can be broken into a series of logical steps
 Start with a determination of the worth of the project
 End with analysis and interpretation of the findings.
o In reality few stages are independent of one another.
o A major challenge for researchers and decision makers is to think ahead.
o Planning a potentially valuable research project requires a much broader
perspective than focusing solely on one step at a time.

B. E.g.: Marketing Research for the BK Broiler® Chicken Sandwich


Step 1. Justify the Need for Market Research
o The BK Broiler® chicken sandwich was suffering from low sales.
o BK's customers are typically males
o BK believed it could increase sales by appealing to women with its broiled
chicken sandwich.
o BK needed research to identify and develop a winning positioning strategy that
appealed to women.
o The proposed research project appeared to be worthwhile
 The results were needed
 Adequate resources were available to implement the research results

Step 2. Define the Research Objective


 To find out what would be the best way to position a new broiled chicken sandwich
among the target market (women).

Steps 3 and 4. Identify Data Needs and Data Sources


Four classes of data required:
Purchase intention measures
o Purchase intention
o Purchase frequency
Overall product diagnostics
o Reasons underlying the purchase intention
o Uniqueness, differentiation
Attribute diagnostics
o Data regarding specific attributes to focus product development
Respondent profiling variables
o Demographic information

Identify data sources:
Primary data
o to collect consumer impressions
Secondary data
o to serve as a benchmark

Step 5. Choose an Appropriate Research Design (and Data Collection Method)


o The project was conclusive experimental research
o Developed by an outside agency
o Descriptive research
o Collection through mall-intercept interviews
For Primary Data:
o Prescreen for fast-food restaurant chicken eaters
o Neutral or positive respondents were invited to return to try the product

Four Positionings
o Choice White Meat/Chicken Breast
o Backyard BBQ Taste
o Marinated Special Blend/Homestyle Taste
o Competitive Claim

Taste Tests
o 9 different combinations of buns and sauce

Step 6. Develop the Research Instrument or Form


o Well-structured questionnaire to collect the necessary data about the concept and
the respondents.
 Pre-recruit screener questions
 Concept evaluation questions
 Taste Test
 Classification questions

Step 7. Identify the Sample


o People who had eaten chicken in a fast-food restaurant at least once in the past
three months.
Step 8. Collect the Data
o 835 interviews with consumers pre-recruited at malls
o 10 geographic locations
o 150 taste tests
o 65% female, 35% male
o 50% ages 18-34, 50% ages 35-54
o 50% BK consumers, 50% non-BK consumers
Step 9. Analyze and Interpret the Data
o The "Choice White Meat/Chicken Breast" positioning generated the highest
level of positive interest among the non-Burger King users
o Interest in product trial was driven by
 Positive predisposition toward the chicken sandwich
 How appetizing the product looked
 Healthfulness (broiled rather than fried) appeared to be a secondary
driver
o The "Choice White Meat/Chicken Breast" positioning generated the highest
level of purchase intent
o Consumers rated the product very favorably in the Taste Test Study

Step 10. Present Research Findings


o BK Consumer Research Group recommended the "Choice White Meat/Chicken
Breast" positioning.
o The sandwich performed well
 Among women (the intended target market)
 Among non-BK users
 Among the 35-54 age group
o Additional studies were needed to determine
 Best name
 Best price
Ans. 2 (a)

Exploratory research provides insights into and comprehension of an issue or situation. It should
draw definitive conclusions only with extreme caution. Exploratory Research is a type of
research conducted because a problem has not been clearly defined. Exploratory research helps
determine the best research design, data collection method and selection of subjects. Given its
fundamental nature, exploratory research often concludes that a perceived problem does not
actually exist. Exploratory research explores or searches through a problem or situation to
provide insights and understanding, and is used for the following purposes:
- Formulate a problem or define a problem more precisely.
- Identify alternative courses of action.
- Develop hypotheses.
- Isolate key variables and relationships for further examination.
- Gain insights for developing an approach to the problem.
- Establish priorities for further research

Exploratory research often relies on secondary research such as reviewing available literature
and/or data, or qualitative approaches such as informal discussions with consumers, employees,
management or competitors, and more formal approaches through in-depth interviews, focus
groups, projective methods, case studies or pilot studies. The Internet allows for research
methods that are more interactive in nature:

Methods used in Exploratory Research:


- A review of academic and trade literature to identify the relevant demographic and
psychographic factors that influence consumer patronage of department stores
- Interviews with retailing experts to determine trends, such as emergence of new types of
outlets and shifts in consumer patronage patterns (e.g., shopping on the Internet)
- A comparative analysis of the three best and three worst stores of the same chain to gain
some idea of the factors that influence store performance
- Focus groups to determine the factors that consumers consider important in selecting
department stores

E.g. RSS feeds efficiently supply researchers with up-to-date information; major search engine
search results may be sent by email to researchers by services such as Google Alerts;
comprehensive search results are tracked over lengthy periods of time by services such as
Google Trends; and Web sites may be created to attract worldwide feedback on any subject.

The results of exploratory research are not usually useful for decision-making by themselves, but
they can provide significant insight into a given situation. Although the results of qualitative
research can give some indication as to the "why", "how" and "when" something occurs, it
cannot tell us "how often" or "how many." Exploratory research is not typically generalizable to
the population at large.

Conclusive research provides information that helps the manager evaluate and select a course of
action. This involves clearly defined research objectives and information needs. Some
approaches to this research include surveys, experiments, observations, and simulation.
Conclusive research is conducted to draw some conclusion about the problem. It is essentially
structured and quantitative research, and the output of this research is the input to management
information systems (MIS).

In contrast to exploratory research, conclusive research draws conclusions: the results of the
study can be generalized to the whole population. Conclusive research can be sub-classified
into descriptive research and causal research.

Descriptive research, as its name suggests, is designed to describe something—for example, the
characteristics of consumers of a certain product; the degree to which the use of a product varies
with age, income, or sex; or the number of people who saw a specific TV commercial. A
majority of Marketing Research studies are of this type (Boyd and Westfall, 1992).
- Descriptive research describes market characteristics or functions
- Descriptive research is conducted for the following reasons:
- Describing the characteristics of relevant groups, such as consumers, sales people,
organizations, or market areas.
- Estimating the percentage of units in a specified population exhibiting a certain behavior.
- Determining the perceptions of product characteristics.
- Determining the degree to which marketing variables are associated.
- Making specific predictions.
Descriptive Design requires clear specifications of:
- Who should be considered a patron of a particular department store?
- What information should be obtained from the respondents?
- When should the information be obtained from the respondents?
- Where the respondents should be contacted to obtain the required information?
- Why are we obtaining information from the respondents? Why is the marketing research
project being conducted?
- In what way are we going to obtain information from the respondents?

Causal research is designed to gather evidence regarding the cause-and-effect relationships that
exist in the marketing system. For example, if a company reduces the price of a product and then
unit sales of the product increase, causal research would show whether this effect was due to the
price reduction or some other reason. Causal research must be designed in such a way that the
evidence regarding causality is clear. The main sources of data for causal research are
interrogating respondents through surveys and conducting experiments.
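As a hedged illustration of how such evidence might be examined, the sketch below compares unit sales in stores that received a price reduction against control stores; the figures are invented, and a real causal study would control for many more factors than this simple test does:

from scipy import stats

# Hypothetical weekly unit sales (illustrative numbers only).
sales_reduced_price = [120, 135, 128, 142, 131, 138, 125, 140]  # test stores
sales_regular_price = [118, 122, 119, 125, 121, 117, 123, 120]  # control stores

# A two-sample t-test asks whether the observed difference in mean sales is
# larger than would be expected from chance variation alone.
t_stat, p_value = stats.ttest_ind(sales_reduced_price, sales_regular_price)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")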

Longitudinal studies repeatedly measure the same sample units of a population over time.
Longitudinal studies often make use of a panel which represents sample units who have agreed
to answer questions at periodic intervals. Many large research firms maintain panels of
consumers.

Cross-sectional studies measure units from a sample of the population at only one point in time.
Sample surveys are cross-sectional studies whose samples are drawn in such a way as to be
representative of a specific population. These studies are usually presented with a margin of
error. Cross-sectional studies take “snapshots” of the population at a point in time.

Longitudinal Data
- Allows turnover analysis if the panel is a true panel
- Allows collection of a great deal more classification information from respondents
- Allows longer and more exacting interviews
- Produces fewer errors in reporting past behavior caused by natural forgetting
- Produces fewer interviewer-interviewee interaction errors

Cross-Sectional Data
- Tends to produce more representative samples of the population of interest
- Produces fewer errors due to respondents' behavior being affected by the measurement task
- Allows the investigation of a great many relationships
Ans. 2 (b) A Geographic Information System (GIS) is a collection of computer hardware,
software and geographic data used to analyze and display geographically referenced
information.

GIS Techniques
GIS is a relatively new and fast developing methodological approach designed to look at data
geographically and spatially. The U.S. Census Bureau, for example, utilizes GIS capabilities to
map and examine median household income, level of education, employment and a host of
indicators gathered from its survey of the universe of United States residents at a fine
resolution down to the street level. Environmental Systems Research Institute (ESRI), one of the
leading GIS software manufacturers, characterizes this software as linking the location of
information with what that information represents (2002). GIS has been applied in a
variety of ways, including market research, landscape design, epidemiology, and classroom
instruction.
Modern GIS maps are created using Computer Aided Design (CAD) software to digitally render
geographical maps, onto which can be superimposed any spatially located variables (e.g., rainfall,
demographic data, etc.). GIS maps are used to create a visual representation of raw data
(attributes) to allow for more efficient analysis and better decision making.
Geographical Information Systems (GIS) are a new and valuable tool of the 'information
revolution' with the capability of combining attribute and spatial data with mapping systems and
cartographic modeling tools. They permit the acquisition, storage, analysis, management and
presentation of large amounts of geographic or spatial data (Goodchild, 1992; Tomlin, 1990). In
recent years, the application of GIS in business has grown rapidly. Major retailers,
automobile dealerships, video rental companies, media organizations, and fast food corporations
are just some of the many businesses around the world that have discovered the value of GIS.
Research has shown that since more than 80 per cent of all information in an organization can be
geographically referenced, business strategists are finding GIS to be an ideal tool for identifying
and expanding markets, and increasing profits.
Although most widely used GIS packages perform a range of
functions, many of them have weaknesses when it comes to business applications. Combining
GIS and other techniques will create appropriate and diverse approaches to problem solving. One
advantage of linking statistical methods with GIS is to integrate GIS capabilities with the power
of statistical analysis, and to effectively use data from different sources for market analysis.
Another advantage of this is that users can visualize spatial data in different forms: visualize the
spatial distribution of data on maps prior to further statistical analysis, and visualize spatial data
in various statistical graphs and diagrams which may yield more insights into the nature of
distributions. In this way, the application of GIS in market analysis may be seen as a tool for
reaching a desired solution for the client and not merely as an end in itself.
Despite the widespread adoption of GIS by the business community, there is an increasing
awareness that current proprietary GIS packages are limited in their capability to address
business objectives because they lack appropriate analytical tools. One aspect of developments in
marketing research is the increased emphasis being placed on the use of statistical methods,
particularly in unraveling the relationships between a large number of demographic
characteristics from Census data and the profiles of customers in an organization’s database.
Most packages for statistical analysis are not particularly relevant to marketing and do nothing to
help a largely non-technical user community. There is a need to develop "black-box" automated
analysis tools.

Geocoding is the process of finding associated geographic coordinates (often expressed as
latitude and longitude) from other geographic data, such as street addresses, or zip codes (postal
codes). With geographic coordinates the features can be mapped and entered into Geographic
Information Systems, or the coordinates can be embedded into media such as digital photographs
via geotagging.
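A minimal sketch of geocoding a single street address in Python, assuming the third-party geopy package and its Nominatim geocoder are available (the address is illustrative only):

from geopy.geocoders import Nominatim

# Nominatim is a free geocoding service; a user_agent string is required.
geolocator = Nominatim(user_agent="example_market_research_app")

# Convert a street address into latitude/longitude (illustrative address).
location = geolocator.geocode("1600 Pennsylvania Avenue NW, Washington, DC")
if location is not None:
    print(location.latitude, location.longitude)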

GIS provides powerful tools for addressing geographical and environmental issues. Imagine
that a GIS allows us to arrange information about a given region or city as a set of maps, with
each map displaying information about one characteristic of the region. For urban
transportation planning, for example, a set of such maps might be gathered. Each of these
separate thematic maps is referred to as a layer, coverage, or
level. And each layer has been carefully overlaid on the others so that every location is precisely
matched to its corresponding locations on all the other maps. The bottom layer of the stack is
the most important, for it represents the grid of a locational reference system (such as latitude
and longitude) to which all the maps have been precisely registered.

Once these maps have been registered carefully within a common locational reference system,
information displayed on the different layers can be compared and analyzed in combination.
Transit routes can be compared to the location of shopping malls, population density to centers
of employment. In addition, single locations or areas can be separated from surrounding
locations by simply cutting all the layers of the desired location from
the larger map. Whether for one location or the entire region, GIS offers a means of searching for
spatial patterns and processes.
Case Study
Travel pattern data collected from a rural county in Western North Carolina, United States, was
analyzed using a geographic information system. Travel pattern data are valuable to destination
marketers as they highlight potential regional promotions and development partners. Geographic
information systems (GIS) can easily display in map form spatially oriented concepts such as
travel patterns and provide easy viewing of the data. The generated maps provided a clear
indication of regions, counties, and towns that tourism promoters in the rural county may
consider as potential marketing and development collaborators. Additionally, by modeling trip
distances per travel pattern, potential new markets were identified.

GIS for Customer and Market Analytics

Businesses are able to maximize their return on assets using ESRI GIS software. GIS gives any
organization the ability to go beyond standard data analysis with tools to integrate, view, and
analyze data using geography. And the applications can be used across an entire organization, in
the field, and on the Internet. Market analysis, customer analytics, and site selection are ways
businesses can combine geographic analysis for better business intelligence, customer
relationship management, financial modeling, and enterprise resource planning. Business
analytics is a key subcomponent of business intelligence, one to which GIS is naturally
connected. GIS helps any business gain more accurate predictive analysis, business activity, and
performance monitoring.
Internet Mapping Helps Customers Find Stores

American Suzuki Motors Corporation

American Suzuki Motors Corporation (ASMC) started business in 1909 as Suzuki Loom Works.
After World War II, Suzuki's motorized bike, Power Free, was successfully introduced followed
by the 125 cc motorcycle, Colleda. The lightweight car, Suzulight, helped bring along Japan's
automotive revolution, solidifying Suzuki's reputation as a company optimizing the most
advanced technologies available. Today, the company is constantly thinking ahead to meet
changing lifestyles, and the Suzuki name is seen on a full range of motorcycles, automobiles,
outboard motors, and related products including generators and motorized wheelchairs.

The Suzuki trademark is recognized by people throughout the world as a brand of quality
products that offer both reliability and originality. Customers specifically look for Suzuki
motorcycles and automobiles. To accommodate this, the company needed a more cost-effective,
immediate means for its customers to find dealerships.

- Provide a cost-effective mapping application over the Internet.
- Give customers a user-friendly experience when finding information.

ASMC came to ESRI, the world's leading provider of geographic information system (GIS)
software products, to find the right answer to its problem. After discussing its needs, ASMC
realized it needed a dealer locator service for customers searching for vehicles on the Suzuki
Web site.
ASMC knows the automotive industry well but did not want to become an expert in GIS
technology. ASMC contracted with ESRI for its ArcWeb Services. ArcWeb Services gave
ASMC access to both GIS content and capabilities without the overhead of purchasing and
maintaining large data sets or software.
ASMC provided its store locations in digital format to ESRI, who then geocoded these locations,
creating points on a map. These dealer locations can be viewed on a street map on
the Suzuki Web site, www.suzuki.com. When a visitor to the Suzuki Web site types in a ZIP
Code, ArcWeb Services process the query and return a list of Suzuki automobile dealers within a
50-mile radius. Along with a list of dealers, the query also returns hyperlinks to dynamic map
displays and driving directions. A visitor can click a particular dealership and retrieve a map to
that particular dealership. The visitor can pan and zoom on the map to discover more information
about the area. By hosting associated geographic and customer specific data sets and GIS
processing, ArcWeb Services are the most cost-efficient solution in the marketplace for
integrating location finding services into applications. Potential clients can easily find
dealerships. Suzuki provides a powerful mapping application for little overhead and improved
customer service.
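The dealer-locator logic described above can be approximated with a straight-line (great-circle) distance filter. The sketch below is only a simplified stand-in for what ArcWeb Services performs server-side; the dealer coordinates and the visitor location are hypothetical:

import math

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in miles.
    r = 3958.8  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical geocoded dealer locations (name, latitude, longitude).
dealers = [
    ("Downtown Suzuki", 34.05, -118.25),
    ("Valley Suzuki", 34.18, -118.45),
    ("Desert Suzuki", 36.17, -115.14),
]

# Centroid of the visitor's ZIP Code, as returned by a geocoding step.
visitor_lat, visitor_lon = 34.10, -118.30

distances = [(name, haversine_miles(visitor_lat, visitor_lon, lat, lon))
             for name, lat, lon in dealers]
nearby = sorted((d for d in distances if d[1] <= 50.0), key=lambda d: d[1])
print(nearby)  # dealers within a 50-mile radius, nearest first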
Ans. 3(a) Data collection is a term used to describe a process of preparing and collecting data -
for example as part of a process improvement or similar project. The purpose of data collection
is to obtain information to keep on record, to make decisions about important issues, to pass
information on to others. Primarily, data is collected to provide information regarding a specific
topic. Data collection usually takes place early on in an improvement project, and is often
formalized through a data collection plan, which often contains the following activities:
1. Pre-collection activity – agree on goals, target data, definitions and methods
2. Collection – data collection
3. Present findings – usually involves some form of sorting, analysis and/or presentation
Prior to any data collection, pre-collection activity is one of the most crucial steps in the process.
It is often discovered too late that the value of the interview information is discounted as a
consequence of poor sampling of both questions and informants and of poor elicitation techniques.
After pre-collection activity is fully completed, data collection in the field, whether by
interviewing or other methods, can be carried out in a structured, systematic and scientific way.
A formal data collection process is necessary as it ensures that data gathered is both defined and
accurate and that subsequent decisions based on arguments embodied in the findings are valid.
The process provides both a baseline from which to measure from and in certain cases a target on
what to improve.
Data collection is a way of gathering information for use in various studies or decision making
situations. Depending on the required outcome or information needed methods of data collection
can vary and even be combined to achieve needed results.
All data collection methods boil down to five basic types:
- Registration
- Questionnaires
- Interviews
- Direct Observations
- Reporting
Each method of data collection has its uses, advantages and disadvantages. Most often using
more than one method of data collection will gain better results.

Registration
Registration is a data collection method mainly used to gather information about a certain group
or demographic population. This method is primarily used in the following ways:
• Driver's licenses
• Welfare programs
• School programs
• Voter records
Questionnaires
This type of data collection method is one of the least expensive ways to gain information. Most of
the information gathered is from co-operative and highly literate people such as college
graduates or people in professional fields. Many times questionnaires will be used by service
providers to gain needed information. Such providers would include:
- Medical Surveys
- Insurance Applications
- Higher paying job applications
- Scientific Research
Interviews
Interviews are more expensive than questionnaires as a method of data collection because of the
labor involved. The tradeoff is that an interview can contain more complex questions. Interviews
are more useful with lower literacy rates and less co-operative participants. The following fields
tend to use the interview method of data collection as a main resource.
- Government agencies such as the IRS or Welfare Department
- Census takers
- Law enforcement
Direct Observation
This type of data collection method is the most accurate way of gathering information, and can
be the most cost effective over a long time frame. This method is mainly used in institutional and
professional settings such as:
- Medical analysis
- Corrections Facilities
- Psychology and Sociology clinical settings
- Indirect research
Reporting
Reporting is the direct opposite of the interview and questionnaire: the study group is
required to provide information without being asked specific questions. This type of data
collection method is most frequently used for:
- Tracking parolees and ex-offenders
- Government tracking and analysis of community needs
- Field teams gathering information using other methods
Data analysis depends on the method of data collection used. While some analysis will be simple
statistics, other analysis will be far more complex depending on the information and combination
of data collection methods used.
The data collection process can be relatively simple depending on the type of data collection
tools required and used during the research. Data collection tools are instruments used to collect
information for performance assessments, self-evaluations, and external evaluations. The data
collection tools need to be strong enough to support what the evaluations find during research.
Here are a few examples of data collection tools used within three main categories:

Secondary Participation
Data collection tools involving secondary participation require no direct contact to gather
information. Examples of secondary data collection tools would include:
- Postal mail
- Electronic mail
- Telephone
- Web-based surveys
These data collection tools do not allow the researcher to truly gauge the accuracy of the
information given by the participants who responded.

In-Person Observations
Data collection tools used in personal contact observations are used when there is face to face
contact with the participants. Some examples of this type of data collection tool would include:
- In-person surveys – used to gain general answers to basic questions
- Direct or participatory observations – where the researcher is directly involved with the study
group
- Interviews – used to gain more in depth answers to complex questions
- Focus groups – where certain sample groups are asked their opinion about a certain subject
or theory
These data collection tools not only allow for a true measurement of accuracy but also let the
researcher obtain any unspoken observations about the participants while conducting research.

Case Studies and Content Analysis


Case studies and content analysis are data collection tools which are based upon pre-existing
research or a search of recorded information which may be useful to the researcher in gaining the
required information which fills in the blanks not found with the other two types during the data
collection process. Some examples of this type of data collection tool would include:
- Expert opinions – leaders in the field of study
- Case studies – previous findings of other researchers
- Literature searches – research articles and papers
- Content analysis of both internal and external records – documents created from internal
origin or other documents citing occurrences within the research group
These three data collection tools are the primary sources for gaining information during research.
The most effective is in-person observation, with case studies and content analysis used as
verification resources. While each type of data collection tool can be used alone, most often
they are used in combination or in conjunction with each other in various ways.
Other main types of collection include the census, the sample survey, and administrative
by-product data, each with its respective advantages and disadvantages. A census refers to data
collection about everyone or everything in a group or population; its advantages include accuracy
and detail, and its disadvantages include cost and time.

A sample survey is a data collection method that includes only part of the total population; its
advantages include lower cost and time, and its disadvantages include reduced accuracy and detail.
Administrative by-product data are collected as a by-product of an organization's day-to-day
operations; their advantages include accuracy, timeliness and simplicity, and their disadvantages
include a lack of flexibility and control.

Ans. 3 (b) Sampling is the process of selecting units (e.g., people, organizations) from a
population of interest so that by studying the sample we may fairly generalize our results back to
the population from which they were chosen. Measurement is the process of observing and
recording the observations that are collected as part of a research effort. There are two major
issues that will be considered here.
First, you have to understand the fundamental ideas involved in measuring. Here we consider
two of the major measurement concepts. In Levels of Measurement, I explain the meaning of the
four major levels of measurement: nominal, ordinal, interval and ratio. Then we move on to the
reliability of measurement, including consideration of true score theory and a variety of
reliability estimators. Second, you have to understand the different types of measures that you
might use in social research. We consider four broad categories of measurements. Survey
research includes the design and implementation of interviews and questionnaires. Scaling
involves consideration of the major methods of developing and implementing a scale.
Qualitative research provides an overview of the broad range of non-numerical measurement
approaches. Unobtrusive measures present a variety of measurement methods that don't
intrude on or interfere with the context of the research. By the time you get to the analysis of
your data, most of the really difficult work has been done. It's much more difficult to: define the
research problem; develop and implement a sampling plan; conceptualize, operationalise and test
your measures; and develop a design structure. If you have done this work well, the analysis of
the data is usually a fairly straightforward affair.
In most social research the data analysis involves three major steps, done in roughly this order:
• Cleaning and organizing the data for analysis (Data Preparation)
• Describing the data (Descriptive Statistics)
• Testing Hypotheses and Models (Inferential Statistics)
Data Preparation involves checking or logging the data in; checking the data for accuracy;
entering the data into the computer; transforming the data; and developing and documenting a
database structure that integrates the various measures.
Descriptive Statistics are used to describe the basic features of the data in a study. They provide
simple summaries about the sample and the measures. Together with simple graphics analysis,
they form the basis of virtually every quantitative analysis of data. With descriptive statistics you
are simply describing what is, what the data shows. Inferential Statistics investigate questions,
models and hypotheses. In many cases, the conclusions from inferential statistics extend beyond
the immediate data alone. For instance, we use inferential statistics to try to infer from the
sample data what the population thinks. Or, we use inferential statistics to make judgments of the
probability that an observed difference between groups is a dependable one or one that might
have happened by chance in this study. Thus, we use inferential statistics to make inferences
from our data to more general conditions; we use descriptive statistics simply to describe what's
going on in our data.
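A minimal sketch of the contrast, using invented customer satisfaction ratings: the descriptive part simply summarizes the sample, while the inferential part uses a 95% confidence interval (normal approximation) to say something about the wider population:

import statistics as st

# Hypothetical satisfaction ratings from a sample of 12 customers (1-10 scale).
ratings = [7, 8, 6, 9, 7, 8, 5, 7, 9, 6, 8, 7]

# Descriptive statistics: describe what the sample data show.
mean = st.mean(ratings)
sd = st.stdev(ratings)
print(f"sample mean = {mean:.2f}, sample sd = {sd:.2f}")

# Inferential statistics: infer a plausible range for the population mean.
n = len(ratings)
margin = 1.96 * sd / n ** 0.5  # normal-approximation 95% margin of error
print(f"95% CI for the population mean: ({mean - margin:.2f}, {mean + margin:.2f})")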
In most research studies, the analysis section follows these three phases of analysis. Descriptions
of how the data were prepared tend to be brief and to focus on only the more unique aspects of
your study, such as specific data transformations that are performed. The descriptive statistics
that you actually look at can be voluminous. In most write-ups, these are carefully selected and
organized into summary tables and graphs that only show the most relevant or important
information. Usually, the researcher links each of the inferential analyses to specific research
questions or hypotheses that were raised in the introduction, or notes any models that were tested
that emerged as part of the analysis. In most analysis write-ups it's especially critical to not "miss
the forest for the trees." If you present too much detail, the reader may not be able to follow the
central line of the results. Often extensive analysis details are appropriately relegated to
appendices, reserving only the most critical analysis summaries for the body of the report itself.
Ans. 5 (a) Data analysis is the process of evaluating data using analytical and logical reasoning
to examine each component of the data provided. This form of analysis is just one of the many
steps that must be completed when conducting a research experiment. Data from various sources
is gathered, reviewed, and then analyzed to form some sort of finding or conclusion. There are a
variety of specific data analysis methods, some of which include data mining, text analytics,
business intelligence, and data visualization.

Analysis of data is a process of inspecting, cleaning, transforming, and modeling data with the
goal of highlighting useful information, suggesting conclusions, and supporting decision making.
Data analysis has multiple facets and approaches, encompassing diverse techniques under a
variety of names, in different business, science, and social science domains.

Data mining is a particular data analysis technique that focuses on modeling and knowledge
discovery for predictive rather than purely descriptive purposes. Business intelligence covers
data analysis that relies heavily on aggregation, focusing on business information. In statistical
applications, some people divide data analysis into descriptive statistics, exploratory data
analysis, and confirmatory data analysis. EDA focuses on discovering new features in the data
and CDA on confirming or falsifying existing hypotheses. Predictive analytics focuses on
application of statistical or structural models for predictive forecasting or classification, while
text analytics applies statistical, linguistic, and structural techniques to extract and classify
information from textual sources, a species of unstructured data. All are varieties of data
analysis.

Data integration is a precursor to data analysis, and data analysis is closely linked to data
visualization and data dissemination. The term data analysis is sometimes used as a synonym for
data modeling.

Qualitative data analysis

Qualitative research uses qualitative data analysis (QDA) to analyze text, interview transcripts,
photographs, art, field notes of (ethnographic) observations, et cetera.

The process of data analysis

Data analysis is a process, within which several phases can be distinguished

- Data cleaning
- Initial data analysis (assessment of data quality)
- Main data analysis (answer the original research question)
- Final data analysis (necessary additional analyses and report)
Data cleaning

Data cleaning is an important procedure during which the data are inspected, and erroneous data
are corrected if doing so is necessary, preferable, and possible. Data cleaning can be done during
the stage of data entry. If this is done, it is important that no subjective decisions are made. The guiding
principle provided by Adèr (ref) is: during subsequent manipulations of the data, information
should always be cumulatively retrievable. In other words, it should always be possible to undo
any data set alterations. Therefore, it is important not to throw information away at any stage in
the data cleaning phase. All information should be saved (i.e., when altering variables, both the
original values and the new values should be kept, either in a duplicate dataset or under a
different variable name), and all alterations to the data set should be carefully and clearly
documented, for instance in a syntax file or a log.
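A minimal pandas sketch of this retrievability principle, using a hypothetical data frame: the original values are kept alongside the cleaned values, and every alteration is written to a log so that the cleaning can be undone or audited:

import pandas as pd

# Hypothetical raw survey data with an obviously erroneous age value.
df = pd.DataFrame({"respondent": [1, 2, 3], "age": [34, 271, 45]})

cleaning_log = []

# Keep the original variable untouched and store corrections in a new variable,
# so that the alteration remains retrievable and can be undone.
df["age_clean"] = df["age"].where(df["age"] <= 120)  # impossible ages -> missing
cleaning_log.append("age_clean: values above 120 set to missing; age kept unchanged")

print(df)
print(cleaning_log)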

Initial data analysis

The most important distinction between the initial data analysis phase and the main analysis
phase is that during initial data analysis one refrains from any analyses that are aimed at
answering the original research question. The initial data analysis phase is guided by the
following four questions:[3]

Quality of data

The quality of the data should be checked as early as possible. Data quality can be assessed in
several ways, using different types of analyses: frequency counts, descriptive statistics (mean,
standard deviation, median), normality (skewness, kurtosis, frequency histograms, normal
probability plots), associations (correlations, scatter plots).
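A minimal pandas sketch of such checks, run on a small hypothetical data frame; in a real project the same checks would be applied to every relevant variable:

import pandas as pd

# Hypothetical data frame standing in for the survey data set.
df = pd.DataFrame({"income": [32, 41, 38, 55, 29, 61, 44, 300],
                   "region": ["N", "S", "S", "N", "E", "E", "S", "N"]})

print(df["region"].value_counts())               # frequency counts
print(df["income"].describe())                   # mean, std, quartiles
print(df["income"].skew(), df["income"].kurt())  # normality indicators
print(df.isna().mean())                          # share of missing values per variable
print(df.select_dtypes("number").corr())         # correlations between numeric variables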

Other initial data quality checks are:

Checks on data cleaning: have decisions influenced the distribution of the variables? The
distribution of the variables before data cleaning is compared to the distribution of the variables
after data cleaning to see whether data cleaning has had unwanted effects on the data.

Analysis of missing observations: are there many missing values, and are the values missing at
random? The missing observations in the data are analyzed to see whether more than 25% of the
values are missing, whether they are missing at random (MAR), and whether some form of
imputation (statistics) is needed.

Analysis of extreme observations: outlying observations in the data are analyzed to see if they
seem to disturb the distribution.

Comparison and correction of differences in coding schemes: variables are compared with
coding schemes of variables external to the data set, and possibly corrected if coding schemes are
not comparable.
The choice of analyses to assess the data quality during the initial data analysis phase depends on
the analyses that will be conducted in the main analysis phase.[4]

Quality of measurements

The quality of the measurement instruments should only be checked during the initial data
analysis phase when measurement quality is not the focus or research question of the study. One
should check whether the structure of the measurement instruments corresponds to the structure
reported in the literature.

There are two ways to assess measurement quality:

- Confirmatory factor analysis
- Analysis of homogeneity (internal consistency), which gives an indication of the reliability of a
measurement instrument, i.e., whether all items fit into a unidimensional scale. During this
analysis, one inspects the variances of the items and the scales, the Cronbach's α of the scales,
and the change in Cronbach's α when an item would be deleted from a scale.[5]
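A minimal sketch of the Cronbach's α computation from its standard formula, α = k/(k−1) × (1 − Σ item variances / variance of the total score), applied to invented item scores:

import numpy as np

def cronbach_alpha(items):
    # items: respondents in rows, scale items in columns.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses of 5 people to a 4-item scale (illustrative only).
scores = [[4, 5, 4, 4],
          [3, 3, 2, 3],
          [5, 5, 5, 4],
          [2, 2, 3, 2],
          [4, 4, 4, 5]]
print(round(cronbach_alpha(scores), 3))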

Initial transformations

After assessing the quality of the data and of the measurements, one might decide to impute
missing data, or to perform initial transformations of one or more variables, although this can
also be done during the main analysis phase.[6]

Possible transformations of variables are:[7]

- Square root transformation (if the distribution differs moderately from normal)
- Log-transformation (if the distribution differs substantially from normal)
- Inverse transformation (if the distribution differs severely from normal)
- Make categorical (ordinal / dichotomous) (if the distribution differs severely from
normal, and no transformations help)
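A minimal numpy sketch of the transformations listed above, applied to an invented positively skewed variable (a small constant shift may be needed first if zeros or negative values are present):

import numpy as np

x = np.array([1.0, 2.0, 2.0, 3.0, 5.0, 9.0, 20.0])  # hypothetical skewed variable

sqrt_x = np.sqrt(x)     # moderate departure from normality
log_x = np.log(x)       # substantial departure from normality
inverse_x = 1.0 / x     # severe departure from normality

# Severe departure with no transformation helping: recode into ordered categories.
categorical_x = np.digitize(x, bins=[2.5, 8.0])  # 0 = low, 1 = medium, 2 = high
print(sqrt_x, log_x, inverse_x, categorical_x, sep="\n")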

Did the implementation of the study fulfill the intentions of the research design?

One should check the success of the randomization procedure, for instance by checking whether
background and substantive variables are equally distributed within and across groups.

If the study did not need and/or use a randomization procedure, one should check the success of
the non-random sampling, for instance by checking whether all subgroups of the population of
interest are represented in sample.

Other possible data distortions that should be checked are:

- dropout (this should be identified during the initial data analysis phase)
- Item non response (whether this is random or not should be assessed during the initial
data analysis phase)
- Treatment quality (using manipulation checks).[8]

Characteristics of data sample

In any report or article, the structure of the sample must be accurately described. It is especially
important to exactly determine the structure of the sample (and specifically the size of the
subgroups) when subgroup analyses will be performed during the main analysis phase.

The characteristics of the data sample can be assessed by looking at:

- Basic statistics of important variables


- Scatter plots
- Correlations
- Cross-tabulations[9]

Final stage of the initial data analysis

During the final stage, the findings of the initial data analysis are documented, and necessary,
preferable, and possible corrective actions are taken.

Also, the original plan for the main data analyses can and should be specified in more detail
and/or rewritten.

In order to do this, several decisions about the main data analyses can and should be made:

- In the case of non-normals: should one transform variables; make variables categorical
(ordinal/dichotomous); adapt the analysis method?
- In the case of missing data: should one neglect or impute the missing data; which imputation
technique should be used?
- In the case of outliers: should one use robust analysis techniques?
- In case items do not fit the scale: should one adapt the measurement instrument by omitting
items, or rather ensure comparability with other (uses of the) measurement instrument(s)?
- In the case of (too) small subgroups: should one drop the hypothesis about inter-group
differences, or use small sample techniques, like exact tests or bootstrapping?
- In case the randomization procedure seems to be defective: can and should one calculate
propensity scores and include them as covariates in the main analyses?[10]

Analyses

Several analyses can be used during the initial data analysis phase:[11]

- Univariate statistics
- Bivariate associations (correlations)
- Graphical techniques (scatter plots)

It is important to take the measurement levels of the variables into account for the analyses, as
special statistical techniques are available for each level:[12]

- Nominal and ordinal variables


o Frequency counts (numbers and percentages)
o Associations
 cross-tabulations
 hierarchical loglinear analysis (restricted to a maximum of 8 variables)
 loglinear analysis (to identify relevant/important variables and possible
confounders)
o Exact tests or bootstrapping (in case subgroups are small)
o Computation of new variables
- Continuous variables
o Distribution
 Statistics (M, SD, variance, skewness, kurtosis)
 Stem-and-leaf displays
 Box plots
Ans. 5 (b) Social research involves many weird and wonderful methods over which debate,
often bitter, rages continuously. However, at some stage even the most virulently anti-positivist
and anti-empiricist researcher will need to be able to name, sort and count things, or to read, understand or
even act on, reports based on things which have been named, sorted and counted. Perhaps the
easiest way of explaining one of the most basic skills in statistics is to try to make sense of raw
data through a process of naming, sorting and counting. For instance, take the following data
relating to 20 sixth form students. Information is provided on their sex and on their intentions
towards higher education.

Student Sex H.E.?


1 Male Yes
2 Male No
3 Female Yes
4 Female No
5 Female No
6 Male No
7 Female No
8 Male No
9 Female No
10 Female Yes
11 Male Yes
12 Male No
13 Male Yes
14 Female No
15 Male Yes
16 Male No
17 Female No
18 Female No
19 Male No
20 Male No
It is not easy to tell from these data how many males and females there are, let alone make any
meaningful statement about the relationship between sex and plans for higher education. What
can we do to make them easier to understand?
The first thing we need to do is to sort them into some kind of order. We can do this by arranging
all the males in one group and the females in another, or we can do it by sorting all those with
H.E. Plans into one group and the rest into another.
Thus by sex:
Female Yes
Female No
Female No
Female No
Female No
Female No
Female Yes
Female No
Female No
Total Females = 9

Male Yes
Male No
Male Yes
Male No
Male Yes
Male No
Male Yes
Male No
Male No
Male No
Male No
Total Males = 11
...and by college plans:

Male No
Female No
Male No
Female No
Male No
Female No
Male No
Female No
Male No
Male No
Female No
Female No
Male No
Female No
Total with no college plans = 14

Male Yes
Male Yes
Female Yes
Male Yes
Female Yes
Male Yes
Total with college plans = 6
If we want to look at both distributions together we can sort on both variables to yield:
By sex and college plans:
Female No
Female No
Female No
Female No
Female No
Female No
Female No
Total females with no college plans = 7
Female Yes
Female Yes
Total females with college plans = 2
Male No
Male No
Male No
Male No
Male No
Male No
Male No
Total males with no college plans = 7
Male Yes
Male Yes
Male Yes
Male Yes
Total males with college plans = 4
These data can be summarized by tabulating one variable at a time in frequency distributions.

Sex:
Female 9 45%
Male 11 55%
-----------
Total 20 100%
College:
No 14 70%
Yes 6 30%
-----------
Total 20 100%
If we want to summarize data from both variables at the same time we need to construct a
contingency table. We do this by constructing a blank table with the same number of rows as
there are categories in one of the variables, and the same numbers of columns as there are
categories in the other. Let us take "Sex" as the column variable and "College plans" as the row
variable. In this case both variables have only two categories, and so the table will have 2 rows
and 2 columns, and therefore 4 cells.

                        Sex
                  Male      Female

College    No    [      ]  [      ]

           Yes   [      ]  [      ]

These four cells form the body of the table into which we can now enter the counts from the list
sorted on both variables at once. At the same time we enter outside the table the row-totals and
column-totals from the original frequency distributions for each variable and the grand total for
the number of cases in the whole table. Thus:

                          Sex
(Raw Data)           Male    Female    Row Total
College    No           7         7         14
           Yes          4         2          6
Column Total           11         9         20
This is at least a little easier to interpret than the original sorted lists, but it is still difficult to
answer a question as to whether males are more likely to want to go to college than are females,
or vice versa. To answer this question we need to ask not "How many?" but "What proportion of
each sex has college plans?" One further operation is now necessary - to standardize the data by
converting the raw counts for each sex into percentages - to enable direct comparison between
the sexes.
                          Sex
(% Data)             Male    Female    Row Total
College    No        63.6      77.8        70.0
           Yes       36.4      22.2        30.0
Column Total        100.0     100.0       100.0
Base for %             11         9          20

From this table we can now state that female sixth-formers are less likely than males to have
plans for Higher Education. The above example illustrates the importance of tabulation in market
research.
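The sorting, counting and percentaging steps above can also be reproduced mechanically. The sketch below is one possible way to do so in Python with pandas, using the same 20 records coded from the student list; the column names are invented for illustration, and the normalize="columns" option produces the column percentages used for the male/female comparison.

import pandas as pd

# The 20 sixth-form records from the example above
sex   = ["M","M","F","F","F","M","F","M","F","F","M","M","M","F","M","M","F","F","M","M"]
plans = ["Yes","No","Yes","No","No","No","No","No","No","Yes",
         "Yes","No","Yes","No","Yes","No","No","No","No","No"]
df = pd.DataFrame({"sex": sex, "he_plans": plans})

# Raw contingency table with row and column totals
print(pd.crosstab(df["he_plans"], df["sex"], margins=True))

# Standardised table: column percentages, enabling direct comparison between sexes
print(pd.crosstab(df["he_plans"], df["sex"], normalize="columns") * 100)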
Ans. 7 (1) QUESTIONNAIRE FORMAT
A questionnaire is a research instrument consisting of a series of questions and other prompts for
the purpose of gathering information from respondents. Although they are often designed for
statistical analysis of the responses, this is not always the case. The questionnaire was invented
by Sir Francis Galton. Questionnaires are frequently used in quantitative marketing research and
social research. They are a valuable method of collecting a wide range of information from a
large number of individuals, often referred to as respondents. Good questionnaire construction is
critical to the success of a survey. Inappropriate questions, incorrect ordering of questions,
incorrect scaling, or bad questionnaire format can make the survey valueless. A useful method
for checking a questionnaire and making sure it is accurately capturing the intended information
is to pretest among a smaller subset of target respondents.
The design of a questionnaire will depend on whether the researcher wishes to collect
exploratory information (i.e. qualitative information for the purposes of better understanding or
the generation of hypotheses on a subject) or quantitative information (to test specific hypotheses
that have previously been generated).
Exploratory questionnaires: If the data to be collected is qualitative or is not to be statistically
evaluated, it may be that no formal questionnaire is needed. For example, in interviewing the
female head of the household to find out how decisions are made within the family when
purchasing breakfast foodstuffs, a formal questionnaire may restrict the discussion and prevent a
full exploration of the woman's views and processes. Instead one might prepare a brief guide,
listing perhaps ten major open-ended questions, with appropriate probes/prompts listed under
each.
Formal standardized questionnaires: If the researcher is looking to test and quantify
hypotheses and the data is to be analyzed statistically, a formal standardized questionnaire is
designed. Such questionnaires are generally characterized by:
- prescribed wording and order of questions, to ensure that each respondent receives the same
stimuli
- prescribed definitions or explanations for each question, to ensure interviewers handle
questions consistently and can answer respondents' requests for clarification if they occur
- Prescribed response format, to enable rapid completion of the questionnaire during the
interviewing process.

There are no hard-and-fast rules about how to design a questionnaire, but there are a number of
points that can be borne in mind:
1. A well-designed questionnaire should meet the research objectives. This may seem obvious,
but many research surveys omit important aspects due to inadequate preparatory work, and
do not adequately probe particular issues due to poor understanding. To a certain degree
some of this is inevitable. Every survey is bound to leave some questions unanswered and
provide a need for further research but the objective of good questionnaire design is to
'minimise' these problems.
2. It should obtain the most complete and accurate information possible. The questionnaire
designer needs to ensure that respondents fully understand the questions and are not likely to
refuse to answer, lie to the interviewer or try to conceal their attitudes. A good questionnaire
is organised and worded to encourage respondents to provide accurate, unbiased and
complete information.
3. A well-designed questionnaire should make it easy for respondents to give the necessary
information and for the interviewer to record the answer, and it should be arranged so that
sound analysis and interpretation are possible.
4. It should keep the interview brief and to the point and be so arranged that the respondent(s)
remain interested throughout the interview.

Each of these points will be further discussed throughout the following sections. Figure 4.1
shows how questionnaire design fits into the overall process of research design that was
described in chapter 1 of this textbook. It emphasizes that writing of the questionnaire proper
should not begin before an exploratory research phase has been completed.

Even after the exploratory phase, two key steps remain to be completed before the task of
designing the questionnaire should commence. The first of these is to articulate the questions that
research is intended to address. The second step is to determine the hypotheses around which the
questionnaire is to be designed.
It is possible for the piloting exercise to be used to make necessary adjustments to administrative
aspects of the study. This would include, for example, an assessment of the length of time an
interview actually takes, in comparison to the planned length of the interview; or, in the same
way, the time needed to complete questionnaires. Moreover, checks can be made on the
appropriateness of the timing of the study in relation to contemporary events such as avoiding
farm visits during busy harvesting periods.

Preliminary decisions in questionnaire design


There are nine steps involved in the development of a questionnaire:
1. Decide the information required.
2. Define the target respondents.
3. Choose the method(s) of reaching your target respondents.
4. Decide on question content.
5. Develop the question wording.
6. Put questions into a meaningful order and format.
7. Check the length of the questionnaire.
8. Pre-test the questionnaire.
9. Develop the final survey form.
Ans. 7 (2) CORRELATION - Correlation is a statistical tool that studies the relationship between
two variables, and correlation analysis involves the various methods and techniques used for
studying and measuring the extent of the relationship between the two variables. "Two variables
are said to be in correlation if a change in one of the variables results in a change in the other
variable."

Correlation is a measure of association between two variables. The variables are not designated
as dependent or independent. The two most popular correlation coefficients are: Spearman's
correlation coefficient rho and Pearson's product-moment correlation coefficient.

When calculating a correlation coefficient for ordinal data, select Spearman's technique. For
interval or ratio-type data, use Pearson's technique.

The value of a correlation coefficient can vary from minus one to plus one. A minus one
indicates a perfect negative correlation, while a plus one indicates a perfect positive correlation.
A correlation of zero means there is no relationship between the two variables. When there is a
negative correlation between two variables, as the value of one variable increases, the value of
the other variable decreases, and vice versa. In other words, for a negative correlation, the
variables work opposite each other. When there is a positive correlation between two variables,
as the value of one variable increases, the value of the other variable also increases. The
variables move together.

The standard error of a correlation coefficient is used to determine the confidence intervals
around a true correlation of zero. If your correlation coefficient falls outside of this range, then it
is significantly different than zero. The standard error can be calculated for interval or ratio-type
data (i.e., only for Pearson's product-moment correlation).

The significance (probability) of the correlation coefficient is determined from the t-statistic. The
probability of the t-statistic indicates whether the observed correlation coefficient occurred by
chance if the true correlation is zero. In other words, it asks if the correlation is significantly
different than zero. When the t-statistic is calculated for Spearman's rank-difference correlation
coefficient, there must be at least 30 cases before the t-distribution can be used to determine the
probability. If there are fewer than 30 cases, you must refer to a special table to find the
probability of the correlation coefficient.
Example 1
A company wanted to know if there is a significant relationship between the total number of
salespeople and the total number of sales. They collect data for five months.
Variable 1 Variable 2
207 6907
180 5991
220 6810
205 6553
190 6190
--------------------------------
Correlation coefficient = .921
Standard error of the coefficient = .068
t-test for the significance of the coefficient = 4.100
Degrees of freedom = 3
Two-tailed probability = .0263
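The figures for Example 1 can be cross-checked with SciPy's Pearson routine, which is appropriate here because both variables are ratio-scaled. The sketch below is illustrative only; it reproduces the coefficient of roughly .92 and the two-tailed probability of about .026 reported above.

from scipy import stats

salespeople = [207, 180, 220, 205, 190]
sales       = [6907, 5991, 6810, 6553, 6190]

r, p = stats.pearsonr(salespeople, sales)        # interval/ratio data -> Pearson
print(f"r = {r:.3f}, two-tailed p = {p:.4f}")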

Example 2
Respondents to a survey were asked to judge the quality of a product on a four-point Likert scale
(excellent, good, fair, poor). They were also asked to judge the reputation of the company that
made the product on a three-point scale (good, fair, poor). Is there a significant relationship
between respondents’ perceptions of the company and their perceptions of quality of the
product?
Since both variables are ordinal, Spearman's method is chosen. The first variable is the rating of
the quality of the product. Responses are coded as 4=excellent, 3=good, 2=fair, and 1=poor. The
second variable is the perceived reputation of the company and is coded 3=good, 2=fair, and
1=poor.
Variable 1 Variable 2
4 3
2 2
1 2
3 3
4 3
1 1
2 1
-------------------------------------------
Correlation coefficient rho = .830
t-test for the significance of the coefficient = 3.332
Number of data pairs = 7
Probability must be determined from a table because of the small sample size.
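Example 2 can be checked in the same way with SciPy's Spearman routine, the appropriate choice for ordinal data. Two caveats: scipy.stats.spearmanr applies a correction for tied ranks, so with these heavily tied ratings its rho (around .82) can differ slightly from the .830 obtained with the simple rank-difference formula, and with only seven pairs the computed probability should be treated with the caution the text describes. The sketch is illustrative only.

from scipy import stats

quality    = [4, 2, 1, 3, 4, 1, 2]   # 4 = excellent ... 1 = poor
reputation = [3, 2, 2, 3, 3, 1, 1]   # 3 = good ... 1 = poor

rho, p = stats.spearmanr(quality, reputation)    # ordinal data -> Spearman
print(f"rho = {rho:.3f}, p = {p:.4f}")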
Ans. 7 (5) QUALITATIVE AND QUANTITATIVE MARKET RESEARCH
Qualitative marketing research - Qualitative marketing research is a set of research techniques,
used in marketing and the social sciences, in which data is obtained from a relatively small group
of respondents and not analyzed with inferential statistics. This differentiates it from quantitative
research, in which data from a larger sample of respondents is analyzed for statistical significance.
In short: a small number of respondents - results not generalizable to the whole population -
statistical significance and confidence not calculated - examples include focus groups, in-depth
interviews, and projective techniques.
The main types of qualitative research are
Depth Interviews
- interview is conducted one-on-one, and lasts between 30 and 60 minutes
- best method for in-depth probing of personal opinions, beliefs, and values
- very rich depth of information and flexible
- they are unstructured (or loosely structured) - this differentiates them from survey
interviews in which the same questions are asked to all respondents
Focus Groups
- an interactive group discussion led by a moderator
- unstructured (or loosely structured) discussion where the moderator encourages the free
flow of ideas
- usually 8 to 12 members in the group who fit the profile of the target group or consumer
but may consist of two interviewees (a dyad) or three interviewees (a triad) or a lesser
number of participants (known as a mini-group)
- usually last for 1 to 2 hours
- usually recorded on video/DVD
Projective Techniques
- these are unstructured prompts or stimuli that encourage the respondent to project their
underlying motivations, beliefs, attitudes, or feelings onto an ambiguous situation
- they are all indirect techniques that attempt to disguise the purpose of the research
Random Probability Sampling
- This type of qualitative research conducts random interviews within a defined universe,
e.g. a city, to understand consumer behavior beyond basic age-gender variables.
- Examples of random sample interviewing include telephone interviewing, mailed
questionnaires/booklets, and personal interviewing.
Newer Methods
Observational Research
One of the more fundamental uses of qualitative research is to understand consumer behaviour
through observational research. Nowadays, this kind of research is being supplemented by more
cutting-edge fields such as neuroscience, where observation is accompanied by measurement of
brain activity. The underlying assumption is that the brain very often reacts without our knowing
it, so that asking questions or pure observation by themselves are not enough to pinpoint what
really goes on.
Quantitative marketing research - generally used to draw conclusions - tests a specific
hypothesis - uses random sampling techniques so as to infer from the sample to the population -
involves a large number of respondents - examples include surveys and questionnaires.
Quantitative marketing research is the application of quantitative research techniques to the field
of marketing. It has roots in both the positivist view of the world, and the modern marketing
viewpoint that marketing is an interactive process in which both the buyer and seller reach a
satisfying agreement on the "four Ps" of marketing: Product, Price, Place (location) and
Promotion. Marketers use the information so obtained to understand the needs of individuals in
the marketplace, and to create strategies and marketing plans.
Techniques include choice modelling, maximum difference preference scaling, and covariance
analysis.

Choice modelling attempts to model the decision process of an individual or segment in a
particular context. Choice modelling may also be used to estimate non-market environmental
benefits and cost. Choice Models are able to predict with great accuracy how individuals would
react in a particular situation. Unlike a poll or a survey, predictions are able to be made over
large numbers of scenarios within a context, to the order of many trillions of possible scenarios.
Choice Modelling is believed to be the most accurate and general purpose tool currently
available for making probabilistic predictions about human decision making behaviour.

Maximum difference scaling (MaxDiff) is a discrete choice model. With MaxDiff, survey
respondents are shown a set of the possible items and are asked to indicate the best and worst
items (or most and least important or most and least appealing etc.). MaxDiff assumes that
respondents evaluate all possible pairs of items within the displayed set and choose the pair that
reflects the maximum difference in preference or importance. MaxDiff may be thought of as a
variation of the method of Paired Comparisons. Consider a set in which a respondent evaluates
four items: A, B, C and D. If the respondent says that A is best and D is worst, these two
responses inform us on five of six possible implied paired comparisons:

A > B, A > C, A > D, B > D, C > D

The only paired comparison that cannot be inferred is B vs. C. In a choice among five items,
MaxDiff questioning informs on seven of ten implied paired comparisons. MaxDiff
questionnaires are relatively easy for most respondents to understand. And since the responses
involve choices of items rather than expressing strength of preference, there is no opportunity for
scale use bias.
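The implied-pairs logic is straightforward to enumerate. The small sketch below, using hypothetical item labels, lists the paired comparisons that a single best/worst response reveals for a four-item set; only the B-versus-C comparison remains unknown, as described above.

def implied_pairs(items, best, worst):
    """Paired comparisons (winner, loser) implied by one best/worst (MaxDiff) response."""
    pairs = set()
    for item in items:
        if item != best:
            pairs.add((best, item))     # the best item beats every other item shown
        if item not in (best, worst):
            pairs.add((item, worst))    # every other item shown beats the worst item
    return pairs

print(sorted(implied_pairs(["A", "B", "C", "D"], best="A", worst="D")))
# five of the six possible pairs; B vs. C cannot be inferred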

Analysis of covariance (ANCOVA) is a general linear model with a continuous outcome
variable (quantitative) and two or more predictor variables where at least one is continuous
(quantitative) and at least one is categorical (qualitative). ANCOVA is a merger of ANOVA and
regression for continuous variables. ANCOVA tests whether certain factors have an effect on the
outcome variable after removing the variance for which quantitative predictors (covariates)
account. The inclusion of covariates can increase statistical power because it accounts for some
of the variability.
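A minimal ANCOVA sketch using statsmodels is shown below. The data are simulated and the variable names (sales, campaign, ad_spend) are invented purely to illustrate testing a categorical factor after adjusting for a continuous covariate; it is not a prescribed analysis.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "campaign": np.repeat(["A", "B", "C"], n // 3),
    "ad_spend": rng.uniform(10, 50, n),
})
# Outcome depends on the covariate plus a campaign effect plus noise
df["sales"] = (100 + 2.0 * df["ad_spend"]
               + df["campaign"].map({"A": 0, "B": 5, "C": 10})
               + rng.normal(0, 5, n))

# ANCOVA: campaign effect on sales after removing variance explained by ad_spend
model = smf.ols("sales ~ C(campaign) + ad_spend", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))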
Ans. 7 (6) EXPERIMENTATION IN MARKET RESEARCH
Causal research or Experimental Research is designed to gather evidence regarding the cause-
and-effect relationships that exist in the marketing system. For example, if a company reduces
the price of a product and then unit sales of the product increase, causal research would show
whether this effect was due to the price reduction or some other reason. Causal research must be
designed in such a way that the evidence regarding causality is clear. The main sources of data
for causal research are interrogating respondents through surveys and conducting experiments.

Experimental research designs are used for the controlled testing of causal processes. The
general procedure is one or more independent variables are manipulated to determine their effect
on a dependent variable. These designs can be used where:

1. There is time priority in a causal relationship (cause precedes effect),
2. There is consistency in a causal relationship (a cause will always lead to the same effect), and
3. The magnitude of the correlation is great.

The most common applications of these designs in marketing research and experimental
economics are test markets and purchase labs.

In an attempt to control for extraneous factors, several experimental research designs have been
developed, including:

 Classical pretest-post-test - The total population of participants is randomly divided into
two samples: the control sample and the experimental sample. Only the experimental sample
is exposed to the manipulated variable. The researcher compares the pretest results with the
post-test results for both samples. Any divergence between the two samples is assumed to be
a result of the experiment (a sketch of this comparison follows this list).
 Solomon four group design - The sample is randomly divided into four groups. Two of the
groups are experimental samples. Two groups experience no experimental manipulation of
variables. Two groups receive a pretest and a post-test. Two groups receive only a post-test.
This is an improvement over the classical design because it controls for the effect of the
pretest.
 Factorial design - this is similar to a classical design except additional samples are used.
Each group is exposed to a different experimental manipulation.
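The comparison behind the classical pretest-post-test design can be sketched as follows. The scores are simulated, and the two-sample t-test on the pre-to-post change is just one reasonable way of judging whether the divergence between the control and experimental samples is larger than chance.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical purchase-intent scores (0-100) before and after the campaign
control_pre  = rng.normal(50, 10, 40)
control_post = control_pre + rng.normal(0, 5, 40)        # control: no manipulation
treat_pre    = rng.normal(50, 10, 40)
treat_post   = treat_pre + 4 + rng.normal(0, 5, 40)      # experimental: manipulated variable

# Compare the pre-to-post change between the two samples
control_change = control_post - control_pre
treat_change   = treat_post - treat_pre
t, p = stats.ttest_ind(treat_change, control_change)
print(f"mean change: control {control_change.mean():.2f}, experimental {treat_change.mean():.2f}")
print(f"t = {t:.2f}, two-tailed p = {p:.4f}")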

An experiment is a procedure in which a company manipulates one (or sometimes more than
one) independent or cause variable and collects data on the dependent or effect variable while
controlling for other variables that may influence the dependent variable.
• A laboratory experiment is a research study conducted in a contrived setting in which
the effect of all, or nearly all, influential but irrelevant independent variables is kept to a
minimum
• A field experiment is a research study conducted in a natural setting in which the
experimenter manipulates one or more independent variables under conditions controlled
as carefully as the situation will permit
Internal Validity
- Internal validity is the extent to which observed results are solely due to the experimental
manipulation
- Laboratory experiments are generally high on internal validity
- Field experiments are generally low on internal validity
External Validity
- External validity is the extent to which observed results are likely to hold beyond the
experimental setting
- Laboratory experiments are generally low on external validity
- Field experiments are generally high on external validity

Deciding Which Type of Experiment to Use


• Practical Considerations
– Time
– Cost
– Exposure to competition
– Nature of the manipulation
Ans. 7 (7) CHI SQUARE DISTRIBUTION
The Chi Square distribution is a mathematical distribution that is used directly or indirectly in
many tests of significance. The most common use of the chi square distribution is to test
differences among proportions. Although this test is by no means the only test based on the chi
square distribution, it has come to be known as the chi square test. The chi square distribution
has one parameter, its degrees of freedom (df). It has a positive skew; the skew is less with more
degrees of freedom. As the df increase, the chi square distribution approaches a normal
distribution. The mean of a chi square distribution is its df. The mode is df - 2 and the median is
approximately df - 0.7.

The Chi Square Distribution is the distribution of the sum of squared standard normal deviates.
The degree of freedom of the distribution is equal to the number of standard normal deviates
being summed. Therefore, Chi Square with one degree of freedom, written as χ2(1), is simply the
distribution of a single normal deviate squared. The area of a Chi Square distribution (1 df) below 4 is
the same as the area of a standard normal distribution between -2 and +2, since 4 is 2 squared.

Consider the following problem: you sample two scores from a standard normal distribution,
square each score, and sum the squares. What is the probability that the sum of these two squares
will be six or higher? Since two scores are sampled, the answer can be found using the Chi
Square distribution with two degrees of freedom. A Chi Square calculator can be used to find
that the probability that a Chi Square (with 2 df) is six or higher is 0.050.
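This probability can be checked directly from the Chi Square distribution. The short sketch below uses SciPy's chi2 survival function, and also verifies the one-degree-of-freedom relationship to the standard normal noted earlier; it is a cross-check, not part of the original text.

from scipy.stats import chi2, norm

# P(sum of two squared standard normal deviates >= 6) = P(chi-square with 2 df >= 6)
print(chi2.sf(6, df=2))                        # about 0.050

# 1-df check: P(chi-square(1) < 4) equals P(-2 < Z < 2), since 4 = 2 squared
print(chi2.cdf(4, df=1), norm.cdf(2) - norm.cdf(-2))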

The mean of a Chi Square distribution is its degrees of freedom. Chi Square distributions are
positively skewed, with the degree of skew decreasing with increasing degrees of freedom. As
the degrees of freedom increase, the Chi Square Distribution approaches a normal distribution.
Figure 1 shows density functions for three Chi square distributions. Notice how the skew
decreases as the degrees of freedom increases.
Figure 1. Chi Square Distributions with 2, 4, and 6 degrees of freedom

The Chi Square distribution is very important because many test statistics are approximately
distributed as Chi Square. Two of the more commonly used tests based on the Chi Square distribution
are tests of deviations between theoretically expected and observed frequencies (one-
way tables) and tests of the relationship between categorical variables (contingency tables). Numerous
other tests beyond the scope of this work are based on the Chi Square distribution.
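As an illustration of the contingency-table use, the sketch below applies SciPy's chi-square test to the 2 x 2 sex-by-college-plans table from the tabulation example earlier in this document. With several expected counts below 5 the chi-square approximation is rough, so Fisher's exact test is shown alongside it; both calls are standard SciPy functions and the sketch is illustrative only.

from scipy.stats import chi2_contingency, fisher_exact

table = [[7, 7],   # No plans:  males, females
         [4, 2]]   # Yes plans: males, females

chi2_stat, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2_stat:.3f}, p = {p:.3f}, df = {dof}")
print("expected counts:\n", expected)

# With small expected counts an exact test is safer for a 2 x 2 table
oddsratio, p_exact = fisher_exact(table)
print(f"Fisher exact p = {p_exact:.3f}")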
