Market Research Notes

Contents
Chapter 1: Research Fundamentals
Chapter 2: Research Process
Chapter 3: Research Design
Chapter 4: Research Methods
Chapter 5: Data Collection Methods
Chapter 6: Primary and Secondary Data Sources
Chapter 7: Sampling
Chapter 8: Scales and Attitude Measurement

Prepared by Group 1

Chapter 1: Research fundamentals (page nos.)
• Definition of research … 01
• Basic vs applied research … 03
• Market and marketing research … 03
• Information systems, decision support systems, and marketing research … 03
• Advantages of MR … 05
• Limitations of MR … 06
• Application of marketing research … 07

Chapter 2: Research process
• Steps in the process of research … 18
• Problem discovery … 19
• Problem definition … 19
• Research objectives … 19
• Developing hypotheses for a particular problem … 20
• Research design … 21
• Research method … 21
• Data collection – sources and tools … 22
• Sampling methods … 23
• Use of scales in research … 23
• Data processing … 24
• Data analysis … 24
• Research report … 25

Page 1

1. RESEARCH FUNDAMENTALS

MEANING OF RESEARCH

Research in common parlance refers to a search for knowledge. One can also define research as a scientific and systematic search for pertinent information on a specific topic. In fact, research is an art of scientific investigation. The Advanced Learner's Dictionary of Current English lays down the meaning of research as a careful investigation or inquiry, especially through search for new facts in any branch of knowledge. Redman and Mory define research as a "systematized effort to gain new knowledge." Some people consider research a movement from the known to the unknown. It is actually a voyage of discovery. We all possess the vital instinct of inquisitiveness, for when the unknown confronts us, we wonder, and our inquisitiveness makes us probe and attain fuller and fuller understanding of the unknown. This inquisitiveness is the mother of all knowledge, and the method which man employs for obtaining knowledge of whatever is unknown can be termed research.

Research is an academic activity, and as such the term should be used in a technical sense. According to Clifford Woody, research comprises defining and redefining problems; formulating hypotheses or suggested solutions; collecting, organizing and evaluating data; and making deductions and reaching conclusions to determine whether they fit the formulated hypotheses. D. Slesinger and M. Stephenson, in the Encyclopedia of Social Sciences, define research as "the manipulation of things, concepts or symbols for the purpose of generalizing to extend, correct or verify knowledge, whether that knowledge aids in construction of theory or in the practice of an art." Research is thus an original contribution to the existing stock of knowledge, making for its advancement. It is the pursuit of truth with the help of study, observation, comparison and experiment. In short, research is the search for knowledge through an objective and systematic method of finding a solution to a problem.

The systematic approach concerning generalization and the formulation of a theory is also research. As such, the term 'research' refers to the systematic method consisting of enunciating the problem, formulating a hypothesis, collecting facts or data, analyzing the facts, and reaching certain conclusions, either in the form of solutions to the problem concerned or in certain generalizations for some theoretical formulation.

OBJECTIVE OF RESEARCH

The purpose of research is to discover answers through the application of scientific procedures. The main aim of research is to find out the truth which is hidden and which has not been discovered as yet. Though each research study has its own specific purpose, we may think of research objectives as falling into a number of broad groupings:
• To gain familiarity with a phenomenon or to achieve new insights into it (studies with this object in view are termed exploratory or formulative research studies);
• To portray accurately the characteristics of a particular individual, situation or group (studies with this object in view are known as descriptive research studies);
• To determine the frequency with which something occurs or with which it is associated with something else (studies with this object in view are known as diagnostic research studies);

• To test a hypothesis of a causal relationship between variables (such studies are known as hypothesis-testing research studies).

BASIC VS APPLIED RESEARCH

Research can be either applied (or action) research or fundamental (or basic, or pure) research. Applied research aims at finding a solution for an immediate problem facing a society or an industrial/business organization, whereas fundamental research is mainly concerned with generalizations and with the formulation of a theory. "Gathering knowledge for knowledge's sake is termed 'pure' or 'basic' research." Research concerning some natural phenomenon, or relating to pure mathematics, is an example of fundamental research. Similarly, research studies concerning human behavior, carried on with a view to making generalizations about human behavior, are also examples of fundamental research; but research aimed at certain conclusions (say, a solution) for a concrete social or business problem is an example of applied research. Research to identify social, economic or political trends that may affect a particular institution, copy research (research to find out whether certain communications will be read and understood), marketing research, and evaluation research are examples of applied research. Thus, the central aim of applied research is to discover a solution for some pressing practical problem, whereas basic research is directed toward finding information that has a broad base of application and thus adds to the already existing organized body of scientific knowledge.

INFORMATION SYSTEMS, DECISION SUPPORT SYSTEMS, AND MARKETING RESEARCH

An information system (IS) is a continuing and interacting structure of people, equipment, and procedures designed to gather, sort, analyze, evaluate, and distribute pertinent, timely, and accurate information to decision makers. While marketing research is concerned mainly with the actual content of the information and how it is to be generated, the information system is concerned with managing the flow of data from many different projects and secondary sources to the managers who will use it.
This requires a database to organize and store the information, and a decision support system (DSS) to retrieve data, transform it into usable information, and disseminate it to users.

Database

Information systems contain three types of information:
1. Recurring, day-to-day information.
2. Intelligence relevant to the future strategy of the business.
3. Research studies that are not of a recurring nature.

The potential usefulness of a marketing research study can be multiplied manifold if the information is accessible instead of filed and forgotten. However, the potential exists that others may use the study, although perhaps not in the way it was originally intended.

Decision support system


Databases have no value if the insights they contain cannot be retrieved. A decision support system not only allows the manager to interact directly with the database to retrieve what is wanted; it also provides a modeling function to help make sense of what has been retrieved.

Applying the information system to marketing research

The information system serves to emphasize that marketing research should not exist in isolation as a single effort to obtain information. Rather, it should be part of a systematic and continuous effort by the organization to improve the decision-making process.

MARKETING DECISION SUPPORT SYSTEMS

A typical marketing manager regularly receives some or all of the following data: factory shipments or orders; consumer panel data; scanner data; demographic data; and internal cost and budget data. Managers don't want data. They want, and need, decision-relevant information in accessible and preferably graphical form for:
(1) Routine comparison of current performance against past trends on each of the key measures of effectiveness;
(2) Periodic exception reports to assess which sales territories or accounts have not matched previous years' purchases; and
(3) Special analyses to evaluate the sales impact of particular marketing programs and to predict what would happen if changes were made.
In addition, different divisions would like to be linked to enable product managers, sales planners, market researchers, financial analysts and production schedulers to share information. The purpose of a marketing decision support system (MDSS) is to combine marketing data from diverse sources into a single database which line managers can enter interactively to quickly identify problems and obtain standard periodic reports as well as answers to analytical questions.

Characteristics of an MDSS

A good MDSS should have the following characteristics:
1. Interactive. The process of interaction with the MDSS should be simple and direct. With just a few commands the user should be able to obtain results immediately. There should be no need for a programmer in between.
2. Flexible. A good MDSS should be flexible. It should be able to present the available data in either discrete or aggregate form. It should satisfy the information needs of managers at different hierarchical levels and in different functions.
3. Discovery oriented. The MDSS should not only assist managers in solving existing problems but should also help them probe for trends and ask new questions. Managers should be able to discover new patterns and act on them using the MDSS.
4. User friendly. The MDSS should be user friendly. It should be easy for managers to learn and use the system. It should not take hours just to figure out what is going on. Most MDSS packages are menu driven and easy to operate.
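As an illustration, the periodic exception reports described above reduce to a simple query against the database. The sketch below is purely hypothetical – the territory names, figures, and field names are invented, and a real MDSS would run such a query against its own data store:

```python
# Hypothetical sketch of an MDSS-style exception report: flag sales
# territories whose current purchases fall short of the previous year's.
# All territory names and figures are invented for illustration.

sales = [
    {"territory": "North", "this_year": 120, "last_year": 100},
    {"territory": "South", "this_year": 80,  "last_year": 110},
    {"territory": "East",  "this_year": 95,  "last_year": 90},
    {"territory": "West",  "this_year": 60,  "last_year": 85},
]

def exception_report(rows):
    """Return (territory, shortfall) pairs for territories below last year."""
    flagged = []
    for row in rows:
        if row["this_year"] < row["last_year"]:
            shortfall = row["last_year"] - row["this_year"]
            flagged.append((row["territory"], shortfall))
    # Worst shortfalls first, so the manager sees the biggest problems on top.
    return sorted(flagged, key=lambda t: -t[1])

for territory, shortfall in exception_report(sales):
    print(f"{territory}: {shortfall} below last year")
```

The point of the sketch is the interactivity requirement: the manager should be able to ask this kind of question directly, without a programmer in between.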


[Figure: A typical MDSS is assembled from four components – database, analysis, modeling, and display – with which the manager interacts to obtain reports and displays.]

ADVANTAGES OF MR

For decision makers faced with the decisions and doubts described above, what should be the benefits of having research conducted? The most universal, and usually most vital, is this: reduction of uncertainty. If research findings contribute any relevant knowledge of what exists that the decision maker was ignorant of, or if they provide new clues to what is likely in the future, they should enable a more accurate and conclusive decision to be reached. Uncertainty cannot be wholly eliminated by relevant research, but it may be markedly reduced.

Research may also be of benefit in ways not ordinarily thought of as uncertainty reduction:
(1) Problems may come to light that otherwise would not be known until they became very serious or even insoluble;
(2) Objectives may come under reevaluation when evidence indicates that (a) they may be too high to be feasible under expected conditions or (b) they should be higher because of an overlooked opportunity;
(3) Better alternatives may be revealed, or their conception stimulated;
(4) Marketing research may be useful as evidence in legal matters.
We would call attention to other benefits, such as the psychological one of making the decision maker feel more confident and willing to be decisive. Prejudice against new ideas may be overcome by evidence from the marketplace. Sociologically, research can keep executives attuned to changing consumer needs and wants and to the impacts of consumerism. Less laudable are political motives for marketing research, as with the executive who wants it to confirm some preconceived idea and overcome rivals in the organization (but would suppress the findings if they failed to confirm it).


LIMITATIONS OF MR

Some of the limitations faced by researchers in MR are:
1. The lack of scientific training in the methodology of research is a great impediment for researchers in our country. There is a paucity of competent researchers. Many researchers take a leap in the dark without knowing research methods. Most of the work which goes in the name of research is not methodologically sound. Research, to many researchers and even to their guides, is mostly a scissors-and-paste job without any insight shed on the collated materials. The consequence is obvious: the research results quite often do not reflect the reality or realities. Before undertaking research projects, researchers should be well equipped with all the methodological aspects. As such, efforts should be made to provide short-duration intensive courses to meet this requirement.
2. There is insufficient interaction between university research departments on one side and business establishments, government departments and research institutions on the other. A great deal of primary data of a non-confidential nature remains untouched/untreated by researchers for want of proper contacts. Efforts should be made to develop satisfactory liaison among all concerned for better and more realistic research. There is a need for developing some mechanism of a university–industry interaction programme, so that academics can get ideas from practitioners on what needs to be researched and practitioners can apply the research done by the academics.
3. Most business units in our country do not have the confidence that the material supplied by them to researchers will not be misused, and as such they are often reluctant to supply the needed information. The concept of secrecy seems so sacrosanct to business organizations in the country that it proves an impermeable barrier to researchers. Thus, there is the need for generating confidence that the information/data obtained from a business unit will not be misused.
4. Research studies overlapping one another are undertaken quite often for want of adequate information. This results in duplication and fritters away resources. This problem can be solved by proper compilation and revision, at regular intervals, of a list of subjects on which research is going on and the places where it is being carried out, along with the research problems in various disciplines of applied science which are of immediate concern to industry.
5. There does not exist a code of conduct for researchers, and inter-university and interdepartmental rivalries are also quite common. Hence, there is a need for developing a code of conduct for researchers which, if adhered to sincerely, can overcome this problem.
6. Many researchers in our country also face the difficulty of inadequate and untimely secretarial assistance, including computational assistance. This causes unnecessary delays in the completion of research studies. All possible efforts should be made so that efficient secretarial assistance is made available to researchers, and that too well in time. The University Grants Commission must play a dynamic role in solving this difficulty.
7. Library management and functioning are not satisfactory at many places, and much of the time and energy of researchers is spent in tracing out the books, journals, reports, etc., rather than in tracing out relevant material from them.
8. There is also the problem that many of our libraries are not able to get copies of old and new Acts/Rules, reports and other government publications in time. This problem is felt more in libraries that are located away from Delhi and/or the state capitals. Thus, efforts should be made for regular and speedy supply of all governmental publications to our libraries.

APPLICATION OF MARKETING RESEARCH

TRADITIONAL APPLICATIONS OF MARKETING RESEARCH

Traditionally, marketing decisions have been divided into the 4 Ps – product, price, promotion and place decisions.

New-product research

New product development is critical to the life of most organizations as they adapt to their changing environment. Since, by definition, new products contain unfamiliar aspects for the organization, there will be uncertainty associated with them. New-product research can be divided into four stages:

1. Concept generation (need identification and concept identification)
2. Concept evaluation and development
3. Product evaluation and development
4. Testing the marketing program

1. Concept generation

There are two types of concept-generation research:
a. Need identification. The emphasis in need research is on identifying unfilled needs in the market. Marketing research can identify needs in various ways. Some are qualitative; others, such as segmentation studies, can be quantitative. Following are some examples:
i. Perceptual maps, in which products are positioned along the dimensions by which users perceive and evaluate them, can suggest gaps into which new products might fit. Multidimensional scaling is used to generate these perceptual maps.

ii. Social and environmental trends can be analyzed.
iii. An approach termed benefit structure analysis has product users identify the benefits desired and the extent to which the product delivers those benefits, for specific applications. The result is an identification of benefits sought that current products do not deliver.
iv. Lead user analysis is an approach in which, instead of just asking users what they have done, their solutions are collected more formally. Lead users are positioned to benefit significantly from solving the problems associated with these needs. Once a lead user is identified, the concepts that the company or person generates are tested.
b. Concept identification. During the new-product development process there is usually a point where a concept is formed but there is no tangible, usable product that can be tested. The concept should be defined well enough that it is communicable. There may be simply a verbal description, or there may be a rough idea for a name, a package, or an advertising approach. The role of marketing research at this stage is to determine whether the concept warrants further development and to provide guidance on how it might be improved and refined.

2. Product evaluation and development

Product evaluation and development, or product testing, is very similar to concept testing, in terms of both objectives and techniques. The aim is still to predict market response to determine whether or not the product should be carried forward.
a. Use testing. The simplest form of use testing gives users the product and, after a reasonable amount of time, asks for their reactions and their intentions to buy it.
b. Predicting trial. Trial levels (the percentage of a sample of consumers who have purchased the product at least once within 12 months after launch) were predicted on the basis of three variables:
• Product class penetration (PCP)
• Promotional expenditure
• Distribution of the product
c. Pretest marketing. Two approaches are used to predict the new brand's market share.
• The first is based on preference judgments. The preference data are used to predict the proportion of purchases of the new brand that respondents will make, given that the new brand is in their response set.
• The second approach involves estimating trial and repeat-purchase levels based on the respondents' purchase decisions and intentions-to-buy judgments.

3. Test marketing

Test marketing allows the researcher to test the impact of the total marketing program, with all its interdependencies, in a market context, as opposed to the artificial context associated with the concept and product tests discussed above. Test marketing has two primary functions:
• The first is to gain information and experience with the marketing program before making a total commitment to it.
• The second is to predict the program's outcome when it is applied to the total market.
There are really two types of test markets:

1. Sell-in test markets are cities in which the product is sold just as it would be in a national launch. In particular, the product has to gain distribution space.
2. Controlled-distribution scanner markets are cities in which distribution is prearranged and the purchases of a panel of customers are monitored using scanner data.

Really new products

Really new products normally take a long time (sometimes 15 to 20 years) from conception to national introduction. Really new products (RNPs) are those that:
• Create or expand a new category, thereby making cross-category competition the key (e.g., fruit teas versus soft drinks);
• Are new to customers, for whom substantial learning is often required (i.e., what the product can be used for, what it competes with, why it is useful);
• Raise broad issues such as appropriate channels of distribution and organizational responsibility;
• Create (sometimes) a need for infrastructure, software and add-ons.

PRICING RESEARCH

Research may be used to evaluate alternative price approaches for new products before launch, or for proposed changes to products already on the market. There are two general approaches to pricing research:
1. The first is the well-established Gabor-Granger method. In this method, different prices for a product are presented to respondents (often by using test-priced packages), and the corresponding number of affirmative purchase intentions at each price is recorded.
2. In the second approach, respondents are shown different sets of brands in the same product category, at different prices, and are asked which they would buy. This multibrand-choice method allows respondents to take into account competitors' brands, as they normally would outside such a test. As such, this technique represents a form of simulation of the point of sale.
Decisions regarding price ranges for new products have to be made early in the development stage.
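The Gabor-Granger logic described above can be sketched as follows. This is a hypothetical illustration – the candidate prices and intention counts are invented, and a real study would fit a smoothed demand curve rather than read raw counts:

```python
# Hypothetical Gabor-Granger-style sketch. Each candidate price was shown to
# the same sample of respondents; we record how many gave an affirmative
# purchase intention, then pick the revenue-maximizing price on the implied
# demand curve. All prices and counts below are invented for illustration.

intentions = {2.0: 160, 2.5: 130, 3.0: 100, 3.5: 60, 4.0: 30}
# price -> affirmative purchase intentions (out of the same 200 respondents)

def best_price(intentions):
    """Return (price, predicted sample revenue) for the revenue-maximizing price."""
    # Predicted revenue at each price = price * number of willing buyers.
    return max(((p, p * n) for p, n in intentions.items()), key=lambda t: t[1])

price, revenue = best_price(intentions)
```

When unit costs matter, the same screen can maximize contribution (price minus unit cost, times buyers) instead of revenue.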
A product concept cannot be tested fully, for example, without indicating its price, and when the product is ready to be introduced, a decision must be made about its specific price. Decisions on price changes – should we change the price and, if so, in which direction and by how much? – will then need to be made over the product's life cycle. Either of two pricing strategies can be followed:
1. Skimming strategy. The skimming strategy is based on the concept of pricing the product at the point at which profits will be greatest, until market conditions change or supply costs dictate a price change. Under this strategy, the optimal price is the one that results in the greatest positive difference between total revenues and total costs.
2. Share-penetration strategy.


The penetration strategy is based on the concept that average unit production costs continue to go down as cumulative output increases. Potential profits in the early stages of the product life cycle are sacrificed in the expectation that higher volumes in later periods will generate sufficiently greater profits to result in an overall profit for the product over its life.

Distribution research

Traditionally, the distribution decisions in marketing strategy involve the number and location of salespersons, retail outlets and warehouses, and the size of the discount to be offered. The discount offered to members of the channel of distribution is usually determined by what is being offered for existing or similar products, and also by whether the firm wants to follow a "push" or a "pull" strategy. Marketing research, however, plays an important role in decisions about numbers and locations.
a. Warehouse and retail location research. The essential question to be answered before a location decision is made is: "What costs and delivery times would result if we choose one location over another?" Simulation of scenarios is used to answer this question. The simulation can be a relatively simple paper-and-pencil exercise for the location of a single warehouse in a limited geographic area, or it can be a complex, computerized simulation of a warehousing system for a regional or national market.
i. Center-of-gravity simulation. The center-of-gravity method of simulation is used to locate a single warehouse or retail site. In this method, the approximate location that will minimize the distance to customers, weighted by the quantities purchased, is determined. The more symmetry there is in customer locations and weights, the more nearly the initial calculation approximates the optimal location. The location indicated by the first calculation can be checked to determine whether it is optimal (or near optimal) by using a "confirming" procedure. If it is not optimal, successive calculations can be made as necessary to "home in" on the best location.
ii. Computerized simulation models. The concept involved in simulations for this purpose is quite simple. Data that describe customer characteristics (location of plants, potential warehouse and retail sites) and distribution costs (costs per mile by volume shipped, fixed and variable costs of operating each warehouse, the effect of shipping delays on customer demand) are generated and input into the computer. The computer is programmed to simulate various combinations of numbers and locations of warehouses, and to indicate which one(s) gives the lowest total operating cost. Effective results have been achieved by using computer simulations to design distribution systems.
iii. Trade area analysis. Formal models have been developed that can be used to predict the trading area of a given shopping center or retail outlet based on relative size, travel time, and image. A variety of other techniques can be used to establish trading areas. An analysis of the addresses of credit card customers, or of license plates (by plotting the addresses of the car owners), can provide a useful estimate of the trading area. Check-clearance data can be used to supplement this information. The best, but also the most expensive, way of establishing trading-area boundaries is to conduct surveys to determine them.

iv. Outlet location research. Individual companies and, more commonly, chains, financial institutions with multiple outlets, and franchise operations must decide on the physical location of their outlet(s). One general method involves plotting the area surrounding the potential site in terms of residential neighborhoods, income levels, and competitive stores. Regression models have also been used in location studies for a variety of retail outlets, including banks, grocery stores, liquor stores, chain stores and hotels. Data for building the model and for evaluating new potential locations are obtained through secondary data analysis and surveys.
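The first approximation in the center-of-gravity method described in (i) above is just a demand-weighted centroid of customer locations. A minimal sketch, with invented customer coordinates and purchase quantities:

```python
# Hypothetical center-of-gravity sketch for a single warehouse location:
# the first approximation is the demand-weighted average of customer
# coordinates. Customer positions and volumes are invented for illustration.

customers = [
    # (x, y, quantity purchased)
    (0.0, 0.0, 10),
    (10.0, 0.0, 30),
    (0.0, 10.0, 40),
    (10.0, 10.0, 20),
]

def center_of_gravity(customers):
    """Demand-weighted centroid of customer locations."""
    total = sum(q for _, _, q in customers)
    x = sum(xi * q for xi, _, q in customers) / total
    y = sum(yi * q for _, yi, q in customers) / total
    return x, y

site = center_of_gravity(customers)
```

The "confirming" step mentioned in the text would then recompute the site with each customer reweighted by quantity divided by its distance to the current site, iterating until the location stops moving.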

Number and location of sales representatives

How many sales representatives should there be in a given territory? There are three general research methods for answering this question:
• The first, the sales effort approach, is applicable when the product line is first introduced and there is no operating history to provide sales data.
• The second involves the statistical analysis of sales data and can be used after the sales program is under way.
• The third involves a field experiment and is also applicable only after the sales program has begun.

Promotion research

Promotion research focuses on the decisions that are commonly made when designing a promotion strategy. The decisions for the promotion part of a marketing strategy can be divided into (1) advertising and (2) sales promotion. Sales promotion affects the company in the short term, whereas advertising decisions have long-term effects. Companies spend more time and resources on advertising research than on sales promotion research because of the greater risk and uncertainty in advertising decisions.

1. Advertising research

Most companies concentrate on advertising research because advertising decisions are more costly and risky than sales promotion decisions. Advertising research typically involves generating information for making decisions in the awareness, recognition, preference and purchasing stages. What separates an effective advertisement from a dud? The criteria will depend on the brand involved and its advertising objective. However, four basic categories of responses are used in advertising research in general and copy testing in particular:
a) Advertisement recognition
b) Recall of the commercial and its contents
c) The measure of commercial persuasion
d) The impact on purchase behavior
• Purchase behavior
- Coupon-stimulated purchasing
- Split-cable tests. Information Resources Inc.'s (IRI) BehaviorScan is one of several split-cable testing operations. BehaviorScan monitors the purchases of panel members as well as in-store information such as special prices, features and displays.
• Tracking studies

When a campaign is running, its impact often is monitored via a tracking study. Periodic sampling of the target audience provides a time trend of the measures of interest. The purpose is to evaluate and reassess the advertising campaign, and perhaps also to understand why it is or is not working. Among the measures that often are tracked are advertisement awareness, awareness of elements of the advertisement, brand awareness, beliefs about brand attributes, brand image, occasions of use, and brand preference. Of particular interest is knowing how the campaign is affecting the brand, as opposed to how the advertisement is communicating the message.
• Diagnostic testing
A whole category of advertising research methods is designed primarily not to test the impact of a total ad, but rather to help creative people understand how the parts of the ad contribute to its impact. Which parts are weak, and how do they interact? Most of these approaches can be applied to mock-ups of proposed ads as well as to finished ads.
• Copy test validity
Copy test validity refers to the ability to predict advertising response.
• Budget decision
Arriving at analytical, research-based judgments as to the optimal advertising budget is surprisingly difficult. However, there are research inputs that can be helpful. Tracking studies that show advertising either surpassing or failing to reach communication objectives can suggest that the budget should be reduced or increased.
• Media research
In evaluating a particular media alternative, it is necessary to know how many advertising exposures it will deliver and what the characteristics of the audience will be. A first cut at a vehicle's value is the cost per thousand (circulation): the advertisement insertion cost divided by the size of the audience, in thousands.

2. Sales promotion research

There are three major types of sales promotion: consumer promotions, retailer promotions and trade promotions.
In consumer promotions, manufacturers offer promotions directly to consumers, whereas retailer promotions involve promotions by retailers to consumers. Trade promotions involve manufacturers offering promotions to retailers or other trade entities. Trade entities can also promote to each other; for example, a distributor can offer a steep temporary price cut to retailers in order to sell excess inventory. We still call these trade promotions, since the recipient of the promotion is a marketing intermediary. Sometimes several manufacturers or several retailers combine in one promotion; these are called cooperative promotions or promotion partnerships.

[Figure: The manufacturer reaches the trade through trade promotions and consumers through consumer promotions; retailers reach consumers through retailer promotions.]
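Returning to the media research point above, the cost-per-thousand screen is a one-line calculation. The vehicles and figures below are invented for illustration:

```python
# Hypothetical CPM (cost-per-thousand) screen for media vehicles:
# insertion cost divided by audience size, expressed per thousand people.
# Vehicle names and figures are invented for illustration.

vehicles = [
    ("Magazine A", 25_000, 1_000_000),    # (name, insertion cost, audience size)
    ("Magazine B", 12_000, 400_000),
    ("Newspaper C", 30_000, 1_500_000),
]

def cpm(cost, audience):
    """Cost to reach one thousand audience members."""
    return cost / audience * 1000

# Cheapest exposure first; this is only a first cut, before audience
# quality and fit with the target segment are considered.
ranked = sorted(vehicles, key=lambda v: cpm(v[1], v[2]))
```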

CONTEMPORARY APPLICATIONS OF MARKETING RESEARCH

1. Competitive Advantage. The notion that achieving superior performance requires a business to gain and hold an advantage competitors is central to contemporary strategic thinking. Businesses seeking advantage are exhorted to develop distinctive competencies at the lowest delivered cost or to achieve differentiation through superior value. The assessing competitive advantage can be done in number of ways. The methods can be broadly classified as market-based and processbased assessment. Market-based assessment is direct comparison with a few target competitors, whereas process-based assessment is a comparison of the methods employed. 2. Brand Equity. Brand equity is defined as a set of assets and liabilities linked to a brand that add to or subtract from the value of a product or service to a company and/ or its customers. The assets or liabilities that underlie brand equity must be linked to the name and/or symbol of the brand. The assets and liabilities on which brand equity is based will differ from context to context. However, they can be usefully grouped into five categories: a) Brand loyalty b) Name awareness c) Perceived quality d) Brand association In addition to perceived quality e) Other proprietary brand assets: patents, trademarks, channel relationships etc. Brand loyalty Provides Value to Customers by enhancing Customer’s • Interpretation/ processing of information • Confidence in the purchase decision Provides Value to firm by enhancing • Efficiency and effectivesness • Brand loyalty • Prices/margins • Brand extension • Trade leverage

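One illustrative way (not a standard measurement model) to work with the five brand-equity categories is to combine category ratings into a single composite score. In the sketch below, the weights and the 0–10 ratings are entirely hypothetical, chosen only to show the mechanics.

```python
# Minimal sketch: combining the five brand-equity categories into one
# composite score. Weights and ratings are hypothetical, for illustration.

# Hypothetical importance weights for each category (sum to 1.0)
WEIGHTS = {
    "brand_loyalty": 0.30,
    "name_awareness": 0.20,
    "perceived_quality": 0.25,
    "brand_associations": 0.15,
    "other_proprietary_assets": 0.10,
}

def brand_equity_score(ratings):
    """Weighted average of 0-10 category ratings."""
    return sum(WEIGHTS[cat] * ratings[cat] for cat in WEIGHTS)

ratings = {
    "brand_loyalty": 8,
    "name_awareness": 9,
    "perceived_quality": 7,
    "brand_associations": 6,
    "other_proprietary_assets": 5,
}
print(round(brand_equity_score(ratings), 2))
```

In practice the weights themselves would be a research question, since the categories that matter most differ from context to context, as noted above.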

3. Customer satisfaction. The measurement of customer satisfaction and its link to product/service attributes is the vehicle for developing a market-driven quality approach. This approach requires a sequential research design that uses the results from each research phase to build and enhance the value of subsequent efforts. During this process, it is imperative to study customers who were lost, to determine why they left; this issue must be addressed early in the system design. The steps involved in customer satisfaction research are:
a) Define goals and how the information will be used
b) Discover what is really important to customers and employees
c) Measure critical needs
d) Act on the information
e) Measure performance over time
f) Address issues of questionnaire design and scaling in satisfaction research
4. Total quality management. TQM is a process of managing complex changes in the organization with the aim of improving quality. The power of measurement is clearly visible in applications of quality function deployment (QFD), a Japanese import used to make product design better reflect customer requirements. In QFD, a multifunctional team measures and analyzes in great detail both customer attitudes and product attributes. Marketing research plays a crucial role at this stage of the process. The team then creates a visual matrix in order to find ways to modify product attributes (engineering characteristics) so as to improve the product on the customer-based measures of product performance. Along the way, the team must develop a series of measures of several different types.

EMERGING APPLICATIONS OF MARKETING RESEARCH
1. Database marketing
A database is a customer list to which information about the characteristics and transactions of those customers has been added. Businesses use it to cultivate existing customers as well as to seek new ones.
Need
A database provides the means for research to support decisions. It enables profiling of customers by searching for prospects who are similar to existing customers. It provides the means for implementing profitable programs of repeat business and cross-selling, and it assists in marketing planning and forecasting. Further, a database can:
• Match products or services to customers' wants and needs
• Help select new lists or use new media that fit the profile of existing customers
• Maximize personalization of all offers to each customer
• Provide for ongoing interaction with customers and prospects
• Pinpoint ideal timing and frequency for promotions
• Measure response and be accountable for results
• Help create the offers most likely to elicit responses from customers
• Help achieve a unique selling proposition (USP), targeted to appeal to your customers
• Integrate direct-response communication with other forms of advertising
• Demonstrate that customers are valuable assets
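The profiling idea above — searching for prospects who are similar to existing customers — can be sketched minimally. The field names, the profile (average age and dominant location), and the similarity rule below are all hypothetical simplifications.

```python
# Sketch: profile existing customers, then rank prospects by how closely
# they match that profile. Fields and matching rule are hypothetical.

customers = [  # existing customers with known characteristics
    {"age": 34, "urban": True,  "orders": 12},
    {"age": 41, "urban": True,  "orders": 8},
    {"age": 29, "urban": False, "orders": 15},
]

prospects = [
    {"name": "A", "age": 36, "urban": True},
    {"name": "B", "age": 62, "urban": False},
]

# Build a simple profile: average age and dominant location of customers
avg_age = sum(c["age"] for c in customers) / len(customers)
mostly_urban = sum(c["urban"] for c in customers) > len(customers) / 2

def similarity(p):
    """Smaller age gap and a matching location give a higher score."""
    score = -abs(p["age"] - avg_age)  # penalize age distance
    if p["urban"] == mostly_urban:
        score += 10                   # bonus for location match
    return score

ranked = sorted(prospects, key=similarity, reverse=True)
print([p["name"] for p in ranked])
```

A real database-marketing system would profile on many more variables (demographics, lifestyle, purchase history), but the ranking logic is the same in spirit.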

Types of database
1. Active customers
2. Inactive customers
3. Inquiries

Benefits of database marketing
a) Customers are easier to retain than to acquire. First, it takes roughly five times the energy and budget to get a new customer as it does to keep an existing one. Also, a disproportionately small number of your customers generate a very large proportion of your income.
b) Determining their "lifetime value". Building a lasting relationship becomes the obvious way to a prosperous and profitable future.
c) Developing relationships with customers. Understanding your customers' tastes and preferences on an individual basis is the foundation for relationship marketing. Relationship marketing combines elements of general advertising, sales promotion, public relations and direct marketing to create more effective ways of reaching consumers. It centers on developing a continuous relationship with consumers across a family of related products and services.

2. Relationship marketing
The relationship marketing process incorporates three key elements:
1. Identifying and building a database of current and potential consumers, which records and cross-references a wide range of demographic, lifestyle and purchase information.
2. Delivering differential messages to these people through established and new media channels, based on the consumers' characteristics and preferences.
3. Tracking each relationship to monitor the cost of acquiring the consumer and the lifetime value of his or her purchases.
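The "lifetime value" mentioned above is commonly sketched as the discounted sum of the margins a customer is expected to generate, weighted by the probability that the customer is still retained in each period. The annual margin, retention rate, and discount rate below are hypothetical illustration values.

```python
# Sketch: customer lifetime value (CLV) as the discounted sum of expected
# annual margins, weighted by the probability the customer is retained.
# All parameter values are hypothetical.

def lifetime_value(annual_margin, retention_rate, discount_rate, years):
    clv = 0.0
    for t in range(1, years + 1):
        survival = retention_rate ** (t - 1)  # P(customer still active in year t)
        clv += annual_margin * survival / (1 + discount_rate) ** t
    return clv

# e.g. $100 margin per year, 80% retention, 10% discount rate, 5-year horizon
print(round(lifetime_value(100.0, 0.8, 0.1, 5), 2))
```

The point of the calculation is the one made in (b) above: a retained customer is worth far more than a single transaction, which is why retention-oriented relationship marketing pays.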


Chapter 2. RESEARCH PROCESS

STEPS IN THE PROCESS OF RESEARCH

The research process operates within the marketing planning and information system: the planning system supplies strategic and tactical plans, and the information system supplies the database and decision support system (DSS). The steps are:

1. Agree on the research purpose: problems or opportunities, decision alternatives, research users.
2. Establish research objectives: research questions, hypotheses, boundaries of the study.
3. Estimate the value of the information: is the benefit greater than the cost? If not, do not conduct marketing research.
4. Design the research: choose among alternative research approaches, specify the sampling plan, design the experiment, design the questionnaire.
5. Collect the data.
6. Prepare and analyze the data.
7. Report the research results and provide strategic recommendations.

PROBLEM DISCOVERY
Problem discovery involves a search for causation among problems, symptoms, and decisions. A symptom is a condition that indicates the existence of a problem, and we must be careful not to confuse a symptom with the problem itself. Symptoms occupy an essential place in the problem-solving process as indicators of the underlying problem. A problem exists whenever one faces a question whose answer – or a need whose fulfillment – involves doubt and uncertainty. If there is no answer or

solution, there is no problem (although the consequences might be terrible); and if there is only a single possible answer or solution, there is no problem. A decision is a determination or resolution of a question. In the terms of a business executive, a decision is the determination of a course of action to be taken. Many routine or repetitive decisions to which marketing research is applied involve a complex of problems, and considerable work is entailed in the choice of the best available course of action. Business problems are not found by surprise or accidental circumstances. The persons who find problems are sensitized to be on the alert and are prepared to find them; there is always evidence that the searching mind penetrates with insight. Our abilities can go beyond intuition or a sixth sense, and fortunately there are means available to sharpen our capacities in problem discovery. First, an understanding of the different types of difficulties or symptoms which may call for decisions is useful. Second, a marketing information system may often signal the existence of a problem to a decision maker.

PROBLEM DEFINITION
The first step in any marketing research project is to define the problem. In defining the problem, the researcher should take into account the purpose of the study, the relevant background information, what information is needed, and how it will be used in decision making. Problem definition involves discussion with the decision makers, interviews with industry experts, analysis of secondary data, and, perhaps, some qualitative research, such as focus groups. Once the problem has been precisely defined, the research can be designed and conducted properly.

RESEARCH OBJECTIVES
The research objective is a statement, in as precise terminology as possible, of what information is needed. The research objective should be framed so that obtaining the information will ensure that the research purpose is satisfied.
Research objectives have three components:
1. Research question: It specifies the information the decision maker needs. The research question asks what specific information is required to achieve the research purpose. If the research question is answered by the research, then the information should aid the decision maker.
2. Development of hypotheses: A hypothesis is a possible answer to a research question. The research determines which of these alternative answers is correct. There are three main sources of hypotheses:
a. The researcher can draw on previous research to generate hypotheses for future large-scale research efforts. The research purpose might be deciding whether to conduct the large-scale studies.
b. A second source is theory from disciplines such as psychology, sociology, marketing or economics. Thus, economic theory might suggest the importance of price in explaining a loss of retail sales.
c. The most important source is the manager's experience with related problems, coupled with knowledge of the problem situation and the use of judgment.
3. Research boundaries: Hypothesis development helps make the research question more precise. Another approach is to indicate the scope of the research, or the research

boundaries. For example, is the interest in current customers only, or in all potential customers?

Hypothesis development model: sources of hypotheses (theory, management experience, exploratory research) generate hypotheses, which link the research purpose and research question to the research objective and the research design.

Development of an approach to the problem includes formulating an objective or theoretical framework, analytical models, research questions, hypotheses, and identifying characteristics or factors that can influence the research design. This process is guided by discussions with management and industry experts, case studies and simulations, analysis of secondary data, qualitative research, and pragmatic considerations.

RESEARCH DESIGN
The research problem having been formulated in clear terms, the researcher will be required to prepare a research design, i.e., he will have to state the conceptual structure within which the research would be conducted. The preparation of such a design facilitates research that is as efficient as possible, yielding maximal information. How this can be achieved depends mainly on the research purpose. Research purposes may be grouped into four categories, viz., (i) Exploration, (ii) Description, (iii) Diagnosis, and (iv) Experimentation. A flexible research design, which provides opportunity for considering many different aspects of a problem, is considered appropriate if the purpose of the research study is that of

exploration. But when the purpose happens to be an accurate description of a situation or of an association between variables, the suitable design will be one that minimizes bias and maximizes the reliability of the data collected and analyzed. There are several research designs, such as experimental and non-experimental hypothesis testing. Experimental designs can be either informal designs (such as before-and-after without control, after-only with control, and before-and-after with control) or formal designs (such as completely randomized design, randomized block design, Latin square design, and simple and complex factorial designs), out of which the researcher must select one for his own project. The preparation of a research design appropriate for a particular research problem usually involves consideration of the following:
(i) the means of obtaining the information;
(ii) the availability and skills of the researcher and his staff (if any);
(iii) an explanation of the way in which the selected means of obtaining information will be organized, and the reasoning leading to the selection;
(iv) the time available for research; and
(v) the cost factor relating to research, i.e., the finance available for the purpose.

RESEARCH METHOD
In dealing with any real-life problem it is often found that the data at hand are inadequate, and hence it becomes necessary to collect fresh data. The available methods of collection differ considerably in the money costs, time and other resources they demand of the researcher. Primary data can be collected either through experiment or through survey. If the researcher conducts an experiment, he observes some quantitative measurements, or data, with the help of which he examines the truth contained in his hypothesis. In the case of a survey, data can be collected in one or more of the following ways:
(i) By observation: This method implies the collection of information by way of the investigator's own observation, without interviewing the respondents.
The information obtained relates to what is currently happening and is not complicated by either the past behavior or the future intentions or attitudes of respondents. This method is, however, an expensive one, and the information it provides is also very limited. As such, this method is not suitable in inquiries where large samples are concerned.
(ii) Through personal interviews: The investigator follows a rigid procedure and seeks answers to a set of pre-conceived questions through personal interviews. This method of collecting data is usually carried out in a structured way, where the output depends to a large extent upon the ability of the interviewer.
(iii) Through telephone interviews: This method involves contacting the respondents by telephone. It is not a very widely used method, but it plays an important role in industrial surveys in developed regions, particularly when the survey has to be accomplished in a very limited time.
(iv) By mailing of questionnaires: The researcher and the respondents do not come in contact with each other if this method of survey is adopted. Questionnaires are mailed to the respondents with a request to return them after completion. This is the most extensively used method in various economic and business surveys. Before applying this method, a pilot study for testing the questionnaire is usually conducted, which reveals any weaknesses of the questionnaire. The questionnaire to be used must be prepared very carefully so that it proves effective in collecting the relevant information.

(v) Through schedules: Under this method enumerators are appointed and given training. They are provided with schedules containing the relevant questions. Data are collected by the enumerators, who fill up the schedules on the basis of the replies given by respondents. Much depends upon the capability of the enumerators so far as this method is concerned; occasional field checks on their work may ensure sincere work.

The researcher should select one of these methods of collecting the data, taking into consideration the nature of the investigation, the objective and scope of the inquiry, financial resources, available time and the desired degree of accuracy. Though he should pay attention to all these factors, much depends upon the ability and experience of the researcher.

DATA COLLECTION
The research design has a wide variety of methods to consider, either singly or in combination. They can be grouped first according to whether they use secondary or primary sources of data.
• Secondary data are already available, because they were collected for some purpose other than solving the present problem.
• Primary data are collected especially to address a specific research objective. A variety of methods, ranging from qualitative research to surveys to experiments, may be employed.

SAMPLING METHODS
There are different types of sample designs based on two factors, viz., the representation basis and the element selection technique. On the representation basis, the sample may be a probability sample or a non-probability sample. Probability sampling is based on the concept of random selection, whereas non-probability sampling is 'non-random' sampling. On the element selection basis, the sample may be either unrestricted or restricted. When each sample element is drawn individually from the population at large, the sample so drawn is known as an 'unrestricted sample', whereas all other forms of sampling are covered under the term 'restricted sampling'. The following chart exhibits the sample designs:

Element selection technique | Probability sampling | Non-probability sampling
Unrestricted sampling | Simple random sampling | Haphazard or convenience sampling
Restricted sampling | Complex random sampling (cluster sampling, systematic sampling, stratified sampling, etc.) | Purposive sampling (quota sampling, judgment sampling)

1. Probability sampling
a. Simple random sampling
b. Systematic random sampling
c. Stratified random sampling
d. Cluster sampling
e. Multi-stage sampling
2. Non-probability sampling
a. Judgment sampling
b. Quota sampling
c. Convenience sampling
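The first three probability designs listed above can be illustrated with a short standard-library sketch. The population of 100 units and the two strata are made up purely for illustration.

```python
import random

random.seed(42)  # fixed seed so the draws are reproducible

population = list(range(1, 101))  # hypothetical population of 100 units

# a. Simple random sampling: every unit has an equal chance of selection
simple = random.sample(population, 10)

# b. Systematic sampling: a random start, then every k-th unit thereafter
k = len(population) // 10
start = random.randrange(k)
systematic = population[start::k]

# c. Stratified sampling: split into strata, then sample from each separately
strata = {"low": population[:50], "high": population[50:]}
stratified = {name: random.sample(units, 5) for name, units in strata.items()}

print(len(simple), len(systematic), sum(len(s) for s in stratified.values()))
```

Each design yields a sample of 10 here, but they differ in how selection chances are spread across the population — which is exactly the representation-basis distinction drawn in the chart above.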

USES OF SCALES IN RESEARCH
In research, the concepts to be measured are often complex and abstract, and we do not possess standardized measurement tools for them. Alternatively, we can say that while measuring attitudes and opinions we face the problem of their valid measurement. To deal with this problem, scaling techniques are used. The different types of scaling methods are:
1. Rating scales
a. The graphic rating scale
b. The itemized rating scale
2. Ranking scales
a. Method of paired comparison
b. Method of rank order
3. Arbitrary scales
4. Differential scales (Thurstone-type scales)
5. Summated scales (Likert scales)
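A summated (Likert) scale of the kind listed above is scored by summing the item responses, after reverse-scoring negatively worded items so that a higher total always means a more favourable attitude. The items and responses below are hypothetical.

```python
# Sketch: scoring a 5-point Likert (summated) scale.
# Responses run 1 (strongly disagree) to 5 (strongly agree).

def likert_score(responses, reversed_items):
    """Sum item responses, reverse-scoring negatively worded items."""
    total = 0
    for item, answer in responses.items():
        if item in reversed_items:
            answer = 6 - answer  # reverse-score on a 1-5 scale
        total += answer
    return total

responses = {"q1": 4, "q2": 5, "q3": 2, "q4": 1}  # hypothetical answers
reversed_items = {"q3", "q4"}                      # negatively worded items
print(likert_score(responses, reversed_items))     # possible range here: 4-20
```

The reverse-scoring step is what makes the summation meaningful: disagreeing with a negative statement counts the same as agreeing with a positive one.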

DATA PROCESSING
The total task of data processing in carrying out the analytical program is to convert crude fragments of observations and responses into orderly statistics for interpretation. The seven stages of data processing are given below:
1. Data preparation. There are three preparation stages necessary in either manual or computer processing: editing, classifying and coding. An additional stage with computers is card punching.
2. Programming. Every data-processing job, whatever the method used, needs preplanning that specifically lays out directions to the persons doing the tabulating (manually) or to the machinery (with computers). This describes specifically the particular operations to take place, with what equipment, by whom, and so forth.
3. Sorting. All the bits of data have to be classified together with the other bits of the same nature, by being sorted into groups.
4. Counting. When the preparatory work has been done, the individual observations can be counted and accumulated in subtotals of the prescribed classifications.

5. Summarizing. The various subtotals and totals are brought together and summarized in tables that exhibit the data in an informative manner.
6. Computations. When computers are employed, various calculations may be performed with the data during the tabulation operations. When other methods are utilized, the computations are performed as separate stages subsequent to the preparation of tables.
7. Control. Means for making proper checks of the accuracy of the data processing are practically essential. This includes examination of the coding and, if machines are used, of the card punching and programs. Also, a base total of the number of questionnaires or other data forms being processed should be determined before the processing begins, thereby providing a total with which to verify that each data breakdown, or analysis, totals exactly to this base figure.

DATA ANALYSIS
After the data have been collected, the researcher turns to the task of analyzing them. The analysis of data requires a number of closely related operations, such as the establishment of categories, the application of these categories to the raw data through coding, tabulation, and then drawing statistical inferences. The unwieldy data should be condensed into a few manageable groups and tables for further analysis. Thus, the researcher should classify the raw data into purposeful and usable categories. Coding is usually done at this stage, through which the categories of data are transformed into symbols that may be tabulated and counted. Editing is the procedure that improves the quality of the data for coding. With coding, the stage is set for tabulation. Tabulation is a part of the technical procedure wherein the classified data are put in the form of tables. Mechanical devices can be made use of at this juncture. Computers tabulate a great deal of data, especially in large inquiries.
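Tabulation of this kind — sorting coded responses into classes, counting them, and summarizing with a control total — can be sketched in a few lines. The response categories below are hypothetical.

```python
from collections import Counter

# Sketch of the sorting, counting and summarizing stages: classify coded
# responses, count each class, and present a frequency table with percentages.
responses = ["yes", "no", "yes", "yes", "no", "undecided", "yes", "no"]

counts = Counter(responses)  # sorting + counting in one step
base = len(responses)        # control total: every data form accounted for

for category, n in counts.most_common():
    print(f"{category:<10} {n:>3} {100 * n / base:5.1f}%")

# Control check (stage 7): the breakdown must total exactly the base figure
assert sum(counts.values()) == base
```

The closing assertion mirrors the control stage described above: each analysis must total back to the base number of forms processed.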
Computers not only save time but also make it possible to study a large number of variables affecting a problem simultaneously. Analysis work after tabulation is generally based on the computation of various percentages, coefficients, etc., by applying well-defined statistical formulae. In the process of analysis, relationships or differences supporting or conflicting with the original or a new hypothesis should be subjected to tests of significance to determine with what validity the data can be said to indicate any conclusions. For instance, if there are two samples of weekly wages, each sample being drawn from factories in different parts of the same city and giving two different mean values, then our problem may be whether the two mean values are significantly different or whether the difference is just a matter of chance. Through the use of statistical tests we can establish whether such a difference is a real one or is the result of random fluctuations. If the difference happens to be real, the inference will be that the two samples come from different universes; if the difference is due to chance, the conclusion will be that the two samples belong to the same universe. Similarly, the technique of analysis of variance can help us in analyzing whether three or more varieties of seeds grown on certain fields yield significantly different results or not. In brief, the researcher can analyze the collected data with the help of various statistical measures.

RESEARCH REPORT
Finally, the researcher has to prepare the report of what has been done. The writing of the report must be done with great care, keeping in view the following:

(1) The layout of the report should be as follows: (i) the preliminary pages, (ii) the main text, and (iii) the end matter.
In its preliminary pages the report should carry the title and date, followed by acknowledgments and a foreword. Then there should be a table of contents, followed by a list of tables and a list of graphs and charts, if any, given in the report.
The main text of the report should have the following parts:
(a) Introduction: It should contain a clear statement of the objective of the research and an explanation of the methodology adopted in accomplishing the research. The scope of the study, along with its various limitations, should also be stated in this part.
(b) Summary of findings: After the introduction there should appear a statement of findings and recommendations in non-technical language. If the findings are extensive, they should be summarized.
(c) Main report: The main body of the report should be presented in logical sequence and broken down into readily identifiable sections.
(d) Conclusions: Towards the end of the main text, the researcher should again put down the results of his research clearly and precisely; in fact, this is the final summing up.
At the end of the report, appendices should be given in respect of all technical data. A bibliography, i.e., a list of books, journals, reports, etc. consulted, should also be given at the end. An index should also be provided, especially in a published research report.
(2) The report should be written in a concise and objective style, in simple language, avoiding vague expressions such as 'it seems', 'there may be', and the like.
(3) Charts and illustrations in the main report should be used only if they present the information more clearly and forcibly.
(4) Calculated 'confidence limits' must be mentioned, and the various constraints experienced in conducting the research operations may as well be stated.

Chapter 3. RESEARCH DESIGN

Topics
1. Impact of Problem Definition on Research Design
2. Concepts Relating to Research Design
3. Types of Research Designs
3.1 Exploratory Research Studies
3.2 Descriptive and Diagnostic Research Studies
3.3 Hypothesis-Testing Research Studies (Experimental Studies)
4. Difference between Exploratory and Descriptive Research
5. Basic Principles of Experimental Design
6. Formal and Informal Experimental Design

1. Impact of Problem Definition on Research Design

- Research Design
- Problem Definition
- Components of a Problem
- Impact of Problem Definition

1.1 Research Design
A research design is the detailed blueprint used to guide a research study toward its objectives. The process of designing a research study involves many interrelated decisions. The most significant decision is the choice of research approach, because it determines how the information will be obtained. To design something also means to ensure that the pieces fit together. The achievement of this fit among objectives, research approach, and research tactics is inherently an iterative process in which earlier decisions are constantly reconsidered in light of subsequent decisions.

1.2 Problem Definition
A problem exists when the decision maker faces uncertainty regarding which action to adopt in a situation. If only one action is available (or none at all), or if there is certainty about the outcomes of the alternatives, there really is no problem. A problem is thus a situation in which:
1) The decision maker has not yet determined how to exploit an opportunity, or
2) There are difficulties that are currently faced or anticipated.
For instance, the marketing manager may state that sales of a product have fallen by 25% because its price is too high, and hence may ask the researcher to throw more light on the question "what is a more effective price?" Actually, the decline in sales may be due to other factors, such as poor product quality, competitors' actions, or poor salesmanship; research dealing solely with price may therefore not be able to solve the problem correctly. The existence of a disorder or problem is the reason why research is needed. Once the problem is identified and the disorder is located, the researcher may set the project's objectives. The project's objectives are the specific purpose or goal of the research; since the objectives flow from the disorder, locating the disorder must precede the selection of the objectives.

1.3 Components of a Problem
A problem consists of a set of specific components:
a) The decision maker and his or her objectives:

The decision maker may not always be a single individual. Marketing decisions may be made by a group of two or more people. Moreover, some members of the group may not agree with the choice made, because of differences either in objectives (i.e., valued outcomes) or in their appraisal of the effectiveness of the means chosen to achieve the objectives. In other situations an individual may be performing the role of agent for some superior or group of superiors.
The objectives of the decision maker provide motivation for the decision. These objectives, or goals, may range from a desire to maintain or increase company profits and market share to personal goals concerned with maintaining prestige and a desire to advance in the corporation. The decision maker's objectives may also be characterized by their hierarchical nature at any given moment and by their evolution over time. For example, an increase in the firm's profits may come about through an increase in the firm's sales, which, in turn, may be accomplished by the firm's sales personnel contacting a greater number of new accounts per month. The goal for the salesperson may be to increase sales contacts 10% over those made in some base period, but this represents a subgoal, consistent, it is hoped, with a higher-level objective. The decision theorist also faces the problem of estimating changes in objectives over time.

b) The environment or context of the problem:

Every problem exists within a context of the characteristics of the company and of the market: consumer tastes and preferences, level of income and rate of growth in the market areas, the degree of competition and competitor action and reaction, and the type and extent of governmental regulation. These environmental factors may individually and collectively affect the outcome of the decision made. The researcher must assist the manager in identifying the relevant environmental factors. Consider the problem of deciding whether to introduce a new consumer product. Some of the environmental factors that could affect the decision are as follows:
• The types of consumers that comprise the potential market
• The size and location of the market
• The prospects for growth or contraction of the market over the planning period
• The buying habits of consumers
• The current competition for the product
• The likelihood and timing of the entry of new competitive products
• The current and prospective competitive position with respect to price, quality, and reputation
• The marketing and manufacturing capabilities of the company
• The situation with respect to patents, trademarks, and royalties
• The situation with respect to codes, trade agreements, taxes, and tariffs

Although this listing is by no means exhaustive, it illustrates some of the more important environmental factors that could influence the outcome of the decision and so must be considered in the problem statement. Each problem has a comparable set of environmental factors to be considered.

c) Alternative courses of action:

A course of action is a specification of some behavioral sequence, such as the construction of a new warehouse, the adoption of a new package design, or the introduction of a new product. All courses of action involve, either implicitly or explicitly, the element of time. For example, “Construct a warehouse, starting next week” is a different course of action from “Construct a warehouse, starting next year.” Actions, of course, can be taken only in the present. A decision to stipulate a program of action becomes a commitment, made in the present, to follow some behavioral pattern in the future. Courses of action may range in complexity from a single act to be implemented immediately to a large set of related acts proceeding either in parallel or sequentially over time. The time interval, which becomes a part of the course of action, may be highly important, since both the costs of implementation and the probabilities of alternative outcomes will typically vary as a function of time. d) Consequences of Alternative Courses of Action;

The world of uncertainty is a familiar world for the marketer. When choosing a course of action, a marketer can rarely be certain of the consequences, since the choice is usually based on incomplete information about the various factors that influence the decision’s outcome. A primary job is thus to list the possible outcomes of various courses of action. But these outcomes will depend on various environmental factors. e) A state of doubt as to which course of action is best;

To solve a problem is to select the best course of action for attaining the decision maker’s objectives. A state of doubt as to which course of action is best can arise under three main classes of conditions: a. Certainty with respect to each course of action leading to a specific outcome. b. Risk with respect to each action leading to a set of possible outcomes, each outcome occurring with a known probability. For example, if a fair coin is tossed, we may assume that over the long run the proportion of heads will approach one-half; however, on any single toss we cannot predict whether a head or tail will appear. c. Uncertainty with respect to outcomes, given a particular course of action. In this view of decision-making we assume that the relative frequencies of the probabilities are not known. One version of this class of models, exemplified in the Bayesian approach to decision making (to be described later), assumes that the decision maker can express various “degrees of belief” as to the occurrence of alternative outcomes. Moreover, the decision maker may be able, in many cases, to collect more information regarding the “true” state of nature.
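Decision under risk — case (b) above, where each outcome occurs with a known probability — can be sketched by weighting each payoff by its probability and choosing the course of action with the highest expected payoff. The actions, payoffs, and probabilities below are hypothetical illustration values.

```python
# Sketch: choosing among courses of action under risk by expected payoff.
# Actions, profits, and probabilities are hypothetical.

actions = {
    # action: list of (probability, payoff) pairs over states of nature
    "launch_product":    [(0.6, 500_000), (0.4, -200_000)],
    "test_market_first": [(0.7, 300_000), (0.3, -50_000)],
    "do_nothing":        [(1.0, 0)],
}

def expected_payoff(outcomes):
    """Probability-weighted average payoff of one course of action."""
    return sum(p * payoff for p, payoff in outcomes)

best = max(actions, key=lambda a: expected_payoff(actions[a]))
print(best, expected_payoff(actions[best]))
```

Under uncertainty (case c), these probabilities are not known; the Bayesian approach mentioned above replaces them with the decision maker's degrees of belief, updated as more information about the true state of nature is collected.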


1.4 Impact of Problem Definition
A carefully formulated problem is a necessary point of departure for competently conducted research. There should be as clear and thorough an understanding as possible, on the part of both the researcher and the decision maker, of the precise purposes of the research. In effect, this statement of purpose involves a translation of the decision maker’s problem into a research problem and study design. The decision maker is faced with a problem for which he or she must recognize alternative courses of action, choosing among them to accomplish one or more objectives. The research problem is to provide relevant information concerning recognized (or newly generated) alternative solutions to aid in this choice. To determine what information is required, the researcher will try to identify and understand the major elements of the problem faced by the decision maker. In a very real sense, problem formulation is the heart of the research process; as such, it represents the single most important step to be performed.

Concepts Relating to Research Design
1. Dependent and independent variables
A concept which can take on different quantitative values is called a variable, e.g. weight, height, income. Variables which can take on quantitatively different values, even in decimal points, are called ‘continuous variables’; age is a continuous variable, but the number of children is a non-continuous (discrete) variable. If one variable depends upon or is a consequence of another variable, it is termed a dependent variable, and the variable that is antecedent to the dependent variable is termed an independent variable. For example, if height depends on age, then height is the dependent variable and age is the independent variable.
2. Extraneous variables
Independent variables that are not related to the purpose of the study, but may affect the dependent variable, are termed extraneous variables.
Suppose the researcher wants to test the hypothesis that there is a relationship between children’s gains in social studies achievement and their self-concepts. In this case self-concept is an independent variable and social studies achievement is a dependent variable. Intelligence may as well affect social studies achievement, but since it is not related to the purpose of the study undertaken by the researcher, it will be termed an extraneous variable.
3. Control
One important characteristic of a good research design is that it minimises the effect of extraneous variable(s). The technical term ‘control’ is used when we design the study so as to minimise the effects of extraneous independent variables. In experimental researches, the term ‘control’ is also used to refer to restraining experimental conditions.
4. Research hypothesis
When a prediction or hypothesised relationship is to be tested by scientific methods, it is termed a research hypothesis. It is a predictive statement that relates an independent variable to a dependent variable.

5. Experimental and non-experimental hypothesis-testing research
In this case the purpose of research is to test a research hypothesis. The research can be of the experimental design or of the non-experimental design. Research in which the independent variable is manipulated is termed ‘experimental hypothesis-testing research’, and research in which the independent variable is not manipulated is called ‘non-experimental hypothesis-testing research’.
6. Experimental and control groups
When, in such research, a group is exposed to usual conditions, it is termed a control group; when a group is exposed to some special conditions, it is termed an experimental group.
7. Treatments
The different conditions under which the experimental and control groups are put are usually referred to as ‘treatments’.
8. Experiment
The process of examining the truth of a statistical hypothesis relating to some research problem is known as an experiment. E.g. an experiment can be conducted to examine the usefulness of a certain newly developed drug. Experiments can be of two types, viz., absolute experiments and comparative experiments.
9. Experimental unit(s)
The pre-determined plots or blocks, where different treatments are applied, are known as experimental units. Such experimental units must be selected (defined) very carefully.

3. Types of Research Designs
The different research designs can be categorized into research designs for:
1. Exploratory research studies.
2. Descriptive and diagnostic research studies.
3. Hypothesis-testing research studies (experimental studies).
Following are the details of the different research designs.
3.1 Exploratory Research Studies
Also termed formulative research studies, their purpose is to formulate a problem for more precise investigation. The major emphasis is on the discovery of ideas and insights. The research design has to be flexible enough to provide opportunity for considering different aspects of the problem under study.
 Inbuilt flexibility is essential.

The following three methods are used in the context of research design for such studies:
 The survey of concerning literature
 The experience survey
 The analysis of insight-stimulating examples

 The survey of concerning literature: This happens to be the most simple and fruitful method of formulating the research problem. Hypotheses stated by earlier workers may be reviewed and their usefulness evaluated as a basis for further research. In this way the researcher should review and build upon the work already done by others; in cases where hypotheses have not yet been formulated, his task is to review the available material for deriving the relevant hypotheses from it.

 The experience survey: This is a survey of people who have had practical experience with the problem to be studied. The object is to obtain insight into the relationships between variables and new ideas relating to the research problem. For such a survey, people who are competent and can contribute new ideas may be carefully selected as respondents to ensure a representation of different types of experience. The respondents so selected can then be interviewed by the investigator. An interview schedule is prepared by the researcher for the systematic questioning of the informants. The interview must ensure flexibility in the sense that the respondents should be allowed to raise issues and questions which the investigator has not previously considered. Since the interview may last a few hours, it is often considered desirable to send a copy of the questions to be discussed to the respondents well in advance. This gives the respondents an opportunity to do some advance thinking over the various issues involved, so that at the time of the interview they may be able to contribute effectively. Thus, an experience survey may enable the researcher to define the problem more concisely and help in the formulation of the research hypothesis. Such a survey may also provide information about the practical possibilities for doing different types of research.
 The analysis of insight-stimulating examples: This is a fruitful method for suggesting hypotheses for research. It is particularly suitable in areas where there is little experience to serve as a guide. It consists of the intensive study of selected instances of the phenomenon in which one is interested. For this purpose, existing records may be examined, unstructured interviewing may take place, or some other approach may be adopted. The attitude of the investigator, the intensity of the study, and the ability of the researcher to draw together diverse information into a unified interpretation are the main features which make this method an appropriate procedure for evoking insights. Examples of the above are:
• Reactions of strangers
• Reactions of marginal individuals
• Study of individuals who are in transition from one stage to another
• Reactions of individuals from different social strata

3.2 Descriptive and Diagnostic Research Studies
Descriptive research studies are concerned with describing the characteristics of a particular individual or of a group, e.g. studies concerned with specific predictions or with the narration of facts and characteristics concerning an individual, group or situation. Diagnostic research studies determine the frequency with which something occurs or its association with something else, e.g. studies concerning whether certain variables are associated. The descriptive as well as the diagnostic research studies share common requirements. In both, the researcher must be able to define clearly what he wants to measure and must find adequate methods of measuring it. The aim is to obtain complete and accurate information; hence the procedure to be used must be carefully planned. It should make enough provision for protection against bias and must maximize reliability. The design must be rigid, not flexible. The following should be focussed on:
a) Formulating the objective of the study (what is the study about and why is it being made?)
b) Designing the methods of data collection (what techniques of gathering data will be adopted?)
c) Selecting the sample (how much material will be needed?)
d) Collecting the data (where can the required data be found, and to what time period should the data relate?)
e) Processing and analysing the data.
f) Reporting the findings.
The following are the steps involved in both kinds of studies:
Step 1. Specify the objectives with sufficient precision to ensure that the data collected are relevant.
Step 2. Select the methods by which the data are to be obtained, i.e. techniques of collecting the data must be devised. While designing the data collection procedure, adequate safeguards against bias and unreliability must be ensured. Questions must be well examined and unambiguous, and interviewers must not be allowed to express their own opinions.
In most studies the researcher takes a sample and then wishes to make statements about the population on the basis of the sample analysis.
• The problem of designing samples should be tackled in such a form that the samples yield accurate information with a minimum amount of research effort.

• To obtain data free from errors, it is necessary to supervise closely the staff of field workers as they collect and record information.
• As data are collected, they should be examined for completeness, comprehensibility, consistency and reliability.
• The data collected must be processed and analysed. This includes steps like coding the interview replies, observations, etc.; tabulating the data; and performing several statistical computations.
• The processing and analysing procedure should be planned in detail before actual work is started.
• To avoid errors in coding, the reliability of coders needs to be checked. Similarly, the accuracy of tabulation may be checked by having a sample of tables re-done.
• Last of all comes the task of reporting the findings, i.e. communicating the findings to others, and the researcher must do it in an efficient manner. The layout of the report needs to be well planned so that everything relating to the research study is presented in a simple and effective style.
Thus, the research design in the case of descriptive/diagnostic studies is a comparative design and must be prepared keeping in view the objective(s) of the study and the resources available. However, it must ensure the minimization of bias and the maximization of reliability of the evidence collected. It can be referred to as a survey design, since it takes into account all the steps involved in a survey concerning a phenomenon to be studied.
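The coder-reliability check mentioned above can be sketched as a simple percent-agreement computation between two coders classifying the same replies. The category labels and codings below are hypothetical.

```python
# Sketch: checking the reliability of two coders, as suggested above.
# Codes and category labels are hypothetical.

def percent_agreement(coder_a, coder_b):
    """Share of items the two coders assigned to the same category."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Two coders classifying the same 10 interview replies.
coder_a = ["pos", "neg", "pos", "neu", "pos", "neg", "neu", "pos", "neg", "pos"]
coder_b = ["pos", "neg", "neu", "neu", "pos", "neg", "neu", "pos", "pos", "pos"]
print(percent_agreement(coder_a, coder_b))  # 0.8 here
```

A low agreement score signals that the coding scheme is ambiguous or the coders need retraining; the same spot-check idea applies to re-doing a sample of tabulated tables.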

3.3 Hypothesis-Testing Research Studies (Experimental Studies)
• Hypothesis-testing research studies (experimental studies) are those where the researcher tests hypotheses of causal relationships between variables.
• Such studies require procedures that will not only reduce bias and increase reliability, but will also permit drawing inferences about causality.
• Professor R. A. Fisher pioneered such designs when he was working at Rothamsted Experimental Station (a centre for agricultural research in England).
• Fisher found that by dividing agricultural fields or plots into different blocks and then conducting experiments in each of these blocks, the information collected and the inferences drawn happen to be more reliable.
• This fact inspired him to develop certain experimental designs for testing hypotheses concerning scientific investigations.

4. Difference between exploratory and descriptive research

Research design            Exploratory / Formulative                Descriptive / Diagnostic
Overall design             Flexible design (must provide            Rigid design (must make enough
                           opportunity for considering              provision for protection against
                           different aspects of the problem)        bias and must maximize reliability)
(i) Sampling design        Non-probability sampling design          Probability sampling design
                           (purposive or judgement sampling)        (random sampling)
(ii) Statistical design    No pre-planned design for analysis       Pre-planned design for analysis
(iii) Observational        Unstructured instruments for             Structured or well-thought-out
      design               collection of data                       instruments for collection of data
(iv) Operational design    No fixed design for the                  Advance decisions about
                           operational procedure                    operational procedures

5. Basic Principles of Experimental Design
Professor Fisher has enumerated three principles of experimental designs:
1. the Principle of Replication;
2. the Principle of Randomization; and
3. the Principle of Local Control.
According to the Principle of Replication, the experiment should be repeated more than once. Thus, each treatment is applied in many experimental units instead of one. By doing so the statistical accuracy of the experiment is increased. The entire experiment can even be repeated several times for better results. Conceptually, replication does not present any difficulty, but computationally it does. It should be remembered that replication is introduced in order to increase the precision of a study, that is to say, to increase the accuracy with which the main effects and interactions can be estimated.
The Principle of Randomization provides protection against the effects of extraneous factors when we conduct an experiment. In other words, this principle indicates that we should design or plan the experiment in such a way that the variations caused by extraneous factors can all be combined under the general heading of “chance”.
The Principle of Local Control is another important principle of experimental designs. Under it, the extraneous factor, the known source of variability, is made to vary deliberately over as wide a range as necessary, and this is done in such a way that the variability it causes can be measured and hence eliminated from the experimental error. This means that we should

plan the experiment in a manner that enables us to perform a two-way analysis of variance, in which the total variability of the data is divided into three components attributable to treatments, the extraneous factor, and experimental error.
6. Formal and Informal Experimental Designs
Experimental design refers to the framework or structure of the experiment, and as such there are several experimental designs. They can be classified into two broad categories: informal experimental designs and formal experimental designs. Informal experimental designs normally use a less sophisticated form of analysis based on differences in magnitude, whereas formal experimental designs offer relatively more control and use precise statistical procedures for analysis. The important experimental designs are as follows:
1. Informal experimental designs:
• Before-and-after without control design.
• After-only with control design.
• Before-and-after with control design.
2. Formal experimental designs:
• Completely randomized design (C.R. design).
• Randomized block design (R.B. design).
• Latin square design (L.S. design).
• Factorial design.
The details of each of the above formal and informal experimental designs are explained below.
1. Before-and-after without control design: In such a design a single test group or area is selected and the dependent variable is measured before the introduction of the treatment. The treatment is then introduced and the dependent variable is measured again after the treatment has been introduced. The effect of the treatment is taken to be equal to the level of the phenomenon after the treatment minus its level before the treatment.
The design can be represented as follows:

Test area:   Level of phenomenon     Treatment     Level of phenomenon
             before treatment (X)    introduced    after the treatment (Y)

             Treatment effect = (Y) - (X)

The main difficulty of such a design is that, with the passage of time, considerable extraneous variation may creep into the treatment effect.
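A minimal numeric sketch of this design, with hypothetical figures, shows how extraneous variation over time gets wrongly credited to the treatment:

```python
# Sketch of the before-and-after without control design. The extraneous
# variation with the passage of time is simulated explicitly; all figures
# are hypothetical.
x_before = 100          # level of phenomenon before treatment (X)
true_effect = 12        # effect the treatment actually has
time_drift = 5          # extraneous change that would have happened anyway
y_after = x_before + true_effect + time_drift   # measured level after (Y)

estimated_effect = y_after - x_before           # (Y) - (X)
print(estimated_effect)  # 17, not 12: the time drift is credited to the treatment
```

The estimate is biased upward by exactly the extraneous drift, which is the weakness the next two designs address.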


2. After-only with control design: In this design two groups or areas (a test area and a control area) are selected, and the treatment is introduced into the test area only. The dependent variable is then measured in both areas at the same time. Treatment impact is assessed by subtracting the value of the dependent variable in the control area from its value in the test area. This can be exhibited in the following form:

Test area:      Treatment introduced    Level of phenomenon after treatment (Y)
Control area:                           Level of phenomenon without treatment (Z)

                Treatment effect = (Y) - (Z)

The basic assumption in such a design is that the two areas are identical with respect to their behaviour towards the phenomenon considered. If this assumption is not true, there is the possibility of extraneous variation entering into the treatment effect. However, data can be collected in such a design without the problems that arise with the passage of time. In this respect this design is superior to the before-and-after without control design.

3. Before-and-after with control design: In this design two areas are selected and the dependent variable is measured in both areas for an identical time period before the treatment. The treatment is then introduced into the test area only, and the dependent variable is measured in both areas for an identical time period after the introduction of the treatment. The treatment effect is determined by subtracting the change in the dependent variable in the control area from the change in the dependent variable in the test area. This can be shown in the following way:

                 Time Period 1                                     Time Period 2
Test area:       Level of phenomenon        Treatment              Level of phenomenon
                 before treatment (X)       introduced             after treatment (Y)
Control area:    Level of phenomenon                               Level of phenomenon
                 without treatment (A)                             without treatment (Z)

                 Treatment effect = (Y - X) - (Z - A)
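A minimal sketch with hypothetical figures shows how the control area's change removes the time-related extraneous variation:

```python
# Sketch of the before-and-after with control design: the change in the
# control area estimates the extraneous (time) variation, which is then
# subtracted out. All figures are hypothetical.
x = 100   # test area, before treatment (X)
y = 117   # test area, after treatment (Y)
a = 90    # control area, period 1 (A)
z = 95    # control area, period 2 (Z)

treatment_effect = (y - x) - (z - a)   # (Y - X) - (Z - A)
print(treatment_effect)  # 17 - 5 = 12
```

The control area's change (5) plays the role of the time drift in the earlier sketch, so the corrected estimate recovers the treatment-only effect.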


This design is superior to the other two designs for the simple reason that it avoids extraneous variation resulting both from the passage of time and from non-comparability of the test and control areas. But at times, due to lack of historical data or of time, one of the first two informal designs stated above may have to be selected.
4. Completely randomized design (C.R. design): This design involves only two principles, viz., the principle of replication and the principle of randomization. It is the simplest possible design and its procedure of analysis is also easier. The essential characteristic of this design is that subjects are randomly assigned to the experimental treatments. For instance, if we have 10 subjects and wish to test 5 under treatment A and 5 under treatment B, the randomization process gives every possible group of 5 subjects selected from the set of 10 an equal opportunity of being assigned to treatment A or treatment B. One-way analysis of variance (one-way ANOVA) is used to analyse such a design. Even unequal replications work in this design, and it provides the maximum number of degrees of freedom to the error. Such a design is used when the experimental areas happen to be homogeneous. Technically, when all the variation due to uncontrolled extraneous factors is included under the heading of chance variation, we refer to the design of the experiment as a C.R. design. Two forms of such a design are described below:
(i) Two-group simple randomized design: In a two-group simple randomized design, first of all the population is defined and then a sample is selected randomly from that population. A further requirement of this design is that items, after being selected randomly from the population, be randomly assigned to the experimental and control groups (such random assignment of items to the two groups is technically described as the principle of randomization). Thus, this design yields two groups as representatives of the population.
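The random assignment and one-way ANOVA described above can be sketched as follows. The ten subjects and their response scores are hypothetical, and the F statistic is computed by hand from between-group and within-group sums of squares.

```python
import random
from statistics import mean

# Sketch of a completely randomized design with 10 hypothetical subjects:
# random assignment to treatments A and B, then a one-way ANOVA F statistic.
random.seed(1)
subjects = list(range(10))
random.shuffle(subjects)                      # principle of randomization
group_a, group_b = subjects[:5], subjects[5:] # group_a gets treatment A, group_b gets B

# Hypothetical response scores observed under each treatment.
scores = {"A": [14, 16, 15, 13, 17], "B": [20, 19, 22, 18, 21]}

grand = mean(scores["A"] + scores["B"])
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in scores.values())
ss_within = sum((x - mean(g)) ** 2 for g in scores.values() for x in g)
f_stat = (ss_between / 1) / (ss_within / 8)   # df: k - 1 = 1, N - k = 8
print(round(f_stat, 2))
```

A large F indicates that the between-treatment variability dominates the chance (within-group) variability, which is the basis for rejecting the hypothesis of no treatment effect.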
In diagram form this design can be shown as follows (two-group simple randomized design):

Population → random selection → Sample → random assignment →
    Experimental group → Treatment A
    Control group      → Treatment B

(Treatments A and B are the levels of the independent variable.)
Since in the simple randomized design the elements constituting the sample are randomly drawn from the same population and randomly assigned to the experimental and control groups, it becomes possible to draw conclusions on the basis of the samples that are applicable to the population. The two groups (experimental and control) of such a design are given different treatments of the independent variable. This design of experiment is quite common in research studies concerning the behavioural sciences. The merit of such a design is that it is simple and randomizes the differences among the sample items. But its limitation is that the individual differences among those conducting the treatments are not eliminated, i.e., it does not control the extraneous variable, and as such the result of the experiment may not depict a correct picture. This can be illustrated by an example. Suppose the researcher wants to compare two groups of students who have been randomly selected and randomly assigned. Two different treatments, viz., the usual training and the specialized training, are given to the two groups. The researcher hypothesizes a greater gain for the group that receives the specialized training. To determine this, he tests each group before and after the training and compares the amount of gain for the two groups to accept or reject his hypothesis. This is an illustration of the two-group randomized design, wherein the individual differences among students are randomized. But it does not control the differential effects of the extraneous independent variable (in this case, the individual differences among those conducting the training programmes).

Random replication design (in diagram form):

Population (available for study) → random selection → Sample (to be studied)
    → random assignment → eight groups: Groups 1–4 experimental (E), Groups 5–8 control (C)

Population (available to conduct treatments) → random selection → Sample (to conduct treatments)
    → random assignment → one individual to conduct the treatment for each of the eight groups

(The treatment — Treatment A for the experimental groups, Treatment B for the control groups — is the independent, or causal, variable.)
(ii) Random replication design: The limitation of the two-group randomized design is usually eliminated in the random replication design. In the illustration cited above, the differences among those conducting the treatments (the teacher differences) on the dependent variable were ignored, i.e., the extraneous variable was not controlled. But in a random replication design the effect of such differences is minimised (or reduced) by providing a number of repetitions for each treatment. Each repetition is technically called a ‘replication’. The random replication design serves two purposes: it provides controls for the differential effects of the extraneous independent variables, and it randomizes any individual differences among those conducting the treatments. From the diagram it is clear that there are two populations in the replication design. The sample is taken randomly from the population available for study and is randomly assigned to, say, four experimental and four control groups. Similarly, a sample is taken randomly from the population available to conduct the treatments (since there are eight groups, eight such individuals are to be selected), and the eight individuals so selected are randomly assigned to the eight groups. Generally, an equal number of items is put in each group, so that the size of the group is not likely to affect the results of the study. Variables relating to both population characteristics are assumed to be randomly distributed among the groups. Thus, this random replication design is, in fact, an extension of the two-group simple randomized design.
5. Randomized block design (R.B. design) is an improvement over the C.R. design. In the R.B. design the principle of local control can be applied along with the other two principles of experimental designs. In the R.B. design, subjects are first divided into groups, known as blocks, such that within each group the subjects are relatively homogeneous in respect of some selected variable. The variable selected for grouping the subjects is one that is believed to be related to the measures to be obtained in respect of the dependent variable. The number of subjects in a given block would be equal to the number of treatments, and one subject in each block would be randomly assigned to each treatment.
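The block-and-assign procedure of the R.B. design can be sketched as follows. The subjects, their aptitude scores, and the choice of aptitude as the blocking variable are all hypothetical.

```python
import random

# Sketch of forming blocks in a randomized block (R.B.) design: subjects are
# grouped into blocks homogeneous on a selected variable (here a hypothetical
# aptitude score), then each block's members are randomly assigned, one per
# treatment.
random.seed(7)
treatments = ["T1", "T2", "T3", "T4"]
# Hypothetical (subject, aptitude) pairs.
subjects = [("s%d" % i, random.randint(40, 100)) for i in range(12)]

# Rank on the blocking variable and slice into blocks of len(treatments).
ranked = sorted(subjects, key=lambda s: s[1])
blocks = [ranked[i:i + 4] for i in range(0, 12, 4)]

assignment = {}
for block in blocks:
    order = treatments[:]
    random.shuffle(order)                # random assignment within each block
    for (name, _), t in zip(block, order):
        assignment[name] = t

# Every treatment appears exactly once in each block.
for block in blocks:
    assert sorted(assignment[name] for name, _ in block) == sorted(treatments)
```

Because each block is homogeneous on the blocking variable, differences between blocks can be separated from the treatment effect in the subsequent two-way ANOVA.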
In general, blocks are the levels at which we hold the extraneous factor fixed, so that its contribution to the total variability of the data can be measured. The main feature of the R.B. design is that in it each treatment appears the same number of times in each block. The R.B. design is analysed by the two-way analysis of variance (two-way ANOVA) technique. Let us illustrate the R.B. design with the help of an example. Suppose four different forms of a standardized test in statistics were given to each of five students (one selected from each of five I.Q. blocks), and the scores they obtained were as follows:

           Very low IQ   Low IQ      Average IQ   High IQ     Very high IQ
           Student A     Student B   Student C    Student D   Student E
Form 1         82            67          57           71          73
Form 2         90            68          54           70          81
Form 3         86            73          51           69          84
Form 4         93            77          60           65          71
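The two-way ANOVA used to analyse this R.B. design can be sketched directly on the scores in the table above, with the forms as treatments and the I.Q. blocks as blocks:

```python
from itertools import chain

# Two-way ANOVA sketch for the R.B. design above, using the table's scores
# (forms = treatments, I.Q. blocks = students).
scores = {
    "Form 1": [82, 67, 57, 71, 73],
    "Form 2": [90, 68, 54, 70, 81],
    "Form 3": [86, 73, 51, 69, 84],
    "Form 4": [93, 77, 60, 65, 71],
}
rows = list(scores.values())
n_forms, n_blocks = len(rows), len(rows[0])
N = n_forms * n_blocks
total = sum(chain(*rows))
correction = total ** 2 / N                  # correction factor T^2 / N

ss_total = sum(x ** 2 for x in chain(*rows)) - correction
ss_forms = sum(sum(r) ** 2 for r in rows) / n_blocks - correction
cols = list(zip(*rows))                      # one column per student/block
ss_blocks = sum(sum(c) ** 2 for c in cols) / n_forms - correction
ss_error = ss_total - ss_forms - ss_blocks

# F for the treatment (forms) effect; error df = (n_forms-1)*(n_blocks-1).
f_forms = (ss_forms / (n_forms - 1)) / (ss_error / ((n_forms - 1) * (n_blocks - 1)))
print(round(f_forms, 2))
```

Most of the total variability here is absorbed by the I.Q. blocks, so the blocking succeeds in separating student differences from the comparison of the test forms; the small F for forms suggests the forms themselves differ little.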

If each student separately randomized the order in which he or she took the four tests (by using random numbers or some similar device), we refer to the design of this experiment as an R.B. design. The purpose of this randomization is to take care of possible extraneous factors, say fatigue, or perhaps the experience gained from repeatedly taking the test.
6. Latin square design (L.S. design): This is an experimental design very frequently used in agricultural research. The conditions under which agricultural investigations are carried out are different from those in other studies, for nature plays an important role in agriculture. For instance, suppose an experiment has to be made through which the effects of five different varieties of fertilizer on the yield of a certain crop, say wheat, are to be judged. In such a case the varying fertility of the soil in the different blocks in which the experiment is performed must be taken into consideration; otherwise the results obtained may not be very dependable, because the output happens to be the effect not only of the fertilizers but possibly also of the fertility of the soil. Similarly, there may be an impact of varying seeds on the yield. To overcome such difficulties, the L.S. design is used when there are two major extraneous factors, such as varying soil fertility and varying seeds. The merit of this experimental design is that it enables differences in fertility gradients in the field to be eliminated when comparing the effects of different varieties of fertilizer on the yield of the crop. But this design suffers from one limitation: although each row and each column represents all fertilizer varieties equally, there may be a considerable difference in the row and column means both up and across the field. This, in other words, means that in an L.S. design we must assume that there is no interaction between treatments and blocking factors.
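A Latin square for five fertilizer treatments can be sketched by cyclic shifting, so that each treatment occurs exactly once in every row (one soil-fertility block) and every column (one seed block). In practice the rows, columns and treatment letters would also be randomized; this sketch shows only the basic structure.

```python
# Sketch: constructing a 5 x 5 Latin square for an L.S. design with five
# fertilizer treatments (A-E) by cyclic shifting.
treatments = ["A", "B", "C", "D", "E"]
n = len(treatments)
square = [[treatments[(row + col) % n] for col in range(n)] for row in range(n)]
for row in square:
    print(" ".join(row))

# Each row and each column contains every treatment exactly once.
assert all(sorted(row) == treatments for row in square)
assert all(sorted(col) == treatments for col in zip(*square))
```

This double balance is what lets the two extraneous factors (rows and columns) be removed from the experimental error, under the no-interaction assumption noted above.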
(7) Factorial designs: Factorial designs are used in experiments where the effects of varying more than one factor are to be determined. They are especially important in several economic and social phenomena where usually a large number of factors affect a particular problem. Factorial designs can be of two types: (i) simple factorial designs and (ii) complex factorial designs.
(i) Simple factorial designs: In the case of simple factorial designs, we consider the effects of varying two factors on the dependent variable; when an experiment is done with more than two factors, we use complex factorial designs. A simple factorial design is also termed a ‘two-factor factorial design’, whereas a complex factorial design is known as a ‘multi-factor factorial design’. A simple factorial design may be a 2 x 2 simple factorial design, or it may be, say, a 3 x 4 or 5 x 3 or similar type of simple factorial design.


Illustration (4 x 3 simple factorial design): A 4 x 3 simple factorial design will usually include four treatments of the experimental variable and three levels of the control variable. Graphically it may take the following form:

                              Experimental Variable
Control Variable    Treatment A   Treatment B   Treatment C   Treatment D
Level 1             Cell 1        Cell 4        Cell 7        Cell 10
Level 2             Cell 2        Cell 5        Cell 8        Cell 11
Level 3             Cell 3        Cell 6        Cell 9        Cell 12
This model of a simple factorial design includes four treatments, viz. A, B, C and D, of the experimental variable and three levels, viz. (I), (II) and (III), of the control variable, and has 12 different cells as shown above. This shows that a 2 x 2 simple factorial design can be generalized to any number of treatments and levels. In such a design the means of the columns provide the researcher with an estimate of the main effects for the treatments, and the means of the rows an estimate of the main effects for the levels. Such a design also enables the researcher to determine the interaction between treatments and levels.
(ii) Complex factorial designs: Experiments with more than two factors at a time involve the use of complex factorial designs. A design which considers three or more independent variables simultaneously is called a complex factorial design. In the case of three factors, with one experimental variable having two treatments and two control variables each having two levels, the design used will be termed a 2 x 2 x 2 complex factorial design, which will contain a total of eight cells, as shown below:

                              Control Variable 1
                    Level I                        Level II
                    CV 2, Level I   CV 2, Level II     CV 2, Level I   CV 2, Level II
Treatment A         Cell 1          Cell 3             Cell 5          Cell 7
Treatment B         Cell 2          Cell 4             Cell 6          Cell 8

(CV 2 = Control Variable 2; Treatments A and B are the levels of the experimental variable.)


To obtain the first order interaction, say for EV x CV 1 in the above stated design, the researcher must necessarily ignore control variable 2, for which purpose he may develop a 2 x 2 design from the 2 x 2 x 2 design by combining the data of the relevant cells of the latter design, as shown below:

                    Control Variable 1
                    Treatment A    Treatment B
Level I             Cells 1, 3     Cells 2, 4
Level II            Cells 5, 7     Cells 6, 8
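The combining of cells described above can be sketched as follows. The cell means are hypothetical illustrative figures; the point is the collapse over control variable 2 and the resulting interaction contrast.

```python
# Sketch: obtaining the first-order interaction table EV x CV1 from the
# 2 x 2 x 2 design by combining cells over control variable 2.
# Cell values are hypothetical response means.
cells = {  # (treatment, cv1_level, cv2_level) -> hypothetical cell mean
    ("A", 1, 1): 10, ("A", 1, 2): 12, ("A", 2, 1): 14, ("A", 2, 2): 16,
    ("B", 1, 1): 11, ("B", 1, 2): 13, ("B", 2, 1): 20, ("B", 2, 2): 22,
}

# Collapse: average the two CV2 cells within each (treatment, CV1) combination.
collapsed = {
    (t, lvl): (cells[(t, lvl, 1)] + cells[(t, lvl, 2)]) / 2
    for t in ("A", "B") for lvl in (1, 2)
}
print(collapsed)

# Interaction contrast: does the treatment difference change across CV1 levels?
interaction = ((collapsed[("B", 2)] - collapsed[("A", 2)])
               - (collapsed[("B", 1)] - collapsed[("A", 1)]))
print(interaction)  # non-zero -> EV and CV1 interact
```

A non-zero contrast means the effect of the experimental variable depends on the level of control variable 1, which is precisely the information a single-factor-at-a-time experiment cannot provide.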

Similarly, the researcher can determine the other first order interactions. The analysis of a first order interaction is essentially a simple factorial analysis, as only two variables are considered at a time and the remaining one is ignored. But the analysis of the second order interaction would not ignore any of the three independent variables in the case of a 2 x 2 x 2 design; such an analysis would be termed a complex factorial analysis. Factorial designs are used mainly because of two advantages:
1. They provide equivalent accuracy (as in experiments with only one factor) with less labour, and as such are a source of economy. Using factorial designs, we can determine the main effects of two (in a simple factorial design) or more (in a complex factorial design) factors (or variables) in one single experiment.
2. They permit various other comparisons of interest. For example, they give information about effects which cannot be obtained by treating one single factor at a time. The determination of interaction effects is possible only in the case of factorial designs.
There are several research designs, and the researcher must decide, in advance of the collection and analysis of data, which design will prove to be more appropriate for his research project. One must give due weight to various points, such as the type of universe and its nature, the objective of the study, the source list or sampling frame, the desired standard of accuracy and the like, when taking a decision in respect of the design for a research project.

Research Methods

Chapter 4 topics: Research Methods; Observation; Survey Method; Experimentation; Secondary Data


OBSERVATION
Definition
• It is the process of recognizing and noting people, objects and occurrences, rather than asking for information.
• Instead of asking consumers what brand they buy, the researchers arrange to observe what products are bought.
• E.g. a large food retailer tested a new slot-type shelf arrangement for canned foods by observing shoppers as they used the new shelves.

Advantages of observation method
1. When the researcher observes and records events, it is not necessary to rely on the willingness and ability of respondents to report accurately.
2. The biasing effects of interviewers or their phrasing of the questions are either eliminated or reduced.
3. Data collection by observation is more objective and hence more accurate.

Disadvantages of observation method
1. Although researchers have recognized the merits of observation as opposed to questioning, the vast majority of researchers continue to rely on the use of a questionnaire.
2. The most limiting factor in the use of observation is the inability to observe things such as attitudes, motivation, etc.
3. Events of more than short-term duration, such as a family's use of leisure time, and personal activities, such as brushing of teeth, are better covered by questionnaires.
4. In some observational studies it is impractical to keep respondents from knowing that they are being observed. This results in a biasing effect.
5. Cost is another major disadvantage. E.g. to observe the customers who come in to buy canned milk, an observer has to wait for the customers to come in and buy the milk. The unproductive waiting time is an added cost.

METHODS OF OBSERVATION
Observational studies can be classified on five bases:
1. Whether the situation in which the observation is made is natural or contrived
2. Whether the observation is obtrusive or unobtrusive
3. Whether the observation is structured or unstructured
4. Whether the factor of interest is observed directly or indirectly
5. Whether observations are made by observers or by mechanical means

Direct observations
• When an observer is stationed in a grocery store to note how many different brands of canned soup each shopper picks up before selecting one, there is unobtrusive, direct observation in a natural situation.
• If a camera is positioned to record shopping actions, observation is by mechanical means.
• If the observer counts the specific cans picked up, the observation is structured.
• If the observer simply watches how shoppers go about selecting a brand of soup, the observation is unstructured.

Structured direct observation
• It is used when the problem at hand has been formulated precisely enough to enable researchers to define specifically the observations to be made.
• E.g. observers in a supermarket might note the number of soup cans picked up by each customer. A form can easily be printed for simple recording of such observations.
• Not all observations are as simple as the above, but experiments have shown that even observers with different viewpoints on a given question tend to make similar observations under structured conditions.
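A structured recording form of this kind reduces to a simple tally. The sketch below uses made-up counts (the data and variable names are hypothetical) to show how such soup-can observations might be summarized:

```python
from collections import Counter

# Hypothetical structured observations: the number of soup cans each
# shopper picked up before selecting one.
observations = [1, 3, 2, 1, 1, 4, 2, 2]

tally = Counter(observations)                      # shoppers per pickup count
average = sum(observations) / len(observations)    # mean cans handled

print(tally[1])     # shoppers who picked up exactly one can: 3
print(average)      # 2.0
```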

Unstructured, direct observation
• Observers are placed in situations and observe whatever they deem significant. E.g. in an effort to find ways of improving the service of a store, observers may mingle with customers in the store and look for activities that suggest service problems.
• No one can observe everything that is going on, hence the observer must select certain things of which he can make a note. Customers standing at a counter with annoyed faces may be observed as irritated because of the service, or lack of it.

Contrived observation
• When researchers rely on natural direct observation, a lot of time is wasted while they wait for the desired events to take place. To reduce this, it may be more desirable to contrive situations so that observations may be made more efficiently.
• E.g. to study the bargaining between an automobile salesman and a customer, the observer can pose as a customer and take various bargaining attitudes, from the most eager to buy to the toughest price seeker. In each case the observer notes the salesperson's response. As long as the salesperson believes the researcher to be a bona fide customer, there is no bias in the observation. Contrived observation often offers advantages in both validity and economy.

Mechanical observation
A number of methods and devices have been developed for making such observations.
a) Audimeter: used by the A C Nielsen company to record when television and radio sets are turned on and the stations to which they are tuned.
b) People meter: a hand-held device with a number for each member of the household, which he/she is asked to punch when viewing TV.
c) Psychogalvanometer: measures minute emotional reactions through changes in the rate of perspiration. It is almost like a lie detector.
d) Eye camera: used to record the movements of the eye.

Indirect observation
One type of indirect observation focuses on the physical traces left by the factors of interest. These traces are of two types:
1. Accretion: traces deposited over time. E.g. observing the liquor bottles in the trash to estimate liquor consumption in cities without liquor stores.
2. Erosion: traces of wear. Erosion observations are less frequent. An example would be the study of the relative readership of different sections of an encyclopedia by measuring the wear and tear on the pages.
Observation of the results of past actions will not bias the data if done on a one-time basis. E.g. pantry audits determine what purchases have been made in the past.

Observation of records
• Whenever researchers use data collected for another purpose, they are employing the observation method in a manner similar in character to the observation of physical traces.
• The records of previous activities, such as a population census, are physical traces of previous periods.

Survey method
Definition

Survey research is one of the most important areas of measurement in applied social research. The broad area of survey research encompasses any measurement procedure that involves asking questions of respondents.

Types of surveys
• Surveys can be divided into two broad categories: the questionnaire and the interview.
• Questionnaires are usually paper-and-pencil instruments that the respondent completes.
• Interviews are completed by the interviewer on the basis of what the respondent says.

Questionnaires
Mail survey: when a respondent receives a questionnaire by mail, it is known as a mail survey.
Advantages:
• They are relatively inexpensive to administer.
• You can send the exact same instrument to a wide number of people.
• They allow the respondent to fill it out at their own convenience.
Disadvantages:
• Response rates from mail surveys are often very low.
• Mail questionnaires are not the best vehicles for asking for detailed written responses.

Group-administered questionnaire
• A sample of respondents is brought together and asked to respond to a structured sequence of questions.
• Traditionally, questionnaires were administered in group settings for convenience.
• The researcher could give the questionnaire to those who were present and be fairly sure that there would be a high response rate.
• If the respondents were unclear about the meaning of a question, they could ask for clarification.
• And, there were often organizational settings where it was relatively easy to assemble the group (in a company or business, for instance).

INTERVIEWS
Interviews are a far more personal form of research than questionnaires.

Personal interview
The interviewer works directly with the respondent.
Advantages
• The interviewer has the opportunity to probe or ask follow-up questions.
• Interviews are generally easier for the respondent, especially if what is sought is opinions or impressions.
Disadvantages
• Interviews can be very time consuming and resource intensive.
• The interviewer is considered part of the measurement instrument, and interviewers have to be well trained in how to respond to any contingency.

Telephone interview
Telephone interviews enable a researcher to gather information rapidly.
Advantages
• They allow for some personal contact between the interviewer and the respondent.
• They allow the interviewer to ask follow-up questions.
Disadvantages
• Many people don't have publicly listed telephone numbers, and some don't have telephones.
• People often don't like the intrusion of a call to their homes.
• Telephone interviews have to be relatively short or people will feel imposed upon.

Selecting the survey method
Selecting the type of survey you are going to use is one of the most critical decisions in many social research contexts. You have to use your judgment to balance the advantages and disadvantages of different survey types. Following are the issues that the researcher must look into before conducting a survey.

Sampling issues
• What data is available? What information do you have about your sample? Do you know their current addresses? Their current phone numbers? Are your contact lists up to date?

• Can your respondents be located? Who is the respondent in your study? If the specific individual is unavailable, is the researcher willing to interview another?
• Are response rates likely to be a problem?

Questions
• What types of questions can be asked? Are they personal, or do they require detailed answers?
• Can question sequence be controlled? Is your survey one where you can construct a reasonable sequence of questions in advance? Or are you doing an initial exploratory study, where you may need to ask lots of follow-up questions that you can't easily anticipate?
• Cost is often the major determining factor in selecting survey type. You might prefer to do personal interviews but can't justify the high cost of training and paying the interviewers. You may prefer to send out an extensive mailing but can't afford the postage to do so.
• Do you have the facilities (or access to them) to process and manage your study? For phone interviews, do you have well-equipped phone surveying facilities? For focus groups, do you have a comfortable and accessible room to host the group? Do you have the equipment needed to record and transcribe responses?
• Some types of surveys take longer than others. Do you need responses immediately (as in an overnight public opinion poll)? Have you budgeted enough time for your study to send out mail surveys and follow-up reminders, and to get the responses back by mail? Have you allowed enough time to complete enough personal interviews to justify that approach?

Types of questions
Survey questions can be divided into two broad types: structured and unstructured.

Dichotomous questions
When a question has two possible responses, we consider it dichotomous. Surveys often use dichotomous questions that ask for a Yes/No, True/False or Agree/Disagree response.
E.g. Please enter your gender:
• Male
• Female

Likert response scale
An opinion question is asked on a 1-to-5 bipolar scale (it's called bipolar because there is a neutral point and the two ends of the scale are at opposite positions of the opinion):
The batting order of the Indian team should be changed.
1 = Strongly agree, 2 = Agree, 3 = Neutral, 4 = Disagree, 5 = Strongly disagree
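Responses to a Likert item like this are commonly summarized by a mean score and the share of agreement. The sketch below uses invented responses and assumes the 1 = strongly agree to 5 = strongly disagree coding of the example above:

```python
# Hypothetical responses to the Likert item above,
# coded 1 = strongly agree ... 5 = strongly disagree.
responses = [1, 2, 2, 3, 4, 2, 5, 1, 2, 3]

mean_score = sum(responses) / len(responses)                    # lower = more agreement
pct_agree = 100 * sum(r <= 2 for r in responses) / len(responses)

print(mean_score)   # 2.5
print(pct_agree)    # 60.0, i.e. six of ten respondents agree or strongly agree
```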

Semantic differential
Here, an object is assessed by the respondent on a set of bipolar adjective pairs (using a 5-point rating scale).

EXPERIMENTATION
Experiments are frequently conducted to determine what effect advertising of an undesirable fact would have on consumer awareness of that fact. Advertising was conducted, and a measurement of consumer awareness was made. More advertising was followed by another measurement. If consistent growth in consumer awareness took place, one would have confidence in the conclusion that the advertising was effective. Experiments are much more effective than descriptive techniques in establishing cause-and-effect relationships.

Definition of Experiment
It refers to that research process in which one or more variables are manipulated under conditions that permit the collection of data showing the effects, if any, of such variables in unconfounded fashion. Under most circumstances, experiments must create “artificial” situations. Artificiality is, in general, the essence of the experimental method, for it gives researchers more control over the factors they are studying. If they can control the factors present in a given situation, they can obtain more conclusive evidence of cause-and-effect relationships between any two of them. Thus, the ability to set up a situation for the express purpose of observing and recording accurately the effect on one factor when another is deliberately changed permits researchers to prove or disprove hypotheses that they otherwise could only partially test.

Selected Experimental Designs
The researcher has a hypothesis that if an experimental variable (e.g. advertising, shelf display, training) is applied to an experimental unit (e.g. a group of consumers, a store, some sales representatives), it will have a measurable effect (e.g. the number remembering the brand name, units sold, calls made). The following are the most common designs for marketing experiments.

♦ “After Only” Design.
This is the simplest of all experimental designs. As the “after only” name suggests, this design consists of applying the experimental variable (e.g. advertising) to an experimental group (e.g. consumers) and measuring the dependent variable (e.g. recall of brand name) after, and only after, the application of the experimental variable.

♦ “Before-After” Design

In this design, the experimenter measures the dependent variable before exposing the subjects to the experimental variable, and again after exposure. The difference between the two measurements is considered to be a measurement of the effect of the experimental variable.

♦ “Four-Group, Six-Study” Design
When the investigator is obtaining information in an undisguised manner directly from persons, the “before-after with control group” design is inadequate: both the experimental and control groups are apt to be influenced, and in different ways, by the “before” measurement. To overcome these difficulties, this design was established as the ideal where there is interaction between the respondent and the questioning process. This design helps researchers measure the size of the “interaction” effect. However, the design has little practical value, and the use of “before” measurements also creates statistical difficulties in testing the significance of results.

♦ “After Only with Control Group” Design
In the “four-group, six-study” design, it is possible to determine the effect of the experimental variable from only two groups, i.e. experimental group 2 and control group 2. The difference between the “before” and “after” measurements of control group 2 is the result of uncontrolled variables. The “before-after” design permits an analysis of the process of change, whereas the “after only” design does not; thus, individual respondents can be identified and their reactions noted in a “before-after” study. The “after only with control group” design fits many marketing problems and is easy to use. Many promotional devices can be tested this way. Frequently, product tests are also of the “after only with control group” design, e.g. General Motors ran such an experiment to determine the desirability of nylon cord tyres as compared to the traditional rayon cord tyres.
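The arithmetic behind a “before-after with control group” comparison can be sketched in a few lines. The recall percentages below are hypothetical; the change observed in the control group is attributed to uncontrolled variables and subtracted out:

```python
# Hypothetical brand-recall percentages in a "before-after with
# control group" design.
exp_before, exp_after = 20.0, 35.0      # experimental group (saw the ad)
ctrl_before, ctrl_after = 21.0, 26.0    # control group (did not see the ad)

exp_change = exp_after - exp_before     # raw change: 15 points
ctrl_change = ctrl_after - ctrl_before  # change due to uncontrolled variables: 5 points
effect = exp_change - ctrl_change       # effect attributed to the ad: 10 points

print(effect)   # 10.0
```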
Ex Post Facto Design
One variation of the “after only” design is called the ex post facto design. It differs from the “after only” design in that the experimental and control groups are selected after the experimental variable is introduced instead of before. One advantage is that the test subjects cannot be influenced, pro or con, toward the object by knowing they are being tested, since they are exposed to the experimental variable before being selected for the sample. Another advantage of this method is that it permits the experimenter to let the experimental variable be introduced realistically and to control only the observations. This is useful in advertising tests which use commercial media. E.g. a TV public service announcement designed to inform consumers about the pros and cons of nuclear energy could be broadcast over cable TV only, and interviewers could then determine with some objective accuracy whether a home had cable TV or not.

Continuous Diary Panel Design
In most marketing research experiments, the subjects (individuals, dealers) from whom information is to be obtained are selected by some sampling procedure. After the information required by the project is obtained, these subjects are not “used” again. In some instances,

however, a sample is recruited, and information is obtained from the members continuously or at intervals over a period of time. A permanent or fixed sample of this type is called a panel. Panels are used for both exploratory and conclusive studies.

Factorial Designs
In the above designs, a single experimental variable, usually at a single level, was considered. It is possible to test several “levels” of the experimental variable. For example, several different ads could be tested, each with a separate experimental group. Alternatively, all but one group could be considered as control groups against which to compare the experimental group, or an additional control group not exposed to any advertising could be used to protect against possible negative effects of all ads. Factorial designs permit the experimenter to test two or more variables at the same time and not only determine the main effects of each of the variables, but also measure the interaction effects of variables.

SECONDARY DATA
Secondary data are data that were developed for some purpose other than helping to solve the problem at hand. Secondary data can be gathered quickly and inexpensively as compared to primary data. Even when reports or publications must be ordered, the time involved is generally less than the time required to collect original data. A thorough search of secondary data will often provide sufficient information to resolve the problem. In some cases where the secondary data cannot solve the problem, they can often help to structure the problem and eliminate some variables from consideration. Or, it may be possible to utilize the secondary data in conjunction with primary data. Secondary data can provide a complete or partial solution to many problems and help in structuring other problems. They tend to cost substantially less than primary data and can be collected in less time.
Problems Encountered with Secondary Data
Before secondary data are applied to a particular marketing problem, their relevance and accuracy must be assessed. Relevancy refers to the extent to which the data fit the information needs of the research problem. Even when the data cover the same general topic as that required by the research problem, they may not fit the requirements of the problem. Three general problems reduce the relevance of data that would otherwise be useful:
1) There is often a difference in the units of measurement. E.g. many retail decisions require detailed information on the characteristics of the population within a trade area. However, the available population statistics may focus on counties, cities or census tracts that do not match the trade area of the retail outlet.
2) The second general problem that can reduce the relevancy of secondary data is the definition of classes. E.g. a manufacturer may have a product that appeals to children 8 to 12 years old. If available secondary data are based on age categories 5 to 9 and 10 to 14, the firm will have a hard time utilizing them.

3) The final major factor affecting relevancy is time. Generally, research problems require current, if not future, data. Most secondary data, on the other hand, have been in existence for some time. E.g. complete census reports are not available for several years, and data are frequently collected one to three years prior to publication.

Accuracy is the second major concern of the user of secondary data. The real problem is not inaccuracy; it is the difficulty of determining how inaccurate the data are likely to be. When using secondary data, the original source should be used if possible. This is important because, first, the original report is generally more complete than a second- or third-hand report, and second, using the original source allows the data to be examined in context and may provide a better basis for assessing the competence and motivation of the collector.

Sources of Secondary Data
There are two general sources of secondary data: internal sources and external sources. Internal data are available within the firm, whereas external sources provide data that are developed outside the firm.

Internal Sources
Internal sources include sales records, sales force reports, operating statements, budgets, previous research reports and the like. The most useful type of internal information is generally sales data, but unfortunately many companies do not collect or maintain sales data in a manner that allows the researcher to tap their full potential. Such records, if properly utilized, allow the researcher to isolate profitable and unprofitable customers, territories, and product lines, to identify developing trends, and perhaps to measure the effects of manipulations of marketing mix variables. Internal data must be collected in a usable format and must be analyzed to be of value. Many firms have useful but unutilized data. By changing the format of collection forms (sales invoices, salesman call reports, etc.), other useful data can often be collected.
Because they are readily available and inexpensive, internal data are the best information buy.

External Sources
Numerous sources external to the firm may produce data relevant to the firm's requirements. There are four general types of external secondary information:
1) Trade associations
2) Government agencies
3) Other published sources, and
4) Syndicated services

a) Trade Associations
Trade associations frequently publish or maintain detailed information on industry sales, operating characteristics, growth patterns and the like. They may also conduct special studies of factors relevant to their industry. Since trade associations have a good reputation for not revealing data on individual firms, as well as good working relationships with the firms in the industry, they may be able to secure information that is unavailable to other researchers. These materials may be published in the form of annual reports or as special reports.

b) Government Agencies
Federal, state and local government agencies produce a massive amount of data that is of relevance to marketers. The federal government maintains five major agencies whose primary function is the collection and dissemination of statistical data:
a) Bureau of the Census
b) Bureau of Labor Statistics
c) National Center for Educational Statistics
d) National Center for Health Statistics, and
e) Statistical Reporting Service, Department of Agriculture
There are also a number of specialized analytic and research agencies, and numerous administrative and regulatory agencies. These sources produce two types of data:
a) Statistics focused on people, including demographics, vital and health statistics, and labor and social conditions.
b) Statistics focused on economic activity: commerce, finance, agriculture and the like.
Both types of data are widely used by business firms as an aid in decision-making. The data available may be standardized, such as census data, or in the form of special reports. Census publications are one of the most widely used sources of secondary data.

c) Other Published Sources
There is a virtually endless array of periodicals, books, dissertations, newspapers and the like that contain information relevant to marketing decisions.
d) Syndicated Services
A number of firms regularly collect data of relevance to marketers, which they sell on a subscription basis. Two types of syndicated services are widely used by marketing researchers: channel information and omnibus surveys. Channel information is available to the firm at four levels: manufacturers, intermediaries, retailers and consumers. A manufacturer's sales and shipments are generally available only through the firm's own internal records. Therefore, although a firm can monitor its own activities at this level, it can only infer the output of other manufacturing firms. At the intermediary or wholesale level, several syndicated firms provide information on the flow of products and brands to retail outlets. Store audits provide data on the movement of brands through retail outlets.

At the consumer level, consumer panels provide data on both purchasing patterns and media habits. Omnibus surveys collect data that are useful to a number of subscribers from a series of independent samples.

Data Collection Methods

USES OF PRIMARY AND SECONDARY DATA
The task of data collection begins after the problem has been identified. While deciding about the method of data collection to be used for the study, the researcher should keep in mind two types of data, viz., primary data and secondary data. Primary data are those which are collected afresh and for the first time, and thus happen to be original in character. Secondary data are those which have been collected by someone else and which have already been passed through the statistical process. The researcher has to decide which sort of data he will be using for his study. The methods of collecting primary and secondary data differ, since primary data are to be originally collected, while in the case of secondary data the nature of data collection work is merely that of compilation.

There are several ways of collecting primary data:
1. Observation method
2. Interview method
3. Through questionnaires
4. Through schedules

OTHER PRIMARY METHODS
Warranty cards
Distributor audits
Pantry audits
Consumer panels
Using mechanical devices
Through projective techniques
Depth interviews
Content analysis

COLLECTION OF SECONDARY DATA
Secondary data are data that are already available, i.e., they refer to data which have already been collected and analyzed by someone else. When the researcher utilizes secondary data, he has to look into various sources from which he can obtain them. In this case he is not confronted with the problems usually associated with the collection of original data. Secondary data may be either published or unpublished. Published data are usually available in:
Various publications of the central, state and local governments

Various publications of foreign governments or of international bodies and their subsidiary organizations
Technical and trade journals
Books, magazines and newspapers
Reports and publications of various associations connected with business and industry, banks, stock exchanges, etc.
Reports prepared by various scholars, universities, economists, etc. in different fields
Public records and statistics, historical documents and other sources of published information

The sources of unpublished data are many; such data may be found in diaries, letters, unpublished biographies and autobiographies, and may also be available with scholars and research workers, trade organizations, labor bureaus and other public/private organizations.

The researcher must be careful in using secondary data. He must make a minute scrutiny, because it is just possible that the secondary data may be unsuitable or inadequate in the context of the problem which the researcher wants to study. It is never safe to take published statistics at their face value without knowing their meaning and limitations. Before using secondary data, the following characteristics must be kept in mind:

Reliability of data: Reliability can be tested by finding out such things about the said data as:
Who collected the data?
What were the sources of the data?
Were they collected by using proper methods?
At what time were they collected?
Was there any bias of the compiler?
What level of accuracy was desired? Was it achieved?

Suitability of data: The data that are suitable for one enquiry may not necessarily be suitable for another enquiry. Hence, if the available data are found to be unsuitable, they should not be used by the researcher. In this context, the researcher must very carefully scrutinize the definitions of the various units and terms of collection used at the time of collecting the data from the primary source originally.
Similarly, the object, scope and nature of the original enquiry must also be studied. If the researcher finds differences, the data will remain unsuitable for the present enquiry and should not be used.

Adequacy of data: If the level of accuracy achieved in the data is found inadequate for the purpose of the present enquiry, they will be considered inadequate and should not be used by the researcher. The data will also be considered inadequate if they relate to an area which may be either narrower or wider than the area of the present enquiry.


TYPES OF PRIMARY DATA COLLECTION: OBSERVATIONS AND SURVEYS

1) OBSERVATION METHOD
Observation becomes a scientific tool and a method of data collection for the researcher when it serves a formulated research purpose, is systematically planned and recorded, and is subjected to checks and controls on validity and reliability. Under the observation method, the information is sought by way of the investigator's own direct observation, without asking the respondent.

EXAMPLE
In a study relating to consumer behaviour, the investigator, instead of asking the brand of wristwatch used by the respondent, may himself look at the watch.

ADVANTAGES
1. The method eliminates subjective bias.
2. The information obtained under this method relates to what is currently happening; it is not complicated either by past behaviour or by future intentions and attitudes.
3. This method is independent of respondents' willingness to respond and as such is relatively less demanding of active co-operation on the part of respondents, as happens to be the case in the interview or questionnaire method.
4. This method is particularly suitable in studies which deal with subjects who are not capable of giving verbal reports of their feelings for one reason or another.

DISADVANTAGES
1. It is an expensive method.
2. The information provided by this method is very limited.
3. Sometimes unforeseen factors may interfere with the observational task.
4. The fact that some people are rarely accessible to direct observation creates an obstacle for this method to collect data effectively.

2) SURVEYS
Surveys are concerned with describing, recording, analyzing and interpreting conditions that exist or existed. The researcher does not manipulate the variables or arrange for events to happen. Surveys are only concerned with conditions or relationships that exist, opinions that are held, processes that are going on, effects that are evident, or trends that are developing. They are primarily concerned with the present but at times do consider past events and influences as they relate to current conditions.
1. Survey-type researches usually have larger samples because the percentage of responses generally happens to be low, as low as 20 to 30%, especially in mailed questionnaire studies. Thus, the survey method gathers data from a relatively large number of cases at a particular time; it is essentially cross-sectional.

2. Surveys are conducted in the case of descriptive research studies, usually appropriate in the social and behavioral sciences, because many types of behavior that interest the researcher cannot be arranged in a realistic setting.
3. Surveys are an example of field research and are concerned with hypothesis formulation and testing, and the analysis of relationships between non-manipulated variables.
4. Surveys may either be census or sample surveys. They may also be classified as social surveys, economic surveys or public opinion surveys. Whatever their type, the method of data collection happens to be observation, interview, questionnaire, opinionnaire or some projective technique. The case method may also be used.
5. In the case of surveys, the research design must be rigid, must make economical provision for protection against bias and must maximize reliability, as the aim happens to be to obtain complete and accurate information.
6. Possible relationships between the data and the unknowns in the universe can be studied through surveys.

STRUCTURED Vs UNSTRUCTURED DATA COLLECTION
Data collection through questionnaires can be done in four ways:
Structured, disguised
Structured, non-disguised
Non-structured, disguised
Non-structured, non-disguised
Note: non-disguised data collection is also called the direct method, and disguised data collection the indirect method.

STRUCTURED DATA COLLECTION
A structured data collection uses a formal list of questions framed so as to get the facts. The interviewer asks the questions strictly in accordance with a pre-arranged order. E.g. this method can be used when the information needed concerns the expenditures of the consumer on different types of clothing, like cotton, woolen or synthetic. A structured questionnaire can be of two types, disguised and non-disguised, based on whether the object or purpose of the survey is revealed to the respondent.
The main advantage of this method is that the information can be collected in a systematic and orderly manner. However, when it comes to personal questions, this method tends to be less effective.
Structured - disguised: in this case the researcher does not disclose the object of the interview, because he feels that revealing it would defeat the very purpose of the interview.
Structured - non-disguised: in this case everything is pre-arranged and the researcher reveals the objective of the survey to the respondent. This is the most widely followed approach in market research, because it is generally felt that the respondent should be taken into confidence, so that he can realize the relevance of the survey and give the desired information.
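The four combinations can be represented as a simple two-way classification. A minimal sketch in Python; the class and field names are hypothetical, introduced only for illustration:

```python
# Sketch of the structured/non-structured x disguised/non-disguised
# classification described above.
from dataclasses import dataclass

@dataclass(frozen=True)
class CollectionApproach:
    structured: bool  # questions pre-arranged in a fixed order?
    disguised: bool   # purpose of the survey hidden from the respondent?

    def describe(self) -> str:
        s = "structured" if self.structured else "non-structured"
        d = "disguised (indirect)" if self.disguised else "non-disguised (direct)"
        return f"{s}, {d}"

# The most widely followed approach in market research:
print(CollectionApproach(structured=True, disguised=False).describe())
# structured, non-disguised (direct)
```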
Non-structured data collection
This is a data collection method where the data to be collected are not pre-arranged or listed in a structured format. The entire responsibility therefore rests with the researcher to question the respondent in the way he sees fit. The researcher has only certain main points on which he develops the questions to be asked. Such a method is devoid of rigidity, and the researcher has a sufficient amount of freedom to collect the data in the order he wants. Normally this kind of method is used in exploratory research. This kind of data collection is most suitable when it comes to personal or motivational factors. Again, there are two main types of non-structured data collection:
(1) Non-structured - disguised: here too, the objective of the interview is not disclosed to the respondent.
(2) Non-structured - non-disguised: as in the case of structured non-disguised collection, the respondent is taken into confidence by revealing the purpose of the survey.
CONCLUSION: The researcher should use already available data only when he finds them reliable, suitable and adequate. But he should not blindly discard the use of such data if they are readily available from authentic sources and are also suitable and adequate, for in that case it will not be economical to spend time and energy in field surveys for collecting information. At times there may be a wealth of usable information in the already available data, which must be used by an intelligent researcher, but with due precaution.
Selection of appropriate methods for data collection:
Nature, scope and object of enquiry: this constitutes the most important factor affecting the choice of a particular method. The method selected should be such that it suits the type of enquiry to be conducted by the researcher. This factor is also important in deciding whether data already available are to be used or data not yet available are to be collected.
Availability of funds: the availability of funds for the research project determines to a large extent the method to be used for the collection of data. When the funds at the disposal of the researcher are very limited, he will have to select a comparatively cheaper method. Finance, in fact, is a big constraint in practice, and the researcher has to act within this limitation.
Time factor: the availability of time also has to be taken into account in deciding on a particular method of data collection. Some methods take relatively more time, whereas with others the data can be collected in a comparatively shorter duration. The time at the disposal of the researcher thus affects the selection of the method by which the data are collected.
Precision required: the precision required is yet another important factor to be considered at the time of selecting the method of data collection.

STEPS IN QUESTIONNAIRE CONSTRUCTION
A questionnaire is often the heart of a survey operation. If the heart is not properly set up, the whole operation is bound to fail. Thus studying the main objectives of the questionnaire is important. There are two main objectives in designing a questionnaire:
1. To maximize the proportion of subjects answering our questionnaire, that is, the response rate: To maximize our response rate, we have to consider carefully how we administer the questionnaire, establish rapport, and explain the purpose of the survey. The length of the questionnaire should be appropriate.
2. To obtain accurate, relevant information for our survey: In order to obtain accurate, relevant information, we have to give some thought to what questions we ask, how we ask them, the order we ask them in, and the general layout of the questionnaire.
Thus the most important parameters in questionnaire design can be described as:
1. Question Content
2. Question Phrasing
3. Question Sequencing
4. Question Layout
1. Question content: For each question in the questionnaire, we should pay attention to how well it addresses the content we are trying to get at. In deciding what to ask, there are three potential types of information:
Information we are primarily interested in (that is, dependent variables).
Information which might explain the dependent variables (that is, independent variables).
Other factors related to both dependent and independent factors, which may distort the results and have to be adjusted for (that is, confounding variables).
Thus, while forming the question content, the following questions must be answered appropriately.
Is the Question Necessary/Useful? Examine each question to see if there is a need to ask it at all, and if you need to ask it at the level of detail you currently have.
Do Respondents Have the Needed Information? Look at each question to see whether the respondent is likely to have the necessary information to be able to answer the question.
Does the Question Need to be More Specific? Sometimes questions are too general and the information we obtain is difficult to interpret.
Is the Question Biased or Loaded? One danger in question writing is that your own biases and blind spots may affect the wording.
Will Respondents Answer Truthfully? For each question, see whether the respondent will have any difficulty answering the question truthfully. If there is some reason why they may not, consider rewording the question.
2. Question phrasing: The way questions are phrased is important, and there are some general rules for constructing good questions in a questionnaire.
Use short and simple sentences
Short, simple sentences are generally less confusing and ambiguous than long, complex ones. As a rule of thumb, most sentences should contain one or two clauses.
Ask for only one piece of information at a time
For example, "Please rate the lecture in terms of its content and presentation" asks for two pieces of information at the same time. It should be divided into two parts: "Please rate the lecture in terms of (a) its content, (b) its presentation."
Avoid negatives if possible
Negatives should be used only sparingly. For example, instead of asking students whether they agree with the statement, "Small group teaching should not be abolished," the statement should be rephrased as, "Small group teaching should continue." Double negatives should always be avoided.
Ask precise questions
Questions may be ambiguous because a word or term has more than one meaning.
Level of detail
It is important to ask for the exact level of detail required. On the one hand, you might not be able to fulfill the purposes of the survey if you omit to ask essential details. On the other hand, it is important to avoid unnecessary details: people are less inclined to complete long questionnaires. This is particularly important for confidential or sensitive information, such as personal financial matters or marital relationship issues.
Minimize bias
People tend to answer questions in a way they perceive to be socially desired or expected by the questioner, and they often look for clues in the questions.

3. Question sequencing: In order to make the questionnaire effective and to ensure quality in the replies received, a researcher must pay attention to the question sequence in preparing the questionnaire.
• A proper question sequence reduces the chances of the questions being misunderstood.
• The question sequence must be clear and smooth-moving, with questions that are easiest to answer being put at the beginning.
• The first few questions are particularly important because they are likely to influence the attitude of the respondent and his desired cooperation.
• Following the opening questions are the questions that are really vital to the research problem, and a connecting thread should run through successive questions.
• Relatively difficult questions should be relegated towards the end, so that even if the respondent decides not to answer them, considerable information will already have been obtained.
The order of the questions is also important. Some general rules are:
-Go from general to particular.
-Go from easy to difficult.
-Go from factual to abstract.
-Start with closed-format questions.
-Start with questions relevant to the main subject.
-Do not start with demographic and personal questions.

4. Question layout:
• Questions should form a logical part of a well thought out tabulation plan.
• Questions should basically meet the following standards:
-Should be easily understood
-Should be simple
-Should be concrete and should conform as much as possible to the respondent's way of thinking.
• Items on a questionnaire should be grouped into logically coherent sections. Grouping similar questions will make the questionnaire easier to complete, and the respondent will feel more comfortable. Questions that use the same response formats, or that cover a specific topic, should appear together. Each question should follow comfortably from the previous question. Writing a questionnaire is similar to writing anything else: transitions between questions should be smooth. Questionnaires that jump from one unrelated topic to another feel disjointed and are not likely to produce high response rates.
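The sequencing and grouping rules can be sketched as a simple ordering check. A hedged example in Python; the Question fields and the 1-5 difficulty scale are illustrative assumptions, not a standard scheme:

```python
# Sketch: ordering questions easy-to-difficult, with demographic and
# personal items pushed to the end, per the sequencing rules above.
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    difficulty: int         # 1 = easy ... 5 = difficult (hypothetical scale)
    demographic: bool = False

def order_questions(questions):
    """Sort easy-to-difficult, relegating demographic items to the end."""
    return sorted(questions, key=lambda q: (q.demographic, q.difficulty))

qs = [
    Question("What is your annual household income?", 4, demographic=True),
    Question("How often do you shop for clothing?", 1),
    Question("Rate the importance of brand versus price.", 3),
]
for q in order_questions(qs):
    print(q.text)
```

Because Python's sort is stable, questions of equal difficulty keep their original (topical) grouping, which matches the advice to keep related questions together.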

Conclusion: Questionnaire design is a long process that demands careful attention. Design begins with an understanding of the capabilities of a questionnaire and how they can help the research. If it is determined that a questionnaire is to be used, the greatest care goes into planning the objectives. Questionnaires are like any scientific experiment: one does not collect data and then see whether anything interesting was found; one forms a hypothesis and designs an experiment that will help prove or disprove the hypothesis.
Questionnaires are versatile, allowing the collection of both subjective and objective data through the use of open- or closed-format questions. However, a questionnaire is only as good as the questions it contains. Mindful review and testing are necessary to weed out minor mistakes that can cause great changes in meaning and interpretation. When these guidelines are followed, the questionnaire becomes a powerful and economical evaluation tool.

DATA COLLECTION INSTRUMENTS
1) PERSONAL INTERVIEW
A personal interview is conducted by an interviewer asking questions, generally face-to-face, of other persons. This sort of interview may take the form of direct personal investigation or indirect oral investigation. This method is particularly suitable for intensive investigations.
Advantages
1. More information, and that too in greater depth, can be obtained.
2. The interviewer can overcome resistance, if any, of the respondents; the interview can be made to yield an almost perfect sample of the population.
3. There is greater flexibility, as questions can be restructured as and when needed, especially in unstructured interviews.
4. The observation method can supplement verbal recording of answers.
5. Personal information can be obtained easily by this method.
6. Sample control can be maintained, as non-response generally remains low.
7. Unlike the mailed questionnaire, the interviewer can usually control which persons will answer the questions.
8. The interviewer can catch the respondent off-guard and thus record spontaneous reactions.
9. The language of the interview can be adapted to the education level of the respondent.
10. The interviewer can collect supplementary information about the respondent's personal characteristics and environment, which helps while interpreting results.
Disadvantages
1. It can be quite an expensive method, especially when a large and widespread geographical sample is taken.
2. The possibility of bias of interviewer and respondent is maximum.
3. Certain respondents, such as important officials, may not be approachable under this method.
4. It is time-consuming, especially when the sample is large and re-calls to respondents are to be made.
5. Sometimes the presence of the interviewer can over-stimulate the respondent, and he may give imaginary answers to make the interview interesting.
6. Under the interview method, the organization required for selecting, training and supervising the field staff is more complex, with formidable problems.
7. Interviewing at times may introduce systematic errors.
8. The interview presupposes a proper rapport with respondents for free and frank responses, which is not always possible.
2) TELEPHONE INTERVIEWS
This method of collecting information consists of contacting respondents on the telephone itself. It is not a very widely used method, but it plays an important role in industrial surveys in developed regions.
Advantages
1. It is more flexible in comparison to the mail method.
2. It is faster in obtaining information than other methods.
3. It is cheaper compared to personal interviews; the cost per response is very low.
4. Recall is easy; callbacks are economical and simple.
5. There is a higher rate of response than with the mailing method.
6. Replies can be recorded without causing embarrassment to respondents.
7. The interviewer can explain requirements more easily.
8. Access can be gained to respondents who otherwise cannot be contacted for one reason or another.
9. No field staff is required.
10. Wider distribution of the sample is possible.
Disadvantages
1. Little time is given to respondents to answer, as these interviews do not last for more than five minutes.
2. The survey is restricted to people who have telephones.
3. Cost plays a major part where extensive geographical coverage is required.
4. It is not suitable for interviews requiring comprehensive answers to various questions.
5. Some extent of interviewer's bias exists.
6. Questions have to be short, and probes are difficult to handle.
3) COMMERCIAL SURVEYS
Commercial surveys can be divided into three types: periodic, panel and shared surveys. Each of them is discussed below.
Periodic surveys
Periodic surveys are conducted at regular intervals, ranging from weekly to annual. They use a new sample of respondents for each survey, focusing on the same topic and allowing the analysis of trends over a period. Periodic surveys are conducted by mail, personal interview and telephone.
The disadvantage here could be that when periodic surveys are conducted at known intervals, they might affect the behavior being measured.
An example of this kind of survey could be TRPs (television rating points).
Panel surveys
Panel surveys, sometimes called interval panels, are conducted among a group of respondents who have agreed to respond to a number of mail, telephone or occasionally personal interviews over time. These need not occur regularly. A continuous panel, or panel data (explained more under panels), refers to a group of individuals who agree to report specified behaviors over time. The advantages of this method are:
The research firm initially collects all the personal information about the respondents and does not waste time collecting this information again during interviews. This increases the quality of the research data.
The response rate can be as high as 70% - 90%.
Shared surveys
Shared surveys, sometimes referred to as omnibus surveys, are administered by a research firm and consist of questions supplied by multiple clients. Such surveys can involve mail, telephone, or personal interviews. The respondents may be drawn from either an interval panel or random selection. The main advantage here is the cost factor.
4) AUDITS
Audits involve the physical inspection of inventories, sales receipts, shelf facings and other aspects of the marketing mix to determine sales, market share, relative price, distribution and other relevant information. The different types of audits are store audits, product audits and retail distribution audits.
Store audits
The basis for the store audit of retail store sales is the simple accounting arithmetic:
Opening inventory + Net purchases (receipts - transfers out - returned inventory + transfers in) - Closing inventory = Sales
These audits provide sales data on packaged products. The clients receive reports on the sales of their own brand and of competitors' brands, the resulting market shares, prices, shelf facings and other important information.
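The store-audit identity can be expressed directly in code. A minimal sketch in Python with hypothetical figures:

```python
# Sketch: the store-audit accounting identity.
# Sales = opening inventory + net purchases - closing inventory,
# where net purchases = receipts - transfers out - returns + transfers in.
def audited_sales(opening_inventory, receipts, transfers_out,
                  returned_inventory, transfers_in, closing_inventory):
    net_purchases = receipts - transfers_out - returned_inventory + transfers_in
    return opening_inventory + net_purchases - closing_inventory

# Hypothetical audit period: open with 500 units, receive 1,200,
# transfer out 50, take back 30 returns, receive 20 transfers in,
# and close with 440 units on hand.
print(audited_sales(500, 1200, 50, 30, 20, 440))  # 1200
```

Note that this figure counts breakage and pilferage as sales, a limitation that scanner-based retail panels (discussed below in the source notes) avoid.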
Product audits
Product audits are similar to store audits but focus on products rather than store samples. Although they provide information similar to that of store audits, they differ in that they try to cover all the types of retail outlets that handle a product category.
Retail distribution audits
Retail distribution audits are similar to store audits; however, these audits do not measure inventory or sales. Instead they are observational studies at the retail level. Field agents enter stores unannounced and without permission. They observe and record the brands present, prices, shelf facings and other relevant data for selected product categories.
5) PANELS
A panel is a group of individuals or organizations that have agreed to provide information to a researcher over a period of time. A continuous panel, the focus of this section, has agreed to report specified behaviors on a regular basis. There are two types of panels, retail and consumer; consumer panels are further divided into diary panels and electronic panels.
Retail panel
In this method data are collected from the checkout scanner tapes of a sample of supermarkets and other retailers that use electronic scanning systems. For this to happen the product should carry the Universal Product Code (UPC), often referred to as the bar code. The advantages of this method are:
1. Greater frequency
2. Elimination of breakage and pilferage being counted as sales
3. More accurate price information
The disadvantages are:
1. Only big supermarkets have scanners.
2. The quality of scanner data is dependent on the checkout clerk. For example, if a person buys five packets of packaged milk, the clerk may scan only one packet and multiply it by five, so the remaining four will not appear in the scanner data.
Consumer panels
Diary panels
A diary panel, as the name implies, is a panel of households who continuously record their purchases of selected products in a diary.
It is used for those product categories for which purchasing is frequent, such as food and personal care products.
Electronic panels
Electronic panels are composed of households whose television viewing behavior is recorded electronically. The sets were wired to household meters, which were connected to a central computer by a telephone line and automatically recorded when the set was turned on and the station to which it was tuned.
The problem here is that it is difficult to determine who was watching, how many people were watching, and what their demographics were.
6) MAIL QUESTIONNAIRE
Advantages
1. It is easier to approach a large number of respondents spread all over the world through the post.
2. A mail questionnaire is free from any interviewer's bias and errors, which may undermine the reliability and validity of the results emerging from the survey.
3. A mail questionnaire will not have any distribution bias, as it will not show any particular preference or dislike for a certain individual or household.
4. When the questions asked need time and some thinking to be answered, the mail questionnaire is ideal.
5. A mail questionnaire saves time in collecting the desired information, as a large number of respondents can be approached all over the country.
6. It saves money, as the costs of traveling, boarding and lodging of interviewers are not incurred.
7. There is no difficulty in having central supervision and control over the survey operations over a large region.
8. It avoids the bias arising from any inhibitions in answering questions. (For some personal questions the respondents may hesitate to answer in the presence of an interviewer.)
9. It will not have the problem of non-contacts in the strict sense, as might be the case in personal interviews when the interviewer finds that the respondent, being away from home, is not available.
Limitations
1. It is not suitable when questions are difficult and complicated. In such a case the help of an interviewer is required to offer some introductory explanation to the respondent.
2. It is not suitable when the objective is to get the spontaneous answers of the respondent, uninfluenced by others who may affect his thinking.
3. It is not possible to verify whether the respondent himself has filled in the questionnaire. (E.g., if a questionnaire is targeted at a housewife, she may ask her husband to fill it in on her behalf.)
This can result in incorrect answers.
4. In case there is any ambiguity or inconsistency in the answers, it will be difficult for the researcher to make use of such a questionnaire, as he has to accept the answers as given.
5. The respondent may go through his answers after he has filled in the entire questionnaire and may make certain modifications to his original answers, as a result of which these answers cannot be regarded as independent.
6. It does not allow the researcher to supplement the information with his personal observations.
7. A mail questionnaire normally has a relatively poor response compared to a questionnaire canvassed personally.

PRIMARY AND SECONDARY DATA SOURCES
Topics covered:
1. The Sources of Research Data
i. Nature of Secondary Data
ii. Internal Sources of Secondary Data
iii. External Sources of Secondary Data
2. Commercial Surveys, Audits & Panels
i. Commercial Surveys
ii. Audits
iii. Panels
3. Survey Research

THE SOURCES OF RESEARCH DATA

The design of the research project specifies both the data that are needed and how they are to be obtained. The first step in the data-collection process is to look for secondary data. These are data that were developed for some purpose other than helping to solve the problem at hand. The data that are still needed after that search is completed will have to be developed specifically for the research project and are known as primary data. The secondary data that are available are relatively quick and inexpensive to obtain, especially now that computerized bibliographic search services and databases are available. The various sources of secondary data, and how they can be obtained and used, are described ahead. Most secondary data are generated by specialized firms and are sold to marketers to help them deal with a category of problems. Nielsen's television ratings, which marketers use in making advertising decisions, are the best-known example. Many of these services, broadly categorized as audits, commercial surveys, and panels, allow some degree of customization and thus fall between secondary and primary data. These sources are treated in detail ahead. An important source of primary data is survey research. The various types of surveys (personal, mail, computer, and telephone) are described ahead. Experiments are another important source of data for marketing research projects. The nature of experimentation, the types of experimental designs, and the uses and limitations of this method of obtaining data are also explained ahead. Experiments are conducted either in a laboratory setting (most advertising copy pretests) or in a field setting (test marketing). Electronic and computer technologies have revolutionized both these environments, which are described later.
Secondary Data
In April 1991, Buick and its advertising agency, McMann-Erickson Worldwide, launched its new Roadmaster station wagon with a revolutionary new advertising approach.
A major component of the advertising for the Roadmaster is a print campaign with ads appearing in Time,
Newsweek, U.S. News & World Report, People, Sports Illustrated, Entertainment Weekly, and Money. However, not all subscribers will see these ads. In fact, only 4,940 of the more than 40,000 ZIP codes in the United States will receive the ads. Subscribers in these ZIP codes will not only have a chance to see the ads; their magazines will come with a personally addressed card inviting them to send for more information on the Roadmaster.

The target households, which are located mainly in affluent suburbs in the Northeast and Midwest, represent less than 20 percent of U.S. households. However, these households buy over 50 percent of all large station wagons. Buick was able to select the appropriate ZIP codes by using McMann-Erickson's McMapping database. McMapping is based on data from several syndicated sources as well as the U.S. Census. It describes ZIP codes (and larger areas) in terms of standard demographics, values, primary lifestyle and media use. It works by matching the characteristics of the firm's target market with the characteristics of ZIP code residents. McMapping does more than simply allow precise targeting of ads and efficient media buys. It also helps develop effective commercials. For example, a traditional system might describe the typical buyer of a specific pickup truck as a male between 25 and 54 with a household income of $30,000. The McMapping profile might add such information as: he lives alone, owns a dog, likes sports, which he often watches on cable with friends in a bar, and has a very macho self-image. Obviously, this added information would be invaluable in developing effective ad copy. In a few short years the increase in the number of commercially available databases and of computers on which to access them has brought about dramatic changes in the utilization of secondary data. In this chapter we describe this development and discuss the traditional sources of secondary data.
The Nature of Secondary Data
Primary data are data that are collected to help solve a problem or take advantage of an opportunity on which a decision is pending. Secondary data are data that were developed for some purpose other than helping to solve the problem at hand. Obviously, the U.S. Census was not conducted primarily to help target potential buyers of Buick station wagons.
However, as the opening example illustrates, Census data and other data collected for other purposes can be used to target potential buyers or for other business applications. Advantages of Secondary Data Secondary data can be gathered quickly and inexpensively, compared to primary data (data gathered specifically for the problem at hand). It clearly would have been foolish for Buick to collect information directly on the population characteristics, values and lifestyles of every ZIP code in the United States. Such data are already available and can be obtained much faster and at a fraction of the cost of collecting them again. Problems Encountered with Secondary Data Secondary data tend to cost substantially less than primary data and can be collected in less time. Why, then, do we ever bother with primary data? Before secondary data can be used as
the only source of information to help solve a marketing problem, they must be available, relevant, accurate, and sufficient. If one or more of these criteria are not met, primary data may have to be used.
Availability
For some marketing problems, no secondary data are available. For example, suppose J.C. Penney's management was interested in obtaining consumer evaluations of the physical layout of the company's current catalog as a guide for developing next year's catalog. It is unlikely that such information is available from secondary sources. It is probable that no other organization that had collected such data would be willing to make it available. Sears may have performed such a study to guide the development of its catalogs; it is, however, unlikely that a competitor would supply it to Penney's. In this case, the company would have to conduct interviews of consumers to obtain the desired information. Secondary data on the spending patterns, media preferences, and lifestyles of some population segments are very limited. For example, there is a shortage of data on African-Americans, Hispanics, and Asian-Americans.
Relevance
Relevance refers to the extent to which the data fit the information needs of the research problem. Even when data are available that cover the same general topic as that required by the research problem, they may not fit the requirements of the particular problem. Four general problems reduce the relevance of data that would otherwise be useful. First, there is often a difference in the units of measurement. For example, many retail decisions require detailed information on the characteristics of the population within the "trade area." However, available demographic statistics may be for counties, cities, census tracts, or ZIP code areas that do not match the trade area of the retail outlet. A second factor that can reduce the relevance of secondary data is the necessity in some applications to use surrogate data.
Surrogate data are a substitute for more desirable data. This was discussed earlier as surrogate information error. Had Buick had access only to data on new car purchases by ZIP code, they would have been much less relevant than data on purchases of new station wagons. A third general problem that can reduce the relevance of secondary data is the definition of classes. Social class, age, income, firm size, and similar category-type breakdowns found in secondary data frequently do not coincide with the exact requirements of the research problem. For example, Gallup and other public opinion polls frequently collect data on alcohol consumption and attitudes toward alcohol as part of their periodic surveys. Bacardi Imports would like to use this readily available data. Unfortunately, Gallup and most other polls define adults as individuals 18 and over, while Bacardi is interested in adults 21 and over. The differing definitions of classes are one reason Gallup estimates that 56 percent of adults “ever consume” alcoholic beverages, compared to the 70 percent indicated by Bacardi's surveys. The final major factor affecting relevancy is time. Generally, research problems require current data. Most secondary data, on the other hand, have been in existence for some time. For example, the Census of Retail Trade is conducted only every five years, and two years are
required to process and publish the results. A researcher using this source could easily be using data that are over four years old. This is becoming less of a problem as more and more data are being placed directly into electronic databases.
Accuracy
Accuracy is the third major concern of the user of secondary data. When using secondary data, the original source should be consulted if possible. This is important for two reasons. First, the original report is generally more complete than a second or third report. It often contains warnings, shortcomings, and methodological details not reported by the second or third source.
Sufficiency
Secondary data may be available, relevant, and accurate, but still may not be sufficient to meet all the data requirements for the problem being researched. For example, a database that contained accurate, current demographic information on the purchases of various brands and types of automobiles could still be insufficient in terms of providing information to assist in developing new products or advertisements.
Internal Sources of Secondary Data
Internal sources can be classified into four broad categories:
• accounting records
• sales force reports
• miscellaneous records
• internal experts
Accounting Records
The basis for accounting records concerned with sales is the sales invoice. The usual sales invoice has a sizable amount of information on it, which generally includes name of customer, location of customer, items ordered, quantities ordered, quantities shipped, dollar extensions, back orders, discounts allowed, and date. In addition, the invoice often contains information on sales territory, sales representative, and warehouse of shipment.
This information, when supplemented by data on costs and industry and product classification, as well as by data from sales calls, provides the basis for a comprehensive analysis of sales by product, customer, industry, geographic area, sales territory, and sales representative, as well as the profitability of each sales category. Unfortunately, most firms' accounting systems are designed primarily for tax reasons rather than for decision support.

Sales Force Reports

Sales force reports represent a rich and largely untapped potential source of marketing information. The word potential is used because evidence indicates that sales personnel do not generally report valuable marketing information. Sales personnel often lack the motivation and/or the means to communicate key information to marketing managers. To obtain the valuable data available from most sales forces, several
elements are necessary: (1) a clear, concise statement, repeated frequently, of the types of information desired; (2) a systematic, simple process for reporting the information; (3) financial and other rewards for reporting information; and (4) concrete examples of the actual use of the data.

Miscellaneous Reports

Miscellaneous reports represent the third internal data source. Previous marketing research studies, special audits, and reports purchased from outside for prior problems may have relevance for current problems. The more diversified a firm becomes, the more likely it is to conduct studies that may have relevance to problems in other areas of the firm. For example, P&G sells a variety of distinct products to identical or similar target markets. An analysis of media habits conducted for one product could be very useful for a different product that appeals to the same target market. Again, this requires an efficient marketing information system to ensure that those who need them can find the relevant reports.

Internal Experts

One of the most overlooked sources of internal secondary data is internal experts. An internal expert is anyone employed by the firm who has special knowledge related to the question at hand. The following statement by a senior research manager at a major consumer goods firm describes why his organization developed a research reports library and how they ensure its use:

On the average, each brand is assigned a new brand manager every two years. These brand managers are young, aspiring, talented MBA-types and they believe in the value of marketing research. They also know that their own upward mobility is pegged to the mark they leave on the brand. So, the first thing they require is marketing research: segmentation studies or attitude/usage surveys, typically followed by lots of qualitative studies in the copy concept or positioning/ad strategy areas. Hell, for most brands you don't need new segmentation or positioning studies every two years!
Go to the file and find the last one done; learn from it before you decide a new study is required. The same is true for copy concept issues. If the concept is worth a damn, it has been researched before. Reuse data, stretch it out to the max, and reserve your budget for truly new, necessary primary studies. That's why we developed our "research library." Everything we have ever done is in there, including subsequent actions and results. And, it is organized for easy access. Now it is company policy that any research request has to include proof that the library has already been searched and found lacking before any new research can be conducted!

While this knowledge is stored in individuals' minds rather than on paper or computer disk, it can be as valid and valuable as more formal sources. Had the marketing manager quickly asked the most obvious internal experts, members of the sales force, to explain the sales decline, work on a competitive new product could have begun almost a year earlier. In addition to the sales force, companies have discovered that marketing research personnel, technical representatives, advertising agency personnel, product managers, and public relations personnel often have expert knowledge of relevance to marketing problems.
External Sources of Secondary Data

Numerous sources external to the firm may have data relevant to the firm's requirements. Seven general categories of external secondary information are described in the sections that follow: (1) computerized databases, (2) associations, (3) government agencies, (4) syndicated services, (5) directories, (6) other published sources, and (7) external experts.

Databases

A computerized database is a collection of numeric data and/or information that is made available in computer-readable form for electronic distribution. There are more than 3,500 databases available from over 550 on-line service enterprises. They are useful for bibliographic search, site location, media planning, market planning, forecasting, and many other purposes of interest to marketing researchers.

Associations

Associations frequently publish or maintain detailed information on industry sales, operating characteristics, growth patterns, and the like. Furthermore, they may conduct special studies of factors relevant to their industry. These materials may be published in the form of annual reports, as part of a regular trade journal, or as special reports. In some cases, they are available only on request from the association. Most libraries maintain reference works, such as the Encyclopedia of Associations, that list the various associations and provide a statement of the scope of their activities.

Government Agencies

Federal, state, and local government agencies produce a massive amount of data that are of relevance to marketers. In this section, the nature of the data produced by the federal government is briefly described. However, the researcher should not overlook state and local government data. There are also a number of specialized analytic and research agencies, numerous administrative and regulatory agencies, and special committees and reports of the judicial and legislative branches of the government.
These sources produce five broad types of data of interest to marketers: data on (1) population, housing, and income; (2) agricultural, industrial, and commercial product sales of manufacturers, wholesalers, retailers, and service organizations; (3) financial and other characteristics of firms; (4) employment; and (5) miscellaneous reports.

Syndicated Services

A wide array of data on both consumer and industrial markets is collected and sold by commercial organizations.

Directories

Any sound marketing strategy requires an understanding of existing and potential competitors and customers. Suppose you were asked to prepare a report on the forest products industry, to
aid your organization in developing a sales and marketing approach to lumber manufacturers. A number of services and directories would prove useful. A general industry directory such as Thomas Register of American Manufacturers is a good starting place. This sixteen-volume set lists manufacturers' products and services by product category. It provides each company's name, address, telephone number, and an estimate of its asset size. It also contains an extensive trademark listing and samples of company catalogs.

Other Published Sources

There is a virtually endless array of periodicals, books, dissertations, special reports, newspapers, and the like that contain information relevant to marketing decisions.

External Experts

External experts are individuals outside your organization whose job provides them with expertise on your industry or activity. State and federal government officials associated with the industry, trade association officials, editors and writers for trade publications, financial analysts focusing on the industry, government and university researchers, and distributors often have expert knowledge relevant to marketing problems.

COMMERCIAL SURVEYS, AUDITS, AND PANELS

Commercial Surveys

Commercial surveys are conducted by research organizations and fall into three categories: periodic, panel, and shared. Periodic surveys measure the same attitudes, knowledge, and/or behaviors using different samples at regular points in time. Panel surveys generally measure differing attitudes, knowledge, and/or behaviors using the same basic set of respondents at either regular or irregular time intervals. Finally, shared surveys are administered by a research firm and are composed of questions submitted by multiple clients.

Periodic Surveys

Periodic surveys are conducted at regular intervals, ranging from weekly to annually.
They use a new sample of respondents (individuals, households, or stores) for each survey, focusing on the same topic and allowing the analysis of trends over time, though changes in individual respondents cannot be traced. These surveys cover topics ranging from values to media usage and food preparation. Periodic surveys are conducted by mail, personal interview, and telephone. They are subject to all of the problems of questionnaire design, sampling, and survey method that affect custom surveys. In addition, when periodic surveys are conducted at known intervals, they may affect the behavior being measured. For example, periodic surveys are used to measure television viewing. Telecasters have responded by scheduling specials and particularly popular shows to coincide with these surveys.

Panel Surveys
Panel surveys, sometimes called interval panels, are conducted among a group of respondents who have agreed to respond to a number of mail, telephone, or, occasionally, personal interviews over time. The interviews may cover virtually any topic and need not occur on a regular basis. In contrast, a continuous panel (the source of panel data) is a group of individuals who agree to report specified behaviors over time.

In an interval panel, the research firm initially gathers detailed data on each respondent, including demographics and attitudinal and product-ownership items. Because the researchers need not collect these basic demographic data again, they can obtain more relevant information from each respondent. These basic data also allow researchers to select very specific samples. For example, a researcher can select only those families within a panel that have one or more daughters between the ages of 12 and 16, or that own a dog, or that wear contact lenses. This ability to select offers tremendous savings over a random survey procedure if a study is to be made for a product for teenage girls, dog owners, contact lens wearers, and so on.

It is possible to survey the same interval panel members several times to monitor changes in their attitudes and purchase behavior in response to changes in the firm's or a competitor's marketing mix. However, interval panels are used more often for cross-sectional (one-time) surveys. A major advantage is the high response rate obtained by most interval panels; return rates in the range of 70 to 90 percent are common. In addition, the firm does not have to generate a sampling frame, a process that is both time consuming and costly. Finally, since panel members are convinced of the legitimacy of the firm maintaining the panel, they may supply more detailed and accurate data to both neutral and sensitive questions. Data are normally collected by mail, but telephone and personal interviews, and even focus groups, can be used.
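The sample-selection capability described above amounts to filtering the panel's stored profile data against the study's criteria. A minimal sketch in Python; the field names and records are hypothetical, not any research firm's actual panel layout:

```python
# Hypothetical interval-panel records: profile data gathered at enrollment.
panel = [
    {"household_id": 1, "daughters_12_to_16": 1, "owns_dog": False, "contact_lenses": True},
    {"household_id": 2, "daughters_12_to_16": 0, "owns_dog": True,  "contact_lenses": False},
    {"household_id": 3, "daughters_12_to_16": 2, "owns_dog": False, "contact_lenses": False},
]

def select_sample(panel, predicate):
    """Return only the households whose stored profile matches the criteria."""
    return [hh for hh in panel if predicate(hh)]

# Study of a product for teenage girls: families with 1+ daughters aged 12-16.
teen_girl_sample = select_sample(panel, lambda hh: hh["daughters_12_to_16"] >= 1)
print([hh["household_id"] for hh in teen_girl_sample])  # [1, 3]
```

Because the profile data already exist, no screening questions are needed to locate these households, which is the source of the savings over a random survey.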
Clients can survey the entire panel, a stratified random sample of the larger panel, or a specific type, size, or location category. Panel surveys obtain very high response rates. However, the response rate when individuals are initially asked to join a panel may be quite low; thus, panels do not eliminate nonresponse error. This issue is discussed in depth in the section on continuous panels.

Shared Surveys

Shared surveys, sometimes referred to as omnibus surveys, are administered by a research firm and consist of questions supplied by multiple clients. Such surveys can involve mail, telephone, or personal interviews. The respondents may be drawn either from an interval panel or randomly from the larger population. Shared surveys offer the client several advantages. First, since several clients share the fixed cost of sample design and most of the variable surveying costs, the cost per question is generally quite low. Second, since these data are collected frequently, responses can be obtained very quickly. This feature is helpful for measuring consumers' responses to competitive moves, adverse publicity, and environmental changes.

Audits
Audits involve the physical inspection of inventories, sales receipts, shelf facings, prices, and other aspects of the marketing mix to determine sales, market share, relative price, distribution, or other relevant information.

Store Audits

The simple accounting identity

opening inventory + net purchases (receipts - transfers out - returned inventory + transfers in) - closing inventory = sales

is the basis for the audit of retail store sales. The most widely used store audit service is the Nielsen Retail Index. It is based on audits every 30 or 60 days of a large national sample of food, drug, and mass merchandise stores. The index provides sales data on all the major packaged goods product lines carried by these stores: foods, pharmaceuticals, drug sundries, tobacco, beverages, and the like (but not soft goods or durables). Nielsen contracts with the stores to allow its auditors to conduct the audits and pays for that right by providing the stores with their own data plus cash. The clients receive reports on the sales of their own brand and of competitors' brands, the resulting market shares, prices, shelf facings, in-store promotional activity, stock-outs, retailer inventory and stock turn, and local advertising. These data are provided for the entire United States and by region, by size classes of stores, and by chains versus independents. The data are available to subscribers on-line via computer as well as in printed reports.

Product Audits

Product audits, such as Audits and Surveys' National Total Market Index, are similar to store audits but focus on products rather than store samples. Whereas product audits provide information similar to that provided by store audits, they attempt to cover all the types of retail outlets that handle a product category. Thus, a product audit for automotive wax would include grocery stores, mass merchandisers, and drugstores (in this way it is similar to the Nielsen store audits).
In addition, it would include automotive supply houses, filling stations, hardware stores, and other potential outlets for automotive wax.

Retail Distribution Audits

Similar to store audits are retail distribution audits or surveys. These surveys do not measure inventory or sales; instead, they are observational studies at the retail level. Field agents enter stores unannounced and without permission. They observe and record the brands present, price, shelf facings, and other relevant data for selected product categories.

Panels

A panel is a group of individuals or organizations that have agreed to provide information to a researcher over a period of time. A continuous panel, the focus of this section, has agreed to report specified behaviors on a regular basis.
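The store-audit accounting arithmetic described under Store Audits can be sketched in a few lines of Python; the figures are hypothetical:

```python
def audited_sales(opening_inventory, receipts, transfers_in,
                  transfers_out, returned_inventory, closing_inventory):
    """Store-audit identity: sales inferred from inventory movement.

    net purchases = receipts - transfers out - returned inventory + transfers in
    sales = opening inventory + net purchases - closing inventory

    Note: any breakage or pilferage is counted as sales by this identity,
    a known weakness of audit data relative to scanner data.
    """
    net_purchases = receipts - transfers_out - returned_inventory + transfers_in
    return opening_inventory + net_purchases - closing_inventory

# Hypothetical 30-day audit of one product line (units):
sales = audited_sales(opening_inventory=400, receipts=250, transfers_in=20,
                      transfers_out=30, returned_inventory=15,
                      closing_inventory=325)
print(sales)  # 300
```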
Retail Panels

A number of organizations offer services based on sales data from the checkout scanner tapes of a sample of supermarkets and other retailers that use electronic scanning systems. An estimated 99 percent of all packaged products in supermarkets carry the universal product code (UPC), often referred to as a bar code, and so are amenable to scanning. UPC codes are rapidly being expanded to soft goods and hardware; stores such as K mart, Wal-Mart, and Toys 'R' Us have installed or are installing scanners in all their outlets.

Scanning data have many applications in marketing research. Safeway Stores, for example, has a manager of scanner marketing research whose department conducts studies on such topics as price elasticities, placement of products in the stores, and the effects of in-store advertising. One such scanner test showed that the sales of candy bars increased 80 percent when they were put on front-end racks near the checkout stands. Another study indicated that foil-packaged sauce mixes sold better when they were placed near companion products (spaghetti sauce near the spaghetti, meat sauce in the refrigerated meat cases, and so on) rather than when they were displayed with other sauces.

Scanner data, as compared to store audit data, have the advantages of (1) greater frequency (weekly instead of bimonthly collection), (2) elimination of breakage and pilferage losses being counted as sales, and (3) more accurate price information. They have certain problems, however, including (1) only the larger supermarkets have scanners, and (2) the quality of the scanner data is heavily dependent upon the checkout clerks.

Consumer Panels

Continuous consumer panels allow firms to monitor shifts in individual or specific household behaviors or attitudes over time. This allows the firm to determine how its own or competitors' marketing mix changes affect specific consumers or market segments. Consumer panel data are collected either electronically by UPC scanners or in diaries.
Diary Panels

A diary panel, as the name implies, is a panel of households who continuously record in a diary their purchases of selected products. It is used for those product categories for which purchase is frequent: primarily food, household, and personal care products.

Electronic Panels

Electronic panels are composed of households whose television viewing behavior is recorded electronically. Nielsen Media Research is the main organization active in this area. Until recently, Nielsen used a national sample of homes with TV sets that were wired to household meters. The meters were connected to a central computer by telephone line and automatically recorded when the set was turned on and the station to which it was tuned (a separate sample reported individual viewing in diaries). A major problem with audience measurements obtained from such meters is that no information is provided on how many people, if any, are watching, or what their demographic characteristics are. A new kind of meter, called a people meter, has been developed to take care of this problem. It has a remote control coupled to the television meter that allows each of the family
members plus visitors (who also record their age and gender) to "log on" when he or she begins viewing by punching an identifying button. This information is downloaded via a telephone line to a central computer where the demographics of the household members are stored. Thus viewing by demographic segments can be determined. While there is considerable controversy over the accuracy of people meters (the networks feel they underestimate the number of viewers), they appear to be superior to the available alternatives.

Single-Source Data

Single-source data are continuous data derived from the same respondent or household, covering at least television viewing and UPC product purchase information. In general, the data are collected electronically and also contain in-store data such as price level, coupon use, and so forth. The advantages of such a system are substantial, as it can produce virtually real-time measures of advertising effectiveness, the effects of repetition, product changes, and so forth.

Applications of Commercial Surveys, Audits, and Panels

Retail Sales

Retail sales data are available from both audits and scanner-based retail panels. Scanner panels provide more current data at shorter time intervals than do audits. However, audits cover outlets not equipped with scanners. Scanner data are particularly useful to both retailers and manufacturers for measuring the aggregate impact of coupons, in-store promotions, point-of-purchase displays, price discounts, and so forth. Measuring only the sales of the promoted brand might lead a manager to conclude that the fifth most popular (least popular) brand should be promoted. However, an analysis of category sales reveals that sales increases of minor brands on sale come as a result of cannibalizing the more popular brands. In contrast, price reductions on the leading brands appear to increase overall category sales.
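The category-level check just described is simple to run on scanner-style data: compare the promoted brand's lift with the change in total category sales. A minimal sketch; the brand labels and unit figures are hypothetical:

```python
# Hypothetical weekly scanner sales (units) before and during a promotion
# on "E", the least popular of five brands in the category.
before = {"A": 500, "B": 300, "C": 200, "D": 120, "E": 80}
during = {"A": 440, "B": 265, "C": 195, "D": 115, "E": 170}

brand_lift = during["E"] - before["E"]                     # promoted brand only
category_change = sum(during.values()) - sum(before.values())

print(f"Promoted-brand lift: {brand_lift:+d} units")       # +90
print(f"Category change:     {category_change:+d} units")  # -15
# Brand E gained 90 units, yet the category was essentially flat:
# the "lift" came almost entirely from cannibalizing brands A-D.
```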
Household Purchases

Data on household consumption are available from both diary-based and scanner-based household panels. Household consumption data allow the firm to monitor shifts in an individual's or market segment's purchasing patterns over time. This allows the firm to evaluate the effects of both its own and its competitors' marketing activities on specific market segments. For example, if a competitor introduces a larger package, the firm can tell what type of people (in demographic and product-usage characteristics) and how many are switching to the new size. Household panel data also serve as an important basis for forecasting the sales level or market share of a new product. A new product will often attract a number of purchasers simply because it is new. However, its ultimate success depends on how many of these initial purchasers become repeat purchasers.
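Panel-based new-product forecasts typically combine the trial behavior and repeat behavior just described. The sketch below is a deliberately simplified trial-and-repeat share estimate; the model form and every figure are illustrative assumptions, not the method of any specific commercial service:

```python
def long_run_share(trial_rate, repeat_rate, usage_index=1.0):
    """Simplified trial/repeat share estimate from household panel data.

    trial_rate:  fraction of panel households that ever buy the new product
    repeat_rate: the new product's share of triers' subsequent category purchases
    usage_index: triers' category usage relative to the average household
    """
    return trial_rate * repeat_rate * usage_index

# Illustrative panel readings: 30% trial, 40% repeat, average-usage triers.
share = long_run_share(0.30, 0.40)
print(f"Projected long-run share: {share:.1%}")  # 12.0%
```

The point of the calculation is the one the text makes: high trial alone (the novelty effect) does not sustain a brand; a low repeat rate pulls the projected share down proportionally.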

Media Usage

Given the billions spent on advertising, it is not surprising that substantial effort is expended to measure media usage.
Attitudes/Knowledge/Behaviors

Commercial surveys, both periodic and panel-based, are the primary general sources of data on consumer attitudes, knowledge, and behavior. For example, a firm desiring to improve or alter its corporate image could engage in a variety of advertising and public relations programs in different regions of the country. Using one of the weekly shared-interview services, it could economically determine the relative impact of each approach over time.

SURVEY RESEARCH

THE NATURE OF SURVEY RESEARCH

Survey research is the systematic gathering of information from respondents for the purpose of understanding and/or predicting some aspect of the behavior of the population of interest. As the term is typically used, it implies that the information has been gathered with some version of a questionnaire. The administration of a questionnaire to an individual or group of individuals is called an interview.

TYPES OF INTERVIEWS

Interviews are classified according to their degree of structure and directness. Structure refers to the amount of freedom the interviewer has in altering the questionnaire to meet the unique situation posed by each interview. Directness involves the extent to which the respondent is aware of (or is likely to be aware of) the nature and purpose of the survey.

Characteristics of Structured and Unstructured Interviews

As stated earlier, the degree of structure refers to the extent to which an interviewer is restricted to following the wording and instructions in a questionnaire. Interviewer bias tends to be at a minimum in structured interviews. In addition, it is possible to use less skilled (and less expensive) interviewers with a structured format because their duties are basically confined to reading questions and recording answers. These advantages of structured interviews may be purchased at the expense of the richer or more complete information that skillful interviewers could elicit if allowed more freedom.
Relatively unstructured interviews become more important in marketing surveys as less is known about the variables being investigated. Thus, unstructured techniques are used in exploratory surveys and for investigating complex or unstructured topic areas, such as personal values and purchase motivations.

Characteristics of Direct and Indirect Interviews

Direct interviewing involves asking questions such that the respondent is aware of the underlying purpose of the survey. Most marketing surveys are relatively direct. That is, although the name of the sponsoring firm is frequently kept anonymous, the general area of interest is often obvious to the respondent. Direct questions are generally easy for the respondent to answer, tend to have the same meaning across respondents, and have responses that are relatively easy to interpret. However, occasions may arise when respondents are either unable or unwilling to answer direct questions. For example, respondents may not be able to
verbalize their subconscious reasons for purchases, or they may not want to admit that certain purchases were made for socially unacceptable reasons. In these cases, some form of indirect interviewing is required. Indirect interviewing, often referred to as disguised interviewing, involves asking questions such that the respondent does not know what the objective of the study is. A person who is asked to describe the "typical person" who rides a motorcycle to work may not be aware that the resulting description is a measure of his or her own attitudes toward motorcycles and their use.

Both structure and directness represent continuums rather than discrete categories. However, it is sometimes useful to categorize surveys based on which end of each continuum they are nearest. This leads to four types of interviews: structured-direct, structured-indirect, unstructured-direct, and unstructured-indirect.

TYPES OF SURVEYS

Surveys are generally classified according to the method of communication used in the interviews: personal, telephone, mail, or computer. Personal interviews can be further broken into mall intercept and door-to-door categories; computer interviews are the least common of the four. Each of the methods is briefly described in the following sections.

Personal Interviews

Personal interviews are widely used in marketing research. In a personal interview, the interviewer asks the questions of the respondent in a face-to-face situation. The interview may take place at the respondent's home or at a central location, such as a shopping mall or a research office. Mall intercept interviews are the predominant type of personal interview. The popularity of this type of personal interview is the result of its cost advantage over door-to-door interviewing, the ability to demonstrate products or use equipment that cannot be easily transported, greater supervision of interviewers, and less elapsed time required.
Mall intercept interviews involve stopping shoppers in a shopping mall at random, qualifying them if necessary, inviting them into the research firm's interviewing facilities located at the mall, and conducting the interview. Qualifying a respondent means ensuring that the respondent meets the sampling criteria. This could involve a quota sample, where there is a desire to interview a given number of people with certain demographic characteristics such as age and gender. Or it could involve ensuring that all the respondents use the product category being investigated. Shopping mall interviews generally take place inside special facilities in the center that are operated by a commercial research firm. These facilities make possible a variety of interview formats not available when the interviews are conducted door-to-door. Individuals who visit malls are not representative of the entire population. An additional problem with intercept interviews at malls where research firms maintain permanent facilities is "respondent burnout." That is, a significant portion of a given mall's customers shop at the mall
regularly. Over time, these regular shoppers will be randomly selected into numerous studies. Both their willingness to cooperate and the nature of their responses will change as they participate in more and more studies. Intercept interviews are not limited to shopping malls. Increasingly, intercept interviews are conducted at locations relevant to the population of interest. An emerging type of personal interviewing is the in-store intercept. In-store intercept interviews involve interviewing individuals inside retail outlets, generally immediately after they have purchased the product category in question. One version of this approach is the purchase intercept technique (PIT).

Telephone Interviews

Telephone interviews involve the presentation of the questionnaire by telephone. Computer-assisted telephone interviewing (CATI) dominates large-scale telephone interviews. A stand-alone CATI system involves programming a survey directly in one or more personal computers. The telephone interviewer then reads the questions from a television-type screen and records the answers directly on the terminal keyboard or directly on the screen with a light pen. The flexibility associated with the computer provides a number of advantages. Often the exact set of questions a respondent is to receive will depend on answers to earlier questions. For example, individuals who have a child under age three might receive one set of questions concerning food purchases, whereas other individuals would receive a different set. The computer, in effect, allows the creation of an "individualized" questionnaire for each respondent based on answers to prior questions. A second advantage is the ability of the computer to present different versions of the same question automatically. For example, when asking people to answer questions that have several stated alternatives, it is desirable to rotate the order in which the alternatives are presented. This is easy with a CATI system.
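The branching and rotation features just described amount to conditional routing and randomized presentation. A hedged sketch of both ideas, not any vendor's actual CATI software; the questions and routing rule are hypothetical:

```python
import random

def next_questions(answers):
    """Branching: route the respondent based on an earlier screening answer."""
    if answers.get("child_under_3") == "yes":
        return ["Which baby-food brands have you bought this month?"]
    return ["Which frozen-dinner brands have you bought this month?"]

def rotated_alternatives(alternatives, seed=None):
    """Rotation: present multiple-choice alternatives in a varying order."""
    rng = random.Random(seed)
    rotated = list(alternatives)
    rng.shuffle(rotated)
    return rotated

# Branching: the screening answer selects the follow-up question set.
print(next_questions({"child_under_3": "yes"}))

# Rotation: each respondent can see the alternatives in a different order.
print(rotated_alternatives(["Brand A", "Brand B", "Brand C"]))
```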
Another advantage of CATI systems is the ease and speed with which a bad question can be changed or a new question added. CATI systems also edit data as they are entered. That is, the computer can be programmed to highlight inconsistent answers across questions, to refuse answers outside a defined range, to ensure that constant-sum question responses total properly, and so forth. Finally, data can easily be analyzed and interim reports issued. Interim reports may allow one to stop a survey if the "answer" becomes clear before the scheduled number of interviews has been completed. Final reports can also be produced rapidly.

Mail Interviews

Mail interviews may be delivered in any of several ways. Generally, they are mailed to the respondent and the completed questionnaire is returned by mail to the researcher. However, the forms can be dropped off and/or picked up by company personnel. They can also be distributed by means of magazine and newspaper inserts, or they can be attached to products. The warranty card attached to most consumer products is a useful source of survey data for many manufacturers.

CRITERIA FOR THE SELECTION OF A SURVEY METHOD
A number of criteria are relevant for judging which type of survey to use in a particular situation: (1) complexity, (2) required amount of data, (3) desired accuracy, (4) sample control, (5) time requirements, (6) acceptable level of nonresponse, and (7) cost.

Complexity of the Questionnaire

Although researchers generally attempt to minimize complexity, some subject areas still require relatively complex questionnaires. For example, the sequence or number of questions asked often depends on the answers to previous questions. A respondent seeing a questionnaire of this type for the first time can easily become confused or discouraged. Thus, computer, personal, and telephone interviews are better suited to collect this type of information than are mail interviews. Other aspects of complexity also tend to favor the use of personal or computer interviews. Visual cues are necessary for many projective techniques, such as the picture response. Multiple-choice questions often require a visual presentation of the alternatives because the respondent cannot remember more than a few when they are presented orally. However, most attitude scales can be administered via the phone. The telephone, and often the mail, are inappropriate for studies that require the respondent to react to the actual product, advertising copy, package design, or other physical characteristics. Techniques that require relatively complex instructions are best administered by means of personal interviews. Similarly, if the response required by the technique is extensive, as with many conjoint analysis studies, personal interviews are better, with computer interviews second.

Amount of Data

Closely related to the issue of complexity is the amount of data to be generated by a given questionnaire. The amount of data actually involves two separate issues: (1) How much time will it take to complete the entire questionnaire? (2) How much effort is required by the respondent to complete the questionnaire?
For example, one open-ended question may take a respondent five minutes to answer, and a 25-item multiple-choice questionnaire may take the same length of time. Moreover, much more effort may go into writing a five-minute essay than into checking off choices on 25 multiple-choice questions. Personal interviews can, in general, be longer than other types. Social motives play an important role in personal interviews: it would be "impolite" to terminate an interview with someone in a face-to-face situation.

Accuracy of the Resultant Data
The accuracy of data obtained by surveys can be affected by a number of factors, such as interviewer effects, sampling effects, and effects caused by questionnaire design. In this section, we are concerned with errors induced by the survey method itself, particularly responses to sensitive questions and interviewer effects.

Sensitive Questions
Personal interviews and, to a lesser extent, telephone interviews involve social interaction between the respondent and the interviewer. Therefore, there is concern that the respondent may not accurately answer potentially embarrassing questions or questions with socially desirable responses. Since mail and computer interviews reduce social interaction, it is often assumed that they will yield more accurate responses. However, research indicates that well-constructed and well-administered questionnaires will generally yield similar results regardless of the method of administration, unless very sensitive topics such as illicit drug use are being investigated.

Interviewer Effects
The ability of interviewers to alter questions, their appearance, their manner of speaking, the intentional and unintentional cues they provide, and the way they probe can be a disadvantage. It means that, in effect, each respondent may receive a slightly different interview. Depending on the topic of the survey, the interviewer's social class, age, sex, race, authority, training, expectations, opinions, and voice can affect the results. The danger of interviewer effects is greatest in personal interviews. Telephone interviews are also subject to interviewer effects, though to a lesser degree. Mail and computer surveys have minimal interviewer effects. Questionnaire designs that minimize interviewer freedom also reduce the potential for interviewer bias. The most effective approach involves the skillful selection, training, and control of interviewers.
However, after the most cost-effective design principles have been applied, some interviewer bias is apt to remain. This should be estimated subjectively or, preferably, statistically. One final problem that arises with the use of telephone and personal interviews is interviewer cheating: for various reasons, interviewers may falsify all or parts of an interview. This is a severe enough problem that most commercial survey researchers engage in a process called validation or verification. Validation involves reinterviewing a sample of the population that completed the initial interview. In this reinterview, verification is sought that the interview took place and was conducted properly and completely.

Sample Control


Each of the four interview techniques allows substantially different levels of control over who is interviewed. Personal interviews offer the most potential for control over the sample. An explicit list of individuals or households is not required. Although such lists are desirable, various forms of area sampling can help the researcher overcome most of the problems caused by the absence of a complete sampling frame. In addition, the researcher can control who is interviewed within the sampling unit and how much assistance from other members of the unit is permitted. Controlling who within the household is interviewed can be expensive. If the purpose of the research is to investigate household behavior, such as appliance ownership, any available adult will probably be satisfactory. However, if the purpose is to investigate individual behavior, interviewing the most readily available adult within the household will often produce a biased sample. Thus, the researcher must select randomly from among those living at each household. The simplest means of selection is to interview the adult who last had (or next will have) a birthday. The odds of any household member being at home are substantially larger than the odds of a specific household member being available. This means that there will be more "not-at-homes," which will increase interviewing costs substantially. Personal and computer interviews conducted in central locations, such as shopping malls, lose much of the control possible with in-home interviews because the interview is limited to the individuals who visit the shopping mall. Mail questionnaires require an explicit sampling frame composed of addresses, if not names and addresses. Such lists are generally unavailable for the general population.
Lists of specialized groups are more readily available. However, even with a good mailing list, the researcher maintains only limited control over who at the mailing address completes the questionnaire. Different family members frequently provide divergent answers to the same question. Although researchers can address the questionnaire to a specific household member, they cannot be sure who completes it. Mailings to organizations have similar problems. It is difficult to determine an individual's sphere of responsibility from his or her job title. In some firms the purchasing agent may set the criteria by which brands are chosen, whereas in other firms this is either a committee decision or is made by the person who actually uses the product in question. Thus, a mailing addressed to a specific individual or job title may not reach the individual who is most relevant for the survey. In addition, busy executives may pass a questionnaire on to others who are not as qualified to complete it. Telephone surveys are obviously limited to households with direct access to a telephone. However, the fact that telephones are almost universally owned does not mean that lists of telephone numbers, such as telephone directories, are equally complete. As the current telephone directory becomes older, the percentage of households with unlisted numbers increases because of new families moving into the area and others moving within the area.

Random Digit Dialing


To ensure more representative samples, researchers generally utilize some form of random digit dialing. This technique requires that at least some of the digits of each sample phone number be generated randomly. A primary problem with pure random digit dialing is that only about 20 percent of all numbers within working prefixes are actually connected to home phones. A variety of techniques have been developed to minimize this problem. The most popular technique, plus-one or add-a-digit dialing, simply requires the researcher to select a sample from an existing directory and add one to each number thus selected. Although the technique is more expensive than a sample selected directly from a directory, and it has a higher refusal rate, it produces a high contact rate and a fairly representative sample.
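The plus-one procedure is simple enough to sketch in a few lines. The sketch below is illustrative only: the directory numbers are hypothetical seven-digit listings, and a real implementation would work with full area-code-and-prefix numbers.

```python
import random

def plus_one_sample(directory_numbers, n, seed=None):
    """Plus-one dialing: draw n listed numbers from a directory,
    then add one to each, so that nearby unlisted numbers can
    enter the sample."""
    rng = random.Random(seed)
    drawn = rng.sample(directory_numbers, n)
    return [number + 1 for number in drawn]

# Hypothetical seven-digit listed numbers within one working prefix.
directory = [5550100, 5550123, 5550187, 5550240, 5550311, 5550468]
print(plus_one_sample(directory, 3, seed=1))
```

Because each generated number is one more than a listed number, the sample stays within working prefixes (keeping the contact rate high) while still reaching unlisted households.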

Time Requirements
Telephone surveys generally require the least total time for completion. In addition, it is generally easier to hire, train, control, and coordinate telephone interviewers. Therefore, the number of interviewers can often be expanded until any time constraint is satisfied. The number of personal and computer interviewers can also be increased to reduce the total time required. However, problems with training, coordinating, and control tend to make this uneconomical after a certain point. Because "at-home" interviewers must travel between interviews and often set up appointments, such interviews take substantially more time than telephone interviews. However, mall-intercept interviews can be done fairly rapidly.

Reducing Nonresponse in Telephone and Personal Surveys
Nonresponse error is a potential problem for telephone, personal, and computer interviews. Not-at-homes and refusals are the major factors that reduce response rates. The major focus in reducing nonresponse in telephone and personal interview situations has centered on contacting the potential respondent. This was based on the belief that the social motives present in a face-to-face or verbal interaction operate to minimize refusals. However, refusal rates are increasing for both personal and telephone interviews. Therefore, researchers must focus attention on gaining cooperation from, as well as making contact with, potential respondents.

Contacting Respondents
The percentage of not-at-homes in personal and telephone surveys can be reduced drastically with a series of callbacks. In general, the second round of calls will produce only slightly fewer contacts than the first call. The minimum number of calls in most consumer surveys should be three, and callbacks should generally be made at varying times of the day and on different days of the week.
There is, as one might suspect, a definite relationship between both the day of the week and the time of day and the completion rate of telephone and personal interviews.
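The callback rule above (at least three attempts, spread over different days of the week and different times of day) can be sketched as a small scheduler. The day and slot labels are hypothetical; a fieldwork system would draw them from its own calendar.

```python
from itertools import product

DAYS = ["Mon", "Tue", "Wed", "Thu", "Sat"]      # hypothetical calling days
SLOTS = ["morning", "afternoon", "evening"]      # hypothetical time slots

def callback_schedule(attempts=3):
    """Pick (day, slot) pairs so that no two attempts for the same
    household share a day of the week or a time of day."""
    schedule, used_days, used_slots = [], set(), set()
    for day, slot in product(DAYS, SLOTS):
        if day not in used_days and slot not in used_slots:
            schedule.append((day, slot))
            used_days.add(day)
            used_slots.add(slot)
        if len(schedule) == attempts:
            break
    return schedule

print(callback_schedule())
# [('Mon', 'morning'), ('Tue', 'afternoon'), ('Wed', 'evening')]
```

Varying both the day and the time slot across attempts is what raises the odds of eventually finding the household at home.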

Commercial survey research firms vary widely in the number of times they allow a phone to ring before dialing the next number. Some allow only three rings, whereas others go as high as ten. One study indicates that five rings may be optimal.

Motivating Respondents
Refusals are a problem in telephone and personal surveys. Most refusals occur immediately after the introductory remarks of the interviewer; after they begin, very few interviews are terminated prior to completion. Likewise, the length of the interview has a significant impact. The gender of the interviewer does not appear to affect the refusal rate, but characteristics of the interviewer's voice do. Prior notification by letter lowers the refusal rate for telephone surveys. Likewise, prior notification by telephone increases the cooperation rate for an at-home personal interview survey. The sponsor of the survey affects telephone response rates, with the rate being higher for university and charity sponsors than for commercial sponsors. The promise of a large monetary incentive ($10) was effective in generating a high response rate to a telephone survey that required respondents to agree to watch a specific television program. Attempts to gain cooperation for long or complicated interviews occasionally use the foot-in-the-door technique. This technique involves two stages. First, respondents are asked to complete a relatively short, simple questionnaire. Then, at a later time, they are asked to complete a more complex questionnaire on the same topic. This technique generally produces at least a small gain in the response rate. However, given the added expense this involves in telephone and personal interviews, concentrating on introduction techniques and callbacks may provide a higher payoff. Refusal conversion, or persuasion, has been found to increase the overall response rate by an average of 7 percent. This involves not accepting a "no" response to a request for cooperation without making an additional plea.
The additional plea can stress the importance of the respondent's opinions or the brevity of the questionnaire. It may also involve offering to recontact the individual at a more convenient time. Finally, the time of day that contact is made appears to influence the refusal rate. Paradoxically, while evening is the optimal time to find respondents at home, it also generates the highest level of refusals.

NONRESPONSE IN MAIL SURVEYS
Predicting Response
Most mail surveys produce similar response patterns. However, the speed of response and the ultimate percentage responding can vary widely. Researchers can conduct a small-scale preliminary mailing to a subsample of their target respondents. If a pilot study is not practical, perhaps because of time pressure, the observed response pattern to earlier similar surveys among similar respondents using similar response inducements can be used.
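The pilot-mailing idea can be expressed as a small projection: observe the weekly return pattern in the pilot, then scale it to the full mailing. The figures below are invented for illustration.

```python
def project_returns(pilot_weekly_returns, pilot_size, full_mailing_size):
    """Scale a pilot's cumulative response pattern up to the full mailing
    and report the ultimate response rate observed in the pilot."""
    cumulative, total = [], 0
    for week_count in pilot_weekly_returns:
        total += week_count
        cumulative.append(total)
    ultimate_rate = total / pilot_size
    projected = [round(c / pilot_size * full_mailing_size) for c in cumulative]
    return ultimate_rate, projected

# Hypothetical pilot: 200 questionnaires mailed, returns tallied weekly.
rate, weekly = project_returns([38, 22, 8, 4], pilot_size=200,
                               full_mailing_size=5000)
print(rate)    # 0.36
print(weekly)  # [950, 1500, 1700, 1800]
```

The projection assumes the full mailing behaves like the pilot, which is exactly the assumption the text makes when no pilot is possible and an earlier, similar survey's pattern is substituted.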

Reducing Nonresponse
Attempts to increase the response rate to mail surveys focus on increasing the potential respondents' motivation to reply. Two complementary approaches are frequently used. The first is to increase the motivation as much as possible in the initial contacts with respondents. The second is to remind the respondents through repeated mailings or other contacts. The initial response rate to a mail survey is strongly influenced by the respondents' interest in the subject matter of the survey. Interest level can therefore be a serious source of nonresponse bias in the survey results. Pre-notification, such as a letter or telephone call that informs the respondents that they will receive a questionnaire shortly and requests cooperation, is a cost-effective means of increasing response rates. In the absence of monetary inducements, a number of studies have found pre-notification to double the response rate obtained without it. This technique works best with the general public, but it is also effective in industrial surveys. Evidence suggests that a preliminary letter or card is more effective than a preliminary phone call. The type of postage has a moderate impact on the response rate. First-class, hand-stamped outgoing and return envelopes produce higher response rates than do metered, second-class, or business-reply envelopes. This impact is greatest for return envelopes, where it is clearly a cost-effective technique. Prepaid monetary incentives (cash) cause substantial increases in response rates in both commercial and general-public populations, although large incentives have a stronger effect than smaller ones. Lottery incentives have been found to have mixed results. Other types of incentives are generally more effective. The effect of gift incentives such as pens or key rings is generally positive but very moderate.
Like cash incentives, gift incentives lose most or all of their effectiveness when they are promised rather than provided with the questionnaire. The degree of personalization and the related variables of respondent anonymity and assurances of confidentiality produce variable effects on both response rates and accuracy. Personalization appears generally to increase response rates on non-sensitive issues, whereas assurances of anonymity or confidentiality are most effective on questionnaires dealing with personally important or sensitive issues. However, these effects are generally small. The identity of the survey sponsor influences the response rate, with commercial sponsors generally receiving a lower response rate than noncommercial sponsors.


The type of appeal used in the cover letter can take a number of forms, such as egoistic (your opinion is important), altruistic (please help us), social utility (your opinion can help the community), or negative (if the questionnaire is not returned by a certain date, a telephone call or personal follow-up will result). Evidence indicates that the "best" appeal depends on the nature of the sponsor and the purpose of the study, though negative appeals appear to be dysfunctional. The foot-in-the-door technique described earlier involves gaining compliance with an initial easy task and then, at a later time, requesting assistance with a larger or more complex version of the same task. In addition to attempting to maximize the initial return of mail questionnaires, most mail surveys also utilize follow-up contacts to increase the overall response rate. Follow-up contacts generally consist of a postcard or letter requesting the respondent to complete and return the questionnaire, and/or the entire questionnaire may be resent.

STRATEGIES FOR DEALING WITH NONRESPONSE
After each successive wave of contacts with a particular group of potential respondents, the researcher should run a sensitivity analysis. That is, one should ascertain how different the nonrespondents would have to be from the respondents in order to alter the decision one would make based on the data supplied by the current respondents. If the most extreme foreseeable answers by the nonrespondents would not alter the decision, no further efforts are required.

Subjective Estimates
When it is no longer practical to increase the response rate, the researcher can estimate subjectively the nature and effect of the nonresponse. That is, the researcher, based on experience and the nature of the survey, makes a subjective evaluation of the probable effects of the nonresponse error.
For example, the fact that those most interested in a product are most likely to return a mail questionnaire gives the researcher some confidence that nonrespondents are less interested in the topic than respondents.

Imputation Estimates
Imputation estimates involve imputing attributes to the nonrespondents based on the characteristics of the respondents. These techniques can be used for missing respondents or for item nonresponse. For example, a respondent who fails to report income may be "assigned" the income of a respondent with similar demographic characteristics. A number of other imputation approaches to item nonresponse exist. A common approach to differential nonresponse by groups defined by age, race, social class, and so forth is to weight the responses of those who reply in a manner that offsets the nonresponse rate. This, of course, assumes that the nonrespondents in each group are similar to that group's respondents and that the percentage of the population belonging to each group is known.

Trend Analysis
Trend analysis is similar to the imputation technique, except that the attributes of the nonrespondents are assumed to be similar to a projection of the trend shown between early and late
respondents. However, trend analysis should be used only when there are logical reasons to believe the trend will extend to the nonrespondents.

Measurement Using Subsamples
Subsampling of nonrespondents, particularly when a mail survey was the original technique, has been found effective in reducing nonresponse error. Concentrated attention on a subsample of nonrespondents, generally using telephone or personal interviews, can often yield a high response rate within the subsample. Using standard statistical procedures, the values obtained in the subsample can be projected to the entire group of nonrespondents, and the overall survey results adjusted to take them into account. The primary drawback to this technique is the cost involved.

Sampling
Census versus Sample
• A census, in simple terms, means measuring each element in the group or population of interest.
• A sample is a part of a population, or a subset from a set of units, obtained by some process or other, usually by deliberate selection, with the object of investigating the properties of the parent population or set.

Surveys of industrial consumers or of distributors of consumer products are frequently in the form of a census. However, there are certain reasons that make a census impractical or even impossible. The reasons are as follows:
1. Cost: Cost is an obvious constraint on the determination of whether a census should be taken. If information is desired on grocery purchase and use behaviour (frequencies and amounts of purchase of each product category, average amount kept at home, and the like) and the population of interest is all households in a country, the cost will preclude a census being taken. Thus a sample is the only logical way of obtaining new data from a population of this size.
2. Time: The kind of cost we have just considered is an outlay cost. The time involved in obtaining information from either a census or a sample involves the possibility of also incurring an opportunity cost.
That is, delaying the decision until information is obtained may result in a smaller gain or a larger loss than would have been the case had the same decision been made earlier. The opportunity to make more (or save more, as the case may be) is, therefore, foregone.

3. Accuracy: A study using a census, by definition, contains no sampling error. A study using a sample may involve sampling error in addition to other types of error. Other things being equal, a census will provide more accurate data than a sample. However, it has been argued that a more accurate estimate of the population of a country could be made from a sample than from a census. Taking a census of a population on a "mail-out, mail-back" basis requires that the names and addresses of almost all households be obtained, census questionnaires mailed, and interviews conducted of those not responding. The questionnaires are sent to a population of which only about half have completed high school. The potential for errors in a returned questionnaire is therefore high.
4. Destructive nature of the measurement: Measurements are sometimes destructive in nature. When they are, it is apparent that taking a census would usually defeat the purpose of the measurement. If one were producing firecrackers, electrical fuses, or grass seed, performing a functional use test on all products for quality-control purposes could not be considered from an economic standpoint. A sample is then the only practical choice. On the other hand, if light bulbs, bicycles, or electrical appliances are to be tested, a 100 percent sample (census) may be entirely reasonable.

Advantages of Sampling
1. Sampling is cheaper than a census survey. It is obviously more economical, for instance, to cover a sample of households than all households in a territory, although the cost per unit of study may be higher in a sample survey than in a census.
2. Since the magnitude of operations involved in a sample survey is small, both the execution of the fieldwork and the analysis of the results can be carried out speedily.
3. Sampling results in greater economy of effort, as a relatively small staff is required to carry out the survey and to tabulate and process the survey data.
4.
A sample survey enables the researcher to collect more detailed information than would otherwise be possible in a census survey. Also, information of a more specialised type can be collected, which would not be possible in a census survey on account of the availability of only a small number of specialists.


5. Since the scale of operations involved in a sample survey is small, the quality of interviewing, supervision, and other related activities can be better than in a census survey.

Limitations of Sampling
1. When information is needed on every unit in the population, such as individuals, dwelling units, or business establishments, a sample survey cannot be of much help, for it fails to provide information on an individual count.
2. Sampling gives rise to certain errors. If these errors are too large, the results of the sample survey will be of extremely limited use.
3. While in a census survey it may be easy to check the omission of certain units in view of the complete coverage, this is not so in the case of a sample survey.

The Sampling Process
Step 1. Define the population: The population is defined in terms of (a) elements, (b) units, (c) extent, and (d) time.
Step 2. Specify the sampling frame: The means of representing the elements of the population – for example, a telephone book, map, or city directory – are described.
Step 3. Specify the sampling unit: The unit for sampling – for example, city block, company, or household – is selected. The sampling unit may contain one or several population elements.
Step 4. Specify the sampling method: The method by which the sampling units are to be selected is described.
Step 5. Determine the sample size: The number of elements of the population to be sampled is chosen.
Step 6. Specify the sampling plan: The operational procedures for selection of the sampling units are specified.
Step 7. Select the sample: The office and fieldwork necessary for the selection of the sample are carried out.

Step 1: Define the Population
The population is the aggregate of all elements, defined prior to the selection of the sample. A population must be defined in terms of elements, sampling units, extent, and time.

Eliminating any one of these specifications leaves an incomplete definition of the population that is to be sampled.

Step 2: Specify the Sampling Frame
If a probability sample is to be taken, a sampling frame is required. A sampling frame is a means of representing the elements of the population. A sampling frame may be a telephone book, a city directory, an employee roster, a listing of all students attending a university, or a list of possible phone numbers. Maps also serve frequently as sampling frames. A sample of areas within a city may be taken, and a sample of households then taken within each area. City blocks are sometimes sampled and all households on each sampled block included. A sample of street intersections may be taken and interviewers given instructions as to how to take "random walks" from the intersection and select the households to be interviewed. A perfect sampling frame is one in which every element of the population is represented once and only once. One does not need a sampling frame to take a non-probability sample.

Step 3: Specify the Sampling Unit
The sampling unit is the basic unit containing the elements of the population to be sampled. It may be the element itself or a unit in which the element is contained. For example, if one wanted a sample of males over 13 years of age, it might be possible to sample them directly. In this case, the sampling unit would be identical with the element. However, it might be easier to select households as the sampling unit and interview all males over 13 years of age in each household. Here the sampling unit and the population element are not the same.

Step 4: Specify the Sampling Method
This indicates how the sample units are selected. One of the most important decisions in this regard is to determine which of the two – a probability or a non-probability sample – is to be
chosen. Probability samples are also known as random samples, and non-probability samples as non-random samples. There are various types of sample designs, which can be covered under two broad groups: random, or probability, samples and non-random, or non-probability, samples.

Step 5: Determine the Sample Size
Traditional sampling theory generally ignores the concept of the cost versus the value of the information to be provided by samples of various sizes. The problem of determining the sample size is dealt with in depth later on.

Step 6: Specify the Sampling Plan
The sampling plan involves the specification of how each of the decisions made thus far is to be implemented. It may have been decided that the household will be the element and the block the sampling unit. How is a household defined operationally? How is the interviewer to be instructed to distinguish between families and households in instances where two families and some distant relatives of one of them are sharing the same apartment? How is the interviewer to be instructed to take a systematic sample of households on the block? What should the interviewer do when a selected housing unit is vacant? What is the callback procedure for households at which no one is at home? What age of respondent speaking for the household is acceptable?

Step 7: Select the Sample
The final step in the sampling process is the actual selection of the sample elements. This requires a substantial amount of office and fieldwork, particularly if personal interviews are involved.

Characteristics of a Good Sample Design
A good sample design requires the judicious balancing of four broad criteria: goal orientation, measurability, practicality, and economy.
1. Goal orientation: This suggests that a sample design "should be oriented to the research objectives, tailored to the survey design, and fitted to the survey conditions". If this is
done, it should influence the choice of the population, the measurement, and also the procedure for choosing a sample.
2. Measurability: A sample design should enable the computation of valid estimates of its sampling variability. Normally, this variability is expressed in the form of standard errors in surveys. However, this is possible only in the case of probability sampling. In non-probability samples, such as a quota sample, it is not possible to know the degree of precision of the survey results.
3. Practicality: This implies that the sample design can be followed properly in the survey, as envisaged earlier. It is necessary that complete, correct, practical, and clear instructions be given to the interviewer so that no mistakes are made in the selection of sampling units and the final selection in the field is not different from the original sample design. Practicality also refers to the simplicity of the design, i.e. it should be capable of being understood and followed in the actual operation of the fieldwork.
4. Economy: Finally, economy implies that the objectives of the survey should be achieved with minimum cost and effort. Survey objectives are generally spelt out in terms of precision, i.e. the inverse of the variance of survey estimates. For a given degree of precision, the sample design should give the minimum cost. Alternatively, for a given per-unit cost, the sample design should achieve maximum precision (minimum variance).
It may be pointed out that these four criteria come into conflict with each other in most cases, and the researcher should carefully balance the conflicting criteria so that he is able to select a really good sample design.

Sampling Techniques
Sampling techniques may be broadly classified as non-probability and probability sampling techniques.
Non-probability sampling techniques:
1. They rely on the personal judgment of the researcher rather than on chance to select sample elements.
2. The researcher can arbitrarily or consciously decide which elements to include in the sample.

3. Non-probability sampling may yield good estimates of the population characteristics. However, it does not allow for objective evaluation of the precision of the sample results.
4. Since there is no way of determining the probability of selecting any particular element for inclusion in the sample, the estimates obtained are not statistically projectable to the population.

Probability sampling techniques:
1. Sampling units are selected by chance.
2. It is possible to pre-specify every potential sample of a given size that could be drawn from the population, as well as the probability of selecting each sample.
3. Every potential sample need not have the same probability of selection, but it is possible to specify the probability of selecting any particular sample of a given size.
4. This requires not only a precise definition of the target population, but also a general specification of the sampling frame.
5. Because sample elements are selected by chance, it is possible to determine the precision of the sample estimates of the characteristics of interest. Confidence intervals, which contain the true population value with a given level of certainty, can be calculated. This permits the researcher to make inferences or projections about the target population from which the sample was drawn.

Probability sampling techniques are classified based on:
− Element versus cluster sampling
− Equal unit probability versus unequal probabilities
− Unstratified versus stratified selection
− Random versus systematic selection
− Single-stage versus multistage techniques

The sampling techniques can be summarized as follows:
Non-probability sampling techniques: Convenience Sampling, Judgmental Sampling, Quota Sampling.
Probability sampling techniques: Simple Random Sampling, Systematic Sampling, Stratified Sampling, Cluster Sampling, Multistage Sampling.

Non-probability techniques:

Convenience Sampling

Definition
A non-probability sampling technique that attempts to obtain a sample of convenient elements. The selection of sampling units is left primarily to the interviewer.

Explanation
1. It is a form of non-probability sampling.
2. It is mainly used for dipstick studies, i.e. to get basic information for elementary decisions.
3. Convenience samples are often used in exploratory situations when there is a need to get only an approximation of the actual value quickly and inexpensively.
4. Commonly used convenience samples are associates and "the man on the street". Such samples are often used in the pre-test phase of a study, such as pre-testing of a questionnaire.

Examples:
• Use of students, church groups, and members of social organizations
• Mall-intercept interviews without qualifying the respondents
• Department stores using charge account lists
• Tear-out questionnaires included in magazines
• People-on-the-street interviews

Advantages
• Convenience sampling is the least expensive and least time consuming of all sampling techniques.
• The sampling units are accessible, easy to measure and co-operative.
• The technique is useful in exploratory research for generating ideas, insights or hypotheses.

Disadvantages
• Convenience samples contain unknown amounts of both variable and systematic selection errors. These errors can be very large when compared to the variable error in a simple random sample of the same size.
• Convenience samples are not representative of any definable population, so they are not recommended for descriptive or causal research.

Judgmental Sampling

Definition
A form of convenience sampling in which the population elements are purposively selected based on the judgment of the researcher.

Explanation
A judgment sample is one in which there is an attempt to draw a representative sample of the population using judgmental selection procedures. Judgment samples are common in industrial market research.

Example
A sample of addresses taken by the municipal agency to which questionnaires on bicycle riding habits were sent. The judgment sample was taken after researchers looked at traffic maps of the city, considered the tax assessment on houses and apartment buildings (per unit), and kept the location of schools and parks in mind.

Advantages
• Judgmental sampling is low cost, convenient and quick.
• It is useful if broad population inferences are not required.

Disadvantages
• Judgmental sampling is subjective, and its value depends entirely on the researcher's judgment, expertise and creativity.
• It does not allow direct generalization to a specific population, usually because the population is not defined explicitly.

Quota Sampling

Definition
A non-probability sampling technique that is a two-stage restricted judgmental sampling. The first stage consists of developing control categories, or quotas, of population elements. In the second stage, sample elements are selected based on convenience or judgment.

Explanation
• It is a form of non-probability sampling.
• In quota sampling, the samples are selected in such a way that the parameters of interest represented in the sample are in the same proportion as they are in the universe/population.
• Quota sampling is widely used in consumer panels.
• The following aspects must be kept in mind while choosing the control variables:
− The variables must be available and should be recent.
− They should be easy for the interviewer to classify.
− They should be closely related to the variables being measured in the study.
− The number of variables must be kept to a reasonable number so as to avoid confusion while analyzing the data.
• The cost of the sample per unit is directly proportional to the number of control variables.
• In order to have a check on the quality of the samples taken, and so reduce selection errors, quota samples are "validated" after they are taken. The process of validation involves a comparison of the sample and the population with respect to characteristics not used as control variables. For example, in a quota sample taken from a consumer panel, income, education, and age group might be used as control variables. The comparison of this panel and the population might then be made with respect to such characteristics

as average number of children, occupation of the chief wage earner and home ownership. If the panel differed significantly from the population with respect to any of these characteristics, it would be an indication of potential bias in the selection procedures. It should be noted that similarity does not necessarily mean the absence of bias.

Example
Suppose one wants to select a quota sample of persons for a test of flavored tea and wants to control it by ethnic background, income bracket, age group and geographical area (control variables are the parameters on which the researcher classifies the universe). The sample taken would then have the same proportion of people in each ethnic background, income bracket, age group and geographical area as the population.

Disadvantages
• Scope for high variances.
• Scope for sizable selection errors. Selection errors arise from the way interviewers select the persons to fill the quota, from incorrect information on the proportions of the population in each of the control variables, from biases in the relationship of the control variables to the variables being measured, and from other sources.

Probability Techniques:
Probability sampling techniques vary in terms of sampling efficiency. Sampling efficiency is a concept that reflects a trade-off between sampling cost and precision. Precision refers to the level of uncertainty about the characteristic being measured. The greater the precision, the greater the cost, and most studies require a trade-off.

Simple Random Sampling

Definition
A probability sampling technique in which each element in the population has a known and equal probability of selection is known as simple random sampling (SRS). Every element is selected independently of every other element and the sample is drawn by a random procedure from a sampling frame.
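As an illustrative sketch of this procedure in Python (the 500-household frame and the sample size of 25 are invented for illustration), `random.sample` draws without replacement, giving every element an equal chance of selection:

```python
import random

# Hypothetical sampling frame: each element gets a unique identifier (illustrative only)
frame = [f"household_{i:04d}" for i in range(1, 501)]  # N = 500

random.seed(42)  # fixed seed so the sketch is reproducible

# Simple random sample: each element has an equal probability of selection,
# and elements are drawn without replacement.
srs = random.sample(frame, k=25)  # n = 25

print(len(srs))       # 25
print(len(set(srs)))  # 25 -- no element selected twice
```

In practice the frame would come from a real list (e.g. an electoral roll), and the random numbers could equally come from a random number table, as the notes describe.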


Explanation
In simple random sampling, each element in the population has a known and equal probability of selection. Furthermore, each possible sample of a given size (n) has a known and equal probability of being the sample actually selected. This implies that every element is selected independently of every other element. The sample is drawn by a random procedure from a sampling frame. This method is equivalent to a lottery system in which names are placed in a container, the container is shaken, and the names of the winners are drawn out in an unbiased manner.

To draw a simple random sample, the researcher first compiles a sampling frame in which each element is assigned a unique identification number. Then random numbers are generated to determine which elements to include in the sample. The random numbers may be generated with a computer routine or a table.

Advantages
• It is easy to understand.
• The sample results may be projected to the target population.

Disadvantages
• It is often difficult to construct a sampling frame that will permit a simple random sample to be drawn.
• SRS can result in samples that are very large or spread over large geographic areas, thus increasing the time and cost of data collection.
• SRS often results in lower precision, with larger standard errors, than other probability sampling techniques.
• SRS may or may not result in a representative sample. Although samples drawn will represent the population well on average, a given simple random sample may grossly misrepresent the target population. This is more likely if the size of the sample is small.

Systematic Sampling

Definition
A probability sampling technique in which the sample is chosen by selecting a random starting point and then picking every ith element in succession from the sampling frame.


Explanation
In systematic sampling, the sample is chosen by selecting a random starting point and then picking every ith element in succession from the sampling frame. The sampling interval, i, is determined by dividing the population size N by the sample size n and rounding to the nearest integer.

Example
Suppose there are 100,000 elements in the population and a sample of 1,000 is desired. In this case the sampling interval, i, is 100. A random number between 1 and 100 is selected. If, say, number 23 is selected, the sample will then consist of elements 23, 123, 223, 323, 423, 523, and so on.

Systematic sampling is similar to SRS in that each population element has a known and equal probability of selection. However, it is different from SRS in that only the permissible samples of size n that can be drawn have a known and equal probability of selection; the remaining samples of size n have a zero probability of being selected.

For systematic sampling, the researcher assumes that the population elements are ordered in some respect. In some cases the ordering (e.g. an alphabetic listing in a telephone book) is unrelated to the characteristic of interest. In other instances, the ordering is directly related to the characteristic under investigation (credit card customers may be listed in order of outstanding balances). If the population elements are arranged in a manner unrelated to the characteristic of interest, systematic sampling will yield results quite similar to SRS. On the other hand, when the ordering of the elements is related to the characteristic of interest, systematic sampling increases the representativeness of the sample.

Advantages
• Systematic sampling is less costly and easier than SRS, because random selection is done only once.
• The random numbers do not have to be matched with individual elements as in SRS. Since some lists contain millions of elements, considerable time can be saved, which in turn reduces cost.
• If information related to the characteristic of interest is available for the population, systematic sampling can be used to obtain a more representative and reliable sample than SRS.


• Systematic sampling can even be used without knowledge of the composition (elements) of the sampling frame.
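Using the chapter's own numbers (N = 100,000, n = 1,000, so i = 100), the interval-based selection can be sketched in Python:

```python
import random

N, n = 100_000, 1_000
interval = round(N / n)  # sampling interval i = 100

random.seed(7)
start = random.randint(1, interval)  # random starting point between 1 and i

# Pick every i-th element in succession from the frame (elements numbered 1..N)
sample_ids = list(range(start, N + 1, interval))

print(interval)         # 100
print(len(sample_ids))  # 1000
```

Note that random selection happens only once (the starting point); every subsequent element is determined by the interval, which is why this method is cheaper than SRS.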

Stratified Random Sampling

Definition
A probability sampling technique that uses a two-step process to partition the population into subpopulations, or strata, is known as stratified random sampling. Elements are selected from each stratum by a random procedure.

Explanation
Stratified random sampling emerges from the word stratum. A stratum in a population is a segment of that population having one or more characteristics, e.g. people in the age stratum 35-40, or people in a given income stratum (say, up to Rs. 20,000 p.m.). Stratified sampling involves treating each stratum as a separate subpopulation for sampling purposes, and from each stratum sampling units are drawn randomly. The reasons for conducting stratified random sampling are:
• To reduce sampling error by ensuring representation from the population.
• The required sample size for the same level of sampling error will usually be smaller.
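One common way (though not the only one) to set the per-stratum sample sizes is proportional allocation, where each stratum's share of the sample equals its share of the population: nh = n x Nh / N. A minimal sketch, with invented stratum counts:

```python
# Hypothetical population counts per age stratum (illustrative only)
strata = {"18-34": 4_000, "35-49": 3_500, "50+": 2_500}  # N = 10,000
N = sum(strata.values())
n = 200  # total sample size

# Proportional allocation: n_h = n * N_h / N for each stratum h
allocation = {h: round(n * N_h / N) for h, N_h in strata.items()}

print(allocation)  # {'18-34': 80, '35-49': 70, '50+': 50}
```

A simple random sample of the allocated size would then be drawn independently within each stratum.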

As compared to other methods of sampling, stratified random sampling forces representativeness to a certain degree. The greater the similarity within a stratum, the smaller the sample size required to provide information about that stratum. Thus, the more homogeneous each stratum is with respect to the variable of interest, the smaller the sample required.

Example
If the head-of-household age strata (18-34, 35-49, 50+) are of interest in a study on household spending on furnishings, then each of these groups would be treated separately for sampling purposes. That is, the total population would be divided into age groups and a separate sample drawn from each group.

Cluster Sampling


Definition
The target population is divided into mutually exclusive and collectively exhaustive subpopulations called clusters. Then a random sample of clusters is selected based on a probability sampling technique such as simple random sampling. For each selected cluster, either all the elements are included in the sample or a sample of elements is drawn probabilistically.

Explanation
• If all the elements in each selected cluster are included in the sample, the procedure is called one-stage cluster sampling.
• If a sample of elements is drawn probabilistically from each selected cluster, the procedure is called two-stage cluster sampling.
• The key distinction between cluster sampling and stratified sampling is that in cluster sampling only a sample of subpopulations (clusters) is chosen, whereas in stratified sampling all the subpopulations are selected.
• The objective of cluster sampling is to increase sampling efficiency by decreasing costs.

Example
If a study requires studying the households in a city, then in cluster sampling the whole city is divided into blocks, a sample of blocks is selected, and the households in the selected blocks are taken, so as to get a representative picture of the whole universe.

Advantages
• Low population heterogeneity (high homogeneity) within clusters.
• Low expected cost of errors.
• The main advantage of cluster sampling is the low cost per sampling unit as compared to other sampling methods.

Disadvantages
• High potential for sampling error as compared to other methods.
• For example, the lower cost per unit and the higher sampling-error potential of a cluster sample can be illustrated by considering a sample of 100 households to be selected for personal interviews in a particular city. In this method the city would be divided into blocks, and 10 households from each of 10 selected blocks would be selected and interviewed. Thus the cost

of personal interviews per unit will be low because of the close proximity of the units in the cluster. This sample, however, may not be an exact representation of the entire city; thus there is a possibility of sampling error.

Single-Stage vs Multistage Sampling

Explanation
The number of stages involved in a sampling method is partially a function of the number of sampling frames available. If a perfect frame were always available, complete with all the associated information one might want for purposes of clustering and/or stratifying, there would be far fewer multistage samples taken than there are now. In practice, it is not uncommon to have a first-stage area sample of, say, census tracts, followed by a second-stage sample of blocks, and completed with a systematic sample of households within each block. These stages would not be necessary if a complete listing of households were available.

Example
AC Nielsen's multistage sampling procedure to select its PeopleMeter panel: The first stage involves the selection of counties using a stratified random sample based on population. Next, within the selected counties, there is a random selection of blocks or enumeration districts. These blocks then go through a process called prelisting: a trained field representative visits the selected blocks and creates a list of all the individual housing units. This list is then returned to the home office, where it is checked for internal consistency and external agreement with other data. Finally, individual household units are randomly selected from each block.

STRENGTHS AND WEAKNESSES OF BASIC SAMPLING TECHNIQUES

Non-probability sampling
Convenience sampling
  Strengths: least expensive, least time consuming, most convenient.
  Weaknesses: selection bias; sample not representative; not recommended for descriptive or causal research.
Judgmental sampling
  Strengths: low cost, convenient, not time consuming.
  Weaknesses: does not allow generalization; subjective.
Quota sampling
  Strengths: sample can be controlled for certain characteristics.
  Weaknesses: selection bias; no assurance of representativeness.
Snowball sampling
  Strengths: can estimate rare characteristics.
  Weaknesses: time consuming.

Probability sampling
Simple random sampling (SRS)
  Strengths: easily understood; results projectable.
  Weaknesses: difficult to construct sampling frame; expensive; lower precision; no assurance of representativeness.
Systematic sampling
  Strengths: can increase representativeness; easier to implement than SRS; sampling frame not necessary.
  Weaknesses: can decrease representativeness.
Stratified sampling
  Strengths: includes all important subpopulations; precision.
  Weaknesses: difficult to select relevant stratification variables; not feasible to stratify on many variables; expensive.
Cluster sampling
  Strengths: easy to implement; cost effective.
  Weaknesses: imprecise; difficult to compute and interpret results.

Choosing Non-probability versus Probability Sampling
The choice between non-probability and probability samples should be based on considerations such as the nature of the research, the relative magnitude of non-sampling versus sampling errors, variability in the population, and statistical and operational considerations. The conditions favoring each technique are:
• Nature of research: exploratory favors non-probability sampling; conclusive favors probability sampling.
• Relative magnitude of sampling and non-sampling errors: non-probability sampling is favored when non-sampling errors are larger; probability sampling when sampling errors are larger.
• Variability in the population: homogeneous (low) favors non-probability sampling; heterogeneous (high) favors probability sampling.
• Statistical considerations: unfavorable for non-probability sampling; favorable for probability sampling.
• Operational considerations: favorable for non-probability sampling; unfavorable for probability sampling.

In exploratory research the findings are treated as preliminary, and the use of probability sampling may not be warranted. On the other hand, in conclusive research in which the researcher wishes to use the results to estimate overall market shares or the size of the total market, probability sampling is favored. Probability samples allow statistical projection of the results to a target population. For some research problems, highly accurate estimates of population characteristics are required. In these situations, the elimination of selection bias and the ability to calculate sampling error make probability sampling desirable. However, probability sampling will not always result in more accurate results. If non-sampling errors are likely to be an important factor, then non-probability sampling may be preferable, as the use of judgment may allow greater control over the sampling process. Another consideration is the homogeneity of the population with respect to the variables of interest. A more heterogeneous population would favor probability sampling, because it would be important to secure a representative sample. Probability sampling is preferable from a statistical viewpoint, as it is the basis of most common statistical techniques. However, probability sampling is sophisticated and requires statistically trained researchers. It generally costs more and takes longer than does non-probability sampling. In many marketing


research projects, it is difficult to justify the additional time and expense. Therefore, in practice, the objectives of the study dictate which sampling method will be used.

Methods of Determining Sample Size
There are several methods of determining sample size in market research:

1. Unaided Judgement: When no specific method is used to determine sample size, it is called unaided judgement. This approach gives no explicit consideration either to the likely precision of the sample results or to the cost of obtaining them (characteristics in which the client should have an interest). It is an approach to be avoided.

2. All-You-Can-Afford: In this method, a budget for the project is set by some (generally unspecified) process and, after the estimated fixed costs of designing the project, preparing a questionnaire (if required), analysing the data, and preparing the report are deducted, the remainder of the budget is allocated to sampling. Dividing this remaining amount by the estimated cost per sampling unit gives the sample size. This method concentrates on the cost of the information and is not concerned with its value. Although cost always has to be considered in any systematic approach to sample size determination, one also needs to consider how much the information to be provided by the sample will be worth. This approach produces sample sizes that are sometimes larger than required and sometimes smaller than optimal.

3. Required Size Per Cell: This method of determining sample size can be used with simple random, stratified random, purposive and quota samples. For example, in a study of attitudes with respect to fast-food establishments in a local marketing area, it was decided that information was desired for two occupational groups and for each of four age groups. This resulted in 2 x 4 = 8 sample cells. A sample size of 30 was needed per cell for the types of statistical analyses that were to be conducted.
The overall sample size was therefore 8 x 30 = 240.

4. Use of a Traditional Statistical Model: The formula for a traditional statistical model depends upon the type of sample to be taken, and it always incorporates three common variables:


• an estimate of the variance in the population from which the sample is to be drawn,
• the error from sampling that the researcher will allow, and
• the desired level of confidence that the actual sampling error will be within the allowable limits.

The statistical models for simple random sampling include estimation of means and estimation of proportions.

5. Use of a Bayesian Statistical Model: The Bayesian model involves finding the difference between the expected value of the information to be provided by the sample and the cost of taking the sample. This difference is known as the expected net gain from sampling (ENGS). The sample size with the largest positive ENGS is chosen. The Bayesian model is not as widely used as the traditional statistical models for determining sample size, even though it incorporates the cost of sampling and the traditional models do not. The reasons for the relatively infrequent use of the Bayesian model are its greater complexity and the perceived difficulty of making the estimates required for it as compared to the traditional models.

The Sampling Distribution
Sampling theory rests on the concept of a sampling distribution. Sampling distributions include:
• the sampling distribution of the mean
• the sampling distribution of the proportion

Sampling Distribution of the Mean
A sampling distribution of the mean is the relative frequency distribution of the means of all possible samples of size n taken from a population of size N. The definition specifies that all possible samples of size n from a population of size N should be taken, and the mean of each sample should be calculated and plotted in a relative frequency table. A sampling distribution of the mean for simple random samples that are large (30 or more) has
• a normal distribution

• a mean equal to the population mean (M)
• a standard deviation, called the standard error of the mean (σx̄), that is equal to the population standard deviation (σ) divided by the square root of the sample size:

σx̄ = σ / √n

The standard deviation is called the standard error of the mean to indicate that it applies to a distribution of sample means and not to a single sample or a population. A basic characteristic of a sampling distribution is that the area under it (between any two points) can be calculated so long as each point is defined by the number of standard errors it is away from the mean. The number of standard errors a point is away from the mean is referred to as the Z value for that point.

Sampling Distribution of the Proportion
A sampling distribution of the proportion is the relative frequency distribution of the proportions (p) of all possible samples of size n taken from a population of size N. A sampling distribution of a proportion for a simple random sample has
• a normal distribution
• a mean equal to the population proportion (P)
• a standard error (σp) equal to

σp = √( P(1 − P) / n )

The estimated standard error of the proportion (given a large sample size that is a small proportion of the population) is

sp = √( p(1 − p) / n )

where p represents the sample proportion.
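The two standard-error formulas can be checked numerically in Python; the population values below are invented for illustration:

```python
import math

# Standard error of the mean: sigma_xbar = sigma / sqrt(n)
sigma, n = 12.0, 144
se_mean = sigma / math.sqrt(n)
print(se_mean)  # 1.0

# Standard error of the proportion: sigma_p = sqrt(P * (1 - P) / n)
P = 0.5
se_prop = math.sqrt(P * (1 - P) / n)
print(round(se_prop, 4))  # 0.0417
```

Both shrink with the square root of the sample size: quadrupling n halves the standard error.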

Traditional Statistical Methods of Determining Sample Size

Determination of Sample Size in Problems Involving Means
Three kinds of specifications have to be made before the sample size necessary to estimate the population mean can be determined. These are:
1. Specification of the error (e) that can be allowed: how close must the estimate be (how accurate do we need to be)?
2. Specification of the confidence coefficient: what level of confidence is required that the actual sampling error does not exceed that specified (how sure do we want to be that we have achieved our desired accuracy)?
3. Estimate of the population standard deviation (σ): what is the standard deviation of the population (how "spread out" or diverse is the population)?
The three specifications are related in the following way:
number of standard errors implied by the confidence coefficient (Z) = allowable error / standard error
or in symbols,

Z = e / (σ / √n)

The only unknown variable is the sample size (n). A simpler formula for the size of simple random samples can be derived from the above equation:

n = Z²σ² / e²
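Plugging a set of invented specifications into the formula n = Z²σ²/e² gives, for example:

```python
import math

e = 2.0      # allowable error (illustrative value)
Z = 1.96     # standard errors implied by a 95% confidence coefficient
sigma = 10.0 # estimated population standard deviation (illustrative value)

# n = Z^2 * sigma^2 / e^2, rounded up to the next whole sampling unit
n = (Z ** 2) * (sigma ** 2) / (e ** 2)
print(math.ceil(n))  # 97
```

Note how the required sample size rises with the square of the confidence level's Z value and falls with the square of the allowable error.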

Determination of Sample Size in Problems Involving Proportions
The specifications that must be made to determine the sample size for an estimation problem involving a proportion are very similar to those for a mean. They are:
1. Specification of the error (e) that can be allowed: how close must the estimate be?

2. Specification of the confidence coefficient: what level of confidence is required that the actual sampling error does not exceed that specified?
3. Estimate of the population proportion (P) using prior information: what is the approximate or estimated population proportion?
These specifications, along with the sample size, collectively determine the sampling distribution for the problem. Because the sample size is the only remaining unknown, it can be calculated. The three specifications are related as follows:
number of standard errors implied by the confidence coefficient (Z) = allowable error / standard error
or in symbols,

Z = e / √( P(1 − P) / n )

The formula for determining the sample size n directly is

n = Z² P(1 − P) / e²
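Plugging invented specifications into the formula n = Z²P(1−P)/e² gives, for example:

```python
import math

e = 0.03  # allowable error: 3 percentage points (illustrative value)
Z = 1.96  # standard errors implied by a 95% confidence coefficient
P = 0.5   # prior estimate of the population proportion

# n = Z^2 * P * (1 - P) / e^2, rounded up to the next whole sampling unit
n = (Z ** 2) * P * (1 - P) / (e ** 2)
print(math.ceil(n))  # 1068
```

P = 0.5 maximizes P(1 − P), so it is the conservative choice when no prior estimate of the proportion is available.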

Determination of Sample Size in Problems Involving Hypothesis Testing
A hypothesis is a proposition which the researcher wants to verify. It may be mentioned that while a hypothesis is useful, it is not always necessary. Many a time, the researcher is interested in collecting and analysing data indicating the main characteristics without a hypothesis, except one that he may suggest incidentally during the course of his study. However, in problem-oriented research, it is necessary to formulate a hypothesis. In such research, hypotheses are generally concerned with the causes of a certain phenomenon or a relationship between two or more variables under investigation.


In order to determine the sample size in a hypothesis-testing problem involving a proportion, the following specifications must be made:
1. The hypotheses to be tested: A null and an alternate hypothesis are involved in each hypothesis test. The null hypothesis, designated Ho, is one that, if accepted, will result in no opinion being formed and/or action being taken that is different from those currently held or being used. The null hypothesis in the problem just described is
Ho: order rate = 3.5%
The alternate hypothesis, designated H1, is one that will lead to opinions being formed and/or actions being taken that are different from those currently held or being used. The alternate hypothesis here is
H1: order rate = 5.0%
Although the null hypothesis is always explicitly stated, this is sometimes not true of the alternate hypothesis. In those instances when it is not stated, it is understood to consist of all values of the proportion not covered by the null hypothesis. In this situation, if the alternate hypothesis were not explicitly stated, it would be understood to be
H1: order rate ≠ 3.5%
2. The level of sampling error permitted in the test of each hypothesis: Two types of error can be made in hypothesis-testing problems. A Type I error is made when the null hypothesis is true but the conclusion is reached that the alternate hypothesis should be accepted. A Type II error is made when the alternate hypothesis is true but the conclusion is reached that the null hypothesis should be accepted.
3. The test statistic to be used.

In order to determine the sample size in a hypothesis-testing problem involving means, the following specifications must be made:
1. the hypotheses to be tested,
2. the level of sampling error permitted in the test of each hypothesis,
3. the standard deviation of the population, and
4. the test statistic to be used.

MARKET RESEARCH NOTES
SCALES AND ATTITUDE MEASUREMENT

OBSERVATION....................................................................................................45 DEFINITION OF EXPERIMENT...........................................................................................49 SECONDARY DATA.................................................................................................................51 PROBLEMS ENCOUNTERED WITH SECONDARY DATA.........................................................................51 Sources of Secondary Data..................................................................................................52 INTERNAL SOURCES.....................................................................................................................52 EXTERNAL SOURCES....................................................................................................................52 DEPTH INTERVIEWS .....................................................................................................................54 COLLECTION OF SECONDARY DATA............................................................................54 TYPES OF PRIMARY DATA COLLECTION.....................................................................56 -- OBSERVATIONS AND SURVEYS..................................................................................56 1) OBSERVATION METHOD .............................................................................................56 ADVANTAGES.....................................................................................................................56 DISADVANTAGES..............................................................................................................56 CONCLUSION: THE RESEARCHER SHOULD USE THE ALREADY VIABLE DATA ONLY WHEN HE FINDS THEM RELIABLE, SUITABLE AND ADEQUATE. BUT HE SHOULD NOT BLINDLY DISCARD THE USE OF SUCH DATA IF
THEY ARE READILY AVAILABLE FROM AUTHENTIC SOURCES AND ARE ALSO SUITABLE AND ADEQUATE FOR IN THAT CASE IT WILL NOT BE ECONOMICAL TO SPEND TIME AND ENERGY IN FIELD SURVEYS FOR COLLECTING INFORMATION.

AT TIMES THERE MAY BE WEALTH OF USABLE INFORMATION IN THE ALREADY AVAILABLE

DATA WHICH MUST BE USED BY AN INTELLIGENT RESEARCHER BUT WITH DUE PRECAUTION.....................58

DECIDING WHAT TO ASK THERE ARE THREE POTENTIAL TYPES OF INFORMATION: .......................................................................................................................59
Page 113

Does the Question Need to be More Specific?....................................................................59 IS QUESTION BIASED OR LOADED?...............................................................................................59 WILL RESPONDENTS ANSWER TRUTHFULLY?.................................................................................60 2. QUESTION PHRASING: THE WAY QUESTIONS ARE PHRASED IS IMPORTANT AND THERE ARE SOME GENERAL RULES FOR CONSTRUCTING GOOD QUESTIONS IN A QUESTIONNAIRE . ....................................................60 USE SHORT AND SIMPLE SENTENCES ..............................................................................................60 ASK FOR ONLY ONE PIECE OF INFORMATION AT A TIME ....................................................................60 AVOID NEGATIVES IF POSSIBLE .....................................................................................................60 ASK PRECISE QUESTIONS ..............................................................................................................60 LEVEL OF DETAILS ......................................................................................................................60 MINIMIZE BIAS ...........................................................................................................................60 1) PERSONAL INTERVIEW................................................................................................62 ADVANTAGES.............................................................................................................................62 DISADVANTAGES.........................................................................................................................62 2) TELEPHONE INTERVIEWS .........................................................................................63 
ADVANTAGES.............................................................................................................................63 DISADVANTAGES ........................................................................................................................63 3) COMMERCIAL SURVEYS..............................................................................................63 PERIODIC SURVEYS......................................................................................................................63 PANEL SURVEYS..........................................................................................................................64 SHARED SURVEYS........................................................................................................................64 STORE AUDITS............................................................................................................................64 PRODUCT AUDITS........................................................................................................................65 RETAIL DISTRIBUTION AUDITS.......................................................................................................65 5) PANELS.............................................................................................................................65 RETAIL PANEL............................................................................................................................65 DIARY PANELS............................................................................................................................65 ELECTRONIC PANELS....................................................................................................................65 6) MAIL QUESTIONNAIRE....................................................................................................66 
ADVANTAGES.............................................................................................................................66 LIMITATIONS..............................................................................................................................66 THE NATURE OF SURVEY RESEARCH...........................................................................78 TYPES OF INTERVIEWS....................................................................................................78 CENSUS VERSUS SAMPLE....................................................................................................88 ADVANTAGES OF SAMPLING.............................................................................................89 LIMITATIONS OF SAMPLING..............................................................................................90 The Sampling Process..........................................................................................................90 STEP 2: SPECIFY THE SAMPLING FRAME......................................................................91 SAMPLING TECHNIQUES.....................................................................................................93 CONVENIENCE SAMPLING..................................................................................................95 EXPLANATION.........................................................................................................................95 ADVANTAGES ............................................................................................................................96 DISADVANTAGES ........................................................................................................................96
Page 114

Definition..............................................................................................................................97 EXPLANATION.........................................................................................................................97 Example................................................................................................................................98 Disadvantages......................................................................................................................98 Probability Techniques:.......................................................................................................98 Simple Random Sampling....................................................................................................98 Definition..............................................................................................................................98 Explanation...........................................................................................................................99 Advantages...........................................................................................................................99 DISADVANTAGES.........................................................................................................................99 Systematic sampling ............................................................................................................99 Definition..............................................................................................................................99 Explanation.........................................................................................................................100 Example..............................................................................................................................100 
Advantages.........................................................................................................................100 Definition............................................................................................................................101 Explanation.........................................................................................................................101 Example..............................................................................................................................101 DEFINITION..............................................................................................................................102 Explanation.........................................................................................................................102 Example..............................................................................................................................102 Advantages.........................................................................................................................102 Disadvantage......................................................................................................................102 Explanation ........................................................................................................................103 Example..............................................................................................................................103 AC NIELSEN’S MULTISTAGE SAMPLING PROCEDURE TO SELECT ITS PEOPLEMETER PANEL................103 ATTITUDE MEASUREMENT TECHNIQUES..................................................................117 NON-DISGUISED, NON-STRUCTURED TECHNIQUES................................................118 QUALITATIVE RESEARCH............................................................................................................118 Depth 
interviews.................................................................................................................118 Focus group discussions (F.G.Ds):...................................................................................119 DISGUISED, NON-STRUCTURED TECHNIQUES..........................................................121 PROJECTIVE TECHNIQUES...........................................................................................................121 Association Techniques......................................................................................................121 Completion Techniques......................................................................................................122 Construction Techniques ...................................................................................................122 Expressive Techniques.......................................................................................................123 Problems.............................................................................................................................123 Promises.............................................................................................................................123 WORD ASSOCIATION.................................................................................................................123 SENTENCE COMPLETION.............................................................................................................124 STORY COMPLETION..................................................................................................................124 PICTORIAL TECHNIQUES.............................................................................................................124 TAT.....................................................................................................................................124 Cartoon 
Tests.....................................................................................................................125
Page 115

RELIABILITY AND VALIDITY OF MEASUREMENTS.................................................125 RELIABILITY.............................................................................................................................126 Approaches to assessing reliability...................................................................................126 Test-Retest Reliability........................................................................................................126 Alternative-Form Reliability..............................................................................................127 VALIDITY.................................................................................................................................128 Basic Approaches to Validity Assessment.........................................................................128 Content Validity..................................................................................................................129 Criterion-Related Validity:................................................................................................129 Construct Validity...............................................................................................................130 NON- DISGUISED, STRUCTURED TECHNIQUES.........................................................130 NOMINAL DATA ......................................................................................................................131 ORDINAL SCALES......................................................................................................................131 INTERVAL SCALES.....................................................................................................................131 RATIO SCALES..........................................................................................................................132 SEMANTIC DIFFERENTIAL 
SCALE.................................................................................................132 THE CONSTANT SUM SCALE.......................................................................................................133 Advantages.........................................................................................................................133 Disadvantage......................................................................................................................134 THURSTONE SCALE....................................................................................................................134 Advantages.........................................................................................................................134 Disadvantages....................................................................................................................134 LIKERT SCALE..........................................................................................................................134 Advantages ........................................................................................................................135 Disadvantages....................................................................................................................135 COMPARISON OF THURSTONE AND LIKERT SCALE..........................................................................135 DISGUISED, STRUCTURED TECHNIQUES.....................................................................135 CONCEPT TESTING..............................................................................................................135

Page 116

ATTITUDE MEASUREMENT TECHNIQUES

Definition of attitude: Attitude has been defined by Gene F. Summers as a predisposition to respond to an idea or an object. In marketing, this refers to the consumer's predisposition towards a product or service. If it is favorable, the consumer is likely to purchase the product or service. Attitudes about products or services are composed of three elements:
 Beliefs, such as the product's strength or the economy of the product or service
 Emotional feelings, such as likes or dislikes
 Readiness to respond to the product or service, i.e. to buy it.
These three elements combine to form an image of the product or service in the mind of the consumer. When a car manufacturer, a movie producer or an insurance company refers to the company's image, it is referring to a general average of many individuals' attitudes towards the company. Attitude measurement is commonly referred to as scaling. The measurement techniques are divided thus:

Non-Disguised, Non-Structured techniques:
 Depth interviews
 Focus group discussions (F.G.Ds)

Disguised, Non-Structured techniques:
 Word association
 Sentence completion
 Story completion
 Pictorial techniques: Thematic Apperception Tests (TAT), Cartoon method

Non-Disguised, Structured techniques:
 Ordinal scale
 Interval scale
 Ratio scale
 Graphic rating scale
 Semantic differential
 Likert scale
 Thurstone scale
 Multiple item scale

Disguised, Structured techniques:
 Concept testing
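As an illustration of the point above that a company's "image" is a general average of many individuals' attitudes, the three attitude components can be averaged across respondents. This sketch is not from the original notes; the 1-to-5 ratings and the component names are hypothetical.

```python
# Hypothetical 1-5 ratings of the three attitude components
# (beliefs, feelings, readiness to buy) from three consumers.
ratings = [
    {"beliefs": 4, "feelings": 5, "readiness": 3},
    {"beliefs": 2, "feelings": 3, "readiness": 2},
    {"beliefs": 5, "feelings": 4, "readiness": 4},
]

def image_score(responses):
    """Average each attitude component across all respondents."""
    components = responses[0].keys()
    return {c: sum(r[c] for r in responses) / len(responses) for c in components}

print(image_score(ratings))
```

Each averaged component summarizes one facet of the aggregate image; a favorable average on "readiness" would suggest purchase intent across the sampled consumers.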

NON-DISGUISED, NON-STRUCTURED TECHNIQUES

The essence of these methods is that the purpose of the interview is not a secret and there is no fixed structure for conducting the interview.

Qualitative Research

The most common method of obtaining information about the behavior, attitudes and other characteristics of people is to ask them. However, it is not always possible, or desirable, to use direct questioning to obtain information. People may be unwilling or unable to answer questions that they consider an invasion of their privacy, that adversely affect their self-perception or prestige, that are embarrassing, that concern motivations they do not fully understand or cannot verbalize, or for other reasons. Therefore additional approaches to obtaining such information may be necessary. Depth interviews and projective techniques are frequently used by marketing researchers when direct questioning is impractical, more costly, or less accurate. These techniques are generally referred to as qualitative research.

Depth interviews

Individual depth interviews typically require 30 to 45 minutes. The interviewer does not have a specific set of pre-specified questions that must be asked in the order imposed by a questionnaire. Instead, there is freedom to create questions, to probe those responses that appear relevant, and generally to try to develop the best set of data in any way practical. However, the interviewer must follow one rule: one must not consciously try to affect the content of the answers given by the respondent. The respondent must feel free to reply to the various questions, probes, and other, subtler, ways of encouraging responses in the manner deemed most appropriate.
 The subject of interest is discussed in detail.
 There is no fixed pattern for eliciting information from the respondents.
 Depth interviews are generally conducted by highly trained interviewers, who must be thorough in probing the respondents.
 The interviewee is asked about the subject of his choice, coffee, for example, and an attempt is made to explore the respondent's attitudes in depth by probing extensively into any other areas which may come up.
 Interviewers have a general series of topics (perhaps coffee, or sleep) that they will introduce from time to time if the respondent does not bring them up.
 The tone of the interview is permissive and the respondent is allowed to talk as much as he likes.
 The interviewer must not influence the answers of the respondent.
 The interpretation of the answers is very subjective, and knowledge of human behavior is required to analyze the information received.

Individual depth interviews use three questioning techniques, namely:

1. Laddering involves having respondents identify attributes that distinguish brands by asking questions. Each distinguishing attribute is then probed to determine why it is important or meaningful, those reasons are probed in turn, and so forth. The purpose is to uncover the "network of meanings" associated with the product, brand, or concept.
2. Hidden-issue questioning focuses on individual respondents' feelings about sensitive issues. Analysis focuses on common underlying themes across respondents. These themes can then be used to guide advertising development.
3. Symbolic questioning requires respondents to describe the opposites of the product/activity of interest or of a specific attribute of the product/activity.
Individual depth interviews have been found to generate more and higher-quality ideas on a per-respondent basis than either focus groups or minigroups. They are particularly appropriate when:
1. Detailed probing of an individual's behavior, attitudes or needs is required;
2. The subject matter under discussion is likely to be of a highly confidential nature (e.g. personal investments);
3. The subject matter is of an emotionally charged or embarrassing nature;
4. Certain strong, socially acceptable norms exist (e.g. baby feeding) and the need to conform in a group discussion may influence responses;
5. A highly detailed understanding of complicated behavior or decision-making patterns (e.g. planning the family holiday) is required; or
6. The interviews are with professional people, or with people on the subject of their jobs (e.g. finance directors).

Focus group discussions (F.G.Ds):

The standard focus group interview in the United States involves 8 to 12 individuals and lasts about 2 hours. Normally each group is designed to reflect the characteristics of a particular market segment. The respondents are selected according to the relevant sampling plan and meet at a central location that generally has facilities for taping and/or filming the interviews. In Europe, focus groups tend to consist of 6 to 8 respondents, vary in length from 1.5 to 4 hours, and are often conducted in the home of the recruiter. Otherwise the interviews are similar. The discussion itself is "led" by a moderator. The moderator attempts to progress through three stages during the interview: (1) establish rapport with the group, structure the rules of group interaction, and set objectives; (2) provoke intense discussion in the relevant areas; and (3) summarize the group's responses to determine the extent of agreement. In general, either the moderator or a second person prepares a summary of each session after analyzing the session's transcript.

Focus group interviews can be applied to:
1. Basic-need studies for product idea creation,
2. New product idea or concept exploration,
3. Product positioning studies,

4. Advertising and communications research,
5. Background studies on consumers' frames of reference,
6. Establishment of consumer vocabulary as a preliminary step in questionnaire development, and
7. Determination of attitudes and behavior.

Advantages
1. Each individual is able to expand and refine his or her opinions in interaction with the other members. This process provides more detailed and accurate information than could be derived from each member separately.
2. A group interview situation is generally more exciting and offers more stimulation to the participants than the standard depth interview.
3. The security of being in a crowd encourages some members to speak out when they otherwise would not.
4. As the questions raised by the moderator are addressed to the entire group rather than to an individual, the answers contain a degree of spontaneity that is not produced by other techniques.
5. Focus groups can be used successfully with children over five. They are also very useful with adults in developing countries where literacy rates are low and survey research is difficult.
6. A final major advantage of focus groups is that executives can often observe the interview (from behind mirrors) or watch films of the interview.

Disadvantages
1. Since focus group interviews last 1.5 to 3 hours and take place at a central location, securing cooperation from a random sample is difficult.
2. Those who attend group interviews and actively participate in them are likely to be different in many respects from those who do not.
3. There is a chance that participants may go along with the popular opinion instead of expressing their own views, which may be contrary to the popular opinion.
4. The presence of a one-way mirror and/or observers has been found to distort participants' responses.
5. The moderator can introduce serious biases in the interview by shifting topics too rapidly, verbally or nonverbally encouraging certain answers, failing to cover specific areas, and so forth.
6. Focus groups are expensive on a per-respondent basis.

Minigroups

Minigroups consist of a moderator and 4 to 5 respondents, rather than the 8 to 12 used in most focus groups. They are used when the issue being investigated requires more extensive probing than is possible in a larger group. Minigroups do not allow the collection of confidential or highly sensitive data as might be possible in an individual depth interview. However, they do allow the researcher to obtain substantially more depth of response on the topics that are covered. Further, the intimacy of the small group often allows discussion of quite sensitive issues.


The advantages and disadvantages of minigroups are similar to those of standard focus groups, but on a smaller scale.
 In principle, these interviews are the same as the previous ones, except that they are conducted in groups rather than with individuals.
 This method is therefore less expensive and less time-consuming than depth interviews.
 This method is advantageous because it gives excellent leads to consumer attitudes that no other method can give.
 Another advantage of this method is that each respondent receives stimulation for responding from his group members, so the interviewer need not prompt the interviewee to answer.
 The disadvantage here is that one or two members could dominate the group and others might not get a chance to answer. This would again make it an individual effort.

DISGUISED, NON-STRUCTURED TECHNIQUES

The essence of these methods is that the interviewee either does not know that his attitude is being studied, or does not know for which company the survey is being done, or sometimes does not know both. They involve using various vague stimuli to which the respondent is asked to respond. In doing so, it is believed, the respondent reveals several elements of his or her attitude that would not have been revealed in the face of direct questions. These tests are not difficult to administer because they are like games played with the respondents. Generally, respondents seem to enjoy the exercise.

Projective Techniques

Projective techniques are based on the theory that the description of vague objects requires interpretation, and this interpretation can only be based on the individual's own background, attitudes, and values. The more vague or ambiguous the object to be described, the more one must reveal of oneself in order to complete the description. The following general categories of projective techniques are described: association, completion, construction and expression.

Association Techniques

Association techniques require the subject to respond to the presentation of a stimulus with the first things that come to mind. The word association technique requires the respondent to give the first word or thought that comes to mind after the researcher presents a word or phrase. In free association only the first word or thought is required. In successive word association, the respondent is asked to give a series of words or thoughts that occur after hearing a given word. The respondent is generally read a number of relatively neutral terms to establish the technique. Then the words of interest to the researcher are presented, each separated by several neutral terms. The order of presentation of the key words is randomized to prevent any position or order bias from affecting the results. The most common approach to analyzing the resulting data is to


analyze the frequency with which a particular word, or category of words, is given in response to the word of interest to the researcher. Word association techniques are used in testing potential brand names and occasionally for measuring attitudes about particular products, product attributes, brands, packages or advertisements.

Completion Techniques
This technique requires the respondent to complete an incomplete stimulus. Two types of completion are of interest to marketing researchers: sentence completion and story completion. Sentence completion, as the name implies, requires the respondent to complete a sentence. In most sentence completion tests the respondents are asked to complete the sentence with a phrase. Generally they are told to use the first thought that comes to mind or "anything that makes sense". Because the individual is not required to associate himself or herself directly with the answer, conscious or subconscious defenses are more likely to be relaxed, allowing a more revealing answer. Story completion is an expanded version of sentence completion. As the name suggests, part of a story is told and the respondent is asked to complete it.

Construction Techniques
This technique requires the respondent to produce or construct something, generally a story, dialogue, or description. Construction techniques are similar to completion techniques except that less initial structure is provided. Cartoon techniques present cartoon-type drawings of one or more people in a particular situation. One or more of the individuals are shown with a sentence in bubble form above their heads, and one of the others is shown with a blank bubble that the respondent is to "fill in". Instead of having the bubble show replies or comments, it can be drawn to indicate the unspoken thoughts of one or more of the characters. This device allows the respondent to avoid any restraints that might be felt against having even a cartoon character speak, as opposed to think, certain thoughts.
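Returning to word association: the frequency tally described above can be sketched in a few lines of Python. The stimulus word and responses below are hypothetical, not from the text.

```python
# Tally word-association responses for one stimulus word ("oatmeal")
# and report the most common first responses. Data are hypothetical.
from collections import Counter

responses = ["athletes", "breakfast", "athletes", "healthy",
             "athletes", "bland", "healthy", "breakfast"]

freq = Counter(responses)
for word, count in freq.most_common():
    print(word, count)
```

In practice each response would also be coded along lines such as favorable/unfavorable before tallying.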
Third-person techniques allow the respondent to project attitudes onto some vague third person. This third person is generally "an average woman", "your neighbors", "the guys where you work", "most doctors" or the like. Thus, instead of asking respondents why they did something or what they think about something, the researcher asks what friends, neighbors or the average person thinks about the issue. Picture response, another useful construction technique, involves using pictures to elicit stories. These pictures are usually relatively vague, so that the respondent must use his or her imagination to describe what is occurring.


Fantasy scenario requires the respondent to make up a fantasy about the product or brand. Personification asks the respondent to create a personality for the product or brand.

Expressive Techniques
Role-playing is the only expressive technique utilized to any extent by marketing researchers. In role playing, the consumer is asked to assume the role or behavior of an object or another person, such as a sales representative for a particular department store. The role-playing customer can then be asked to try to sell a given product to a number of different "consumers" who raise varying objections. The means by which the role player attempts to overcome these objections can reveal a great deal about his or her attitudes. Another version of the technique involves studying the role-player's attitudes on what type of people "should" shop at the store in question.

Problems
Because projective techniques generally require personal interviews with highly trained interviewers and interpreters to evaluate the responses, they tend to be very expensive. Small sample sizes can increase the probability of substantial sampling error, and the reliance on small samples has often been accompanied by non-probability selection procedures. Some projective techniques require respondents to engage in behavior that may well be strange to them; this is particularly true of techniques such as role-playing. Thus there is reason to believe that there might be error in the findings. Measurement error is also a serious issue with respect to projective techniques, and the possibility of interpreter bias is obvious.

Promises
Projective techniques can uncover information not available through direct questioning or observation. They are particularly useful in the exploratory stages of research. They can generate hypotheses for further testing and provide attribute lists and terms for more structured techniques such as the semantic differential.
The results of projective techniques can be used directly for decision-making.

Word Association
 One of the oldest and simplest projective techniques.
 Respondents are presented with a number of different words, one at a time. After each word, they are asked to give the first word that comes to mind.
 The assumption here is that through free association, respondents will indicate their inner feelings about the subject.
 Responses are timed, so that responses that respondents "reason out" are identified and taken into account in the analysis. The time limit is usually 5 seconds.
 The usual way of constructing such a test is to mix the stimulus words of interest with "neutral" words. The words are read out to the respondent one at a time, and the interviewer records the "first word" association given by the respondent.


 Respondents should not be asked to write their responses, because then the interviewer will not know whether the responses were spontaneous or whether the respondent took time to think them out.
 An example of such a test: "Who would eat a lot of oatmeal?" If the first response is "athletes", the respondent feels that the product is more suited to sportspersons. More words on the same topic will reveal more about the respondent's attitude toward the product.
 While analyzing the results of word-association tests, responses are arranged along such lines as "favorable - unfavorable" and "pleasant - unpleasant".

Sentence Completion
 The respondent is given a number of incomplete sentences and asked to complete them.
 The rule here, too, is that the respondent must fill in the first thought that comes to mind.
 Responses are timed.
 Here the interviewer gets more information than with the word association technique.
 However, it is difficult to disguise the motive of the study from the respondent, who is usually able to diagnose the investigator's purpose.
 For example, "A man who reads Sportstar is ------------------------------------------."
 The sentences can be worded in either the first or third person. No evidence suggests that one of these approaches is better than the other.

Story Completion
 Respondents are given a half-completed story. This is enough to draw their attention to a particular issue, but the ending is left vague, so that responses can vary.
 This technique is very versatile and has numerous applications to marketing problems.
 The findings about products/services give companies inputs for determining advertising and promotional themes and product characteristics.

Pictorial Techniques
 These are similar to the story completion method, except that here pictures are used as the stimuli.
The two main methods used here are a) Thematic Apperception Tests (TAT) and b) the cartoon method.

TAT
 Clinical psychologists have long used this method.
 The respondent is shown a number of ambiguous pictures and is asked to spin stories about them.
 The interviewer may ask questions to help the respondent think. For example, "What is happening here?" focuses the answer on an action, while "Which one is the aggressor?" makes the respondent think about the picture as

one of aggression. The reason respondents must be asked such prompting questions is that the pictures are very abstract and general, and as such are open to very broad and irrelevant interpretations; some amount of focus is needed to channel the respondent's thinking.
 Each subject in the pictures is a medium through which the respondent projects his feelings, ideas, emotions and attitudes. The respondent attributes these feelings to the characters because he sees in the picture something related to himself.
 Responses differ widely, and analysis depends upon the ambiguity of the picture, the extent to which the respondent is able to guess the conclusions, and the vagueness of the supporting questions asked by the interviewer.

Cartoon Tests
Cartoon tests are a version or modification of the TAT, but they are simpler to administer and analyze. Cartoon characters are shown in a specific situation pertinent to a problem. One or more "balloons" indicating the conversation of the characters are left open. The respondent has to fill in these balloons, and the responses are then analyzed.

RELIABILITY AND VALIDITY OF MEASUREMENTS
The terms validity, reliability and accuracy are often used interchangeably, but each has a specific meaning based on the type of measurement error that is present. There are two types of measurement errors:
1. Systematic errors
2. Variable errors
A systematic error, also known as bias, is one that occurs in a consistent manner each time something is measured. For example, a biased question would produce an error in the same direction each time it is asked. A variable error is one that occurs randomly each time something is measured. For example, a response that is less favorable than the true feeling because the respondent was in a bad mood (a temporary characteristic) would not occur each time that individual's attitude is measured.
In fact, an error in the opposite direction (overly favorable) would occur if the individual were in a good mood. This represents a variable error. The term reliability is used to refer to the degree of variable error in a measurement. We define reliability as the extent to which a measurement is free of variable errors. This is reflected when repeated measures of the same stable characteristic in the same objects show limited variation. A common conceptual definition for validity is the extent to which the measure provides an accurate representation of what one is trying to measure. In this conceptual definition, validity includes both systematic and variable error components. However, it is more useful to limit the meaning of the term validity to refer to the degree of

consistent or systematic error in a measurement. Therefore we define validity as the extent to which a measurement is free from systematic error. Measurement accuracy is then defined as the extent to which a measurement is free from both systematic and variable error. Accuracy is the ultimate concern of the researcher, since a lack of accuracy may lead to incorrect decisions.

Reliability
There are various operational approaches for estimating reliability, summarized below.

Approaches to assessing reliability
1. Test-retest reliability: applying the same measure to the same objects a second time.
2. Alternative-forms reliability: measuring the same objects with two instruments that are designed to be as nearly alike as possible.
3. Internal-comparison reliability: comparing the responses among the various items on a multiple-item index designed to measure a homogeneous concept.
4. Scorer reliability: comparing the scores assigned by two or more judges.
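As an illustrative sketch (the ratings below are hypothetical, not from the text), test-retest reliability, the first approach above, is commonly quantified by correlating the two administrations of the same instrument:

```python
# Test-retest reliability sketch: correlate two administrations of the
# same instrument given to the same respondents (hypothetical 1-7 ratings).
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

first_admin = [6, 3, 5, 7, 4, 2, 6]   # ratings at time 1
second_admin = [6, 4, 5, 6, 4, 3, 6]  # same respondents at time 2

r = pearson_r(first_admin, second_admin)  # closer to 1.0 = more reliable
print(round(r, 2))
```

The greater the differences between the two administrations, the lower the correlation, and hence the lower the estimated reliability.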

No one approach is best; several different assessment approaches should generally be used. The selection of one or more means of assessing a measure's reliability depends on the errors likely to be present and the cost of each assessment method in the situation at hand.

Test-Retest Reliability
Test-retest reliability estimates are obtained by repeating the measurement with the same instrument under conditions that are as nearly equivalent as possible. The results of the two administrations are then compared and the degree of correspondence is determined. The greater the differences, the lower the reliability.

A number of practical and computational difficulties are involved in measuring test-retest reliability:
1. Some items can be measured only once. For example, it would not be possible to re-measure an individual's initial reaction to a new advertising slogan.
2. In many situations, the initial measurement may alter the characteristic being measured. Thus an attitude survey may focus the individual's attention on the topic and cause new or different attitudes to be formed about it.


3. There may be some form of carryover effect from the first measure. Retaking a measure may produce boredom, anger, or attempts to remember the answers given in the initial measurement.
4. Factors extraneous to the measuring process may cause shifts in the characteristic being measured. For example, a favorable experience with the brand during the period between the test and the retest might cause a shift in an individual's rating of that brand.

Alternative-Form Reliability
Alternative-form reliability estimates are obtained by applying two "equivalent" forms of the measuring instrument to the same subjects. As in test-retest reliability, the results of the two instruments are compared on an item-by-item basis and the degree of similarity is determined. The basic logic is the same as in the test-retest approach. Two primary problems are associated with this approach:
1. The extra time, expense and trouble involved in obtaining two equivalent measures.
2. More importantly, the problem of constructing two truly equivalent forms. A low degree of response similarity may therefore reflect either an unreliable instrument or nonequivalent forms.
Despite these difficulties, researchers should use alternative measures of important concepts whenever possible, both to allow assessment of reliability (and validity) and to improve accuracy (by using the data from both measures).

Internal-Comparison Reliability
Internal-comparison reliability is estimated by the intercorrelation among the scores of the items on a multiple-item index. All items on the index must be designed to measure precisely the same thing. For example, measures of store image generally involve assessing a number of specific dimensions of the store, such as price level, merchandise, service, and location. Because these dimensions are somewhat independent, an internal-comparison measure of reliability is not appropriate across dimensions.
However, it can be used within each dimension if several items are used to measure each dimension. Split-half reliability is the simplest type of internal comparison. It is obtained by comparing the results of half the items on a multi-item measure with the results from the remaining items. The usual approach to split-half reliability involves dividing the total number of items into two groups on a random basis and computing a measure of similarity. A better approach to internal comparison is known as coefficient alpha. This measure, in effect, produces the mean of all possible split-half coefficients resulting from different splittings of the measurement instrument. Coefficient alpha can range from 0 to 1; a value of .6 or less is usually viewed as unsatisfactory.

Scorer Reliability

Marketing researchers frequently rely on judgment to classify a consumer's response. This occurs, for example, when projective techniques, focus groups, observation, or open-ended questions are used. In these situations it is the judges, or scorers, who may be unreliable, rather than the instrument or respondent. To estimate the level of scorer reliability, each scorer should have some of the items he or she scores judged independently by another scorer. The correlation between the various judges is a measure of scorer reliability.

Validity
Validity, like reliability, is concerned with error. However, it is concerned with consistent or systematic error rather than variable error. A valid measurement reflects only the characteristic of interest and random error. There are three basic types of validity:
1. Content validity
2. Construct validity
3. Criterion-related validity (predictive and concurrent)
These are summarized below.

Basic approaches to validity assessment
1. Content validation: involves assessing the representativeness, or sampling adequacy, of the items contained in the measuring instrument.
2. Criterion-related validation: involves inferring an individual's score or standing on some measurement, called a criterion, from the measurement at hand.
   a. Concurrent validation: involves assessing the extent to which the obtained score may be used to estimate an individual's present standing with respect to some other variable.
   b. Predictive validation: involves assessing the extent to which the obtained score may be used to estimate an individual's future standing with respect to the criterion variable.
3. Construct validation: involves understanding the meaning of the obtained measurements.
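As a numerical sketch of the internal-comparison approach discussed earlier, coefficient alpha can be computed from a respondents-by-items matrix of scores. The ratings below are hypothetical:

```python
# Coefficient alpha for a multiple-item index -- an illustrative sketch
# using hypothetical data (5 respondents x 4 items, each item rated 1-5).
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def coefficient_alpha(responses):
    k = len(responses[0])              # number of items on the index
    items = list(zip(*responses))      # column-wise item scores
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_var / total_var)

data = [
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 4, 4, 5],
    [3, 3, 3, 2],
    [4, 5, 4, 4],
]
alpha = coefficient_alpha(data)
print(round(alpha, 2))
```

By construction this behaves like an average of all possible split-half coefficients; a value of .6 or less would, per the text, be viewed as unsatisfactory.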


Content Validity
Content validity estimates are essentially systematic, but subjective, evaluations of the appropriateness of the measuring instrument for the task at hand. The term face validity has a similar meaning; however, face validity generally refers to "non-expert" judgments by individuals completing the instrument and/or executives who must approve its use. This does not mean that face validity is unimportant. Respondents may refuse to cooperate with, or may fail to treat seriously, measurements that appear irrelevant to them, and managers may refuse to approve projects that utilize measurements lacking face validity. Therefore, to the extent possible, researchers should strive for face validity. The most common use of content validity is with multi-item measures. In this case, the researchers, or some other individual or group of individuals, assess the representativeness, or sampling adequacy, of the included items in light of the purpose of the measuring instrument. Thus, an attitude scale designed to measure the overall attitude towards a shopping center would not be considered to have content validity if it omitted any major attributes such as location, layout and so on. Content validation is the most common form of validation in applied marketing research.

Criterion-Related Validity
Criterion-related validity can take two forms, based on the time period involved:
1. Concurrent validity
2. Predictive validity
Concurrent validity is the extent to which one measure of a variable can be used to estimate an individual's current score on a different measure of the same, or a closely related, variable. For example, a researcher may be trying to relate social class to the use of savings and loan associations. In a pilot study, the researcher finds a useful relationship between attitudes towards savings and loan associations and social class, as defined by Warner's ISC scale.
The researcher now wishes to test this relationship further in a national mail survey. Unfortunately, Warner’s ISC is difficult to use in a mail survey. Therefore, the researcher develops brief verbal descriptions of each of Warner’s six social classes. Respondents will be asked to indicate the social class that best describes their household. Prior to using this measure, the researcher should assess its concurrent validity with the standard ISC scale. Predictive validity is the extent to which an individual’s future level on some variable can be predicted by his or her performance on a current measurement of the same or a different variable. Predictive validity is the primary concern of the applied marketing researcher. Some of the predictive validity questions that confront marketing researchers are: (a) Will a measure of attitudes predict future purchases? (b) Will a measure of sales in a controlled store test predict future market share? (c) Will a measure of initial sales predict future sales? and

(d) Will a measure of the demographic characteristics of an area predict the success of a branch bank in the area?

Construct Validity
Construct validity, understanding the factors that underlie the obtained measurement, is the most complex form of validity. It involves more than just knowing how well a given measure works; it also involves knowing why it works. Construct validity requires that the researcher have a sound theory of the nature of the concept being measured and how it relates to other concepts. A number of approaches exist for assessing construct validity, of which the most common is the multitrait-multimethod matrix approach. Multiple measures (by methods as different from each other as possible) of multiple traits or concepts can be analyzed by the Campbell-Fiske procedure, confirmatory factor analysis, or the direct product model. These techniques generally involve ensuring that the measure correlates positively with other measures of the same construct (convergent validity), does not correlate with theoretically unrelated constructs (discriminant validity), correlates in the theoretically predicted way with measures of different but related constructs (nomological validity), and correlates highly with itself (reliability). For example, suppose we develop a multi-item scale to measure the tendency to purchase prestige brands. Our theory suggests that this tendency is caused by three personality variables:
1. Low self-focus
2. High need for status
3. High materialism
We believe that it is unrelated to brand loyalty and the tendency to purchase new products. Evidence of construct validity would exist if our scale:
1. Correlates highly with other measures of prestige brand preference, such as reported purchases and classifications by friends (convergent validity);
2. Has a low correlation with the unrelated constructs brand loyalty and tendency to purchase new products (discriminant validity);
3.
Has a low correlation with self-focus and high correlations with need for status and materialism (nomological validity); and
4. Has a high level of internal consistency (reliability).

NON-DISGUISED, STRUCTURED TECHNIQUES
The non-structured techniques for attitude measurement are primarily of value in exploratory studies, where the researcher is looking for the salient attributes of given products and the important factors surrounding purchase decisions as seen by the consumer. Structured techniques can provide a more objective measurement system, one that is more comparable to a scale or yardstick. The term scaling has been applied

to efforts to measure attitudes objectively, and a number of useful scales have been developed.

Nominal Data
A set of data is said to be nominal if the values/observations belonging to it can be assigned a code in the form of a number, where the numbers are simply labels. You can count, but not order or measure, nominal data. For example, in a data set males could be coded as 0 and females as 1; the marital status of an individual could be coded as Y if married and N if single.

Ordinal Scales
 They are the simplest attitude-measuring scales used in marketing research.
 They serve to rank respondents according to some characteristic, such as favorability towards a certain brand, or to rank items such as brands in order of consumer preference.
 They do not measure the degree of favorability of the different rankings. All the scale tells us is that the individual or item has more, less, or the same amount of the characteristic being measured as some other item.
 They are the most widely used type of scale in marketing research.
A set of data is said to be ordinal if the values/observations belonging to it can be ranked (put in order) or have a rating scale attached. You can count and order, but not measure, ordinal data. The categories of an ordinal data set have a natural order. For example, suppose a group of people were asked to taste varieties of biscuit and classify each biscuit on a rating scale of 1 to 5, representing strongly dislike, dislike, neutral, like, strongly like. A rating of 5 indicates more enjoyment than a rating of 4, so such data are ordinal. However, the distinction between neighboring points on the scale is not necessarily always the same. For instance, the difference in enjoyment expressed by giving a rating of 2 rather than 1 might be much less than the difference expressed by giving a rating of 4 rather than 3.
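A small sketch of the distinction (the codes and ratings below are hypothetical): nominal values can only be counted, while ordinal values can also be ranked.

```python
# Levels-of-measurement sketch. Nominal values are labels: counting is
# meaningful, ordering is not. Ordinal values can also be ranked, but the
# gaps between scale points need not be equal. Data are hypothetical.
from collections import Counter

marital = ["Y", "N", "Y", "Y", "N"]   # nominal: Y = married, N = single
counts = Counter(marital)             # valid: counting per category
print(counts)

ratings = [5, 2, 4, 4, 1, 3]          # ordinal: 1 = strongly dislike ... 5 = strongly like
print(sorted(ratings))                # valid: ranking/ordering
# Note: averaging `ratings` would treat the data as interval-scaled,
# which ordinal data do not strictly support.
```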
Interval Scales  They separate individuals or items by rank order but measure the distance between rank positions in equal units.  Such a scale permits the researcher to say that the position 4 is above position 3 on the scale, and also the distance from position 5 to 4 is same as from 4 to 3.  Such a scale however does not permit conclusions that position 6 is twice as strong as position 3 because no zero position has been established. An interval scale is a scale of measurement where the distance between any two adjacent units of measurement (or 'intervals') is the same but the zero point is arbitrary. Scores on an interval scale can be added and subtracted but cannot be meaningfully multiplied or divided. For example, the time interval between the starts of years 1981 and 1982 is the same as that between 1983 and 1984, namely 365 days. The zero point, year 1 AD, is arbitrary; time did not begin then. Other examples of interval scales include the heights of tides, and the measurement of longitude.

Ratio Scales
 If one measures the distance between two points as four feet and the distance between two other points as two feet, it is possible to say that one distance is twice the other, because each distance is measured from an absolute zero. A scale that permits such measurements is called a ratio scale.
 While ratio scales are common in the physical sciences, the measurement of attitudes is still so crude that they are of little significance in marketing research.

Semantic Differential Scale
 It is a special type of graphic scale, increasingly used in marketing research.
 It establishes a connection between brand and company image studies and also permits the development of descriptive profiles that facilitate comparison of competitive items.
 The unique characteristic of the semantic differential is the use of bipolar scales to rate any product, company or concept of interest.
 Respondents are given a group of these scales and asked to check, on each one, the point that indicates their opinion of the subject in question.
 Each scale consists of two opposing adjectives, such as good/bad, clean/dirty, most popular/least popular, etc., separated by a continuum divided into seven segments.
 Respondents are asked to check the segment that most closely coincides with their opinion of the product or item being rated.
 It is best used for image-description purposes and is not recommended for overall attitude measurement.
 The advantage of the semantic differential is its simplicity, while producing results comparable with those of more complex scaling methods.
 The method is easy and fast to administer, yet it is also sensitive to small differences in attitude, highly versatile, reliable and generally valid.
For example:
Perception of national brands and private brands:

High quality           3  2  1  0  1  2  3   Low quality
Lower price            3  2  1  0  1  2  3   High price
Higher value           3  2  1  0  1  2  3   Low value
Attractive packaging   3  2  1  0  1  2  3   Unattractive packaging
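The scoring behind such a profile can be sketched as follows: each seven-segment bipolar scale is recoded from +3 (favorable pole) to -3 (unfavorable pole) and averaged across respondents. The scale names and responses below are hypothetical.

```python
# Build a semantic-differential profile: mean recoded score (+3..-3)
# per bipolar scale, averaged across respondents. Hypothetical data.
scales = ["quality", "price", "value", "packaging"]

# One row per respondent; one recoded score per scale.
respondents = [
    [3, 1, 2, 2],
    [2, 0, 1, 3],
    [3, -1, 2, 2],
]

profile = {
    scale: sum(row[i] for row in respondents) / len(respondents)
    for i, scale in enumerate(scales)
}
for scale, mean in profile.items():
    print(scale, round(mean, 2))
```

Plotting the mean score for each scale yields the descriptive profile used to compare competing brands.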


The Constant Sum Scale
The constant sum scale requires the respondent to divide a constant sum, generally 10 or 100, among two or more objects or attributes in order to reflect the respondent's relative preference for each object, the importance of the attribute, or the degree to which an object contains each attribute. The constant sum scale can be used in two ways:
1. For two objects at a time (paired comparison), or
2. For more than two objects at a time (quadric comparison).
Advantages
When rank order data are used, the researcher has no way of knowing whether one characteristic is of overwhelming importance, or whether a characteristic is not much more important than the others. This can be explained with the following example. Suppose a sample of respondents from the target market is asked to rank order several automobile characteristics, with 1 being most important. Assume the individual rankings are similar and produce the following median ranks for each attribute:

Price          1
Economy        2
Dependability  3
Safety         4
Comfort        5
Style          6

A constant sum measure of the importance of the same attributes could be obtained with the following procedure: "Divide 100 points among the characteristics listed so that the division reflects how important each characteristic is to your selection of a new automobile."

Price ____  Economy ____  Dependability ____  Safety ____  Comfort ____  Style ____  Total 100

All three of the following groups' average responses to the constant sum scale would be consistent with the rank order results just described:

                Group A   Group B   Group C
Price              35        20        65
Economy            30        18         9
Dependability      20        17         8
Safety             10        16         7
Comfort             3        15         6
Style               2        14         5
Total             100       100       100

With a rank order scale, the researcher has no way of knowing whether price is of overwhelming importance (Group C), part of a general, strong concern for overall cost (Group A), or not much more important than the other attributes (Group B). The constant sum scale provides such evidence.
Disadvantage
A disadvantage is that individuals may occasionally misassign points so that the total is more than, or less than, 100. This can be adjusted for by dividing each point allocation by the actual total and multiplying the result by 100.

Thurstone Scale
The Thurstone scale is a multi-item scale built with L.L. Thurstone's method of equal-appearing intervals. It rests on the concept that, even though people cannot assign quantitative measures to their own attitudes, they can tell the difference between the attitudes represented by two different statements and can identify items that are approximately halfway between the two. The procedure is as follows:
1. Collect a large number of statements (perhaps as many as several hundred) related to the attitude in question.
2. Have a number of judges (perhaps 20 or more) sort the statements independently into 11 piles that vary from the most favorable statements, through neutral statements, to the most unfavorable statements.
3. Study the frequency distribution of ratings for each statement and eliminate those statements to which the different judges have given widely scattered ratings - that is, statements that fall in a number of different piles.
4. Determine the scale value of each of the remaining statements - that is, the number of the pile in which the median of the distribution falls.
5. Select one or two statements from each of the 11 piles for the final scale. Those statements with the narrowest range of ratings are preferred as the most reliable.
Advantages
There are 11 attitude positions because, with an odd number of positions, it is easier to identify a neutral position.
Disadvantages
1.
Thurstone scales are not widely used in marketing research because they are time-consuming to prepare.
2. The ratings may be influenced by the judges' personal attitudes.
3. Different individuals can obtain exactly the same score by agreeing with quite different items.
4. It does not obtain information about the intensity of agreement with the ratings.

Likert Scale
These scales are sometimes referred to as summated scales. A Likert scale requires a respondent to indicate a degree of agreement or disagreement with each of a series of statements related to the attitude object.

For example: "The service at a retail store is very important to me."
____ Strongly agree
____ Agree
____ Neither agree nor disagree
____ Disagree
____ Strongly disagree
To analyze a Likert scale, each response category is assigned a numerical value. The categories above could be assigned values such as Strongly agree = 1 through Strongly disagree = 5, the scoring could be reversed, or a -2 through +2 system could be used. Responses can be analyzed on an item-by-item basis, or they can be summed to form a single score for each individual.
Advantages
1. It is relatively easy to construct and administer.
2. The instructions that accompany the scale are easily understood; hence it can be used for mail surveys and for interviews with children.
Disadvantages
1. It takes longer to complete than the semantic differential scale and similar scales.
2. Care needs to be taken when using Likert scales in cross-cultural research, as there may be cultural variations in willingness to express disagreement.

Comparison of Thurstone and Likert Scales
The two scales have a lot in common, and both have been widely used in the past. Because of the ordinal nature of Likert scales, many individuals feel they may be more reliable than the Thurstone scale.

DISGUISED, STRUCTURED TECHNIQUES
The basic premise underlying such tests is that respondents will reveal their attitudes by the extent to which their answers to objective questions vary from the correct answers. Respondents are given questions that they are not able to answer correctly, so they are forced to guess at the answers. The direction and extent of these guessing errors are assumed to reveal their attitudes on the subject. For example, individuals tend to gather information that supports their attitudes; therefore, the extent and kind of information individuals possess on a given subject indicates something of their attitude.
For example:  How much do u think it cost for the hot cereal alone in a average bowl of cereal such as you’d serve at the breakfast?  Do corn flakes cost less or more per bowl than cereal? CONCEPT TESTING Attitude Scale: Sets of rating scales used to measure one or more dimensions of an individual’s attitude toward some object. Attitude scales are constructed using likert, semantic differential or Stapel scales.

Page 135

Concurrent Validity: A measure of how accurate a measure of an object, state or event is now, as opposed to how accurate it will be in the future (predictive validity). One measure of concurrent validity is how comparable the results of Instrument A and Instrument B are when both are used to measure the same characteristic in the same object at the same point in time.

Constant Sum Scale: The constant sum scale requires the respondent to divide a constant sum, generally 10 or 100, among two or more objects or attributes in order to reflect the respondent's relative preference for each object, the importance of the attribute, or the degree to which an object contains each attribute.

Construct Validity: Understanding the factors that underlie the obtained measurement. It involves knowing how well and why a given measure works by having a sound theory of the nature of the concept being measured and how it relates to other concepts.

Depth Interview: An interviewing procedure in which the interviewer does not have a prespecified list of questions. The interviewer is free to create questions and probe responses that appear relevant, and respondents are free to respond to questions in any way they think appropriate. Types of depth interviews include individual, mini group and focus group.

External Validity: The ability of the results from an experiment to predict the results in the actual situation.

Face Validity: A form of content validity that exists when "non-experts" such as respondents or executives judge the measuring instrument as appropriate for the task at hand.

Free Word Association: A projective technique that requires the respondent to give the first word or thought that comes to mind after the researcher presents a word or phrase.

Internal Validity: The degree of replicability of an experiment, or assurance that experimental results are due to the variables manipulated in the experiment in that specific environment.
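The constant sum scale defined above is usually analyzed by normalizing each respondent's point allocation into relative weights. A hypothetical sketch (the attribute names and point values are made up for illustration):

```python
# Sketch: turning a constant-sum allocation (e.g. 100 points divided among
# attributes) into relative preference weights.

def preference_weights(allocation):
    """Normalize a respondent's constant-sum allocation to proportions."""
    total = sum(allocation.values())  # should equal the constant sum, e.g. 100
    return {attr: pts / total for attr, pts in allocation.items()}

# One respondent divides 100 points among three (illustrative) attributes.
allocation = {"price": 50, "quality": 30, "service": 20}
print(preference_weights(allocation))
# {'price': 0.5, 'quality': 0.3, 'service': 0.2}
```

Dividing by the respondent's actual total (rather than assuming 100) also guards against allocations that do not sum exactly to the constant.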
Interval Scale: Numbers are used to rank items such that numerically equal distances on the scale represent equal distances in the property being measured. The location of the zero point and the unit of measurement are determined by the researcher; consequently, ratios calculated on data from interval scales are not meaningful.

Ordinal Scale: A rating scale in which numbers, letters, or other symbols are used to assign ranks to items. An ordinal scale requires the respondent to indicate if one item has more or less of a characteristic than another item. The magnitude of difference between the items is not estimated.

Predictive Validity: The extent to which the future level of some variable can be predicted by a current measurement of the same or a different variable.
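Why ratios on interval data are not meaningful can be shown with temperature, a standard illustration (not from the notes): the ratio between two readings changes when the arbitrary zero point changes.

```python
# Sketch: ratios are meaningful on a ratio scale (natural zero) but not on an
# interval scale, where the zero point is set by convention.

def c_to_f(celsius):
    """Convert Celsius to Fahrenheit -- same interval property, different zero."""
    return celsius * 9 / 5 + 32

a, b = 10.0, 20.0             # two interval-scale readings in Celsius
print(b / a)                  # 2.0 -- but "twice as hot" is spurious:
print(c_to_f(b) / c_to_f(a))  # 1.36 -- the ratio shifts with the zero point
```

Differences, by contrast, survive the change of zero (`b - a` is 10 °C and `c_to_f(b) - c_to_f(a)` is 18 °F, the same interval), which is exactly what the interval-scale definition promises.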
Page 136

Projective Technique: The technique of inferring a subject's attitudes or values based on his or her description of vague objects requiring interpretation. Common types used in market research include cartoon, picture-response, third person and sentence completion.

Ratio Scale: A rating scale in which items are ranked so that numerically equal scale distances represent equal distances in the property being measured. These scales have a natural and known zero point.

Reliability: The extent of variable error in a measurement. Reliability exists when repeated measures of the same stable characteristic in the same objects or persons show limited variation.

Scorer Reliability: The extent of agreement among judges (scorers) working independently to categorize a series of objects. The higher the degree of agreement between the judges, the greater the reliability of the categorization.

Semantic Differential Scale: An attitude scaling device that requires the respondent to rate the attitude object on a number of itemized, seven-point rating scales bounded at each end by one of two bipolar adjectives or phrases.

Sentence Completion Technique: A projective technique requiring the subject to complete a sentence using the first phrase that comes to mind. The subject is not required to associate himself or herself with the response.

Split Half Reliability: A measure of reliability in which the results from half the items on a multi-item measure are compared with the results for the remaining items. If there is a substantial variation between the groups, the reliability of the instrument is in doubt.

Validity: The amount of systematic error in a measurement.

Page 137
