
Full Screen and Product Testing

The Full Screen

- A step often seen as a necessary evil, yet very powerful and with long-lasting effects
- The last low-risk evaluation most professional firms undertake: a pre-technical evaluation that summarizes what must be done
- Methods range from simple checklists to complex mathematical models

Purposes of the Full Screen

- To decide whether technical resources should be devoted to the project
  - Feasibility of technical accomplishment -- can we do it?
  - Feasibility of commercial accomplishment -- do we want to do it?
- To help manage the process
  - Recycle and rework concepts
  - Rank order good concepts
  - Track appraisals of failed concepts
  - Encourage cross-functional communication
  - Avoid potholes


Screening Alternatives

- Judgment/managerial opinion
- Concept test followed by sales forecast (if the only issue is whether consumers will like it)




A Simple Scoring Model

Figure 10.2

Point values for each factor:

  Factor            4 Points   3 Points   2 Points   1 Point
  Degree of Fun     Much       Some       Little     None
  Number of People  Over 5     4 to 5     2 to 3     Under 2
  Affordability     Easily     Probably   Maybe      No
  Capability        Very       Good       Some       Little

Student's scores:

  Factor         Skiing   Boating   Hiking
  Fun            4        3         4
  People         4        4         2
  Affordability  2        4         4
  Capability     1        4         3
  Totals         11       15        13

Answer: Go boating.
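The totals in Figure 10.2 are plain sums of the factor scores; a minimal sketch of that arithmetic (alternative names and scores copied from the figure):

```python
# Simple (unweighted) scoring model: each alternative gets a 1-4 score
# on every factor, and the highest total wins.
scores = {
    "Skiing":  {"Fun": 4, "People": 4, "Affordability": 2, "Capability": 1},
    "Boating": {"Fun": 3, "People": 4, "Affordability": 4, "Capability": 4},
    "Hiking":  {"Fun": 4, "People": 2, "Affordability": 4, "Capability": 3},
}

totals = {alt: sum(s.values()) for alt, s in scores.items()}
best = max(totals, key=totals.get)
print(totals)           # {'Skiing': 11, 'Boating': 15, 'Hiking': 13}
print("Answer:", best)  # Answer: Boating
```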


Source of Scoring Factor Models

Figure 10.3


A Scoring Model for Full Screen

Figure 10.4
Note: this model shows only a few sample screening factors. Each factor receives a score (1-5), which yields a weighted score.

Technical accomplishment factors:
- Technical task difficulty
- Research skills required
- Rate of technological change
- Design superiority assurance
- Manufacturing equipment...

Commercial accomplishment factors:
- Market volatility
- Probable market share
- Sales force requirements
- Competition to be faced
- Degree of unmet need...
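A weighted score is simply weight × score, summed over all factors. The sketch below uses factor names from Figure 10.4 but invented weights and scores, since the figure's actual numbers are not reproduced here:

```python
# Weighted scoring model: weighted score = factor weight * score (1-5);
# the project's total is the sum. Weights and scores are illustrative only.
factors = {  # factor -> (weight, score)
    "Technical task difficulty":    (0.15, 4),
    "Research skills required":     (0.10, 3),
    "Rate of technological change": (0.10, 2),
    "Probable market share":        (0.25, 4),
    "Competition to be faced":      (0.20, 3),
    "Degree of unmet need":         (0.20, 5),
}

weighted_total = sum(weight * score for weight, score in factors.values())
print(round(weighted_total, 2))  # 3.7 on the 1-5 scale
```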


The Scorers

- Major functions (marketing, technical, operations, finance)
- New products managers
- Staff specialists (IT, distribution, procurement, PR, HR)

Problems with scorers:
- May be always optimistic/pessimistic
- May be "moody" (alternately optimistic and pessimistic)
- May always score neutral
- May be less reliable or accurate
- May be easily swayed by the group
- May be erratic

Industrial Research Institute Scoring Model

Figure 10.5

Technical success factors:
- Proprietary position
- Technical complexity
- Access to and effective use of external technology
- Manufacturing capability

Commercial success factors:
- Customer/market need
- Market/brand recognition
- Channels to market
- Customer strength
- Raw materials/components supply
- Safety, health and environmental risks

Source: John Davis, Alan Fusfield, Eric Scriven, and Gary Tritle, "Determining a Project's Probability of Success," Research-Technology Management, May-June 2001, pp. 51-57.

Alternatives to the Full Screen

- Profile sheet
- Empirical model
- Expert systems
- Analytic Hierarchy Process



A Profile Sheet

Figure 10.6


Criteria Based on the NewProd Studies

Criteria (rated yes/no):
- Strategic alignment
- Existence of market need
- Likelihood of technical feasibility
- Product advantage
- Environmental health and safety policies
- Return versus risk
- Show stoppers (killer variables)


Criteria Based on the NewProd Studies (continued)

Criteria (rated on scales):
- Strategic (alignment and importance)
- Product advantage (unique benefits, meets customer needs, provides value for money)
- Market attractiveness (size, growth rate)
- Synergies (marketing, distribution, technical, manufacturing expertise)
- Technical feasibility (complexity, uncertainty)
- Risk vs. return (NPV, IRR, ROI, payback)


Analytic Hierarchy Process (AHP)

Figure 10.9

Goal: Select best NPD project

Criteria and subcriteria:
- Market Fit: product, product line, channel, logistics, timing, price, sales force
- Tech. Fit: design, materials supply, mfg. tech., mfg. timing, differential advantage
- Dollar Risk: payoffs, losses
- Uncertainty: unmitigated, mitigated

Alternatives: Products 1, 2, 3, and 4
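AHP turns pairwise "how much more important is criterion A than B?" judgments into numeric weights. A minimal sketch using the row geometric-mean approximation to the principal eigenvector; the comparison matrix below is invented for illustration, not taken from Figure 10.9:

```python
import math

# Pairwise comparison matrix: A[i][j] = importance of criterion i
# relative to criterion j on Saaty's 1-9 scale (values are illustrative).
criteria = ["Market Fit", "Tech. Fit", "Dollar Risk", "Uncertainty"]
A = [
    [1,    2,    4, 4],
    [1/2,  1,    2, 2],
    [1/4,  1/2,  1, 1],
    [1/4,  1/2,  1, 1],
]

# Row geometric means, normalized, approximate the AHP priority vector.
geo_means = [math.prod(row) ** (1 / len(row)) for row in A]
weights = [g / sum(geo_means) for g in geo_means]
print([round(w, 3) for w in weights])  # [0.5, 0.25, 0.125, 0.125]
```

Because this example matrix is perfectly consistent, the approximation matches the exact eigenvector; real judgment matrices should also pass a consistency-ratio check.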


Product Testing

- A critical measure of a new product's market potential
- Extremely important in FMCG companies
- Product testing serves four purposes:
  - Against competition: which of the alternatives offered is preferred relative to competition
  - Product improvement: whether an improved formula could replace the current product
  - Cost saving: whether a less expensive product could replace the current one
  - Concept fit: whether the product variant resembles the selling message

Product testing procedures

- Blind vs. branded test: a key issue
- Blind test: reactions to the "pure" product; no brand name as yet
- Branded test: the brand is difficult to conceal; allows measurement of the effects of the brand, etc.


Basic principles

- The test product should be representative of the product that will ultimately be in the market
- Name and packaging should be similar
- If different formulas are used, size, shape, and colour should be identical
- Avoid labels that bias (e.g. the sequence of letters)

Procedures for product tests

- Monadic designs: the consumer evaluates one product, having no other product for comparison
- Comparison designs: the consumer rates two or more products
  - Sequential monadic: rates one product, then is given a second product that is rated independently; the two ratings are then compared
  - Protomonadic: rates one product, is given a second product, and compares both
  - Paired comparison: directly compares two products
  - Repeat paired comparison: the consumer is given two or more sets of products to compare against each other at two different points of time
  - Round robin: a series of products is tested against each other
  - Triangle designs: the consumer is given two samples of one product and one sample of another, and asked to identify the one that differs
  - Duo-trio: a standard product is given, and the consumer is asked to determine which of the other (two) products is similar to it
  - Difference test: the consumer is asked to determine if one product is different from the other






Location: in-home testing vs. central location testing

Limitations of central location tests:
1. Unrealistic setting
2. Opinions of other family members ignored

Periodicity: usually a week, but depends on the product and its purchase cycle

Sales wave (extended product test): consumers are encouraged to buy at intervals coinciding with the normal purchase cycle. Useful for:
- Identification of novelty/product wear-out
- Identification of problems
- Market share prediction
- Potential segments

Monadic vs. paired test

- Monadic is realistic: typically a consumer uses one product at a time and decides
- But monadic tests are difficult to interpret (e.g. 80% say "excellent")
- Comparison tests concentrate on product differences
- In certain situations involving sensory evaluations, comparison tests are impractical



Questions asked: preference, overall rating, attribute rating, likes/dislikes, uniqueness, usage pattern, etc.

Sample: usually non-probability; 100-200 for in-home tests, around 20 for a CLT; cost is a factor

Decision criterion: a preference that is statistically significant; where claims of superiority are made, the product should have significant preference; conventions may vary across MR agencies
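Whether an observed preference split is "statistically significant" is conventionally checked with an exact binomial test against a 50/50 null. A minimal standard-library sketch; the sample numbers and the 0.05 cutoff are illustrative, and as noted, conventions vary across MR agencies:

```python
from math import comb

def binomial_two_sided_p(successes: int, n: int) -> float:
    """Exact two-sided binomial test p-value against p = 0.5."""
    k = max(successes, n - successes)              # fold to the larger tail
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)                      # double the one-sided tail

# Example: 60 of 100 respondents prefer product A over product B.
p_value = binomial_two_sided_p(60, 100)
print(round(p_value, 4))               # ~0.0569: just short of the 0.05 bar
print("significant:", p_value < 0.05)  # significant: False
```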



Product testing in industrial markets

- Buyers cannot decide on the merits and demerits of a new product quickly
- Only a few product testers, distinct from potential buyers, are used
- Testers need to adapt products to suit their needs
- Buyers have expertise in the product

Purposes of beta test

- To check product functioning in situ
- To confirm selection of features, both core and optional
- To test accuracy and usefulness of support material
- To assess the level of training required
- To evaluate perceived strengths and weaknesses compared to those of competitors
- To promote sales with the chosen site
- To use the site as a demo for product benefits

A few important aspects

- A systems approach is needed: methods and procedures of product testing should constitute a standardized system for like products
- Normative databases need to be built over time for better interpretation
- Use the same research company across tests
- Test in a real environment
- Choose relevant variables from the consumers' perspective (particularly while using qualitative methods)
- Take conservative action while dealing with established products