
Spotlight on Power

Definition, Validation,
Checks and Analysis

19th July 2021


Contents
Click on a link to go straight to the required section

ꟷDefinition
ꟷValidation
ꟷCalculation & Checks
ꟷAnalysis Examples
ꟷPutting Power in Context
ꟷCombining Power across different sub-brands, countries or categories

Other resources
ꟷFor Target Setting, see Spotlight on Simulator here
ꟷFor a comparison to Streamlined Power, see here

What is Power?
Consumer’s predisposition to choose

Power is a measure of consumer demand for the brand, which gives us a prediction of the brand's volume share, based purely on perceptions, absent of activation factors.

Brands with high Power drive market share
Our pilot validation proves that consumers buy 5 times more of the brands for which they have a high Power share, compared with those for which their Power share is low

Brands with a strong Power score are those that consumers are more likely to buy.

[Chart: volume bought, indexed to the average, by Power tier — Low: 0.44, Medium: 0.68, High: 2.2]
Source: Shopcom panel data merged with equity survey responses, comparing Power and Premium scores to shopping habits of 1,600 consumers
Analysis includes 65 brands in 4 categories
Low = bottom 25%; Medium = middle 50%; High = top 25%
Important context
The Power score is only intended to predict the volume share that the brand would deliver if volume
sales depended only on consumer brand associations (equity)
The Power score is not intended to account for in-market barriers & facilitators; this is the role of share flow analysis

[Diagram: brand associations (Meaningful, Different, Salient) create brand predisposition (Power, Premium, Potential); in-market barriers and facilitators, such as advertising and other activation factors, then sit between predisposition and actual purchase]

Calculation and checks
Internal use only
How is Power calculated?

Power is calculated using a regression of Meaningful, Different and Salient to a dependent variable at a respondent level that
represents the likely volume share for the respondent’s next purchase or purchases (the Power dependent variable)
The weights given to each aspect (meaningful, different and salient) are customised to each category
ꟷThe model tends to be driven most by Meaningful, then Salient, and then Different

We report Power as a percentage share amongst total sample because we want to reflect the relationship it has with volume
share
ꟷIt can be reported even for brands with a low base size as long as the total sample size is robust
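
As a rough illustration of the fitting step (not the production implementation), the sketch below fits an ordinary least-squares regression of the three factor scores against the Power dependent variable. The file name, column names and use of scikit-learn are assumptions for the example.

```python
# Minimal sketch of the fitting step, assuming a respondent-level file with one row
# per respondent-brand pair (aware respondents only). File name, column names and
# the use of scikit-learn are illustrative assumptions, not the production set-up.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("respondent_brand_scores.csv")   # hypothetical input

X = df[["meaningful", "different", "salient"]]    # factor scores
y = df["power_dependent"]                         # e.g. share of last 10 purchases or CL

model = LinearRegression().fit(X, y)

weights = dict(zip(X.columns, model.coef_))       # category-specific M, D, S weights
constant = float(model.intercept_)
r_squared = model.score(X, y)                     # the "Power r-squared" checked later

print(weights, round(constant, 3), round(r_squared, 2))
```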

Internal use only
The calculations in more detail

First, we calculate raw power, at a respondent level, based on those aware of each brand
ꟷWe use the Meaningful, Different and Salient factor scores to create a regression against volume (CL, share of last 10
purchases or weighted consideration)
ꟷThe regression coefficients give us a weight for each (indicating how important they are in explaining volume) and a constant
ꟷWe create a raw power score for each brand the respondent is aware of by multiplying the Meaningful, Different and Salient
factor scores by these weights and adding the constant
Then we create final Power, again at respondent level, but based on total sample
ꟷWe recode raw power to zero for each brand that the respondent is unaware of or that originally produced a negative raw
power score (meaning the brands that they don’t have a relationship with); this means that when we report our final Power
score it is based on total sample
ꟷStill at respondent level, we re-share these raw power scores across all brands to create the final power score, which sums to
1 or 100% for each respondent
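
The recoding and re-sharing step can be illustrated for a single respondent as below; the brand names and raw scores are made up for the example.

```python
# Minimal sketch for one respondent: recode raw power to zero where the respondent
# is unaware of the brand or the raw score is negative, then re-share so the
# scores sum to 1. Brand names and raw scores are illustrative only.
raw_power = {"Brand A": 0.42, "Brand B": 0.18, "Brand C": -0.05, "Brand D": None}  # None = unaware

clipped = {b: v if (v is not None and v > 0) else 0.0 for b, v in raw_power.items()}

total = sum(clipped.values())
final_power = {b: (v / total if total > 0 else 0.0) for b, v in clipped.items()}

print(final_power)   # ≈ {'Brand A': 0.70, 'Brand B': 0.30, 'Brand C': 0.0, 'Brand D': 0.0}
# The reported Power % for each brand is then the average of these respondent-level
# scores across the total sample.
```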

The Power model requires a dependent variable that is related to volume
There are 3 options available

Option 1: Claimed market share
ꟷComponent: last 10 purchases, share of last 10, share of spend, or share of usage or proportion
ꟷQuestion ID: LAST_TEN, AU3c1, AU3c2 or AU3c3
ꟷPros: our pilot work showed that claimed market share gave the equal best prediction of actual volume share, equal with CL; it is a transparent metric that is easy for clients to understand
ꟷCons: it requires more questionnaire length than simple consideration

Option 2: CL
ꟷComponent: a composite metric built from consumer typology, consideration, brand bought last and price
ꟷQuestion ID: TYP_[STD/SERV], CONSIDERATION, BRAND_BLAST, PRICE
ꟷPros: our pilot work showed that CL gave the equal best prediction of actual volume share, equal with claimed market share; CL gives a common currency across categories; CL is unique to Kantar and we have used it for many years (it was the volume share surrogate in legacy BrandDynamics and is used in BrandZ)
ꟷCons: the calculation is proprietary to Kantar, so it can't be shared and can feel like a black box to some clients, which can be a distraction; it requires more questionnaire length than simple consideration

Option 3: Share of Weighted Consideration
ꟷComponent: consideration
ꟷQuestion ID: CONSIDERATION
ꟷPros: consideration is the quickest to ask and provides a common currency across categories
ꟷCons: it gives the least precise prediction of actual purchase volumes of the three options
For infrequently purchased categories, the purchase cycle may be as long as a year or more, so the last purchase may be of little use in predicting the next purchase.
Therefore we do not recommend using Bought last.
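
For instance, the claimed market share option reduces to a simple share calculation per respondent; the purchase counts below are hypothetical.

```python
# Minimal sketch of the claimed market share dependent variable for one respondent,
# built from a last-10-purchases question. Purchase counts are hypothetical.
last_ten = {"Brand A": 6, "Brand B": 3, "Brand C": 1}

total_purchases = sum(last_ten.values())
claimed_share = {brand: n / total_purchases for brand, n in last_ten.items()}

print(claimed_share)   # {'Brand A': 0.6, 'Brand B': 0.3, 'Brand C': 0.1}
```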
Internal use only
How to check the Power model

The aim of the Power model is to set weights for Meaningful, Different and Salient that help predict the volume share the brand
would deliver if volume sales depended only on consumer brand associations (equity)
The Power r-squared (found in the Factor-Regression tab of the BrandDynamics output spreadsheet) usually ranges
between 0.35 and 0.55
ꟷIf above 0.65, the model is very good, which suggests that brand equity is a key determinant of purchase volumes in the
category (this is also likely to be reflected in the share flow analysis, if you use that)
ꟷIf it is below 0.2, please consult with your local Data Science team
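
The rule of thumb above could be codified as a quick check like the sketch below; the r-squared value is illustrative and would come from the Factor-Regression tab.

```python
# Minimal sketch of the r-squared rule of thumb; the value is illustrative.
r_squared = 0.48

if r_squared > 0.65:
    print("Very good fit: equity is a key determinant of purchase volumes in this category")
elif r_squared < 0.2:
    print("Low fit: consult your local Data Science team before reporting Power")
else:
    print("Typical fit (most studies fall between 0.35 and 0.55)")
```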

Internal use only
Some context for the Power model fit (R²)
In creating a model for Power, it’s vital to understand that the likelihood of completely explaining brand choice (i.e. R²=100%), just using the
elements of equity that we have in our construct (Meaningful, Different and Salient), is close to nil. There are a number of reasons for this.
The first is the many other factors involved. Habit, price, availability, etc. These are examined in the market factors and volume flow analyses,
but we don’t want to include them and their collinearity issues in the equity model; partly because that complexity exceeds what’s realistic in an
automated analysis, but also because we want to separate the roles of predisposition from the decisions made at the point of purchase.
The next, possibly larger issue, is that we are using respondent level data to create a model that describes the average patterns across the
category, as defined by the brands we’ve included and the sample we’ve interviewed. Limited brand lists and interaction between adjacent
categories are increasingly important, but non-trivial models, based on respondent level scores(*) for continuous variables, just don’t see the
sort of impressive r-squared values that can be seen for pairs of surrogates or data based on aggregated (brand-level) scores. Indeed the
relationships between Power and Sales Share at the brand level are usually quite strong (see the chart below).
Some categories are more difficult to define or represent and some do see external factors playing a larger role in brand choice, so there will be a range of r-squared values seen in different studies. Key here is that models with the sorts of fits generally seen are highly capable of representing the role of equity, and of decomposing it into reliable, actionable assessments.
Where values are really quite low, some additional caution may be needed, but the primary judgement will remain that the M, D and S values attributed make sense.

Note (*): for a study with a 600 sample, 16 brands and average awareness of ~80%, the M, D, S and Power dependent variables mean a model with ~30,000 values.

[Chart: brand-level Power vs sales volume share (Kantar Worldpanel), r = 0.91]
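
The brand-level relationship shown in the chart can be reproduced with a minimal sketch like the one below. The figures are invented for illustration, and the brand-level Power scores are assumed to be the mean of the respondent-level final Power scores.

```python
# Minimal sketch of the brand-level view: average respondent-level Power to brand
# level and correlate with sales volume share. All figures are made up for
# illustration; they are not the pilot data shown in the chart.
import pandas as pd

brand_power = pd.Series({"Brand A": 0.31, "Brand B": 0.24, "Brand C": 0.16,
                         "Brand D": 0.12, "Brand E": 0.17})   # mean of respondent-level scores
sales_share = pd.Series({"Brand A": 0.33, "Brand B": 0.22, "Brand C": 0.14,
                         "Brand D": 0.13, "Brand E": 0.18})

print(round(brand_power.corr(sales_share), 2))   # brand-level r is typically much higher
                                                 # than the respondent-level r-squared
```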

Analysis
Meaningfully Different Signature Visual: The Propeller

The Power score is shown in the centre of The Propeller

[Visual: example Propeller from the US Haircare pilot, showing Meaningful, Different and Salient scores (indexed to 100 = average) around the blades, with the Power score shown in the centre]

Source: US Haircare pilot data


The brands can also be plotted on a map to give an overview of the market

Meaningful vs Different, with Salient determining dot size and Power shown by depth of colour (alternatively you could show the Power % within each dot, rather than using depth of colour).

Pantene Pro-V is by far the strongest brand in the market; a 'triple threat' with strength in being meaningful for consumers and different to other brands, with strong salience.

[Chart: US Haircare brand map, Meaningful (x-axis) vs Different (y-axis), both indexed to 100, with dot size showing Salient and colour depth showing Power. Brands plotted: Pantene Pro-V, Head & Shoulders, John Frieda, Nexxus, Fructis, Aussie, Dove, TRESemmé, Herbal Essences, ThermaSilk, Matrix, L'Oréal, Johnson's, Neutrogena, Suave and store brand]
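
If you want to recreate this kind of map yourself, a minimal matplotlib sketch might look like the one below; the brand values are illustrative, not the actual pilot scores.

```python
# Minimal sketch of the brand map: Meaningful (x) vs Different (y), dot size scaled
# by Salient and colour depth by Power. Values are illustrative, not the pilot data.
import matplotlib.pyplot as plt

brands     = ["Pantene Pro-V", "Head & Shoulders", "Suave", "Store brand"]
meaningful = [160, 130, 85, 65]     # indexed scores (100 = category average)
different  = [155, 140, 55, 25]
salient    = [180, 150, 80, 40]
power      = [0.20, 0.12, 0.05, 0.02]

plt.scatter(meaningful, different, s=[3 * s for s in salient], c=power, cmap="Blues")
for name, x, y in zip(brands, meaningful, different):
    plt.annotate(name, (x, y))
plt.xlabel("Meaningful")
plt.ylabel("Different")
plt.colorbar(label="Power")
plt.show()
```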

Power can be trended over time, rolling 24 or 52 weekly recommended
Chart shows same brand rolling 12 weekly, rolling 24 weekly & rolling 52 weekly

All evidence from our Brand Pulse and custom tracking studies to date suggests that Power typically only increases or decreases by 1 to 2 percentage points per year.
Assuming 100 interviews per week, a minimum of rolling 12 weekly data is recommended (not rolling 4 weekly or rolling 8 weekly).
Once sufficient data is available, it is recommended that you move to rolling 24 weekly or rolling 52 weekly trends for greater stability and to manage client expectations of long term shifts in Power.

[Chart: the same brand's Power trended over ~100 weeks as rolling 12 weekly (R12W), rolling 24 weekly (R24W) and rolling 52 weekly (R52W) series]
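
A minimal pandas sketch of the smoothing is shown below. In production, rolling figures pool the respondents interviewed across the window; taking a rolling mean of weekly brand-level scores, as here, is a simplification to illustrate the idea.

```python
# Minimal sketch of rolling Power trends on illustrative weekly scores.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
weekly_power = pd.Series(0.25 + rng.normal(0, 0.02, 104))   # ~2 years of illustrative weekly scores

trends = pd.DataFrame({
    "R12W": weekly_power.rolling(12).mean(),   # minimum recommended at ~100 interviews per week
    "R24W": weekly_power.rolling(24).mean(),   # preferred once enough data is available
    "R52W": weekly_power.rolling(52).mean(),   # most stable for long-term reads
})

print(trends.tail())
```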

If Power is moving by more than 1 or 2 percentage points per year, check your
data before reporting this as a finding

1. Is your data correct?


ꟷThe following measures in your output spreadsheet should match your standard data tables: total awareness,
consideration (top box and mean), bought last, top of mind, total unaided awareness and absolute image endorsements
ꟷAsk DP to check that the correct settings and PRJ file have been used
ꟷSense check vs. trends for other measures, including Meaningful, Different, Salient, awareness, bought last,
consideration and image
2. Is your base size robust?
ꟷFor Brand Pulse or custom tracking studies, we recommend 50 to 100 interviews per week, generating a base size of at
least 2,400 over the course of the year (assuming 48 weeks of interviewing)
3. Are the changes statistically significant?
4. Have there been any methodology changes?
ꟷThink about the sample design, sample source, competitive set and questionnaire design

Power can be compared to share achieved to quantify the role of equity vs. the
role of in-market activation

[Diagram: illustrative decomposition of Power vs. market share]
ꟷPOWER (20%): the % of the market predisposed to buying the brand
ꟷMARKET SHARE (17%): the % of the market buying the brand
ꟷUNREALISED (5%): unrealised share, lost at the point of purchase
ꟷSECURE (15%): secure share, supported by brand equity
ꟷUNSUPPORTED (2%): share won at the point of purchase, vulnerable to tactical POS activity
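
The three components relate to Power and market share by simple arithmetic, as the worked example below shows using the illustrative figures above.

```python
# Worked example using the illustrative figures above. The secure share itself comes
# from the share flow analysis; unrealised and unsupported then follow by subtraction.
power, market_share, secure = 0.20, 0.17, 0.15

unrealised  = power - secure          # 0.05: predisposition lost at the point of purchase
unsupported = market_share - secure   # 0.02: share won at the point of purchase, not supported by equity

print(round(unrealised, 2), round(unsupported, 2))
```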

Share flow analysis, Section 10 of the BrandDynamics output spreadsheet


Putting Power in context
How to significance test Power
Significance test Power as a mean, taking the standard error into account

Power is a composite measure. It is not appropriate to significance test as a percentage because it is fundamentally different in construction to percentage data and
much more stable over time compared to percentage data from individual survey questions. Instead, significance test Power as a mean, taking into account the standard
error.

Key principles:
ꟷ Use the final Power scores (shown as a decimal) and standard errors from Section 7 of the BrandDynamics outputs spreadsheet as the basis for your significance
testing
ꟷ Use the table below to determine the most appropriate test

As with any significance testing, ensure that you are comparing like with like. You should not significance test across different time periods if there has been a
methodology change (including sample design change, or either brands or questions added, deleted or amended). Be very careful if you have reset your Power model.
Look at the metrics using old and new models for the time period where you made the switch to understand what influence the change of model has had on the metrics
for each brand and whether or not significance testing is appropriate

Comparison required → Recommended approach
ꟷBrand vs. category average → results are included in Section 7 (in green)
ꟷTwo discrete time periods (for example, Q1 vs. Q2) → use a significance testing tool for means, two independent samples
ꟷTwo independent sub-samples (for example, Region A vs. Region B) → use a significance testing tool for means, two independent samples
ꟷTwo different brands in the same study (for example, Brand X vs. Brand Y) → contact DP or Data Science for an overlapping samples test
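
For the two-independent-samples cases above, a minimal sketch of a z-test on the means, using the Power scores (as decimals) and standard errors from Section 7, is shown below; the figures are illustrative.

```python
# Minimal sketch of a two-independent-samples test on Power means, using the final
# Power scores (as decimals) and standard errors from Section 7 of the output
# spreadsheet. Figures are illustrative; do not use this for the overlapping
# samples case (two brands in the same study).
from math import sqrt
from scipy.stats import norm

power_a, se_a = 0.212, 0.006    # e.g. Q1, or Region A
power_b, se_b = 0.198, 0.006    # e.g. Q2, or Region B

z = (power_a - power_b) / sqrt(se_a ** 2 + se_b ** 2)
p_value = 2 * (1 - norm.cdf(abs(z)))

print(round(z, 2), round(p_value, 3))   # significant at the 95% level if p_value < 0.05
```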

Comparing Power scores across categories or countries

Power (% share) is influenced by the number of brands in your survey


ꟷ For example, the average Power % share will be 25% if you have 4 brands in your survey, and 20% if you have 5 brands in your survey

The Power Index or the Cross Market Comparison score should be used when comparing Power across categories or countries
ꟷ They both provide a level playing field for comparing performance across markets, where that is both useful and meaningful

The Power Index takes into account the number of brands in the survey
ꟷ It is a simple and transparent metric: the Power score for each brand divided by the average Power of the category (the fair share), expressed as an index where 100 = fair share. A score greater than 100 means the brand captures more than its fair share, i.e. it is more Powerful than the average (see the sketch below)

The Power CMC takes into account the number of brands in the survey, and the extent to which large brands dominate
ꟷ For more information on the Cross Market Comparison score – how it’s calculated and how to use it – see here
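
A minimal sketch of the Power Index calculation, with illustrative figures:

```python
# Minimal sketch of the Power Index: the brand's Power % share divided by the
# category fair share (100% / number of brands), indexed to 100. Figures are illustrative.
n_brands = 8
power_share = 0.18                  # brand's Power as a share of total sample

fair_share = 1.0 / n_brands         # 0.125, i.e. 12.5%
power_index = 100 * power_share / fair_share

print(round(power_index))           # 144: the brand captures ~1.4x its fair share
```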

Combining Power
How to create a parent brand perspective from different variants

As an indication of how a client’s parent brand is performing, you can simply add the individual Power % shares for each variant
together
Respondents get separate opportunities to endorse each variant in the survey, which fairly reflects the fact that those variants are encountered alongside each other in-market as 'competing' brands, each with its own chance to be bought
This should only be done when all of the brands in your brand list are sensible separate brand entities that reflect how these
brands are generally encountered by consumers in-market

How to create a combined Power score across multiple categories or countries

As an indication of how a client’s parent brand is performing across categories or countries, you can manually calculate an
overall Power score by weighting the category Power scores based on total category (not just brand) value/revenue.
This can provide a good comparison with other brands who are also operating across all of the same categories or countries
and can be tracked over time.
Power is always a % share of the brands covered in the category, which becomes even more important to remember once you have created a parent brand score. For your analysis to be as accurate as possible, you need to ensure your brand list covers the majority of the market (80%+) in each category or country. It would not be a fair comparison if your brand list covered 90% of the market in France and only 50% of the market in Germany.

                                       Category A   Category B   Category C   Total
Total category value (revenue)         $75m         $50m         $25m         $150m
Category value as % of total revenue   50%          33%          17%          n/a
Brand A Power                          10%          20%          30%          17%
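
The worked calculation behind the 17% total in the table:

```python
# Worked example matching the table above: category Power scores weighted by total
# category value (revenue), not just the brand's own revenue.
category_value = {"A": 75, "B": 50, "C": 25}        # $m
brand_power    = {"A": 0.10, "B": 0.20, "C": 0.30}  # Brand A's Power in each category

total_value = sum(category_value.values())          # 150
combined_power = sum(brand_power[c] * category_value[c] / total_value for c in category_value)

print(round(combined_power, 2))   # 0.17, i.e. the 17% shown in the Total column
```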

Target setting
See Spotlight on Simulator here

