Cluster Analysis

**Chapter Outline**

1) Overview
2) Basic Concept
3) Statistics Associated with Cluster Analysis
4) Conducting Cluster Analysis
   i. Formulating the Problem
   ii. Selecting a Distance or Similarity Measure
   iii. Selecting a Clustering Procedure
   iv. Deciding on the Number of Clusters
   v. Interpreting and Profiling the Clusters
   vi. Assessing Reliability and Validity
5) Applications of Nonhierarchical Clustering
6) Clustering Variables
7) Summary


**Cluster Analysis**

Cluster analysis is a class of techniques used to classify objects or cases into relatively homogeneous groups called clusters. Objects in each cluster tend to be similar to each other and dissimilar to objects in the other clusters. Cluster analysis is also called classification analysis, or numerical taxonomy. Both cluster analysis and discriminant analysis are concerned with classification. However, discriminant analysis requires prior knowledge of the cluster or group membership for each object or case included, to develop the classification rule. In contrast, in cluster analysis there is no a priori information about the group or cluster membership for any of the objects. Groups or clusters are suggested by the data, not defined a priori.


**An Ideal Clustering Situation**

Fig. 20.1: Cases plotted on Variable 1 versus Variable 2 form clearly separated clusters.

**A Practical Clustering Situation**

Fig. 20.2: Cases (plotted as X) on Variable 1 versus Variable 2; the cluster boundaries overlap, so membership is ambiguous for some cases.

**Statistics Associated with Cluster Analysis**

**Agglomeration schedule.** An agglomeration schedule gives information on the objects or cases being combined at each stage of a hierarchical clustering process.

**Cluster centroid.** The cluster centroid is the set of mean values of the variables for all the cases or objects in a particular cluster.

**Cluster centers.** The cluster centers are the initial starting points in nonhierarchical clustering. Clusters are built around these centers, or seeds.

**Cluster membership.** Cluster membership indicates the cluster to which each object or case belongs.


**Statistics Associated with Cluster Analysis**

**Dendrogram.** A dendrogram, or tree graph, is a graphical device for displaying clustering results. Vertical lines represent clusters that are joined together. The position of the line on the scale indicates the distances at which clusters were joined. The dendrogram is read from left to right. Figure 20.8 is a dendrogram.

**Distances between cluster centers.** These distances indicate how separated the individual pairs of clusters are. Clusters that are widely separated are distinct, and therefore desirable.


**Statistics Associated with Cluster Analysis**

**Icicle diagram.** An icicle diagram is a graphical display of clustering results, so called because it resembles a row of icicles hanging from the eaves of a house. The columns correspond to the objects being clustered, and the rows correspond to the number of clusters. An icicle diagram is read from bottom to top. Figure 20.7 is an icicle diagram.

**Similarity/distance coefficient matrix.** A similarity/distance coefficient matrix is a lower-triangle matrix containing pairwise distances between objects or cases.


**Conducting Cluster Analysis**

Fig. 20.3:
1. Formulate the Problem
2. Select a Distance Measure
3. Select a Clustering Procedure
4. Decide on the Number of Clusters
5. Interpret and Profile Clusters
6. Assess the Validity of Clustering

**Attitudinal Data for Clustering**

Table 20.1

| Case No. | V1 | V2 | V3 | V4 | V5 | V6 |
|----------|----|----|----|----|----|----|
| 1  | 6 | 4 | 7 | 3 | 2 | 3 |
| 2  | 2 | 3 | 1 | 4 | 5 | 4 |
| 3  | 7 | 2 | 6 | 4 | 1 | 3 |
| 4  | 4 | 6 | 4 | 5 | 3 | 6 |
| 5  | 1 | 3 | 2 | 2 | 6 | 4 |
| 6  | 6 | 4 | 6 | 3 | 3 | 4 |
| 7  | 5 | 3 | 6 | 3 | 3 | 4 |
| 8  | 7 | 3 | 7 | 4 | 1 | 4 |
| 9  | 2 | 4 | 3 | 3 | 6 | 3 |
| 10 | 3 | 5 | 3 | 6 | 4 | 6 |
| 11 | 1 | 3 | 2 | 3 | 5 | 3 |
| 12 | 5 | 4 | 5 | 4 | 2 | 4 |
| 13 | 2 | 2 | 1 | 5 | 4 | 4 |
| 14 | 4 | 6 | 4 | 6 | 4 | 7 |
| 15 | 6 | 5 | 4 | 2 | 1 | 4 |
| 16 | 3 | 5 | 4 | 6 | 4 | 7 |
| 17 | 4 | 4 | 7 | 2 | 2 | 5 |
| 18 | 3 | 7 | 2 | 6 | 4 | 3 |
| 19 | 4 | 6 | 3 | 7 | 2 | 7 |
| 20 | 2 | 3 | 2 | 4 | 7 | 2 |

(V1 = Fun, V2 = Bad for Budget, V3 = Eating Out, V4 = Best Buys, V5 = Don't Care, V6 = Compare Prices; see Table 20.5 and the SPSS steps below.)

**Conducting Cluster Analysis: Formulate the Problem**

Perhaps the most important part of formulating the clustering problem is selecting the variables on which the clustering is based. Inclusion of even one or two irrelevant variables may distort an otherwise useful clustering solution. Basically, the set of variables selected should describe the similarity between objects in terms that are relevant to the marketing research problem. The variables should be selected based on past research, theory, or a consideration of the hypotheses being tested. In exploratory research, the researcher should exercise judgment and intuition.


**Conducting Cluster Analysis: Select a Distance or Similarity Measure**

The most commonly used measure of similarity is the Euclidean distance or its square. The Euclidean distance is the square root of the sum of the squared differences in values for each variable. Other distance measures are also available. The city-block or Manhattan distance between two objects is the sum of the absolute differences in values for each variable. The Chebychev distance between two objects is the maximum absolute difference in values for any variable.

If the variables are measured in vastly different units, the clustering solution will be influenced by the units of measurement. In these cases, before clustering respondents, we must standardize the data by rescaling each variable to have a mean of zero and a standard deviation of unity. It is also desirable to eliminate outliers (cases with atypical values).

Use of different distance measures may lead to different clustering results. Hence, it is advisable to use different measures and compare the results.
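To make the three measures concrete, here is a minimal NumPy sketch (illustrative, not part of the original slides) that computes them for cases 1 and 2 of Table 20.1:

```python
import numpy as np

# Cases 1 and 2 from Table 20.1 (variables V1-V6)
x = np.array([6, 4, 7, 3, 2, 3], dtype=float)
y = np.array([2, 3, 1, 4, 5, 4], dtype=float)

euclidean = np.sqrt(np.sum((x - y) ** 2))  # square root of summed squared differences
city_block = np.sum(np.abs(x - y))         # Manhattan: sum of absolute differences
chebychev = np.max(np.abs(x - y))          # largest absolute difference on any variable
print(euclidean, city_block, chebychev)    # 8.0 16.0 6.0

# If the variables were on very different scales, standardize each column
# of the full data matrix X first: z = (X - X.mean(0)) / X.std(0, ddof=1)
```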


**A Classification of Clustering Procedures**

Fig. 20.4:

- Clustering Procedures
  - Hierarchical
    - Agglomerative
      - Linkage Methods: Single Linkage, Complete Linkage, Average Linkage
      - Variance Methods: Ward's Method
      - Centroid Methods
    - Divisive
  - Nonhierarchical
    - Sequential Threshold
    - Parallel Threshold
    - Optimizing Partitioning
  - Other: Two-Step

**Conducting Cluster Analysis: Select a Clustering Procedure – Hierarchical**

Hierarchical clustering is characterized by the development of a hierarchy or tree-like structure. Hierarchical methods can be agglomerative or divisive.

Agglomerative clustering starts with each object in a separate cluster. Clusters are formed by grouping objects into bigger and bigger clusters. This process is continued until all objects are members of a single cluster.

Divisive clustering starts with all the objects grouped in a single cluster. Clusters are divided or split until each object is in a separate cluster.

Agglomerative methods are commonly used in marketing research. They consist of linkage methods, error sums of squares or variance methods, and centroid methods.
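SciPy's hierarchical routines implement exactly this bottom-up (agglomerative) process; a minimal sketch, using the first three cases of Table 20.1:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

# Rows are cases, columns are the clustering variables (cases 1-3, Table 20.1)
X = np.array([[6, 4, 7, 3, 2, 3],
              [2, 3, 1, 4, 5, 4],
              [7, 2, 6, 4, 1, 3]], dtype=float)

# Each case starts as its own cluster; linkage() merges the two closest
# clusters at every stage until a single cluster remains.
Z = linkage(X, method="average", metric="euclidean")
print(Z)  # one row per merge: ids joined, merge distance, new cluster size
# scipy.cluster.hierarchy.dendrogram(Z) would draw the tree (cf. Fig. 20.8)
```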


**Conducting Cluster Analysis: Select a Clustering Procedure – Linkage Methods**

The single linkage method is based on minimum distance, or the nearest-neighbor rule. At every stage, the distance between two clusters is the distance between their two closest points (see Figure 20.5).

The complete linkage method is similar to single linkage, except that it is based on the maximum distance, or the furthest-neighbor approach. In complete linkage, the distance between two clusters is calculated as the distance between their two furthest points.

The average linkage method works similarly. However, in this method, the distance between two clusters is defined as the average of the distances between all pairs of objects, where one member of the pair is from each of the clusters (Figure 20.5).
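A short sketch (with hypothetical ratings data) that runs all three linkage rules on the same distance matrix and cuts each tree at three clusters:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(42)
X = rng.integers(1, 8, size=(20, 6)).astype(float)  # hypothetical 20 x 6 data

d = pdist(X)  # condensed matrix of pairwise Euclidean distances
for method in ("single", "complete", "average"):     # nearest / furthest / mean rule
    Z = linkage(d, method=method)
    labels = fcluster(Z, t=3, criterion="maxclust")  # cut the tree at 3 clusters
    print(method, labels)
```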


**Linkage Methods of Clustering**

Fig. 20.5:

- Single Linkage: clusters joined by the minimum distance between Cluster 1 and Cluster 2
- Complete Linkage: clusters joined by the maximum distance between Cluster 1 and Cluster 2
- Average Linkage: clusters joined by the average distance between Cluster 1 and Cluster 2

**Conducting Cluster Analysis: Select a Clustering Procedure – Variance Methods**

The variance methods attempt to generate clusters so as to minimize the within-cluster variance. A commonly used variance method is Ward's procedure. For each cluster, the means for all the variables are computed. Then, for each object, the squared Euclidean distance to the cluster means is calculated (Figure 20.6). These distances are summed for all the objects. At each stage, the two clusters combined are those that produce the smallest increase in the overall sum of squared within-cluster distances.

In the centroid methods, the distance between two clusters is the distance between their centroids (means for all the variables), as shown in Figure 20.6. Every time objects are grouped, a new centroid is computed.

Of the hierarchical methods, average linkage and Ward's method have been shown to perform better than the other procedures.
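In SciPy the same two methods look like this (a sketch with hypothetical data; both require raw observations so that Euclidean geometry applies):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 6))  # hypothetical cases x variables

# Ward's procedure: each merge is the one that minimizes the increase in
# the overall within-cluster sum of squared distances to the cluster means.
Z_ward = linkage(X, method="ward")

# Centroid method: inter-cluster distance is the distance between cluster
# centroids, which are recomputed after every merge.
Z_centroid = linkage(X, method="centroid")
```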


**Other Agglomerative Clustering Methods**

Fig. 20.6: Ward's Procedure and the Centroid Method.

**Conducting Cluster Analysis: Select a Clustering Procedure – Nonhierarchical**

The nonhierarchical clustering methods are frequently referred to as k-means clustering. These methods include sequential threshold, parallel threshold, and optimizing partitioning.

In the sequential threshold method, a cluster center is selected and all objects within a prespecified threshold value from the center are grouped together. Then a new cluster center or seed is selected, and the process is repeated for the unclustered points. Once an object is clustered with a seed, it is no longer considered for clustering with subsequent seeds.

The parallel threshold method operates similarly, except that several cluster centers are selected simultaneously and objects within the threshold level are grouped with the nearest center.

The optimizing partitioning method differs from the two threshold procedures in that objects can later be reassigned to clusters to optimize an overall criterion, such as average within-cluster distance for a given number of clusters.
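The optimizing-partitioning idea is what k-means implementations such as scikit-learn's carry out; a minimal sketch with hypothetical data:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.integers(1, 8, size=(20, 6)).astype(float)  # hypothetical ratings

# Objects are assigned to the nearest of k centers, the centers are then
# recomputed, and assignments are revised until the criterion stabilizes;
# unlike the threshold methods, objects can be reassigned along the way.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.labels_)           # cluster membership for each case
print(km.cluster_centers_)  # final cluster centers
```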


**Conducting Cluster Analysis: Select a Clustering Procedure**

It has been suggested that the hierarchical and nonhierarchical methods be used in tandem. First, an initial clustering solution is obtained using a hierarchical procedure, such as average linkage or Ward's. The number of clusters and cluster centroids so obtained are used as inputs to the optimizing partitioning method. Choice of a clustering method and choice of a distance measure are interrelated. For example, squared Euclidean distances should be used with the Ward's and centroid methods. Several nonhierarchical procedures also use squared Euclidean distances.
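A hedged sketch of this tandem approach, assuming hypothetical data: take the centroids of a three-cluster Ward's solution and pass them to k-means as starting seeds:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 6))  # hypothetical cases x variables

# Step 1: hierarchical (Ward's) solution with k = 3
labels = fcluster(linkage(X, method="ward"), t=3, criterion="maxclust")
seeds = np.vstack([X[labels == k].mean(axis=0) for k in (1, 2, 3)])

# Step 2: nonhierarchical pass initialized at the hierarchical centroids
km = KMeans(n_clusters=3, init=seeds, n_init=1).fit(X)
print(km.labels_)
```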


**Results of Hierarchical Clustering**

Table 20.2

Agglomeration Schedule Using Ward's Procedure

| Stage | Cluster 1 | Cluster 2 | Coefficient | Stage Cluster 1 First Appears | Stage Cluster 2 First Appears | Next Stage |
|-------|-----------|-----------|-------------|-------------------------------|-------------------------------|------------|
| 1  | 14 | 16 | 1.000000   | 0  | 0  | 6  |
| 2  | 6  | 7  | 2.000000   | 0  | 0  | 7  |
| 3  | 2  | 13 | 3.500000   | 0  | 0  | 15 |
| 4  | 5  | 11 | 5.000000   | 0  | 0  | 11 |
| 5  | 3  | 8  | 6.500000   | 0  | 0  | 16 |
| 6  | 10 | 14 | 8.160000   | 0  | 1  | 9  |
| 7  | 6  | 12 | 10.166667  | 2  | 0  | 10 |
| 8  | 9  | 20 | 13.000000  | 0  | 0  | 11 |
| 9  | 4  | 10 | 15.583000  | 0  | 6  | 12 |
| 10 | 1  | 6  | 18.500000  | 0  | 7  | 13 |
| 11 | 5  | 9  | 23.000000  | 4  | 8  | 15 |
| 12 | 4  | 19 | 27.750000  | 9  | 0  | 17 |
| 13 | 1  | 17 | 33.100000  | 10 | 0  | 14 |
| 14 | 1  | 15 | 41.333000  | 13 | 0  | 16 |
| 15 | 2  | 5  | 51.833000  | 3  | 11 | 18 |
| 16 | 1  | 3  | 64.500000  | 14 | 5  | 19 |
| 17 | 4  | 18 | 79.667000  | 12 | 0  | 18 |
| 18 | 2  | 4  | 172.662000 | 15 | 17 | 19 |
| 19 | 1  | 2  | 328.600000 | 16 | 18 | 0  |

**Results of Hierarchical Clustering**

Table 20.2, cont.

**Cluster Membership of Cases Using Ward's Procedure**

| Label Case | 4 Clusters | 3 Clusters | 2 Clusters |
|------------|------------|------------|------------|
| 1  | 1 | 1 | 1 |
| 2  | 2 | 2 | 2 |
| 3  | 1 | 1 | 1 |
| 4  | 3 | 3 | 2 |
| 5  | 2 | 2 | 2 |
| 6  | 1 | 1 | 1 |
| 7  | 1 | 1 | 1 |
| 8  | 1 | 1 | 1 |
| 9  | 2 | 2 | 2 |
| 10 | 3 | 3 | 2 |
| 11 | 2 | 2 | 2 |
| 12 | 1 | 1 | 1 |
| 13 | 2 | 2 | 2 |
| 14 | 3 | 3 | 2 |
| 15 | 1 | 1 | 1 |
| 16 | 3 | 3 | 2 |
| 17 | 1 | 1 | 1 |
| 18 | 4 | 3 | 2 |
| 19 | 3 | 3 | 2 |
| 20 | 2 | 2 | 2 |

**Vertical Icicle Plot Using Ward's Method**

Fig. 20.7

**Dendrogram Using Ward's Method**

Fig. 20.8

**Conducting Cluster Analysis: Decide on the Number of Clusters**

Theoretical, conceptual, or practical considerations may suggest a certain number of clusters.

In hierarchical clustering, the distances at which clusters are combined can be used as criteria. This information can be obtained from the agglomeration schedule or from the dendrogram.

In nonhierarchical clustering, the ratio of total within-group variance to between-group variance can be plotted against the number of clusters. The point at which an elbow or a sharp bend occurs indicates an appropriate number of clusters.

The relative sizes of the clusters should be meaningful.
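A common variant of this elbow heuristic plots the total within-cluster sum of squares against the number of clusters; a sketch with hypothetical data:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 6))  # hypothetical cases x variables

ks = range(1, 9)
wss = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
       for k in ks]  # inertia_ = within-cluster sum of squares

plt.plot(list(ks), wss, marker="o")  # look for the elbow / sharp bend
plt.xlabel("Number of clusters")
plt.ylabel("Within-cluster sum of squares")
plt.show()
```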


**Conducting Cluster Analysis: Interpreting and Profiling the Clusters**

Interpreting and profiling clusters involves examining the cluster centroids. The centroids enable us to describe each cluster by assigning it a name or label.

It is often helpful to profile the clusters in terms of variables that were not used for clustering. These may include demographic, psychographic, product usage, media usage, or other variables.
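Both steps reduce to grouped means; a minimal pandas sketch in which "cluster" holds the memberships and "income" stands in for a hypothetical profiling variable that was not used in the clustering:

```python
import pandas as pd

df = pd.DataFrame({
    "V1": [6, 2, 7, 4], "V2": [4, 3, 2, 6], "V3": [7, 1, 6, 4],
    "V4": [3, 4, 4, 5], "V5": [2, 5, 1, 3], "V6": [3, 4, 3, 6],
    "cluster": [1, 2, 1, 3],    # membership from any clustering run
    "income": [55, 40, 60, 48]  # hypothetical demographic, not clustered on
})

# Centroids: mean of each clustering variable per cluster (cf. Table 20.3)
print(df.groupby("cluster")[["V1", "V2", "V3", "V4", "V5", "V6"]].mean())

# Profile the clusters on a variable that was not used for clustering
print(df.groupby("cluster")["income"].mean())
```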


**Cluster Centroids**

Table 20.3

Means of Variables

| Cluster No. | V1 | V2 | V3 | V4 | V5 | V6 |
|-------------|----|----|----|----|----|----|
| 1 | 5.750 | 3.625 | 6.000 | 3.125 | 1.750 | 3.875 |
| 2 | 1.667 | 3.000 | 1.833 | 3.500 | 5.500 | 3.333 |
| 3 | 3.500 | 5.833 | 3.333 | 6.000 | 3.500 | 6.000 |

**Conducting Cluster Analysis: Assess Reliability and Validity**

1. Perform cluster analysis on the same data using different distance measures. Compare the results across measures to determine the stability of the solutions.
2. Use different methods of clustering and compare the results (see the sketch after this list).
3. Split the data randomly into halves. Perform clustering separately on each half. Compare cluster centroids across the two subsamples.
4. Delete variables randomly. Perform clustering based on the reduced set of variables. Compare the results with those obtained by clustering based on the entire set of variables.
5. In nonhierarchical clustering, the solution may depend on the order of cases in the data set. Make multiple runs using different orders of cases until the solution stabilizes.
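For example, check 2 can be made quantitative by comparing the memberships from two methods with an agreement index; a sketch under hypothetical data (the adjusted Rand index is one common choice, not one the slides prescribe):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 6))  # hypothetical cases x variables

# Cluster the same data with two different methods and compare memberships
a = fcluster(linkage(X, method="ward"), t=3, criterion="maxclust")
b = fcluster(linkage(X, method="average"), t=3, criterion="maxclust")
print(adjusted_rand_score(a, b))  # 1.0 = identical partitions, ~0 = chance
```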


**Results of Nonhierarchical Clustering**

Table 20.4

**Initial Cluster Centers**

|    | Cluster 1 | Cluster 2 | Cluster 3 |
|----|-----------|-----------|-----------|
| V1 | 4 | 2 | 7 |
| V2 | 6 | 3 | 2 |
| V3 | 3 | 2 | 6 |
| V4 | 7 | 4 | 4 |
| V5 | 2 | 7 | 1 |
| V6 | 7 | 2 | 3 |

Iteration History(a)

| Iteration | Change in Center 1 | Change in Center 2 | Change in Center 3 |
|-----------|--------------------|--------------------|--------------------|
| 1 | 2.154 | 2.102 | 2.550 |
| 2 | 0.000 | 0.000 | 0.000 |

a. Convergence achieved due to no or small distance change. The maximum distance by which any center has changed is 0.000. The current iteration is 2. The minimum distance between initial centers is 7.746.

**Results of Nonhierarchical Clustering**

Table 20.4, cont.

Cluster Membership

| Case Number | Cluster | Distance |
|-------------|---------|----------|
| 1  | 3 | 1.414 |
| 2  | 2 | 1.323 |
| 3  | 3 | 2.550 |
| 4  | 1 | 1.404 |
| 5  | 2 | 1.848 |
| 6  | 3 | 1.225 |
| 7  | 3 | 1.500 |
| 8  | 3 | 2.121 |
| 9  | 2 | 1.756 |
| 10 | 1 | 1.143 |
| 11 | 2 | 1.041 |
| 12 | 3 | 1.581 |
| 13 | 2 | 2.598 |
| 14 | 1 | 1.404 |
| 15 | 3 | 2.828 |
| 16 | 1 | 1.624 |
| 17 | 3 | 2.598 |
| 18 | 1 | 3.555 |
| 19 | 1 | 2.154 |
| 20 | 2 | 2.102 |

**Results of Nonhierarchical Clustering**

Table 20.4, cont.

**Final Cluster Centers**

|    | Cluster 1 | Cluster 2 | Cluster 3 |
|----|-----------|-----------|-----------|
| V1 | 4 | 2 | 6 |
| V2 | 6 | 3 | 4 |
| V3 | 3 | 2 | 6 |
| V4 | 6 | 4 | 3 |
| V5 | 4 | 6 | 2 |
| V6 | 6 | 3 | 4 |

**Distances between Final Cluster Centers**

| Cluster | 1     | 2     | 3     |
|---------|-------|-------|-------|
| 1 |       | 5.568 | 5.698 |
| 2 | 5.568 |       | 6.928 |
| 3 | 5.698 | 6.928 |       |

**Results of Nonhierarchical Clustering**

Table 20.4, cont.

ANOVA

|    | Cluster Mean Square | df | Error Mean Square | df | F | Sig. |
|----|---------------------|----|-------------------|----|---|------|
| V1 | 29.108 | 2 | 0.608 | 17 | 47.888 | 0.000 |
| V2 | 13.546 | 2 | 0.630 | 17 | 21.505 | 0.000 |
| V3 | 31.392 | 2 | 0.833 | 17 | 37.670 | 0.000 |
| V4 | 15.713 | 2 | 0.728 | 17 | 21.585 | 0.000 |
| V5 | 22.537 | 2 | 0.816 | 17 | 27.614 | 0.000 |
| V6 | 12.171 | 2 | 1.071 | 17 | 11.363 | 0.001 |

The F tests should be used only for descriptive purposes because the clusters have been chosen to maximize the differences among cases in different clusters. The observed significance levels are not corrected for this and thus cannot be interpreted as tests of the hypothesis that the cluster means are equal.

**Number of Cases in each Cluster**

| Cluster | N      |
|---------|--------|
| 1       | 6.000  |
| 2       | 6.000  |
| 3       | 8.000  |
| Valid   | 20.000 |
| Missing | 0.000  |

**Results of Two-Step Clustering**

Table 20.5

Auto-Clustering

| Number of Clusters | Akaike's Information Criterion (AIC) | AIC Change(a) | Ratio of AIC Changes(b) | Ratio of Distance Measures(c) |
|--------------------|--------------------------------------|---------------|-------------------------|-------------------------------|
| 1  | 104.140 |        |        |       |
| 2  | 101.171 | -2.969 | 1.000  | .847  |
| 3  | 97.594  | -3.577 | 1.205  | 1.583 |
| 4  | 116.896 | 19.302 | -6.502 | 2.115 |
| 5  | 138.230 | 21.335 | -7.187 | 1.222 |
| 6  | 158.586 | 20.355 | -6.857 | 1.021 |
| 7  | 179.340 | 20.755 | -6.991 | 1.224 |
| 8  | 201.628 | 22.288 | -7.508 | 1.006 |
| 9  | 224.055 | 22.426 | -7.555 | 1.111 |
| 10 | 246.522 | 22.467 | -7.568 | 1.588 |
| 11 | 269.570 | 23.048 | -7.764 | 1.001 |
| 12 | 292.718 | 23.148 | -7.798 | 1.055 |
| 13 | 316.120 | 23.402 | -7.883 | 1.002 |
| 14 | 339.223 | 23.103 | -7.782 | 1.044 |
| 15 | 362.650 | 23.427 | -7.892 | 1.004 |

a. The changes are from the previous number of clusters in the table.
b. The ratios of changes are relative to the change for the two-cluster solution.
c. The ratios of distance measures are based on the current number of clusters against the previous number of clusters.

**Cluster Distribution**

Table 20.5, cont.

| Cluster  | N  | % of Combined | % of Total |
|----------|----|---------------|------------|
| 1        | 6  | 30.0%         | 30.0%      |
| 2        | 6  | 30.0%         | 30.0%      |
| 3        | 8  | 40.0%         | 40.0%      |
| Combined | 20 | 100.0%        | 100.0%     |
| Total    | 20 |               | 100.0%     |

**Cluster Profiles**

Table 20.5, cont.

Mean (Std. Deviation) by cluster:

| Cluster  | Fun | Bad for Budget | Eating Out | Best Buys | Don't Care | Compare Prices |
|----------|-----|----------------|------------|-----------|------------|----------------|
| 1        | 1.67 (.516)  | 3.00 (.632)  | 1.83 (.753)  | 3.50 (1.049) | 5.50 (1.049) | 3.33 (.816)  |
| 2        | 3.50 (.548)  | 5.83 (.753)  | 3.33 (.816)  | 6.00 (.632)  | 3.50 (.837)  | 6.00 (1.549) |
| 3        | 5.75 (1.035) | 3.63 (.916)  | 6.00 (1.069) | 3.13 (.835)  | 1.88 (.835)  | 3.88 (.641)  |
| Combined | 3.85 (1.899) | 4.10 (1.410) | 3.95 (2.012) | 4.10 (1.518) | 3.45 (1.761) | 4.35 (1.496) |

**Clustering Variables**

In this instance, the units used for analysis are the variables, and the distance measures are computed for all pairs of variables. Hierarchical clustering of variables can aid in the identification of unique variables, or variables that make a unique contribution to the data. Clustering can also be used to reduce the number of variables. Associated with each cluster is a linear combination of the variables in the cluster, called the cluster component. A large set of variables can often be replaced by the set of cluster components with little loss of information. However, a given number of cluster components does not generally explain as much variance as the same number of principal components.
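One hedged way to set this up (an illustrative sketch, not the slides' procedure) is to use one minus the absolute correlation as the distance between variables, then cluster hierarchically:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 6))  # hypothetical data; columns are the variables

# Distance between variables: 1 - |correlation|, so highly correlated
# variables are close together and fall into the same cluster.
corr = np.corrcoef(X, rowvar=False)
dist = 1.0 - np.abs(corr)
np.fill_diagonal(dist, 0.0)  # guard against floating-point residue

Z = linkage(squareform(dist, checks=False), method="average")
print(fcluster(Z, t=3, criterion="maxclust"))  # cluster id per variable
```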


**SPSS Windows**

To select these procedures in SPSS for Windows, click:

Analyze > Classify > Hierarchical Cluster …
Analyze > Classify > K-Means Cluster …
Analyze > Classify > Two-Step Cluster …


**SPSS Windows: Hierarchical Clustering**

1. Select ANALYZE from the SPSS menu bar.
2. Click CLASSIFY and then HIERARCHICAL CLUSTER.
3. Move “Fun [v1],” “Bad for Budget [v2],” “Eating Out [v3],” “Best Buys [v4],” “Don’t Care [v5],” and “Compare Prices [v6]” into the VARIABLES box.
4. In the CLUSTER box, check CASES (default option). In the DISPLAY box, check STATISTICS and PLOTS (default options).
5. Click on STATISTICS. In the pop-up window, check AGGLOMERATION SCHEDULE. In the CLUSTER MEMBERSHIP box, check RANGE OF SOLUTIONS. Then, for MINIMUM NUMBER OF CLUSTERS enter 2 and for MAXIMUM NUMBER OF CLUSTERS enter 4. Click CONTINUE.
6. Click on PLOTS. In the pop-up window, check DENDROGRAM. In the ICICLE box, check ALL CLUSTERS (default). In the ORIENTATION box, check VERTICAL. Click CONTINUE.
7. Click on METHOD. For CLUSTER METHOD select WARD’S METHOD. In the MEASURE box, check INTERVAL and select SQUARED EUCLIDEAN DISTANCE. Click CONTINUE.
8. Click OK.

**SPSS Windows: K-Means Clustering**

1. Select ANALYZE from the SPSS menu bar.
2. Click CLASSIFY and then K-MEANS CLUSTER.
3. Move “Fun [v1],” “Bad for Budget [v2],” “Eating Out [v3],” “Best Buys [v4],” “Don’t Care [v5],” and “Compare Prices [v6]” into the VARIABLES box.
4. For NUMBER OF CLUSTERS, select 3.
5. Click on OPTIONS. In the pop-up window, in the STATISTICS box, check INITIAL CLUSTER CENTERS and CLUSTER INFORMATION FOR EACH CASE. Click CONTINUE.
6. Click OK.

**SPSS Windows: Two-Step Clustering**

1. Select ANALYZE from the SPSS menu bar.
2. Click CLASSIFY and then TWO-STEP CLUSTER.
3. Move “Fun [v1],” “Bad for Budget [v2],” “Eating Out [v3],” “Best Buys [v4],” “Don’t Care [v5],” and “Compare Prices [v6]” into the CONTINUOUS VARIABLES box.
4. For DISTANCE MEASURE, select EUCLIDEAN.
5. For NUMBER OF CLUSTERS, select DETERMINE AUTOMATICALLY.
6. For CLUSTERING CRITERION, select AKAIKE’S INFORMATION CRITERION (AIC).
7. Click OK.
