
DATA MINING AND BUSINESS INTELLIGENCE - LECTURE 02
Dr. Mahmoud Mounir
mahmoud.mounir@cis.asu.edu.eg
Data Mining:
Concepts and Techniques
(3rd ed.)

— Chapter 6 —

Jiawei Han, Micheline Kamber, and Jian Pei


University of Illinois at Urbana-Champaign &
Simon Fraser University
©2011 Han, Kamber & Pei. All rights reserved.
2
Chapter 6: Mining Frequent Patterns, Association
and Correlations: Basic Concepts and Methods

◼ Basic Concepts

◼ Frequent Itemset Mining Methods

◼ Which Patterns Are Interesting?—Pattern

Evaluation Methods

◼ Summary

3
Association Rule Mining
◼ Given a set of transactions, find rules that will predict the
occurrence of an item based on the occurrences of other
items in the transaction

Market-Basket transactions:

TID  Items
1    Bread, Milk
2    Bread, Diaper, Juice, Eggs
3    Milk, Diaper, Juice, Coke
4    Bread, Milk, Diaper, Juice
5    Bread, Milk, Diaper, Coke

Example of Association Rules:
{Diaper} → {Juice}
{Milk, Bread} → {Eggs, Coke}
{Juice, Bread} → {Milk}

Implication means co-occurrence, not causality!

4
Definition: Frequent Itemset

◼ Itemset
  ◼ A collection of one or more items
  ◼ Example: {Milk, Bread, Diaper}
◼ k-itemset
  ◼ An itemset that contains k items
◼ Support count (σ)
  ◼ Frequency of occurrence of an itemset
  ◼ E.g. σ({Milk, Bread, Diaper}) = 2
◼ Support (relative support)
  ◼ Fraction of transactions that contain an itemset
  ◼ E.g. s({Milk, Bread, Diaper}) = 2/5
◼ Frequent Itemset
  ◼ An itemset whose support is greater than or equal to a minsup threshold

TID  Items
1    Bread, Milk
2    Bread, Diaper, Juice, Eggs
3    Milk, Diaper, Juice, Coke
4    Bread, Milk, Diaper, Juice
5    Bread, Milk, Diaper, Coke
Definition: Association Rule
Association Rule
– An implication expression of the form X → Y, where X and Y are itemsets
– Example: {Milk, Diaper} → {Juice}

Rule Evaluation Metrics
– Support (s): fraction of transactions that contain both X and Y
– Confidence (c): measures how often items in Y appear in transactions that contain X

TID  Items
1    Bread, Milk
2    Bread, Diaper, Juice, Eggs
3    Milk, Diaper, Juice, Coke
4    Bread, Milk, Diaper, Juice
5    Bread, Milk, Diaper, Coke

Example: {Milk, Diaper} → {Juice}

s = σ(Milk, Diaper, Juice) / |T| = 2/5 = 0.4
c = σ(Milk, Diaper, Juice) / σ(Milk, Diaper) = 2/3 ≈ 0.67
6
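To make the two metrics concrete, here is a minimal Python sketch (my own illustration, not from the slides; the helper names support_count and rule_metrics are invented) that reproduces the numbers above:

```python
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Juice", "Eggs"},
    {"Milk", "Diaper", "Juice", "Coke"},
    {"Bread", "Milk", "Diaper", "Juice"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def support_count(itemset, transactions):
    """sigma(itemset): number of transactions that contain the itemset."""
    return sum(1 for t in transactions if itemset <= t)

def rule_metrics(X, Y, transactions):
    """Return (support, confidence) of the rule X -> Y."""
    s = support_count(X | Y, transactions) / len(transactions)
    c = support_count(X | Y, transactions) / support_count(X, transactions)
    return s, c

s, c = rule_metrics({"Milk", "Diaper"}, {"Juice"}, transactions)
print(f"s = {s:.2f}, c = {c:.2f}")  # s = 0.40, c = 0.67
```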
Association Rule Mining Task
◼ Given a set of transactions T, the goal of association rule
mining is to find all rules having
◼ support ≥ minsup threshold

◼ confidence ≥ minconf threshold

◼ Brute-force approach:
  ◼ List all possible association rules
  ◼ Compute the support and confidence for each rule
  ◼ Prune rules that fail the minsup and minconf thresholds
  ⇒ Computationally prohibitive!

7
What Is Frequent Pattern Analysis?
◼ Frequent pattern: a pattern (a set of items, subsequences, substructures,
etc.) that occurs frequently in a data set
◼ First proposed by Agrawal, Imielinski, and Swami [AIS93] in the context
of frequent itemsets and association rule mining
◼ Motivation: Finding inherent regularities in data
◼ What products were often purchased together?— Juice and diapers?!
◼ What are the subsequent purchases after buying a PC?
◼ What kinds of DNA are sensitive to this new drug?
◼ Can we automatically classify web documents?
◼ Applications
◼ Basket data analysis, cross-marketing, catalog design, sale campaign
analysis, Web log (click stream) analysis, and DNA sequence analysis.
8
Mining Association Rules
TID  Items
1    Bread, Milk
2    Bread, Diaper, Juice, Eggs
3    Milk, Diaper, Juice, Coke
4    Bread, Milk, Diaper, Juice
5    Bread, Milk, Diaper, Coke

Example of Rules:
{Milk, Diaper} → {Juice}  (s=0.4, c=0.67)
{Milk, Juice} → {Diaper}  (s=0.4, c=1.0)
{Diaper, Juice} → {Milk}  (s=0.4, c=0.67)
{Juice} → {Milk, Diaper}  (s=0.4, c=0.67)
{Diaper} → {Milk, Juice}  (s=0.4, c=0.5)
{Milk} → {Diaper, Juice}  (s=0.4, c=0.5)

Observations:
• All the above rules are binary partitions of the same itemset: {Milk, Diaper, Juice}
• Rules originating from the same itemset have identical support but can have different confidence
• Thus, we may decouple the support and confidence requirements

9
Mining Association Rules

◼ Two-step approach:
1. Frequent Itemset Generation

– Generate all itemsets whose support ≥ minsup

2. Rule Generation
– Generate high confidence rules from each frequent
itemset, where each rule is a binary partitioning of
a frequent itemset

◼ Frequent itemset generation is still computationally expensive
10
Frequent Itemset Generation
[Figure: the itemset lattice over items A–E, from the null set at the top down to ABCDE at the bottom.]

Given d items, there are 2^d possible candidate itemsets.
11
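As a quick sanity check on that count (my own sketch, not part of the slides), the candidate space can be enumerated directly with itertools:

```python
from itertools import combinations

items = ["A", "B", "C", "D", "E"]

# Every non-empty itemset over d = 5 items
candidates = [frozenset(c)
              for k in range(1, len(items) + 1)
              for c in combinations(items, k)]

print(len(candidates))  # 31 = 2^5 - 1; including the null set gives 2^d
```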
Frequent Itemset Generation Strategies

◼ Reduce the number of candidates (M)


◼ Complete search: M = 2^d
◼ Use pruning techniques to reduce M

◼ Reduce the number of transactions (N)


◼ Reduce size of N as the size of itemset increases
◼ Used by DHP and vertical-based mining algorithms

◼ Reduce the number of comparisons (NM)


◼ Use efficient data structures to store the candidates or
transactions
◼ No need to match every candidate against every transaction

12
Reducing Number of Candidates
◼ Apriori principle:
  ◼ If an itemset is frequent, then all of its subsets must also be frequent
◼ The Apriori principle holds due to the following property of the support measure:

  ∀X, Y: (X ⊆ Y) ⟹ s(X) ≥ s(Y)

◼ Support of an itemset never exceeds the support of its subsets
◼ This is known as the anti-monotone property of support
13
Illustrating Apriori Principle
[Figure: the same itemset lattice over A–E. The itemset AB is found to be infrequent, so all of its supersets (ABC, ABD, ABE, ABCD, ABCE, ABDE, ABCDE) are pruned from the search space.]
14
Why Is Freq. Pattern Mining Important?

◼ Freq. pattern: An intrinsic and important property of datasets
◼ Foundation for many essential data mining tasks
◼ Association, correlation, and causality analysis

◼ Sequential, structural (e.g., sub-graph) patterns

◼ Pattern analysis in spatiotemporal, multimedia, time-

series, and stream data


◼ Classification: discriminative, frequent pattern analysis

◼ Cluster analysis: frequent pattern-based clustering

◼ Data warehousing: iceberg cube and cube-gradient

◼ Semantic data compression: fascicles

◼ Broad applications

15
Basic Concepts: Frequent Patterns

Tid  Items bought
10   Juice, Nuts, Diaper
20   Juice, Coffee, Diaper
30   Juice, Diaper, Eggs
40   Nuts, Eggs, Milk
50   Nuts, Coffee, Diaper, Eggs, Milk

◼ itemset: A set of one or more items
◼ k-itemset X = {x1, …, xk}
◼ (absolute) support, or support count, of X: frequency (number of occurrences) of an itemset X
◼ (relative) support, s: the fraction of transactions that contain X (i.e., the probability that a transaction contains X)
◼ An itemset X is frequent if X's support is no less than a minsup threshold

[Figure: Venn diagram of customers who buy juice, customers who buy diapers, and customers who buy both.]
16
Basic Concepts: Association Rules
Tid  Items bought
10   Juice, Nuts, Diaper
20   Juice, Coffee, Diaper
30   Juice, Diaper, Eggs
40   Nuts, Eggs, Milk
50   Nuts, Coffee, Diaper, Eggs, Milk

◼ Find all the rules X → Y with minimum support and confidence
  ◼ support, s: probability that a transaction contains X ∪ Y
  ◼ confidence, c: conditional probability that a transaction having X also contains Y

Let minsup = 50%, minconf = 50%
Freq. Pat.: {Juice}:3, {Nuts}:3, {Diaper}:4, {Eggs}:3, {Juice, Diaper}:3
◼ Association rules (many more exist):
  ◼ Juice → Diaper (60%, 100%)
  ◼ Diaper → Juice (60%, 75%)
17
Closed Patterns and Max-Patterns
◼ An itemset X is closed if X is frequent and no super-itemset Y ⊃ X has the same support as X; X is a max-pattern if X is frequent and no super-itemset Y ⊃ X is frequent
◼ Exercise. DB = {<a1, …, a100>, <a1, …, a50>}
  ◼ Min_sup = 1
◼ What is the set of closed itemsets?
  ◼ <a1, …, a100>: 1
  ◼ <a1, …, a50>: 2
◼ What is the set of max-patterns?
  ◼ <a1, …, a100>: 1
◼ What is the set of all patterns?
  ◼ All non-empty subsets of {a1, …, a100} — 2^100 − 1 of them!
18
Computational Complexity of Frequent Itemset
Mining
◼ How many itemsets could potentially be generated in the worst case?
  ◼ The number of frequent itemsets to be generated is sensitive to the minsup threshold
  ◼ When minsup is low, there exist potentially an exponential number of frequent itemsets
  ◼ The worst case: M^N, where M is the number of distinct items and N is the max transaction length
◼ The worst-case complexity vs. the expected probability
  ◼ Ex. Suppose Walmart has 10^4 kinds of products
  ◼ The chance of picking up one particular product: 10^-4
  ◼ The chance of picking up a particular set of 10 products: ~10^-40
  ◼ What is the chance that this particular set of 10 products is frequent 10^3 times in 10^9 transactions?
19
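A back-of-the-envelope version of this estimate (my own reading of the slide's numbers, assuming products appear in transactions independently):

```latex
\Pr[\text{a transaction contains the fixed 10-itemset}] \approx (10^{-4})^{10} = 10^{-40},
\qquad
\mathbb{E}[\text{occurrences in } 10^{9} \text{ transactions}] = 10^{9} \cdot 10^{-40} = 10^{-31}.
```

So the chance of seeing this set 10^3 times is essentially zero: the exponential worst case is nowhere near the expected behavior of real data.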
Chapter 6: Mining Frequent Patterns, Association
and Correlations: Basic Concepts and Methods

◼ Basic Concepts

◼ Frequent Itemset Mining Methods

◼ Which Patterns Are Interesting?—Pattern

Evaluation Methods

◼ Summary

20
Scalable Frequent Itemset Mining Methods

◼ Apriori: A Candidate Generation-and-Test

Approach

◼ Improving the Efficiency of Apriori

◼ FPGrowth: A Frequent Pattern-Growth Approach

◼ ECLAT: Frequent Pattern Mining with Vertical

Data Format
21
The Downward Closure Property and Scalable
Mining Methods
◼ The downward closure property of frequent patterns
◼ Any subset of a frequent itemset must be frequent

◼ If {juice, diaper, nuts} is frequent, so is {juice,

diaper}
◼ i.e., every transaction having {juice, diaper, nuts} also

contains {juice, diaper}


◼ Scalable mining methods: Three major approaches
◼ Apriori (Agrawal & Srikant@VLDB’94)

◼ Freq. pattern growth (FPgrowth—Han, Pei & Yin

@SIGMOD’00)
◼ Vertical data format approach (Charm—Zaki & Hsiao

@SDM’02)
22
Apriori: A Candidate Generation & Test Approach

◼ Apriori pruning principle: If there is any itemset which is


infrequent, its superset should not be generated/tested!
(Agrawal & Srikant @VLDB’94, Mannila, et al. @ KDD’ 94)
◼ Method:
◼ Initially, scan DB once to get frequent 1-itemset
◼ Generate length (k+1) candidate itemsets from length k
frequent itemsets
◼ Test the candidates against DB
◼ Terminate when no frequent or candidate set can be
generated

23
The Apriori Algorithm—An Example
min_sup = 2

Database TDB:
Tid  Items
10   A, C, D
20   B, C, E
30   A, B, C, E
40   B, E

1st scan → C1:
{A}: 2, {B}: 3, {C}: 3, {D}: 1, {E}: 3

L1 (prune {D}, sup < 2):
{A}: 2, {B}: 3, {C}: 3, {E}: 3

C2 (generated from L1):
{A, B}, {A, C}, {A, E}, {B, C}, {B, E}, {C, E}

2nd scan → C2 with counts:
{A, B}: 1, {A, C}: 2, {A, E}: 1, {B, C}: 2, {B, E}: 3, {C, E}: 2

L2:
{A, C}: 2, {B, C}: 2, {B, E}: 3, {C, E}: 2

C3 (generated from L2):
{B, C, E}

3rd scan → L3:
{B, C, E}: 2
24
The Apriori Algorithm (Pseudo-Code)
Ck: Candidate itemset of size k
Lk : frequent itemset of size k

L1 = {frequent items};
for (k = 1; Lk ≠ ∅; k++) do begin
    Ck+1 = candidates generated from Lk;
    for each transaction t in database do
        increment the count of all candidates in Ck+1 that are contained in t;
    Lk+1 = candidates in Ck+1 with min_support;
end
return ∪k Lk;
25
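The pseudo-code translates almost line for line into Python. The following is a sketch of my own (the function name apriori and all variable names are mine, not the book's code); run on the TDB example above, it reproduces L1, L2, and L3:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Plain Apriori: return {itemset: support count} for all frequent itemsets."""
    transactions = [frozenset(t) for t in transactions]

    # L1: count single items and keep the frequent ones
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    Lk = {s: c for s, c in counts.items() if c >= min_support}
    frequent = dict(Lk)

    k = 1
    while Lk:
        # Generate C(k+1): self-join Lk, then prune by the Apriori principle
        keys = list(Lk)
        candidates = set()
        for i in range(len(keys)):
            for j in range(i + 1, len(keys)):
                union = keys[i] | keys[j]
                if len(union) == k + 1 and all(
                        frozenset(sub) in Lk for sub in combinations(union, k)):
                    candidates.add(union)
        # Scan the database once to count the surviving candidates
        counts = {c: 0 for c in candidates}
        for t in transactions:
            for c in candidates:
                if c <= t:
                    counts[c] += 1
        Lk = {s: c for s, c in counts.items() if c >= min_support}
        frequent.update(Lk)
        k += 1
    return frequent

db = [{"A", "C", "D"}, {"B", "C", "E"}, {"A", "B", "C", "E"}, {"B", "E"}]
for itemset, count in sorted(apriori(db, 2).items(),
                             key=lambda x: (len(x[0]), sorted(x[0]))):
    print(sorted(itemset), count)
```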
Implementation of Apriori

◼ How to generate candidates?


◼ Step 1: self-joining Lk
◼ Step 2: pruning
◼ Example of Candidate-generation
◼ L3={abc, abd, acd, ace, bcd}
◼ Self-joining: L3*L3
◼ abcd from abc and abd
◼ acde from acd and ace
◼ Pruning:
◼ acde is removed because ade is not in L3
◼ C4 = {abcd}
26
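Isolating just this generate-and-prune step (again a sketch of mine, using the usual join on a shared sorted (k−1)-prefix), the example works out as stated: abcd survives, while acde is pruned because ade is not in L3:

```python
from itertools import combinations

def gen_candidates(Lk):
    """Self-join Lk on the first k-1 items, then prune by the Apriori principle."""
    Lk = sorted(tuple(sorted(s)) for s in Lk)
    Lset = set(Lk)
    k = len(Lk[0])
    Ck1 = []
    for i in range(len(Lk)):
        for j in range(i + 1, len(Lk)):
            if Lk[i][:k - 1] == Lk[j][:k - 1]:  # join step
                cand = tuple(sorted(set(Lk[i]) | set(Lk[j])))
                # prune step: every k-subset of the candidate must be frequent
                if all(sub in Lset for sub in combinations(cand, k)):
                    Ck1.append(cand)
    return Ck1

L3 = [tuple("abc"), tuple("abd"), tuple("acd"), tuple("ace"), tuple("bcd")]
print(gen_candidates(L3))  # [('a', 'b', 'c', 'd')] -- acde pruned (ade not in L3)
```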
Scalable Frequent Itemset Mining Methods

◼ Apriori: A Candidate Generation-and-Test Approach

◼ Improving the Efficiency of Apriori

◼ FPGrowth: A Frequent Pattern-Growth Approach

◼ ECLAT: Frequent Pattern Mining with Vertical Data Format

◼ Mining Closed Frequent Patterns and Max-patterns

27
The Apriori Algorithm—An Example

Min Support = 2

Confidence = 70%

28
The Apriori Algorithm—An Example

29
The Apriori Algorithm—An Example

❑ Generating Association Rules

30
Rule Generation

◼ Given a frequent itemset L, find all non-empty subsets f ⊂ L such that f → L − f satisfies the minimum confidence requirement
◼ If {A,B,C,D} is a frequent itemset, the candidate rules are:
  ABC → D, ABD → C, ACD → B, BCD → A,
  A → BCD, B → ACD, C → ABD, D → ABC,
  AB → CD, AC → BD, AD → BC, BC → AD,
  BD → AC, CD → AB
◼ If |L| = k, then there are 2^k − 2 candidate association rules (ignoring L → ∅ and ∅ → L)
31
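A minimal sketch of this enumeration (mine, not the book's code; it assumes a precomputed table `support` mapping frozensets to counts, e.g., from an Apriori run):

```python
from itertools import combinations

def gen_rules(L, support, minconf):
    """Enumerate every non-empty proper subset f of L; keep f -> L-f if confident."""
    L = frozenset(L)
    rules = []
    for r in range(1, len(L)):                 # 2^k - 2 candidates in total
        for f in combinations(L, r):
            f = frozenset(f)
            conf = support[L] / support[f]     # c(f -> L-f) = s(L) / s(f)
            if conf >= minconf:
                rules.append((set(f), set(L - f), conf))
    return rules

# Support counts of {Milk, Diaper, Juice} and its subsets in the 5-transaction example
support = {
    frozenset({"Milk"}): 4, frozenset({"Diaper"}): 4, frozenset({"Juice"}): 3,
    frozenset({"Milk", "Diaper"}): 3, frozenset({"Milk", "Juice"}): 2,
    frozenset({"Diaper", "Juice"}): 3,
    frozenset({"Milk", "Diaper", "Juice"}): 2,
}
for lhs, rhs, conf in gen_rules({"Milk", "Diaper", "Juice"}, support, 0.6):
    print(lhs, "->", rhs, f"(c={conf:.2f})")   # 4 rules pass at minconf = 0.6
```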
Rule Generation
◼ In general, confidence does not have an anti-monotone property
  ◼ c(ABC → D) can be larger or smaller than c(AB → D)
◼ But the confidence of rules generated from the same itemset has an anti-monotone property
  ◼ E.g., suppose {A,B,C,D} is a frequent 4-itemset:
    c(ABC → D) ≥ c(AB → CD) ≥ c(A → BCD)
  ◼ Confidence is anti-monotone w.r.t. the number of items on the RHS of the rule

32
Rule Generation for Apriori Algorithm
[Figure: lattice of rules generated from the frequent itemset {A,B,C,D}, from ABCD⇒{} at the top down to A⇒BCD, B⇒ACD, C⇒ABD, D⇒ABC at the bottom. If a rule such as BCD⇒A has low confidence, then all rules below it with A in the consequent (CD⇒AB, BD⇒AC, BC⇒AD, D⇒ABC, C⇒ABD, B⇒ACD) are pruned.]
33
Factors Affecting Complexity of Apriori
◼ Choice of minimum support threshold

◼ Dimensionality (number of items) of the data set

◼ Size of database

◼ Average transaction width


34
Factors Affecting Complexity of Apriori
◼ Choice of minimum support threshold
  ◼ lowering the support threshold results in more frequent itemsets
  ◼ this may increase the number of candidates and the max length of frequent itemsets
◼ Dimensionality (number of items) of the data set
◼ Size of database
◼ Average transaction width

TID  Items
1    Bread, Milk
2    Juice, Bread, Diaper, Eggs
3    Juice, Coke, Diaper, Milk
4    Juice, Bread, Diaper, Milk
5    Bread, Coke, Diaper, Milk

35
Impact of Support Based Pruning
TID  Items
1    Bread, Milk
2    Juice, Bread, Diaper, Eggs
3    Juice, Coke, Diaper, Milk
4    Juice, Bread, Diaper, Milk
5    Bread, Coke, Diaper, Milk

Items (1-itemsets):
Item    Count
Bread   4
Coke    2
Milk    4
Juice   3
Diaper  4
Eggs    1

Minimum Support = 3:
If every subset is considered: C(6,1) + C(6,2) + C(6,3) = 6 + 15 + 20 = 41
With support-based pruning: 6 + 6 + 4 = 16

Minimum Support = 2:
If every subset is considered: C(6,1) + C(6,2) + C(6,3) + C(6,4) = 6 + 15 + 20 + 15 = 56

36
Factors Affecting Complexity of Apriori

◼ Choice of minimum support threshold
  ◼ lowering the support threshold results in more frequent itemsets
  ◼ this may increase the number of candidates and the max length of frequent itemsets
◼ Dimensionality (number of items) of the data set
  ◼ more space is needed to store the support counts of itemsets
  ◼ if the number of frequent itemsets also increases, both computation and I/O costs may also increase
◼ Size of database
◼ Average transaction width

TID  Items
1    Bread, Milk
2    Juice, Bread, Diaper, Eggs
3    Juice, Coke, Diaper, Milk
4    Juice, Bread, Diaper, Milk
5    Bread, Coke, Diaper, Milk

37
Factors Affecting Complexity of Apriori
◼ Choice of minimum support threshold
  ◼ lowering the support threshold results in more frequent itemsets
  ◼ this may increase the number of candidates and the max length of frequent itemsets
◼ Dimensionality (number of items) of the data set
  ◼ more space is needed to store the support counts of itemsets
  ◼ if the number of frequent itemsets also increases, both computation and I/O costs may also increase
◼ Size of database
  ◼ the run time of the algorithm increases with the number of transactions
◼ Average transaction width

TID  Items
1    Bread, Milk
2    Juice, Bread, Diaper, Eggs
3    Juice, Coke, Diaper, Milk
4    Juice, Bread, Diaper, Milk
5    Bread, Coke, Diaper, Milk
38
Factors Affecting Complexity of Apriori
◼ Choice of minimum support threshold
  ◼ lowering the support threshold results in more frequent itemsets
  ◼ this may increase the number of candidates and the max length of frequent itemsets
◼ Dimensionality (number of items) of the data set
  ◼ more space is needed to store the support counts of itemsets
  ◼ if the number of frequent itemsets also increases, both computation and I/O costs may also increase
◼ Size of database
  ◼ the run time of the algorithm increases with the number of transactions
◼ Average transaction width
  ◼ transaction width increases the max length of frequent itemsets
  ◼ the number of subsets in a transaction increases with its width, increasing the computation time for support counting

39
Factors Affecting Complexity of Apriori

40
Compact Representation of Frequent Itemsets

◼ Some frequent itemsets are redundant because their supersets are also frequent

Consider the following data set and assume that the support threshold is 5:

TID A1 A2 A3 A4 A5 A6 A7 A8 A9 A10 B1 B2 B3 B4 B5 B6 B7 B8 B9 B10 C1 C2 C3 C4 C5 C6 C7 C8 C9 C10
1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
4 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
5 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
6 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
7 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
8 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
9 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
10 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1

10 
= 3   
10

Number of frequent itemsets


k
k =1

◼ Need a compact representation


41
Maximal Frequent Itemset
An itemset is maximal frequent if it is frequent and none of its immediate supersets is frequent.

[Figure: the itemset lattice over A–E with a border separating frequent from infrequent itemsets. The maximal frequent itemsets are the frequent itemsets lying immediately inside the border; everything outside the border is infrequent.]
42
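Given the full set of frequent itemsets, the maximal ones are simply those with no frequent proper superset. A one-line filter, sketched here under the assumption that `frequent` maps each frozenset to its support (e.g., the output of an Apriori-style miner):

```python
def maximal_frequent(frequent):
    """Keep only frequent itemsets that have no frequent proper superset."""
    sets = list(frequent)
    return [s for s in sets if not any(s < t for t in sets)]  # s < t: proper subset

# Tiny check: if the frequent itemsets are {A}, {B}, {A,B}, {C},
# only {A,B} and {C} are maximal.
freq = {frozenset(x): 1 for x in ({"A"}, {"B"}, {"A", "B"}, {"C"})}
print([sorted(s) for s in maximal_frequent(freq)])  # [['A', 'B'], ['C']]
```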
What are the Maximal Frequent Itemsets in this Data?

TID A1 A2 A3 A4 A5 A6 A7 A8 A9 A10 B1 B2 B3 B4 B5 B6 B7 B8 B9 B10 C1 C2 C3 C4 C5 C6 C7 C8 C9 C10


1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
4 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
5 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
6 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
7 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
8 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
9 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
10 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1

Minimum support threshold = 5

Maximal frequent itemsets: {A1, …, A10}, {B1, …, B10}, {C1, …, C10}

43
An illustrative example
[Figure: a binary matrix of 10 transactions (rows 1–10) over items A–J; dark cells mark the items present in each transaction.]

Support threshold (by count): 5
Frequent itemsets: ?
Maximal itemsets: ?
44
An illustrative example
[Figure: the same 10-transaction × A–J binary matrix.]

Support threshold (by count): 5
Frequent itemsets: {F}
Maximal itemsets: {F}

Support threshold (by count): 4
Frequent itemsets: ?
Maximal itemsets: ?

45
An illustrative example
[Figure: the same 10-transaction × A–J binary matrix.]

Support threshold (by count): 5
Frequent itemsets: {F}
Maximal itemsets: {F}

Support threshold (by count): 4
Frequent itemsets: {E}, {F}, {E,F}, {J}
Maximal itemsets: {E,F}, {J}

Support threshold (by count): 3
Frequent itemsets: ?
Maximal itemsets: ?

46
An illustrative example
[Figure: the same 10-transaction × A–J binary matrix.]

Support threshold (by count): 5
Frequent itemsets: {F}
Maximal itemsets: {F}

Support threshold (by count): 4
Frequent itemsets: {E}, {F}, {E,F}, {J}
Maximal itemsets: {E,F}, {J}

Support threshold (by count): 3
Frequent itemsets: all subsets of {C,D,E,F}, plus {J}
Maximal itemsets: {C,D,E,F}, {J}

47
Another illustrative example
[Figure: another 10-transaction × A–J binary matrix.]

Support threshold (by count): 5
Maximal itemsets: {A}, {B}, {C}

Support threshold (by count): 4
Maximal itemsets: {A,B}, {A,C}, {B,C}

Support threshold (by count): 3
Maximal itemsets: {A,B,C}

48
Closed Itemset
◼ An itemset X is closed if none of its immediate supersets has the same support as X.
◼ X is not closed if at least one of its immediate supersets has the same support count as X.

49
Closed Itemset
◼ An itemset X is closed if none of its immediate supersets has the same support as X.
◼ X is not closed if at least one of its immediate supersets has the same support count as X.

TID  Items
1    {A,B}
2    {B,C,D}
3    {A,B,C,D}
4    {A,B,D}
5    {A,B,C,D}

Itemset   Support        Itemset     Support
{A}       4              {A,B,C}     2
{B}       5              {A,B,D}     3
{C}       3              {A,C,D}     2
{D}       4              {B,C,D}     2
{A,B}     4              {A,B,C,D}   2
{A,C}     2
{A,D}     3
{B,C}     3
{B,D}     4
{C,D}     3

50
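Closedness is a similar filter once supports are known: by the anti-monotonicity of support, checking all proper supersets is equivalent to checking only the immediate ones. A sketch of mine, assuming `support` maps each itemset (as a frozenset) to its count:

```python
def closed_itemsets(support):
    """Keep itemsets whose every proper superset has strictly smaller support."""
    sets = list(support)
    return [x for x in sets
            if not any(x < y and support[y] == support[x] for y in sets)]

# A slice of the table above: {A}:4 is NOT closed because {A,B} also has support 4;
# {B}, {A,B}, {A,B,D}, and {A,B,C,D} are closed.
support = {frozenset("A"): 4, frozenset("B"): 5, frozenset("AB"): 4,
           frozenset("ABD"): 3, frozenset("ABCD"): 2}
print([("".join(sorted(x)), support[x]) for x in closed_itemsets(support)])
```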
Maximal vs Closed Itemsets
TID  Items
1    ABC
2    ABCD
3    BCE
4    ACDE
5    DE

[Figure: the itemset lattice over A–E, with each node annotated by the IDs of the transactions that support it (e.g., A: 1,2,4; AC: 1,2,4; ABC: 1,2). ABCDE is not supported by any transaction.]
51
Maximal Frequent vs Closed Frequent Itemsets

TID  Items
1    ABC
2    ABCD
3    BCE
4    ACDE
5    DE

Minimum support = 2

[Figure: the same transaction-annotated lattice, marking which frequent itemsets are closed but not maximal and which are closed and maximal; every maximal frequent itemset is also closed.]

# Closed frequent itemsets = 9
# Maximal frequent itemsets = 4
52
What are the Closed Itemsets in this Data?

TID A1 A2 A3 A4 A5 A6 A7 A8 A9 A10 B1 B2 B3 B4 B5 B6 B7 B8 B9 B10 C1 C2 C3 C4 C5 C6 C7 C8 C9 C10


1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
4 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
5 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
6 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
7 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
8 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
9 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
10 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1

Closed itemsets: {A1, …, A10} (support 5), {B1, …, B10} (support 5), {C1, …, C10} (support 5)

53
Example 1
[Figure: a 10-transaction × A–J binary matrix.]

Itemset   Support (count)   Closed?
{C}       3
{D}       2
{C,D}     2

54
Example 1
[Figure: the same 10-transaction × A–J binary matrix.]

Itemset   Support (count)   Closed?
{C}       3                 ✓
{D}       2
{C,D}     2                 ✓

55
Example 2
[Figure: a 10-transaction × A–J binary matrix.]

Itemset   Support (count)   Closed?
{C}       3
{D}       2
{E}       2
{C,D}     2
{C,E}     2
{D,E}     2
{C,D,E}   2

56
Example 2
[Figure: the same 10-transaction × A–J binary matrix.]

Itemset   Support (count)   Closed?
{C}       3                 ✓
{D}       2
{E}       2
{C,D}     2
{C,E}     2
{D,E}     2
{C,D,E}   2                 ✓

57
Example 3
[Figure: a 10-transaction × A–J binary matrix.]

Closed itemsets: {C,D,E,F}, {C,F}

58
Example 4
[Figure: a 10-transaction × A–J binary matrix.]

Closed itemsets: {C,D,E,F}, {C}, {F}

59
Maximal vs Closed Itemsets

60
Example question
◼ Given the following transaction data sets (dark cells indicate the presence of an item in a transaction) and a support threshold of 20%, answer the following questions:

[Figure: three transaction data sets shown as binary matrices: Data Set A, Data Set B, Data Set C.]

a. What is the number of frequent itemsets for each data set? Which data set will produce the largest number of frequent itemsets?
b. Which data set will produce the longest frequent itemset?
c. Which data set will produce frequent itemsets with the highest maximum support?
d. Which data set will produce frequent itemsets containing items with widely varying support levels (i.e., itemsets containing items with mixed support, ranging from 20% to more than 70%)?
e. What is the number of maximal frequent itemsets for each data set? Which data set will produce the largest number of maximal frequent itemsets?
f. What is the number of closed frequent itemsets for each data set? Which data set will produce the largest number of closed frequent itemsets?
61
Summary

◼ Basic concepts: association rules, support-confidence framework, closed and max-patterns
◼ Scalable frequent pattern mining methods
◼ Apriori (Candidate generation & test)

62
Ref: Basic Concepts of Frequent Pattern Mining

◼ (Association Rules) R. Agrawal, T. Imielinski, and A. Swami. Mining


association rules between sets of items in large databases. SIGMOD'93
◼ (Max-pattern) R. J. Bayardo. Efficiently mining long patterns from
databases. SIGMOD'98
◼ (Closed-pattern) N. Pasquier, Y. Bastide, R. Taouil, and L. Lakhal.
Discovering frequent closed itemsets for association rules. ICDT'99
◼ (Sequential pattern) R. Agrawal and R. Srikant. Mining sequential patterns.
ICDE'95

63
Ref: Apriori and Its Improvements
◼ R. Agrawal and R. Srikant. Fast algorithms for mining association rules.
VLDB'94
◼ H. Mannila, H. Toivonen, and A. I. Verkamo. Efficient algorithms for discovering
association rules. KDD'94
◼ A. Savasere, E. Omiecinski, and S. Navathe. An efficient algorithm for mining
association rules in large databases. VLDB'95
◼ J. S. Park, M. S. Chen, and P. S. Yu. An effective hash-based algorithm for
mining association rules. SIGMOD'95
◼ H. Toivonen. Sampling large databases for association rules. VLDB'96
◼ S. Brin, R. Motwani, J. D. Ullman, and S. Tsur. Dynamic itemset counting and
implication rules for market basket analysis. SIGMOD'97
◼ S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule mining
with relational database systems: Alternatives and implications. SIGMOD'98

64
Ref: Depth-First, Projection-Based FP Mining
◼ R. Agarwal, C. Aggarwal, and V. V. V. Prasad. A tree projection algorithm for generation
of frequent itemsets. J. Parallel and Distributed Computing, 2002.
◼ G. Grahne and J. Zhu, Efficiently Using Prefix-Trees in Mining Frequent Itemsets, Proc.
FIMI'03
◼ B. Goethals and M. Zaki. An introduction to workshop on frequent itemset mining
implementations. Proc. ICDM’03 Int. Workshop on Frequent Itemset Mining
Implementations (FIMI’03), Melbourne, FL, Nov. 2003
◼ J. Han, J. Pei, and Y. Yin. Mining frequent patterns without candidate generation.
SIGMOD'00
◼ J. Liu, Y. Pan, K. Wang, and J. Han. Mining Frequent Item Sets by Opportunistic
Projection. KDD'02
◼ J. Han, J. Wang, Y. Lu, and P. Tzvetkov. Mining Top-K Frequent Closed Patterns without
Minimum Support. ICDM'02
◼ J. Wang, J. Han, and J. Pei. CLOSET+: Searching for the Best Strategies for Mining
Frequent Closed Itemsets. KDD'03
65
Ref: Vertical Format and Row Enumeration Methods

◼ M. J. Zaki, S. Parthasarathy, M. Ogihara, and W. Li. Parallel algorithm for


discovery of association rules. DAMI:97.
◼ M. J. Zaki and C. J. Hsiao. CHARM: An Efficient Algorithm for Closed Itemset
Mining, SDM'02.
◼ C. Bucila, J. Gehrke, D. Kifer, and W. White. DualMiner: A Dual-Pruning
Algorithm for Itemsets with Constraints. KDD’02.
◼ F. Pan, G. Cong, A. K. H. Tung, J. Yang, and M. Zaki , CARPENTER: Finding
Closed Patterns in Long Biological Datasets. KDD'03.
◼ H. Liu, J. Han, D. Xin, and Z. Shao, Mining Interesting Patterns from Very High
Dimensional Data: A Top-Down Row Enumeration Approach, SDM'06.

66
Ref: Mining Correlations and Interesting Rules
◼ S. Brin, R. Motwani, and C. Silverstein. Beyond market basket: Generalizing
association rules to correlations. SIGMOD'97.
◼ M. Klemettinen, H. Mannila, P. Ronkainen, H. Toivonen, and A. I. Verkamo. Finding
interesting rules from large sets of discovered association rules. CIKM'94.
◼ R. J. Hilderman and H. J. Hamilton. Knowledge Discovery and Measures of Interest.
Kluwer Academic, 2001.
◼ C. Silverstein, S. Brin, R. Motwani, and J. Ullman. Scalable techniques for mining
causal structures. VLDB'98.
◼ P.-N. Tan, V. Kumar, and J. Srivastava. Selecting the Right Interestingness Measure
for Association Patterns. KDD'02.
◼ E. Omiecinski. Alternative Interest Measures for Mining Associations. TKDE’03.
◼ T. Wu, Y. Chen, and J. Han, “Re-Examination of Interestingness Measures in Pattern
Mining: A Unified Framework", Data Mining and Knowledge Discovery, 21(3):371-
397, 2010

67
