
UNIT – 3

Data Warehousing and OLAP Technology

Introduction:
 Data warehouses generalize and consolidate data in multidimensional space.
 The construction of data warehouses involves data cleaning, data integration, and
data transformation, and can be viewed as an important preprocessing step for data
mining.
 Moreover, data warehouses provide online analytical processing (OLAP) tools for the
interactive analysis of multidimensional data of varied granularities, which facilitates
effective data generalization and data mining.
 Many other data mining functions, such as association, classification, prediction, and
clustering, can be integrated with OLAP operations to enhance interactive mining of
knowledge at multiple levels of abstraction.

1. What is a data warehouse?

 Definition 1: “A data warehouse is a subject-oriented, integrated, time-variant, and
nonvolatile collection of data in support of management’s decision-making
process.” —W. H. Inmon
 Data warehousing: the process of constructing and using data warehouses
 Definition 2: A data warehouse is a repository of information collected from multiple
sources, stored under a unified schema, and usually residing at a single site.
Features of a Data warehouse:
1. Subject-Oriented
 Organized around major subjects, such as customer, product, sales
 Focusing on the modeling and analysis of data for decision makers, not on daily
operations or transaction processing
2. Integrated
 Constructed by integrating multiple, heterogeneous data sources
– relational databases, flat files, on-line transaction records
 Data cleaning and data integration techniques are applied.
– Ensure consistency in naming conventions, encoding structures, attribute
measures, etc. among different data sources
 E.g., Hotel price: currency, tax, breakfast covered, etc.
– When data is moved to the warehouse, it is converted.
3. Time Variant
 The time horizon for the data warehouse is significantly longer than that of
operational systems
– Operational database: current value data
– Data warehouse data: provide information from a historical perspective (e.g.,
past 5-10 years)
4. Nonvolatile
 A physically separate store of data transformed from the operational environment
 Operational update of data does not occur in the data warehouse environment
– Does not require transaction processing, recovery, and concurrency control
mechanisms
– Requires only two operations in data accessing:
 initial loading of data and access of data

Differences between Operational Database Systems and Data Warehouses


 OLTP (on-line transaction processing)
– Major task of traditional relational DBMS
– Day-to-day operations: purchasing, inventory, banking, manufacturing,
payroll, registration, accounting, etc.
 OLAP (on-line analytical processing)
– Major task of data warehouse system
– Data analysis and decision making
 Distinct features (OLTP vs. OLAP):
– User and system orientation
– Data contents
– Database design
– View
– Access patterns

                    OLTP                            OLAP
Orientation         transaction                     analysis
Users               clerk, DBA                      knowledge workers (e.g., manager,
                                                    executive, analyst)
Function            day-to-day operations           decision support
DB design           ER-based, application-oriented  star/snowflake, subject-oriented
Data                current, up-to-date, detailed,  historical, summarized,
                    flat relational                 multidimensional, integrated,
                                                    consolidated
View                detailed, flat relational       summarized, multidimensional
Access              read/write                      mostly read
Unit of work        short, simple transaction       complex query
# records accessed  tens                            millions
# users             thousands                       hundreds
DB size             100 MB to GB                    100 GB to TB
Operations          index/hash on primary key       lots of scans

Why Separate Data Warehouse?

 A major reason for this separation is to promote high performance in both
systems
– DBMS— tuned for OLTP: access methods (Read/write), indexing on Primary
Key, concurrency control, recovery mechanism.
– Warehouse—tuned for OLAP: complex OLAP queries, multidimensional view,
consolidation
 Different functions and different data:
– Missing data: Decision support requires historical data which operational
DBs do not typically maintain
– Data consolidation: DS requires consolidation (aggregation, summarization)
of data from heterogeneous sources
– Data quality: Different sources typically use inconsistent data
representations, codes and formats which have to be reconciled
2. A multi-dimensional data model

 Data warehouses and OLAP tools are based on a multidimensional data model.
 This model views data in the form of a data cube.
 What is a data cube? A data cube allows data to be modeled and viewed in multiple
dimensions (e.g., 2-D, 3-D, etc.).
 It is defined by dimensions and facts

From Tables and Spreadsheets to Data Cubes

 A data cube, such as sales, allows data to be modeled and viewed in multiple
dimensions
o Dimension tables, such as item (item_name, brand, type) or time (day, week,
month, quarter, year)
o Fact table contains measures (such as dollars_sold) and keys to each of the
related dimension tables

 In data warehousing literature, an n-D base cube is called a base cuboid. The topmost
0-D cuboid, which holds the highest level of summarization, is called the apex cuboid.
The lattice of cuboids forms a data cube.
A Lattice of Cuboids: each cuboid represents a different degree of summarization.
For the four dimensions time, item, location, and supplier:
 0-D (apex) cuboid: all
 1-D cuboids: (time), (item), (location), (supplier)
 2-D cuboids: (time, item), (time, location), (time, supplier), (item, location),
(item, supplier), (location, supplier)
 3-D cuboids: (time, item, location), (time, item, supplier), (time, location, supplier),
(item, location, supplier)
 4-D (base) cuboid: (time, item, location, supplier)
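The lattice above can be enumerated programmatically: a cuboid is simply a subset of the dimension set. A minimal Python sketch (the dimension names are taken from the example above):

```python
from itertools import combinations

def cuboid_lattice(dimensions):
    """Return all cuboids (dimension subsets) of the given dimensions,
    from the 0-D apex cuboid () up to the n-D base cuboid."""
    n = len(dimensions)
    return [combo for k in range(n + 1)
            for combo in combinations(dimensions, k)]

lattice = cuboid_lattice(["time", "item", "location", "supplier"])
# 2^4 = 16 cuboids: 1 apex, 4 one-dimensional, 6 two-dimensional,
# 4 three-dimensional, and 1 base cuboid
assert len(lattice) == 16
```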
Stars, Snowflakes, and Fact Constellations: Schemas for Multidimensional
Data Models
The most popular data model for a data warehouse is the multidimensional model, which can exist in the
form of a star schema, a snowflake schema, or a fact constellation schema.

Star schema: The most common modeling paradigm is the star schema, in which the data warehouse
contains (1) a large central table (fact table) containing the bulk of the data, with no redundancy, and (2)
a set of smaller attendant tables (dimension tables), one for each dimension. The schema graph
resembles a starburst, with the dimension tables displayed in a radial pattern around the central fact
table.

Snowflake schema: The snowflake schema is a variant of the star schema model, where some
dimension tables are normalized, thereby further splitting the data into additional tables. The resulting
schema graph forms a shape similar to a snowflake.
Fact constellation: Sophisticated applications may require multiple fact tables to share dimension tables.
This kind of schema can be viewed as a collection of stars, and hence is called a galaxy schema or a
fact constellation.

Cube Definition Syntax in DMQL


 Data warehouses and data marts can be defined using two language primitives, one for cube
definition and one for dimension definition
 Cube Definition (Fact Table)
o define cube <cube_name> [<dimension_list>]: <measure_list>
 Dimension Definition (Dimension Table)
o define dimension <dimension_name> as (<attribute_or_subdimension_list>)

Defining Star Schema in DMQL


define cube sales_star [time, item, branch, location]: dollars_sold = sum(sales_in_dollars),
avg_sales = avg(sales_in_dollars), units_sold = count(*)

define dimension time as (time_key, day, day_of_week, month, quarter, year)


define dimension item as (item_key, item_name, brand, type, supplier_type)
define dimension branch as (branch_key, branch_name, branch_type)
define dimension location as (location_key, street, city, province_or_state, country)

Defining Snowflake Schema in DMQL


define cube sales_snowflake [time, item, branch, location]: dollars_sold = sum(sales_in_dollars),
avg_sales = avg(sales_in_dollars), units_sold = count(*)

define dimension time as (time_key, day, day_of_week, month, quarter, year)


define dimension item as (item_key, item_name, brand, type, supplier(supplier_key,
supplier_type))
define dimension branch as (branch_key, branch_name, branch_type)
define dimension location as (location_key, street, city(city_key, province_or_state, country))

Defining Fact Constellation in DMQL


define cube sales [time, item, branch, location]: dollars_sold = sum(sales_in_dollars), avg_sales
= avg(sales_in_dollars), units_sold = count(*)
define dimension time as (time_key, day, day_of_week, month, quarter, year)
define dimension item as (item_key, item_name, brand, type, supplier_type)
define dimension branch as (branch_key, branch_name, branch_type)
define dimension location as (location_key, street, city, province_or_state, country)

define cube shipping [time, item, shipper, from_location, to_location]: dollar_cost =


sum(cost_in_dollars), unit_shipped = count(*)
define dimension time as time in cube sales
define dimension item as item in cube sales
define dimension shipper as (shipper_key, shipper_name, location as location in cube sales,
shipper_type)
define dimension from_location as location in cube sales
define dimension to_location as location in cube sales

Measures of Data Cube: Three Categories


 Distributive: if the result derived by applying the function to n aggregate values is the same as
that derived by applying the function on all the data without partitioning
o E.g., count(), sum(), min(), max()
 Algebraic: if it can be computed by an algebraic function with M arguments (where M is a
bounded integer), each of which is obtained by applying a distributive aggregate function
o E.g., avg(), standard_deviation()
 Holistic: if there is no constant bound on the storage size needed to describe a subaggregate.
o E.g., median(), mode(), rank()
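The three categories can be demonstrated on partitioned data: distributive and algebraic measures can be combined from per-partition results, while holistic measures in general cannot. A small Python sketch (the data values are illustrative):

```python
import statistics

data = [3, 1, 4, 1, 5, 9, 2, 6]
partitions = [data[:4], data[4:]]

# Distributive: sum over all the data equals the sum of per-partition sums.
assert sum(data) == sum(sum(p) for p in partitions)

# Algebraic: avg() is computed from two distributive values, sum and count.
total = sum(sum(p) for p in partitions)
count = sum(len(p) for p in partitions)
assert total / count == sum(data) / len(data)

# Holistic: the median of the whole cannot, in general, be derived
# from the per-partition medians alone.
medians = [statistics.median(p) for p in partitions]
assert statistics.median(data) != statistics.median(medians)
```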

A Concept Hierarchy
 A concept hierarchy defines a sequence of mappings from a set of low-level concepts to higher-
level, more general concepts.
 Consider a concept hierarchy for the dimension location. City values for location include
Vancouver, Toronto, New York, and Chicago.
 Each city, however, can be mapped to the province or state to which it belongs; for
example, Vancouver maps to British Columbia, and Chicago to Illinois. The provinces
and states can in turn be mapped to the country to which they belong, such as
Canada or the USA.
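A concept hierarchy is just a sequence of mappings, so the location example can be sketched with two dictionaries (the city-to-state assignments follow the example above; the rest is illustrative):

```python
# Toy concept hierarchy for location: city -> province_or_state -> country.
city_to_state = {
    "Vancouver": "British Columbia",
    "Toronto": "Ontario",
    "New York": "New York",
    "Chicago": "Illinois",
}
state_to_country = {
    "British Columbia": "Canada",
    "Ontario": "Canada",
    "New York": "USA",
    "Illinois": "USA",
}

def roll_up_city(city):
    """Map a city value up the hierarchy to its country."""
    return state_to_country[city_to_state[city]]

assert roll_up_city("Vancouver") == "Canada"
assert roll_up_city("Chicago") == "USA"
```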
Typical OLAP Operations

 In the multidimensional model, data are organized into multiple dimensions, and each dimension
contains multiple levels of abstraction defined by concept hierarchies.
 This organization provides users with the flexibility to view data from different perspectives.
 A number of OLAP data cube operations exist to materialize these different views, allowing
interactive querying and analysis of the data at hand.
 Hence, OLAP provides a user-friendly environment for interactive data analysis.

 Roll up (drill-up): summarize data


 By climbing up hierarchy or by dimension reduction
 Drill down (roll down): reverse of roll-up
 from higher level summary to lower level summary or detailed data, or introducing new
dimensions
 Slice and dice: project and select
 Slice operation performs a selection on one dimension of the given cube, resulting in a
subcube.
 Dice operation defines a subcube by performing a selection on two or more dimensions.
 Pivot (rotate):
 visualization operation that rotates the data axes in view in order to provide an
alternative presentation of the data.
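Roll-up, slice, and dice can be sketched over a small in-memory fact table; a minimal Python version (the fact rows and dimension order are illustrative):

```python
from collections import defaultdict

# Toy fact rows: (time, item, location, dollars_sold).
facts = [
    ("Q1", "TV",    "Vancouver", 100),
    ("Q1", "Phone", "Toronto",   200),
    ("Q2", "TV",    "Vancouver", 150),
    ("Q2", "Phone", "Chicago",   300),
]

def roll_up(facts, keep):
    """Summarize by dimension reduction: group on the kept dimension indices."""
    agg = defaultdict(int)
    for row in facts:
        agg[tuple(row[i] for i in keep)] += row[-1]
    return dict(agg)

def slice_(facts, dim, value):
    """Select on one dimension of the cube, yielding a subcube."""
    return [row for row in facts if row[dim] == value]

def dice(facts, conditions):
    """Select on two or more dimensions: {dim_index: allowed values}."""
    return [row for row in facts
            if all(row[d] in allowed for d, allowed in conditions.items())]

by_time = roll_up(facts, keep=[0])          # {('Q1',): 300, ('Q2',): 450}
q1_only = slice_(facts, dim=0, value="Q1")  # the two Q1 rows
tv_dice = dice(facts, {0: {"Q1", "Q2"}, 1: {"TV"}})  # the two TV rows
```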

A Starnet Query Model for Querying Multidimensional Databases

 The querying of multidimensional databases can be based on a starnet model,


 which consists of radial lines emanating from a central point, where each line represents a
concept hierarchy for a dimension.
 Each abstraction level in the hierarchy is called a footprint.
 These represent the granularities available for use by OLAP operations such as drill-down and
roll-up.
3. Data warehouse architecture
Design of Data Warehouse: A Business Analysis Framework
 To design an effective data warehouse we need to understand and analyze business
needs and construct a business analysis framework, for which the owner, architect,
and builder have different views.
 Four views regarding the design of a data warehouse:
o Top-down view
 allows selection of the relevant information necessary for the data
warehouse
 This information matches the current and future business needs.
o Data source view
 exposes the information being captured, stored, and managed by
operational systems.
 This information may be documented at various levels of detail and accuracy,
from individual data source tables to integrated data source tables.
o Data warehouse view
 consists of fact tables and dimension tables
 It represents the information that is stored inside the data warehouse
o Business query view
 sees the perspectives of data in the warehouse from the viewpoint of the
end user

The Process of Data Warehouse Design


 Top-down, bottom-up approaches or a combination of both
o Top-down: Starts with overall design and planning (Business problems-clear
and well understood)
o Bottom-up: Starts with experiments and prototypes
 From software engineering point of view
o Waterfall: structured and systematic analysis at each step before proceeding
to the next
o Spiral: involves the rapid generation of increasingly functional systems, with
short intervals between successive releases.
 Typical data warehouse design process consists of the following steps:
o Choose a business process to model, e.g., orders, invoices, sales etc.
o Choose the grain (the fundamental, atomic level of data to be
represented in the fact table) of the business process
o Choose the dimensions that will apply to each fact table record
o Choose the measure that will populate each fact table record

 Data Warehousing: A Multitiered Architecture (Three-tier)


Data Warehouse Models: Enterprise Warehouse, Data Mart, and Virtual
Warehouse
 Enterprise warehouse
 collects all of the information about subjects spanning the entire
organization
 Data Mart
 a subset of corporate-wide data that is of value to a specific group of users.
Its scope is confined to specific selected groups, such as a marketing data mart
 Virtual warehouse
 It is a set of views over operational databases
 For efficient query processing, only some of the possible summary views may
be materialized.

Data Warehouse Development: A Recommended Approach

Data Warehouse Back-End Tools and Utilities (ETL Tools)


 Data extraction, which typically gathers data from multiple, heterogeneous, and
external sources
 Data cleaning, which detects errors in the data and rectifies them when possible
 Data transformation, which converts data from legacy or host format to warehouse
format
 Load, which sorts, summarizes, consolidates, computes views, checks integrity, and
builds indices and partitions
 Refresh, which propagates the updates from the data sources to the warehouse
Metadata Repository
 Meta data is the data defining warehouse objects. It stores:
 Description of the structure of the data warehouse
o schema, view, dimensions, hierarchies, derived data defn, data mart
locations and contents
 Operational meta-data
o data lineage (history of migrated data and transformation path), currency of
data (active, archived, or purged), monitoring information (warehouse usage
statistics, error reports, audit trails)
 The algorithms used for summarization
 The mapping from operational environment to the data warehouse
 Data related to system performance
o warehouse schema, view and derived data definitions
 Business data
o business terms and definitions, ownership of data, charging policies

OLAP Server Architectures


 Relational OLAP (ROLAP)
o Is an extended relational DBMS that maps operations on multidimensional
data to standard relational operations.
 Multidimensional OLAP (MOLAP)
o a multidimensional OLAP (MOLAP) model, that is, a special-purpose server
that directly implements multidimensional data and operations.
 Hybrid OLAP (HOLAP)
o combines ROLAP and MOLAP technology, benefiting from the greater
scalability of ROLAP and the faster computation of MOLAP
 Specialized SQL servers
o Specialized support for SQL queries over star/snowflake schemas

4. Data warehouse implementation

Efficient Data Cube Computation


 Data cube can be viewed as a lattice of cuboids
o The bottom-most cuboid is the base cuboid
o The top-most cuboid (apex) contains only one cell
o How many cuboids are there in an n-dimensional data cube?
 If there were no hierarchies associated with each
dimension, the total number of cuboids is 2^n.
 If each dimension i has L_i hierarchy levels (e.g., the time
dimension with “day < month < quarter < year” has L_i = 4), then

T = ∏_{i=1}^{n} (L_i + 1)
Cube Operation
 Cube definition and computation in DMQL
define cube sales[item, city, year]: sum(sales_in_dollars)
compute cube sales
 Each cuboid represents a different group-by
o “Compute the sum of sales, grouping by city and item.”
o “Compute the sum of sales, grouping by city.”
 OLAP may need to access different cuboids for different queries, e.g.:
o (date, product, customer),
o (date, product), (date, customer), (product, customer),
o (date), (product), (customer), ()
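Full cube computation amounts to running one group-by per dimension subset. A minimal Python sketch of `compute cube` over toy rows (the row values are illustrative):

```python
from collections import defaultdict
from itertools import combinations

# Toy sales rows: (item, city, year, sales_in_dollars).
rows = [
    ("TV",    "Vancouver", 2023, 100),
    ("TV",    "Toronto",   2023, 150),
    ("Phone", "Toronto",   2024, 200),
]

def compute_cube(rows, n_dims):
    """Materialize every cuboid: one group-by per subset of the dimensions."""
    cube = {}
    for k in range(n_dims + 1):
        for dims in combinations(range(n_dims), k):
            agg = defaultdict(int)
            for row in rows:
                agg[tuple(row[d] for d in dims)] += row[-1]
            cube[dims] = dict(agg)
    return cube

cube = compute_cube(rows, n_dims=3)
# The apex cuboid () holds the grand total; (1,) is the group-by on city.
assert cube[()] == {(): 450}
assert cube[(1,)] == {("Vancouver",): 100, ("Toronto",): 350}
```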

Efficient Data Cube Computation

 Precomputation leads to fast response time and avoids some redundant


computation.
 A major challenge related to this precomputation is that the required storage space
may explode if all the cuboids in a data cube are precomputed, especially when the
cube has many dimensions.
 The storage requirements are even more excessive when many of the dimensions
have associated concept hierarchies, each with multiple levels.
 This problem is referred to as the curse of dimensionality.
Partial Materialization: Selected Computation of Cuboids
 Materialization of data cube-Three choices
o No materialization: Do not precompute any of the cuboids
o Full materialization: Precompute all of the cuboids.
o Partial materialization: Selectively compute a proper subset of the whole set
of possible cuboids.

Iceberg Cube
 Computing only the cuboid cells whose count or other aggregate value satisfies a
condition such as
HAVING COUNT(*) >= min_sup
o Motivation
 Only a small portion of cube cells may be “above the water” in a sparse
cube. (For many cells in a cuboid, the measure value, such as count or
sum, will be zero; when most of a cuboid’s cells are zero-valued, we say
that the cuboid is sparse.)
 Only calculate “interesting” cells—data above a certain threshold
 E.g., count >= 10, sales >= $100
 Avoid explosive growth of the cube
 Partially materialized cubes are known as iceberg cubes.
 The minimum threshold is called the minimum support threshold, or minimum
support (min_sup)
 Eg. Iceberg cube.
compute cube sales_iceberg as
select month, city, customer_group, count(*)
from salesInfo
cube by month, city, customer_group
having count(*) >= min_sup
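The `sales_iceberg` query above can be mimicked over in-memory tuples; a minimal Python sketch (the salesInfo rows are illustrative):

```python
from collections import Counter

# Toy salesInfo tuples: (month, city, customer_group).
sales_info = [
    ("Jan", "Vancouver", "Business"),
    ("Jan", "Vancouver", "Business"),
    ("Jan", "Toronto",   "Household"),
    ("Feb", "Vancouver", "Business"),
]

def iceberg_cuboid(rows, min_sup):
    """Keep only cells whose count satisfies HAVING COUNT(*) >= min_sup."""
    counts = Counter(rows)
    return {cell: c for cell, c in counts.items() if c >= min_sup}

cells = iceberg_cuboid(sales_info, min_sup=2)
assert cells == {("Jan", "Vancouver", "Business"): 2}
```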

Indexing OLAP Data: Bitmap Index and Join Index


 To facilitate efficient data accessing, most data warehouse systems support index
structures and materialized views (using cuboids).

 1. Bitmap indexing method is popular in OLAP products because it allows quick


searching in data cubes.
 Index on a particular column
 Each value in the column has a bit vector
 The length of the bit vector: # of records in the base table
 The i-th bit is set if the i-th row of the base table has the value for the
indexed column
 Not suitable for high-cardinality domains (columns with many distinct
values)
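Building a bitmap index is a single pass over the indexed column; a minimal Python sketch (the column values are illustrative):

```python
def bitmap_index(column):
    """Build one bit vector per distinct value; bit i is set when row i of
    the base table holds that value, so each vector has len(column) bits."""
    index = {}
    for i, v in enumerate(column):
        index.setdefault(v, [0] * len(column))[i] = 1
    return index

item = ["TV", "PC", "TV", "Phone"]
idx = bitmap_index(item)
assert idx["TV"] == [1, 0, 1, 0]
assert idx["PC"] == [0, 1, 0, 0]
```

Note that a high-cardinality column would produce one near-empty bit vector per distinct value, which is exactly why the method suits low-cardinality domains.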

 2. Join indexing method gained popularity from its use in relational database query
processing.
 Join indexing registers the joinable rows of two relations from a relational
database.
 The linkage between a fact table and its corresponding dimension tables
comprises the fact table’s foreign key and the dimension table’s primary key

Example: the “Main Street” value in the location dimension table joins with tuples
T57, T238, and T884 of the sales fact table. Similarly, the “Sony-TV” value in the item
dimension table joins with tuples T57 and T459 of the sales fact table.
Suppose that there are 360 time values, 100 items, 50 branches, 30 locations, and 10
million sales tuples in the sales star data cube. If the sales fact table has recorded
sales for only 30 items, the remaining 70 items will obviously not participate in joins
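A join index is essentially a map from each dimension value to the fact-table tuple IDs it joins with; a minimal Python sketch following the "Main Street" / "Sony-TV" example (the extra fact rows are illustrative):

```python
# Toy sales fact rows keyed by tuple ID, each carrying dimension values.
sales = {
    "T57":  {"location": "Main Street", "item": "Sony-TV"},
    "T238": {"location": "Main Street", "item": "PC"},
    "T459": {"location": "5th Ave",     "item": "Sony-TV"},
    "T884": {"location": "Main Street", "item": "Phone"},
}

def join_index(fact, dim_attr):
    """Register, for each dimension value, the fact tuples it joins with."""
    index = {}
    for tid, row in fact.items():
        index.setdefault(row[dim_attr], []).append(tid)
    return index

loc_index = join_index(sales, "location")
assert sorted(loc_index["Main Street"]) == ["T238", "T57", "T884"]
item_index = join_index(sales, "item")
assert sorted(item_index["Sony-TV"]) == ["T459", "T57"]
```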

Efficient Processing OLAP Queries


o Determine which operations should be performed on the available cuboids
 selection, projection, roll-up (group-by), and drill-down operations specified in
the query into corresponding SQL and/or OLAP operations
o Determine which materialized cuboid(s) should be selected for OLAP operation
 Identify all of the materialized cuboids that may potentially be used to answer
the query and select the cuboid with the least cost.
 OLAP query processing. Suppose that we define a data cube for AllElectronics of the form
“sales cube [time, item, location]: sum(sales in dollars).”
 The dimension hierarchies used are “day < month < quarter < year” for time; “item_ name <
brand < type” for item; and “street < city < province _or_state < country” for location.
 Let the query to be processed be on {brand, province_or_state} with the condition
“year = 2004”, and there are 4 materialized cuboids available:
{year, item_name, city}
{year, brand, country}
{year, brand, province_or_state}
{item_name, province_or_state} where year = 2004
 Which should be selected to process the query? Cuboid 2 cannot be used, because
country is more general than the queried level province_or_state. Cuboids 1, 3,
and 4 can each be used; cuboid 3 matches the query most closely and will likely
require the fewest I/O operations. However, if efficient indices are available for
cuboid 4, then cuboid 4 may be a better choice.
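The first step of cuboid selection, pruning cuboids that are too coarse, can be sketched as a level check: a materialized cuboid is usable only if every queried attribute is available at the same or a finer hierarchy level (the year = 2004 condition on cuboid 4 is assumed handled separately; the hierarchy map below follows the example above):

```python
# For each queried attribute, the cuboid attributes that are at the same
# or a finer level of the concept hierarchy.
finer_or_equal = {
    "brand": {"brand", "item_name"},
    "province_or_state": {"province_or_state", "city"},
}

def usable(cuboid_attrs, query_attrs):
    """A cuboid can answer the query only if each queried attribute has an
    equal-or-finer counterpart among the cuboid's attributes."""
    return all(any(a in cuboid_attrs for a in finer_or_equal[q])
               for q in query_attrs)

cuboids = {
    1: {"year", "item_name", "city"},
    2: {"year", "brand", "country"},
    3: {"year", "brand", "province_or_state"},
    4: {"item_name", "province_or_state"},   # materialized where year = 2004
}
query = ["brand", "province_or_state"]
answerable = [c for c, attrs in cuboids.items() if usable(attrs, query)]
assert answerable == [1, 3, 4]   # country is coarser than province_or_state
```

Among the usable cuboids, the one with the least estimated access cost (cell count, index availability) would then be chosen.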
