Chapter 2: Data Warehouse and OLAP Technology for Data Mining
• What is a data warehouse?
• A multi-dimensional data model
• Data warehouse architecture
• Data warehouse implementation
• Further development of data cube technology
• From data warehousing to data mining


What is a Data Warehouse?
• Defined in many different ways, but not rigorously.
– A decision support database that is maintained separately from the organization's operational database.
– Supports information processing by providing a solid platform of consolidated, historical data for analysis.
– W. H. Inmon: "A data warehouse is a subject-oriented, integrated, time-variant, and nonvolatile collection of data in support of management's decision-making process."

• Data warehousing:
– The process of constructing and using data warehouses


Data Warehouse — Key Features (1)
• Subject-Oriented:
– Organized around major subjects:
• E.g., customer, product, sales.
• Focuses on the modeling and analysis of data for decision makers, not on daily operations or transaction processing.
• Provides a simple and concise view of particular subject issues by excluding data that are not useful in the decision support process.

• Integrated:
– Constructed from multiple, heterogeneous data sources:
• RDBs, flat files, on-line transaction records, …

– Applying data cleaning and data integration techniques.
• Ensure consistency in naming conventions, encoding structures, attribute measures, etc. among the different data sources.
• E.g., hotel price: currency, tax, whether breakfast is covered, etc.

– When data is moved to the warehouse, it is converted.

Data Warehouse — Key Features (2)
• Time Variant:
– The time horizon for the data warehouse is significantly longer than that of operational systems.
• Current value data ⇔ historical perspective info. (past 5-10 yr).

– Every key structure in the data warehouse:
• Contains an element of time, explicitly or implicitly.
• But the key of a tuple may or may not contain a "time element".

• Non-Volatile:
– A physically separate store of data transformed (copied) from the operational environment into one semantic data store.
• Operational data updates do not occur in the data warehouse.
• Requires no transaction processing, recovery, or concurrency control mechanisms. [⇐ needed for on-line database operations]

– Requires only two operations in data accessing:
• initial loading of data and access of data.

Compared w. Heterogeneous DBMS
• Traditional heterogeneous DB integration:
– Build wrappers/mediators on top of heterogeneous DB.
• E.g., IBM Data Joiner and Informix DataBlade.

– Query-driven approach: [costly]
• When a query is posed to a client site, a meta-dictionary is used to translate the query into queries appropriate for the individual heterogeneous sites involved, and the results are integrated into a global answer set.
• Complex information filtering; queries compete for resources.

• Data warehouse: update-driven, high performance
– Information from heterogeneous sources is integrated in advance and stored in warehouses for direct query and analysis.

Compared w. Operational DBMS
• OLTP (on-line transaction processing):
– Major task of traditional relational DBMS.
– Day-to-day operations: purchasing, inventory, banking, manufacturing, payroll, registration, accounting, etc.
• OLAP (on-line analytical processing):
– Major task of data warehouse systems.
– Data analysis and decision making.
• Distinct features (OLTP ⇔ OLAP): [ref. 1, Table 2.1, p. 43]
– User and system orientation: customer (query) ⇔ market (analysis).
– Data contents: current, detailed ⇔ historical, consolidated.
– Database design: ER + application ⇔ star + subject.
– View: current, local ⇔ evolutionary, integrated.
– Access patterns: update ⇔ read-only but complex queries.

OLTP vs. OLAP [ref. 1, Table 2.1, p. 43]

                    OLTP                            OLAP
users               clerk, IT professional          knowledge worker
function            day-to-day operations           decision support
DB design           application-oriented            subject-oriented
data                current, up-to-date, detailed,  historical, summarized,
                    flat relational, isolated       multidimensional, integrated,
                                                    consolidated
usage               repetitive                      ad-hoc
access              read/write,                     lots of scans
                    index/hash on prim. key
unit of work        short, simple transaction       complex query
# records accessed  tens                            millions
# users             thousands                       hundreds
DB size             100MB-GB                        100GB-TB
metric              transaction throughput          query throughput, response time

Why Separate Data Warehouse?
• High performance for both systems: [asynchronous]
– DBMS ⇒ tuned for OLTP: access methods, indexing, concurrency control, recovery.
– Warehouse ⇒ tuned for OLAP: complex OLAP queries, multidimensional view, consolidation.
• Different functions and different data:
– missing data: decision support (DS) requires historical data which operational DBs do not typically maintain.
– data consolidation: DS requires consolidation (aggregation, summarization) of data from heterogeneous sources.
– data quality: different sources typically use inconsistent data representations, codes and formats which have to be reconciled.

From Tables and Spreadsheets to Data Cubes
• Data warehouses utilize an n-dimensional data model to view data in the form of a data cube.
• A data cube, such as sales, allows data to be modeled and viewed in multiple dimensions.
– Dimensions: time, item, location, supplier, …
– Dimension tables, each for a dimension. E.g., item (item_name, brand, type), time (day, week, month, quarter, year).
– Fact table contains numerical measures (say, dollars_sold, units_sold, amount_budgeted, …) and keys to each of the related dimension tables. [group by]
• In data warehousing literature:
– An n-D base cube is called a base cuboid.
– The top-most 0-D cuboid, which holds the highest level of summarization, is called the apex cuboid.
– The lattice of cuboids forms a data cube.
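As a toy illustration of a fact table whose measure columns are aggregated along one dimension to form a cuboid, here is a hedged pandas sketch; the keys and figures below are invented sample data, not values from the text.

```python
import pandas as pd

# Hypothetical miniature sales fact table: foreign keys to dimension
# tables plus two numerical measures (all values are made up).
sales = pd.DataFrame({
    "time_key":     [1, 1, 2, 2],
    "item_key":     [10, 11, 10, 11],
    "dollars_sold": [100.0, 250.0, 120.0, 300.0],
    "units_sold":   [2, 5, 3, 6],
})

# One cuboid of the cube: the measures grouped by the item dimension.
by_item = sales.groupby("item_key")[["dollars_sold", "units_sold"]].sum()
print(by_item)
```

Each further group-by (by time, by both keys, or by nothing at all) yields another cuboid of the same lattice.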

Cube: A Lattice of Cuboids
[Figure: the lattice of cuboids for the 4 dimensions time, item, location, supplier.]
• 0-D (apex) cuboid: all
• 1-D cuboids: (time), (item), (location), (supplier)
• 2-D cuboids: (time, item), (time, location), (time, supplier), (item, location), (item, supplier), (location, supplier)
• 3-D cuboids: (time, item, location), (time, item, supplier), (time, location, supplier), (item, location, supplier)
• 4-D (base) cuboid: (time, item, location, supplier)

Conceptual Modeling of Data Warehouses
• Modeling data warehouses: dimensions & measures
– Star schema: a fact table in the middle connected to a set of dimension tables.
– Snowflake schema: a refinement of the star schema where some dimensional hierarchy is normalized into a set of smaller dimension tables, forming a shape similar to a snowflake.
– Fact constellations: multiple fact tables share dimension tables, viewed as a collection of stars, therefore called galaxy schema or fact constellation.

Example of Star Schema
[Figure: a Sales fact table in the middle, linked to four dimension tables.]
• Sales fact table: time_key, item_key, branch_key, location_key; measures: units_sold, dollars_sold, avg_sales.
• time dimension: time_key, day, day_of_the_week, month, quarter, year.
• item dimension: item_key, item_name, brand, type, supplier_type.
• branch dimension: branch_key, branch_name, branch_type.
• location dimension: location_key, street, city, province_or_state, country.
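The "star join" behind this schema can be sketched in pandas: the fact table is joined to its dimension tables through the foreign keys and then aggregated by descriptive attributes. The tables below are tiny invented samples, not data from the slides.

```python
import pandas as pd

# Two dimension tables (made-up rows).
item = pd.DataFrame({"item_key": [1, 2],
                     "item_name": ["TV", "PC"],
                     "brand": ["A", "B"]})
location = pd.DataFrame({"location_key": [1, 2],
                         "city": ["Vancouver", "Toronto"]})

# The sales fact table: foreign keys plus a measure.
sales = pd.DataFrame({"item_key": [1, 1, 2],
                      "location_key": [1, 2, 2],
                      "dollars_sold": [500.0, 300.0, 800.0]})

# Star join: fact table -> dimension tables, then group by attributes.
wide = sales.merge(item, on="item_key").merge(location, on="location_key")
report = wide.groupby(["item_name", "city"])["dollars_sold"].sum()
print(report)
```

The same pattern extends to the branch and time dimensions; each extra merge adds descriptive attributes without duplicating them in the fact table.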

Example of Snowflake Schema
[Figure: the same Sales fact table, with the item and location dimensions normalized.]
• Sales fact table: time_key, item_key, branch_key, location_key; measures: units_sold, dollars_sold, avg_sales.
• time dimension: time_key, day, day_of_the_week, month, quarter, year.
• item dimension: item_key, item_name, brand, type, supplier_key → supplier (supplier_key, supplier_type).
• branch dimension: branch_key, branch_name, branch_type.
• location dimension: location_key, street, city_key → city (city_key, city, province_or_state, country).

Example of Fact Constellation
[Figure: Sales and Shipping fact tables sharing the time, item, and location dimension tables.]
• Sales fact table: time_key, item_key, branch_key, location_key; measures: units_sold, dollars_sold, avg_sales.
• Shipping fact table: time_key, item_key, shipper_key, from_location, to_location; measures: dollars_cost, units_shipped.
• time dimension: time_key, day, day_of_the_week, month, quarter, year.
• item dimension: item_key, item_name, brand, type, supplier_type.
• branch dimension: branch_key, branch_name, branch_type.
• location dimension: location_key, street, city, province_or_state, country.
• shipper dimension: shipper_key, shipper_name, location_key, shipper_type.

A Data Mining Query Language, DMQL: Language Primitives
• Cube Definition (Fact Table):
– define cube <cube_name> [<dimension_list>]: <measure_list>
• Dimension Definition (Dimension Table):
– define dimension <dimension_name> as (<attribute_or_subdimension_list>)
• Special Case (Shared Dimension Tables):
– First time as "cube definition"
– define dimension <dimension_name> as <dimension_name_first_time> in cube <cube_name_first_time>

Defining a Star Schema in DMQL
• define cube sales_star [time, item, branch, location]:
  dollars_sold = sum(sales_in_dollars), units_sold = count(*), avg_sales = avg(sales_in_dollars)
– define dimension time as (time_key, day, day_of_week, month, quarter, year)
– define dimension item as (item_key, item_name, brand, type, supplier_type)
– define dimension branch as (branch_key, branch_name, branch_type)
– define dimension location as (location_key, street, city, province_or_state, country)

Defining a Snowflake Schema in DMQL
• define cube sales_snowflake [time, item, branch, location]:
  dollars_sold = sum(sales_in_dollars), units_sold = count(*), avg_sales = avg(sales_in_dollars)
– define dimension time as (time_key, day, day_of_week, month, quarter, year)
– define dimension item as (item_key, item_name, brand, type, supplier(supplier_key, supplier_type))
– define dimension branch as (branch_key, branch_name, branch_type)
– define dimension location as (location_key, street, city(city_key, city, province_or_state, country))

Defining a Fact Constellation in DMQL
• define cube sales [time, item, branch, location]:
  dollars_sold = sum(sales_in_dollars), units_sold = count(*), avg_sales = avg(sales_in_dollars)
– define dimension time as (time_key, day, day_of_week, month, quarter, year)
– define dimension item as (item_key, item_name, brand, type, supplier_type)
– define dimension branch as (branch_key, branch_name, branch_type)
– define dimension location as (location_key, street, city, province_or_state, country)
• define cube shipping [time, item, shipper, from_location, to_location]:
  dollar_cost = sum(cost_in_dollars), unit_shipped = count(*)
– define dimension time as time in cube sales
– define dimension item as item in cube sales
– define dimension shipper as (shipper_key, shipper_name, location as location in cube sales, shipper_type)
– define dimension from_location as location in cube sales
– define dimension to_location as location in cube sales

Measures: Three Categories
• Distributive: the result derived by applying the function to n aggregate values is the same as that derived by applying the function to all the data without partitioning.
– E.g., count(), sum(), min(), max(). ⇐ aggregate
• Algebraic: can be computed by an algebraic function with M arguments (where M is a bounded integer), each of which is obtained by applying a distributive aggregate function.
– E.g., avg(), min_N(), standard_deviation(). ⇐ derived
• Holistic: there is no constant bound on the storage size needed to describe a sub-aggregate.
– E.g., median(), mode(), rank(). ⇐ statistical
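These three categories can be demonstrated directly: a distributive measure combines across partitions, an algebraic one is derived from a bounded set of distributive sub-aggregates, and a holistic one generally cannot be combined from partition results. The numbers are a toy example chosen for illustration.

```python
import statistics

# Two partitions of the same data set (toy numbers).
p1, p2 = [1, 3, 5], [7, 9]
whole = p1 + p2

# Distributive: the sum of partial sums equals the sum over all data.
assert sum([sum(p1), sum(p2)]) == sum(whole)

# Algebraic: avg() is computable from two distributive measures,
# sum and count, kept per partition.
partial = [(sum(p1), len(p1)), (sum(p2), len(p2))]
avg = sum(s for s, _ in partial) / sum(n for _, n in partial)
assert avg == sum(whole) / len(whole)

# Holistic: median() cannot in general be recovered by combining
# the partition medians (3 and 8 here give 5.5, but the true median is 5).
assert statistics.median([statistics.median(p1), statistics.median(p2)]) \
       != statistics.median(whole)
print(avg)
```

This is why distributive and algebraic measures are cheap to pre-aggregate in cuboids, while holistic measures resist such sharing.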

A Concept Hierarchy: Dimension (location)
[Figure: a tree from the most general concept to the most specific.]
• all (higher levels: more general concepts)
• region: Europe, North_America, …
• country: Germany, Spain, …, Canada, Mexico, …
• city: Frankfurt, …, Vancouver, Toronto, …
• office: L. Chan, …, M. Wind (lower levels: more specific concepts)

View of Warehouses and Hierarchies
• Specification of hierarchies:
– Schema hierarchy: day < {month < quarter; week} < year
– Set_grouping hierarchy: {1..10} < inexpensive

Multidimensional Data
• Sales volume as a function of product, month, and region.
• Dimensions: Product, Location, Time.
• Hierarchical summarization paths:
– Industry → Category → Product
– Region → Country → City → Office
– Year → Quarter → Month / Week → Day

A Sample Data Cube
[Figure: a 3-D sales cube with dimensions Date (1Qtr-4Qtr, sum), Product (TV, PC, VCR, sum), and Country (U.S.A., Canada, Mexico, sum); e.g., one aggregate cell holds the total annual sales of TVs in the U.S.A.]

Cuboids Corresponding to the Cube
• 0-D (apex) cuboid: all
• 1-D cuboids: (product), (date), (country)
• 2-D cuboids: (product, date), (product, country), (date, country)
• 3-D (base) cuboid: (product, date, country)

Browsing a Data Cube
• Visualization
• OLAP capabilities
• Interactive manipulation

Typical OLAP Operations [ref. 1, Fig. 2.10, p. 59]
• Roll up (drill-up): summarize data by climbing up a hierarchy or by dimension reduction.
• Drill down (roll down): reverse of roll-up; from higher-level summary to lower-level summary or detailed data, or introducing new dimensions.
• Slice and dice: project and select.
• Pivot (rotate): reorient the cube, visualization, 3D to a series of 2D planes.
• Other operations:
– drill across: involving (across) more than one fact table.
– drill through: through the bottom level of the cube to its back-end relational tables (using SQL).
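Roll-up, drill-down, and slice can be sketched with group-bys over a small table; the location hierarchy (country > city) and the sales figures below are invented for the example.

```python
import pandas as pd

# Toy sales data with a two-level location hierarchy.
df = pd.DataFrame({
    "country": ["Canada", "Canada", "USA", "USA"],
    "city":    ["Vancouver", "Toronto", "Chicago", "New York"],
    "quarter": ["Q1", "Q1", "Q1", "Q2"],
    "sales":   [100, 200, 300, 400],
})

# Drill-down view: sales at the (country, city, quarter) level.
detail = df.groupby(["country", "city", "quarter"])["sales"].sum()

# Roll-up: climb the location hierarchy from city to country.
rollup = df.groupby(["country", "quarter"])["sales"].sum()

# Slice: fix one value of one dimension (quarter = Q1).
q1 = df[df["quarter"] == "Q1"]
print(rollup)
```

A dice would combine several such selections, and a pivot merely re-indexes the same aggregates for display.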

A Star-Net Query Model
• Each circle (abstraction level) is called a footprint.
[Figure: radial lines, one per dimension, each carrying its abstraction levels: Time (DAILY, QTRLY, ANNUALY), Location (CITY, DISTRICT, COUNTRY, REGION), Customer (ORDER, CONTRACTS), Product (PRODUCT ITEM, PRODUCT GROUP, PRODUCT LINE), Shipping Method (AIR-EXPRESS, TRUCK), Organization (SALES PERSON, DIVISION), Promotion.]

Design of a Data Warehouse: A Business Analysis Framework
• Four views regarding the design of a data warehouse:
– Top-down view: allows selection of the relevant information necessary for the data warehouse.
– Data source view: exposes the information being captured, stored, and managed by operational systems.
– Data warehouse view: consists of fact tables and dimension tables.
– Business query view: sees the perspectives of data in the warehouse from the view of the end-user.

Data Warehouse Design Process
• Top-down, bottom-up approaches or a hybrid:
– Top-down: starts with overall design and planning (mature).
– Bottom-up: starts with experiments and prototypes (rapid).
• From a software engineering point of view:
– Waterfall: structured and systematic analysis at each step before proceeding to the next.
– Spiral: rapid generation of increasingly functional systems, short turn-around time.
• Typical data warehouse design process, choose:
– A business process to model, e.g., orders, invoices, …
– The grain (atomic level of data) of the business process.
– The dimensions for each fact table record.
– The measures that will populate each fact table record.

Multi-Tiered Architecture
[Figure: three tiers plus front-end tools.]
• Data sources: operational DBs and other external sources, fed through extract / transform / load / refresh, with a monitor & integrator and a metadata repository.
• Tier 1, data storage: the data warehouse and data marts.
• Tier 2, OLAP engine: OLAP servers.
• Tier 3, front-end tools: analysis, query, reports, data mining.

Three Data Warehouse Models
• Enterprise warehouse:
– Collects all of the information about subjects spanning the entire organization.
• Data mart:
– A subset of corporate-wide data that is of value to a specific group of users; its scope is confined to specific, selected groups, such as a marketing data mart.
– Independent vs. dependent (built directly from the warehouse) data marts.
• Virtual warehouse:
– A set of views over operational databases.
– Only some summary views are materialized.

Data Warehouse Development: A Recommended Approach
[Figure: incremental development path.]
• Define a high-level corporate data model.
• Build data marts, with repeated model refinement, in parallel with an enterprise data warehouse.
• Grow into a multi-tier data warehouse with distributed data marts.

OLAP Server Architectures
• Relational OLAP (ROLAP):
– Uses a relational or extended-relational DBMS to store and manage warehouse data, and OLAP middleware to support the missing pieces.
– Includes optimization of the DBMS backend, implementation of aggregation navigation logic, and additional tools and services.
– Greater scalability.
• Multidimensional OLAP (MOLAP):
– Array-based multidimensional storage engine (sparse matrix techniques).
– Fast indexing to pre-computed summarized data.
• Hybrid OLAP (HOLAP):
– User flexibility, e.g., low level: relational, high level: array.
• Specialized SQL servers:
– Specialized support for SQL queries over star/snowflake schemas. [Informix's Red Brick]

Cube Operation
• Cube definition and computation in DMQL:
– define cube sales [item, city, year]: sum(sales_in_dollars)
– compute cube sales
• Transform it into a SQL-like language (with a new operator cube by, introduced by Gray et al.'96):
– SELECT item, city, year, SUM(amount)
  FROM SALES
  CUBE BY item, city, year
• Need to compute these 2^3 = 8 group-bys:
– (item, city, year), (item, city), (item, year), (city, year), (item), (city), (year), ()
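The effect of CUBE BY can be sketched by enumerating one GROUP BY per subset of the dimensions; the sample rows are invented for the example, and real systems share work between the group-bys instead of computing each independently as done here.

```python
from itertools import combinations
import pandas as pd

# Hypothetical SALES relation (made-up rows).
sales = pd.DataFrame({
    "item":   ["TV", "TV", "PC"],
    "city":   ["Vancouver", "Toronto", "Toronto"],
    "year":   [1999, 1999, 2000],
    "amount": [100, 200, 300],
})

dims = ["item", "city", "year"]
cuboids = {}
for k in range(len(dims) + 1):
    for subset in combinations(dims, k):
        if subset:
            cuboids[subset] = sales.groupby(list(subset))["amount"].sum()
        else:
            # The apex cuboid (): a single grand total.
            cuboids[subset] = sales["amount"].sum()

print(len(cuboids))
```

With 3 dimensions this yields 2^3 = 8 group-bys, exactly the list on the slide.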

Efficient Data Cube Computation
• A data cube can be viewed as a lattice of cuboids:
– The bottom-most cuboid is the base cuboid.
– The top-most cuboid (apex) contains only one cell.
– How many cuboids are there in an n-dimensional cube where dimension i has Li levels? T = (L1 + 1) × (L2 + 1) × … × (Ln + 1); if every Li = 1, this reduces to 2^n.
• Materialization of the data cube:
– Materialize every cuboid (full materialization), none (no materialization), or some (partial materialization).
– Selection of which cuboids to materialize: based on size, sharing, access frequency, etc.

Cube Computation: ROLAP-Based Method (1)
• Efficient cube computation methods:
– ROLAP-based cubing algorithms (Agarwal et al.'96)
– Array-based cubing algorithm (Zhao et al.'97)
– Bottom-up computation method (Beyer & Ramakrishnan'99)
• ROLAP-based cubing algorithms:
– Sorting, hashing, and grouping operations are applied to the dimension attributes to reorder and cluster related tuples.
– Grouping is performed on some sub-aggregates as a "partial grouping step".
– Aggregates may be computed from previously computed aggregates, rather than from the base fact table.

Cube Computation: ROLAP-Based Method (2)
• This is not in the textbook but in a research paper.
• Hash/sort based methods (Agarwal et al., VLDB'96):
– Smallest-parent: computing a cuboid from the smallest cuboid previously computed.
– Cache-results: caching the results of a cuboid from which other cuboids are computed, to reduce disk I/Os.
– Amortize-scans: computing as many cuboids as possible at the same time to amortize disk reads.
– Share-sorts: sharing sorting costs across multiple cuboids when a sort-based method is used.
– Share-partitions: sharing the partitioning cost across multiple cuboids when hash-based algorithms are used.

Multi-way Array Aggregation for Cube Computation
• Partition arrays into chunks (a small sub-cube which fits in memory).
• Compressed sparse array addressing: (chunk_id, offset).
• Compute aggregates in "multiway" by visiting cube cells in the order which minimizes the number of times each cell must be revisited, thereby reducing memory access and storage cost.
[Figure: a 3-D array over dimensions A (a0-a3), B (b0-b3), C (c0-c3), partitioned into 64 chunks numbered 1-64. What is the best traversing order to do multi-way aggregation?]

[Figure (cont'd): scanning chunks 1-4 along dimension A completes aggregate chunk b0c0 of plane BC; the AB, AC, and BC planes fill in as the chunk scan proceeds.]

• Each cell contributes to three 2-D aggregates: a0b0c0 ⇒ a0b0, a0c0, b0c0.
[Figure (cont'd): with dimension sizes A = 40, B = 400, C = 4000, the 2-D plane sizes are AB = 16,000, AC = 160,000, and BC = 1,600,000 cells, so the planes should be kept in memory in that order of preference.]
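What multi-way aggregation ultimately produces for a 3-D array can be sketched with NumPy on a tiny dense cube; the real algorithm streams chunk by chunk to respect memory limits, whereas summing over one axis here computes the same three planes entirely in memory. The array contents are arbitrary sample values.

```python
import numpy as np

# A tiny dense 3-D array; axes correspond to dimensions A, B, C.
cube = np.arange(2 * 3 * 4).reshape(2, 3, 4)

AB = cube.sum(axis=2)   # aggregate C away -> |A| x |B| plane
AC = cube.sum(axis=1)   # aggregate B away -> |A| x |C| plane
BC = cube.sum(axis=0)   # aggregate A away -> |B| x |C| plane

# Every 2-D plane preserves the grand total of the cube.
assert AB.sum() == AC.sum() == BC.sum() == cube.sum()
print(AB.shape, AC.shape, BC.shape)
```

In the chunked version, each scanned chunk updates its slice of all three planes at once, which is the "multi-way" in the method's name.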

Multi-Way Array Aggregation for Cube Computation (cont'd)
• Method: the planes should be sorted and computed according to their size in ascending order.
– Idea: keep the smallest plane in main memory; fetch and compute only one chunk at a time for the largest plane.
– See the details of Example 2.12 (pp. 75-78).
• Limitation of the method: it computes well only for a small number of dimensions.
– If there are a large number of dimensions, "bottom-up computation" and iceberg cube computation methods can be explored.
– Iceberg cube: store only partitions with aggregate value above a threshold.

Indexing OLAP Data: Bitmap Index
• Index on a particular column.
• Each value in the column has a bit vector: bit-ops are fast.
• The length of the bit vector: # of records in the base table.
• The i-th bit is set if the i-th row of the base table has that value for the indexed column.
• Not suitable for high-cardinality domains.

Base table:
Cust  Region   Type
C1    Asia     Retail
C2    Europe   Dealer
C3    Asia     Dealer
C4    America  Retail
C5    Europe   Dealer

Index on Region:
RecID  Asia  Europe  America
1      1     0       0
2      0     1       0
3      1     0       0
4      0     0       1
5      0     1       0

Index on Type:
RecID  Retail  Dealer
1      1       0
2      0       1
3      0       1
4      1       0
5      0       1
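A minimal bitmap index over the Region column above can be built with Python integers as bit vectors (bit i corresponds to row i); only the representation is a sketch, the row values match the base table on the slide.

```python
# Region column of the base table, rows C1..C5 in order.
rows = ["Asia", "Europe", "Asia", "America", "Europe"]

# One bit vector per distinct value; bit i is set if row i has the value.
bitmaps = {}
for i, value in enumerate(rows):
    bitmaps[value] = bitmaps.get(value, 0) | (1 << i)

# Bit-ops answer selections fast: rows whose Region is Asia OR Europe.
asia_or_europe = bitmaps["Asia"] | bitmaps["Europe"]
matches = [i for i in range(len(rows)) if asia_or_europe >> i & 1]
print(matches)
```

An AND of two bitmaps (e.g., Region = Asia and Type = Dealer) answers conjunctive selections the same way, which is exactly why bitmap indices suit low-cardinality OLAP dimensions.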

Indexing OLAP Data: Join Indices
• Join index: JI(R-id, S-id), where R (R-id, …) joins S (S-id, …).
• Traditional indices map values to a list of record ids.
– A join index materializes a relational join in the JI file and speeds up relational join, a rather costly operation.
• In data warehouses, a join index relates the values of the dimensions of a star schema to rows in the fact table.
– E.g., fact table: Sales, and two dimensions: city and product.
– A join index on city maintains, for each distinct city, a list of R-IDs of the tuples recording the sales in that city.
– Join indices can span multiple dimensions.
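A join index on city can be sketched as a dictionary from each distinct city to the record ids of its fact tuples; the fact rows below are invented sample data.

```python
# Hypothetical Sales fact tuples: (rid, city, product, amount).
sales = [
    (0, "Vancouver", "TV", 100),
    (1, "Toronto",   "PC", 200),
    (2, "Vancouver", "PC", 300),
    (3, "Toronto",   "TV", 400),
]

# Join index on the city dimension: city -> list of fact record ids.
join_index = {}
for rid, city, _, _ in sales:
    join_index.setdefault(city, []).append(rid)

# Answer "total sales in Vancouver" without scanning the whole fact table.
vancouver_total = sum(sales[rid][3] for rid in join_index["Vancouver"])
print(join_index["Vancouver"], vancouver_total)
```

A composite join index spanning (city, product) would map each pair to its record ids in the same fashion.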

Efficient Processing of OLAP Queries
• Determine which operations should be performed on the available cuboids:
– Transform drill, roll, etc. into corresponding SQL and/or OLAP operations, e.g., dice = selection + projection.
• Determine to which materialized cuboids the relevant operations should be applied.
• Explore indexing structures and compressed vs. dense array structures in MOLAP.

Metadata Repository
• Metadata is the data defining warehouse objects. It has the following kinds:
– Description of the structure of the warehouse: schema, views, dimensions, hierarchies, derived data definitions, data mart locations and contents.
– Operational metadata: data lineage (history of migrated data and transformation paths), currency of data (active, archived, or purged), monitoring information (usage statistics, error reports, audit trails).
– The algorithms used for summarization.
– The mapping from the operational environment to the data warehouse.
– Data related to system performance: warehouse schema, view and derived data definitions.
– Business data: business terms and definitions, ownership of data, charging policies.

Data Warehouse Back-End Tools and Utilities
• Data extraction: get data from multiple, heterogeneous, and external sources.
• Data cleaning: detect errors in the data and rectify them when possible.
• Data transformation: convert data from legacy or host format to warehouse format.
• Load: sort, summarize, consolidate, compute views, check integrity, and build indices and partitions.
• Refresh: propagate the updates from the data sources to the warehouse.

Discovery-Driven Exploration of Data Cubes
• Hypothesis-driven: exploration by the user; huge search space.
• Discovery-driven (Sarawagi et al.'98):
– Pre-compute measures indicating exceptions to guide the user in the data analysis, at all levels of aggregation.
– Exception: significantly different from the value anticipated, based on a statistical model.
– Visual cues, such as background color, are used to reflect the degree of exception of each cell.
– Computation of exception indicators (model fitting and computing SelfExp, InExp, and PathExp values) can be overlapped with cube construction.

Examples: Discovery-Driven Data Cubes
[Figure: starting from cells with high SelfExp values, the analyst drills down along paths flagged by high InExp and PathExp values to locate the underlying exceptions.]

Complex Aggregation at Multiple Granularities: Multi-Feature Cubes
• Multi-feature cubes (Ross et al. 1998): compute complex queries involving multiple dependent aggregates at multiple granularities.
• Ex. Grouping by all subsets of {item, region, month}, find the maximum price in 1997 for each group, and the total sales among all maximum-price tuples:
– select item, region, month, max(price), sum(R.sales)
  from purchases
  where year = 1997
  cube by item, region, month: R
  such that R.price = max(price)
– R is a grouping variable of each group g; the dependent aggregate sum(R.sales) ranges only over the tuples R selects.
• Continuing the last example, among the max-price tuples, find the min and max shelf life, and find the fraction of the total sales due to tuples that have min shelf life within the set of all max-price tuples.
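For one grouping (item, region, month), the dependent-aggregate part of this query can be sketched in pandas: first find each group's max price, then sum sales over only the tuples attaining it. The table rows are invented, and this shows a single granularity rather than all subsets of the cube by clause.

```python
import pandas as pd

# Hypothetical purchases relation (made-up rows, all assumed year 1997).
purchases = pd.DataFrame({
    "item":   ["TV", "TV", "PC", "PC"],
    "region": ["West", "West", "East", "East"],
    "month":  [1, 1, 2, 2],
    "price":  [500, 700, 900, 900],
    "sales":  [10, 20, 30, 40],
})

grp = ["item", "region", "month"]

# The grouping variable R: tuples whose price equals the group max.
max_price = purchases.groupby(grp)["price"].transform("max")
at_max = purchases[purchases["price"] == max_price]

# max(price) and the dependent aggregate sum(R.sales) per group.
result = at_max.groupby(grp).agg(max_price=("price", "max"),
                                 sales_at_max=("sales", "sum"))
print(result)
```

Repeating this for every subset of {item, region, month} would reproduce the full multi-feature cube.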

Data Warehouse Usage
• Three kinds of data warehouse applications:
– Information processing:
  • supports querying, basic statistical analysis, and reporting using crosstabs, tables, charts and graphs.
– Analytical processing:
  • multidimensional analysis of data warehouse data.
  • supports basic OLAP operations: slice-dice, drilling, pivoting.
– Data mining:
  • knowledge discovery from hidden patterns.
  • supports associations, constructing analytical models, performing classification and prediction, and presenting the mining results using visualization tools.
• Differences among the three tasks: OLAP focuses on interactive data aggregation tools; data mining emphasizes more automation and deeper analysis.

From On-Line Analytical Processing to On-Line Analytical Mining (OLAM)
• Why on-line analytical mining?
– High quality of data in data warehouses:
  • a DW contains integrated, consistent, cleaned data.
– Available information processing infrastructure surrounding data warehouses:
  • ODBC, OLEDB, Web accessing, service facilities, reporting and OLAP tools.
– OLAP-based exploratory data analysis:
  • mining with drilling, dicing, pivoting, etc.
– On-line selection of data mining functions:
  • integration and swapping of multiple mining functions, algorithms, and tasks.
• Architecture of OLAM (OLAP Mining): see the layered architecture on the next slide.

An OLAM Architecture (Layering)
[Figure: four layers, from top to bottom.]
• Layer 4, User Interface: GUI API; mining queries go in, mining results come out.
• Layer 3, OLAP/OLAM: the OLAM engine and the OLAP engine, side by side, over a Data Cube API.
• Layer 2, MDDB: the multidimensional database together with its metadata.
• Layer 1, Data Repository: databases and the data warehouse, reached through a Database API with data cleaning, filtering, and integration.

Summary
• Data warehouse:
– A subject-oriented, integrated, time-variant, and nonvolatile collection of data in support of management's decision-making process.
• A multi-dimensional model of a data warehouse:
– Star schema, snowflake schema, fact constellations.
– A data cube consists of dimensions & measures.
• OLAP operations: drilling, rolling, slicing, dicing and pivoting.
• OLAP servers: ROLAP, MOLAP, HOLAP.
• Efficient computation of data cubes:
– Partial vs. full vs. no materialization.
– Multiway array aggregation.
– Bitmap index and join index implementations.
• Further development of data cube technology:
– Discovery-driven and multi-feature cubes.
– From OLAP to OLAM (on-line analytical mining).

References (I)
• S. Agarwal, R. Agrawal, P. M. Deshpande, A. Gupta, J. F. Naughton, R. Ramakrishnan, and S. Sarawagi. On the computation of multidimensional aggregates. In Proc. 1996 VLDB, Bombay, India, Sept. 1996, 506-521.
• D. Agrawal, A. El Abbadi, A. Singh, and T. Yurek. Efficient view maintenance in data warehouses. In Proc. 1997 ACM-SIGMOD, Tucson, Arizona, May 1997, 417-427.
• R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace clustering of high dimensional data for data mining applications. In Proc. 1998 ACM SIGMOD, Seattle, Washington, June 1998, 94-105.
• R. Agrawal, A. Gupta, and S. Sarawagi. Modeling multidimensional databases. In Proc. 1997 Int. Conf. Data Engineering, Birmingham, England, April 1997, 232-243.
• K. Beyer and R. Ramakrishnan. Bottom-up computation of sparse and iceberg CUBEs. In Proc. 1999 ACM-SIGMOD, Philadelphia, PA, June 1999, 359-370.
• S. Chaudhuri and U. Dayal. An overview of data warehousing and OLAP technology. ACM SIGMOD Record, 26:65-74, 1997.
• J. Gray, S. Chaudhuri, A. Bosworth, A. Layman, D. Reichart, M. Venkatrao, F. Pellow, and H. Pirahesh. Data cube: A relational aggregation operator generalizing group-by, cross-tab and sub-totals. Data Mining and Knowledge Discovery, 1:29-54, 1997.
• OLAP Council. MDAPI specification version 2.0. http://www.olapcouncil.org, 1998.

References (II)
• V. Harinarayan, A. Rajaraman, and J. D. Ullman. Implementing data cubes efficiently. In Proc. 1996 ACM-SIGMOD Int. Conf. Management of Data, Montreal, Canada, June 1996, 205-216.
• Microsoft. OLEDB for OLAP programmer's reference version 1.0. http://www.microsoft.com, 1998.
• K. Ross and D. Srivastava. Fast computation of sparse datacubes. In Proc. 1997 Int. Conf. Very Large Data Bases, Athens, Greece, Aug. 1997, 116-125.
• K. Ross, D. Srivastava, and D. Chatziantoniou. Complex aggregation at multiple granularities. In Proc. Int. Conf. of Extending Database Technology (EDBT'98), Valencia, Spain, March 1998.
• S. Sarawagi, R. Agrawal, and N. Megiddo. Discovery-driven exploration of OLAP data cubes. In Proc. Int. Conf. of Extending Database Technology (EDBT'98), Valencia, Spain, March 1998, 168-182.
• E. Thomsen. OLAP Solutions: Building Multidimensional Information Systems. John Wiley & Sons, 1997.
• Y. Zhao, P. M. Deshpande, and J. F. Naughton. An array-based algorithm for simultaneous multidimensional aggregates. In Proc. 1997 ACM-SIGMOD Int. Conf. Management of Data, Tucson, Arizona, May 1997, 159-170.
