
Copyright © 2007 New Age International (P) Ltd., Publishers. All rights reserved.

Published by New Age International (P) Ltd., Publishers, 4835/24, Ansari Road, Daryaganj, New Delhi - 110002. Visit us at www.newagepublishers.com

No part of this ebook may be reproduced in any form, by photostat, microfilm, xerography, or any other means, or incorporated into any information retrieval system, electronic or mechanical, without the written permission of the publisher. All inquiries should be emailed to rights@newagepublishers.com

ISBN: 978-81-224-2432-4

PUBLISHING FOR ONE WORLD
NEW AGE INTERNATIONAL (P) LIMITED, PUBLISHERS

Dedicated to our beloved Grandparents


PREFACE

This book is intended as a text in data mining and warehousing for engineering and post-graduate level students. We have attempted to cover the major topics in data mining and warehousing in depth. Advanced material, however, has been put into separate chapters so that courses at a variety of levels can be taught from this book. This book also contains sufficient material to make up an advanced course on data mining and warehousing. A brief synopsis of the chapters and comments on their appropriateness in a basic course is therefore appropriate.

Chapter 1 introduces the basic concepts of data mining and warehousing and their essentials. Chapter 2 covers the types of knowledge and learning concepts, which are essential to data mining techniques. Chapter 3 covers how to discover the knowledge hidden in databases using various techniques. Chapter 4 deals with the various types of data mining techniques and algorithms available. Chapter 5 covers various applications related to data mining techniques and the real-time usage of these techniques in various fields such as business, commercial organizations, etc. Chapter 6 deals with the evaluation of the data warehouse and the need for a new design process for databases. Chapter 7 covers how to design the data warehouse using fact tables, dimension tables and schemas. Chapter 8 deals with partitioning strategy in software and hardware. Chapter 9 covers the data mart and meta data. Chapter 10 covers the backup and recovery process. Chapter 11 deals with performance, tuning of the data warehouse, challenges, benefits and the new architecture of the data warehouse.

Each and every chapter has an exercise, and model question papers are also given. We conclude that this book is the only one that deals with both data mining techniques and data warehousing concepts suited to the modern time.


ACKNOWLEDGEMENT

Thanks and gratitude are owed to many kind individuals for their help in various ways in the completion of this book. We first of all record our gratitude to the authorities of Bharathiyar College of Engineering and Technology, Karaikal, for their tolerance of our many requests and especially for their commitment to this book and its authors.

Many experts made their contributions to the quality of the writing by providing oversight and critiques in their special areas of interest. We particularly thank Dr. A. Rajagopalan, Ph.D., Retd. Scientist, Bangalore, for sparing his precious time in editing this book. We also thank our graduate students for contributing a great deal to some of the original wording and editing. We owe thanks to our colleagues and friends who have added to this book by providing us with their valuable suggestions. Such excellence as the book may have is owed in part to them; any errors are ours. Without such help, this book would indeed be poorer.

We are also indebted to our publishers, Messrs. New Age International (P) Ltd., whose efforts maintained the progress of our work and ultimately put us on the right path. Once again we acknowledge our family members and wish them to know that our thanks are not merely "proforma"; they are sincerely offered and well deserved.

We began writing this book a year ago as good friends and we completed it as even better friends. And finally, each of us wishes to "doff his hat" to the other for the unstinting efforts to make this book the best possible, for the willingness to accept the advice of the other, and for the faith in our joint effort.

S. Prabhu
N. Venkatesan


CONTENTS

Preface
Acknowledgement

Chapter 1  Data Mining and Warehousing Concepts
    1.1  Introduction
    1.2  Data Mining Definitions
    1.3  Data Mining Tools
    1.4  Applications of Data Mining
    1.5  Data Warehousing and Characteristics
    1.6  Data Warehouse Architecture
    Exercise

Chapter 2  Learning and Types of Knowledge
    2.1  Introduction
    2.2  What is Learning?
    2.3  Anatomy of Data Mining
    2.4  Different Types of Knowledge
    Exercise

Chapter 3  Knowledge Discovery Process
    3.1  Introduction
    3.2  Evaluation of Data Mining
    3.3  Stages of the Data Mining Process
    3.4  Data Mining Operations
    3.5  Architecture of Data Mining
    Exercise

Chapter 4  Data Mining Techniques
    4.1  Introduction
    4.2  Classification
    4.3  Neural Networks
    4.4  Decision Trees
    4.5  Genetic Algorithm
    4.6  Clustering
    4.7  Online Analytic Processing (OLAP)
    4.8  Association Rules
    4.9  Emerging Trends in Data Mining
    4.10 Data Mining Research Projects
    Exercise

Chapter 5  Real Time Applications and Future Scope
    5.1  Applications of Data Mining
    5.2  Future Scope
    5.3  Data Mining Products
    Exercise

Chapter 6  Data Warehouse Evaluation
    6.1  The Calculations for Memory Capacity
    6.2  Data, Information and Knowledge
    6.3  Fundamental of Database
    6.4  OLAP and OLAP Server
    6.5  Data Warehouses, OLTP, OLAP and Data Mining
    Exercise

Chapter 7  Data Warehouse Design
    7.1  Introduction
    7.2  The Central Data Warehouse
    7.3  Data Warehousing Objects
    7.4  Goals of Data Warehouse Architecture
    7.5  Data Warehouse Users
    7.6  Design the Relational Database and OLAP Cubes
    7.7  Data Warehousing Schemas
    Exercise

Chapter 8  Partitioning in Data Warehouse
    8.1  Introduction
    8.2  Hardware Partitioning
    8.3  RAID Levels
    8.4  Software Partitioning Methods
    Exercise

Chapter 9  Data Mart and Meta Data
    9.1  Introduction
    9.2  Data Mart
    9.3  Meta Data
    9.4  Legacy Systems
    Exercise

Chapter 10  Backup and Recovery of the Data Warehouse
    10.1  Introduction
    10.2  Types of Backup
    10.3  Backup the Data Warehouse
    10.4  Data Warehouse Recovery Models
    Exercise

Chapter 11  Performance Tuning and Future of Data Warehouse
    11.1  Introduction
    11.2  Prioritized Tuning Steps
    11.3  Challenges of the Data Warehouse
    11.4  Benefits of Data Warehousing
    11.5  Future of the Data Warehouse
    11.6  New Architecture of Data Warehouse
    Exercise

Appendix A  Glossary
Appendix B  Multiple-Choice Questions
Appendix C  Frequently Asked Questions & Answers
Appendix D  Model Question Papers
Bibliography
Index


CHAPTER 1
DATA MINING AND WAREHOUSING CONCEPTS

1.1 INTRODUCTION

The past couple of decades have seen a dramatic increase in the amount of information or data being stored in electronic format. This accumulation of data has taken place at an explosive rate. It has been estimated that the amount of information in the world doubles every 20 months, and the sizes as well as the number of databases are increasing even faster. There are many examples that can be cited: point-of-sale data in retail, policy and claim data in insurance, medical history data in health care, and financial data in banking and securities are some instances of the types of data being collected.

Fig. 1.1 The growing base of data (volume of data plotted against the years 1970 to 2000)

Data storage became easier as large amounts of computing power became available at low cost, i.e., as the cost of processing power and storage fell, making data cheap. There was also the introduction of new machine learning methods for knowledge representation based on logic programming, etc., in addition to traditional statistical analysis of data. The new methods tend to be computationally intensive, hence the demand for more processing power.

The data storage bits/bytes are calculated as follows:
• 1 byte = 8 bits
• 1 kilobyte (K/KB) = 2^10 bytes = 1,024 bytes

• 1 megabyte (M/MB) = 2^20 bytes = 1,048,576 bytes
• 1 gigabyte (G/GB) = 2^30 bytes = 1,073,741,824 bytes
• 1 terabyte (T/TB) = 2^40 bytes = 1,099,511,627,776 bytes
• 1 petabyte (P/PB) = 2^50 bytes = 1,125,899,906,842,624 bytes
• 1 exabyte (E/EB) = 2^60 bytes = 1,152,921,504,606,846,976 bytes
• 1 zettabyte (Z/ZB) = 1,000,000,000,000,000,000,000 bytes
• 1 yottabyte (Y/YB) = 1,000,000,000,000,000,000,000,000 bytes

It was recognized that information is at the heart of business operations and that decision-makers could make use of the data stored to gain valuable insight into the business. Database management systems gave access to the data stored, but this was only a small part of what could be gained from the data. Traditional on-line transaction processing systems (OLTPs) are good at putting data into databases quickly, safely and efficiently, but are not good at delivering meaningful analysis in return. Analyzing data can provide further knowledge about a business by going beyond the data explicitly stored to derive knowledge about the business. This is where Data Mining, or Knowledge Discovery in Databases (KDD), has obvious benefits for any enterprise.

1.2 DATA MINING DEFINITIONS

The term data mining has been stretched beyond its limits to apply to any form of data analysis. Some of the numerous definitions of Data Mining, or Knowledge Discovery in Databases, are given below.

According to William J. Frawley, Gregory Piatetsky-Shapiro and Christopher J. Matheus, "Data Mining, or Knowledge Discovery in Databases (KDD) as it is also known, is the nontrivial extraction of implicit, previously unknown, and potentially useful information from data." This encompasses a number of different technical approaches, such as clustering, data summarization, learning classification rules, finding dependency networks, analyzing changes, and detecting anomalies.

Data mining, also called data archeology, data dredging or data harvesting, is the process of extracting hidden knowledge from large volumes of raw data and using it to make crucial business decisions. Extraction of interesting information or patterns from data in large databases is known as data mining.

According to Marcel Holsheimer and Arno Siebes, "Data mining is the search for relationships and global patterns that exist in large databases but are 'hidden' among the vast amount of data, such as a relationship between patient data and their medical diagnosis. These relationships represent valuable knowledge about the database and the objects in the database and, if the database is a faithful mirror, of the real world registered by the database."

Data mining refers to using a variety of techniques to identify nuggets of information or decision-making knowledge in bodies of data, and extracting these in such a way that they can be put to use in areas such as decision support, prediction, forecasting and estimation. The data is often voluminous but, as it stands, of low value because no direct use can be made of it; it is the hidden information in the data that is useful.

Data mining is concerned with the analysis of data and the use of software techniques for finding patterns and regularities in sets of data. It is the computer which is responsible for finding the patterns by identifying the underlying rules and features in the data.

The analysis process starts with a set of data and uses a methodology to develop an optimal representation of the structure of the data, during which time knowledge is acquired. Once knowledge has been acquired, it can be extended to larger sets of data, working on the assumption that the larger data set has a structure similar to the sample data. Again, this is analogous to a mining operation where large amounts of low-grade material are sifted through in order to find something of value.

The idea is that it is possible to strike gold in unexpected places, as the data mining software extracts patterns not previously discernible, or so obvious that no one has noticed them before. Data mining analysis tends to work from the data up, and the best techniques are those developed with an orientation towards large volumes of data, making use of as much of the collected data as possible to arrive at reliable conclusions and decisions.

1.3 DATA MINING TOOLS

The best commercial database packages are now available for data mining and warehousing, including IBM DB2, Intelligent Miner, ORACLE9i, INFORMIX-OnLine XPS, Clementine, 4Thought and SYBASE System 10.

1.3.1 Oracle Data Mining (ODM)

Oracle enables data mining inside the database for performance and scalability. ORACLE9i Data Mining provides a Java API to exploit the data mining functionality that is embedded within the ORACLE9i database. By delivering complete programmatic control of data mining in the database, Oracle Data Mining (ODM) delivers powerful, scalable modeling and real-time scoring. This enables e-businesses to incorporate predictions and classifications in all processes and decision points throughout the business cycle. ODM is designed to meet the challenges of vast amounts of data, delivering accurate insights completely integrated into e-business applications. This integrated intelligence enables the automation and decision speed that e-businesses require in order to compete today.

Some of the capabilities are:
• An API that provides programmatic control and application integration
• Analytical capabilities with OLAP and statistical functions in the database
• Multiple algorithms: Naive Bayes and Association Rules
• Real-time and batch scoring modes
• Multiple prediction types
• Association insights

1.4 APPLICATIONS OF DATA MINING

Data mining has many and varied fields of application, some of which are listed below.

1.4.1 Sales/Marketing
• Identify buying patterns from customers
• Find associations among customer demographic characteristics
• Predict response to mailing campaigns
• Market basket analysis

1.4.2 Banking
• Credit card fraud detection
• Identify "loyal" customers
• Predict customers likely to change their credit card affiliation
• Determine credit card spending by customer groups
• Find hidden correlations between different financial indicators
• Identify stock trading rules from historical market data

1.4.3 Insurance and Health Care
• Claims analysis, i.e., which medical procedures are claimed together
• Predict which customers will buy new policies
• Identify behaviour patterns of risky customers
• Identify fraudulent behaviour

1.4.4 Transportation
• Determine the distribution schedules among outlets
• Analyze loading patterns

1.4.5 Medicine
• Characterize patient behaviour to predict office visits
• Identify successful medical therapies for different illnesses

1.5 DATA WAREHOUSING AND CHARACTERISTICS

Data warehousing is a collection of decision support technologies aimed at enabling the knowledge worker (executive, manager, analyst) to make better and faster decisions. A data warehouse is a relational database management system (RDBMS) designed specifically to meet the needs of decision support rather than transaction processing. It can be loosely defined as any centralized data repository which can be queried for business benefit, but this will be more clearly defined later. Data warehousing is a powerful technique that makes it possible to extract archived operational data and overcome inconsistencies between different legacy data formats. As well as integrating data throughout an enterprise, regardless of location, format, or communication requirements, it is possible to incorporate additional or expert information. Data mining potential can be enhanced if the appropriate data has been collected and stored in a data warehouse.

In addition to a relational database, a data warehouse environment includes an extraction, transportation, transformation, and loading (ETL) solution, an online analytical processing (OLAP) engine, client analysis tools, and other applications that manage the process of gathering data and delivering it to business users.

ETL tools are meant to extract, transform and load the data into the data warehouse for decision making. Before the evolution of ETL tools, this ETL process was done manually using SQL code created by programmers. This task was tedious in many cases since it involved many resources, complex coding and more work hours; on top of that, maintaining the code placed a great challenge on the programmers. These difficulties are eliminated by ETL tools, since they are very powerful and offer many advantages in all stages of the ETL process, from extraction, data cleansing, data profiling and transformation through debugging and loading into the data warehouse, when compared to the old method. There are a number of ETL tools available in the market to perform the ETL process according to business/technical requirements. Following are some of those:

Tool name                           Company name
Informatica                         Informatica Corporation
DT/Studio                           Embarcadero Technologies
Data Stage                          IBM
Ab Initio                           Ab Initio Software Corporation
Data Junction                       Pervasive Software
Oracle Warehouse Builder            Oracle Corporation
Microsoft SQL Server Integration    Microsoft
Transform On Demand                 Solonde
Transformation Manager              ETL Solutions

A common way of introducing data warehousing is to refer to the characteristics of a data warehouse as set forth by William Inmon, author of Building the Data Warehouse and the guru who is widely considered to be the originator of the data warehousing concept. These characteristics are:
• Subject Oriented
• Integrated
• Nonvolatile
• Time Variant

Data warehouses are designed to help you analyze data. For example, to learn more about your company's sales data, you can build a warehouse that concentrates on sales. Using this warehouse, you can answer questions like "Who was our best customer for this item last year?" This ability to define a data warehouse by subject matter, sales in this case, makes the data warehouse subject oriented.

Integration is closely related to subject orientation. Data warehouses must put data from disparate sources into a consistent format. They must resolve such problems as naming conflicts and inconsistencies among units of measure. When they achieve this, they are said to be integrated.
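The extract-transform-load flow described in this section can be sketched in a few lines of code. The following is only an illustration, not tied to any of the tools listed above; it assumes Python with its built-in sqlite3 module, and the source records, table name and column names are invented for the example.

```python
# A minimal ETL sketch (illustration only): extract records from a source,
# transform them into a consistent format, and load them into a warehouse table.
import sqlite3

def extract():
    # Pretend these rows come from two operational systems that code gender differently.
    return [
        {"customer": "U", "gender": "m", "amount": "120.50"},
        {"customer": "V", "gender": 1,   "amount": "75"},
        {"customer": "W", "gender": 0,   "amount": "310.00"},
    ]

def transform(rows):
    # Resolve inconsistent coding ("m"/"f" versus 1/0) and convert amounts to numbers.
    gender_map = {"m": "m", "f": "f", 1: "m", 0: "f"}
    return [(r["customer"], gender_map[r["gender"]], float(r["amount"])) for r in rows]

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS sales (customer TEXT, gender TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
print(conn.execute("SELECT * FROM sales").fetchall())
```

Real ETL tools add scheduling, error handling, profiling and metadata management on top of this basic pattern, which is exactly the tedium they remove compared with hand-written SQL.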

For instance, gender might be coded as "m" and "f" in one application and as 0 and 1 in another. When data are moved from the operational environment into the data warehouse, they assume a consistent coding convention, e.g., gender data is transformed to "m" and "f".

In order to discover trends in business, analysts need large amounts of data. The data warehouse contains a place for storing data that are 10 to 20 years old, or older, to be used for comparisons, trends, and forecasting. A data warehouse's focus on change over time is what is meant by the term time variant. This is very much in contrast to online transaction processing (OLTP) systems, where performance requirements demand that historical data be moved to an archive.

Nonvolatile means that, once entered into the warehouse, data should not change; these data are not updated. This is logical because the purpose of a warehouse is to enable you to analyze what has occurred.

1.6 DATA WAREHOUSE ARCHITECTURE

Data warehouse architecture includes tools for extracting data from multiple operational databases and external sources; for cleaning, transforming and integrating this data; for loading data into the data warehouse; and for periodically refreshing the warehouse to reflect updates at the sources and to purge data from the warehouse, perhaps onto slower archival storage. In addition to the main warehouse, there may be several departmental data marts: a data mart is a data warehouse that is designed for a particular line of business, such as sales, marketing, or finance. In a dependent data mart, the data can be derived from an enterprise-wide data warehouse; in an independent data mart, data can be collected directly from the sources.

Fig. 1.2 Data warehousing architecture (external sources and operational databases feed extract, transform, load and refresh tools; the data warehouse and data marts, with a metadata repository and monitoring and administration, are served by OLAP servers to query/reporting, analysis and data mining tools)

Data in the warehouse and data marts is stored and managed by

one or more warehouse servers, which present multidimensional views of data to a variety of front-end tools: query tools, report writers, analysis tools, and data mining tools. Finally, there is a repository for storing and managing meta data, and tools for monitoring and administering the warehousing system.

EXERCISE
1. Define data mining.
2. What is a data warehouse?
3. List out the characteristics of a data warehouse.
4. Explain the data warehouse architecture.
5. Explain the uses of data mining and describe any three data mining software products.

CHAPTER 2
LEARNING AND TYPES OF KNOWLEDGE

2.1 INTRODUCTION

It seems that learning is one of the basic necessities of life: living creatures must have the ability to adapt themselves to their environment. The nature of the environment is dynamic, hence the model must be adaptive, i.e., it should be able to learn. There are undeniably many different ways of learning, but instead of considering all kinds of different definitions, we will concentrate on an operational definition of the concept of learning: we will not state what learning is, but rather specify how to determine when someone has learned something. In order to define learning operationally, we need two other concepts: a certain 'task' to be carried out either well or badly, and a learning 'subject' that is to carry out the task.

2.2 WHAT IS LEARNING?

An individual learns how to carry out a certain task by making a transition from a situation in which the task cannot be carried out to a situation in which the same task can be carried out under the same circumstances.

2.2.1 Inductive Learning

Induction is the inference of information from data, and inductive learning is the model-building process where the environment, i.e., a database, is analyzed with a view to finding patterns. Similar objects are grouped in classes and rules are formulated whereby it is possible to predict the class of unseen objects. Generally it is only possible to use a small number of properties to characterize objects, so we make abstractions: objects which satisfy the same subset of properties are mapped to the same internal representation.

Inductive learning, where the system infers knowledge itself from observing its environment, has two main strategies:

• Supervised learning: This is learning from examples, where a teacher helps the system construct a model by defining classes and supplying examples of each class. The system has to find a description of each class, i.e., the common properties in the examples. Once the description has been formulated, the description and the class form a classification rule which can be used to predict the class of previously unseen objects. This process of classification identifies classes such that each class has a unique pattern of values which forms the class description. This is similar to discriminant analysis in statistics.

• Unsupervised learning: This is learning from observation and discovery. The data mining system is supplied with objects but no classes are defined, so it has to observe the examples and recognize patterns (i.e., the class descriptions) by itself. This results in a set of class descriptions, one for each class discovered in the environment. Again, this is similar to cluster analysis in statistics.

Induction is therefore the extraction of patterns. The quality of the model produced by inductive learning methods is such that the model can be used to predict the outcome of future situations, in other words not only for states encountered but also for unseen states that could occur. The problem is that most environments have different states, i.e., changes within, and it is not always possible to verify a model by checking it for all possible situations.

Given a set of examples, the system can construct multiple models, some of which will be simpler than others. The simpler models are more likely to be correct if we adhere to Ockham's razor, which states that if there are multiple explanations for a particular phenomenon it makes sense to choose the simplest, because it is more likely to capture the nature of the phenomenon.

2.3 ANATOMY OF DATA MINING

The use of computer technology in decision support is now widespread and pervasive across a wide range of business and industry. This has resulted in the capture and availability of data in immense volume and proportion. There are many examples that can be cited: point-of-sale data in retail, policy and claim data in insurance, medical history data in health care, and financial data in banking and securities are some instances of the types of data being collected. The data are typically a collection of records, where each individual record may correspond to a transaction or a customer, and the fields in the record correspond to attributes. Very often these fields are of mixed type, with some being numerical (continuous valued, e.g., age) and some symbolic (discrete valued, e.g., disease).

Fig. 2.1 The multi-disciplinary approach in data mining (data mining draws on database systems, statistics, machine learning, visualization, algorithms and other disciplines)
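As a concrete illustration of the two learning strategies described in Section 2.2.1, the short sketch below fits a supervised classifier from labelled examples and lets an unsupervised algorithm discover groups in the same data without labels. It assumes the scikit-learn library is available; the tiny data set is invented purely for illustration.

```python
# Sketch only: supervised learning (classes supplied by a "teacher") versus
# unsupervised learning (the system must discover the groups itself).
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# Each example: [age, systolic blood pressure]
X = [[25, 110], [30, 115], [28, 112], [62, 160], [58, 155], [65, 170]]
y = ["healthy", "healthy", "healthy", "hypertensive", "hypertensive", "hypertensive"]

# Supervised: classes are predefined and examples of each class are given.
clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[60, 158]]))      # predicts the class of a previously unseen object

# Unsupervised: no classes are defined; the algorithm groups similar objects.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)                    # discovered cluster membership for each example
```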

2.3.1 Statistics

Statistics has a solid theoretical foundation, but the results from statistics can be overwhelming and difficult to interpret, as they require user guidance as to where and how to analyze the data. Analysts have used statistical analysis systems such as SAS and SPSS to detect unusual patterns and to explain patterns using statistical models such as linear models. Statistics have a role to play, and data mining will not replace such analyses; rather, they can act as more directed analyses based on the results of data mining. Data mining allows the expert's knowledge of the data and the advanced analysis techniques of the computer to work together.

2.3.2 Machine Learning

Machine learning is the automation of a learning process, and learning is tantamount to the construction of rules based on observations of environmental states and transitions. This is a broad field which includes not only learning from examples, but also reinforcement learning, learning with a teacher, etc. Machine learning examines previous examples and their outcomes and learns how to reproduce these and make generalizations about new cases. Generally, a machine learning system does not use single observations of its environment but an entire finite set, called the training set, at once. This set contains examples, i.e., observations coded in some machine-readable form. The training set is finite, hence not all concepts can be learned exactly. A learning algorithm takes the data set and its accompanying information as input and returns a statement, e.g., a concept representing the results of learning, as output.

2.3.2.1 Differences between Data Mining and Machine Learning

Knowledge Discovery in Databases (KDD), or Data Mining, and the part of Machine Learning (ML) dealing with learning from examples overlap in the algorithms used and the problems addressed. The main differences are:
• KDD is concerned with finding understandable knowledge, while ML is concerned with improving the performance of an agent. So training a neural network to balance a pole is part of ML, but not of KDD. However, there are efforts to extract knowledge from neural networks which are very relevant for KDD.
• KDD is concerned with very large, real-world databases, while ML typically (but not always) looks at smaller data sets. So efficiency questions are much more important for KDD.
• ML is a broader field which includes not only learning from examples, but also reinforcement learning, learning with a teacher, etc. KDD is that part of ML which is concerned with finding understandable knowledge in large sets of real-world examples; statistical induction, for example, produces something like the average rate of failure of machines.

When integrating machine-learning techniques into database systems to implement KDD, some of the databases require:

• More efficient learning algorithms, because realistic databases are normally very large and noisy. It is usual that the database is designed for purposes different from data mining, and so properties or attributes that would simplify the learning task are not present, nor can they be requested from the real world. Databases are usually contaminated by errors, so the data mining algorithm has to cope with noise, whereas ML has laboratory-type examples, i.e., as near perfect as possible.
• More expressive representations for both data, e.g., tuples in relational databases, which represent instances of a problem domain, and knowledge, e.g., rules in a rule-based system, which can be used to solve users' problems in the domain, together with the semantic information contained in the relational schemata.

Practical KDD systems are expected to include three interconnected phases:
• Translation of standard database information into a form suitable for use by learning facilities.
• Using machine learning techniques to produce knowledge bases from databases.
• Interpreting the knowledge produced to solve users' problems and/or reduce data spaces (data spaces being the number of examples).

2.3.3 Database Systems

A database is a collection of information related to a particular subject or purpose, such as tracking customer orders or maintaining a music collection.

2.3.4 Algorithms

2.3.4.1 Genetic Algorithms
Optimization techniques that use processes such as genetic combination, mutation, and natural selection in a design based on the concepts of natural evolution.

2.3.4.2 Statistical Algorithms
Statistics is the science of collecting, organizing, and applying numerical facts. Statistical analysis systems such as SAS and SPSS have been used by analysts to detect unusual patterns and explain patterns using statistical models such as linear models.

2.3.5 Visualization

Data visualization makes it possible for the analyst to gain a deeper, more intuitive understanding of the data and as such can work well alongside data mining. On its own, data visualization can be overwhelmed by the volume of data in a database, but in conjunction with data mining it can help with exploration. Data mining allows the analyst to focus on certain patterns and trends and explore them in depth using visualization. Visualization covers a wide range of tools for presenting reports to the end user, using various 2D and 3D rendering techniques to distinguish the information presented; the presentation ranges from simple tables to complex graphs.

2.4 DIFFERENT TYPES OF KNOWLEDGE

Knowledge is a collection of interesting and useful patterns in a database. The key issue in knowledge discovery in databases is to realize that there is more information hidden in your data than you are able to distinguish at first sight. In data mining we distinguish four different types of knowledge.

2.4.1 Shallow Knowledge
This is information that can be easily retrieved from the database using a query tool such as Structured Query Language (SQL).

2.4.2 Multi-Dimensional Knowledge
This is information that can be analyzed using online analytical processing (OLAP) tools. With OLAP tools you have the ability to rapidly explore all sorts of clusterings and different orderings of the data, but it is important to realize that most of the things you can do with an OLAP tool can also be done using SQL. The advantage of OLAP tools is that they are optimized for this kind of search and analysis operation; however, OLAP is not as powerful as data mining, since it cannot search for optimal solutions.

2.4.3 Hidden Knowledge
This is data that can be found relatively easily by using pattern recognition or machine learning algorithms. One could use SQL to find these patterns, but this would probably prove extremely time-consuming: a pattern recognition algorithm could find regularities in a database in minutes or at most a couple of hours, whereas you might have to spend months using SQL to achieve the same result. Hidden knowledge is the result of a search over a gentle, hilly search-space landscape, where a search algorithm can easily find a reasonably optimal solution.

2.4.4 Deep Knowledge
This is information that is stored in the database but can only be located if we have a clue that tells us where to look. An example of this is encrypted information stored in a database: it is almost impossible to decipher a message that is encrypted if you do not have the key. Deep knowledge is typically the result of a search over only a tiny local optimum, with no indication of any elevations in the neighborhood; a search algorithm could roam around this landscape for ever without achieving any significant result. For the present, at any rate, there is a limit to what one can learn.

TABLE 2.1 Different Types of Knowledge and Techniques

Type of knowledge               Technique
Shallow knowledge               SQL
Multi-dimensional knowledge     OLAP
Hidden knowledge                Data mining
Deep knowledge                  Clues
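As a small illustration of the distinction in Table 2.1, the sketch below retrieves shallow knowledge directly with SQL (using Python's built-in sqlite3 module; the patients table and its values are invented for this example), while hidden knowledge would call for the mining techniques discussed in later chapters.

```python
# Shallow knowledge: facts that a plain SQL query can retrieve directly.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (name TEXT, age INTEGER, hypertension INTEGER)")
conn.executemany("INSERT INTO patients VALUES (?, ?, ?)",
                 [("U", 49, 1), ("V", 45, 0), ("W", 61, 0), ("X", 49, 1), ("Y", 43, 1)])

# A shallow-knowledge question: how many hypertensive patients are over 45?
count = conn.execute(
    "SELECT COUNT(*) FROM patients WHERE hypertension = 1 AND age > 45").fetchone()[0]
print(count)
# Hidden knowledge (e.g. which combinations of attributes predict hypertension)
# would instead require the pattern recognition techniques discussed later.
```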

EXERCISE
1. What is learning? Explain the different types of learning with an example.
2. With a neat sketch, explain the architecture of data mining.
3. Write in detail about data mining and query tools.
4. Explain in detail self-learning computer systems and concept learning.
5. What are the different kinds of knowledge?
6. Compare data mining with query tools.
7. What are the differences between data mining and machine learning?

CHAPTER 3
KNOWLEDGE DISCOVERY PROCESS

3.1 INTRODUCTION

Data mining is an effective set of analysis tools and techniques used in the decision support process. Data Mining, or Knowledge Discovery in Databases (KDD) as it is also known, is the nontrivial extraction of implicit, previously unknown, and potentially useful information from data. This encompasses a number of different technical approaches, such as clustering, data summarization, learning classification rules, finding dependency networks, analyzing changes, and detecting anomalies.

However, misconceptions about the role that data mining plays in decision support solutions can lead to confusion about, and misuse of, these tools and techniques. Data mining is not a "black box" process in which the data miner simply builds a data mining model and watches as meaningful information appears. Data mining does not guarantee the behavior of future data through the analysis of historical data. Instead, data mining is a guidance tool, used to provide insight into the trends inherent in historical information. Quite a bit of work, including research, selection, cleaning, enrichment, and transformation of data, must be performed first if data mining is to truly supply meaningful information. Although Analysis Services removes much of the mystery and complexity of the data mining process by providing data mining tools for creating and examining data mining models, these tools work best on well-prepared data to answer well-researched business scenarios; the GIGO (garbage in, garbage out) law applies more to data mining than to any other area in Analysis Services.

Data mining and data warehouses complement each other. Well-designed data warehouses have handled the data selection, cleaning, enrichment, and transformation steps that are also typically associated with data mining. Similarly, the process of data warehousing improves as, through data mining, it becomes apparent which data elements are considered more meaningful than others in terms of decision support; this, in turn, improves the data cleaning and transformation steps that are so crucial to good data warehousing practices.

3.2 EVALUATION OF DATA MINING

TABLE 3.1 Data Mining Evaluation

Data Collection (1960s)
    Business question: "What was my total revenue in the last four years?"
    Enabling technologies: Computers, tapes, disks
    Characteristics: Retrospective, static data delivery

Data Access (1980s)
    Business question: "What were unit sales in New Delhi last May?"
    Enabling technologies: Relational databases (RDBMS), Structured Query Language (SQL), ODBC
    Characteristics: Retrospective, dynamic data delivery at record level

Data Warehousing & Decision Support (1990s)
    Business question: "What were unit sales in New Delhi last March? Drill down to Chennai."
    Enabling technologies: On-line analytic processing (OLAP), multidimensional databases, data warehouses
    Characteristics: Retrospective, dynamic data delivery at multiple levels

Data Mining (Emerging Today)
    Business question: "What's likely to happen to Chennai unit sales next month? Why?"
    Enabling technologies: Advanced algorithms, multiprocessor computers, massive databases
    Characteristics: Prospective, proactive information delivery

3.3 STAGES OF THE DATA MINING PROCESS

Fig. 3.1 An overview of the steps that compose the KDD process: (1) data selection and cleaning, (2) data preparation, (3) transformation, (4) data mining/pattern discovery, and (5) interpretation/knowledge

3.3.1 Data Selection

There are two parts to selecting data for data mining. The first part, locating data, tends to be more mechanical in nature than the second part, identifying data, which requires significant input by a domain expert. A domain expert is someone who is intimately familiar with the business purposes and aspects, or domain, of the data to be examined.

In our example we start with a database containing records of patients with hypertension, including the duration of the disease. It is a selection of operational data from the Public Health Center patients of a small village and contains information about name, address, designation, age, disease particulars and period of disease. In order to facilitate the KDD process, a copy of this operational data is drawn and stored in a separate database.

3.3.2 Data Cleaning

Data cleaning is the process of ensuring that, for data mining purposes, the data is uniform in terms of key and attribute usage. It involves inspecting data for physical inconsistencies, such as orphan records or required fields set to null, and logical inconsistencies, such as accounts with closing dates earlier than starting dates. Data cleaning is separate from data enrichment and data transformation: data cleaning attempts to correct misused or incorrect attributes in existing data, data enrichment adds new attributes to existing data, and data transformation changes the form or structure of attributes in existing data to meet specific data mining requirements. There are several types of cleaning process, some of which can be executed in advance while others are invoked only after pollution is detected at the coding or the discovery stage. Although data mining and data cleaning are two different disciplines, they have a lot in common, and pattern recognition algorithms can be applied in cleaning data.

An important element in a cleaning operation is the de-duplication of records. In a normal database some clients will be represented by several records, although in many cases this will be the result of negligence, such as people making typing errors, or of clients moving from one place to another without notifying a change of address. In the original table we have X and X1: they have the same number and address but different names, which is a strong indication that they are the same person and that one of the spellings is incorrect.

TABLE 3.2 Original Data

P.No.  Name  Address  Sex     Age  Hype_dur
101    U     UU1      Male    49   ———
102    V     VV1      Male    45   10.5
103    W     WW1      Male    61   -10.5
104    X     XX1      Female  49   ———
105    Y     YY1      Male    43   ———
104    X1    XX1      Female  49   10.5

TABLE 3.3 De-duplication

P.No.  Name  Address  Sex     Age  Hype_dur
101    U     UU1      Male    49   ———
102    V     VV1      Male    45   10.5
103    W     WW1      Male    61   -10.5
104    X     XX1      Female  49   10.5
105    Y     YY1      Male    43   ———

A second type of pollution that frequently occurs is lack of domain consistency and disambiguation. This type of pollution is particularly damaging, because it is hard to trace, but it greatly influences the type of patterns found when we apply data mining to the table. In our example, the duration value for patient number 103 is negative. The value must be positive, so the incorrect value is replaced with NULL; in our example we also replace the unknown data with NULL.

TABLE 3.4 Domain Consistency

P.No.  Name  Address  Sex     Age  Hype_dur
101    U     UU1      Male    49   NULL
102    V     VV1      Male    45   10.5
103    W     WW1      Male    61   NULL
104    X     XX1      Female  49   10.5
105    Y     YY1      Male    43   NULL
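These two cleaning steps (de-duplication and the domain-consistency check) can be sketched with a few lines of Python. The snippet below assumes the pandas library; the column names simply mirror the example tables above.

```python
# Sketch of the cleaning steps above: de-duplication followed by a
# domain-consistency check on the hypertension-duration column.
import pandas as pd

df = pd.DataFrame({
    "p_no":     [101, 102, 103, 104, 105, 104],
    "name":     ["U", "V", "W", "X", "Y", "X1"],
    "address":  ["UU1", "VV1", "WW1", "XX1", "YY1", "XX1"],
    "sex":      ["Male", "Male", "Male", "Female", "Male", "Female"],
    "age":      [49, 45, 61, 49, 43, 49],
    "hype_dur": [None, 10.5, -10.5, None, None, 10.5],
})

# De-duplication: records with the same number and address refer to one person;
# keep the record that actually carries a duration value.
df = df.sort_values("hype_dur").drop_duplicates(subset=["p_no", "address"], keep="first")

# Domain consistency: a disease duration cannot be negative, so mark it as missing.
df.loc[df["hype_dur"] < 0, "hype_dur"] = None

print(df.sort_values("p_no"))
```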

3.3.3 Data Enrichment

Data enrichment is the process of adding new attributes, such as calculated fields or data from external sources, to existing data. Most references on data mining tend to combine this step with data transformation, but data enrichment involves adding information to existing data, whereas data transformation involves the manipulation of data. Enrichment can include combining internal data with external data, obtained from different departments or companies, or from vendors that sell standardized industry-relevant data.

Data enrichment is an important step if you are attempting to mine marginally acceptable data. You can add information to such data from standardized external industry sources to make the data mining process more successful and reliable, or provide additional derived attributes for a better understanding of indirect relationships. For example, data warehouses frequently provide pre-aggregation across business lines that share common attributes for cross-selling analysis purposes. As with data cleaning and data transformation, this step is best handled in a temporary storage area, because data enrichment, in particular the combination of external data sources with the data to be mined, can require a number of updates to both data and meta data, and such updates are generally not acceptable in an established data warehouse.

In our example, we will suppose that we have purchased extra information about our patients consisting of age and kidney, heart and stroke related disease. This is more realistic than it may initially seem, since it is quite possible to buy demographic data on average age for a certain neighborhood, and hypertension and diabetes patients can be traced fairly easily. It is not particularly important how the information was gathered, and it can easily be joined to the existing records.

TABLE 3.5 Enrichment

Name  Address  Sex     Age  Kidney  Heart  Stroke
U     UU1      Male    49   No      Yes    No
V     VV1      Male    45   No      Yes    No
W     WW1      Male    61   No      Yes    No
X     XX1      Female  49   No      Yes    No
Y     YY1      Male    43   No      No     No

3.3.4 Data Transformation

Data transformation, in terms of data mining, is the process of changing the form or structure of existing attributes. Data transformation is separate from data cleansing and data enrichment for data mining purposes because it does not correct existing attribute data or add new attributes, but instead grooms existing attributes for data mining purposes.

3.3.5 Coding

The data in our example can undergo a number of transformations. First, the extra information that was purchased to enrich the database is added to the records describing the individuals. Then the data is coded: in our example we convert the yes/no disease indicators into 1 and 0.

TABLE 3.6 Coded Database

Name  Occupation  Address  Sex     Age  Kidney  Heart  Stroke  HyperTension  HypeDur  Diabetes  Diadur
U     Labor       UU1      Male    49   0       1      0       1             NULL     0         NULL
V     Farmer      VV1      Male    45   0       1      0       0             10.5     1         13
W     Farmer      WW1      Male    61   0       1      0       0             NULL     1         14
X     Farmer      XX1      Female  49   0       1      0       1             10.5     0         15
Y     Farmer      YY1      Male    43   0       0      0       1             NULL     0         NULL
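A minimal sketch of this coding step, again assuming pandas and using the column names of the running example, is given below.

```python
# Coding sketch: convert the yes/no disease indicators of Table 3.5 into 1/0,
# as done in Table 3.6.
import pandas as pd

enriched = pd.DataFrame({
    "name":   ["U", "V", "W", "X", "Y"],
    "kidney": ["No", "No", "No", "No", "No"],
    "heart":  ["Yes", "Yes", "Yes", "Yes", "No"],
    "stroke": ["No", "No", "No", "No", "No"],
})

coded = enriched.copy()
for col in ["kidney", "heart", "stroke"]:
    coded[col] = coded[col].map({"Yes": 1, "No": 0})

print(coded)
```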

In the next stage, we select only those records that have enough information to be of value. In most tables that are collected from operational data, a lot of desirable data is missing and most of it is impossible to retrieve. In some cases, however, lack of information can be a valuable indication of interesting patterns, especially in fraud detection. A general rule states that any deletion of data must be a conscious decision, taken after a thorough analysis of the possible consequences; this is a situation that occurs frequently in practice.

TABLE 3.7 Final Table after Removing Rows

Name  Occupation  Address  Sex     Age  Kidney  Heart  Stroke  HyperTension  HypeDur (Years)  Diabetes  Diadur (Years)
V     Farmer      VV1      Male    45   0       1      0       0             10.5             1         13
X     Farmer      XX1      Female  49   0       1      0       1             10.5             0         15

3.3.6 Data Mining

The discovery stage of the KDD process is fascinating. Data mining is not so much a single technique as the idea that there is more knowledge hidden in the data than shows itself on the surface. Any technique that helps extract more out of our data is useful, so data mining techniques form quite a heterogeneous group. From this point of view, data mining is really an "anything goes" affair. Although various different techniques are used for different purposes, those that are of interest in the present context are:
• Query tools
• Statistical techniques
• Visualization
• Online analytical processing
• Case-based learning (k-nearest neighbour)
• Decision trees
• Association rules
• Neural networks
• Genetic algorithms

Here we shall discuss some of the most important machine-learning and pattern recognition algorithms, and in this way get an idea of the opportunities that are available as well as some of the problems that occur during the discovery stage. We shall see that some learning algorithms do well on one part of the data set where others fail, and this clearly indicates the need for hybrid learning. We shall also show how relationships are detected during the data mining stage.

3.3.7 Visualization/Interpretation/Evaluation

Visualization techniques are a very useful method of discovering patterns in data sets, and may be used at the beginning of a data mining process to get a rough feeling for the quality of the data set and where patterns are to be found. Advanced graphical techniques in virtual reality enable people to wander through artificial data spaces, while the historic development of data sets can be displayed as a kind of animated movie. Interesting possibilities are offered by object-oriented three-dimensional tool kits, such as Inventor, which enable the user to explore three-dimensional structures interactively. An elementary technique that can be of great value is the so-called scatter diagram. These simple methods can provide us with a wealth of information: scatter diagrams can be used to identify interesting subsets of the data so that we can focus on the rest of the data mining process. There is a whole field of research dedicated to the search for interesting projections of data sets, which is called projection pursuit.
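For instance, a scatter diagram of two attributes can be produced with a few lines of Python (matplotlib is assumed here); the attribute values are those of the running patient example, with missing durations shown as 0 purely for plotting.

```python
# Scatter-diagram sketch: plot age against hypertension duration to get a rough
# feel for where patterns (and outliers) might be found.
import matplotlib.pyplot as plt

age      = [49, 45, 61, 49, 43]
hype_dur = [0, 10.5, 0, 10.5, 0]   # NULL durations shown as 0 for plotting only

plt.scatter(age, hype_dur)
plt.xlabel("Age")
plt.ylabel("Hypertension duration (years)")
plt.title("Scatter diagram of the patient data")
plt.show()
```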

3.4 DATA MINING OPERATIONS

Four operations are associated with discovery-driven data mining.

3.4.1 Creation of Prediction and Classification Models
This is the most commonly used operation, primarily because of the proliferation of automatic model-development techniques. The goal of this operation is to use the contents of the database, which reflect historical data, i.e., data about the past, to automatically generate a model that can predict future behavior. Model creation has traditionally been pursued using statistical techniques. The value added by data mining techniques in this operation is their ability to generate models that are comprehensible and explainable, since many data mining modeling techniques express models as sets of if...then... rules.

3.4.2 Association
Whereas the goal of the modeling operation is to create a generalized description that characterizes the contents of a database, the goal of association is to establish relations between the records in a database.

3.4.3 Database Segmentation
As databases grow and are populated with diverse types of data, it is often necessary to partition them into collections of related records, either as a means of obtaining a summary of each database or before performing a data mining operation such as model creation.

3.4.4 Deviation Detection
This operation is the exact opposite of database segmentation: its goal is to identify outlying points in a particular data set and to explain whether they are due to noise or other impurities being present in the data, or to causal reasons. It is usually applied in conjunction with database segmentation, and it is usually the source of true discovery, since outliers express deviation from some previously known expectation and norm.

3.5 ARCHITECTURE OF DATA MINING

Based on databases, data warehouses or other information repositories, a typical data mining system may have the following components.

Database, data warehouse or other information repository
This is one or a set of databases, data warehouses, spreadsheets or other kinds of information repositories. Data cleaning and data integration techniques may be performed on the data.

Database or data warehouse server
The database or data warehouse server is responsible for fetching the relevant data, based on the user's data mining request.

Data mining engine
This is essential to the data mining system and ideally consists of a set of functional modules for tasks such as characterization, association, classification, cluster analysis, and evolution and deviation analysis.

Fig. 3.2 Architecture of data mining (a graphical interface, pattern evaluation module, data mining engine and knowledge base sit on top of a database or data warehouse server, which is fed through data cleaning, data integration and filtering from databases and the data warehouse)

Knowledge base
This is the domain knowledge that is used to guide the search or to evaluate the interestingness of resulting patterns. Such knowledge can include concept hierarchies, used to organize attributes or attribute values into different levels of abstraction. Knowledge such as user beliefs, which can be used to assess a pattern's interestingness based on its unexpectedness, may also be included.

Pattern evaluation module
This component typically employs interestingness measures and interacts with the data mining modules so as to focus the search towards interesting patterns. For efficient data mining, it is highly recommended to push the evaluation of pattern interestingness as deep as possible into the mining process so as to confine the search to only the interesting patterns.

Graphical user interface
This module communicates between the user and the data mining system, allowing the user to interact with the system by specifying a data mining query or task, providing information to help focus the search, and performing exploratory data mining based on the intermediate data mining results. In addition, this component allows the user to browse database and data warehouse schemas or data structures, evaluate mined patterns, and visualize the patterns in different forms.

EXERCISE
1. Explain the various steps involved in the knowledge discovery process.
2. Write a short note on the evaluation of data mining.
3. Explain the four operations that are associated with discovery-driven data mining.
4. With a neat sketch, explain the architecture of a data mining system.
5. Write short notes on (i) Data selection, (ii) Data cleaning, (iii) Data enrichment.

CHAPTER 4
DATA MINING TECHNIQUES

4.1 INTRODUCTION

There are several different methods used to perform data mining tasks. These techniques not only require specific types of data structures, but also imply certain types of algorithmic approaches. In this chapter, we briefly examine some of the data mining techniques.

4.2 CLASSIFICATION

Classification is learning a function that maps a data item into one of several predefined classes. Examples of classification methods used as part of knowledge discovery applications include classifying trends in financial markets and the automated identification of objects of interest in large image databases. Prediction involves using some variables or fields in the database to predict unknown or future values of other variables of interest. Description focuses on finding human-interpretable patterns describing the data.

4.3 NEURAL NETWORKS

Neural networks are analytic techniques modeled after the (hypothesized) processes of learning in the cognitive system and the neurological functions of the brain, capable of predicting new observations (on specific variables) from other observations (on the same or other variables) after executing a process of so-called learning from existing data.

Fig. 4.1 Neural networks (neurons arranged in an input layer, hidden layers 1 and 2, and an output)

The neural network is first subjected to the process of "training". In that phase, neurons apply an iterative process to the inputs (variables) to adjust the weights of the network in order to optimally predict (in traditional terms one could say, to find a "fit" to) the sample data on which the training is performed. After the phase of learning from an existing data set, the neural network is ready and can then be used to generate predictions.

Advantages: Theoretically, neural networks are capable of approximating any continuous function, and thus the researcher does not need to have any hypotheses about the underlying model, or even, to some extent, about which variables matter. Neural network techniques can also be used as a component of analyses designed to build explanatory models, because neural networks can help explore data sets in search of relevant variables or groups of variables; the results of such explorations can then facilitate the process of model building.

Disadvantages: The final solution depends on the initial conditions of the network. It is virtually impossible to "interpret" the solution in traditional, analytic terms, such as those used to build theories that explain phenomena.

4.4 DECISION TREES

Decision trees are powerful and popular tools for classification and prediction. Decision trees represent rules. The original idea of the construction of decision trees goes back to the work of Hoveland and Hunt on Concept Learning Systems (CLS) in the late 1950s.

A decision tree is a classifier in the form of a tree structure where each node is either:
• a leaf node, indicating a class of instances, or
• a decision node that specifies some test to be carried out on a single attribute value, with one branch and sub-tree for each possible outcome of the test.

A decision tree can be used to classify an instance by starting at the root of the tree and moving through it until a leaf node is reached, which provides the classification of the instance.

4.4.1 Constructing Decision Trees

Decision tree programs construct a decision tree T from a set of training cases. The algorithm consists of five steps:
1. T <- the whole training set. Create a T node.
2. If all examples in T are positive, create a 'P' node with T as its parent and stop.
3. If all examples in T are negative, create an 'N' node with T as its parent and stop.

4. Select an attribute X with values v1, v2, ..., vN and partition T into subsets T1, T2, ..., TN according to their values on X. Create N nodes Ti (i = 1, ..., N) with T as their parent and X = vi as the label of the branch from T to Ti.
5. For each Ti do: T <- Ti and go to step 2.

Fig. 4.2 An example of a simple decision tree (decision nodes test the attributes A, B and C, e.g. A = red or A = blue, B > 8.5 or B < 8.5, B > 4.5 or B < 4.5, C = true or C = false; leaf nodes assign the class K = x or K = y)

Decision tree induction is a typical inductive approach to learning knowledge on classification. The key requirements for mining with decision trees are:
• Predefined classes: The categories to which cases are to be assigned must have been established beforehand (supervised data).
• Discrete classes: A case does or does not belong to a particular class, and there must be far more cases than classes.
• Sufficient data: Usually hundreds or even thousands of training cases.
• "Logical" classification model: A classifier that can be expressed only as decision trees or as a set of production rules.

4.4.2 ID3 Decision Tree Algorithm

J. Ross Quinlan originally developed ID3 at the University of Sydney. He first presented ID3 in 1975 in the book Machine Learning, vol. 1, no. 1. ID3 is based on the Concept Learning System (CLS) algorithm.
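As an illustration only (not the authors' implementation of ID3), the sketch below grows a small decision tree with scikit-learn, which can use the same entropy measure that ID3 uses to choose attributes; the attribute values loosely mirror Fig. 4.2 and are invented for the example.

```python
# Sketch: fit a small decision tree on data shaped like Fig. 4.2.
# scikit-learn is an assumed dependency; its trees are CART-based but the
# "entropy" criterion selects attributes in the same spirit as ID3.
from sklearn.tree import DecisionTreeClassifier, export_text

# Attributes: A (red=0, blue=1), B (numeric), C (false=0, true=1); class K.
X = [[0, 9.0, 1], [0, 9.5, 0], [0, 7.0, 0], [1, 3.0, 0], [1, 5.0, 0], [1, 6.0, 1]]
y = ["x", "x", "y", "x", "y", "y"]

tree = DecisionTreeClassifier(criterion="entropy").fit(X, y)
print(export_text(tree, feature_names=["A", "B", "C"]))   # the learned rules
print(tree.predict([[1, 4.0, 1]]))                        # classify an unseen case
```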

4.4.2.1 Basic Ideas Behind ID3
In the decision tree each node corresponds to a non-goal attribute and each arc to a possible value of that attribute. A leaf of the tree specifies the expected value of the goal attribute for the records described by the path from the root to that leaf. [This defines what a decision tree is.] In the decision tree, at each node should be associated the non-goal attribute which is most informative among the attributes not yet considered in the path from the root. Entropy is used to measure how informative a node is.

Which attribute is the best classifier? The estimation criterion in the decision tree algorithm is the selection of an attribute to test at each decision node in the tree. The goal is to select the attribute that is most useful for classifying examples. A good quantitative measure of the worth of an attribute is a statistical property called information gain, which measures how well a given attribute separates the training examples according to their target classification. This measure is used to select among the candidate attributes at each step while growing the tree.

Entropy—a measure of homogeneity of the set of examples
In order to define information gain precisely, we need to define a measure commonly used in information theory, called entropy, that characterizes the (im)purity of an arbitrary collection of examples. Given a set S, containing only positive and negative examples of some target concept (a 2 class problem), the entropy of set S relative to this simple, binary classification is defined as:
    Entropy(S) = – pp log2 pp – pn log2 pn
where pp is the proportion of positive examples in S and pn is the proportion of negative examples in S. In all calculations involving entropy we define 0 log 0 to be 0.

To illustrate, suppose S is a collection of 25 examples, including 15 positive and 10 negative examples [15+, 10–]. Then the entropy of S relative to this classification is
    Entropy(S) = – (15/25) log2 (15/25) – (10/25) log2 (10/25) = 0.970
Notice that the entropy is 0 if all members of S belong to the same class. For example, if all members are positive (pp = 1), then pn is 0, and
    Entropy(S) = – 1 × log2 (1) – 0 × log2 0 = – 1 × 0 – 0 × log2 0 = 0.
Note the entropy is 1 (at its maximum!) when the collection contains an equal number of positive and negative examples. If the collection contains unequal numbers of positive and negative examples, the entropy is between 0 and 1. Fig. 4.3 shows the form of the entropy function relative to a binary classification, as p+ varies between 0 and 1.

One interpretation of entropy from information theory is that it specifies the minimum number of bits of information needed to encode the classification of an arbitrary member of S (i.e., a member of S drawn at random with uniform probability). For example, if pp is 1, the receiver knows the drawn example will be positive, so no message need be sent, and the entropy is 0. If pp is 0.5, one bit is required to indicate whether the drawn example is positive or negative.

If pp is 0.8, then a collection of messages can be encoded using on average less than 1 bit per message by assigning shorter codes to collections of positive examples and longer codes to less likely negative examples.

Fig. 4.3 The entropy function relative to a binary classification, as the proportion of positive examples pp varies between 0 and 1

Thus far we have discussed entropy in the special case where the target classification is binary. If the target attribute takes on c different values, then the entropy of S relative to this c-wise classification is defined as
    Entropy(S) = Σ (i = 1 to c) – pi log2 pi
where pi is the proportion of S belonging to class i. Note the logarithm is still base 2 because entropy is a measure of the expected encoding length measured in bits. Note also that if the target attribute can take on c possible values, the maximum possible entropy is log2 c.

Information gain measures the expected reduction in entropy
Given entropy as a measure of the impurity in a collection of training examples, we can now define a measure of the effectiveness of an attribute in classifying the training data. The measure we will use, called information gain, is simply the expected reduction in entropy caused by partitioning the examples according to this attribute. More precisely, the information gain, Gain(S, A), of an attribute A relative to a collection of examples S, is defined as
    Gain(S, A) = Entropy(S) – Σ (v ∈ Values(A)) (|Sv| / |S|) Entropy(Sv)
where Values(A) is the set of all possible values for attribute A, and Sv is the subset of S for which attribute A has value v (i.e., Sv = {s ∈ S | A(s) = v}). Note the first term in the equation for Gain is just the entropy of the original collection S and the second term is the expected value of the entropy after S is partitioned using attribute A. The expected entropy described by this second term is simply the sum of the entropies of each subset Sv, weighted by the fraction of examples |Sv|/|S| that belong to Sv.

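As an illustration of how these two quantities are computed in practice, the short Python sketch below (the function names and the dictionary representation of examples are our own choices, not part of the text) implements entropy and information gain and checks the [15+, 10–] collection discussed above:

import math
from collections import Counter

def entropy(labels):
    """Entropy of a list of class labels, in bits (0*log2(0) treated as 0)."""
    total = len(labels)
    counts = Counter(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c > 0)

def information_gain(examples, attribute, target):
    """Gain(S, A): entropy of S minus the weighted entropy of the subsets Sv."""
    total = len(examples)
    before = entropy([e[target] for e in examples])
    after = 0.0
    for v in set(e[attribute] for e in examples):
        subset = [e[target] for e in examples if e[attribute] == v]
        after += (len(subset) / total) * entropy(subset)
    return before - after

# The [15+, 10-] collection from the text: entropy is about 0.971 (0.970 after rounding).
labels = ['+'] * 15 + ['-'] * 10
print(round(entropy(labels), 3))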
Gain(S, A) is therefore the expected reduction in entropy caused by knowing the value of attribute A. Put another way, Gain(S, A) is the information provided about the target attribute value, given the value of some other attribute A. The value of Gain(S, A) is the number of bits saved when encoding the target value of an arbitrary member of S, by knowing the value of attribute A.

The process of selecting a new attribute and partitioning the training examples is now repeated for each non-terminal descendant node, this time using only the training examples associated with that node. Attributes that have been incorporated higher in the tree are excluded, so that any given attribute can appear at most once along any path through the tree. This process continues for each new leaf node until either of two conditions is met:
1. Every attribute has already been included along this path through the tree, or
2. the training examples associated with this leaf node all have the same target attribute value (i.e., their entropy is zero).

Algorithm
function ID3
Input: (R: a set of non-target attributes, C: the target attribute, S: a training set) returns a decision tree;
begin
  If S is empty, return a single node with value failure;
  If S consists of records all with the same value for the target attribute, return a single leaf node with that value;
  If R is empty, then return a single node with the value of the most frequent of the values of the target attribute that are found in records of S [in that case there may be errors, examples that will be improperly classified];
  Let A be the attribute with largest Gain(A, S) among attributes in R;
  Let {aj | j = 1, 2, …, m} be the values of attribute A;
  Let {Sj | j = 1, 2, …, m} be the subsets of S consisting respectively of records with value aj for A;
  Return a tree with root labeled A and arcs labeled a1, a2, …, am going respectively to the trees
      ID3(R – {A}, C, S1), ID3(R – {A}, C, S2), …, ID3(R – {A}, C, Sm);
  Recursively apply ID3 to the subsets {Sj | j = 1, 2, …, m} until they are empty
end

Example of ID3
Suppose we want ID3 to decide whether the weather is amenable to playing cricket. Over the course of 2 weeks, data is collected to help ID3 build a decision tree (see Table 4.2). The target classification is "should we play cricket", which can be yes or no. The weather attributes are outlook, temperature, humidity, and wind speed. They can have the following values:
    outlook = {sunny, overcast, rain}
    temperature = {hot, mild, cool}
    humidity = {high, normal}
    wind = {weak, strong}
Examples of set S are:

TABLE 4.2 Weather Report
Day   Outlook    Temperature   Humidity   Wind     Play ball
D1    Sunny      Hot           High       Weak     No
D2    Sunny      Hot           High       Strong   No
D3    Overcast   Hot           High       Weak     Yes
D4    Rain       Mild          High       Weak     Yes
D5    Rain       Cool          Normal     Weak     Yes
D6    Rain       Cool          Normal     Strong   No
D7    Overcast   Cool          Normal     Strong   Yes
D8    Sunny      Mild          High       Weak     No
D9    Sunny      Cool          Normal     Weak     Yes
D10   Rain       Mild          Normal     Weak     Yes
D11   Sunny      Mild          Normal     Strong   Yes
D12   Overcast   Mild          High       Strong   Yes
D13   Overcast   Hot           Normal     Weak     Yes
D14   Rain       Mild          High       Strong   No

If S is a collection of 14 examples with 9 YES and 5 NO examples, then
    Entropy(S) = – (9/14) log2 (9/14) – (5/14) log2 (5/14) = 0.940
Notice entropy is 0 if all members of S belong to the same class (the data is perfectly classified). The range of entropy is 0 ("perfectly classified") to 1 ("totally random").

Gain(S, A) is the information gain of example set S on attribute A and is defined as
    Gain(S, A) = Entropy(S) – Σ ((|Sv|/|S|) × Entropy(Sv))
where the sum runs over every value v of all possible values of attribute A, Sv is the subset of S for which attribute A has value v, |Sv| is the number of elements in Sv, and |S| is the number of elements in S.

Suppose S is the set of 14 examples above, and consider the attribute wind speed. The values of Wind can be Weak or Strong. The classification of these 14 examples is 9 YES and 5 NO. For attribute Wind, there are 8 occurrences of Wind = Weak and 6 occurrences of Wind = Strong. For Wind = Weak, 6 of the examples are YES and 2 are NO. For Wind = Strong, 3 are YES and 3 are NO. Therefore,
    Entropy(Sweak) = – (6/8) log2 (6/8) – (2/8) log2 (2/8) = 0.811
    Entropy(Sstrong) = – (3/6) log2 (3/6) – (3/6) log2 (3/6) = 1.00
    Gain(S, Wind) = Entropy(S) – (8/14) Entropy(Sweak) – (6/14) Entropy(Sstrong)
                  = 0.940 – (8/14) × 0.811 – (6/14) × 1.00 = 0.048

We need to find which attribute will be the root node in our decision tree. For each attribute, the gain is calculated and the highest gain is used in the decision node. The gain is calculated for all four attributes:
    Gain(S, Outlook) = 0.246
    Gain(S, Temperature) = 0.029
    Gain(S, Humidity) = 0.151
    Gain(S, Wind) = 0.048
The Outlook attribute has the highest gain, therefore it is used as the decision attribute in the root node. Since Outlook has three possible values, the root node has three branches (sunny, overcast, rain).

The next question is "what attribute should be tested at the Sunny branch node?" Since we have used Outlook at the root, we only decide among the remaining three attributes: Humidity, Temperature, or Wind.
    Ssunny = {D1, D2, D8, D9, D11} = 5 examples from Table 4.2 with outlook = sunny
    Gain(Ssunny, Humidity) = 0.970

Fig. 4.4 A decision tree for the concept Playball: Outlook at the root; the Sunny branch tests Humidity (High → No, Normal → Yes), the Overcast branch is Yes, and the Rain branch tests Wind (Strong → No, Weak → Yes)

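Before continuing down the Sunny branch, the root-level gains listed above can be re-checked with a short script. This sketch reuses the entropy() and information_gain() helpers from the earlier sketch; the tuple encoding of Table 4.2 is our own illustrative choice:

# Reusing entropy() and information_gain() from the sketch above.
weather = [
    # (Outlook, Temperature, Humidity, Wind, Play)
    ('Sunny', 'Hot', 'High', 'Weak', 'No'),       ('Sunny', 'Hot', 'High', 'Strong', 'No'),
    ('Overcast', 'Hot', 'High', 'Weak', 'Yes'),   ('Rain', 'Mild', 'High', 'Weak', 'Yes'),
    ('Rain', 'Cool', 'Normal', 'Weak', 'Yes'),    ('Rain', 'Cool', 'Normal', 'Strong', 'No'),
    ('Overcast', 'Cool', 'Normal', 'Strong', 'Yes'), ('Sunny', 'Mild', 'High', 'Weak', 'No'),
    ('Sunny', 'Cool', 'Normal', 'Weak', 'Yes'),   ('Rain', 'Mild', 'Normal', 'Weak', 'Yes'),
    ('Sunny', 'Mild', 'Normal', 'Strong', 'Yes'), ('Overcast', 'Mild', 'High', 'Strong', 'Yes'),
    ('Overcast', 'Hot', 'Normal', 'Weak', 'Yes'), ('Rain', 'Mild', 'High', 'Strong', 'No'),
]
columns = ['Outlook', 'Temperature', 'Humidity', 'Wind', 'Play']
examples = [dict(zip(columns, row)) for row in weather]

for attr in columns[:-1]:
    print(attr, round(information_gain(examples, attr, 'Play'), 3))
# Prints roughly: Outlook 0.247, Temperature 0.029, Humidity 0.152, Wind 0.048
# (the same values as in the text, up to rounding), so Outlook becomes the root.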
Gain(Ssunny, Temperature) = 0.570
Gain(Ssunny, Wind) = 0.019
Humidity has the highest gain; therefore, it is used as the decision node for the Sunny branch. This process goes on until all data is classified perfectly or we run out of attributes. The final decision tree is shown in Fig. 4.4. The decision tree can also be expressed in rule format:
IF outlook = sunny AND humidity = high THEN cricket = no
IF outlook = sunny AND humidity = normal THEN cricket = yes
IF outlook = overcast THEN cricket = yes
IF outlook = rain AND wind = strong THEN cricket = no
IF outlook = rain AND wind = weak THEN cricket = yes
ID3 has been incorporated in a number of commercial rule-induction packages. Some specific applications include medical diagnosis, credit risk assessment of loan applications, equipment malfunctions by their cause, classification of soybean diseases, and web search classification.

4.4.4 A Detailed Survey on Decision Tree Technique
A decision tree contains zero or more internal nodes and one or more leaf nodes. All internal nodes have two or more child nodes. All non-terminal nodes contain splits, which test the value of a mathematical or logical expression of the attributes. Each leaf node has a class label associated with it. The task of constructing a tree from the training set is called tree induction.

4.4.5 Appropriate Problems for Decision Tree Learning
Instances are represented by attribute-value pairs. Instances are described by a fixed set of attributes (e.g., temperature) and their values (e.g., hot). The easiest situation for decision tree learning occurs when each attribute takes on a small number of disjoint possible values (e.g., hot, mild, cold).

4.4.6 Decision Tree Algorithms
Name of the Algorithm   Developer          Year
CHAID                   Kass               1980
CART                    Breiman, et al.    1984
ID3                     Quinlan            1986
C4.5                    Quinlan            1993
SLIQ                    Agrawal, et al.    1996
SPRINT                  Agrawal, et al.    1996

4.4.7 Strengths and Weaknesses of Decision Tree Methods
The strengths of decision tree methods:
1. Decision trees are able to generate understandable rules.
2. Decision trees perform classification without requiring much computation.
3. Decision trees are able to handle both continuous and categorical variables.
4. Decision trees provide a clear indication of which fields are most important for prediction or classification.
The weaknesses of decision tree methods:
1. Decision trees are less appropriate for estimation tasks where the goal is to predict the value of a continuous variable such as income, blood pressure, or interest rate.
2. Decision trees are also problematic for time-series data unless a lot of effort is put into presenting the data in such a way that trends and sequential patterns are made visible.
3. Error-prone with too many classes.
4. Computationally expensive to train.
5. Trouble with non-rectangular regions.

4.5 GENETIC ALGORITHM
Genetic Algorithms/Evolutionary algorithms are based on Darwin's theory of survival of the fittest. Here the best program or logic survives from a pool of solutions. Two programs called chromosomes combine to produce a third program called the child. The reproduction process goes through crossover and mutation operations.

4.5.1 Crossover
Using the simple single point crossover reproduction mechanism, a point along the chromosome length is randomly chosen at which the crossover is done, as illustrated below.
(a) Single Point Crossover: In this type, one crossover point is selected and the string from the beginning of the chromosome to the crossover point is copied from one parent and the rest is copied from the second parent, resulting in a child. For instance, consider the following chromosomes and a crossover point at position 4:
    1 0 0 0 | 1 1 1      and      1 1 1 1 | 0 0 0
which produce the children
    1 0 0 0 | 0 0 0      and      1 1 1 1 | 1 1 1
Fig. 4.5 Single point crossover
Here we observe that the regions after the crossover point are interchanged in the children.

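As a concrete illustration, the following minimal Python sketch shows single point crossover together with the bit-flip mutation described in Section 4.5.2 below; the function names, the list encoding of the chromosomes and the mutation rate are our own choices, not part of the text:

import random

def single_point_crossover(parent_a, parent_b):
    """Swap the tails of two equal-length chromosomes after a random cut point."""
    point = random.randint(1, len(parent_a) - 1)   # cut somewhere inside the string
    child_1 = parent_a[:point] + parent_b[point:]
    child_2 = parent_b[:point] + parent_a[point:]
    return child_1, child_2

def bit_flip_mutation(chromosome, rate=0.01):
    """Flip each bit independently with a small probability (binary encoding)."""
    return [1 - bit if random.random() < rate else bit for bit in chromosome]

a = [1, 0, 0, 0, 1, 1, 1]
b = [1, 1, 1, 1, 0, 0, 0]
c1, c2 = single_point_crossover(a, b)
print(c1, c2)                           # e.g. [1, 0, 0, 0, 0, 0, 0] [1, 1, 1, 1, 1, 1, 1]
print(bit_flip_mutation(c1, rate=0.2))  # occasionally flips one or more bits of c1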
(b) Two Point Crossover: In this type, two crossover points are selected; the string from the beginning of one chromosome to the first crossover point is copied from one parent, the part from the first to the second crossover point is copied from the second parent, and the rest is copied from the first parent. For instance, consider the following chromosomes and crossover points at positions 2 and 5:
    3 2 | 1 5 6 | 7 4      and      2 7 | 6 5 1 | 4 3
which produce the children
    3 2 | 6 5 1 | 7 4      and      2 7 | 1 5 6 | 4 3
Fig. 4.6 Two point crossover
Here we observe that the crossover regions between the crossover points are interchanged in the children. This type of crossover is mainly employed in permutation encoding and value encoding, where a single point crossover would result in inconsistencies in the child chromosomes.

(c) Tree Crossover: The tree crossover method is most suitable when tree encoding is employed. One crossover point is chosen at random, the parents are divided at that point, and the parts below the crossover point are exchanged to produce new offspring.
Fig. 4.7 Tree crossover (the subtrees of Parent A and Parent B below the crossover points are exchanged to form the offspring)

4.5.2 Mutation
As new individuals are generated, each character is mutated with a given probability. In a binary-coded Genetic Algorithm, mutation may be done by flipping a bit, while in a non-binary-coded GA, mutation involves randomly generating a new character in a specified position. Mutation produces incremental random changes in the offspring generated through crossover. When used by itself without any crossover, mutation is equivalent to a random search consisting of incremental random modification of the existing solution and acceptance if there is improvement. However, when used in the GA, its behavior changes radically. In the GA, mutation serves the crucial role of replacing the gene values lost from the population during the selection process so that they can be tried in a new context, or of providing the gene values that were not present in the initial population.

The type of mutation used, as with crossover, is dependent on the type of encoding employed. The various types are as follows:
(a) Bit Inversion: This mutation type is employed for a binary encoded problem. Here, a bit is randomly selected and inverted, i.e., a bit is changed from 0 to 1 and vice-versa. For instance:
    1 0 0 0 0 1   =>   1 0 0 1 0 1
Fig. 4.8 Bit inversion
(b) Order Changing: This type of mutation is specifically used in permutation-encoded problems. Here, two random points in the chromosome are chosen and interchanged. For instance:
    (1 2 3 4 5 6)   =>   (1 6 3 4 5 2)
Fig. 4.9 Order changing
(c) Value Manipulation: Value manipulation refers to selecting a random point or points in the chromosome and adding or subtracting a small number from it. Hence, this method is specifically useful for real value encoded problems. For instance, consider mutation at allele 4:
    (1.29 5.68 2.86 4.11 5.55)   =>   (1.29 5.68 2.73 4.22 5.55)
(d) Operator Manipulation: This method involves changing the operators randomly in an operator tree and hence is used with tree-encoded problems.
Fig. 4.10 Operator manipulation
Here we observe that the divide operator in the parent is randomly changed to the multiplication operator.

4.6 CLUSTERING
4.6.1 What is Clustering?
Clustering can be considered the most important unsupervised learning problem; so, as with every other problem of this kind, it deals with finding a structure in a collection of unlabeled data.

A definition of clustering could be "the process of organizing objects into groups whose members are similar in some way". A cluster is therefore a collection of objects which are "similar" between them and are "dissimilar" to the objects belonging to other clusters.

Graphical Example
Fig. 4.11 Clustering process—example
In the example, we easily identify the 4 clusters into which the data can be divided; the similarity criterion is distance: two or more objects belong to the same cluster if they are "close" according to a given distance (in this case geometrical distance). This is called distance-based clustering. Another kind of clustering is conceptual clustering: two or more objects belong to the same cluster if this one defines a concept common to all those objects. In other words, objects are grouped according to their fit to descriptive concepts, not according to simple similarity measures.

4.6.2 Distance Functions
Given two p-dimensional data objects i = (xi1, xi2, …, xip) and j = (xj1, xj2, …, xjp), the following common distance functions can be defined:
Euclidean Distance Function
    d(i, j) = sqrt( |xi1 – xj1|^2 + |xi2 – xj2|^2 + … + |xip – xjp|^2 )
Manhattan Distance Function
    d(i, j) = |xi1 – xj1| + |xi2 – xj2| + … + |xip – xjp|

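Both distance functions can be written directly from these definitions; the following small sketch (the function names are our own) illustrates them:

import math

def euclidean(p, q):
    """Euclidean distance between two p-dimensional points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def manhattan(p, q):
    """Manhattan (city-block) distance between two p-dimensional points."""
    return sum(abs(a - b) for a, b in zip(p, q))

print(euclidean((0, 0), (3, 4)))   # 5.0
print(manhattan((0, 0), (3, 4)))   # 7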
When using the Euclidean distance function to compare distances, it is not necessary to calculate the square root, because distances are always positive numbers and, as such, for two distances d1 and d2, √d1 > √d2 ⇔ d1 > d2.

With both of these approaches, an important issue is how to determine the similarity between two objects, so that clusters can be formed from objects with a high similarity to each other. If some of an object's attributes are measured along different scales, then when using the Euclidean distance function, attributes with larger scales of measurement may overwhelm attributes measured on a smaller scale. To prevent this problem, the attribute values are often normalized to lie between 0 and 1.

4.6.3 Goals of Clustering
The goal of clustering is to determine the intrinsic grouping in a set of unlabeled data. For instance, we could be interested in finding representatives for homogeneous groups (data reduction), in finding "natural clusters" and describing their unknown properties ("natural" data types), in finding useful and suitable groupings ("useful" data classes) or in finding unusual data objects (outlier detection). But how to decide what constitutes a good clustering? It can be shown that there is no absolute "best" criterion which would be independent of the final aim of the clustering. Consequently, it is the user who must supply this criterion, in such a way that the result of the clustering will suit their needs.

4.6.4 Applications
Clustering algorithms can be applied in many fields, for instance:
² Marketing: finding groups of customers with similar behavior given a large database of customer data containing their properties and past buying records.
² Biology: classification of plants and animals given their features.
² Libraries: book ordering.
² Insurance: identifying groups of motor insurance policy holders with a high average claim cost; identifying frauds.
² City-planning: identifying groups of houses according to their house type, value and geographical location.
² Earthquake studies: clustering observed earthquake epicenters to identify dangerous zones.
² WWW: document classification; clustering weblog data to discover groups of similar access patterns.

4.6.5 Types of Clustering Algorithms
There are two types of clustering algorithms: 1. Nonhierarchical and 2. Hierarchical. In nonhierarchical clustering, such as the k-means algorithm, the relationship between clusters is undetermined. Hierarchical clustering repeatedly links pairs of clusters until every data object is included in the hierarchy.

4.6.5.1 Nonhierarchical Algorithms
K-means Algorithm
The k-means algorithm is one of a group of algorithms called partitioning methods. The problem of partitioned clustering can be formally stated as follows: given n objects in a d-dimensional metric space, determine a partition of the objects into k groups, or clusters, such that the objects in a cluster are more similar to each other than to objects in different clusters. Recall that a partition divides a set into disjoint parts that together include all members of the set. The value of k may or may not be specified, and a clustering criterion, typically the squared-error criterion, must be adopted.

The solution to this problem is straightforward. The k-means algorithm initializes k clusters by arbitrarily selecting one object to represent each cluster. Each of the remaining objects is assigned to a cluster and the clustering criterion is used to calculate the cluster mean. These means are used as the new cluster points and each object is reassigned to the cluster that it is most similar to. This continues until there is no longer a change when the clusters are recalculated. The algorithm is shown in Fig. 4.12.

k-means Algorithm:
1. Select k clusters arbitrarily.
2. Initialize cluster centers with those k clusters.
3. Do loop
   (a) Partition by assigning or reassigning all data objects to their closest cluster center.
   (b) Compute new cluster centers as the mean value of the objects in each cluster,
   until there is no change in the cluster center calculation.
Fig. 4.12 k-means algorithm

Example
The ten input objects are
    Pi = {(0, 0), (0, 1), (1, 1), (1, 0), (0.5, 0.5), (5, 5), (5, 6), (6, 6), (6, 5), (5.5, 5.5)}
Fig. 4.13 Sample data—k-means clustering process

Suppose we are given the data in Fig. 4.13 as input and we choose k = 2 and the Manhattan distance function dis = |x2 – x1| + |y2 – y1|. An initial partition can be formed by first specifying a set of k seed points. The seed points could be the first k objects or k objects chosen randomly from the input objects. The detailed computation is as follows:

Step 1: Initialize k partitions. We choose the first two objects as seed points and initialize the clusters as C1 = {(0, 0)} and C2 = {(0, 1)}.
Step 2: Since there is only one point in each cluster, that point is the cluster center.
Step 3a: Calculate the distance between each object and each cluster center, assigning the object to the closest cluster. For example, for the third object: dis(1, 3) = |1 – 0| + |1 – 0| = 2 and dis(2, 3) = |1 – 0| + |1 – 1| = 1, so this object is assigned to C2. The fifth object is equidistant from both clusters, so we arbitrarily assign it to C1. After calculating the distance for all points, the clusters contain the following objects:
    C1 = {(0, 0), (1, 0), (0.5, 0.5)}   and   C2 = {(0, 1), (1, 1), (5, 5), (5, 6), (6, 6), (6, 5), (5.5, 5.5)}.
Step 3b: Compute the new cluster center for each cluster.
    New center for C1 = ((0 + 1 + 0.5)/3, (0 + 0 + 0.5)/3) = (0.5, 0.16)
    New center for C2 = ((0 + 1 + 5 + 5 + 6 + 6 + 5.5)/7, (1 + 1 + 5 + 5 + 6 + 6 + 5.5)/7) = (4.1, 4.2)
Step 3a′: Reassign the ten objects to the closest cluster center, resulting in:
    C1 = {(0, 0), (0, 1), (1, 1), (1, 0), (0.5, 0.5)}   and   C2 = {(5, 5), (5, 6), (6, 6), (6, 5), (5.5, 5.5)}.
Step 3b′: Compute the new cluster center for each cluster.
    New center for C1 = (0.5, 0.5)
    New center for C2 = (5.5, 5.5)
The new centers C1 = (0.5, 0.5) and C2 = (5.5, 5.5) differ from the old centers C1 = (0.5, 0.16) and C2 = (4.1, 4.2), so the loop is repeated.
Step 3a″: With the new centers, the assignment of objects does not change; the result is the same as in Step 3a′.
Step 3b″: Compute the new cluster centers. The centers are the same as in Step 3b′, so the algorithm is finished.

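The same computation can be scripted. The following minimal k-means sketch (our own code, not from the text) seeds the clusters with the first k objects and uses the Manhattan distance, as in the worked example; it assumes that no cluster ever becomes empty:

def kmeans(points, k, max_iter=100):
    """Minimal k-means: seed with the first k points, assign by Manhattan
    distance, recompute centers as coordinate-wise means until stable."""
    centers = [points[i] for i in range(k)]
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum(abs(a - b) for a, b in zip(p, c)) for c in centers]
            clusters[dists.index(min(dists))].append(p)
        new_centers = [
            tuple(sum(coord) / len(cluster) for coord in zip(*cluster))
            for cluster in clusters
        ]
        if new_centers == centers:
            break
        centers = new_centers
    return centers, clusters

points = [(0, 0), (0, 1), (1, 1), (1, 0), (0.5, 0.5),
          (5, 5), (5, 6), (6, 6), (6, 5), (5.5, 5.5)]
centers, clusters = kmeans(points, k=2)
print(centers)   # [(0.5, 0.5), (5.5, 5.5)] -- the same final centers as in the example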
4.6.5.2 Hierarchical Clustering
Hierarchical clustering creates a hierarchy of clusters on the data set. Hierarchical algorithms can be either agglomerative or divisive, that is, bottom-up or top-down. All agglomerative hierarchical clustering algorithms begin with each object as a separate group. These groups are successively combined based on similarity until there is only one group remaining or a specified termination condition is satisfied. For n objects, n – 1 mergings are done. The resulting hierarchical tree shows levels of clustering, with each level having a larger number of smaller clusters. The top level of the hierarchy represents one cluster, or k = 1. Unlike with the k-means algorithm, the number of clusters (k) is not specified in hierarchical clustering. After the hierarchy is built, the user can specify the number of clusters required, from 1 to n. To examine more clusters, we simply need to traverse down the hierarchy.

In the context of hierarchical clustering, the hierarchy graph is called a dendogram. Fig. 4.14 shows a sample dendogram that could be produced from a hierarchical clustering algorithm.
Fig. 4.14 Sample dendogram (leaves a to m) without clustering process

Here we describe a simple agglomerative clustering algorithm. Each object in X is initially used to create a cluster containing a single object. These clusters are successively merged into new clusters, which are added to the set of clusters, C. When a pair of clusters is merged, the original clusters are removed from C. Thus, the number of clusters in C decreases until there is only one cluster remaining, containing all the objects from X. The hierarchy of clusters is implicitly represented in the nested sets of C.

The distance function in this algorithm can determine the similarity of clusters through many methods, including single link and group-average. Single link calculates the distance between two clusters as the shortest distance between any two objects contained in those clusters. Group-average first finds the average values for all objects in the group (i.e., cluster) and calculates the distance between clusters as the distance between the average values.

Hierarchical algorithms are rigid in that once a merge has been done, it cannot be undone. Although there are smaller computational costs with this, it can also cause problems if an erroneous merge is done. As such, merge points need to be chosen carefully. Fig. 4.15 shows a simple hierarchical algorithm.

.{x9}..{x6}.16 Sample data Step 1: Initially.cmin2} to C (d) l = l + 1 end while Fig. {x4}. C = {{x1}. for i = 1 to n ci = {xi } end for 2. where ci is a member of the set of clusters C. .{x3 }. {x 8}..{x2 }. l = n+1 4.xn} A distance function dis(c1. where x1 = (0.cmin2) = minimum dis(ci .0). while C.. The set X contains n = 10 elements.15 Agglomerative hierarchical algorithm Example: Suppose the input to the simple agglomerative algorithm described above is the set X.cj) for all ci .cj in C (b) remove cmin1 and cmin2 from C (c) add {cmin1. C = {c1.cb } 3. 4.. 4. x1 to x10 . We will use the Manhattan distance function and the single link method for calculating distance between clusters..{x5}. 0 1  3  2 6 X= 6 5  3  0 2  0 1  1  4 3  6 2  5  2 1  * * * * * * * * * * Fig.{x7}..{x10 }} Step 2: Set l = 11.size > 1 do (a) (cmin1.40 | Data Mining and Warehousing Given: A set X of objects {x1. shown in Fig.16 represented in matrix and graph form. each element xi of X is placed in a cluster ci ..c2) 1. 4.

Step 3 (First iteration of the while loop): C.size = 10. The minimum single link distance between two clusters is 1. This occurs in two places, between c2 and c10 and between c3 and c10. Depending on how our minimum function works we can choose either pair of clusters; arbitrarily we choose the first.
² (cmin1, cmin2) = (c2, c10)
² Remove c2 and c10 from C; add c11 = c2 U c10 = {{x2}, {x10}} to C.
² C = {{x1}, {x3}, {x4}, {x5}, {x6}, {x7}, {x8}, {x9}, {{x2}, {x10}}}
² Set l = l + 1 = 12.

Step 3 (Second iteration): C.size = 9. The minimum single link distance between two clusters is 1, between c3 and c11, because the distance between x3 and x10 is 1, where x10 is in c11.
² (cmin1, cmin2) = (c3, c11)
² Remove c3 and c11 from C; add c12 = c3 U c11 = {{{x2}, {x10}}, {x3}} to C.
² C = {{x1}, {x4}, {x5}, {x6}, {x7}, {x8}, {x9}, {{{x2}, {x10}}, {x3}}}
² Set l = 13.

Step 3 (Third iteration): C.size = 8
² (cmin1, cmin2) = (c9, c12)
² C = {{x1}, {x4}, {x5}, {x6}, {x7}, {x8}, {{{{x2}, {x10}}, {x3}}, {x9}}}

Step 3 (Fourth iteration): C.size = 7
² (cmin1, cmin2) = (c1, c13)
² C = {{x4}, {x5}, {x6}, {x7}, {x8}, {{{{{x2}, {x10}}, {x3}}, {x9}}, {x1}}}

Step 3 (Fifth iteration): C.size = 6
² (cmin1, cmin2) = (c5, c7)
² C = {{x4}, {x6}, {x8}, {{{{{x2}, {x10}}, {x3}}, {x9}}, {x1}}, {{x5}, {x7}}}

Step 3 (Sixth iteration): C.size = 5
² (cmin1, cmin2) = (c4, c8)
² C = {{x6}, {{{{{x2}, {x10}}, {x3}}, {x9}}, {x1}}, {{x5}, {x7}}, {{x4}, {x8}}}

Step 3 (Seventh iteration): C.size = 4
² (cmin1, cmin2) = (c6, c15)
² C = {{{{{{x2}, {x10}}, {x3}}, {x9}}, {x1}}, {{x4}, {x8}}, {{x6}, {{x5}, {x7}}}}

Step 3 (Eighth iteration): C.size = 3
² (cmin1, cmin2) = (c14, c16)
² C = {{{x6}, {{x5}, {x7}}}, {{{{{{x2}, {x10}}, {x3}}, {x9}}, {x1}}, {{x4}, {x8}}}}

Step 3 (Ninth iteration): C.size = 2
² (cmin1, cmin2) = (c17, c18)
² C = {{{{x6}, {{x5}, {x7}}}, {{{{{{x2}, {x10}}, {x3}}, {x9}}, {x1}}, {{x4}, {x8}}}}}

Step 3 (Tenth iteration): C.size = 1. Algorithm done.

The clusters created from this algorithm can be seen in Fig. 4.17. The points which appeared most closely together on the graph of input data in Fig. 4.16 are grouped together more closely in the hierarchy. The corresponding dendogram formed from the hierarchy in C is shown in Fig. 4.18.
Fig. 4.17 Graph with clusters
Fig. 4.18 Sample dendogram after clustering, with leaves ordered X6 X5 X7 X1 X2 X10 X3 X9 X4 X8

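For readers who want to reproduce the hierarchy programmatically, the naive single-link sketch below (our own code) clusters the ten sample points of Fig. 4.16; its O(n**3) pair search and its tie-breaking are illustrative choices, so equal-distance pairs may be merged in a different order than in the walkthrough above:

def single_link_agglomerative(points, target_clusters=1):
    """Naive single-link HACM with Manhattan distance: start with singleton
    clusters and repeatedly merge the closest pair of clusters."""
    manhattan = lambda p, q: sum(abs(a - b) for a, b in zip(p, q))
    clusters = [[p] for p in points]
    merges = []
    while len(clusters) > target_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(manhattan(p, q) for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merges.append((d, clusters[i], clusters[j]))
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return clusters, merges

X = [(0, 0), (1, 1), (3, 1), (2, 4), (6, 3),
     (6, 6), (5, 2), (3, 5), (0, 2), (2, 1)]
clusters, merges = single_link_agglomerative(X, target_clusters=3)
for d, a, b in merges:
    print(f"merged at distance {d}: {a} + {b}")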
4.6.5.3 Other Hierarchical Clustering Methods
1. Single Link Method
² Similarity: Join the most similar pair of objects that are not yet in the same cluster. The distance between 2 clusters is the distance between the closest pair of points, each of which is in one of the two clusters.
² Type of clusters: Long straggly clusters, ellipsoidal, chains.
² Time: Usually O(N**2), though it can range from O(N log N) to O(N**5).
² Space: O(N).
² Advantages: Theoretical properties, efficient implementations, widely used. No cluster centroid or representative is required, so no need arises to recalculate the similarity matrix.
² Disadvantages: Unsuitable for isolating spherical or poorly separated clusters.

SLINK Algorithms
² Arrays record cluster pointer and distance information.
² Process input documents one by one. For each:
  – Compute and store a row of the distance matrix.
  – Find the nearest other point, using the matrix.
  – Relabel clusters, as needed.

MST Algorithms
² The MST (minimum spanning tree) has all the information needed to generate the single link hierarchy in O(N**2) operations, or the single link hierarchy can be built at the same time as the MST.
² The Prim-Dijkstra algorithm operates as follows:
  – Place an arbitrary document in the MST and connect its NN (Nearest Neighbor) to it, to create an initial MST fragment.
  – Find the document not in the MST that is closest to any point in the MST, and add it to the current MST fragment.
  – If a document remains that is not in the MST fragment, repeat the previous step.

2. Complete Link Method
² Similarity: Join the least similar pair between each of two clusters.
² Type of clusters: All entries in a cluster are linked to one another within some minimum similarity, so we have small, tightly bound clusters.
² Time: Voorhees alg. worst case is O(N**2), but sparse matrices require much less.
² Space: Voorhees alg. worst case is O(N**3), but sparse matrices require much less.
² Advantages: Good results from (Voorhees) comparative studies.
² Disadvantages: Difficult to apply to large data sets, since the most efficient algorithm is the general hierarchical agglomerative clustering method (HACM) using the stored data or stored matrix approach.

Group Average Link Method
² Similarity: Use the average value of the pairwise links within a cluster, based upon all objects in the cluster.
² Type of clusters: Intermediate in tightness between single link and complete link.
² Time: O(N**2)
² Space: O(N)
² Advantages: Ranked well in evaluation studies.
² Disadvantages: Expensive for large collections.
Algorithm
² The sim between a cluster centroid and any doc = the mean sim between the doc and all docs in the cluster.
² Hence, one can use centroids only to compute cluster-cluster sim, in O(N) space. Save space and time by using the inner product of 2 (appropriately weighted) vectors — see the Voorhees alg. below.

Voorhees Algorithm
² Variation on the sorted matrix HACM approach.
² Based on the fact that, if the sims between all pairs of docs are processed in descending order, two clusters of size mi and mj can be merged as soon as the (mi × mj)-th similarity of documents in those clusters is reached.
² Requires a sorted list of doc-doc similarities.
² Requires a way to count the no. of sims seen between any 2 active clusters.

Ward's Method (minimum variance method)
² Similarity: Join the cluster pair whose merger minimizes the increase in the total within-group error sum of squares, based on Euclidean distance between centroids.
² Type of clusters: Homogeneous clusters in a symmetric hierarchy; the cluster centroid is nicely characterized by its center of gravity.
² Time: O(N**2)
² Space: O(N) — carries out agglomerations in restricted spatial regions.
² Advantages: Good at recovering cluster structure; yields a unique and exact hierarchy.
² Disadvantages: Sensitive to outliers, poor at recovering elongated clusters.

RNN: We can apply a reciprocal nearest neighbor (RNN) algorithm, since for any point or cluster there exists a chain of nearest neighbors (NNs) that ends with a pair that are each other's NN.
Algorithm - Single Cluster:
1. Select an arbitrary doc.
2. Follow the NN chain from it till an RNN pair is found.

3. Merge the 2 points in that pair, replacing them with a single new point.
4. If there is only one point, stop. Otherwise, if there is a point in the NN chain preceding the merged points, return to step 2, else to step 1.

General algorithm for HACM
All hierarchical agglomerative clustering methods (HACMs) can be described by the following general algorithm.
Algorithm
1. Identify the 2 closest points and combine them into a cluster.
2. Identify and combine the next 2 closest points (treating existing clusters as points too).
3. If more than one cluster remains, return to step 1.

How to apply the general algorithm? Lance and Williams proposed the Lance-Williams dissimilarity update formula, to calculate dissimilarities between a new cluster and existing points, based on the dissimilarities prior to forming the new cluster. This formula has 3 parameters, and each HACM can be characterized by its own set of Lance-Williams parameters.

Implementations of the general algorithm
² Stored matrix approach: Use the matrix, and then apply Lance-Williams to recalculate dissimilarities between cluster centers. Storage is therefore O(N**2) and time is at least O(N**2), but will be O(N**3) if the matrix is scanned linearly.
² Stored data approach: O(N) space for the data, but pairwise dissimilarities must be recomputed, so O(N**3) time is needed.
² Sorted matrix approach: O(N**2) to calculate the dissimilarity matrix, O(N**2 log N**2) to sort it, O(N**2) to construct the hierarchy; one need not store the data set, and the matrix can be processed linearly, which reduces disk accesses.

Centroid and Median methods
² Similarity: At each stage join the pair of clusters with the most similar centroids. Then the algorithm above can be applied, using the appropriate Lance-Williams dissimilarities.
² Type of clusters: A cluster is represented by the coordinates of the group centroid or median.
² Disadvantages: Newly formed clusters may differ overly much from their constituent points, meaning that updates may cause large changes throughout the cluster hierarchy.

4.7 ONLINE ANALYTIC PROCESSING (OLAP)
The term On-Line Analytic Processing - OLAP (or Fast Analysis of Shared Multidimensional Information - FASMI) refers to technology that allows users of multidimensional databases to

generate on-line descriptive or comparative summaries ("views") of data and other analytic queries. Note that despite its name, analyses referred to as OLAP do not need to be performed truly "online" (or in real-time); the term applies to analyses of multidimensional databases (that may, obviously, contain dynamically updated information) through efficient "multidimensional" queries that reference various types of data. OLAP facilities can be integrated into corporate (enterprise-wide) database systems and they allow analysts and managers to monitor the performance of the business (e.g., various aspects of the manufacturing process or numbers and types of completed transactions at different locations) or the market. The final result of OLAP techniques can be very simple (e.g., frequency tables, descriptive statistics, simple cross-tabulations) or more complex (e.g., they may involve seasonal adjustments, removal of outliers, and other forms of cleaning the data). More information about OLAP is discussed in Chapter 6.

4.8 ASSOCIATION RULES
Association rule mining finds interesting associations and/or correlation relationships among large sets of data items. Association rules show attribute-value conditions that occur frequently together in a given data set. A typical and widely-used example of association rule mining is Market Basket Analysis.

Market Basket Analysis is a modeling technique based on the theory that if you buy a certain group of items then you are more (or less) likely to buy another group of items. The set of items a customer buys is referred to as an item set, and market basket analysis seeks to find relationships between purchases. Typically the relationship will be in the form of a rule: IF {bread} THEN {butter}. This condition extracts the hidden information, i.e., if a customer used to buy bread, he will also buy butter as a side dish.

There are two types of association rule levels:
² Support Level
² Confidence Level

4.8.1 Rules for Support Level
The support level is the minimum percentage of instances in the database that contain all items listed in a given association rule.

Support of an item set
Let T be the set of all transactions under consideration; e.g., let T be the set of all "baskets" or "carts" of products bought by the customers from a supermarket – say on a given day. The support of an item set S is the percentage of those transactions in T which contain S.

In the supermarket example this is the number of "baskets" that contain a given set S of products, for example S = {bread, butter, milk}. If U is the set of all transactions that contain all items in S, then
    Support(S) = (|U| / |T|) × 100%
where |U| and |T| are the number of elements in U and T, respectively. If a customer buys the set X = {milk, bread, apples, banana, sausages, cheese, onions, potatoes} then S is obviously a subset of X, and hence that transaction is in U. If there are 318 customers and 242 of them buy such a set X or a similar one that contains S, then support(S) = (242/318) = 76.1%.

4.8.2 Rules for Confidence Level
"If A then B": rule confidence is the conditional probability that B is true when A is known to be true. To evaluate association rules, the confidence of a rule is the number of cases in which the rule is correct relative to the number of cases in which it is applicable. For example, let R = "butter and bread –> milk". If a customer buys butter and bread, then the rule is applicable and it says that he/she can be expected to buy milk. If he/she does not buy butter or does not buy bread or buys neither, then the rule is not applicable and thus (obviously) does not say anything about this customer. More intuitively, the confidence of a rule R = "A and B –> C" is the support of the set of all items that appear in the rule divided by the support of the antecedent of the rule, i.e.,
    Confidence(R) = (support({A, B, C}) / support({A, B})) × 100%
An item set is frequent if its support is greater than or equal to a threshold value.

4.8.3 APRIORI Algorithm
The first algorithm to generate all frequent sets and confident association rules was the AIS algorithm by Agrawal et al., which was given together with the introduction of this mining problem. Shortly after that, the algorithm was improved and renamed Apriori by Agrawal et al., by exploiting the monotonic property of the frequency of item sets and the confidence of association rules.

Frequent item set mining problem
A transactional database consists of a sequence of transactions: T = (t1, …, tn). A transaction is a set of items (ti ∈ I). Transactions are often called baskets, referring to the primary application domain (i.e., market-basket analysis). A set of items is often called an item set by the data mining community. The (absolute) support or the occurrence of X (denoted by Supp(X)) is the number of transactions that are supersets of X (i.e., that contain X). The relative support is the absolute support divided by the number of transactions (i.e., n).

Association rule mining problem
This program is also capable of mining association rules. An association rule is like an implication: X –> Y means that if item set X occurs in a transaction, then item set Y also occurs

with high probability. This probability is given by the confidence of the rule. It is like an approximation of p(Y|X): it is the number of transactions that contain both X and Y divided by the number of transactions that contain X, thus conf(X –> Y) = Supp(X U Y)/Supp(X). An association rule is valid if its confidence and support are greater than or equal to the corresponding threshold values.

In the frequent itemset mining problem a transaction database and a relative support threshold (traditionally denoted by min_supp) are given and we have to find all frequent itemsets.

Apriori steps are as follows:
² Count item occurrences to determine the frequent item sets.
² Candidates are generated.
² Count the support of the candidate item sets; the pruning step ensures that candidates are built only from item sets already known to be frequent.
² Use the frequent item sets to generate the desired rules.

Algorithm
Input: I // Itemsets, D // Database of transactions, S // Support
Output: L // Large Itemsets
Apriori Algorithm
k = 0 // k is used as the scan number
L = 0
C1 = I // Initial candidates are set to be the items
Repeat
    k = k + 1
    Lk = 0
    For each Ii ∈ Ck do
        ci = 0 // initial counts for each itemset are 0
    For each tj ∈ D do
        For each Ii ∈ Ck do
            If Ii ∈ tj then ci = ci + 1
    For each Ii ∈ Ck do
        If ci >= (s x |D|) do
            Lk = Lk U Ii
    L = L U Lk

    Ck+1 = Apriori_Gen(Lk)
Until Ck+1 = 0

Apriori-gen algorithm
Input: Li–1 // Large itemsets of size i–1
Output: Ci // Candidates of size i
Ci = 0
For each I ∈ Li–1 do
    For each J ≠ I ∈ Li–1 do
        If i–2 of the elements in I and J are equal then
            Ck = Ck U (I U J)

AR Gen algorithm
Input: D // Database of transactions, I // Items, L // Large itemsets, S // Support, œ // Confidence
Output: R // Association rules satisfying s and œ
AR Gen Algorithm
R = 0
For each I ∈ L do
    For each x ⊂ I such that x ≠ 0 do
        If support(I)/support(x) >= œ then
            R = R U (x => (I – x))

Example: Consider the following transaction database with an absolute minimum support of 2 (50%):
TID    Items
100    1 3 4
200    2 3 5
300    1 2 3 5
400    2 5

The candidate and large itemsets produced in each pass (itemset : support) are:
C1: {1} 2, {2} 3, {3} 3, {4} 1, {5} 3
L1: {1} 2, {2} 3, {3} 3, {5} 3
C2: {1 2} 1, {1 3} 2, {1 5} 1, {2 3} 2, {2 5} 3, {3 5} 2
L2: {1 3} 2, {2 3} 2, {2 5} 3, {3 5} 2
C3: {2 3 5}
L3: {2 3 5} 2
Fig. 4.19 Apriori example

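The candidate generation and counting passes of Fig. 4.19 can be reproduced with a short script. The Apriori sketch below is our own code and uses an unpruned join step for brevity, so it is an illustration of the level-wise idea rather than a transcription of the Apriori-gen procedure above:

from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise Apriori: count candidate itemsets, keep those meeting the
    absolute support threshold, and join survivors to form the next level."""
    transactions = [set(t) for t in transactions]
    items = sorted({i for t in transactions for i in t})
    candidates = [frozenset([i]) for i in items]
    large, k = {}, 1
    while candidates:
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        level = {c: n for c, n in counts.items() if n >= min_support}
        large.update(level)
        # Join step (no subset pruning here): unions of large k-itemsets of size k+1.
        keys = list(level)
        candidates = list({a | b for a in keys for b in keys if len(a | b) == k + 1})
        k += 1
    return large

transactions = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]
frequent = apriori(transactions, min_support=2)
for itemset, count in sorted(frequent.items(), key=lambda x: (len(x[0]), sorted(x[0]))):
    print(sorted(itemset), count)
# {2, 3, 5} appears in 2 of the 4 transactions (support 50%), and
# confidence({2, 3} -> {5}) = 2/2 = 100%, matching the rules derived below.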
Rules from the example:
    Support(2 3) = 2/4 × 100 = 50%
    Support(2 3 5) = 2/4 × 100 = 50%
    Confidence(2 3 → 5) = Support(2 3 5) / Support(2 3) = 50/50 × 100 = 100%
    2 3 → 5 (Support 50%, Confidence 100%)

4.8.4 Sampling Algorithm
Here a sample is drawn from the database such that it is memory resident. Then any one algorithm is used to find the large item sets for the sample. If an item set is large in a sample, it is viewed to be potentially large in the entire database.
Advantages
(1) Facilitates efficient counting of item sets in large databases.
(2) The original sampling algorithm reduces the number of database scans to one or two.

Input: I // Item set, D // Database of transactions, S // Support
Output: L // Large item set
Sampling algorithm:
Ds = Sample drawn from D
PL = Apriori(I, Ds, smalls)
C = PL U BD~(PL)
L = 0
For each Ii ∈ C do
    ci = 0 // Initial counts for each itemset are 0
For each tj ∈ D do // First scan count
    For each Ii ∈ C do
        If Ii ∈ tj then ci = ci + 1
For each Ii ∈ C do
    If ci >= (s x |D|) do
        L = L U Ii
ML = {x | x ∈ BD~(PL), x ∉ L} // Missing large itemsets
If ML ≠ 0 then
    C = L // set candidates to be the large itemsets
    Repeat
        C = C U BD~(C) // Expand candidate sets using the negative border
    Until no new itemsets are added to C

For each Ii ∈ C do
    ci = 0 // Initial counts for each itemset are 0
For each tj ∈ D do // Second scan count
    For each Ii ∈ C do
        If Ii ∈ tj then ci = ci + 1
For each Ii ∈ C do
    If ci >= (s x |D|) then L = L U Ii

4.8.5 Partitioning Algorithm
Here the database D is divided into 'P' partitions D1, D2, …, Dp.
Advantages
(1) The algorithm is more efficient, as it exploits the large itemset property: a large itemset must be large in at least one of the partitions.
(2) These algorithms adapt better to limited main memory.
(3) It is easier to perform incremental generation of association rules by treating the current state of the database as one partition and treating the new entries as a second partition.

Input: I // Item set
       D = (D1, D2, …, Dp) // Database of transactions divided into partitions
       S // Support
Output: L // Large Itemset
Partitioning Algorithm
C = 0
For i = 1 to p do // Find large itemsets in each partition
    Li = Apriori(I, Di, s)
    C = C U Li
L = 0
For each Ii ∈ C do
    ci = 0 // Initial counts for each item set are 0
For each tj ∈ D do // Count candidates during the second scan
    For each Ii ∈ C do
        If Ii ∈ tj then ci = ci + 1
For each Ii ∈ C do
    If ci >= (s x |D|) then L = L U Ii

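To make the "large in at least one partition" property concrete, the following two-phase sketch (our own code, reusing the apriori() function from the earlier sketch) finds local frequent itemsets per partition and then recounts the pooled candidates in one scan of the full database; it is a simplification of the partitioning algorithm above, not a faithful transcription:

def partitioned_frequent_itemsets(partitions, min_rel_support):
    # Local phase: frequent itemsets of each partition become global candidates,
    # since an itemset frequent in the whole database must be frequent in at
    # least one partition. Reuses apriori() from the earlier sketch.
    candidates = set()
    for part in partitions:
        local_min = max(1, int(min_rel_support * len(part)))
        candidates |= set(apriori(part, local_min))
    # Global phase: a single scan of the full database counts every candidate.
    database = [set(t) for part in partitions for t in part]
    threshold = min_rel_support * len(database)
    result = {}
    for c in candidates:
        count = sum(1 for t in database if c <= t)
        if count >= threshold:
            result[c] = count
    return result

parts = [[{1, 3, 4}, {2, 3, 5}], [{1, 2, 3, 5}, {2, 5}]]
print(partitioned_frequent_itemsets(parts, min_rel_support=0.5))
# Produces the same frequent itemsets as the single-pass Apriori example above.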
4.9 EMERGING TRENDS IN DATA MINING
4.9.1 Web Mining
Web mining is mining of data related to the World Wide Web. This may be the data actually present in Web pages or data related to Web activity. Web data can be classified into the following classes:
² Content of actual web pages.
² Intra page structure, which includes the HTML or XML code for the page.
² Inter page structure, which is the actual linkage structure between web pages.
² Usage data that describe how Web pages are accessed by visitors.
² User profiles, which include demographic and registration information obtained about users.
Web mining taxonomy
The following are the classifications of web mining:
² Web content mining
  – Web page content mining
  – Search result mining
² Web structure mining
² Web usage mining
  – General access pattern tracking
  – Customized usage tracking

4.9.2 Spatial Mining
Spatial data are data that have a spatial or location component. Spatial data can be viewed as data about objects that themselves are located in physical space. Spatial mining, or spatial data mining, or knowledge discovery in spatial databases, is data mining as applied to spatial databases or spatial data. Spatial data are required for many current information technology systems. Geographic information systems (GIS) are used to store information related to geographic locations on the surface of the Earth. This includes applications related to weather, community infrastructure needs, disaster management, and hazardous waste. Data mining activities include prediction of environmental catastrophes. Biomedical applications, including medical imaging and illness diagnosis, also require spatial systems. Applications for spatial data mining are GIS systems, geology, environment science, resource management, agriculture, medicine and robotics.

4.9.3 Temporal Mining
A database in which the stored data reflect data at a single point in time is called a snapshot database. A database in which data are maintained for multiple time points, not just one time point, is called a temporal database. Each tuple contains the information that is current from the date stored with that tuple to the date stored with the next tuple in temporal order.

4.10 DATA MINING RESEARCH PROJECTS
1. Data Mining in the Insurance Industry
2. Increasing Wagering Turnover Through Data Mining
3. The Transition from Data Warehousing to Data Mining
4. Rule Extraction from Trained Neural Networks
5. Neural Networks for Market Basket Analysis
6. Data Mining for Customer Relationship Management
7. Web Usage Mining using Neural Networks
8. Data Visualization Techniques and their Role in Data Mining
9. Evolutionary Rule Generation for Credit Scoring
10. Improved Decision Trees for Data Mining
11. Data Mining in the Pharmaceutical Industry
12. Data Mining the Malaria Genome
13. Neural Network Modeling of Water-Related Gastroenteritis Outbreaks
14. Data Mining with Self Generating Neural Networks
15. Development of Intelligent OLAP Tools
16. Mining Stock Market Data
17. Data Mining by Structure Adapting Neural Networks
18. Rule Discovery with Neural Networks in Different Concept Levels
19. Cash Flow Forecasting using Neural Networks
20. Neural Networks for Market Response Modeling
21. Advertising Effectiveness Modeling
22. Temporal Effects in Direct Marketing Response

EXERCISE
1. What is clustering? Explain.
2. What is genetic algorithm? Explain.
3. What is OLAP? Explain the various tools available for OLAP.
4. What is market-basket analysis? Explain with an example.
5. Describe association rule mining and decision trees.
6. Write in detail about the use of artificial neural network and genetic algorithm for data mining.
7. Write an essay on emerging trends in data mining.
8. Write short notes on (i) Apriori algorithm, (ii) ID3 algorithm.

CHAPTER 5
REAL TIME APPLICATIONS AND FUTURE SCOPE

5.1 APPLICATIONS OF DATA MINING
As on this day, there is not a single field where data mining is not applied. Starting from marketing down to the medical field, data mining tools and techniques have found a niche. They have become the "Hobson's choice" in a number of industrial applications including:
² Ad revenue forecasting
² Churn (turnover) management
² Claims processing
² Credit risk analysis
² Cross-marketing
² Customer profiling
² Customer retention
² Electronic commerce
² Exception reports
² Food-service menu analysis
² Fraud detection
² Government policy setting
² Hiring profiles
² Market basket analysis
² Medical management
² Member enrollment
² New product development
² Pharmaceutical research
² Process control
² Quality control
² Shelf management/store management
² Student recruitment and retention
² Targeted marketing
² Warranty analysis
and many more.

The 2003 IBM grading of industries which use data mining is given in Table 5.1.

TABLE 5.1 Industrial Applications of Data Mining
Banking                        13%
Bioinformatics/Biotechnology   10%
Direct Marketing/Fundraising   10%
eCommerce/Web                   5%
Entertainment/News              1%
Fraud Detection                 9%
Insurance                       8%
Investment/Stocks               3%
Manufacturing                   2%
Medical/Pharmacy                6%
Retail                          6%
Scientific Data                 9%
Security                        1%
Supply Chain Analysis           2%
Telecommunications              4%
Travel                          2%
Others                          1%
None                            8%

Industries/fields where data mining is currently applied are as follows:

5.1.1 Data Mining in the Banking Sector
Worldwide, the banking sector is ahead of many other industries in using mining techniques for its vast customer databases. Although banks have employed statistical analysis tools with some success for several years, previously unseen patterns of customer behavior are now coming into clear focus with the aid of new data mining tools. These statistical tools and even OLAP find the answers, but the more advanced data mining tools provide insight into the answer. Some of the applications of data mining in this industry are:
(i) Predict customer reaction to the change of interest rates
(ii) Identify customers who will be most receptive to new product offers
(iii) Identify "loyal" customers
(iv) Pin point which clients are at the highest risk for defaulting on a loan
(v) Find out persons or groups who will opt for each type of loan in the following year
(vi) Detect fraudulent activities in credit card transactions

(vii) Predict clients who are likely to change their credit card affiliation in the next quarter
(viii) Determine customer preference for the different modes of transaction, namely through teller or through credit cards.

5.1.2 Data Mining in Bio-Informatics and Biotechnology
Bio-informatics is a rapidly developing research area that has its roots both in biology and information technology. Various applications of data mining techniques in this field are:
(i) prediction of the structures of various proteins
(ii) determining the intricate structures of several drugs
(iii) mapping of DNA structure with base to base accuracy.

5.1.3 Data Mining in Direct Marketing and Fund Raising
Marketing is the first field to use data mining tools for its profit. The many ways in which mining techniques help in direct marketing are:
(i) Differential market basket analysis helps to decide the location and promotion of goods inside a store and to compare results between different stores, between customers in different demographic groups, between different days of the week, different seasons of the year, etc., to increase company sales.
(ii) A predictive market basket analysis can be used to identify sets of item purchases (or events) that generally occur in sequence — something of interest to direct marketers, criminologists and many others.

Techniques followed in data harvesting also help in fund raising (e.g., election fund raising, voluntary organizations, agencies and members of different charitable trusts, etc.) by:
(i) Capturing unlimited data on various donors, prospects, volunteers, etc., and accessing them from one centralized database. Using powerful search options, relationship management and data mining tools, these data are managed and used to build and improve long-term relationships with them.
(ii) Setting up multiple funds and default designations based on various criteria such as campaign, region or membership type to honor the donors' wishes.
(iii) Defining award classes, assigning requirements and allowing the system to refer to the data to automatically generate awards as a recognition of exceptional donor efforts.
(iv) Providing a flexible communication and campaign management structure from a single appeal through multiple campaigns. Communication tools such as the Mass E-Mail Manager make it easy to stay in touch with the constituents.

5.1.4 Data Mining in Fraud Detection
Data dredging has found wide and useful application in various fraud detection processes like:
(1) Credit card fraud detection using a combined parallel approach

(2) Fraud detection in the voters list using neural networks in combination with symbolic and analog data mining
(3) Fraud detection in passport applications by designing a specific online learning diagnostic system
(4) Rule- and analog-based detection of false medical claims, and so on

5.1.5 Data Mining in Managing Scientific Data
Data mining is a concept that is taking off in the commercial sector as a means of finding useful information out of gigabytes of scientific data. Gone are the days when scientists had to search through reams of printouts and rooms full of tapes to find the gems that make up scientific discovery. A few examples of data mining in the scientific environment are:
(1) Studies on global climatic changes: This is a hot research area and is primarily a verification-driven mining exercise. Climate data has been collected for many centuries and is being extended into the more distant past through such activities as analysis of ice core samples from the Antarctic, and a number of different predictive models have been proposed for future climatic conditions. This is an example of temporal mining, as it uses a temporal database maintained for multiple time points.
(2) Studies on geophysical databases at the University of Helsinki: It has published a scientific analysis of data on farming and the environment. It identified the factors affecting the crop yield, like chemical fertilizers and additives (phosphate), the moisture content and the type of the soil. As a result it has optimized crop yields while, at the same time, minimizing the resources supplied. This is an example of what is known as spatial mining.

5.1.6 Data Mining in the Insurance Sector
Insurance companies can benefit from modern data mining methodologies, which help companies to reduce costs, increase profits, retain current customers, acquire new customers and develop new products. This can be done through:
(1) Evaluating the risk of the assets being insured, taking into account the characteristics of the asset as well as the owner of the asset
(2) Formulating statistical modeling of insurance risks
(3) Using the joint Poisson/log-normal model of mining to optimize insurance policies
(4) And finally, finding the actuarial credibility of the risk groups among insurers

5.1.7 Data Mining in Telecommunication
As of today, almost every activity in telecommunication has used data mining techniques:
(1) Analysis of telecom service purchases
(2) Prediction of telephone calling patterns

(3) Management of resources and network traffic
(4) Automation of network management and maintenance, using artificial intelligence to diagnose and repair network transmission problems

5.1.8 Data Mining in Medicine and Pharmacy
Widespread use of medical information systems and the explosive growth of medical databases require traditional manual data analysis to be coupled with methods for efficient computer-assisted analysis. Here we present some of the medical and pharmaceutical uses of data mining techniques for the analysis of medical databases:
² Prediction of occurrence of disease and/or complications
² Therapy of choice for individual cancers
² Choice of antibiotics for individual infections
² Choice of a particular technique (of anastomosis, sutures, suture materials, etc.) in a given surgical procedure
² Stocking the most frequently prescribed drugs
And many more…

5.1.9 Data Mining in the Retail Industries
Data mining techniques have gone hand in glove with CRM (Customer Relationship Marketing) by developing models for:
² Predicting the propensity of customers to buy
² Assessing risk in each transaction
² Knowing the geographical location and distribution of customers
² Analyzing customer loyalty in credit-based transactions
² Assessing the competitive threat in a locality

5.1.10 Data Mining in E-Commerce and the World Wide Web
Sophisticated or not, various forms of data-mining development are being undertaken by companies looking to make sense of the raw data that has been mounting relentlessly in recent years. A recent article in the Engineering News-Record noted that e-commerce has empowered companies to collect vast amounts of data on customers—everything from the number of Web surfers in a home to the value of the cars in their garage. This was possible, the article says, with the use of data mining techniques. This is widely used in today's web world. A few of the ways in which data mining tools find use in e-commerce are:
(1) By formulating market tactics in business operations
(2) By automating business interactions with customers, so that customers can transact with all the players in the supply chain
(3) By developing common marketplace services, using a shared external ICT infrastructure to manage suppliers and logistics and implement electronic match-making mechanisms

Determining the size of the World Wide Web is extremely difficult. In 1999 it was estimated to contain about 350 billion pages, with a growth rate of 1 million pages per day. Considering the World Wide Web to be the largest database collection, web mining can be done in the above ways.

5.1.11 Data Mining in Stock Market and Investment
The rapid evolution of computer technology in the last few decades has provided investment professionals (and amateurs) with the capability to access and analyze tremendous amounts of financial data. Data archaeology churns the ocean of the stock market to obtain the cream of information in ways like:
(i) Helping stock market researchers to predict future stock price movements
(ii) Mining past prices and related variables to discover stock market anomalies like the hawala scandal

5.1.12 Data Mining in Supply Chain Analysis
Supply chain analysis is nothing but the analysis of the various data regarding the transactions between the supplier and the purchaser, and the use of this analysis to the advantage of either of them or even both of them. Data mining techniques have found wide application in supply chain analysis. It is of use to the supplier in the following ways:
(i) It analyses the process data to manage buyer rating
(ii) It mines payment data to advantageously update pricing policies
(iii) Demand analysis and forecasting helps the supplier to determine the optimum levels of stocks and spare parts
Coming to the purchaser side in supply chain analysis, data mining techniques help them by:
(i) Knowing vendor ratings to choose the most beneficial supplier
(ii) Analyzing fulfillment data to manage volume purchase contracts

5.2 FUTURE SCOPE
The era of DBMS began with the relational data model and SQL. At present data mining is little more than a set of tools to uncover hidden information from a database. While there are many tools to aid this at present, there is no all-encompassing model or approach. Over the next few years, not only will there be more efficient algorithms with better interface techniques, but also steps will be taken to develop an all-encompassing model for data mining. As data mining applications are of diverse types, development of a "complete" data mining model is desirable. While it may not look like the relational model, it probably will include similar items: algorithms, a data model and metrics for goodness. A major development will be the creation of a sophisticated "query language" that includes everything from SQL functions to data mining applications. Manual definition of requests and result interpretation used in the current data mining tools may decrease with an increase in automation.

A data mining query language (DMQL) based on SQL has been proposed. The differences between the two are:

TABLE 5.2 Difference between SQL and DMQL
SQL | DMQL
Allows access only to relational databases | Allows access to background information such as concept hierarchies
The retrieved data is necessarily an aggregate of data from relations | It indicates the type of knowledge to be mined, and the retrieved data need not be a subset of data from relations
There is no such threshold level | It fixes a threshold that any mined information should obey

A BNF statement of DMQL can be given as follows (modified from Zaiane):

<DMQL> ::= USE DATABASE <database name>
    {USE HIERARCHY <hierarchy name> FOR <attribute>}
    <rule_spec>
    RELATED TO <attr_or_agg_list>
    FROM <relation(s)>
    [WHERE <condition>]
    [ORDER BY <order_list>]
    {WITH [<kinds_of>] THRESHOLD = <threshold_value> [FOR <attribute(s)>]}

The heart of a DMQL statement is the rule specification portion. This is where the true data mining request is made. The data mining request can be one of the following:
² A generalized relation: obtained by generalizing data from the input data
² A characteristic rule: a condition that is satisfied by almost all records in a target class
² A discriminant rule: a condition that is satisfied by a target class but is different from conditions satisfied in other classes
² A classification rule: a set of rules used to classify data
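As an illustration, a request for a characteristic rule over a hypothetical retail database might be written roughly as shown below. The grammar above leaves <rule_spec> unexpanded, so the MINE CHARACTERISTIC RULES phrasing, like the database, hierarchy, relation and attribute names, is only an assumed example of how such a request could look:

USE DATABASE sales_db
USE HIERARCHY location_hierarchy FOR store_city
MINE CHARACTERISTIC RULES
RELATED TO product_category, store_city, SUM(amount)
FROM sales
WHERE sale_year = 2006
ORDER BY product_category
WITH NOISE THRESHOLD = 0.05 FOR product_category

The threshold clause tells the miner to ignore any pattern supported by less than 5 per cent of the records, which is exactly the kind of constraint an ordinary SQL query cannot express.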

Two of the latest models that gain access via ad hoc data mining queries are:
(1) KDDMS (Knowledge and Data Discovery Management System): It is expected to be the future generation of data mining systems, which include not only data mining tools but also techniques to manage the underlying data, ensure its consistency and provide concurrency and recovery features.
(2) CRISP-DM (Cross Industry Standard Process for Data Mining): This is a new KDD process model applicable to many different applications. The model addresses all steps in the KDD process, including the maintenance of the results of the data mining step. The life cycle of the CRISP-DM model can be described as follows:
Assess (business understanding) → Access (data understanding) → Analyze (data preparation) → Act (modeling) → Automate (evaluation, deployment)

5.3 DATA MINING PRODUCTS

TABLE 5.3 Products of Data Mining
Product | Vendor | Functions | Techniques | Platforms
4Thought | Cognos Inc. | Prediction | Neural network (MLP) | Windows
CART | Salford Systems | Classification | Decision tree (CART) | CMS, MVS, Unix (Linux), Windows
Clementine | SPSS Inc. | Association rules, classification, clustering, forecasting, prediction, sequence discovery | Apriori, CARMA, GRI, decision trees (C5.0, C&RT, a variation of CART), rule induction (C5.0), regression (linear, logistic), neural networks (MLP, RBF), k-means | HP/UX, Sun Solaris, AIX, Windows
DB Miner Analytical System | DB Miner Technologies Inc. | Association rules, classification, clustering | Decision trees, neural networks, k-means | Windows
Intelligent Miner | IBM Corporation | Association rules, classification, clustering, prediction, sequential patterns, time series | Decision trees, neural networks (back-propagation), regression (linear, logarithmic), factor analysis, k-means clustering | AIX, OS/390, OS/400, Solaris, Windows
JDA Intellect | JDA Software Group | Association rules, classification, clustering, prediction, forecasting | Decision trees, k-nearest neighbour, neural networks, naïve Bayes | Solaris, Unix (Linux), Windows NT

MARS | Salford Systems | Forecasting | Regression | Unix (Linux, Solaris), Windows
Oracle 9i Database | Oracle Corporation | Association rules, classification, prediction | Naïve Bayes | All platforms on which Oracle 9i runs
S-Plus | Insightful Corporation | Classification, clustering, prediction, time series analysis | Regression (linear, logistic, nonlinear, polynomial), statistical techniques (jackknife, Monte Carlo), correlation (Pearson), hypothesis testing, ARIMA, exponential smoothing, hierarchical clustering (agglomerative, divisive), k-means, decision trees | Unix, Windows
Data Miner | Statsoft Inc. | Classification, clustering, prediction | Neural networks (back-propagation, MLP, RBF, SOM), decision trees (CART, CHAID), ARIMA | Windows
XpertRule Miner | Attar Software Ltd | Association rules, classification, clustering | Decision trees, rules | ODBC, Windows

EXERCISE
1. List a few industries which apply data mining for the management of their vast databases.
2. How does data mining help the banking industry?
3. List some of the applications of data dredging in managing scientific data.
4. What is the real-life application of data mining in the insurance industry?
5. What are the uses of data mining tools in e-commerce?
6. Analyse the future scope of data harvesting in the marketing industry.
7. List some of the products of data mining.

CHAPTER 6
DATA WAREHOUSE EVALUATION

6.1 THE CALCULATIONS FOR MEMORY CAPACITY
The past couple of decades have seen a dramatic increase in the amount of information or data being stored in electronic format. This accumulation of data has taken place at an explosive rate. It has been estimated that the amount of information in the world doubles every 20 months, and the sizes as well as the number of databases are increasing even faster. There are many examples that can be cited: point-of-sale data in retail, policy and claim data in insurance, medical history data in health care, and financial data in banking and securities are some instances of the types of data being collected. Here we list some interesting examples of data storage.

1. Bytes (8 bits)
² 0.1 bytes: A binary decision
² 1 byte: A single character
² 10 bytes: A single word
² 100 bytes: A telegram OR a punched card

2. Kilobyte (1 000 bytes)
² 1 Kilobyte: A very short story
² 2 Kilobytes: A typewritten page
² 10 Kilobytes: An encyclopaedic page OR a deck of punched cards
² 50 Kilobytes: A compressed document image page
² 100 Kilobytes: A low-resolution photograph
² 200 Kilobytes: A box of punched cards
² 500 Kilobytes: A very heavy box of punched cards

3. Megabyte (1 000 000 bytes)
² 1 Megabyte: A small novel OR a 3.5 inch floppy disk
² 2 Megabytes: A high-resolution photograph

² 5 Megabytes: The complete works of Shakespeare OR 30 seconds of TV-quality video
² 10 Megabytes: A minute of high-fidelity sound OR a digital chest X-ray
² 20 Megabytes: A box of floppy disks
² 50 Megabytes: A digital mammogram
² 100 Megabytes: 1 meter of shelved books OR a two-volume encyclopaedic book
² 200 Megabytes: A reel of 9-track tape OR an IBM 3480 cartridge tape
² 500 Megabytes: A CD-ROM

4. Gigabyte (1 000 000 000 bytes)
² 1 Gigabyte: A pickup truck filled with paper OR a symphony in high-fidelity sound OR a movie at TV quality
² 2 Gigabytes: 20 meters of shelved books OR a stack of 9-track tapes
² 5 Gigabytes: An 8mm Exabyte tape
² 10 Gigabytes:
² 20 Gigabytes: A good collection of the works of Beethoven
² 50 Gigabytes: A floor of books OR hundreds of 9-track tapes
² 100 Gigabytes: A floor of academic journals OR a large ID-1 digital tape
² 200 Gigabytes: 50 Exabyte tapes

5. Terabyte (1 000 000 000 000 bytes)
² 1 Terabyte: An automated tape robot OR all the X-ray films in a large technological hospital OR 50,000 trees made into paper and printed OR daily rate of EOS data (1998)
² 2 Terabytes: An academic research library OR a cabinet full of Exabyte tapes
² 10 Terabytes: The printed collection of the US Library of Congress
² 50 Terabytes: The contents of a large Mass Storage System

6. Petabyte (1 000 000 000 000 000 bytes)
² 1 Petabyte: 3 years of EOS data (2001)
² 2 Petabytes: All US academic research libraries
² 20 Petabytes: Production of hard-disk drives in 1995
² 200 Petabytes: All printed material OR production of digital magnetic tape in 1995

7. Exabyte (1 000 000 000 000 000 000 bytes)
² 5 Exabytes: All words ever spoken by human beings

8. Zettabyte (1 000 000 000 000 000 000 000 bytes)

9. Yottabyte (1 000 000 000 000 000 000 000 000 bytes)

6.2 DATA, INFORMATION AND KNOWLEDGE
In this section we present three terms widely—but often unreflectingly—used in several IT-related (and other) communities: 'data', 'information' and 'knowledge'. Data means any symbolic representation of facts or ideas from which information can potentially be extracted. Information is meaningful data, and knowledge is the interesting and useful patterns in a database. "There is, in general, no known way to distinguish knowledge from information or data on a purely representational basis." The definitions will be oriented according to the three dimensions of semiotics (the theory of signs): syntax, semantics and pragmatics.
(1) The relation among signs, i.e., the syntax of 'sentences', does not relate signs to anything in the real world and thus can be called one-dimensional; signs viewed only under this dimension equal data.
(2) The relation between signs and their meaning, i.e., their semantics, adds a second dimension to signs. It is only through this relation between sign and meaning that data becomes information.
(3) The third dimension is added by relating signs to their 'users', i.e., actors who pursue goals which require performing actions. This relationship, which is called pragmatics, defines an important characteristic of knowledge: only knowledge enables actions and decisions, performed by actors. If this third dimension including users is introduced, we call the patterns in signs knowledge.

6.3 FUNDAMENTALS OF DATABASE
6.3.1 Definitions
A database is a collection of tables with related data. A table is a matrix with data; it looks like a simple spreadsheet. One column (data element) contains data of one and the same kind, for example the column Emp_Number. A row (= tuple, entry) is a group of related data, for example the data of one employee's details. Normalization in a relational database is the process of removing redundancy in data by separating the data into multiple tables. Table 6.1 provides an example of an employee table with 2 columns and 3 rows.

TABLE 6.1 Example for Database
Emp. number | Name
101 | John
102 | Jill
103 | Jack

The example above has 3 rows and 2 columns in the employee table.
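The table of Table 6.1 could be created and populated with SQL statements along the following lines (a small sketch; the table name, column names and data types are simply chosen to mirror the example):

CREATE TABLE employee (
  emp_number  NUMBER(4) PRIMARY KEY,   -- one column holds data of one and the same kind
  name        VARCHAR2(20)
);

INSERT INTO employee (emp_number, name) VALUES (101, 'John');
INSERT INTO employee (emp_number, name) VALUES (102, 'Jill');
INSERT INTO employee (emp_number, name) VALUES (103, 'Jack');

Each INSERT adds one row (tuple) to the table, so after the three statements the table contains exactly the 3 rows and 2 columns shown above.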

6.3.2 Database Design
Database design is the process of transforming a logical data model into an actual physical database. A logical data model is required before you can even begin to design a physical database. Armed with the correct information, you can create an effective and efficient database from a logical data model. To successfully create a physical database design you will need to have a good working knowledge of the features of the DBMS, including:
² In-depth knowledge of the database objects supported by the DBMS and the physical structures and files required to support those objects
² Details regarding the manner in which the DBMS supports indexing, referential integrity, constraints, data types and other features that augment the functionality of database objects
² Detailed knowledge of new and obsolete features for particular versions or releases of the DBMS to be used
² Knowledge of the DBMS configuration parameters that are in place
² Data definition language (DDL) skills to translate the physical design into actual database objects
Assuming that the logical data model is complete, though, what must be done to implement a physical database? The first step is to create an initial physical data model by transforming the logical data model into a physical implementation, based on an understanding of the DBMS to be used for deployment. The first step in transforming a logical data model into a physical model is to perform a simple translation from logical terms to physical objects. Of course, this simple transformation will not result in a complete and correct physical database design—it is simply the first step. The transformation consists of the following:
² Transforming entities into tables
² Transforming attributes into columns
² Transforming domains into data types and constraints
To support the mapping of attributes to table columns you will need to map each logical domain of the attribute to a physical data type and perhaps additional constraints. But no commercial DBMS product supports relational domains, so the domain assigned in the logical data model must be mapped to a data type supported by the DBMS. In a physical database, each column must be assigned a data type. Certain data types require a maximum length to be specified. For example, a character data type could be specified as VARCHAR(10), indicating that up to 10 characters can be stored for the column. You may need to apply a length to other data types as well, such as graphic, floating point and decimal (which require a length and scale) types. You may also need to adjust the data type based on the DBMS you use. For example, what data type and length will be used for monetary values if no built-in currency data type exists? Many of the major DBMS products support user-defined data types, so you might want to consider creating a data type to support the logical domain if no built-in data type is acceptable.
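For instance, a 10-character product code domain and a monetary domain might be mapped as follows when no built-in currency type exists (a sketch only; the table name and the chosen precision are illustrative assumptions):

CREATE TABLE product_price (
  prod_code   VARCHAR2(10),    -- character domain: up to 10 characters
  list_price  NUMBER(10,2)     -- monetary domain: fixed-point number with 2 decimal places
);

The fixed-point NUMBER(10,2) stands in for the missing currency type; a different DBMS might instead use DECIMAL(10,2) or a user-defined money type.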

In addition to a data type and length, you may also need to apply a constraint to the column. Consider a domain of integers between 1 and 10 inclusive. Simply assigning the physical column to an integer data type is insufficient to match the domain. A constraint must be added to restrict the values that can be stored for the column to the specified range, 1 through 10. Without a constraint, negative numbers, zero and values greater than ten could be stored. Using check constraints you can place limits on the data values that can be stored in a column or set of columns.
Specification of a primary key is an integral part of the physical design of entities and attributes. A primary key should be assigned for every entity in the logical data model. As a first course of action you should try to use the primary key as selected in the logical data model. However, multiple candidate keys often are uncovered during the data modeling process. You may decide to choose a primary key other than the one selected during logical design – either one of the candidate keys or another surrogate key for physical implementation. But even if the DBMS does not mandate a primary key for each table, it is a good practice to identify a primary key for each physical table you create. Failure to do so will make processing the data in that table more difficult.
Of course, there are many other decisions that must be made during the transition from logical to physical. For example, each of the following must be addressed:
² The nullability (NULL) of each column in each table
² For character columns, should fixed length or variable length be used?
² Should the DBMS be used to assign values to sequences or identity columns?
² Implementing logical relationships by assigning referential constraints
² Building indexes on columns to improve query performance
² Choosing the type of index to create: b-tree, hash, bit map, reverse key
² Deciding on the clustering sequence for the data
² Other physical aspects such as column ordering, buffer pool specification, data files, denormalization, partitioning, and so on
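For example, the integer domain of 1 through 10 discussed above could be enforced with a check constraint roughly as follows (a sketch; the table, column and constraint names are illustrative):

CREATE TABLE product_rating (
  prod_id  NUMBER(7) NOT NULL,                 -- nullability decided column by column
  rating   NUMBER(2)
           CONSTRAINT rating_range
           CHECK (rating BETWEEN 1 AND 10)     -- restricts stored values to the domain 1..10
);

With the check constraint in place, an attempt to insert a zero, a negative number or a value greater than ten is rejected by the DBMS itself rather than left to application code.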

6.3.3 Selection for the Database/Hardware Platform
RDBMS are used in OLTP applications (e.g., ATM cards) very frequently, and sometimes a data warehouse may also use relational databases. Popular RDBMS packages are listed below.

TABLE 6.2 Packages—Example
RDBMS name | Company name
Oracle | Oracle Corporation
IBM DB2 UDB | IBM Corporation
IBM Informix | IBM Corporation
Microsoft SQL Server | Microsoft
Sybase | Sybase Corporation
Teradata | NCR

In making the selection of the database/hardware platform, there are several items that need to be carefully considered:
Scalability: How can the system grow as your data storage needs grow? Which RDBMS and hardware platform can handle large sets of data most efficiently? To get an idea of this, one needs to determine the approximate amount of data that is to be kept in the data warehouse system once it is mature, and base any testing numbers on that.
Parallel Processing Support: The days of multi-million dollar supercomputers with one single CPU are gone, and nowadays the most powerful computers all use multiple CPUs, where each processor can perform a part of the task, all at the same time. Massively parallel computers appeared in 1993, so parallel processing would be the best way for any large computations to be done within 5 years.
RDBMS/Hardware Combination: Because the RDBMS physically sits on the hardware platform, there are going to be certain parts of the code that are hardware platform-dependent. As a result, bugs and bug fixes are often hardware dependent.

6.4 OLAP AND OLAP SERVER
6.4.1 OLAP Definition
On-Line Analytical Processing (OLAP) is a category of software technology that enables analysts, managers and executives to gain insight into data through fast, consistent, interactive access to a wide variety of possible views of information that has been transformed from raw data to reflect the real dimensionality of the enterprise as understood by the user. OLAP functionality is characterized by dynamic multi-dimensional analysis of consolidated enterprise data supporting end user analytical and navigational activities, including:
² calculations and modeling applied across dimensions, through hierarchies and/or across members
² trend analysis over sequential time periods
² slicing subsets for on-screen viewing
² drill-down to deeper levels of consolidation
² reach-through to underlying detail data
² rotation to new dimensional comparisons in the viewing area
OLAP is implemented in a multi-user client/server mode and offers consistently rapid response to queries, regardless of database size and complexity. OLAP helps the user synthesize enterprise information through comparative, personalized viewing, as well as through analysis of historical and projected data in various "what-if" data model scenarios. This is achieved through use of an OLAP server.

6.4.2 OLAP Server
An OLAP server is a high-capacity, multi-user data manipulation engine specifically designed to support and operate on multi-dimensional data structures. A multi-dimensional structure is arranged so that every data item is located and accessed based on the intersection of the dimension members which define that item.

The design of the server and the structure of the data are optimized for rapid ad-hoc information retrieval in any orientation, as well as for fast, flexible calculation and transformation of raw data based on formulaic relationships. The OLAP server may either physically stage the processed multi-dimensional information to deliver consistent and rapid response times to end users, or it may populate its data structures in real time from relational or other databases, or offer a choice of both. Given the current state of technology and the end user requirement for consistent and rapid response times, staging the multi-dimensional data in the OLAP server is often the preferred method.

6.5 DATA WAREHOUSES, OLTP, OLAP AND DATA MINING
A relational database is designed for a specific purpose. Because the purpose of a data warehouse differs from that of an On-Line Transaction Processing (OLTP) system, the design characteristics of a relational database that supports a data warehouse differ from the design characteristics of an OLTP database.

TABLE 6.3 Comparison of Data Warehouse Database and OLTP Database
Data warehouse database | OLTP database
Designed for analysis of business measures by categories and attributes | Designed for real-time business operations
Optimized for bulk loads and large, complex, unpredictable queries that access many rows per table | Optimized for a common set of transactions, usually adding or retrieving a single row at a time per table
Loaded with consistent, valid data; requires no real-time validation | Optimized for validation of incoming data during transactions; uses validation data tables
Supports few concurrent users relative to OLTP | Supports thousands of concurrent users

6.5.1 A Data Warehouse Supports OLTP
A data warehouse supports an OLTP system by providing a place for the OLTP database to offload data as it accumulates, and by providing services that would complicate and degrade OLTP operations if they were performed in the OLTP database. An OLTP system is designed to handle a large volume of small, predefined transactions. Without a data warehouse to hold historical information, data is archived to static media such as magnetic tape, or allowed to accumulate in the OLTP database. If data is simply archived for preservation, it is not available or organized for use by analysts and decision makers. If data is allowed to accumulate in the OLTP so it can be used for analysis, the OLTP database continues to grow in size and requires more indexes to service analytical and report queries. These queries access and process large portions of the continually growing historical data and add a substantial load to the database. The large indexes needed to support these queries also tax the OLTP transactions with additional index maintenance. These queries can also be complicated to develop due to the typically complex OLTP database schema.
A data warehouse offloads the historical data from the OLTP, allowing the OLTP to operate at peak transaction efficiency. High volume analytical and reporting queries are handled by the data warehouse and do not load the OLTP, which does not need additional indexes for their support. As data is moved to the data warehouse, it is also reorganized and consolidated so that analytical queries are simpler and more efficient.
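A minimal sketch of such offloading—copying closed transactions from an OLTP table into a warehouse history table and then removing them from the OLTP side—might look as follows (the table names and the cut-off date are invented for the example; a real load would go through a staging and transformation step):

INSERT INTO dw_sales_history (order_id, prod_id, cust_id, order_date, amount)
  SELECT order_id, prod_id, cust_id, order_date, amount
  FROM   oltp_orders
  WHERE  order_date < DATE '2006-01-01';   -- move only data older than the cut-off

DELETE FROM oltp_orders
  WHERE  order_date < DATE '2006-01-01';   -- keep the operational table small and fast

Run periodically, this keeps the OLTP tables lean while the warehouse accumulates the full history needed for analysis.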

6.5.2 OLAP is a Data Warehouse Tool
Online analytical processing (OLAP) is a technology designed to provide superior performance for ad hoc business intelligence queries. OLAP is designed to operate efficiently with data organized in accordance with the common dimensional model used in data warehouses. A data warehouse provides a multidimensional view of data in an intuitive model designed to match the types of queries posed by analysts and decision makers. OLAP organizes data warehouse data into multidimensional cubes based on this dimensional model, and then preprocesses these cubes to provide maximum performance for queries that summarize data in various ways. For example, a query that requests the total sales income and quantity sold for a range of products in a specific geographical region for a specific time period can typically be answered in a few seconds or less, regardless of how many hundreds of millions of rows of data are stored in the data warehouse database. The inherent stability and consistency of historical data in a data warehouse enables OLAP to provide its remarkable performance in rapidly summarizing information for analytical queries. OLAP is not designed to store large volumes of text or binary data, nor is it designed to support high volume update transactions. In SQL Server 2000, Analysis Services provides tools for developing OLAP applications and a server specifically designed to service OLAP queries. Other tools are Informatica, Business Objects and DataStage.

6.5.3 Data Mining is a Data Warehouse Tool
Data mining is a technology that applies sophisticated and complex algorithms to analyze data and expose interesting information for analysis by decision makers. Whereas OLAP organizes data in a model suited for exploration by analysts, data mining performs analysis on data and provides the results to decision makers. Thus, OLAP supports model-driven analysis and data mining supports data-driven analysis. Data mining has traditionally operated only on raw data in the data warehouse database or, more commonly, on data extracted from the data warehouse database. In SQL Server 2000, Analysis Services provides data mining technology that can analyze data in OLAP cubes, as well as data in the relational data warehouse database. In addition, data mining results can be incorporated into OLAP cubes to further enhance model-driven analysis by providing an additional dimensional viewpoint into the OLAP model. For example, data mining can be used to analyze sales data against customer attributes and create a new cube dimension to assist the analyst in the discovery of the information embedded in the cube data.

6.5.4 Difference between OLTP and OLAP

TABLE 6.4 Difference between OLTP and OLAP
Feature | OLTP | OLAP
users | clerk, IT professional | knowledge worker
function | day to day operations | decision support
DB design | application-oriented | subject-oriented
data | current, up-to-date, detailed, flat relational, isolated | historical, summarized, multidimensional, integrated, consolidated
usage | repetitive | ad-hoc
access | read/write, index/hash on primary key | lots of scans
unit of work | short, simple transaction | complex query
# records accessed | tens | millions
# users | thousands | hundreds
DB size | 100MB–GB | 100GB–TB
metric | transaction throughput | query throughput, response

EXERCISE
1. What is the difference between data and information?
2. What are OLAP and OLAP server?
3. What is OLTP?
4. What are the items to be considered to create a data warehouse?
5. Compare OLAP and OLTP.
6. Distinguish between a data warehouse database and an OLTP database.

CHAPTER 7
DATA WAREHOUSE DESIGN

7.1 INTRODUCTION
A Data Warehouse (DW) is a database that stores information oriented to satisfy decision-making requests. A very frequent problem in enterprises is the impossibility of accessing corporate, complete and integrated information of the enterprise that can satisfy decision-making requests. A paradox occurs: data exists but information cannot be obtained. In general, a DW is constructed with the goal of storing and providing all the relevant information that is generated along the different databases of an enterprise. A DW is a database with particular features. Concerning the data it contains, it is the result of transformations, quality improvement and integration of data that comes from operational bases; besides, it includes indicators that are derived from operational data and give it additional value. Concerning its utilization, it is supposed to support complex queries (summarization, aggregates, crossing of data), while its maintenance does not impose a transactional load. Moreover, instead of accessing information through reports generated by specialists, in a DW environment end users make queries directly against the DW through user-friendly query tools. Building and maintaining a DW requires solving problems of many different aspects. In this chapter we concentrate on DW design. We find in the literature globally two different approaches for relational DW design: one that applies dimensional modeling techniques, and another that is based mainly on the concept of materialized views.

A data warehouse has three main components:
² A "Central Data Warehouse" or "Operational Data Store (ODS)", which is a database organized according to the corporate data model.
² One or more "data marts"—extracts from the central data warehouse that are organized according to the particular retrieval requirements of individual users.
² The "legacy systems" where an enterprise's data are currently kept.

7.2 THE CENTRAL DATA WAREHOUSE
The Central Data Warehouse is just that—a warehouse. All the enterprise's data are stored in there. This is accomplished by organizing it according to the enterprise's corporate data model. Think of it as a giant grocery store warehouse where the chocolates are kept in one section, the T-shirts are in another, and the CDs are in a third, in order to minimize redundancy and so that each may be found easily.

Dimensional models represent data with a "cube" structure, making the logical data representation more compatible with OLAP data management. The basic concepts of dimensional modeling are: facts, dimensions and measures. A fact is a collection of related data items, consisting of measures and context data. It typically represents business items or business transactions. A dimension is a collection of data that describes one business dimension. Dimensions determine the contextual background for the facts; they are the parameters over which we want to perform OLAP. A measure is a numeric attribute of a fact, representing the performance or behavior of the business relative to the dimensions.
The objectives of dimensional modeling are:
(i) To produce database structures that are easy for end-users to understand and write queries against.
(ii) To maximize the efficiency of queries.
It achieves these objectives by minimizing the number of tables and the relationships between them. Normalized databases have some characteristics that are appropriate for OLTP systems, but not for DWs:
(i) Their structure is not easy for end-users to understand and use. In OLTP systems this is not a problem because, usually, end-users interact with the database through a layer of software.
(ii) Data redundancy is minimized. This maximizes the efficiency of updates, but tends to penalize retrievals. Data redundancy is not a problem in DWs because data is not updated on-line.
OLAP cube design requirements will be a natural outcome of the dimensional model if the data warehouse is designed to support the way users want to query data.

Fig. 7.1 Cubes (the cuboids over the dimensions product, date and country: the 0-D apex cuboid; the 1-D cuboids product, date and country; the 2-D cuboids product–date, product–country and date–country; and the 3-D base cuboid product–date–country)
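Engines that support the CUBE grouping operator (Oracle-style SQL is shown) can compute all of these cuboids in a single statement; for a fact table with product, date and country dimensions, the query below (the table and column names are assumed for the example) produces the 0-D, 1-D, 2-D and 3-D aggregations of Fig. 7.1:

SELECT product, sale_date, country, SUM(amount) AS total_amount
FROM   sales
GROUP  BY CUBE (product, sale_date, country);
-- the row where all three grouping columns are NULL is the 0-D (apex) cuboid;
-- rows with two NULL grouping columns form the 1-D cuboids, and so on down to the 3-D base cuboid.

This is exactly the pre-aggregation work that an OLAP server performs when it builds and stores a cube ahead of user queries.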

7.3 DATA WAREHOUSING OBJECTS
The following types of objects are commonly used in dimensional data warehouse schemas:
Fact tables are the large tables in your warehouse schema that store business measurements. Fact tables typically contain facts and foreign keys to the dimension tables. Fact tables represent data, usually numeric and additive, that can be analyzed and examined. Examples include sales, cost and profit. Fact tables contain business event details for summarization. Fact tables are often very large, containing hundreds of millions of rows and consuming hundreds of gigabytes or multiple terabytes of storage.
Dimension tables, also known as lookup or reference tables, contain the relatively static data in the warehouse. Dimension tables store the information you normally use to constrain queries. Dimension tables are usually textual and descriptive, and you can use them as the row headers of the result set. Examples are customers, time, suppliers or products.

Fig. 7.2 A sales analysis star schema (a central Sales Facts table surrounded by the Time, Customer, Seller, Product, Manufacturing and Location dimensions)

7.3.1 Fact Tables
A fact table typically has two types of columns: those that contain numeric facts (often called measurements), and those that are foreign keys to dimension tables. The primary key of the fact table is usually a composite key that is made up of all of its foreign keys. A fact table contains either detail-level facts or facts that have been aggregated; fact tables that contain aggregated facts are often called SUMMARY TABLES. A fact table usually contains facts with the same level of aggregation.
Though most facts are additive, they can also be semi-additive or non-additive. Additive facts can be aggregated by simple arithmetical addition; a common example of this is sales. Semi-additive facts can be aggregated along some of the dimensions and not along others; an example of this is inventory levels. Non-additive facts cannot be added at all; an example of this is averages.
Because dimension tables contain records that describe facts, the fact table can, from a modeling standpoint, be reduced to columns for dimension foreign keys and numeric fact values. Text, BLOBs, and denormalized data are typically not stored in the fact table.
Creating a new fact table: You must define a fact table for each star schema. The definitions of this 'sales' fact table follow:

CREATE TABLE sales (
  prod_id        NUMBER(7)    CONSTRAINT sales_product_nn NOT NULL,
  cust_id        NUMBER       CONSTRAINT sales_customer_nn NOT NULL,
  time_id        DATE         CONSTRAINT sales_time_nn NOT NULL,
  ad_id          NUMBER(7),
  quantity_sold  NUMBER(4)    CONSTRAINT sales_quantity_nn NOT NULL,
  amount         NUMBER(10,2) CONSTRAINT sales_amount_nn NOT NULL,
  cost           NUMBER(10,2) CONSTRAINT sales_cost_nn NOT NULL
)

Multiple Fact Tables: Multiple fact tables are used in data warehouses that address multiple business functions, such as sales, inventory, and finance. Each business function should have its own fact table and will probably have some unique dimension tables. Any dimensions that are common across the business functions must represent the dimension information in the same way, as discussed under "Dimension Tables." Each business function will typically have its own schema that contains a fact table, several conforming dimension tables, and some dimension tables unique to the specific business function. Such business-specific schemas may be part of the central data warehouse or implemented as data marts.
Very large fact tables may be physically partitioned for implementation and maintenance design considerations. The partition divisions are almost always along a single dimension, and the time dimension is the most common one to use because of the historical nature of most data warehouse data. If fact tables are partitioned, OLAP cubes are usually partitioned to match the partitioned fact table segments for ease of maintenance. Partitioned fact tables can be viewed as one table with an SQL UNION query as long as the number of tables involved does not exceed the limit for a single query.

7.3.2 Dimension Tables
A dimension is a structure, often composed of one or more hierarchies, that categorizes data. Commonly used dimensions are customers, products, and time. Dimensional attributes help to describe the dimensional value. They are normally descriptive, textual values. Dimension data is typically collected at the lowest level of detail and then aggregated into higher-level totals that are more useful for analysis. Several distinct dimensions, combined with facts, enable you to answer business questions.
A dimension table may be used in multiple places if the data warehouse contains multiple fact tables or contributes data to data marts. For example, a product dimension may be used with a sales fact table and an inventory fact table in the data warehouse, and also in one or more departmental data marts. A dimension such as customer, time, or product that is used in multiple schemas is called a conforming dimension if all copies of the dimension are the same. Using conforming dimensions is critical to successful data warehouse design. Summarization data and reports will not correspond if different schemas use different versions of a dimension table.
The definition of this 'customer' dimension table follows:

CREATE TABLE customers (
  cust_id              NUMBER,
  cust_first_name      VARCHAR2(20) CONSTRAINT customer_fname_nn NOT NULL,
  cust_last_name       VARCHAR2(40) CONSTRAINT customer_lname_nn NOT NULL,
  cust_sex             CHAR(1),
  cust_year_of_birth   NUMBER(4),
  cust_marital_status  VARCHAR2(20),
  cust_street_address  VARCHAR2(40) CONSTRAINT customer_st_addr_nn NOT NULL,
  cust_postal_code     VARCHAR2(10) CONSTRAINT customer_pcode_nn NOT NULL,
  cust_city            VARCHAR2(30) CONSTRAINT customer_city_nn NOT NULL,
  cust_state_district  VARCHAR2(40),
  country_id           CHAR(2) CONSTRAINT customer_country_id_nn NOT NULL,
  cust_phone_number    VARCHAR2(25),
  cust_income_level    VARCHAR2(30),
  cust_credit_limit    NUMBER,
  cust_email           VARCHAR2(30)
)

CREATE DIMENSION products_dim
  LEVEL product     IS (products.prod_id)
  LEVEL subcategory IS (products.prod_subcategory)
  LEVEL category    IS (products.prod_category)
  HIERARCHY prod_rollup (
    product CHILD OF
    subcategory CHILD OF
    category
  )
  ATTRIBUTE product DETERMINES products.prod_name
  ATTRIBUTE product DETERMINES products.prod_desc
  ATTRIBUTE subcategory DETERMINES products.prod_subcat_desc
  ATTRIBUTE category DETERMINES products.prod_cat_desc

The records in a dimension table establish one-to-many relationships with the fact table. For example, there may be a number of sales to a single customer, or a number of sales of a single product.
The dimension table contains attributes associated with the dimension entry, such as product name or customer name and address. Attributes serve as report labels and query constraints. Attributes that are coded in an OLTP database should be decoded into descriptions. For example, product category may exist as a simple integer in the OLTP database, but the dimension table should contain the actual text for the category. The code may also be carried in the dimension table if needed for maintenance. This denormalization simplifies and improves the efficiency of queries and simplifies user query tools. However, if a dimension attribute changes frequently, maintenance may be easier if the attribute is assigned to its own table to create a snowflake dimension.
Hierarchies: The data in a dimension is usually hierarchical in nature. Hierarchies are determined by the business need to group and summarize data into usable information. For example, a time dimension often contains the hierarchy elements: (all time), Year, Quarter, Month, Day, or Week. A dimension may contain multiple hierarchies – a time dimension often contains both calendar and fiscal year hierarchies. Geography is seldom a dimension of its own; it is usually a hierarchy that imposes a structure on sales points, regions, or other geographically distributed dimensions.

An example geography hierarchy for sales points is: (all), country, region, state or district, city, store. Level relationships specify top-to-bottom ordering of levels from most general (the root) to most specific information. They define the parent-child relationship between the levels in a hierarchy. Hierarchies are also essential components in enabling more complex rewrites. For example, the database can aggregate existing sales revenue on a quarterly basis to a yearly aggregation when the dimensional dependencies between quarter and year are known.
Multi-use dimensions: Sometimes data warehouse design can be simplified by combining a number of small, unrelated dimensions into a single physical dimension, often called a junk dimension. This can greatly reduce the size of the fact table by reducing the number of foreign keys in fact table records. A common example of a multi-use dimension is a dimension that contains customer demographics selected for reporting standardization. Another multi-use dimension might contain useful textual comments that occur infrequently in the source data records; collecting these comments in a single dimension removes a sparse text field from the fact table and replaces it with a compact foreign key. Often the combined dimension will be prepopulated with the cartesian product of all dimension values. If the number of discrete values creates a very large table of all possible value combinations, the table can instead be populated with value combinations as they are encountered during the load or update process.

7.4 GOALS OF DATA WAREHOUSE ARCHITECTURE
A data warehouse exists to serve its users—analysts and decision makers. A data warehouse must be designed to satisfy the following requirements:
² Deliver a great user experience—user acceptance is the measure of success
² Function without interfering with OLTP systems
² Provide a central repository of consistent data
² Answer complex queries quickly
² Provide a variety of powerful analytical tools such as OLAP and data mining
Most successful data warehouses that meet these requirements have these common characteristics:
² Based on a dimensional model
² Contain historical data
² Include both detailed and summarized data
² Consolidate disparate data from multiple sources while retaining consistency
² Focus on a single subject such as sales, inventory, customers, or finance
Data warehouses are often quite large. However, size is not an architectural goal—it is a characteristic driven by the amount of data needed to serve the users.

7.5 DATA WAREHOUSE USERS
The success of a data warehouse is measured solely by its acceptance by users. Without users, historical data might as well be archived to magnetic tape and stored in the basement. Successful data warehouse design starts with understanding the users and their needs. Data warehouse users can be divided into four categories: statisticians, knowledge workers, information consumers, and executives. Each type makes up a portion of the user population as illustrated in this diagram.

Fig. 7.3 Data warehouse users (Statisticians 2%, Knowledge Workers 15%, Executives, and Information Consumers 82%)

Statisticians: There are typically only a handful of statisticians and operations research types in any organization. Their work can contribute to closed loop systems that deeply influence the operations and profitability of the company.
Knowledge Workers: A relatively small number of analysts perform the bulk of new queries and analyses against the data warehouse. These are the users who get the Designer or Analyst versions of user access tools. They will figure out how to quantify a subject area. After a few iterations, their queries and reports typically get published for the benefit of the Information Consumers. Knowledge Workers are often deeply engaged with the data warehouse design and place the greatest demands on the ongoing data warehouse operations team for training and support.
Information Consumers: Most users of the data warehouse are Information Consumers; they will probably never compose a true ad hoc query. They use static or simple interactive reports that others have developed.

They usually interact with the data warehouse only through the work product of others. This group includes a large number of people, and published reports are highly visible. Set up a great communication infrastructure for distributing information widely, and gather feedback from these users to improve the information sites over time.
Executives: Executives are a special case of the Information Consumers group. Executives use standard reports or ask others to create specialized reports for them.

7.5.1 How Users Query the Data Warehouse?
Information for users can be extracted from the data warehouse relational database or from the output of analytical services such as OLAP or data mining. Direct queries to the data warehouse relational database should be limited to those that cannot be accomplished through existing tools, which are often more efficient than direct queries and impose less load on the relational database. When using the Analysis Services tools in SQL Server 2000, statisticians will often perform data mining, analysts will write MDX queries against OLAP cubes and use data mining, and Information Consumers will use interactive reports designed by others. Statisticians frequently extract data for use by special analytical tools. Analysts may write complex queries to extract and compile specific information not readily accessible through existing tools. Reporting tools and custom applications often access the database directly. Information consumers do not interact directly with the relational database, but may receive e-mail reports or access web pages that expose data from the relational database.

7.6 DESIGN THE RELATIONAL DATABASE AND OLAP CUBES
In this phase, the star or snowflake schema is created in the relational database, surrogate keys are defined, and primary and foreign key relationships are established. Views, indexes, and fact table partitions are also defined. OLAP cubes are designed to support the needs of the users.

7.6.1 Keys and Relationships
Tables are implemented in the relational database after surrogate keys for dimension tables have been defined and primary and foreign keys and their relationships have been identified. Primary/foreign key relationships should be established in the database schema. Integrity constraints provide a mechanism for ensuring that data conforms to guidelines specified by the database administrator. The most common types of constraints include:
² UNIQUE constraints—To ensure that a given column is unique
² NOT NULL constraints—To ensure that no null values are allowed
² FOREIGN KEY constraints—To ensure that two keys share a primary key to foreign key relationship
Constraints can be used in a data warehouse for two purposes: data cleanliness and query optimization.
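A sketch of how such keys and relationships could be declared for the sales star schema shown earlier follows. It assumes that a products dimension table keyed on prod_id exists alongside the customers table defined above (the constraint names are invented, and a similar foreign key would reference a time dimension if one is created):

ALTER TABLE products  ADD CONSTRAINT products_pk  PRIMARY KEY (prod_id);
ALTER TABLE customers ADD CONSTRAINT customers_pk PRIMARY KEY (cust_id);

-- foreign keys on the fact table enforce the one-to-many relationships to the dimensions
ALTER TABLE sales ADD CONSTRAINT sales_prod_fk
  FOREIGN KEY (prod_id) REFERENCES products (prod_id);
ALTER TABLE sales ADD CONSTRAINT sales_cust_fk
  FOREIGN KEY (cust_id) REFERENCES customers (cust_id);

-- the composite primary key of the fact table is made up of its foreign keys
ALTER TABLE sales ADD CONSTRAINT sales_pk
  PRIMARY KEY (prod_id, cust_id, time_id);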

The composite primary key in the fact table is an expensive key to maintain: the index on the primary key is often created as a clustered index, and the index alone is almost as large as the fact table. In many scenarios a clustered primary key provides excellent query performance, but there are scenarios where the primary key index should be clustered and other scenarios where it should not. All other indexes on the fact table use the large clustered index key, so all indexes on the fact table will be large, the system will require significant additional storage space, and query performance may degrade. The larger the number of dimensions in the schema, the less beneficial it is to cluster the primary key index. With a large number of dimensions, it is usually more effective to create a unique clustered index on a meaningless IDENTITY column; as a result, many star schemas are defined with an integer surrogate primary key, or no primary key at all. We recommend that the fact table be defined using the composite primary key, and that an IDENTITY column also be created in the fact table that could be used as a unique clustered index, should the database administrator determine this structure would provide better performance.

7.6.2 Indexes
Dimension tables must be indexed on their primary keys, which are the surrogate keys created for the data warehouse tables. The fact table must have a unique index on the primary key. Elaborate initial design and development of index plans for end-user queries is not necessary with SQL Server 2000, which has sophisticated index techniques and an easy-to-use Index Tuning Wizard tool to tune indexes to the query workload.
The SQL Server 2000 Index Tuning Wizard allows you to select and create an optimal set of indexes and statistics for a database without requiring an expert understanding of the structure of the database, the workload, or the internals of SQL Server. The wizard analyzes a query workload captured in a SQL Profiler trace or provided by an SQL script, and recommends an index configuration to improve the performance of the database. The Index Tuning Wizard provides the following features and functionality:
² It can use the query optimizer to analyze the queries in the provided workload and recommend the best combination of indexes to support the query mix in the workload.
² It analyzes the effects of the proposed changes, including index usage, distribution of queries among tables, and performance of queries in the workload.
² It can recommend ways to tune the database for a small set of problem queries.
² It allows you to customize its recommendations by specifying advanced options, such as disk space constraints.
A recommendation from the wizard consists of SQL statements that can be executed to create new, more effective indexes and, if wanted, drop existing indexes that are ineffective. After the Index Tuning Wizard has suggested a recommendation, it can then be implemented immediately, scheduled as a SQL Server job, or executed manually at a later time. The empirical tuning approach provided by the Index Tuning Wizard can be used frequently when the data warehouse is first implemented to develop the initial index set, and then employed periodically during ongoing operation to maintain indexes in tune with the user query workload.
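Concretely, an initial index set for the sample star schema might include statements like the following (the index names are invented, and the products and customers dimension tables and the sales fact table are the ones assumed earlier):

CREATE UNIQUE INDEX customers_pk_idx ON customers (cust_id);   -- dimension primary key
CREATE UNIQUE INDEX products_pk_idx  ON products  (prod_id);   -- dimension primary key

-- single-column indexes on the fact table foreign keys to support star joins
CREATE INDEX sales_prod_idx ON sales (prod_id);
CREATE INDEX sales_cust_idx ON sales (cust_id);
CREATE INDEX sales_time_idx ON sales (time_id);

On platforms that support them, bitmap indexes on the low-cardinality fact table foreign keys are a common alternative to the plain b-tree indexes sketched here.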

7.6.3 Views
Views should be created for users who need direct access to data in the data warehouse relational database. Users can be granted access to views without having access to the underlying data. View definitions should create column and table names that will make sense to business users. Indexed views can be used to improve performance of user queries that access data through views. Indexed views are recommended on platforms that support their use.

7.7 DATA WAREHOUSING SCHEMAS
A schema is a collection of database objects, including tables, views, indexes, and synonyms. You can arrange schema objects in the schema models designed for data warehousing in a variety of ways. Most data warehouses use a dimensional model. The model of your source data and the requirements of your users help you design the data warehouse schema. You can sometimes get the source model from your company's enterprise data model and reverse-engineer the logical data model for the data warehouse from this. The physical implementation of the logical data warehouse model may require some changes to adapt it to your system parameters—size of machine, number of users, storage capacity, type of network, and software.

7.7.1 Dimensional Model Schemas
The principal characteristic of a dimensional model is a set of detailed business facts surrounded by multiple dimensions that describe those facts. When realized in a database, the schema for a dimensional model contains a central fact table and multiple dimension tables. A dimensional model may produce a star schema or a snowflake schema.

7.7.1.1 Star Schemas
A schema is called a star schema if all dimension tables can be joined directly to the fact table. In the star schema design, a single object (the fact table) sits in the middle and is radially connected to other surrounding objects (dimension lookup tables) like a star. A star schema can be simple or complex: a simple star consists of one fact table, while a complex star can have more than one fact table.

Steps in Designing a Star Schema
² Identify a business process for analysis (like sales).
² Identify measures or facts (sales dollar).
² Identify dimensions for facts (product dimension, location dimension, time dimension, organization dimension).
² List the columns that describe each dimension (region name, branch name, subregion name).
² Determine the lowest level of summary in a fact table (sales dollar).
The following diagram shows a classic star schema.

Fig. 7.4 Star schema (a central Sales fact table, with the measures amount sold and quantity sold, surrounded by the Products, Customers, Times and Channels dimension tables)

Example – Star Schema with Time Dimension

Fig. 7.5 Example of star schema (a SALES FACT table holding Sales Dollar, with foreign keys to the PRODUCT, ORGANISATION, LOCATION and TIME dimension tables; each dimension table carries a surrogate primary key, its descriptive attributes and a date time stamp)

Hierarchy
A logical structure that uses ordered levels as a means of organizing data. A hierarchy can be used to define data aggregation; for example, in a time dimension, a hierarchy might be used to aggregate data from the month level to the quarter level, and from the quarter level to the year level. A hierarchy can also be used to define a navigational drill path, regardless of whether the levels in the hierarchy represent aggregated totals or not.
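To make the example in Fig. 7.5 concrete, the star schema could be declared roughly as follows. This is only an illustrative sketch in Oracle SQL; the table and column names are simplified assumptions, not definitions taken from the text:

CREATE TABLE time_dimension (
  time_dim_id     NUMBER PRIMARY KEY,   -- surrogate key
  year_number     NUMBER,
  quarter_number  NUMBER,
  month_number    NUMBER,
  calendar_date   DATE);

CREATE TABLE product_dimension (
  product_dim_id        NUMBER PRIMARY KEY,
  product_category_name VARCHAR2(50),
  product_name          VARCHAR2(50));

CREATE TABLE location_dimension (
  location_dim_id NUMBER PRIMARY KEY,
  country_name    VARCHAR2(50),
  state_name      VARCHAR2(50),
  city_name       VARCHAR2(50));

CREATE TABLE sales_fact (
  time_dim_id     NUMBER REFERENCES time_dimension (time_dim_id),
  product_dim_id  NUMBER REFERENCES product_dimension (product_dim_id),
  location_dim_id NUMBER REFERENCES location_dimension (location_dim_id),
  sales_dollar    NUMBER(12,2));        -- the measure

Each dimension table joins directly to the fact table, which is what makes this a star rather than a snowflake.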

Level
A position in a hierarchy. For example, a time dimension might have a hierarchy that represents data at the month, quarter, and year levels.

Fact Table
A table in a star schema that contains facts and is connected to dimensions. A fact table typically has two types of columns: those that contain facts and those that are foreign keys to dimension tables. The primary key of a fact table is usually a composite key that is made up of all of its foreign keys. A fact table might contain either detail-level facts or facts that have been aggregated (fact tables that contain aggregated facts are often instead called summary tables). A fact table usually contains facts with the same level of aggregation.

In the example of Fig. 7.5, the sales fact table is connected to the dimensions location, product, time and organization. It shows that data can be sliced across all dimensions, and it is also possible for the data to be aggregated across multiple dimensions. "Sales Dollar" in the sales fact table can be calculated across all dimensions independently or in a combined manner, as explained below (a sample query follows the list):
• Sales dollar value for a particular product
• Sales dollar value for a product in a location
• Sales dollar value for a product in a year within a location
• Sales dollar value for a product in a year within a location, sold or serviced by an employee
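As an illustration of the third case, a hypothetical query against the sketch tables shown earlier (not a query given in the text) would return the sales dollar value for each product, per year, within each location:

SELECT p.product_name,
       t.year_number,
       l.city_name,
       SUM(f.sales_dollar) AS sales_dollar
FROM   sales_fact f
JOIN   product_dimension  p ON f.product_dim_id  = p.product_dim_id
JOIN   time_dimension     t ON f.time_dim_id     = t.time_dim_id
JOIN   location_dimension l ON f.location_dim_id = l.location_dim_id
GROUP BY p.product_name, t.year_number, l.city_name;

Dropping a join and its GROUP BY columns gives the coarser aggregations in the list above.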

7.7.1.2 Snowflake Schemas
A schema is called a snowflake schema if one or more dimension tables do not join directly to the fact table but must join through other dimension tables. For example, a dimension that describes products may be separated into three tables (snowflaked). The snowflake schema is an extension of the star schema where each point of the star explodes into more points. The main advantage of the snowflake schema is the improvement in query performance due to minimized disk storage requirements and joining smaller lookup tables. The main disadvantage of the snowflake schema is the additional maintenance effort needed due to the increased number of lookup tables.

Fig. 7.6 A snowflake schema (a Sales fact table joined to the Times, Customers, Channels and Products dimensions, with Suppliers joining through Products and Countries joining through Customers)

Example—Snowflake Schema
In the snowflake schema, the example diagram shown below has 4 dimension tables, 4 lookup tables and 1 fact table. The reason is that the hierarchies (category, branch, state, and month) are broken out of the dimension tables (PRODUCT, ORGANIZATION, LOCATION, and TIME) respectively and shown separately.

Fig. 7.7 Example of snowflake schema (the SALES FACT table joins to the PRODUCT, ORGANISATION, LOCATION and TIME dimensions, and each dimension in turn joins to its own lookup table: PRODUCT CATEGORY, BRANCH, STATE and MONTH)
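To show what "snowflaking" a dimension means in DDL terms, the product dimension of Fig. 7.7 could be split out as below. This is again an illustrative sketch with assumed names (a variant of the flat product_dimension shown earlier), not code from the text:

CREATE TABLE product_category_lookup (
  product_category_code NUMBER PRIMARY KEY,
  product_category_name VARCHAR2(50));

CREATE TABLE product_dimension (
  product_dim_id        NUMBER PRIMARY KEY,
  product_category_code NUMBER REFERENCES product_category_lookup (product_category_code),
  product_name          VARCHAR2(50));

A query that filters on category now has to join the fact table to product_dimension and then to product_category_lookup, which is the extra join cost discussed in this section.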

7.7.1.3 Important Aspects of Star Schema & Snowflake Schema
• In a star schema every dimension will have a primary key.
• In a star schema, a dimension table will not have any parent table.
• Whereas in a snowflake schema, a dimension table will have one or more parent tables.
• Hierarchies for the dimensions are stored in the dimensional table itself in a star schema.
• Whereas hierarchies are broken into separate tables in a snowflake schema. These hierarchies help to drill down the data from the topmost hierarchies to the lowermost hierarchies.

In a few organizations, they try to normalize the dimension tables to save space. Since dimension tables hold less space, the snowflake schema approach may be avoided. In OLAP, this snowflake schema approach increases the number of joins and gives poor performance in retrieval of data.

EXERCISE
1. Explain data warehouse architecture.
2. Write an essay on schemas.
3. Define fact and dimension tables.
4. Discuss the use of a data warehouse.
5. Who are data warehouse users? Explain.
6. How will you design OLAP cubes?

CHAPTER 8
PARTITIONING IN DATA WAREHOUSE

8.1 INTRODUCTION
Data warehouses often contain large tables and require techniques both for managing these large tables and for providing good query performance across these large tables. The objective of partitioning is to meet manageability and performance related requirements. These topics are discussed:
• Hardware partitioning
• Software partitioning

8.2 HARDWARE PARTITIONING
When a system is constrained by I/O capabilities, it is I/O bottlenecked. When a system is constrained by having limited CPU resources, it is CPU bottlenecked. Database architects frequently use RAID (Redundant Arrays of Inexpensive Disks) systems to overcome I/O bottlenecks and to provide higher availability. RAID is a storage technology often used for databases larger than a few gigabytes. RAID can be implemented in several levels, ranging from 0 to 7. RAID can provide both performance and fault tolerance benefits. A variety of RAID controllers and disk configurations offer tradeoffs among cost, performance, and fault tolerance.

Performance
Hardware RAID controllers divide read/writes of all data from Windows NT 4.0 and Windows 2000 and applications (like SQL Server) into slices (usually 16–128 KB) that are then spread across all disks participating in the RAID array. Splitting data across physical drives like this has the effect of distributing the read/write I/O workload evenly across all physical hard drives participating in the RAID array. This increases disk I/O performance because the hard disks participating in the RAID array, as a whole, are kept equally busy, instead of some disks becoming a bottleneck due to uneven distribution of the I/O requests.

Fault Tolerance
RAID also provides protection from hard disk failure and accompanying data loss by using two methods: mirroring and parity.

Mirroring
It is implemented by writing information onto a second (mirrored) set of drives. If there is a drive loss with mirroring in place, the data for the lost drive can be rebuilt by replacing the failed drive and rebuilding the mirror set. Most RAID controllers provide the ability to do this failed drive replacement and re-mirroring while Windows NT 4.0 and Windows 2000 and the RDBMS are online. Such RAID systems are commonly referred to as "Hot Plug" capable drives.
Advantage
It offers the best performance among RAID options if fault tolerance is required. Bear in mind that each RDBMS write to the mirrorset results in two disk I/O operations, once to each side of the mirrorset. Another advantage is that mirroring provides more fault tolerance than parity RAID implementations. Mirroring can enable the system to survive at least one failed drive and may be able to support the system through failure of up to half of the drives in the mirrorset without forcing the system administrator to shut down the server and recover from the file backup.
Disadvantage
The disk cost of mirroring is one extra drive for each drive worth of data. This essentially doubles your storage cost, which, for a data warehouse, is often one of the most expensive components needed. Both RAID 1 and its hybrid, RAID 0+1 (sometimes referred to as RAID 10 or 0/1), are implemented through mirroring.
Parity
It is implemented by calculating recovery information about data written to disk and writing this parity information on the other drives that form the RAID array. If a drive should fail, a new drive is inserted into the RAID array and the data on that failed drive is recovered by taking the recovery information (parity) written on the other drives and using this information to regenerate the data from the failed drive. RAID 5 and its hybrids are implemented through parity.
Advantage
The main plus of parity is cost. To protect any number of drives with RAID 5, only one additional drive is required. Parity information is evenly distributed among all drives participating in the RAID 5 array.
Disadvantages
The minuses of parity are performance and fault tolerance. Due to the additional costs associated with calculating and writing parity, RAID 5 requires four disk I/O operations for each write, compared to two disk I/O operations for mirroring. Read I/O operation costs are the same for mirroring and parity. Parity RAID implementations, however, can usually survive only one failed drive before the array must be taken offline and recovery from backup media must be performed to restore data.
General Rule of Thumb: Be sure to stripe across as many disks as necessary to achieve solid disk I/O performance. System Monitor will indicate if there is a disk I/O bottleneck on a particular RAID array. Be ready to add disks and redistribute data across RAID arrays and/or small computer system interface (SCSI) channels as necessary to balance disk I/O and maximize performance.

Effect of On-Board Cache of Hardware RAID Controllers
Many hardware RAID controllers have some form of read and/or write caching. This available caching with SQL Server can significantly enhance the effective I/O handling capacity of the disk subsystem. The principle of these controller-based caching mechanisms is to gather smaller and potentially nonsequential I/O requests coming in from the host server and try to batch them together with other I/O requests for a few milliseconds so that the batched I/Os can form larger (32–128 KB) and maybe sequential I/O requests to send to the hard drives. In keeping with the principle that sequential and larger I/O is good for performance, this helps produce more disk I/O throughput given the fixed number of I/Os that hard disks are able to provide to the RAID controller. It is not that the RAID controller caching magically allows the hard disks to process more I/Os per second; rather, the RAID controller cache is using some organization to arrange incoming I/O requests to make the best possible use of the underlying hard disks' fixed amount of I/O processing ability. These RAID controllers usually protect their caching mechanism with some form of backup power. This backup power can help preserve the data written in cache for some period of time (perhaps days) in case of a power outage. If the database server is also supported by an uninterruptible power supply (UPS), the RAID controller has more time and opportunity to flush data to disk in the event of power disruption. Although a UPS for the server does not directly affect performance, it does provide protection for the performance improvement supplied by RAID controller caching.

8.3 RAID LEVELS
Level 0
This level is also known as disk striping because of its use of a disk file system called a stripe set. Data is divided into blocks and spread in a fixed order among all disks in an array. RAID 0 improves read/write performance by spreading operations across multiple disks, so that operations can be performed independently and simultaneously. RAID 0 is similar to RAID 5, except RAID 5 also provides fault tolerance. The following illustration shows RAID 0.

Fig. 8.1 RAID level 0 (data blocks striped in a fixed order across four disks)

Level 1
This level is also known as disk mirroring because it uses a disk file system called a mirror set. Disk mirroring provides a redundant, identical copy of a selected disk. All data written to the primary disk is written to the mirror disk. RAID 1 provides fault tolerance and generally improves read performance (but may degrade write performance). The following illustration shows RAID 1.

Fig. 8.2 RAID level 1 (disk 2 holds an identical, mirrored copy of the blocks on disk 1)

Level 2
This level adds redundancy by using an error correction method that spreads parity across all disks. It also employs a disk-striping strategy that breaks a file into bytes and spreads it across multiple disks. This strategy offers only a marginal improvement in disk utilization and read/write performance over mirroring (RAID 1). RAID 2 is not as efficient as other RAID levels and is not generally used.

Fig. 8.3 RAID level 2 (data blocks striped across five disks with parity blocks distributed among them)

Level 3
This level uses the same striping method as RAID 2, but the error correction method requires only one disk for parity data. Use of disk space varies with the number of data disks. RAID 3 provides some read/write performance improvement. RAID 3 also is rarely used.

Level 4
This level employs striped data in much larger blocks or segments than RAID 2 or RAID 3. Like RAID 3, the error correction method requires only one disk for parity data. It keeps user data separate from error-correction data. RAID 4 is not as efficient as other RAID levels and is not generally used.

Level 5
Also known as striping with parity, this level is the most popular strategy for new designs. It is similar to RAID 4 because it stripes the data in large blocks across the disks in an array. It differs in how it writes the parity across all the disks. Data redundancy is provided by the parity information. The data and parity information are arranged on the disk array so the two are always on different disks. Striping with parity offers better performance than disk mirroring (RAID 1). However, when a stripe member is missing (for example, when a disk fails), read performance degrades. RAID 5 is one of the most commonly used RAID configurations. The following illustration shows RAID 5.

Fig. 8.4 RAID level 5 (data blocks and parity information distributed across all disks in the array)

Level 10 (1 + 0)
This level is also known as mirroring with striping. This level uses a striped array of disks, which are then mirrored to another identical set of striped disks. For example, a striped array can be created using four disks; the striped array of disks is then mirrored using another set of four striped disks. RAID 10 provides the performance benefits of disk striping with the disk redundancy of mirroring. RAID 10 provides the highest read/write performance of any of the RAID levels, at the expense of using twice as many disks. The following illustration shows RAID 10 as two identical four-disk striped arrays, one mirroring the other.

As mentioned above, the best disk I/O performance is achieved with RAID 0 (disk striping with no fault tolerance protection). Because RAID 0 provides no fault tolerance protection, it should never be used in a production environment, and it is not recommended for development environments; RAID 0 is typically used only for benchmarking or testing. RAID 1 and RAID 0+1 offer the best data protection and best performance among RAID levels, but cost more in terms of disks required. RAID 5 costs less than RAID 1 or RAID 0+1 but provides less fault tolerance and less write performance. The write performance of RAID 5 is only about half that of RAID 1 or RAID 0+1 because of the additional I/O needed to read and write parity information. When the cost of hard disks is not a limiting factor, RAID 1 or RAID 0+1 are the best choices in terms of both performance and fault tolerance.

Fig. 8.5 Comparison of different RAID levels (how data and fault tolerance information are laid out across drives under RAID 0, RAID 1, RAID 5 and RAID 0+1)


Many RAID array controllers provide the option of RAID 0+1 (also referred to as RAID 1/0 and RAID 10) over physical hard drives. RAID 0+1 is a hybrid RAID solution. On the lower level, it mirrors all data just like normal RAID 1. On the upper level, the controller stripes data across all of the drives (like RAID 0). Thus, RAID 0+1 provides maximum protection (mirroring) with high performance (striping). These striping and mirroring operations are transparent to Windows and the RDBMS because they are managed by the RAID controller. The difference between RAID 1 and RAID 0+1 is at the hardware controller level. RAID 1 and RAID 0+1 require the same number of drives for a given amount of storage. For more information on the RAID 0+1 implementation of specific RAID controllers, contact the hardware vendor that produced the controller. Fig. 8.5 shows the differences between RAID 0, RAID 1, RAID 5, and RAID 0+1.
Note: In the illustration above, in order to hold four disks worth of data, RAID 1 (and RAID 0+1) need eight disks, whereas RAID 5 only requires five disks. Be sure to involve your storage vendor to learn more about their specific RAID implementation.

8.4 SOFTWARE PARTITIONING METHODS
• Range Partitioning
• Hash Partitioning
• Index Partitioning
• Composite Partitioning
• List Partitioning

Range Partitioning
Range partitioning maps data to partitions based on ranges of partition key values that you establish for each partition. It is the most common type of partitioning and is often used with dates. For example, you might want to partition sales data into weekly, monthly or yearly partitions. Range partitioning maps rows to partitions based on ranges of column values. Range partitioning is defined by the partitioning specification for a table or index, PARTITION BY RANGE (column_list), and by the partitioning specification for each individual partition, VALUES LESS THAN (value_list), where:
column_list is an ordered list of columns that determines the partition to which a row or an index entry belongs. These columns are called the partitioning columns. The values in the partitioning columns of a particular row constitute that row's partitioning key.
value_list is an ordered list of values for the columns in the column list. Only the VALUES LESS THAN clause is allowed; this clause specifies a non-inclusive upper bound for the partitions. All partitions, except the first, have an implicit low value specified by the VALUES LESS THAN literal on the previous partition. Any binary values of the partition key equal to or higher than this literal are added to the next higher partition, the highest partition being the one where the MAXVALUE literal is defined. The keyword MAXVALUE represents a virtual infinite value that sorts higher than any other value for the data type, including the null value.
The statement below creates a table sales_range_part that is range partitioned on the sales_date field, using Oracle.


CREATE TABLE sales_range_part
  (sales_man_id  NUMBER(6),
   emp_name      VARCHAR2(25),
   sales_amount  NUMBER(10),
   sales_date    DATE)
PARTITION BY RANGE (sales_date)
  (PARTITION sales_jan2006 VALUES LESS THAN (TO_DATE('02/01/2006', 'MM/DD/YYYY')),
   PARTITION sales_feb2006 VALUES LESS THAN (TO_DATE('03/01/2006', 'MM/DD/YYYY')),
   PARTITION sales_mar2006 VALUES LESS THAN (TO_DATE('04/01/2006', 'MM/DD/YYYY')),
   PARTITION sales_apr2006 VALUES LESS THAN (TO_DATE('05/01/2006', 'MM/DD/YYYY')));

Hash Partitioning
Hash partitioning maps data to partitions based on a hashing algorithm that Oracle applies to a partitioning key that you identify. The hashing algorithm evenly distributes rows among partitions, giving partitions approximately the same size. Hash partitioning is the ideal method for distributing data evenly across devices. Hash partitioning is a good and easy-to-use alternative to range partitioning when data is not historical and there is no obvious column or column list where logical range partition pruning can be advantageous. Oracle uses a linear hashing algorithm and, to prevent data from clustering within specific partitions, you should define the number of partitions by a power of two (for example, 2, 4, 8). The statement below creates a table sales_hash, which is hash partitioned on the salesman_id field. data1, data2, data3, and data4 are tablespace names.

CREATE TABLE sales_hash
  (salesman_id   NUMBER(5),
   emp_name      VARCHAR2(25),
   sales_amount  NUMBER(10),
   week_no       NUMBER(2))
PARTITION BY HASH (salesman_id)
PARTITIONS 4 STORE IN (data1, data2, data3, data4);

List Partitioning
List partitioning enables you to explicitly control how rows map to partitions. You do this by specifying a list of discrete values for the partitioning column in the description of each partition. This is different from range partitioning, where a range of values is associated with a partition, and from hash partitioning, where you have no control over the row-to-partition mapping. The advantage of list partitioning is that you can group and organize unordered and unrelated sets of data in a natural way.

CREATE TABLE sales_list
  (salesman_id   NUMBER(5),
   emp_name      VARCHAR2(25),
   sales_area    VARCHAR2(20),
   sales_amount  NUMBER(10),
   sales_date    DATE)
PARTITION BY LIST (sales_area)
  (PARTITION sales_north VALUES IN ('Delhi', 'Mumbai', 'Kolkatta'),
   PARTITION sales_south VALUES IN ('Bangalore', 'Chennai', 'Hyderabad'));


Composite Partitioning
Composite partitioning combines range and hash partitioning. Oracle first distributes data into partitions according to boundaries established by the partition ranges. Then Oracle uses a hashing algorithm to further divide the data into subpartitions within each range partition.

CREATE TABLE sales
  (s_productid   NUMBER,
   s_saledate    DATE,
   s_custid      NUMBER,
   s_totalprice  NUMBER)
PARTITION BY RANGE (s_saledate)
SUBPARTITION BY HASH (s_productid) SUBPARTITIONS 16
  (PARTITION sal98q1 VALUES LESS THAN (TO_DATE('01-APR-1998', 'DD-MON-YYYY')),
   PARTITION sal98q2 VALUES LESS THAN (TO_DATE('01-JUL-1998', 'DD-MON-YYYY')),
   PARTITION sal98q3 VALUES LESS THAN (TO_DATE('01-OCT-1998', 'DD-MON-YYYY')),
   PARTITION sal98q4 VALUES LESS THAN (TO_DATE('01-JAN-1999', 'DD-MON-YYYY')));

Index Partitioning
You can choose whether or not to inherit the partitioning strategy of the underlying tables. You can create both local and global indexes on a table partitioned by range, hash, or composite methods. Local indexes inherit the partitioning attributes of their related tables. For example, if you create a local index on a composite table, Oracle automatically partitions the local index using the composite method.
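For instance, a minimal sketch assuming the sales_range_part table created earlier in this chapter: a local index that inherits the range partitioning can be created with

CREATE INDEX sales_range_part_idx
  ON sales_range_part (sales_date)
  LOCAL;

Each index partition then covers exactly one table partition and can be rebuilt or dropped together with it.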

EXERCISE
1. Why do we need partitioning? Explain.
2. Explain the different types of partitioning with suitable examples.
3. Write a short note on RAID technology.

CHAPTER 9
DATA MART AND META DATA

9.1 INTRODUCTION
A data warehouse is a relational/multidimensional database that is designed for query and analysis rather than transaction processing. It separates analysis workload from transaction workload and enables a business to consolidate data from several sources. A data warehouse usually contains historical data that is derived from transaction data. In addition to a relational/multidimensional database, a data warehouse environment often consists of an ETL solution, an OLAP engine, client analysis tools, and other applications that manage the process of gathering data and delivering it to business users. Because each data warehousing effort is unique, your company's data warehousing environment may differ slightly from what we are about to introduce.
There are three types of data warehouses:
1. Enterprise Data Warehouse - An enterprise data warehouse provides a central database for decision support throughout the enterprise.
2. ODS (Operational Data Store) - This has a broad enterprise-wide scope, but unlike the real enterprise data warehouse, data is refreshed in near real time and used for routine business activity.
3. Data Mart - A data mart is a subset of the data warehouse and it supports a particular region, business unit or business function.

9.2 DATA MART
A data warehouse is a cohesive data model that defines the central data repository for an organization. A data mart is a data repository for a specific user group. It contains summarized data that the user group can easily understand, process, and apply. A data mart cannot stand alone; it requires a data warehouse. Data warehouses and data marts are built on dimensional data modeling where fact tables are connected with dimension tables. Each data mart is a collection of tables organized according to the particular requirements of a user or group of users. A database can be visualized as a cube of several dimensions, and a data warehouse provides an opportunity for slicing and dicing that cube along each of its dimensions; this is most useful for users who need to access the data. Retrieving a collection of different kinds of data from a "normalized" warehouse can be complex and time-consuming.

Indeed, it may not even make sense. Hence the need to rearrange the data so they can be retrieved more easily. The notion of a "mart" suggests that it is organized for the ultimate consumers — with the potato chips and video tapes all next to each other. This organization does not have to follow any particular inherent rules or structures. And however the marts are organized initially, the requirements are almost certain to change once the user has seen the implications of the request. This means that the creation of data marts requires:
• Understanding of the business involved.
• Responsiveness to the user's stated objectives.
• Sufficient facility with database modeling and design to produce new tables quickly.
• Tools to convert models into data marts quickly.

Fig. 9.1 Data warehouse and data marts (a central data warehouse feeding Data Mart 1, Data Mart 2, …, Data Mart N)

9.3 META DATA
For example, consider a bank that has several branches in several countries, has millions of customers, and whose lines of business are savings and loans. In order to store data, over the years, many application designers in each branch have made their individual decisions as to how an application and database should be built. So source systems will be different in naming conventions, variable measurements, encoding structures, and physical attributes of data. The following example explains how the data is integrated from source systems to target systems.

Example of Source Data

System name       Attribute name              Column name                  Datatype        Values
Source System 1   Customer Application Date   CUSTOMER_APPLICATION_DATE    NUMERIC(8,0)    11012005
Source System 2   Customer Application Date   CUST_APPLICATION_DATE        DATE            11012005
Source System 3   Application Date            APPLICATION_DATE             DATE            01NOV2005

In the aforementioned example, attribute name, column name, datatype and values are entirely different from one source system to another. This inconsistency in data can be avoided by integrating the data into a data warehouse with good standards.

Example of Target Data (Data Warehouse)

Target system   Attribute name              Column name                  Datatype   Values
Record #1       Customer Application Date   CUSTOMER_APPLICATION_DATE    DATE       01112005
Record #2       Customer Application Date   CUSTOMER_APPLICATION_DATE    DATE       01112005
Record #3       Customer Application Date   CUSTOMER_APPLICATION_DATE    DATE       01112005

In the above example of target data, attribute names, column names, and datatypes are consistent throughout the target system. This is how data from various source systems is integrated and accurately stored into the data warehouse.

Meta data is literally "data about data". It describes the kind of information in the warehouse, where it comes from, where it is stored, how it relates to other information, and how it is related to the business. A data warehouse stores current and historical data from disparate operational systems (i.e., transactional databases) in one consolidated system where data is cleansed and restructured to support data analysis. In order to be effective, the user of the data warehouse must have access to meta data that is accurate and up to date. Without a good source of meta data to operate from, the job of the analyst is much more difficult and the work required for analysis is compounded significantly.

We intend to create a design process for creating a data warehouse through the development of a meta data system. The meta data system we intend to create will provide a framework to organize and design the data warehouse. Using a systems engineering approach, this process will be used initially to help define the data warehouse requirements, and it will then be used iteratively during the life of the data warehouse to update and integrate new dimensions of the warehouse. The topic of standardizing meta data across various products and applying a systems engineering approach to this process in order to facilitate data warehouse design is what this project intends to address.
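As a small illustration of the kind of transformation this integration implies (purely a sketch with hypothetical staging table and column names, written in Oracle SQL), the differing source date formats above could be standardized into the target column like this:

-- Source System 1 stores the date as NUMERIC(8,0), assumed here to be in MMDDYYYY order
INSERT INTO dw_customer (customer_application_date)
SELECT TO_DATE(TO_CHAR(cust_application_date), 'MMDDYYYY')
FROM   src1_customer;

-- Source System 3 stores the date as the string '01NOV2005'
INSERT INTO dw_customer (customer_application_date)
SELECT TO_DATE(application_date, 'DDMONYYYY')
FROM   src3_customer;

Once loaded, every record carries the same attribute name, column name and datatype, as in the target table shown above.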

Understanding the role of the enterprise data model relative to the data warehouse is critical to a successful design and implementation of a data warehouse and its meta data management. Effective tools must be used to properly store and use the meta data generated by the various systems. Meta data itself is arguably the most critical element in effective data management.

For years, data has accumulated in the corporate world as a by-product of transaction processing and other uses of the computer. For the most part, the data that has accumulated has served only the immediate set of requirements for which it was initially intended. The data warehouse presents an alternative in data processing that represents the integrated information requirements of the enterprise. It is designed to support analytical processing for the entire business organization. Analytical processing looks across either various pieces of or the entire enterprise and identifies trends and patterns otherwise not apparent. These trends and patterns are absolutely vital to the vision of management in directing the organization.

Some specific challenges which involve the physical design tradeoffs of a data warehouse include:
• Granularity of data - refers to the level of detail held in the unit of data
• Partitioning of data - refers to the breakup of data into separate physical units that can be handled independently
• Performance issues
• Data structures inside the warehouse
• Migration

9.3.1 Data Transformation and Load
Meta data may be used during transformation and load to describe the source data and any changes that need to be made when moving it from source to data warehouse. Whether or not you need to store any meta data for this process will depend on the complexity of the transformations that are to be applied to the data. It will also depend on the number of source systems and the type of each system. For each source data field the following information is required:
Source field
Unique identifier, name, type, location (system, object). A unique identifier is required to avoid any confusion occurring between two fields of the same name from different sources. Name is its local field name. Type is the storage type of the data. Location describes both the system the data comes from and the object that contains it.
Destination
Unique identifier, name, type. The destination field needs to be described in a similar way to the source. A unique identifier is needed to help avoid any confusion between commonly used field names. Name is the designation that the field will have in the base data within the data warehouse. Base data is used to distinguish between the original data load destination and any copies.

Type is the database data type, such as char, varchar or number.
Transformation(s)
Name, language (module name, syntax). Transformation needs to be applied to turn the source data into the destination data. Name is a unique identifier that differentiates this from any other similar transformations. The language attribute contains the name of the language that the transformation is written in. The other attributes are module name and syntax.

9.3.2 Data Management
Meta data is required to describe the data as it resides in the data warehouse. This is needed by the warehouse manager to allow it to track and control all data movements. Every object in the database needs to be described. Meta data is needed for all of the following:
Tables: columns (name, type)
Indexes: columns (name, type)
Views: columns (name, type)
Constraints: name, type, table (columns)
To this end we will need to store meta data like the following for each field:
Field
Unique identifier, field name, table name. Table name is the name of the table the field will be part of.
For each table the following information needs to be stored:
Table
Table name, columns (column_name, type)
Aggregations are a special case of tables, and they require the following meta data to be stored:
Aggregation
Aggregation name, description, columns (column_name, type, reference identifier, aggregation)
The functions listed below are examples of aggregation functions: min, max, average, sum.
Partitions are subsets of a table. They are described in much the same way as tables, but they also need the information on the partition key and the data range that resides in the partition:
Partition
Partition name, table name (range allowed, range contained, partition key (reference identifier))
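A repository for this kind of field-level meta data could itself be held in ordinary tables. The following is only a sketch; the table and column names are invented for illustration and are not prescribed by the text:

CREATE TABLE source_field_meta
  (field_id       NUMBER PRIMARY KEY,   -- unique identifier
   field_name     VARCHAR2(30),         -- local field name in the source
   data_type      VARCHAR2(30),         -- storage type of the data
   source_system  VARCHAR2(30),         -- location: system the data comes from
   source_object  VARCHAR2(30));        -- location: object containing the field

CREATE TABLE transformation_meta
  (transform_name VARCHAR2(30) PRIMARY KEY,
   language_name  VARCHAR2(30),
   module_name    VARCHAR2(30),
   syntax_text    VARCHAR2(4000));

The warehouse manager (or an ETL tool) can then read these tables to generate and track each load.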

9.4 LEGACY SYSTEMS
The task of documenting legacy systems and mapping their columns to those of the central data warehouse is the worst part of any data warehouse project. Moreover, the particular characteristics of the project vary, depending on the nature, technology, and sites of those systems. Essential Strategies, Inc. has extended Oracle's Designer/2000 product to allow for reverse engineering of legacy systems—specifically providing the facility for mapping each column of a legacy system to an attribute of the corporate data model. Combining this with the mapping of each column in a data mart to an attribute of the data model provides a direct mapping of the original data to the view of them seen by users.

EXERCISE
1. What is a data mart? Give an example.
2. Define meta data.
3. What is a legacy system?
4. Explain data management in meta data.

CHAPTER 10
BACKUP AND RECOVERY OF THE DATA WAREHOUSE

10.1 INTRODUCTION
Backup and recovery is one of the most important parts of running a data warehouse. Hardware, software, network and process failures, human errors, system failures and other unforeseen disasters can make your work vanish into digital nothingness in no time. If such a failure affects the operation of a database system, it is required to recover the database and return to normal operation as quickly as possible. If your data is precious to you, then treat it accordingly. It makes sense to back up your data on a regular basis. Consider how these important issues affect the backup plan:
• Realize that the restore is almost more important than the backup
• Create a hot backup
• Move your backup offline
• Perfect the connection between your production and backup servers
• Consider a warm backup
• Always reassess the situation

10.2 TYPES OF BACKUP
The definitions below should help to clarify the issue. These definitions apply to the backup of the database.
Complete Backup—The entire database is backed up at the same time. This includes all database data files and the journal files. There are also utilities that can back up entire partitions along with the operating system and everything else.
Partial Backup—Any backup that is not complete.
Cold Backup—Backup that is taken while the database is completely shut down. In a multi-instance environment, all instances of that database must be shut down.
Hot Backup—Any backup that is not cold is considered to be hot. The terminology comes from the fact that the database engine is up and running, hence hot. There are special requirements that need to be observed if the hot backup is to be valid. These will vary from RDBMS to RDBMS.
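For instance, in Oracle a hot backup of a tablespace can be taken while the database is running by bracketing an operating-system file copy as shown below. This is a generic sketch of the technique (the tablespace name is an assumption), not a procedure given in the text:

ALTER TABLESPACE users BEGIN BACKUP;
-- copy the tablespace's data files at the operating system level here
ALTER TABLESPACE users END BACKUP;

While a tablespace is in backup mode the database keeps extra redo information, which is one of the "special requirements" that make a hot backup valid.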

10.3 BACKUP THE DATA WAREHOUSE
An Oracle RDBMS is basically a collection of physical database files. Three types of files must be backed up: database files, control files, and online redo log files. If you omit any of these files, you have not made a successful backup of the database. Backup and recovery problems are most likely to occur at this level. Each type of backup has its advantages and disadvantages. Cold backups shut down the database; hot backups take backups while the database is functioning. There are also supplemental backup methods, such as exports. The major types of instance recovery are cold restore, full database recovery, time-based recovery, and cancel-based recovery.

Putting in place a backup and recovery plan for data warehouses is imperative. Even though most of the data comes from operational systems originally, you cannot always rebuild data warehouses in the event of a media failure (or a disaster). As operational data ages, it is removed from the operational databases, but it may still exist in the data warehouse. Furthermore, data warehouses often contain external data that, if lost, may have to be purchased.

Oracle provides a server-managed infrastructure for backup, restore, and recovery tasks that enables simpler, safer operations at terabyte scale. Some of the highlights are:
• Details related to backup, restore, and recovery operations are maintained by the server in a recovery catalog and automatically used as part of these operations. This reduces administrative burden and minimizes the possibility of human errors.
• Backup and recovery operations are fully integrated with partitioning. Individual partitions, when placed in their own tablespaces, can be backed up and restored independently of the other partitions of a table.
• Oracle includes support for incremental backup and recovery, enabling operations to be completed efficiently within times proportional to the amount of changes, rather than the overall size of the database.
• The backup and recovery technology is highly scalable. This provides for efficient operations that can scale up to handle very large volumes of data, and it provides tight interfaces to industry-leading media management subsystems.

10.3.1 Create a Hot Backup
The best solution for critical systems is always to restore the backup once it has been made onto another machine. This will also provide you with a "hot" backup that is ready to be used, should your production database ever crash. If you can't afford this higher cost, you should at least do periodic test restores. We allow you to back up any number of computers to your account; you only pay a small one-time fee for each additional computer, and all computers will share the common pool of storage space. Keeping a local backup allows you to save data well in excess of the storage that you purchase for online backup, as do other backup products designed for local backup.
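As a concrete illustration of the incremental backup support mentioned above (a generic Oracle Recovery Manager sketch rather than a procedure prescribed by the text), a level 0 base backup followed by periodic level 1 backups could look like:

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;

The level 0 run takes a full baseline; subsequent level 1 runs copy only the blocks changed since then, which is why the time taken is proportional to the amount of change rather than to the size of the warehouse.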

10.3.2 Consider a Warm (Not Hot) Backup
Hot backups are great because they are extremely current. The minute data is changed on the primary server, you propagate that change to your backup server. But that very same close binding also comes with its own dangers. Consider what happens if someone accidentally deletes the wrong data—say an administrator accidentally drops the accounts table in the database. (I'm not making this up; this actually happened at a client of mine.) The inadvertent deletion will immediately get propagated to the backup. After you install hot backups and build a solid network, and if every second of downtime is expensive, you should consider creating a copy of the database that is kept "warm"—i.e., fairly recent but not as recent as your hot backup. For example, you might set the hot backup within a minute of the primary server, but deliberately keep your warm backup 30 minutes behind your primary. That way, you only have to restore the last several transaction logs rather than start from scratch.

10.3.3 Tape Backup Storage
• Tape backup of customers' data on SAN
• Tape backup of Veritas disk storage
• Tape backup of customers' data direct from network
• Tape archiving
• Off-site tape storage

10.3.4 What is SureWest Broadband's Online Backup Service?
It is an Internet-based service that allows computer users to routinely back up and recover their data using a secure connection and safely store it in the SureWest Broadband Data Storage Warehouse until it is needed. SureWest Broadband has licensed backup software from NovaStor, a leader in the backup business, to enable users to make backup copies of their data.
If you use your computer for business or personal use, you need to always have a backup of your data files in the event of software, hardware or human error. Simple data backup can be done by saving your data to multiple locations, copying the data from its original location to removable media, another hard drive, or another computer's hard drive on the network. Ideally you should keep a copy of this data at some location other than your home or place of business. If your house burns down and your backups are sitting on the shelf next to your computer, they too will be lost. You also need to have multiple versions of your information backed up. If you are working on a spreadsheet over a period of several days, it's best to keep a snapshot of every day's work. You need to make backups of your important data to prevent a total loss in the event of any kind of system failure or human error. This is easily done with an online backup service.

Most users do not want to be bothered with managing the tasks necessary for maintaining their own backups, so a new type of backup system was needed. It's known as online backup. Online backup software is designed to routinely copy your important files to a private, secure location on the Internet by transmitting your data over your existing Internet connection. All of your data is encrypted before it is sent to the storage network to protect your privacy. If you have a working Internet connection on your computer, you can use SureWest Broadband's Online Backup Service to keep your important files safe from disaster on a daily basis.

10.3.5 What is a FastBIT Backup?
The FastBIT patching process is the core technology behind NovaStor's speedy backup program. The patching process involves the comparison of two different versions of the same file and extracting the differences between the files. When the differences are extracted from the two files, they are saved into a new file and compressed into what is known as a patch. The patch file is often 85% or more smaller than the file from which the patch was originally extracted.

10.4 DATA WAREHOUSE RECOVERY MODELS
The model chosen can have a dramatic impact on performance, especially during data loads. There are three recovery models: full, bulk-logged, and simple. Each recovery model addresses a different need. Trade-offs are made depending on the model you choose. The trade-offs that occur pertain to performance, space utilization (disk or tape), and protection against data loss. When you choose a recovery model, you are deciding among the following business requirements:
• Performance of large-scale operations (for example, index creation or bulk loads)
• Data loss exposure (for example, the loss of committed transactions)
• Transaction log space consumption
• Simplicity of backup and recovery procedures

• Simple recovery provides the highest performance and lowest log space consumption, but it does so with significant exposure to data loss in the event of a system failure. When using the simple recovery model, data is recoverable only to the last (most recent) full database or differential backup. Transaction log backups are not usable for recovering transactions because, in this model, the transactions are truncated from the log upon checkpoint. This creates the potential for data loss. After the log space is no longer needed for recovery from server failure (active transactions), it is truncated and reused.
• Full recovery provides the most flexibility for recovering databases to an earlier point in time.
• Bulk-logged recovery provides higher performance and lower log space consumption for certain large-scale operations (for example, create index or bulk copy). It does this at the expense of some flexibility of point-in-time recovery.
However, the amount of exposure to data loss varies with the model chosen, and it is imperative that the risks be thoroughly understood before choosing a recovery model. Knowledgeable administrators can use this recovery model feature to significantly speed up data loads and bulk operations. The recovery model of a new database is inherited from the model database when the new database is created. The model for a database can be changed after the database has been created.
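In SQL Server the model is switched with a simple ALTER DATABASE statement. The sketch below, with a hypothetical database name and backup path, shows the common pattern of dropping to bulk-logged recovery for a large load and returning to full recovery afterwards:

ALTER DATABASE sales_dw SET RECOVERY BULK_LOGGED;
-- perform the bulk load or index build here
ALTER DATABASE sales_dw SET RECOVERY FULL;
BACKUP LOG sales_dw TO DISK = 'E:\backup\sales_dw_log.bak';

Backing up the log immediately after switching back restores the ability to recover to a point in time from that moment on.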

Depending on what operations you are performing, one model may be more appropriate than another. Before choosing a recovery model, consider the impact it will have. The following table provides helpful information.

Recovery model: Simple
  Benefits: Permits high-performance bulk copy operations. Reclaims log space to keep space requirements small.
  Work loss exposure: Changes since the most recent database or differential backup must be redone.
  Recover to point in time? Can recover to the end of any backup. Then changes must be redone.

Recovery model: Full
  Benefits: No work is lost due to a lost or damaged data file. Can recover to an arbitrary point in time (for example, prior to application or user error).
  Work loss exposure: Normally none. If the log is damaged, changes since the most recent log backup must be redone.
  Recover to point in time? Can recover to any point in time.

Recovery model: Bulk-Logged
  Benefits: Permits high-performance bulk copy operations. Minimal log space is used by bulk operations.
  Work loss exposure: If the log is damaged, or bulk operations occurred since the most recent log backup, changes since that last backup must be redone. Otherwise, no work is lost.
  Recover to point in time? Can recover to the end of any backup. Then changes must be redone.

EXERCISE
1. How will you back up the data warehouse? Explain.
2. What are the types of backup?
3. Explain the data warehouse recovery models.

CHAPTER 11
PERFORMANCE TUNING AND FUTURE OF DATA WAREHOUSE

11.1 INTRODUCTION
A well planned methodology is the key to success in performance tuning. Different tuning strategies vary in their effectiveness. Furthermore, systems with different purposes, such as online transaction processing systems and decision support systems, likely require different tuning methods.

11.2 PRIORITIZED TUNING STEPS
The following steps provide a recommended method for tuning an Oracle database. These steps are prioritized in order of diminishing returns: steps with the greatest effect on performance appear first. For optimal results, therefore, resolve tuning issues in the order listed: from the design and development phases through instance tuning.
Step 1: Tune the Business Rules
Step 2: Tune the Data Design
Step 3: Tune the Application Design
Step 4: Tune the Logical Structure of the Database
Step 5: Tune Database Operations
Step 6: Tune the Access Paths
Step 7: Tune Memory Allocation
Step 8: Tune I/O and Physical Structure
Step 9: Tune Resource Contention
Step 10: Tune the Underlying Platform(s)
After completing these steps, reassess your database performance and decide whether further tuning is necessary. Tuning is an iterative process. Performance gains made in later steps may pave the way for further improvements in earlier steps, so additional passes through the tuning process may be useful. Fig. 11.1 illustrates the tuning method.

Fig. 11.1 The tuning method (the ten steps in order, from tuning the business rules through tuning the underlying platforms, with a loop back to earlier steps)

Decisions you make in one step may influence subsequent steps. For example, in step 5 you may rewrite some of your SQL statements; these SQL statements may have significant bearing on parsing and caching issues addressed in step 7. Also, disk I/O, which is tuned in step 8, depends on the size of the buffer cache, which is tuned in step 7. Although the figure shows a loop back to step 1, you may need to return from any step to any previous step.

11.3 CHALLENGES OF THE DATA WAREHOUSE
1. Technical challenges 42%
2. Data management 40%
3. Hardware, software staffing 32%
4. Selling to management 26%
5. Training users 16%
6. Managing expectations 11%
7. Managing change 8%

11.4 BENEFITS OF DATA WAREHOUSING
1. Better data quality 63%
2. Better competitive data 61%
3. Direct end-user access 32%
4. Cost reduction/higher productivity 32%
5. More timely data access 24%
6. Better availability of systems 16%
7. Supports organization change 8%

11.5 FUTURE OF THE DATA WAREHOUSE
• Petabyte systems (1 PB = 1024 TB)
• Size of the database grows from a Very Large Database (VLDB) to an Extremely Large Database (ELDB)
• Integration and manipulation of non-textual (multimedia) and textual data
• Building and running ever-larger data warehouse systems
• Web enabled applications grow
• Handling vast quantities of multiformat data
• Distributed databases will be used
• Cross database integrity
• Use of middleware and multiple tiers

11.6 NEW ARCHITECTURE OF DATA WAREHOUSE

Fig. 11.2 Data warehouse architecture diagram (Business Modeling: business processes and data models; Data Modeling: source data model and data warehouse models; Source Design: RDBMS, source systems, files, ERP [SAP]; ETL Procedure Designing: extraction, transformation and load process; Data Warehouse; Data Mart 1, Data Mart 2, …, Data Mart n; Business Intelligence (B.I.): reports for organization, business and internal use)

EXERCISE
1. Explain tuning the data warehouse.
2. Draw the architecture diagram of a data warehouse.
3. What is the future scope of the data warehouse?

Aggregate: Summarized data. For example, unit sales of a particular product could be aggregated by day, month, quarter and yearly sales. The data can then be referred to as aggregate data. Aggregation is synonymous with summarization, and aggregate data is synonymous with summary data.
Aggregation: The process of consolidating data values into a single value. For example, sales data could be collected on a daily basis and then be aggregated to the week level, the week data could be aggregated to the month level, and so on.
Algorithm: Any well-defined computational procedure that takes some value or set of values as input and produces some value or set of values as output.
Association Rules: Rules that state a statistical correlation between the occurrence of certain attributes in a database table. The general form of an association rule is X1, X2, …, Xn => Y. This means that the attributes X1, X2, …, Xn predict Y.
Attribute: A descriptive characteristic of one or more levels. In an RDBMS, an attribute is a column in a dimension that characterizes elements of a single level. Attributes represent logical groupings that enable end users to select data based on like characteristics.
Bit(s): Acronym for binary digit. A unit of computer storage that can store 2 values, 0 or 1.
Business Intelligence Tools: Software that enables business users to see and use large amounts of complex data.
Byte: A standard unit of storage. A byte consists of 8 bits. Basically, a byte is sufficient to store a single character.
Classification: A subdivision of a set of examples into a number of classes.
Cleansing: The process of resolving inconsistencies and fixing the anomalies in source data, typically as part of the ETL process. See also ETL.
Clickstream Data: Data regarding web browsing.
Cluster: A tightly coupled group of SMP machines.
Coding: Operations on a database to transform or simplify data in order to prepare it for a machine-learning algorithm.
Cold Backup: A database backup taken while the database is shut down.


Common Warehouse Meta Data (CWM): A repository standard used by Oracle data warehousing, decision support, and OLAP tools including Oracle Warehouse Builder. The CWM repository schema is a standalone product that other products can share; each product owns only the objects within the CWM repository that it creates.
Confidence: Given the association rule X1, X2, …, Xn => Y, the confidence is the percentage of records for which Y holds, within the group of records for which X1, X2, …, Xn holds.
Data: Any symbolic representation of facts or ideas from which information can potentially be extracted.
Data Selection: The stage of selecting the right data for a KDD process.
Data Source: A database, application, repository, or file that contributes data to a warehouse.
Data Mart: A data warehouse that is designed for a particular line of business, such as sales, marketing, or finance. In a dependent data mart, the data can be derived from an enterprise-wide data warehouse. In an independent data mart, data can be collected directly from sources. See also data warehouse.
Data Mining: The actual discovery phase of a knowledge discovery process.
Data Warehouse: A relational database that is designed for query and analysis rather than transaction processing. A data warehouse usually contains historical data that is derived from transaction data, but it can include data from other sources. It separates analysis workload from transaction workload and enables a business to consolidate data from several sources. In addition to a relational database, a data warehouse environment often consists of an ETL solution, an OLAP engine, client analysis tools, and other applications that manage the process of gathering data and delivering it to business users. See also ETL, OLAP.
Denormalize: The process of allowing redundancy in a table so that it can remain flat. Contrast with normalize.
Derived Fact (or Measure): A fact (or measure) that is generated from existing data using a mathematical operation or a data transformation. Examples include averages, totals, percentages, and differences.
Decision Trees: A decision tree consists of nodes and branches, starting from a single root node. Each node represents a test or decision. Depending on the outcome of the decision, one chooses a certain branch, and when a terminal node (or leaf) is reached a decision on a class assignment is made.
Deep Knowledge: Knowledge that is hidden within a database and can only be recovered if one is given certain clues.
Dimension: A structure, often composed of one or more hierarchies, that categorizes data. Several distinct dimensions, combined with measures, enable end users to answer business questions. Commonly used dimensions are customer, product, and time.
Dimension Data: Data stored externally to the fact data, which contains expanded information on specific dimensions of the fact data.
Dimension Table: Part of a star schema. A table which holds dimension data.


Drill: To navigate from one item to a set of related items. Drilling typically involves navigating up and down through the levels in a hierarchy. When selecting data, you can expand or collapse a hierarchy by drilling down or up in it, respectively. See also drill down, drill up.
Drill Down: To expand the view to include child values that are associated with parent values in the hierarchy. (See also drill, drill up.)
Drill Up: To collapse the list of descendant values that are associated with a parent value in the hierarchy.
Element: An object or process. For example, a dimension is an object, a mapping is a process, and both are elements.
Enrichment: A stage of the KDD process in which new data is added to the existing selection.
ETL: Extraction, transformation, and loading. ETL refers to the methods involved in accessing and manipulating source data and loading it into a data warehouse. The order in which these processes are performed varies. Note that ETT (extraction, transformation, transportation) and ETM (extraction, transformation, move) are sometimes used instead of ETL. (See also data warehouse, extraction, transportation.)
Extraction: The process of taking data out of a source as part of an initial phase of ETL.
Fact Data: The fact data is the basic core data that will be stored in the data warehouse. It is called fact data because it will correspond to business-related facts such as call record data for a telco, or account transaction data for a bank.
Fact Table: Part of a star schema. The central table that contains the fact data.
File System: An area of space set aside to contain files. The file system will contain a root directory in which other directories or files can be created.
File-to-table Mapping: Maps data from flat files to tables in the warehouse.
Genetic Algorithm: A class of machine-learning algorithms that is based on the theory of evolution.
Granularity: The level of detail of the facts stored in a data warehouse.
Hidden Knowledge: Knowledge that is hidden in a database and that cannot be recovered by a simple SQL query. More advanced machine-learning techniques involving many different SQL queries are necessary to find hidden knowledge.
Hierarchy: A logical structure that uses ordered levels as a means of organizing data. A hierarchy can be used to define data aggregation; for example, in a Time dimension, a hierarchy might be used to aggregate data from the month level to the quarter level to the year level. A hierarchy can also be used to define a navigational drill path, regardless of whether the levels in the hierarchy represent aggregated totals.
Inductive Learning: Learning by generalizing from examples.
Information: Meaningful data.
KDD: Knowledge Discovery in Databases. The non-trivial extraction of implicit, previously unknown and potentially useful information from data.
Knowledge: A collection of interesting and useful patterns in a database.
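The Drill Down, Drill Up and Hierarchy entries above can be illustrated with a hedged SQL sketch over a hypothetical sales fact table and time dimension; the table and column names are assumptions made for the example.

-- Hypothetical tables: sales_fact(time_key, amount), time_dim(time_key, month, quarter, year)

-- Drilled up: totals at the quarter level of the Time hierarchy.
SELECT t.year, t.quarter, SUM(f.amount) AS total_amount
FROM   sales_fact f JOIN time_dim t ON f.time_key = t.time_key
GROUP  BY t.year, t.quarter;

-- Drilled down: each quarter expanded into its child months.
SELECT t.year, t.quarter, t.month, SUM(f.amount) AS total_amount
FROM   sales_fact f JOIN time_dim t ON f.time_key = t.time_key
GROUP  BY t.year, t.quarter, t.month;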


Learning: An individual learns how to carry out a certain task by making a transition from a situation in which the task cannot be carried out to a situation in which the same task can be carried out under the same circumstances.
Machine Learning: A sub-discipline of computer science that deals with the design and implementation of learning algorithms.
Materialized View: A pre-computed table comprising aggregated and/or joined data from fact and possibly dimension tables. Also known as a summary or aggregate table.
Meta Data: Data about data. Meta data is data that describes the source, location and meaning of another piece of data. For example, the schema design of a data warehouse is typically stored in a repository as meta data, which is used to generate scripts used to build and populate the data warehouse. A repository contains meta data. Examples include: for data, the definition of a source-to-target transformation that is used to generate and populate the data warehouse; for information, definitions of tables, columns and associations that are stored inside a relational modeling tool; for business rules, discount by 10 percent after selling 1,000 items.
Multi-dimensional Knowledge: A table with n independent attributes can be seen as an n-dimensional space. Some regularities represented in such a table cannot easily be detected when the table has a standard two-dimensional representation; we have to explore the relationships in the full n-dimensional space.
Neural Networks: A class of learning algorithms consisting of multiple nodes that communicate through their connecting synapses. Neural networks imitate the structure of biological nervous systems.
Normalize: In a relational database, the process of removing redundancy in data by separating the data into multiple tables. Contrast with denormalize.
OLAP: Online Analytical Processing.
OLTP: Online Transaction Processing. An OLTP system is designed to handle a large volume of small predefined transactions.
Online Backup: A database backup taken while the database is open and potentially in use. The RDBMS will need to have special facilities to ensure that data in the backup is consistent.
Operational Data Store (ODS): The cleaned, transformed data from a particular source database.
Overnight Processing: Any processing that needs to be done to accomplish the daily loading, cleaning, aggregation, maintenance and backup of data, in order to make that new data available to the users for the following day.
Patterns: Structures in a database that are statistically relevant.
Prediction: The result of the application of a theory or a rule in a specific case.
Query Tools: Tools designed to query a database.
Shallow Knowledge: The information stored in a database that can be retrieved with a single query.
Snowflake Schema: A variant of the star schema where each dimension can have its own dimensions.
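A hedged Oracle-style sketch of the Materialized View entry above; the options chosen (immediate build, on-demand refresh, query rewrite) and the underlying daily_sales table (the same hypothetical table used in the earlier aggregation sketch) are illustrative assumptions, not prescriptions from this book.

-- Pre-compute monthly totals so the optimizer can answer detail queries
-- from this summary table instead of scanning the full fact data.
CREATE MATERIALIZED VIEW sales_month_mv
  BUILD IMMEDIATE
  REFRESH ON DEMAND
  ENABLE QUERY REWRITE
AS
SELECT product_id,
       TRUNC(sale_date, 'MM') AS sale_month,
       SUM(amount)            AS monthly_amount
FROM   daily_sales
GROUP  BY product_id, TRUNC(sale_date, 'MM');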

Star Schema: A logical structure that has a fact table in the centre with dimension tables radiating off of this central table. There will be a foreign key in the fact table for each dimension table. The fact table will generally be extremely large in comparison to its dimension tables.
Star Flake Schema: This is a hybrid structure that contains a mix of star and snowflake schemas. Some dimensions may be represented in both forms to cater for different query requirements.
Statistical Techniques: Techniques used in statistics.
Supervised Algorithms: Algorithms that need the control of a human operator during their execution.
Support: Given the association rule X1, X2, ..., Xn => Y, the support is the percentage of records for which X1, ..., Xn and Y both hold.
Transportation: The process of moving copied or transformed data from a source to a data warehouse.
Unsupervised Algorithms: The opposite of supervised algorithms.
Validation: The process of verifying metadata definitions and configuration parameters.
Verification: The validation of a theory on the basis of a finite number of examples.
Versioning: The ability to create new versions of a data warehouse project for new requirements and changes.
Visualization Techniques: A class of graphic techniques used to visualize the contents of a database.
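To round off the schema entries (Star Schema, Fact Table, Dimension Table), here is a hedged DDL sketch of a tiny star schema; every table and column name is a hypothetical example rather than a design taken from this book.

-- Dimension tables.
CREATE TABLE product_dim (
  product_id   NUMBER       PRIMARY KEY,
  product_name VARCHAR2(50)
);

CREATE TABLE time_dim (
  time_key NUMBER PRIMARY KEY,
  day_date DATE,
  month    NUMBER,
  quarter  NUMBER,
  year     NUMBER
);

-- Central fact table: one foreign key per dimension plus additive measures.
CREATE TABLE sales_fact (
  product_id NUMBER REFERENCES product_dim (product_id),
  time_key   NUMBER REFERENCES time_dim (time_key),
  quantity   NUMBER,
  amount     NUMBER
);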

Appendix B
MULTIPLE-CHOICE QUESTIONS

1. A data warehouse is a ………. collection of data in support of management's decision-making process.
(a) Integrated and subject oriented (b) Time variant (c) Non volatile (d) All the above
2. Who is the originator of the data warehousing concept?
(a) Johan McIntyre (b) Bill Inmon (c) Bill Gates (d) Chris Erickson
3. Data which is not updated or changed in any way once they enter the data warehouse are called:
(a) Non-volatile (b) Integrated (c) Cluster (d) Coding
4. Data cleansing
(a) The process of resolving inconsistencies and fixing the anomalies in source data (b) Removes duplication (c) Reconciles differences between various styles of data collection (d) All the above
5. SMP means
(a) Systematic multi-processing (b) Stage multi-processing (c) Symmetric multi-processing (d) None of the above

6. OLTP means
(a) Off line transaction processing (b) Off line transaction product (c) On line transaction product (d) On line transaction processing
7. Data we use to run our business is
(a) Historical data (b) Operational data (c) Statistical data (d) Transactional data
8. Data marts are ………., which are small in size.
(a) Business houses (b) Workgroups or departmental warehouses (c) Data structures (d) Company warehouses
9. OLAP means
(a) Off line adaptive processing (b) Off line adaptive product (c) On line adaptive product (d) On line analytical processing
10. Composite partitioning combines ………. and ………. partitioning.
(a) Range & hash (b) Hash & index (c) Index and range (d) None of the above

Answers: 1 (d), 2 (b), 3 (a), 4 (a), 5 (c), 6 (d), 7 (b), 8 (b), 9 (d), 10 (a)

Appendix C
FREQUENTLY ASKED QUESTIONS AND ANSWERS

1. What is a data warehouse?
A data warehouse is the "corporate memory". It is a subject oriented, point-in-time, inquiry only collection of operational data. Warehouses are time referenced, subject-oriented, non-volatile (read only) and integrated.

2. What is the difference between a data warehouse and a data mart?
There are inherent similarities between the basic constructs used to design a data warehouse and a data mart. In general a data warehouse is used on an enterprise level, while data marts are used on a business division/department level. A data mart only contains the required subject specific data for local analysis.

3. What is the difference between a W/H and an OLTP application?
Typical relational databases are designed for on-line transactional processing (OLTP) and do not meet the requirements for effective on-line analytical processing (OLAP). As a result, data warehouses are designed differently than traditional relational databases. OLTP databases are designed to maintain atomicity, consistency and integrity (the "ACID" tests). Since a data warehouse is not updated, these constraints are relaxed.

4. What is ETL/How does Oracle support the ETL process?
ETL is the data warehouse acquisition process of extracting, transforming (or transporting) and loading (ETL) data from source systems into the data warehouse. Oracle supports the ETL process with their "Oracle Warehouse Builder" product. Many new features in the Oracle9i database will also make ETL processing easier: for example, the new MERGE command (also called UPSERT, which inserts and updates information in one step), and External Tables, which allow users to run SELECT statements on external data files (with pipelining support).
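As a small illustration of the MERGE ("UPSERT") statement mentioned in the previous answer, here is a hedged sketch; the customer_staging and customer_dim tables and their columns are hypothetical names invented for the example, not taken from this book.

-- Load a hypothetical staging table into a dimension table in one statement:
-- existing customers are updated, new customers are inserted.
MERGE INTO customer_dim d
USING customer_staging s
ON (d.customer_id = s.customer_id)
WHEN MATCHED THEN
  UPDATE SET d.customer_name = s.customer_name,
             d.city          = s.city
WHEN NOT MATCHED THEN
  INSERT (customer_id, customer_name, city)
  VALUES (s.customer_id, s.customer_name, s.city);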

5. What is the difference between OLAP, ROLAP, MOLAP and HOLAP?
ROLAP, MOLAP and HOLAP are specialized OLAP (Online Analytical Processing) applications. ROLAP stands for Relational OLAP: users see their data organized in cubes with dimensions, but the data is really stored in a relational database (RDBMS) like Oracle. The RDBMS will store data at a fine grain level, and response times are usually slow. MOLAP stands for Multidimensional OLAP: users see their data organized in cubes with dimensions, but the data is stored in a multi-dimensional database (MDBMS) like Oracle Express Server. In a MOLAP system a lot of queries have a finite answer and performance is usually critical and fast. HOLAP stands for Hybrid OLAP: it is a combination of both worlds. Seagate Software's Holos is an example HOLAP environment. In a HOLAP system one will find queries on aggregated data as well as on detailed data.

6. When should one use an MD-database (multi-dimensional database) and not a relational one?
Data in a multi-dimensional database is stored as business people view it, allowing them to slice and dice the data to answer business questions. When designed correctly, an OLAP database will provide much faster response times for analytical queries. Normal relational databases store data in two-dimensional tables and analytical queries against them are normally very slow.

7. What is the difference between an ODS and a W/H?
An ODS (operational data store) is an integrated database of operational data. Its sources include legacy systems and it contains current or near term data. An ODS may contain 30 to 90 days of information. A warehouse typically contains years of data (time referenced). Data warehouses group data by subject rather than by activity (subject-oriented). Other properties are: non-volatile (read only) and integrated.

8. What is the difference between Oracle Express and Oracle Discoverer?
Express is an MD database and development environment. Discoverer is an ad-hoc end-user query tool.

9. What Oracle tools can be used to design and build a W/H?
Data Warehouse Builder (or Oracle Data Mart Builder), Oracle Designer, Oracle Express, Express Objects.

10. What is a star schema? Why does one design this way?
A single "fact table" containing a compound primary key, with one segment for each "dimension", and additional columns of additive, numeric facts. It allows for the highest level of flexibility of metadata, low maintenance as the data warehouse matures, and the best possible performance.

11. When should you use a STAR and when a SNOW-FLAKE schema?
The star schema is the simplest data warehouse schema. The snowflake schema is similar to the star schema; it normalizes dimension tables to save data storage space. It can be used to represent hierarchies of information.
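To make the star-versus-snowflake contrast above concrete, here is a hedged SQL sketch that snowflakes the hypothetical product dimension from the earlier star-schema sketch by moving its category attribute into a table of its own; all names are illustrative assumptions.

-- Star form (denormalized): product_dim(product_id, product_name, category_name)
-- Snowflake form (normalized): the category becomes its own dimension table.
CREATE TABLE category_dim (
  category_id   NUMBER       PRIMARY KEY,
  category_name VARCHAR2(40)
);

CREATE TABLE product_dim (
  product_id   NUMBER       PRIMARY KEY,
  product_name VARCHAR2(50),
  category_id  NUMBER REFERENCES category_dim (category_id)  -- extra join at query time
);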

12. How can Oracle materialized views be used to speed up data warehouse queries?
With "query rewrite" (QUERY_REWRITE_ENABLED=TRUE in INIT.ORA) Oracle can direct queries to use pre-aggregated tables instead of scanning large tables to answer complex queries. Oracle materialized views can be used to pre-aggregate data. The query optimizer can direct queries to summary/roll-up tables instead of the detail data tables (query rewrite). This will dramatically speed up warehouse queries and saves valuable machine resources. Materialized views in a W/H environment are typically referred to as summaries, because they store summarized data.

13. What Oracle features can be used to optimize the warehouse system?
From Oracle8i one can transport tablespaces between Oracle databases. Using this feature one can easily "detach" a tablespace for archiving purposes. One can also use this feature to quickly move data from an OLTP database to a warehouse database. Data partitioning allows one to split big tables into smaller, more manageable sub-tables (partitions); data is automatically directed to the correct partition based on data ranges or hash values. Oracle parallel query can be used to speed up data retrieval by using multiple processes (and CPUs) to process a single task.

14. What tools can be used to design and build a W/H?
ETL processing using Informatica; ETL processing using Data Stage 7.5; ETL processing using Oracle DataPump, exp/imp and SQL*LOADER; OLAP processing using Cognos; designing the data W/H using ERWIN; Business Objects.

15. What is granularity?
The first step in designing a fact table is to determine the granularity of the fact table. By granularity, we mean the lowest level of information that will be stored in the fact table. This constitutes two steps: determine which dimensions will be included, and determine where along the hierarchy of each dimension the information will be kept. The determining factors usually go back to the requirements.

16. What is drill down and drill up?
Drill down: to expand the view to include child values that are associated with parent values in the hierarchy. Drill up: to collapse the list of descendant values that are associated with a parent value in the hierarchy.
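The partitioning mentioned in the answer on optimizing the warehouse can be illustrated with a hedged Oracle-style sketch; the table, columns and date boundaries are assumptions made for the example, and composite (range-hash) partitioning is only one of the possible combinations.

-- Rows are first routed to a range partition by sale date,
-- then spread over hash subpartitions by product (hypothetical table).
CREATE TABLE sales_fact_part (
  product_id NUMBER,
  sale_date  DATE,
  amount     NUMBER
)
PARTITION BY RANGE (sale_date)
SUBPARTITION BY HASH (product_id) SUBPARTITIONS 4
(
  PARTITION p_2006 VALUES LESS THAN (TO_DATE('2007-01-01', 'YYYY-MM-DD')),
  PARTITION p_2007 VALUES LESS THAN (TO_DATE('2008-01-01', 'YYYY-MM-DD')),
  PARTITION p_max  VALUES LESS THAN (MAXVALUE)
);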

Appendix D

MODEL QUESTION PAPERS
MCA DEGREE EXAMINATIONS
Data mining
Time: 3 hrs                                                        Marks: 5 × 15 = 75

1. a. What do you mean by expanding universe of data?
   b. Explain the term "data mining".
(or)
2. a. Explain, what are the practical applications of data mining.
   b. Distinguish between data mining and query tools.
3. With a suitable diagram explain the architecture of data warehouse.
(or)
4. What are the challenges in creating and maintaining a data warehouse? Explain in detail with suitable example.
5. What is OLAP? Explain the various tools available for OLAP.
(or)
6. Write short notes on (i) K–nearest neighbour, (ii) Decision trees.
7. Explain the various steps involved in knowledge discovery process.
(or)
8. Write short notes on (i) Data selection, (ii) Data cleaning, (iii) Data enrichment.
9. Explain about data compression.
(or)
10. Explain the term denormalization.


MCA DEGREE EXAMINATIONS
Data mining
Time: 3 hrs                                                        Marks: 5 × 15 = 75

1. a. What is data mining and data warehousing?
   b. How can a computer system learn?
   c. Discuss data mining in marketing.
(or)
2. a. Explain the practical applications of data mining.
   b. Write an account on machine learning and methodology of science.
   c. How does a query language suit data mining operations?
3. a. Write in detail about manufacturing machine.
   b. Write short notes on cost justification.
(or)
4. a. Give an account of the need for decision support-system for business and scientific applications.
5. Write in detail about the use of artificial neural network and genetic algorithm for data mining.
(or)
6. What are OLAP tools? Explain various tools available for OLAP.
7. What are the different kinds of knowledge? How to develop a knowledge discovery database project?
(or)
8. Explain the data selection, cleaning, enrichment, coding and analysis of knowledge discovery process.
9. What are the primitives of data mining? Explain the data mining primitives in SQL.
(or)
10. Explain the following terminologies: Learning as compression of data set, Information content of a message, Noise and redundancy, Fuzzy database, Relational databases.


MCA DEGREE EXAMINATIONS
Data mining
Time: 3 hrs                                                        Marks: 5 × 15 = 75

1. a. What is meant by data mining?
   b. Compare data mining with query tools.
(or)
2. a. Explain the concept of learning.
   b. Explain the complexity of search space with an example of a kangaroo in mist.
3. What is data warehouse? Why do we need it? Explain briefly.
(or)
4. Explain the difficulties of a client/server and data warehousing.
5. Explain the knowledge discovery process in detail.
(or)
6. Write a note on neural networks.
7. a. Explain the goal of KDD.
   b. What are the ten golden rules for reliable data mining?
(or)
8. Write short notes on (i) Data selection, (ii) Data cleaning, (iii) Data enrichment.
9. Explain briefly the reconstruction of foreign key relationships with suitable example.
(or)
10. Explain the following: (i) Denormalization, (ii) Data mining primitives.

B. Tech DEGREE EXAMINATIONS
Data mining and warehousing
Time: 3 hrs                                                        Marks: 5 × 15 = 75

1. What is meant by data mining? Compare data mining and query tools. Explain in detail about data mining in marketing.
(or)
2. Explain in detail the knowledge discovery process.
3. Explain in detail about self learning computer systems and concept learning.
(or)

10. Explain in detail hardware physical layout security. 13. Write in detail about data mining and query tools. Explain in detail the physical layout security. Discuss in detail data mining and data warehouse. 9. 5. Discuss in detail about database design schema and partitioning strategy. Discuss in detail visualization techniques. Explain backup and recovery. Explain about visualization techniques and OLAP tools. (or) 10. 8. Explain how you can true and test the data warehouse. (or) 6. cleaning. Discuss about system process architecture design and database schema. 12. Write in detail about knowledge discover in database environment. Define data mining? Write in detail about information and production factor. Tech DEGREE EXAMINATIONS Data mining and warehousing Time: 3 hrs 5 × 15 = 75 1. Explain in detail data warehouse features. (or) 8. Write in detail about meta data and system and data warehouse process managers. Discuss about system and data warehouse process managers. Write in detail about data selection. 6. 7. 4. 11. 14. enrichment and coding. Discuss how one can operate the data warehouse. . 2. 7. B. Explain about capacity planning and tuning the data warehouse. backup and recovery. 15. 9. 5. 3.Model Question Papers | 123 4. Explain in detail hardware architecture. Explain about service level agreement and operating the data warehouse. Explain in detail self learning computer systems. How do you test the data warehouse? Explain.

5 × 15 = 75 . Differentiate between data mining and data warehousing. Explain. 3. 4. With relevant examples discuss the need for backup and recovery with respect to a data warehouse. What are the different security issues in data warehousing? 9. Discuss with simple case studies the role of neural networks in data mining. in detail. b. With a neat sketch. Explain with neat sketch. the procedure and the need for testing the data warehouse. b. with related case studies. Justify the above statement. 8. What is data warehouse tuning? Discuss the need for data warehouse tuning. What are decision trees? Discuss their applications. With suitable examples and relevant description justify the above statement. With relevant examples explain how association rules can be mined from large data sets. Tech DEGREE EXAMINATIONS Data mining and warehousing Time: 3 hrs 1. What is meta data? Discuss with an example. data mining as step in the process of knowledge discovery. List the basic hardware architecture for a data warehouse. What is market-basket analysis? Explain with an example. Data mining plays a role in self learning computer systems. 10. Explain with relevant diagrams the steps involved in designing a database schema.124 | Data Mining and Warehousing B. 5. 2. With a neat sketch explain the architecture of a data warehouse “A data warehouse contains subject oriented data”. a. 7. Give the necessary description also. 6. List the steps involved in operating the data warehouse. explain the architecture of a data mining system. a.


10. 99 Crossover–32 E ELDB–108 Encoding–27 Enrichment–17 Entropy–26 Exabyte–64 F Fact table–83. 36. 20. 15. 74 Foreign key–79 Frequent itemsets–47 D Data–66 Data cleaning–16 Data mining–2. 65 DB miner–61 Decision tree–24 De-duplication–17 Deep knowledge–12 Dendogram–39 Design–72. 79 Destination–99 Dimension table–75. 61 Cluster–37 Clustering–34 Coding–18 Cold backup–101 Constraints–79. 76 Dimensions–74 Distance–35 Domain consistency–17 DMQC–60 B Backup–101. 3. 6 G Genetic algorithm–11. 32 Gain–27 Giga byte–64 . 78 Association rules–46 Apriori algorithm–47 Data mart–95 Database–11. 54 Apriori–48 Architecture–6. 19 Data selection–16 Data warehouse–5. 102 C CART–62 Confidence level–47 Classification–23 Cleaning–16 Clementine–3.Index A Aggregation–99 Algorithms–11 Applications–2.

92 Partitioning algorithm–51 Performance of–87 Petabyte–64 Prediction–23 Primary key–67 I Indexes–99 Inductive learning–8 Information–65 Interpretation–19 ID3 algorithm–28 Q K KDD–14 Keys–79 K-means algorithm–37 Knowledge–12. 65 Query–79 R Range partitioning–92 Raid–87. 45. 98 Spatial mining–52 Star schema–81. 68 OLAP server–68 OLAP tools–5 Oracle–62. 82 SQC–88 SQL server 2000–89 Statistical analysis–11 Statistical techniques–11 Storage–68 N Neural networks–23 Neurons–24 Nonvolatile–6 Normalization–65 . 36 Medicine–4 Meta data–96 Mutation–33 Mirroring–87 Multi-dimensional knowledge–12 Sampling algorithm–49 Scalability–69 Scatter diagrams–19 Schema–81 Sesi–87 Shallow knowledge–12 Snowflake schema–83 Source–97. 103 Hidden knowledge–12 OLAP–12. 102 O P Parity–87 Partitioning–86. 88 Recovery–104 Relational databases–4 Report–109 Row–65 L Learning–8 Legacy system–10 List partitioning–93 S M Machine learning–10 Marketing–4.128 | Data Mining and Warehousing H Hash partitioning–94 Hardware–87 Hierarchical clustering–39 Hierarchies–77 Hot backup–102.

52 World wide web (www)–58 T Table–101 Tape–104 Temporal mining–52 Tuning–106 Tera byte–64 X XML–52 Xpert rule miner–62 Y Yotta byte–64 U Unsupervised learning–9 Unique–79 Z Zetta byte–64 V Visualization–11. 81 .Index | 129 Sybase–3 System–3 Support level–46 Supervized learning–8 W Web mining–51. 19 Visualization techniques–19 View–99.
