

Analysis of Data Quality Aspects in Data Warehouse Systems


Manjunath T.N¹, Ravindra S Hegadi², Ravikumar G.K³
¹Bharathiar University, Coimbatore, Tamilnadu, INDIA
²Karnatak University, Dharwad, Karnataka, INDIA
³Dr. MGR University, Chennai, Tamilnadu, INDIA

Abstract: Data quality is a critical factor for the success of data warehousing projects. If data is of inadequate quality, the knowledge workers who query the data warehouse and the decision makers who receive the information cannot trust the results. To obtain clean and reliable data, it is imperative to focus on data quality. While many data warehouse projects do take data quality into consideration, it is often treated as an afterthought. Even QA after ETL is not good enough: the quality process needs to be incorporated into the ETL process itself. Data quality has to be maintained for individual records, even small bits of information, to ensure the accuracy of the complete database. Data quality is an increasingly serious issue for organizations large and small, and it is central to all data integration initiatives. Before data can be used effectively in a data warehouse, or in customer relationship management, enterprise resource planning or business analytics applications, it needs to be analyzed and cleansed. To sustain high-quality data, organizations need to apply ongoing data cleansing processes and procedures, and to monitor and track data quality levels over time. Otherwise, poor data quality will lead to increased costs, breakdowns in the supply chain and inferior customer relationship management. Defective data also hampers business decision making and efforts to meet regulatory compliance responsibilities. The key to successfully addressing data quality is to get business professionals centrally involved in the process. We have analyzed a possible set of causes of data quality issues through an exhaustive survey and discussions with data warehouse groups working in distinguished organizations in India and abroad. We expect this paper will help modelers and designers of warehouses to analyze and implement quality warehouse and business intelligence applications.

Keywords: Data warehouse (DWH), Data Quality, ETL, Data Staging, Multidimensional Modeling (MDM).

1. Introduction
The presence of data alone does not ensure that all data management functions and decisions can be made. Data quality is largely concerned with bad data: data that is missing, incorrect or invalid in some respect. More broadly, data quality is attained when the business uses data that is comprehensive, understandable and consistent. Understanding the main dimensions of data quality is the first step toward data quality improvement; to be usable in an effective and efficient manner, data has to satisfy a set of quality criteria, and data satisfying those criteria is said to be of high quality. Ample attempts have been made to classify data quality and to identify its dimensions; typical dimensions include accuracy, reliability, importance, consistency, precision, timeliness, fineness, understandability, conciseness and usefulness. For our research work we adopt a quality criterion of nine key factors, as listed below. English [3] defines the following inherent information quality characteristics and measures:



1. Definition Conformance: the chosen object is of primary importance, and its definition should carry the complete details and meaning of the real-world object.
2. Completeness (of values): the characteristic of having all required values for the data fields.
3. Validity (business rule conformance): a measure of the degree of conformance of data values to their domain and business rules, including domain values, ranges, reasonability tests, primary key uniqueness and referential integrity.
4. Accuracy (to the source): a measure of the degree to which data agrees with data contained in an original source.
5. Precision: domain values should carry the correct precision as per the business specification.
6. Non-duplication (of occurrences): the degree to which there is a one-to-one correlation between records and the real-world objects or events being represented.
7. Derivation Integrity: the correctness with which two or more pieces of data are combined to create new data.
8. Accessibility: the characteristic of being able to access data on demand.
9. Timeliness: the relative availability of data to support a given process within the timetable required to perform the process.
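Several of these characteristics can be tested mechanically. As an illustration only (the table, domain rules and expected outputs below are hypothetical, not from the paper), a short Python sketch for completeness (2), validity (3) and non-duplication (6):

```python
# Illustrative checks for three of the nine characteristics on a toy
# customer table: completeness (2), validity (3), non-duplication (6).
# The table and the domain rules are hypothetical examples.

customers = [
    {"id": 1, "name": "Asha", "age": 34,   "state": "KA"},
    {"id": 2, "name": "Ravi", "age": None, "state": "TN"},   # missing value
    {"id": 3, "name": "Uma",  "age": 203,  "state": "XX"},   # invalid values
    {"id": 1, "name": "Asha", "age": 34,   "state": "KA"},   # duplicate key
]

VALID_STATES = {"KA", "TN", "MH"}   # hypothetical domain rule for 'state'

def completeness(rows, field):
    """Fraction of rows whose `field` is present (characteristic 2)."""
    return sum(r[field] is not None for r in rows) / len(rows)

def validity_violations(rows):
    """Rows breaking domain/business rules (characteristic 3)."""
    bad = []
    for r in rows:
        if r["age"] is not None and not 0 <= r["age"] <= 120:
            bad.append((r["id"], "age out of range"))
        if r["state"] not in VALID_STATES:
            bad.append((r["id"], "state not in domain"))
    return bad

def duplicate_keys(rows, key="id"):
    """Key values occurring more than once (characteristic 6)."""
    seen, dups = set(), set()
    for r in rows:
        (dups if r[key] in seen else seen).add(r[key])
    return dups

print(completeness(customers, "age"))   # 0.75
print(validity_violations(customers))   # [(3, 'age out of range'), (3, 'state not in domain')]
print(duplicate_keys(customers))        # {1}
```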

1.1 Data Warehouse System
The term Data Warehouse was coined by Bill Inmon in 1990, who defined it in the following way: "A data warehouse is a subject-oriented, integrated, time-variant and nonvolatile collection of data in support of management's decision making process." Ralph Kimball provided a much simpler definition: a data warehouse is "a copy of transaction data specifically structured for query and analysis".

Fig 1: Data warehouse Architecture

1.3 Layers of the Data Warehouse Liable to Data Quality Issues
This paper analyzes data quality aspects in all layers of data warehouse systems. The stages are: data sources, data staging, and multidimensional model design.

Data quality can be compromised by the way data is received, entered, integrated, maintained, processed (extracted, transformed and cleansed) and loaded. Data corruption can arise in any of the many courses of action that bring data into the DWH environment, the majority of which affect data quality; every layer of the data warehouse system is liable to data quality issues. In spite of all efforts, a certain percentage of dirty data still exists, and this bad data must be reported, and the reasons for it confirmed, during data cleaning. Data quality factors can arise in different ways [9]. The most common include poor data handling procedures and processes, and failure to adhere to data entry standards and procedures.



Other common causes are errors in the migration process from one system to another, and unstructured data that does not match the required structure. Our hypothesis is that data quality factors can begin at any layer of the data warehouse, viz. in the data sources, the staging area, and the database model (DM). The following framework identifies the stages of the data warehouse that are exposed to data quality factors.

Fig 2: Stages of data warehouse liable to data quality factors (Source Systems → ETL Process → Staging Area → VLDB (Very Large Database) → Target System (DWH))

Fig 3: Data Quality Steps in Data warehouse System (Profiling, Cleansing, Standardization, Matching, Enrichment, Monitoring)

2. Methodology
Data quality assurance is a complex problem that requires a systematic approach. English [3] proposes a comprehensive Total Quality data Management (TQdM) methodology, consisting of five steps for measuring and improving information quality, plus an umbrella process (Step 6) for bringing about the cultural and environmental changes needed to sustain information quality improvement as a management tool and a habit:
Step 1: Assess Data Definition and Information Architecture Quality
Step 2: Assess Information Quality
Step 3: Measure Non-quality Information Costs
Step 4: Reengineer and Cleanse Data
Step 5: Improve Information Process Quality
Step 6: Establish the Information Quality Environment
Fig-3 describes the data quality steps that must be accomplished to keep the warehouse free of the data quality factors described in the coming sections [2].

2.1 Data Quality Steps
The primary goal of a data quality solution is to assemble data from one or more data sources. However, the process of bringing data together usually surfaces a broad range of data quality issues that need to be addressed. For instance, incomplete or missing customer profile information may be uncovered, such as blank phone numbers or addresses; or certain data may be incorrect, such as a customer record indicating the customer lives in the city of Wisconsin, in the state of Green Bay. Fig-3 describes the six data quality tasks in a data warehouse [2].
1. Profiling
As the first line of defense for your data integration solution, profiling data helps you examine whether your existing data sources meet the quality standards of your solution. Properly profiling your data saves execution time because you identify issues that require immediate attention from the start and avoid the unnecessary processing of unacceptable data sources. Data profiling becomes even more critical when working with raw data sources that do not have referential integrity or quality controls. There are several data profiling tasks, which analyze individual and multiple columns to determine relationships between columns and tables; their purpose is to develop a clearer picture of the content of your data [2]:
Column Statistics: identifies problems in your data, such as invalid dates, and reports average, minimum and maximum statistics for numeric columns.
Value Distribution: identifies all values in each selected column and reports normal and outlier values.
Pattern Distribution: identifies invalid strings or irregular expressions in your data.
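The three profiling tasks can be sketched in a few lines of Python with pandas; the sample frame and the expected phone pattern below are hypothetical, and a production profiler would be far more thorough:

```python
# A minimal profiling sketch covering the three tasks named above:
# column statistics, value distribution and pattern distribution.
import pandas as pd

df = pd.DataFrame({
    "age":   [34, 41, None, 5600],                       # 5600 is an outlier
    "phone": ["080-2345", "080-12xy", None, "080-9876"], # one malformed value
})

# 1. Column statistics: min/max/mean and null counts per column.
print(df["age"].agg(["min", "max", "mean"]))
print("nulls per column:\n", df.isna().sum())

# 2. Value distribution: frequency of each distinct value.
print(df["phone"].value_counts(dropna=False))

# 3. Pattern distribution: rows whose phone does not match the
#    assumed 'NNN-NNNN' shape (here '080-12xy' and the null).
pattern = r"^\d{3}-\d{4}$"
bad = df[~df["phone"].str.match(pattern, na=False)]
print("pattern violations:\n", bad)
```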



2. Cleansing
After a data set successfully meets profiling standards, it still requires data cleansing and deduplication to ensure that all business rules are properly met. Successful data cleansing requires the use of flexible, efficient techniques capable of handling complex quality issues hidden in the depths of large data sets.
3. Standardization
This technique parses and restructures data into a common format to help build more consistent data. For instance, the process can standardize addresses to a desired format, or to USPS specifications, which are needed to enable CASS Certified processing. This phase is designed to identify, correct and standardize patterns of data across various data sets, including tables, columns and rows.
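A minimal Python sketch of the standardization idea, assuming hypothetical mapping tables and input formats (real address standardization to USPS/CASS rules would rely on a dedicated tool or service):

```python
# A sketch of standardization: parse free-form values and rewrite them
# in one agreed format. The mappings below are hypothetical examples.
from datetime import datetime

STATE_MAP = {"karnataka": "KA", "ka": "KA", "tamilnadu": "TN", "tn": "TN"}
DATE_FORMATS = ["%d/%m/%Y", "%Y-%m-%d", "%d-%b-%Y"]   # accepted input shapes

def standardize_state(value: str) -> str:
    """Map known spellings onto a two-letter code; pass others through."""
    return STATE_MAP.get(value.strip().lower(), value.strip().upper())

def standardize_date(value: str) -> str:
    """Try each known input format and emit ISO yyyy-mm-dd."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(value.strip(), fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError(f"unrecognised date: {value!r}")

print(standardize_state(" Karnataka "))   # KA
print(standardize_date("05/01/2011"))     # 2011-01-05
print(standardize_date("5-Jan-2011"))     # 2011-01-05
```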

4. Matching
Data matching consolidates data records into identifiable groups and links/merges related records within or across data sets. This process locates matches in any combination of over 35 different components, from common ones like address, city, state, ZIP, name and phone to less common elements like email address, company, gender and social security number.
5. Enrichment
Data enrichment enhances the value of customer data by attaching additional pieces of data from other sources, including geocoding, demographic data, full-name parsing and genderizing, phone number verification, and email validation. The process provides a better understanding of your customer data because it reveals buyer behavior and loyalty potential [2].
Address Verification: verify U.S. and Canadian addresses to the highest level of accuracy, the physical delivery point, using DPV and LACSLink, which are now mandatory for CASS Certified processing and postal discounts.
Phone Validation: fill in missing area codes, and update and correct the area code/prefix; also append latitude/longitude, time zone, city, state, ZIP and county.
Email Validation: validate, correct and clean up email addresses using three levels of verification: syntax, local database, and MX lookup. Check for general format syntax errors, domain name changes and improper email formats for common domains (e.g. Hotmail, AOL, Yahoo); validate the domain against a database of good and bad addresses; verify that the domain exists through a Mail eXchange (MX) lookup; and parse email addresses into their components.
Name Parsing and Gendering: parse full names into components and determine the gender of the first name.
Residential Business Delivery Indicator: identify the delivery type as residential or business.
Geocoding: add latitude/longitude coordinates to the postal codes of an address.
6. Monitoring
This real-time monitoring phase puts automated processes into place to detect when data exceeds pre-set limits. Data monitoring is designed to help organizations immediately recognize and correct issues before the quality of data declines. This approach also empowers businesses to enforce data governance and compliance measures [2].
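To make the monitoring phase concrete, here is a minimal sketch with hypothetical metrics and thresholds; in practice the alerts would feed a dashboard or a data-governance workflow rather than a print statement:

```python
# A sketch of monitoring: compute a few quality metrics on each load
# and flag when they exceed pre-set limits. Thresholds are hypothetical.
import pandas as pd

THRESHOLDS = {"null_rate": 0.05, "duplicate_rate": 0.01}

def monitor(df: pd.DataFrame, key: str) -> list[str]:
    alerts = []
    null_rate = df.isna().any(axis=1).mean()       # share of rows with any null
    dup_rate = df.duplicated(subset=[key]).mean()  # share of repeated key values
    if null_rate > THRESHOLDS["null_rate"]:
        alerts.append(f"null rate {null_rate:.1%} over limit")
    if dup_rate > THRESHOLDS["duplicate_rate"]:
        alerts.append(f"duplicate rate {dup_rate:.1%} over limit")
    return alerts

batch = pd.DataFrame({"id": [1, 1, 2, 3],
                      "city": ["Mysore", None, "Hubli", "Tumkur"]})
for alert in monitor(batch, key="id"):
    print("ALERT:", alert)
```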



2.3 A Case of Data Quality Issues at Source
An approach to resolving source data quality issues should be adopted in the design phase. The following types of data quality issue can be considered: issues related to source-system data such as customer details, addresses, etc. Table-1 outlines the identification of data quality issues and the approach to managing them during the data analysis phase. Data anomalies are presented, and a decision is taken on whether data cleansing rules are required or whether the data will be migrated unmodified. Migration activities such as cleansing, extraction and loading must be kept under control; this also keeps the migration on track by ensuring the authenticity of the quality and quantity of the data migrated at each stage. Verification programs are introduced at the extract and upload levels to capture the count of records, and a step-by-step reconciliation carried out for all data conversions is maintained in the form of a balance sheet.
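The record-count verification and balance-sheet reconciliation described above can be sketched as follows; the stage names and counts are hypothetical:

```python
# A sketch of the count-reconciliation ('balance sheet') idea: record
# counts at each migration stage and verify nothing was silently lost.
stages = [
    ("extracted from source", 10_000),
    ("after cleansing",        9_940),   # 60 records rejected
    ("rejected with reason",      60),
    ("loaded to warehouse",    9_940),
]

counts = dict(stages)
extracted = counts["extracted from source"]
loaded = counts["loaded to warehouse"]
rejected = counts["rejected with reason"]

print(f"{'stage':<25}{'records':>10}")
for name, count in stages:
    print(f"{name:<25}{count:>10}")

# The balance must hold at every step; otherwise stop and investigate.
assert extracted == loaded + rejected, "reconciliation failed: records unaccounted for"
print("reconciliation OK: every extracted record is loaded or rejected")
```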


3. Analysis of Data Quality Factors


To help quality analysts find the root causes of data quality factors, we need to design tools that address those factors, and it is highly recommended to first understand the common ones. This analysis of data quality factors will help data warehouse and data quality practices in organizations.
3.1 Data Quality Factors at Data Sources
A foremost cause of failure in data warehouse and business intelligence projects is wrong or poor-quality data. Ultimately, data is loaded into the data warehouse from the various sources shown in Fig-4. The source systems contain transactional, often unstructured data, from which information is derived, built and loaded into the data warehouse system. The data sources highlighted in Fig-4 have their own storage formats, some compatible and some not; these differences contribute data quality factors if the data is not properly converted. Likewise, if we do not have secured access to the source systems, poor data quality will result. Multiple source systems have different kinds of quality issues: a source system may contain human typing errors, or wrong data updates that lead to malicious data. Part of the data comes from text files, part from MS Excel files, and some arrives via direct ODBC connection to the source database [16]; combining different files from different source systems can introduce data quality factors at any stage. Table-1 describes the feasible causes of data quality factors originating at the source systems of a data warehouse.

Fig 4: Probable data sources for a data warehouse (DB objects, flat files, OLTP, legacy data, XML docs, other DBs, CRM/ERP, CSV files, ODS)

Origin of Data Quality Factors at Sources:
1. Combination of different files from heterogeneous source systems leads to data quality factors.
2. As time and proximity from the source increase, the chances of getting correct data decrease [4].
3. Lack of knowledge about the relationships between different data sources leads to data quality factors.
4. Inability to manage aged data leads to data quality factors [4].
5. Changeable timeliness of data sources [6][7].
6. Deficient validation programs at source systems lead to data quality factors.
7. Sudden changes in source systems cause DQ problems.
8. Multiple data sources generate semantic heterogeneity, which leads to data quality issues.
9. Usage of unmanaged applications and databases as data sources for the data warehouse.
10. Use of different representation formats in data sources.
11. Measurement errors [11].
12. Non-compliance of data in data sources with standards.
13. Failure to apply consistent data updates at regular intervals.
14. Failure to update all replicas of data causes DQ problems.
15. Redundant data in different source systems [7][11].
16. Selecting wrong columns as primary keys in the source system.
17. Ambiguous data present in source systems [7].
18. Different encoding formats (ASCII, EBCDIC, etc.) [11].
19. Lack of data quality assurance on individual data.
20. Lack of physical design structure while planning the entire database system [4].
21. Wrong domain values for attributes [6].
22. Incompatible data formatting.
23. System fields designed to allow free-form input (field not having adequate length).
24. Missing columns (you need a middle name of a person but a column for it does not exist) [6][7].
25. Missing values in data sources [2][11][12].
26. Misspelled data [11][12].
27. Additional columns [6][11].
28. Using the same column name in different source system tables [6].
29. Special characters which conflict with the data domains [11].
30. Different data types for similar columns (a customer ID stored as a number in one table and a string in another).
31. Different data formats in source systems (the month of the year stored as Jan, Ja, 1 and January in four separate columns) [10].
32. Deficient domain-level validation in source data.
33. No meaningful data stored in source systems as per the business [7].
34. Improper relationships between tables in the DB design.
35. Not handling null characters properly in CSV source files, resulting in wrong data.
36. Incorrect number of field separators in source files.
37. Presence of outliers.
38. Data and metadata mismatch.
39. Important entities, attributes and relationships hidden and floating in text fields [6][7].
40. Incompatible use of special characters in various sources [6][7].
41. Multi-purpose fields present in data sources.
42. Wrong data mapping, which leads to data quality factors.

Table 1: Data Quality Factors at Source.

According to the Gartner Research Group, one of the primary causes of data migration projects overrunning in time is a deficient understanding of the data sources prior to moving data into the data warehouse [17]. One should therefore study the source systems before moving to the target systems. Our analysis accordingly examines how data is populated in source systems, concentrating on data quality factors in the following types of source system: a) legacy systems, b) OLTP/operational systems, c) flat/delimited files, and d) mainframe machines. The analysis is limited to non-multimedia data (no images, video or audio). The data sources considered are based on the survey conducted by Larry P. [15] in the report titled Evaluating ETL and Data Integration Platforms; Fig-5 shows the types of source system used to populate data warehouses, with their percentages of use [10]. All the factors presented in Table 1 relate to the source systems that are the input to the data warehouse. According to English [3], one of the main barriers in existing data warehouse systems is the existence of inconsistent data; our analysis presents a considerably larger number of factors affecting data population.
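Several factors in Table 1 are mechanically repairable before load. As an illustration of factor 31 (the month stored as Jan, Ja, 1 and January), a small normalization sketch with a hypothetical rule set:

```python
# Factor 31 of Table 1: inconsistent month representations can be
# repaired with a small normalization map before loading.
MONTHS = ["january", "february", "march", "april", "may", "june",
          "july", "august", "september", "october", "november", "december"]

def normalize_month(value: str) -> int:
    """Map any of '1', 'Jan', 'Ja', 'January' onto the month number 1-12."""
    v = value.strip().lower()
    if v.isdigit() and 1 <= int(v) <= 12:
        return int(v)
    matches = [i + 1 for i, m in enumerate(MONTHS) if m.startswith(v)]
    if len(matches) == 1:          # unambiguous prefix such as 'ja' or 'dec'
        return matches[0]
    raise ValueError(f"ambiguous or unknown month: {value!r}")

for raw in ["Jan", "Ja", "1", "January"]:
    print(raw, "->", normalize_month(raw))   # all four map to 1
```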



Fig 5: Types of Data Sources Extracted

3.2 Data Quality Factors at the Staging Layer
One concern is where the data needs to be cleansed: in the source systems, in staging, or in the data warehouse [18]. Data cleansing is usually carried out in the staging layer in order to keep dimension and fact tables consistent, so that the accuracy of the data warehouse is improved. In the staging area we perform most of the activities, such as data profiling, data cleansing, data matching and data flushing with reference to the source systems, and this is where the maximum effort is spent. There are several reasons for data quality factors in this layer; we have recognized the following from our analysis, as shown in Table 2.
Origin of Data Quality Factors at the Staging Layer:
1. The architecture of the data warehouse influences data quality.
2. The database used as the staging layer also contributes to data quality.
3. The various parsing/business rules of the source system contribute to data quality.
4. Lack of business rule formation leads to data quality problems.
5. Extracts from data sources without proper time response lead to data quality factors.
6. Failure to capture only the changes in source files [24].
7. Improper restoring of staging leads to data quality factors.
8. Reconciliation problems may occur due to staging area clean-up.
9. Improper referential integrity constraints in the staging layer lead to bad data and broken relationships [11].
10. ETL (Extract, Transform and Load) tools that do not generate a consolidated metadata log contribute to data quality issues.
11. Failure to have a centralized metadata repository leads to poor data quality.
12. Data cleaning without established rules leads to poor data quality.
13. Unacceptable data mapping causes data quality issues.
14. Faulty execution of the change data capture (CDC) plan in the ETL stage leads to huge data quality issues.
15. Incompatible use of code symbols and formats [4].
16. Mishandling null values while transforming data results in data quality issues.
17. Wrong use of audit columns (e.g. create date, generated date, updated date) while performing ETL.
18. Inefficient use of SCD logic in the ETL process.
19. The loading strategy selected (delta/incremental, bulk, refresh) can lead to data quality issues.
20. No standard naming conventions followed while creating mappings and workflows/tasks.
21. Misinterpretation of change requests at the ETL stage leads to data quality issues.
22. Improper data conversion logic, migration and reconciliation [24].
23. Inability to resume the ETL process from breakpoints without losing data [14].
Table 2: Data Quality Factors at Staging
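Factors 16 and 17 above (and factor 35 of Table 1) in a minimal form: a defensive CSV-to-staging load, assuming a hypothetical file layout, that treats empty strings and 'NULL' markers as explicit nulls and stamps audit columns:

```python
# A sketch of a defensive staging load. File content, column names and
# rejection rule are hypothetical examples, not a prescribed design.
import io
from datetime import datetime, timezone

import pandas as pd

csv_text = "customer_id,city\n101,Mysore\n,Hubli\nNULL,Tumkur\n"

df = pd.read_csv(
    io.StringIO(csv_text),
    na_values=["", "NULL", "N/A"],    # explicit null markers (factor 16)
    keep_default_na=False,            # disable pandas' broad default list
    dtype={"customer_id": "string"},  # avoid silent numeric coercion of keys
)

# Audit columns record when and from where each staged row came (factor 17).
df["load_ts"] = datetime.now(timezone.utc).isoformat()
df["source_file"] = "customers_extract.csv"

# Reject rows whose mandatory key is null instead of loading them blindly.
rejects = df[df["customer_id"].isna()]
staged = df[df["customer_id"].notna()]
print(f"staged {len(staged)} rows, rejected {len(rejects)} with null keys")
```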

3.3 Data Quality Factors at the Dimensional Modeling (Target) Stage
The quality of the information depends on three things: (1) the quality of the data itself, (2) the quality of the application programs, and (3) the quality of the data model [19]. Data warehouse design affects data quality, so extra effort should be given to designing the model, including parameters such as change data capture (CDC) and multi-valued attributes. A defective model design forces errors on data quality.



Table-3 shows the origins of data quality factors at the dimensional model.
Origin of Data Quality Factors in the DWH (Target System / Dimensional Model):
1. Misinterpretation of requirements.
2. The choice of dimensional modeling schema (star, snowflake, fact constellation) contributes to data quality.
3. Bad data model design contributes to data quality factors.
4. Delay in identifying SCDs leads to data quality issues.
5. Appending dimensions leads to data quality issues.
6. Improper hierarchies lead to data quality issues.
7. Improper usage of multidimensional objects and their relationships leads to data quality factors.
8. Lack of database design support contributes to data quality factors.
Table 3: Data Quality Problems at Target (DM)
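Factor 4 above and factor 18 of Table 2 both concern slowly changing dimensions (SCDs). A minimal Type-2 update sketch over a hypothetical customer dimension shows the intended behavior: a changed attribute closes the current row and appends a new version rather than overwriting history:

```python
# A minimal Type-2 slowly-changing-dimension update. The dimension
# layout and the sample change are hypothetical illustrations.
from datetime import date

dim_customer = [
    {"customer_id": 101, "city": "Mysore", "valid_from": date(2009, 1, 1),
     "valid_to": None, "is_current": True},
]

def scd2_update(dim, customer_id, new_city, change_date):
    """Close the current version and append the new one (SCD Type 2)."""
    for row in dim:
        if row["customer_id"] == customer_id and row["is_current"]:
            if row["city"] == new_city:
                return                      # no change, nothing to do
            row["valid_to"] = change_date   # close the old version
            row["is_current"] = False
    dim.append({"customer_id": customer_id, "city": new_city,
                "valid_from": change_date, "valid_to": None,
                "is_current": True})

scd2_update(dim_customer, 101, "Bangalore", date(2011, 1, 5))
for row in dim_customer:
    print(row)   # the history of the move is preserved, not overwritten
```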




4. Conclusions
We have provided an analysis of data quality factors at each stage of a data warehouse system, i.e. the data sources, the staging area, and the target system (multidimensional schema), and we have outlined the major steps for data quality in data warehouse systems. So far only a little research has appeared on data quality problems, and we see several topics deserving further research. First of all, more work is needed on the design and implementation of the best tool for finding data quality issues in data warehouse systems, throughout the entire life cycle of the data warehouse. We expect this paper will help modelers and designers of warehouses to analyze and implement quality warehouse and business intelligence applications.

5. Future Work
A standardized tool can be implemented to handle all the data quality factors at each stage of the data warehouse system that are emphasized in the discussions above, i.e. in Tables 1, 2 and 3.

6. Acknowledgements
This paper was prepared through exhaustive discussions and T-cons with Subject Matter Experts (SMEs), data warehouse groups and data quality experts of various organizations in India and abroad. The authors gratefully acknowledge the time spent in these discussions by Mr. Shahzad, SME, CSC USA; Mr. Parswanath, Project Manager (Data Warehouse Wing), Wipro Technologies, India; Mr. Govardhan, Architect, IBM India Pvt Ltd; and Mr. Arun Kumar, Data Architect, KPIT Cummins India.



References
[1] Symbiotic Cycles of Data Profiling, Integration, and Quality (poster). TDWI, 2006.
[2] Six Steps to Managing Data Quality with SQL Server Integration Services. www.MelissaData.com.
[3] English, L. P. (1999). Improving Data Warehouse and Business Information Quality: Methods for Reducing Costs and Increasing Profits. John Wiley and Sons, Inc. Data Quality Issues, p. 10.
[4] Fields, K. T., Sami, H. and Sumners, G. E. (1986). Quantification of the auditor's evaluation of internal control in data base systems. The Journal of Information Systems, 1(1), pp. 24-77.
[5] Firth, C. (1996). Data quality in practice: experience from the frontline. Paper presented to the Conference of Information Quality, 25-26 Oct.
[6] Gupta, Vivek R., Senior Consultant, System Services Corporation. An Introduction to Data Warehousing (white paper).
[7] Potts, William J. E. (1997). Data Mining Using SAS Enterprise Miner Software. Cary, North Carolina: SAS Institute Inc.
[8] SAS Institute Inc. (1998). From Data to Business Advantage: Data Mining, the SEMMA Methodology and the SAS System (white paper). Cary, NC: SAS Institute Inc.
[9] Manek, Parul (2003). Microsoft CRM Data Migration Framework (white paper). April 2003.
[10] Han, Jiawei and Kamber, Micheline. Data Mining: Concepts and Techniques.
[11] Data Migration Best Practices. NetApp Global Services, January 2006.
[12] Leitheiser, Robert L. Data Quality in Health Care Data Warehouse Environments (white paper). University of Wisconsin.
[13] Badri, M. A., Davis, Donald and Davis, Donna (1995). A study of measuring the critical factors of quality management. International Journal of Quality and Reliability Management, 12(2), pp. 36-53.
[14] Birkett, W. P. (1986). Professional specialization in accounting IV: management accounting. Australian Accountant, September, p. 78.
[15] English, Larry P. (1999). Improving Data Warehouse and Business Information Quality. Wiley & Sons, New York.
[16] Miles, M. B. and Huberman, A. M. (1994). Qualitative Data Analysis: A Source Book of New Methods. Sage Publications, Thousand Oaks.
[17] Hipp, Jochen, Guntzer, Ulrich and Grimmer, Udo (2001). Data Quality Mining: Making a Virtue of Necessity. In Proc. of the 6th ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery (DMKD 2001), pages 52-57, Santa Barbara, California.
[18] Jeusfeld, M. A., Quix, C. and Jarke, M. (1998). Design and Analysis of Quality Information for Data Warehouses. In Proc. of the 17th International Conference on the Entity Relationship Approach (ER'98), Singapore.
[19] Wang, R. Y. (1998). A product perspective on total data quality management. Communications of the ACM, 41(2), pp. 58-65.
[20] Grimmer, Udo and Hinrichs, Holger (2001). A Methodological Approach to Data Quality Management Supported by Data Mining. In Proceedings of the 6th International Conference on Information Quality (IQ 2001), pages 217-232.
[21] Wang, R. (1998). A Product Perspective on Total Data Quality Management. Communications of the ACM, 41(2).
[22] Chaudhuri, S. and Dayal, U. (1997). An Overview of Data Warehousing and OLAP Technology. SIGMOD Record, 26(1), pp. 65-74.
[23] Vandermay, John. Considerations for Building a Real-time Data Warehouse. DataMirror Corporation.
[24] Schroeck, Michael J. (2000). E-Analytics: The Next Generation of Data Warehousing. DM Review, August 2000. <http://www.dmreview.com/master.cfm?NavID=55&EdID=2551>
[25] Data Quality Strategy: A Step-by-Step Approach (white paper). SAP Labs.
[26] Inmon, W. H. (1995). "What is a Data Warehouse?" Prism, Volume 1, Number 1.

Authors Profile

Manjunath T.N received his Bachelor's degree in Computer Science and Engineering from SJC Institute of Technology, Chickballapur, Karnataka, India in 2001 and his M.Tech in Computer Science and Engineering from Jawaharlal Nehru National College of Engineering, Shimoga, Karnataka, India in 2004. He is currently pursuing a Ph.D. degree at Bharathiar University, Coimbatore. He has a total of 10 years of industry and teaching experience. His areas of interest are data warehousing and business intelligence, multimedia, and databases. He has published and presented papers in journals and at international and national conferences.

Dr. Ravindra S Hegadi received his Master of Computer Applications (MCA), M.Phil and Doctor of Philosophy (Ph.D., 2007) in computer science from Gulbarga University, Karnataka. He has 15 years of experience and has visited various overseas universities as an SME. His areas of interest are image mining, image processing, databases, and business intelligence. He has published and presented papers in journals and at international and national conferences.

Ravikumar G.K received his Bachelor's degree from Siddaganga Institute of Technology, Tumkur (Bangalore University) in 1996 and his M.Tech in Systems Analysis and Computer Applications from Karnataka Regional Engineering College, Surathkal (NITK) in 2000. He is currently pursuing a Ph.D. degree at Dr. MGR University, Chennai. He has around 14 years of professional experience, including software industry and teaching experience. His areas of interest are data warehousing and business intelligence, multimedia, and databases. He has published and presented papers in journals and at international and national conferences.

