
- Vijay Nagappa, NetApp Project


              

Contents:
- Definition
- Types of Data Profiling
- Reasons for Profiling Data
- Data Profiling Roles
- Key Steps Followed During Data Profiling
- Profiling of Data Sources
- The Iterative Profiling Process
- Data Profiling Strategies and Techniques
- Steps to Understand the Data Through Data Profiling
- Benefits of Data Profiling
- Data Analysis
- Role of DP in Data Quality Management
- Metrics Specifics
- Data Profiling Tools
- Best Practices


What is Data Profiling?
Definitions:  Data profiling refers to analytical techniques used to examine existing data for completeness and accuracy. Data profiling is the first step toward improving data quality.

Data profiling, also sometimes referred to as data discovery, is the process of statistically examining data sources, such as a database or data residing in any file structure, primarily to identify problem-prone areas in how the data is organized and to plan its restructuring. Data profiling is a stepping stone toward improved data quality: it discovers and validates data patterns and formats, identifies and validates redundant data across data sources, and ultimately readies those sources for the development of integrated, data-driven, enterprise-level applications. Profiling techniques for data completeness indicate whether all records that should be present are present, whether any fields are blank, and whether duplicate records exist. Profiling techniques for data accuracy show whether the values contained in the fields are valid.
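The two families of checks named above (completeness and duplicate detection) can be sketched with a few lines of standard-library Python. The records below are an invented toy sample, not data from the project:

```python
from collections import Counter

# Toy sample (assumed data): one blank field, one duplicated name/email pair.
records = [
    {"id": 1, "name": "Alice", "email": "alice@example.com"},
    {"id": 2, "name": "Bob",   "email": ""},                    # blank field
    {"id": 3, "name": "Alice", "email": "alice@example.com"},   # duplicate content
]

# Completeness: fraction of records with no blank fields.
complete = [r for r in records if all(str(v).strip() for v in r.values())]
completeness = len(complete) / len(records)

# Duplicates: the same (name, email) pair appearing more than once.
dupes = [k for k, n in Counter((r["name"], r["email"]) for r in records).items() if n > 1]

print(completeness)   # prints 0.666... (2 of 3 records fully populated)
print(dupes)          # prints [('Alice', 'alice@example.com')]
```

In a real profile the same counts would be computed per column across the whole source, not on three hand-written rows.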

Types of Data Profiling
Data profiling can refer to Data Quality Profiling or Database Profiling.

1) Data Quality Profiling is the process of analyzing a database in relation to a Data Quality Domain, to identify and prioritize data quality problems. The results can include summaries (with counts and percentages) describing:
- Completeness of datasets and data records
- Problems organized by importance
- The distribution of problems in a dataset
- Lists of missing data records
- Data problems in existing records

Data quality profiling can be useful when planning and managing data cleanup projects.


Types of Data Profiling
2) Database Profiling is the process of analyzing a database to determine its structure and internal relationships:
- The tables used, their keys, and number of rows
- The columns used and the number of rows with a value
- Relationships between tables
- Columns copied or derived from other columns

Database profiling can also include analysis of:
- Tables and columns used by different applications
- How tables and columns are populated and changed
- The importance of different tables and columns

Database profiling can be useful when planning and managing data conversion and data cleanup projects. It can also be an initial step in defining a Data Quality Domain, which is used in Data Quality Profiling.

Why Do People Profile?
People may want to profile for several reasons, including:
- Assessing risks: Can data support the new initiative?
- Planning projects: What are realistic timelines, and what data, systems, and resources will the project involve?
- Scoping projects: Which data and systems will be included, based on priority, quality, and level of effort required?
- Assessing data quality: How accurate, consistent, and complete is the data within a single system?
- Designing new systems: What should the target structures look like? What mappings or transformations need to occur?
- Checking/monitoring data: Does the data continue to meet business requirements after systems have gone live and changes and additions occur?

Who Should Be Profiling the Data?
Data profiling is primarily considered part of IT projects, but the most successful efforts involve a blend of IT resources and business users of the data.

I) IT system owners, developers, and project managers
Analyze and understand issues of data structure:
a) How complete is the data?
b) How consistent are the formats?
c) Are key fields unique?
d) Is referential integrity enforced?

II) Business users and subject matter experts
People who understand the data content: what the data means, how it is applied in existing business processes, what data is required for new processes, and what data is inaccurate or out of context.

III) Data stewards
People who understand corporate standards and enterprise data requirements as a whole. They can contribute both to the requirements for specific projects and to those of the corporation.

Key Steps Followed During Data Profiling
1) Use analytical and statistical tools to outline the quality of data structure and data organization by determining the frequencies and ranges of key data elements within data sources.
2) Identify data patterns and data formats, noting the variation in the data types and formats used within data sources.
3) Identify multiple coding schemes and different spellings used in the data content.
4) Apply numerical analysis techniques to determine the scope of numeric data within data sources.

Key Steps Followed During Data Profiling
5) Identify duplication in the data content, such as in name, address, or other pertinent information.
6) Run validation trials by applying specific business rules to data records across the data sources.
7) Make note of primary and foreign key relationships and study their impact on data organization and data retrieval.
8) Identify and validate redundant data within the data sources.

Data Profiling Techniques
The techniques for profiling are either manual or automated via a profiling tool:

I) Manual techniques
- Involve people sifting through the data, query by query, to assess its condition.
- Manual profiling is appropriate for small data sets from a single source, with fewer than 50 fields, where the data is relatively simple.

II) Automated techniques
- Use software tools to collect summary statistics and analyses.
- Sophisticated data profiling technology was built to handle complex problems: multiple sources, many fields, and questionable documentation and metadata.
- These tools are the most appropriate for projects with hundreds of thousands of records, especially for high-profile and mission-critical projects.

Key Points in Profiling of Data Sources
At a minimum, all data sources should be profiled for the following:
- Value completeness: Is there a value in the field? Missing data can corrupt results.
- Historical completeness: Is the historical data complete? Gaps can corrupt results but might not be noticed at a summary level.
- Data format: Are phone numbers phone numbers? Are email addresses properly formed? Do postal codes have the correct structure?
- Cross-column consistency: Even if all the values in the columns are correct, it is important to profile across columns. Do product codes match the product categories? Are geographical tags consistent? Do the same products appear under different categories depending on transaction date?
- Value outliers: By looking for extremely high or low values, "killer rows" can be identified. One row with a value that is orders of magnitude off will skew averages and totals, and surprisingly might not be noticed through the normal course of report validation.
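A minimal sketch of two of these checks, format validation and outlier detection, using only the Python standard library. The ZIP-code regex and the two-standard-deviation threshold are illustrative assumptions, not prescribed rules:

```python
import re
import statistics

# Data format check: do postal codes have the correct structure?
# (US ZIP or ZIP+4 pattern, chosen here as an assumed rule.)
zip_re = re.compile(r"^\d{5}(-\d{4})?$")
zips = ["27513", "275", "27513-0001"]
bad_zips = [z for z in zips if not zip_re.match(z)]

# Value outliers: flag "killer rows" far from the mean.
amounts = [100, 102, 98, 101, 99, 100000]
mu = statistics.mean(amounts)
sd = statistics.pstdev(amounts)
outliers = [a for a in amounts if abs(a - mu) > 2 * sd]

print(bad_zips)   # prints ['275']
print(outliers)   # prints [100000]
```

Note how the single bad row dominates the mean (16,750 instead of roughly 100), which is exactly why such rows skew averages and totals in reports.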

Data Profiling Challenges
- Metadata does not exist
- Metadata is incomplete
- Metadata is inaccurate
- Metadata is difficult to gather
The quality of metadata is generally much worse than the quality of the data.
- Data deviates from expectations
- Data has inaccuracies
- Data is inconsistent with metadata
The quality of the data is generally unknown.

The Iterative Profiling Process (diagram)

The Iterative Profiling Process
There is an iterative approach to profiling within each of the analysis steps below:
- Running the analysis within the appropriate vendor tool
- Analyzing the results of each analysis
- Verifying the results with the source system SME
- Documenting the results (both in deliverables and within the profiling tools)
- Planning further analysis based on the results

The data investigation process should verify what the source system owners say about the data against the actual data, and vice versa. For example, a source system owner may say that all customer names must have a full first name and full surname. But when this rule is checked against the data, it shows that 10% of the records have only a first initial. This type of anomaly may be explained by a business rule that was applied to new data but not to historical data. Further analysis is performed in this case to verify that all anomalous records were created before the expected date. This must be discussed with the source system owner.

Data re-engineering also follows an iterative process to standardize, correct, match, and enrich data.

Examples of Data Profiling
- Distinct lengths of string values in a column, and the percentage of rows in the table that each length represents.
  Example: A profile of a column of US state codes, which should be two characters, shows values longer than 2 characters.
- Percentage of null values in a column.
  Example: A profile of a Zip Code/Postal Code column shows a high percentage of missing codes.
- Minimum, maximum, average, and standard deviation for numeric columns, and minimum and maximum for date/time columns.
  Example: A profile of an Employee birth date column shows the maximum value is in the future.
- Percentage of regular expressions (patterns) that occur in a column.
  Example: A pattern profile of a phone number column shows numbers entered in three different formats: (919)674-9999, [919]6749988, and 9199018888.
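The pattern-profile example above can be reproduced with a small sketch: each digit is replaced by "9" and each letter by "A" (an assumed masking convention, similar in spirit to what profiling tools produce), so distinct masks reveal distinct entry formats:

```python
import re
from collections import Counter

def pattern(value: str) -> str:
    """Mask a value: digits -> 9, letters -> A, punctuation kept as-is."""
    return re.sub(r"[A-Za-z]", "A", re.sub(r"\d", "9", value))

phones = ["(919)674-9999", "[919]6749988", "9199018888"]
profile = Counter(pattern(p) for p in phones)

print(profile)
# Three distinct masks, e.g. '(999)999-9999', signal that the
# column needs standardization before it can be matched or loaded.
```

On a real column the Counter would also give the percentage of rows per format, which is the statistic the slide describes.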

Examples of Data Profiling
- Distinct values in a column, and the percentage of rows in the table that each value represents.
  Example: A profile of a U.S. State column contains more than 50 distinct values.
- Candidate key column for a selected table.
  Example: A profile shows duplicate values in a potential key column.
- Dependency of values in one column on values in another column or columns.
  Example: A profile shows that two or more values in the State field have the same value in the Zip Code field.
- Value inclusion between two or more columns.
  Example: Some values in the ProductID column of a Sales table have no corresponding value in the ProductID column of the Products table.
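The candidate-key and value-inclusion checks above can both be sketched with set operations. The two toy tables are assumptions for illustration, mirroring the Sales/Products example:

```python
# Assumed toy tables mirroring the Sales/Products example above.
sales = [{"ProductID": 1}, {"ProductID": 2}, {"ProductID": 99}]
products = [{"ProductID": 1}, {"ProductID": 2}, {"ProductID": 3}]

# Candidate key: a column qualifies only if its values are unique.
ids = [p["ProductID"] for p in products]
is_candidate_key = len(ids) == len(set(ids))

# Value inclusion: Sales.ProductID values missing from Products.ProductID.
orphans = {s["ProductID"] for s in sales} - set(ids)

print(is_candidate_key)   # prints True
print(orphans)            # prints {99} -- broken referential integrity
```

The orphan set is exactly the list of rows that would fail a foreign-key constraint if one were enforced.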

A Typical Example
While profiling fields such as date of birth in an insurance project, some odd trends were found: people tended to be born on Jan. 1, Feb. 2, March 3, April 4, or May 5.

Why were those particular dates coming up? It turned out that date of birth was a required field on an insurance application that most people applying for automobile insurance didn't feel the need to provide. The data entry clerks, however, were paid based on how many applications they could enter per hour, so when they came across a required field with no value, they simply made a date up, typically picking Jan. 1 or Feb. 2 together with the applicant's birth year. The result was a batch of bogus dates entered into the system.

The data value was "accurate" in the sense that Jan. 1, 1970 is legitimately somebody's birthday, but it is not the birthday of the customer associated with the insurance transaction.

Data Profiling Strategies
Strategies based on the results: Once the result of the profiling is available, it is critical to be realistic about what the analysis tells you, and to cut any losses if required.

1) Reduce scope to avoid data that simply is not viable
We would not want to discover that key information advertised during the project justification phase is simply not viable. For example, if sales data from a subsidiary is missing certain fields, at least the full analysis can be run on the portion of the sales not affected by the problem.

2) Isolate problem areas and mark them
In some cases it is not necessary to abandon bad data completely; it might be possible to isolate bad records and ensure that they are clearly identified. If users can work with subsets of data after certain analysis, it might still be of value to them to run certain analyses on a portion. Reports need to clearly state "Excluding XYZ".

Data Profiling Strategies
3) Cleanse issue areas
It may be possible to cleanse data with issues early. For example:
a) Remove duplicates from master files, such as the customer list, using matching software.
b) Identify missing information and launch data collection and/or entry sub-projects to populate the key tables.
c) Use external data sources to enhance existing or sparse data sets. For example, with an external service it might be possible to populate missing longitude and latitude information from street addresses to enable map-based visualizations.

4) Revise your project budget if needed
a) If you have established a project budget before you undertake your data quality investigation, as a project manager you need to honestly look yourself in the mirror and answer the question: "Based on what I now know, is the approved budget still viable?"
b) Ideally, you should not set the detailed budget until you have done a data quality investigation. Without understanding the raw materials available, it is difficult to design the data warehouse or accurately estimate the effort involved.
c) Profile your data early and often. Data quality is an often overlooked but key aspect of data warehouse project success.

Data Mining
- Data mining is part of the data profiling process, used to dig deeper into the data in certain profiling areas. It is the process of extracting hidden patterns from data.
- As more data is gathered, with the amount of data doubling every three years, data mining is becoming an increasingly important tool to transform this data into information.
- Data mining is commonly used in a wide range of profiling practices, such as marketing, surveillance, fraud detection, and scientific discovery.

Benefits of Data Profiling
1) A big benefit of data profiling, which perhaps should be left unstated (at least while justifying the data warehouse to executives), is that it makes the implementation team look like they know what they are doing. By correctly anticipating the difficult data quality issues of a project up front, the team avoids the embarrassing and career-shortening surprise of discovering BIG problems near the end of a project.
2) Data profiling is one of the most effective technologies for improving data accuracy in corporate databases. It improves data quality, shortens the implementation cycle of major projects, and improves users' understanding of the data.
3) Discovering business knowledge embedded in the data itself is one of the significant benefits derived from data profiling. Standard data profiling automatically compiles statistics and other summary information about the data records, including analysis by field for minimum and maximum values, frequency counts for fields, data types and patterns/formats, and conformity to expected values.

Benefits of Data Profiling
7) Some of the other benefits gained from data profiling procedures are:
a) Identifies data entry errors
b) Quantifies data corruption
c) Optimizes the data management environment
d) Enables enterprise-level data
e) Enables a metadata repository

8) Saves time by avoiding rework:
a) Work carefully with the data provider on how to go about the campaign, and genuinely interrogate the data prior to making use of it.
b) Patiently plan each detail with the business so that the project does not get stuck in an endless loop caused by undiscovered errors inside the data.

9) Raises the quality of the database:
a) Fully grasping the data eliminates the likelihood of having to fix unanticipated issues, which is among the causes of project delays and even failure.
b) Data profiling increases data quality, which produces better outcomes for your campaigns.

Benefits of Data Profiling
10) Maintains accuracy: If a corporation periodically profiles its data, it ensures that the database has no missing data and deletes any duplicates, giving it higher chances of producing better outcomes. The reports produced serve as concrete evidence of which essential areas to work on, and the reviews and methods derived from the database give businesses a greater probability of closing a sale.

11) Identifies inconsistent business information and data:
1. Profiler Suitcase saves you money by detecting errors before they cost you dollars and customers.
2. Significant savings result when anomalies are addressed up front with risk management strategies.

12) Identifies data entry errors: Ideally, your systems were designed with checks and edits to protect against data entry errors. In reality, every system lets data errors in; there ends up being a tradeoff between quality control and processing throughput.

13) Avoids data migration rework: Critical human resources can be targeted at resolving data anomalies, rather than spent on manual investigation or costly rework in endless cycles of migrate, fix fallout, reattempt to migrate.

Benefits of Data Profiling
14) Quantifies data corruption: By quantifying degrees of corruption, a business can make a knowledgeable decision about where corrective efforts will be most effective.

15) Optimizes the data management environment: Identifying and removing unnecessary, redundant, and obsolete data streamlines the technical environment. Disk space is freed up, resulting in direct savings in both environment and administration costs.

16) Enables enterprise-level data: When used with other processes to evaluate applications, KnowledgeDriver provides significant input for defining an enterprise view of data.

17) Enables a metadata repository: Metadata is a natural by-product of profiling.

18) Increases the efficacy of major data tools: Profiling results can be used to increase the effectiveness of source-to-target transformation mappings in ETL tools. A business intelligence tool can also use derived quantitative measurements of the structure within data content to create statistical models of the attribute domain.

Facts: Impact of Data Profiling on Business
- 21% of senior IT executives believed that poor data quality costs their company between $10 and $100 million per year, according to a recent survey. Less than 15% believe their data is high quality.
- Current data quality problems cost U.S. businesses more than $600 billion per year.
- Data is coming in through all sorts of processes: batch feeds from 3rd parties, e-commerce, manual data entry, and the odd quick patch. It's little wonder that inconsistencies creep in. And these problems can have a significant impact on a business, its credibility, and its bottom line.
- Automation of traditional analysis techniques: it is not uncommon to see analysis time cut by 90% while still providing a better understanding.

How Does Data Profiling Promote Better Data Quality?
- Delivering better data quality relies first and foremost on understanding the data you manage and the rules that govern it. Profiling data provides both the framework and the roadmap to improved data quality, smoother-running systems, more efficient business processes, and ultimately the performance of the enterprise itself.
- Compared to manual analysis techniques, data profiling technology can significantly improve the organization's ability to meet the challenge of managing data quality.

Traditional vs. Data Profiling (comparison diagram)

Who Does Data Profiling Benefit, and How?
As a mature technology, it has been amply demonstrated that a data-profiling-led approach can deliver tangible value across the business when applied to the challenges of data analysis and quality management. At the enterprise level, its ability to raise and maintain the quality of corporate information promotes competitive advantage and cuts costs. Adoption is straightforward, and the potential return on investment is very significant.

I. Data Analysts and Data Managers
a) Step improvements in analysis performance through automation: do more in less time
b) Significant increase in the achievable breadth and depth of analysis scope
c) Far clearer understanding of data content and business rules
d) Facilitated communication between analysts, business users, and quality managers

II. Project Managers
a) Visibility of all data quality issues and their current statuses
b) Condensed and achievable delivery timescales
c) Reduced risk of project delays and budget over-runs due to unexpected data quality issues

Who Does Data Profiling Benefit, and How?
III. Data Owners/Stewards
a) Framework for effective delivery of a data quality strategy
b) Ability to meet data quality responsibilities
c) Greater confidence in data assets

IV. System Owners
a) Sustainable data quality
b) Diminished operational costs
c) Fewer disruptions and less manual intervention
d) Ability to deliver a better-value service

V. Executive Management
a) Effective business processes based on accurate, complete, and trustworthy information
b) Better guarantee of a return on investment from corporate systems and data assets
c) Reliable information supports better strategic and tactical business decisions
d) Increased profitability through improved efficiency and customer management

Typical Data Profiling Process in a Project (diagram)

Typical DP Process in a Project
This approach consists of five steps or phases:
1) Prepare for the project
2) Prepare for the analysis
3) Extract and format the data
4) Sampling
a) Load a sample of the data
b) Analyze the sample
c) Adjust the extracts and formats of the data
d) Produce deliverables
e) Delete the samples
5) Analysis
a) Load the data
b) Perform the analysis
c) Produce deliverables

Data Analysis to Be Followed in a Project
- Identify your data's current state and determine data quality issues to develop standards
- Identify the reusability of the existing data
- Manage possible risk early when integrating data with other applications
- Resolve missing or erroneous values
- Discover formats and patterns in your data
- Identify cleansing issues to maintain the integrity of the data
- Reveal hidden business rules
- Identify appropriate data values and define transformations so as to maintain data validity
- Report on column minimums, maximums, averages, mean, median, mode, variance, co-variance, standard deviation, and outliers
- Measure business rule compliance across data sets
- Report results in various formats, including PDF, HTML, XML, and CSV
- Provide point-in-time data profiling history
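The column-level reporting in the list above (minimums, maximums, mean, median, mode, standard deviation) maps directly onto Python's standard `statistics` module. A minimal sketch over an assumed numeric column:

```python
import statistics

def column_stats(values):
    """Basic per-column profile report: min, max, mean, median, mode, stdev."""
    return {
        "min": min(values),
        "max": max(values),
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "mode": statistics.mode(values),
        "stdev": statistics.pstdev(values),   # population standard deviation
    }

# Assumed sample column; in practice this would be one column of the source.
print(column_stats([10, 12, 12, 15, 21]))
# prints {'min': 10, 'max': 21, 'mean': 14, 'median': 12, 'mode': 12, 'stdev': ...}
```

A real profiling run would emit this dictionary for every numeric column and serialize the results to one of the report formats listed above (CSV, XML, etc.).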

Role of DP in Data Quality Management
Collection of quality facts:
- Data profiling uses analytical techniques to discover the true content, structure, and quality of data.
- It is important to note that data profiling does not find all inaccurate data; it can only find violations of a specific set of predefined rules. Therefore it is crucial to start with the proper technical and business metadata definitions, as discussed in the previous process step.
- Data profiling produces facts about data inaccuracies, and as such it generates metrics based on those facts.
- The data profiling exercise itself should also follow a specific method to be most effective. Ideally a bottom-up approach should be used: the exercise starts at the most atomic level of the data, and problems found at the lower levels feed into the analysis at the higher levels, which makes the profiling of the higher level more successful. Ideally, analysts correct data inaccuracies at each level before moving to a higher level.
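The point that profiling only finds violations of predefined rules, and turns those facts into metrics, can be sketched as follows. The rules and customer rows are invented for illustration; the first rule echoes the "full first name and surname" example from the iterative-process discussion:

```python
# Predefined rules (assumptions for illustration): profiling can only report
# violations of rules like these -- errors outside the rule set stay invisible.
rules = {
    "full_name_parts": lambda r: all(len(t) > 1 for t in r["name"].split()),
    "age_in_range":    lambda r: 0 <= r["age"] <= 120,
}

customers = [
    {"name": "J Smith",   "age": 34},    # first initial only -> rule violation
    {"name": "Ann Jones", "age": 200},   # impossible age -> rule violation
    {"name": "Bo Lee",    "age": 28},
]

# Generate a metric (violation percentage) from the facts found.
for rule_name, check in rules.items():
    violations = sum(1 for c in customers if not check(c))
    print(rule_name, f"{100 * violations / len(customers):.0f}% violations")
```

A misspelled but well-formed name would pass both rules untouched, which is precisely the limitation the bullet above describes.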

Key Considerations for Selecting a Data Profiling Tool
1. Who is profiling: business users, IT, or both
2. Common environment to communicate, review, and interpret results
3. Complexity of analysis, number of sources
4. Security of data
5. Ongoing support and monitoring

Data Profiling Tools
- DataFlux
- Trillium
- InfoSphere Information Analyzer
- Data Quality Explorer
- IBM ProfileStage
- Oracle Warehouse Builder

IBM InfoSphere Information Analyzer
It helps in continuously managing and monitoring data quality. Key features are as follows:
- IBM® InfoSphere™ Information Analyzer helps quickly and easily understand data by offering data quality assessment, flexible data rules design and analysis, and quality monitoring capabilities. These insights help derive more information from enterprise data to accelerate information-centric projects.
- Deep profiling capabilities provide a comprehensive understanding of data at the column, key, source, and cross-domain levels.
- Multi-level rules analysis (by rule, by record, by pattern), unique to the data quality space, provides the ability to evaluate, analyze, and address multiple data issues by record rather than in isolation.

IBM InfoSphere Information Analyzer
- Native parallel execution for enterprise scalability enables high performance against massive volumes of data.
- Proactively identify data quality issues, find patterns, and set up baselines for implementing quality monitoring efforts and tracking data quality improvements.
- Enhanced data classification capabilities help focus attention on common personal identification information to build a foundation for data governance.
- Supports data governance initiatives through auditing, tracking, and monitoring of data quality conditions over time.

IBM ProfileStage
- Profile stage: a profiling tool to investigate data sources for inherent structures, frequencies of phrases, data types, and so on. In addition, based on the real data rather than the metadata, it can suggest a data model for the union of your data sources; this data model would be in third normal form (3NF).
- Audit stage: now part of Information Analyzer. Based on predefined rules, this part of IA can expose exceptions in your data from the required format, contents, and relationships.
- QualityStage: now embedded in Information Server; provides functionality for fuzzy-matching records and for standardizing record fields based on predefined rules.
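The fuzzy record matching that QualityStage provides can be illustrated, in a much simpler form, with the standard library's `difflib`. This is a stand-in to show the idea, not how QualityStage itself works, and the names and 0.8 threshold are assumptions:

```python
import difflib

# Assumed customer names; "Jon" vs "John" is a classic near-duplicate.
names = ["Jon Smith", "John Smith", "Jane Doe"]

# Score every pair by string similarity (0.0 .. 1.0).
pairs = [
    (a, b, difflib.SequenceMatcher(None, a, b).ratio())
    for i, a in enumerate(names)
    for b in names[i + 1:]
]

# Flag pairs above an assumed similarity threshold as likely duplicates.
likely_dupes = [(a, b) for a, b, score in pairs if score > 0.8]
print(likely_dupes)   # prints [('Jon Smith', 'John Smith')]
```

Production matching tools add phonetic encoding, field-by-field weighting, and survivorship rules on top of this basic pairwise-similarity idea.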

Best Practices
- Data profiling is best scheduled prior to system design, typically occurring during the discovery or analysis phase.
- The first step, and also a critical dependency, is to clearly identify the appropriate person to provide the source data and also serve as the "go to" resource for follow-up questions.
- Once you receive source data extracts, you're ready to prepare the data for profiling.
- As a tip, loading data extracts into a database structure will allow you to freely write SQL to query the data while also having the flexibility to use a profiling tool if needed.

When creating or updating a data profile, start with basic column-level analysis:
1) Distinct count and percent:
a) Analyzing the number of distinct values within each column will help identify possible unique keys within the source data.
b) Identification of natural keys is a fundamental requirement for database and ETL architecture.
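Step 1, distinct count and percent per column, can be sketched like this (the three-row employee sample is an assumption; a real extract would have thousands of rows):

```python
# Assumed sample rows from a source extract.
rows = [
    {"emp_id": "E1", "dept": "IT"},
    {"emp_id": "E2", "dept": "IT"},
    {"emp_id": "E3", "dept": "HR"},
]

# Distinct count and percent per column; 100% distinct suggests a natural key.
for col in rows[0]:
    values = [r[col] for r in rows]
    distinct = len(set(values))
    pct = 100.0 * distinct / len(values)
    print(col, distinct, f"{pct:.0f}%")
# prints: emp_id 3 100%   (candidate natural key)
#         dept   2 67%
```

The same query expressed in SQL would be `SELECT COUNT(DISTINCT col), COUNT(*) FROM source`, run once per column.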

Best Practices
2) Zero, blank, and NULL percent:
a) Analyzing each column for missing or unknown data helps you identify potential data issues.
b) This information will help database and ETL architects set up appropriate default values or allow NULLs on the target database columns where an unknown or untouched (i.e., NULL) data element is an acceptable business case.

3) Minimum, maximum, and average string length:
a) Analyzing string lengths of the source data is a valuable step in selecting the most appropriate data types and sizes in the target database.
b) Reducing the column widths to be just large enough to meet current and future requirements will improve query performance by minimizing table scan time (especially true in large and highly accessed tables where performance is a top consideration).
c) If the respective field is part of an index, keeping the data types in check will also minimize index size, overhead, and scan times.

4) Numerical and date range analysis:
a) Gathering information on minimum and maximum numerical and date values helps database architects identify appropriate data types that balance storage and performance requirements.
b) If your profile shows a numerical field does not require decimal precision, consider using an integer data type because of its relatively small size.
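Steps 2 and 3 above (null/blank percent and string-length analysis) can be sketched over an assumed state-code column; the output is the kind of evidence an architect would use to size a target column:

```python
# Assumed sample column: two valid codes, a NULL, a too-long value, a blank.
col = ["NC", "CA", None, "Texas", ""]

nulls = sum(1 for v in col if v is None)
blanks = sum(1 for v in col if v == "")
lengths = [len(v) for v in col if v]   # lengths of non-null, non-blank values

print(f"null%: {100 * nulls / len(col):.0f}, blank%: {100 * blanks / len(col):.0f}")
print("min/max/avg length:", min(lengths), max(lengths), sum(lengths) / len(lengths))
# prints: null%: 20, blank%: 20
#         min/max/avg length: 2 5 3.0
```

Here a max length of 5 in a column that should hold two-character state codes is exactly the kind of finding that sends the analyst back to the source owner before the target schema is fixed at CHAR(2).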

