
Chapter-Two
Data Science
Overview of Data Science
► Data science is a multi-disciplinary field that uses:
• scientific methods, processes, algorithms and systems to extract knowledge from structured, semi-structured and unstructured data sources.

► In academic areas, data science continues to evolve as:
• data mining, data warehousing, data modeling, big data, etc.

► It is used for creating data-centric artifacts and applications that can address specific scientific, socio-political, business-related, or other issues.
► A data scientist possesses a strong quantitative background in statistics and linear algebra, as well as programming skills.


Why Data Science?
• Data are available in various forms (structured and unstructured) and are now generated in bulk from many different sources; data is essentially free and ubiquitous.
• The granularity, size and accessibility of data, spanning the physical, social, commercial and political spheres, have exploded in the last decade or more.
• Simple tools are not capable of processing this huge volume and variety of data.
• Understanding, processing, extracting, visualizing and communicating data is a hugely important skill, not only at the professional level but also at the educational level, from elementary school through high school and college.
Components of Data Science
► Data science consists of three components: organizing, packaging and delivering data (OPD of data), as sketched in the example after this list.
1. Organizing the data: Organizing is where the planning and execution of the physical storage and structure of the data take place, after applying best practices in data handling.
2. Packaging the data: Packaging is where prototypes are created, statistics are applied and visualizations are developed. It involves logically as well as aesthetically modifying and combining the data into a presentable form.
3. Delivering the data: Delivering is where the story is narrated and the value is received. It makes sure that the final outcome is delivered to the concerned people.
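A minimal Python sketch of the three OPD steps above, assuming a small in-memory list of sales records (the records, field names and statistics are illustrative, not from the slides):

# Minimal sketch of Organizing, Packaging and Delivering data (OPD).
from statistics import mean

raw = [("north", 120.0), ("south", 95.5), ("north", 133.2), ("east", 88.0)]

# 1. Organize: plan and execute the storage structure of the data.
organized = {}
for region, amount in raw:
    organized.setdefault(region, []).append(amount)

# 2. Package: apply statistics and shape the data into a presentable form.
packaged = {region: {"count": len(values), "average": round(mean(values), 2)}
            for region, values in organized.items()}

# 3. Deliver: narrate the result to the concerned people.
for region, summary in sorted(packaged.items()):
    print(f"{region}: {summary['count']} sales, average {summary['average']}")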
Data Science Lifecycle
Phase 1: understand the various specifications, requirements, priorities and the required budget. Ask the right questions about the resources available in terms of people, technology, time and data to support the project. Frame the business problem and formulate initial hypotheses (IH) to test.
Phase 2: you require an analytical sandbox in which you can perform analytics for the entire duration of the project. You need to explore, preprocess and condition data prior to modeling. You will perform ETLT (extract, transform, load and transform) to get data into the sandbox.
Phase 3: determine the methods and techniques to draw the relationships between variables. These relationships will set the base for the algorithms which you will implement in the next phase. Apply Exploratory Data Analytics (EDA) using various statistical formulas and visualization tools.
Phase 4: develop datasets for training and testing purposes. Decide whether your existing tools will suffice for running the models or whether a more robust environment (such as fast, parallel processing) is needed. Analyze various learning techniques like classification, association and clustering to build the model (a small sketch of this step follows after the phases).
Phase 5: deliver final reports, briefings, code and technical documents. A pilot project is also implemented in a real-time production environment. This will give you a clear picture of the performance and other related constraints on a small scale before full deployment.
Phase 6: evaluate whether the goal planned in the first phase has been achieved. Identify all the key findings, communicate them to the stakeholders and determine whether the results of the project are a success or a failure based on the criteria developed in Phase 1.
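Relating to Phase 4 above, a minimal sketch of developing training and testing datasets and building a very simple classifier; the data, the 80/20 split and the nearest-centroid rule are illustrative assumptions, not the method prescribed by the lifecycle:

# Phase 4 sketch: training/testing datasets and a simple classifier.
# Data points are (feature, label) pairs generated for illustration only.
import random

data = [(x + random.gauss(0, 0.3), "low" if x < 5 else "high")
        for x in [1, 2, 3, 4, 6, 7, 8, 9] * 10]
random.shuffle(data)

split = int(0.8 * len(data))            # 80% for training, 20% for testing
train, test = data[:split], data[split:]

# "Model": the mean feature value (centroid) of each class in the training set.
centroids = {}
for label in ("low", "high"):
    values = [x for x, y in train if y == label]
    centroids[label] = sum(values) / len(values)

def predict(x):
    # Assign the class whose centroid is closest to x.
    return min(centroids, key=lambda label: abs(x - centroids[label]))

accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"test accuracy: {accuracy:.2f}")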
Data Science Workflow
► What is the question/problem? Who wants to answer/solve it? What do they know/do now? How well can we expect to answer/solve it? How well do they want it answered/solved?
► Where will it be hosted? Who will use it? Who will maintain it?
► What data is available? Is it good enough? Is it enough?
► What are sensible measurements to derive from this data? Units, transformations, rates, ratios, etc. Again, what are the measurements that tell the real story? How can I describe and visualize them effectively?
► What kind of problem is it? E.g., classification, clustering, regression, etc. What kind of model should I use? Do I have enough data for it? Does it really answer the question?
► Did it work? How well? Can I interpret the model? What have I learned?
Data vs. Information
► Data is a representation of unprocessed:
• raw facts,
• figures,
• concepts and
• instructions in a formalized manner, which is more suitable for processing, interpretation and communication by humans or machines.
► Data may convey a piece of, or partial, meaning but not a complete sense.
► Data can be represented with characters such as alphabets (A-Z, a-z), digits (0-9) or special characters (+, -, /, *, =, etc.), as well as pictures, sound and video. It may also appear as figures, shapes, tables, and numeric, alphanumeric or non-alphanumeric characters.
Cont. . .
► Information:
• is processed data on which decisions are based and
• conveys a complete meaning.
► Information is interpreted data, created from:
• organized,
• structured and
• processed data in a particular context.
Data Processing Cycle
► Data processing is the restructuring or reordering of data in order:
• to increase usefulness,
• to add value,
• to avoid ambiguity,
• to deal with complexity and
• for better representation.
► The data processing cycle has three main steps:
Input → Process → Output, with Storage supporting all three steps.
Cont. . .
► Input
• is data in a convenient form for further processing.
• The format will depend on the purpose of the processing and on the processing machine.
• When a computer is used, the input can be:
• obtained directly from users via input devices, or
• fetched from a hard disk, CD, flash disk, etc.
► Process
• In this step the data obtained as input is further processed into a more useful form.
• In an electronic computer, software or an application performs the processing.
► Output
• The result of the processing is produced as output.
• The output from a particular process may be the final information required, or it may be used further as input for another process.
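A minimal sketch of one pass through the input → process → output cycle, with files standing in for storage ("numbers.txt" and "summary.txt" are assumed example files created by the script itself):

# Data processing cycle sketch: input -> process -> output, with storage
# used both for the input data and for the saved result.
with open("numbers.txt", "w") as f:        # prepare sample input on storage
    f.write("4\n8\n15\n16\n23\n42\n")

with open("numbers.txt") as f:             # Input: fetch data from storage
    values = [int(line) for line in f if line.strip()]

total = sum(values)                        # Process: turn the raw data into
average = total / len(values)              # a more useful form

with open("summary.txt", "w") as f:        # Output: the result, stored so it
    # can also feed another process later
    f.write(f"count={len(values)} total={total} average={average:.2f}\n")

print(f"processed {len(values)} values; average = {average:.2f}")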
Data: types and representations
► Data types can be described from different perspectives.
• From computer programming: data types are attributes of data that tell the compiler or interpreter how the data is to be used.
• From data analytics: data types simply articulate how the data exists.
► Data types from the computer programming perspective are:
• Integers: to store whole numbers.
• Booleans: to store true/false states.
• Characters: to store a single character.
• Alphanumeric strings: to store combinations of characters and numbers.
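A minimal Python illustration of the four programming-perspective data types listed above (the variable names and values are illustrative):

# Programming-perspective data types, shown in Python.
count = 42                 # integer: a whole number
is_valid = True            # boolean: a true/false state
grade = "A"                # character: Python has no separate char type, so a
                           # one-character string stands in for it here
student_id = "ETS0421"     # alphanumeric string: letters and digits combined

for value in (count, is_valid, grade, student_id):
    print(type(value).__name__, value)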


Cont. . .
► Data types from the data analytics perspective are:
• Structured: obeys a pre-defined data model and is straightforward to interpret. E.g. tabular data.
• Semi-structured (self-describing structure): a form of structured data that does not conform to the formal structure of a data model, but contains tags or other markers to express semantic relations. E.g. XML.
• Unstructured: neither follows a pre-defined data model nor has a self-describing structure. E.g. free text, images and audio.
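A minimal sketch showing the same fact represented in the three forms above: a tabular (CSV) row, an XML fragment and free text. Only the Python standard library is used, and the example record itself is an assumption:

# The same information as structured, semi-structured and unstructured data.
import csv, io
import xml.etree.ElementTree as ET

structured = "name,age\nAbebe,25\n"                                    # tabular rows and columns
semi_structured = "<person><name>Abebe</name><age>25</age></person>"   # XML with self-describing tags
unstructured = "Abebe mentioned last week that he just turned 25."     # free text

row = next(csv.DictReader(io.StringIO(structured)))
print("structured      :", row["name"], row["age"])

person = ET.fromstring(semi_structured)
print("semi-structured :", person.findtext("name"), person.findtext("age"))

# No pre-defined model: extracting the age from free text needs ad-hoc logic.
print("unstructured    :", unstructured)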


Cont. . .
► From a technical point of view: metadata
• Metadata is not a separate data structure.
• It provides additional information about a specific set of data.
• Metadata is data about data.
• E.g. in a photograph, the size, location, time, etc. are metadata.
• Metadata is highly applicable in semantic webs, big data, etc.
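A minimal sketch of "data about data": the file's content is the data, while the size, location and modification time kept by the file system are its metadata ("photo_notes.txt" is an assumed example file created here only for the demonstration):

# Metadata sketch using the file system's own bookkeeping.
import os, time

path = "photo_notes.txt"
with open(path, "w") as f:
    f.write("Sunset over Lake Tana, taken with a phone camera.\n")

info = os.stat(path)                      # metadata maintained by the OS
metadata = {
    "location": os.path.abspath(path),
    "size_bytes": info.st_size,
    "modified": time.ctime(info.st_mtime),
}
for key, value in metadata.items():
    print(f"{key}: {value}")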
Data Value Chain
► Big data is a set of strategies and technologies required to:
• gather,
• organize,
• process and
• gain insights from large datasets.
► The data value chain describes the flow of information within a big data system.
Data acquisition
► Data acquisition is the process of:
• gathering,
• filtering and
• cleaning data before it is put into a data warehouse or processed further (a small sketch follows below).
► Data acquisition is a major challenge in big data.
► The challenge arises because the infrastructure:
• should support low, predictable latency in capturing data and executing queries,
• should support dynamic and flexible data structures and
• should handle very high transaction volumes.
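A minimal sketch of the gather, filter and clean steps mentioned above, assuming raw records arrive as loosely formatted "name;age" strings (the records and their layout are illustrative assumptions):

# Data acquisition sketch: gather raw records, filter out unusable ones and
# clean the rest before they would be loaded into a warehouse.
raw_records = ["  Alemu ; 29 ", "Sara;31", ";no_name", "Kebede; abc", "Hana ;27"]

cleaned = []
for record in raw_records:                        # gather
    parts = [p.strip() for p in record.split(";")]
    if len(parts) != 2 or not parts[0] or not parts[1].isdigit():
        continue                                  # filter: drop malformed rows
    name, age = parts[0].title(), int(parts[1])   # clean: normalize case/types
    cleaned.append({"name": name, "age": age})

print(cleaned)    # ready to be stored or processed further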
Data analysis
► Data analysis involves:
• exploring,
• transforming and
• modeling data in order to make the raw data amenable (agreeable) to decision making (a small sketch follows below).
► The goals of data analysis are:
• highlighting relevant data,
• synthesizing and
• extracting useful hidden information.
► Areas related to data analysis include:
• data mining,
• business intelligence and
• machine learning.
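A minimal sketch of the exploring, transforming and modeling steps listed above, using only the standard library (the numbers and the straight-line model are illustrative assumptions):

# Data analysis sketch: explore (summary statistics), transform (pair the
# variables) and model (least-squares line) a tiny illustrative dataset.
from statistics import mean, stdev

hours = [1, 2, 3, 4, 5, 6, 7, 8]
scores = [52, 55, 61, 64, 70, 74, 79, 85]

# Explore: basic summary statistics.
print(f"score mean={mean(scores):.1f}, stdev={stdev(scores):.1f}")

# Transform and model: fit score = a + b * hours by ordinary least squares.
mx, my = mean(hours), mean(scores)
b = sum((x - mx) * (y - my) for x, y in zip(hours, scores)) / \
    sum((x - mx) ** 2 for x in hours)
a = my - b * mx
print(f"fitted line: score = {a:.1f} + {b:.1f} * hours")   # the hidden pattern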
Data curation
► Data curation refers to the active management of data to ensure its quality.
► Data curation includes activities such as:
• content creation,
• selection,
• classification,
• transformation,
• validation and
• preservation of data.
► Data curation is done by data curators.
► Data curators are responsible for improving the accessibility and quality of the data.
► The goals of data curation are:
• ensuring trustworthiness,
• making data discoverable,
• easing accessibility,
• improving data reusability and
• making data fit their purpose.
Data storage
► Data storage
• is the persistence and management of data in a scalable way.
• It guarantees applications fast access to the data.
► Relational Database Management Systems (RDBMS):
• RDBMSs have been the main solution for data storage for almost 40 years.
• RDBMSs have a set of properties called ACID (atomicity, consistency, isolation and durability).
• The ACID properties lack flexibility with regard to schema changes, fault tolerance and increases in data volume (complexity).
• This lack of flexibility makes RDBMSs unsuitable for big data science.
► The ACID properties in a DBMS are (a small sketch follows below):
• Atomicity: the entire transaction takes place at once or does not happen at all.
• Consistency: the database must be consistent before and after the transaction.
• Isolation: multiple transactions should occur independently, without interference.
• Durability: changes made by a successful transaction should persist even if a system failure happens.
► NoSQL data storage technologies are designed as an alternative data model to support flexibility and scalability in data storage.
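A minimal sketch of atomicity and durability using the standard-library sqlite3 module: a simulated transfer either fully commits or is fully rolled back. The "accounts" table and its balances are illustrative assumptions, not a prescribed schema:

# ACID sketch with sqlite3: the two UPDATEs below form one transaction, so a
# failure before commit undoes both (atomicity); the committed INSERTs
# survive (durability).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100.0), ("bob", 50.0)])
conn.commit()

try:
    with conn:   # commits on success, rolls back if an exception occurs
        conn.execute("UPDATE accounts SET balance = balance - 70 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 70 WHERE name = 'bob'")
        raise RuntimeError("simulated failure before commit")
except RuntimeError:
    pass         # the half-finished transfer has been rolled back

print(conn.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
# -> [('alice', 100.0), ('bob', 50.0)]   (balances unchanged)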
Data usage
► Data usage covers the data-driven business activities that need:
• access to data,
• its analysis and
• the tools needed to integrate the data analysis within the business activity.
► Data usage in business decision making can enhance competitiveness through:
• reduction of costs,
• increased added value, or
• any other parameter that can be measured against existing performance criteria.
Big data: Basic concepts
► Big data is a large and complex collection of data sets.
► Big data is a set of strategies and technologies required to:
• gather,
• organize,
• process and
• gain insights from large datasets.
► Why big data? Big data emerged because:
• the volume of data has drastically increased over time.
• the datasets in organizations have become so large that it is difficult (almost impossible) to process them using on-hand database management tools or traditional data processing applications.
Cont…
• Due to the advent of new technologies, devices and communication means such as social networking sites and IoT, the amount of data produced by mankind is growing rapidly every year.
• The amount of data produced by us:
• Before 2003: 5 billion GB
• In 2011: 5 billion GB every 2 days
• In 2013: 5 billion GB every 10 minutes
• If this data were stored on disks and the disks piled up, they could fill an entire football field.
Cont…
► Big data is characterized by 3Vs and more:
• Volume: large amounts of data (zettabytes/massive datasets).
• Velocity: data in live streaming or in motion.
• Variety: data comes in many different forms from diverse sources.
• Veracity: can we trust the data? How accurate is it? etc.
Clustered computing and Hadoop
► Clustered computing:
• Because of big data, individual computers are inadequate for the required computation.
• Therefore, clustering appeared to address the computational and high storage needs of big data.
• Big data clustering software combines the resources of many smaller machines.
► Advantages of clustered computing:
• Resource pooling: combining the available storage space, CPU and memory for processing large datasets (see the sketch after this list).
• High availability: clustering embraces fault-tolerant and robust computing environments to increase availability.
• Easy scalability: clustered computing is easily scaled horizontally by adding additional resources to the cluster.
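On a single machine, the resource-pooling idea above can be sketched with a pool of worker processes splitting a large dataset into chunks; this only mimics, on one node, how a cluster spreads work across many machines (the data and chunk size are assumptions):

# Resource-pooling sketch: chunks of a large dataset are processed in
# parallel by a pool of worker processes.
from concurrent.futures import ProcessPoolExecutor

def chunk_sum(chunk):
    return sum(x * x for x in chunk)        # some per-chunk computation

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]

    with ProcessPoolExecutor() as pool:     # the pooled CPU/memory resources
        partials = list(pool.map(chunk_sum, chunks))

    print("total:", sum(partials))          # combine the partial results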
Cont…
► Clustered computing requires:
• managing cluster membership,
• coordinating resource sharing and
• scheduling actual work on individual nodes.
► Cluster membership and resource allocation can be handled by software like Hadoop's YARN.
► YARN is an acronym that stands for "Yet Another Resource Negotiator".
Hadoop and its ecosystem
► Hadoop is an open-source framework.
► It is designed to make interaction with big data easier.
► Hadoop allows distributed processing of large datasets across clusters, much like parallel computing.
► Hadoop has four key characteristics:
• Economical: it uses ordinary computers for extensive computation.
• Reliable: it stores copies of the data on different machines.
• Scalable: it can be scaled simply by adding machines to the cluster.
• Flexible: it can store as much structured and unstructured data as needed.
► Hadoop has four key components:
• Data management
• Data access
• Data processing
• Data storage
Hadoop ecosystem
► The Hadoop ecosystem evolved from the four components mentioned on the previous slide.
► Generally, the Hadoop ecosystem consists of:
• HDFS: Hadoop Distributed File System
• YARN: Yet Another Resource Negotiator
• MapReduce: programming-based data processing
• Spark: in-memory data processing
• PIG, HIVE: query-based processing of data services
• HBase: NoSQL database
• Mahout, Spark MLlib: machine learning algorithm libraries
• Solr, Lucene: searching and indexing
• Zookeeper: managing the cluster
• Oozie: job scheduling
Big data lifecycle: with Hadoop
1. Ingesting data into the system
• Data ingestion is the first phase of big data processing.
• Data is transferred to Hadoop from different sources such as local files, databases or other systems.
• Sqoop transfers data from RDBMSs to Hadoop, whereas Flume transfers event data.
2. Processing the data in storage
• Processing the stored data is the second phase.
• Big data is stored in the distributed file system:
• HDFS, and
• HBase, the NoSQL distributed database.
• Data processing is then done by MapReduce and Spark (a sketch of the MapReduce idea follows below).
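A minimal sketch of the MapReduce idea in plain Python: a map step emits (word, 1) pairs, a shuffle groups them by key and a reduce step sums each group. The two "documents" are illustrative; real Hadoop MapReduce runs these same steps distributed over HDFS blocks and cluster nodes:

# Word count in the MapReduce style: map -> shuffle/group -> reduce.
from collections import defaultdict

documents = ["big data needs big clusters", "data science uses big data"]

# Map: emit a (key, value) pair for every word.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle: group all values that share the same key.
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce: combine each group into a single result.
word_counts = {word: sum(counts) for word, counts in groups.items()}
print(word_counts)    # e.g. {'big': 3, 'data': 3, ...}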
Cont. . .
3. Computing and analyzing data
• Computing is the third phase of the big data processing lifecycle.
• Data is analyzed by processing frameworks such as Pig, Hive and Impala.
• Pig converts the data using MapReduce and then analyzes it.
• Hive is also based on MapReduce programming and is most suitable for structured data.
4. Visualizing the results
• Accessing or visualizing the results is the fourth phase.
• In this stage, the analyzed data can be accessed by users.
• Visualizing or accessing the results is performed by tools such as Hue and Cloudera Search.

End of Chapter-Two
Reading Assignment: List AI applications that you have encountered in your life.
