
DEPARTMENT OF INFORMATION TECHNOLOGY

INTERNAL ASSESSMENT TEST I - SET 1 - ANSWER KEY


COURSE CODE/SUBJECT CODE/NAME: C312/CCW331/Business Analytics DATE: 2.3.24
BRANCH / SEMESTER: IT / VI TIME: 9.45 AM to 11.15 AM
ACADEMIC YEAR: 2023 - 2024 MARKS: 50
CO1 Acquire the knowledge for the Analytics Life Cycle
CO2 Understand real-world business problems and models with analytical solutions
CO3 Identify the business processes for extracting Business Intelligence
CO4 Learn predictive analytics for business forecasting
CO5 Apply analytics for supply chain and logistics management
CO6 Use analytics for marketing and sales
BLOOM'S TAXONOMY
Remembering Applying Evaluating
Understanding Analyzing Creating

PART A (5 x 2 = 10 Marks)
1. Compare Business Analytics and Data Science. (CO1, An) (2)
Business Analytics is the statistical study of business data to gain insights; it mostly uses structured data. Data Science is the study of data using statistics, algorithms and technology.
2. Define Hypothesis Generation. (CO1, R) (2)
Hypothesis generation is a quick exercise that allows a team to reflect on all the already-known assumptions and insights related to user needs and behaviours, share them amongst team members, and derive initial ideas for service experiences or features that could be offered.
3. Analyse the use of analytics in healthcare. (CO2, An) (2)
Health care analytics is a subset of data analytics that uses both historical and current data to produce actionable insights, improve decision-making, and optimise outcomes within the health care industry. For example, data can be used to identify high-risk patients, track disease progression, and evaluate the effectiveness of treatments.
4. Define Business Intelligence. Who are the four types of BI users? (CO3, U) (2)
Business Intelligence simply refers to using data to make informed decisions. For businesses to apply Business Intelligence or BI, it requires an assortment of strategies, tools, architecture and rules to modify the raw data into useful insights that help in driving productivity in the business processes. The four types of BI users are: (i) the professional data analyst, (ii) the IT user, (iii) the head of the company, and (iv) the business user.
5. (CO3, U) (2)
i) What is a Data Mart? (1)
A data mart is a subset of the data warehouse. It is specially designed for a particular line of business, such as sales or finance. In an independent data mart, data can be collected directly from sources.
ii) What is the main difference between OLAP and OLTP? (1)
The main difference between OLAP and OLTP is in the name: OLAP is analytical in nature, and OLTP is transactional. OLAP tools are designed for multidimensional analysis of data in a data warehouse, which contains both transactional and historical data. OLTP is designed to support transaction-oriented applications by processing recent transactions as quickly and accurately as possible.

PART B (2*13 = 26 Marks)


6.a Explain in detail about the Business Analytics Lifecycle with an example. (CO1, U) (13)
6 Steps in the Business Analytics Process
Step 1: Identifying the Problem
The first step of the process is identifying the business problem. The problem could be an actual crisis, or it could be something related to recognizing business needs or optimizing current processes. This is a crucial stage in Business Analytics, as it is important to clearly understand what the expected outcome should be. When the desired outcome is determined, it is further broken down into smaller goals. Then, business stakeholders decide on the relevant data required to solve the problem. Some important questions must be answered in this stage, such as: What kind of data is available? Is there sufficient data? And so on.
Step 2: Exploring Data
Once the problem statement is defined, the next step is to gather data (if required) and, more importantly, cleanse the data. Most organizations would have plenty of data, but not all data points would be accurate or useful. Organizations collect huge amounts of data through different methods, but at times, junk data or empty data points would be present in the dataset. These faulty pieces of data can hamper the analysis. Hence, it is very important to clean the data that has to be analyzed. To do this, you must do computations for the missing data, remove outliers, and find new variables as a combination of other variables. You may also need to plot time series graphs as they generally indicate patterns and outliers. It is very important to remove outliers as they can have a heavy impact on the accuracy of the model that you create. Moreover, cleaning the data helps you get a better sense of the dataset.
Step 3: Analysis
Once the data is ready, the next thing to do is analyze it. To execute this, there are various kinds of statistical methods (such as hypothesis testing, correlation, etc.) involved to find the insights that you are looking for. You can use all of the methods for which you have the data. The prime way of analyzing is pivoting around the target variable, so you need to take into account whatever factors affect the target variable. In addition to that, a lot of assumptions are also considered to find out what the outcomes can be. Generally, at this step, the data is sliced, and comparisons are made. Through these methods, you are looking to get actionable insights.
Step 4: Prediction and Optimization
Gone are the days when analytics was used to react. In today's era, Business Analytics is all about being proactive. In this step, you will use prediction techniques, such as neural networks or decision trees, to model the data. These prediction techniques will help you find hidden insights and relationships between variables, which will further help you uncover patterns on the most important metrics. By principle, a lot of models are used simultaneously, and the models with the most accuracy are chosen. In this stage, a lot of conditions are also checked as parameters, and answers to a lot of 'what if...?' questions are provided.
Step 5: Making a Decision and Evaluating the Outcome
From the insights that you receive from your model built on target variables, a viable plan of action will be established in this step to meet the organization's goals and expectations. The said plan of action is then put to work, and the waiting period begins. You will have to wait to see the actual outcomes of your predictions and find out how successful you were in your endeavors. Once you get the outcomes, you will have to measure and evaluate them.
Step 6: Optimizing and Updating
Post the implementation of the solution, the outcomes are measured as mentioned above. If you find some methods through which the plan of action can be optimized, then those can be implemented. If that is not the case, then you can move on with registering the outcomes of the entire process. This step is crucial for any analytics in the future because you will have an ever-improving database. Through this database, you can get closer and closer to maximum optimization. In this step, it is also important to evaluate the ROI (return on investment). Take a look at the diagram below of the life cycle of business analytics.
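As one hedged illustration of Steps 2 through 5, here is a minimal Python sketch (pandas and scikit-learn assumed; the sales.csv file, the churned target and the feature names are invented for illustration):

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Step 2: Exploring Data -- load, clean, and handle outliers
df = pd.read_csv("sales.csv")              # hypothetical dataset
df = df.dropna(subset=["monthly_spend"])   # drop rows with missing values
z = (df["monthly_spend"] - df["monthly_spend"].mean()) / df["monthly_spend"].std()
df = df[z.abs() < 3]                       # remove extreme outliers

# Step 3: Analysis -- pivot around the target variable
print(df.groupby("churned")["monthly_spend"].describe())

# Step 4: Prediction -- model the data with a decision tree
X, y = df[["monthly_spend", "tenure_months"]], df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# Step 5: Evaluate the outcome on unseen data
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))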
(OR)
6.b i) List out the types of Analytics and explain in detail. (6) (CO1, R) (13)
The four subsets of data analytics are descriptive, diagnostic, predictive, and prescriptive. Businesses across all types of industries utilize these specialty areas in analytics to increase overall performance at all levels of operations. These four types of analytics all work together and can be used together to help improve business performance.
Descriptive Analytics
Descriptive analytics can show "what happened" and is the foundation of data insights. According to Investopedia, it is the interpretation of historical data to better understand changes that have occurred in a business. This type of analytics can be used to gain an overall picture of how a business is performing and is often used alongside predictive and prescriptive analytics. Common insights include year-over-year comparisons, the number of users, and revenue per subscriber.
Diagnostic Analytics
Diagnostic analytics addresses "why things happened." Common diagnostic analytic techniques/insights include drill-down, data discovery, data mining, and correlations. Companies use this data to identify patterns of behavior and make deep connections within the data they have collected. In order to be effective, diagnostic data must be detailed and accurate.
Predictive Analytics
Businesses use predictive analytics to "see the future" and predict "what is likely to happen." Existing data, modeling techniques, and statistical modeling are leveraged to generate predictions about performance and future outcomes. Predictive models are especially useful for marketing and insurance companies, which need to make decisions based on what could be coming up. Common processes in predictive analytics include decision trees, neural networks, and regression models. Compared to descriptive and diagnostic analytics, which are fairly common in most businesses, predictive analytics is more intensive, and many companies are not leveraging this type of analytics yet.
Prescriptive Analytics
Prescriptive analytics, analytics driven by AI (Artificial Intelligence) systems, helps companies make decisions and determine "what they should do next." This is the most in-demand type of analytics today; however, it is talent- and resource-expensive: few companies have the skilled employees and resources to conduct it. This type of analytics is on the leading edge of the analytical landscape and requires sufficient investment and commitment across the entire organization that wishes to perform it. Big data players like Apple, Netflix, and Facebook are currently conducting prescriptive analytics successfully. AI itself falls within the category of prescriptive analytics. It requires tremendous and continuously updated data to help it learn, refine its decisions, and then communicate and act on these decisions in a business setting.
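A toy sketch of the four types in Python (pandas and scikit-learn assumed; the revenue figures are invented):

import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({"year": [2020, 2021, 2022, 2023],
                   "revenue": [1.0, 1.4, 1.1, 1.6]})   # invented figures

# Descriptive: what happened? (summary of historical revenue)
print(df["revenue"].describe())

# Diagnostic: why did it happen? (drill down into year-over-year change)
df["yoy_change"] = df["revenue"].diff()
print(df)

# Predictive: what is likely to happen? (regression forecast for 2024)
model = LinearRegression().fit(df[["year"]], df["revenue"])
forecast = model.predict(pd.DataFrame({"year": [2024]}))[0]
print("2024 forecast:", round(forecast, 2))

# Prescriptive: what should we do next? (a simple decision rule)
print("Increase inventory" if forecast > df["revenue"].iloc[-1] else "Hold inventory")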
ii) Explain Model Validation and Evaluation. (7)
Model Validation (4)
 Model validation is defined within regulatory guidance as "the set of processes and activities intended to verify that models are performing as expected, in line with their design objectives, and business uses." It also identifies "potential limitations and assumptions, and assesses their possible impact."
 In statistics, model validation is the task of confirming that the outputs of a statistical model are acceptable with respect to the real data-generating process. In other words, model validation is the task of confirming that the outputs of a statistical model have enough fidelity to the outputs of the data-generating process that the objectives of the investigation can be achieved.
The Four Elements
Model validation consists of four crucial elements which should be considered:
1. Conceptual Design
The foundation of any model validation is its conceptual design, which needs documented coverage assessment that supports the model's ability to meet business and regulatory needs and the unique risks facing a bank.
The design and capabilities of a model can have a profound effect on the overall effectiveness of a bank's ability to identify and respond to risks. For example, a poorly designed risk assessment model may result in a bank establishing relationships with clients that present a risk that is greater than its risk appetite, thus exposing the bank to regulatory scrutiny and reputation damage.
A validation should independently challenge the underlying conceptual design and ensure that documentation is appropriate to support the model's logic and the model's ability to achieve the desired regulatory and business outcomes for which it is designed.
2. System Validation
All technology and automated systems implemented to support models have limitations. An effective validation includes: firstly, evaluating the processes used to integrate the model's conceptual design and functionality into the organisation's business setting; and, secondly, examining the processes implemented to execute the model's overall design. Where gaps or limitations are observed, controls should be evaluated to enable the model to function effectively.
3. Data Validation and Quality Assessment
Data errors or irregularities impair results and might lead to an organisation's failure to identify and respond to risks. Best practice indicates that institutions should apply a risk-based data validation, which enables the reviewer to consider risks unique to the organisation and the model.
To establish a robust framework for data validation, guidance indicates that the accuracy of source data be assessed. This is a vital step because data can be derived from a variety of sources, some of which might lack controls on data integrity, so the data might be incomplete or inaccurate.
4. Process Validation
To verify that a model is operating effectively, it is important to prove that the established processes for the model's ongoing administration, including governance policies and procedures, support the model's sustainability. A review of the processes also determines whether the models are producing output that is accurate, managed effectively, and subject to the appropriate controls.
If done effectively, model validation will enable your bank to have every confidence in its various models' accuracy, as well as aligning them with the bank's business and regulatory expectations. By failing to validate models, banks increase the risk of regulatory criticism, fines, and penalties.
The complex and resource-intensive nature of validation makes it necessary to dedicate sufficient resources to it. An independent validation team well versed in data management, technology, and relevant financial products or services (for example, credit, capital management, insurance, or financial crime compliance) is vital for success. Where shortfalls in the validation process are identified, timely remedial actions should be taken to close the gaps.
Model Evaluation (3)
 Model Evaluation is an integral part of the model development process. It helps to find the best model that represents our data and how well the chosen model will work in the future. Evaluating model performance with the data used for training is not acceptable in data science because it can easily generate overoptimistic and overfitted models. There are two methods of evaluating models in data science, Hold-Out and Cross-Validation. To avoid overfitting, both methods use a test set (not seen by the model) to evaluate model performance.
 Hold-Out: In this method, the (usually large) dataset is randomly divided into three subsets:
1. Training set: a subset of the dataset used to build predictive models.
2. Validation set: a subset of the dataset used to assess the performance of the model built in the training phase. It provides a test platform for fine-tuning the model's parameters and selecting the best-performing model. Not all modelling algorithms need a validation set.
3. Test set (or unseen examples): a subset of the dataset used to assess the likely future performance of a model. If a model fits the training set much better than it fits the test set, overfitting is probably the cause.
 Cross-Validation: When only a limited amount of data is available, to achieve an unbiased estimate of the model performance we use k-fold cross-validation. In k-fold cross-validation, we divide the data into k subsets of equal size. We build models k times, each time leaving out one of the subsets from training and using it as the test set. If k equals the sample size, this is called "leave-one-out".
Model evaluation can be divided into two sections:
 Classification Evaluation
 Regression Evaluation
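A minimal sketch of both evaluation methods in Python (scikit-learn assumed; the synthetic dataset is illustrative only):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

X, y = make_classification(n_samples=200, n_features=5, random_state=0)  # synthetic data

# Hold-Out: split into training and test sets; the model never sees the test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("hold-out accuracy:", model.score(X_test, y_test))

# k-fold Cross-Validation: k=5 models, each leaving one fold out as the test set
scores = cross_val_score(LogisticRegression(), X, y, cv=5)
print("5-fold CV accuracy:", scores.mean())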

7.a Illustrate with a diagram the stages in Decision Making. (CO3, An) (13)

Step 1: Identify the decision
You realize that you need to make a decision. Try to clearly define the nature of the decision you must make. This first step is very important.
Step 2: Gather relevant information
Collect some pertinent information before you make your decision: what information is needed, the best sources of information, and how to get it. This step involves both internal and external "work." Some information is internal: you'll seek it through a process of self-assessment. Other information is external: you'll find it online, in books, from other people, and from other sources.

Step 3: Identify the alternatives


As you collect information, you will probably identify several possible paths of action,
or alternatives. You can also use your imagination and additional information to
construct new alternatives. In this step, you will list all possible and desirable
alternatives.

Step 4: Weigh the evidence


Draw on your information and emotions to imagine what it would be like if you carried out each of the alternatives to the end. Evaluate whether the need identified in Step 1 would be met or resolved through the use of each alternative. As you go through this difficult internal process, you'll begin to favor certain alternatives: those that seem to have a higher potential for reaching your goal. Finally, place the alternatives in a priority order, based upon your own value system.

Step 5: Choose among alternatives


Once you have weighed all the evidence, you are ready to select the alternative that seems to be the best one for you. You may even choose a combination of alternatives. Your choice in Step 5 may very likely be the same as or similar to the alternative you placed at the top of your list at the end of Step 4.

Step 6: Take action


You're now ready to take some positive action by beginning to implement the alternative you chose in Step 5.

Step 7: Review your decision & its consequences


In this final step, consider the results of your decision and evaluate whether or not it has
resolved the need you identified in Step 1. If the decision has not met the identified
need, you may want to repeat certain steps of the process to make a new decision. For
example, you might want to gather more detailed or somewhat different information or
explore additional alternatives.
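Step 4 (weighing the evidence) can be made concrete with a weighted scoring matrix; a small illustrative sketch in Python follows (the alternatives, criteria, and weights are all invented):

# Weighted decision matrix: score each alternative against weighted criteria
weights = {"cost": 0.5, "speed": 0.3, "risk": 0.2}          # invented weights

alternatives = {                                             # invented scores (1-10)
    "Build in-house": {"cost": 4, "speed": 3, "risk": 7},
    "Buy off-the-shelf": {"cost": 7, "speed": 9, "risk": 6},
    "Outsource": {"cost": 6, "speed": 7, "risk": 4},
}

# Priority order = alternatives ranked by weighted total (Step 4 leading into Step 5)
ranked = sorted(alternatives.items(),
                key=lambda kv: sum(weights[c] * s for c, s in kv[1].items()),
                reverse=True)
for name, scores in ranked:
    total = sum(weights[c] * s for c, s in scores.items())
    print(f"{name}: {total:.1f}")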
(OR)
7.b Explain OLAP and OLTP. (CO3, R) (13)

Key differences: OLAP vs. OLTP
The primary purpose of online analytical processing (OLAP) is to analyze aggregated data, while the primary purpose of online transaction processing (OLTP) is to process database transactions.
You use OLAP systems to generate reports, perform complex data analysis, and identify trends. In contrast, you use OLTP systems to process orders, update inventory, and manage customer accounts.
Other major differences include data formatting, data architecture, performance, and requirements. We'll also discuss an example of when an organization might use OLAP or OLTP.
Data formatting
OLAP systems use multidimensional data models, so you can view the same data from
different angles. OLAP databases store data in a cube format, where each dimension
represents a different data attribute. Each cell in the cube represents a value or measure
for the intersection of the dimensions.
In contrast, OLTP systems are unidimensional and focus on one data aspect. They use a
relational database to organize data into tables. Each row in the table represents an
entity instance, and each column represents an entity attribute.
Data architecture

OLAP database architecture prioritizes data read over data write operations. You can
quickly and efficiently perform complex queries on large volumes of data. Availability
is a low-priority concern as the primary use case is analytics.
On the other hand, OLTP database architecture prioritizes data write operations. It's
optimized for write-heavy workloads and can update high-frequency, high-volume
transactional data without compromising data integrity.
For instance, if two customers purchase the same item at the same time, the OLTP
system can adjust stock levels accurately. And the system will prioritize the
chronological first customer if the item is the last one in stock. Availability is a high
priority and is typically achieved through multiple data backups.
Performance
OLAP processing times can vary from minutes to hours depending on the type and volume of data being analyzed. To update an OLAP database, you periodically process data in large batches and then upload the batch to the system all at once. Data update frequency also varies between systems, from daily to weekly or even monthly.
In contrast, you measure OLTP processing times in milliseconds or less. OLTP
databases manage database updates in real time. Updates are fast, short, and triggered
by you or your users. Stream processing is often used over batch processing.
Requirements
OLAP systems act like a centralized data store and pull in data from multiple data
warehouses, relational databases, and other systems. Storage requirements measure
from terabytes (TB) to petabytes (PB). Data reads can also be compute-intensive,
requiring high-performing servers.
On the other hand, you can measure OLTP storage requirements in gigabytes (GB).
OLTP databases may also be cleared once the data is loaded into a related OLAP data
warehouse or data lake. However, compute requirements for OLTP are also high.
Example of OLAP vs. OLTP
Let's consider a large retail company that operates hundreds of stores across the
country. The company has a massive database that tracks sales, inventory, customer
data, and other key metrics.
The company uses OLTP to process transactions in real time, update inventory levels,
and manage customer accounts. Each store is connected to the central database, which
updates the inventory levels in real time as products are sold. The company also uses
OLTP to manage customer accounts—for example, to track loyalty points, manage
payment information, and process returns.
In addition, the company uses OLAP to analyze the data collected by OLTP. The company's business analysts can use OLAP to generate reports on sales trends, inventory levels, customer demographics, and other key metrics. They perform complex queries on large volumes of historical data to identify patterns and trends that can inform business decisions. They identify popular products in a given time period and use the information to optimize inventory budgets.
When to use OLAP vs. OLTP
Online analytical processing (OLAP) and online transaction processing (OLTP) are two
different data processing systems designed for different purposes. OLAP is optimized
for complex data analysis and reporting, while OLTP is optimized for transactional
processing and real-time updates.
Understanding the differences between these systems can help you make informed
decisions about which system meets your needs better. In many cases, a combination of
both OLAP and OLTP systems may be the best solution for businesses that require
both transaction processing and data analysis. Ultimately, choosing the right system
depends on the specific needs of your business, including data volume, query
complexity, response time, scalability, and cost.

Summary of differences: OLAP vs. OLTP

Purpose: OLAP helps you analyze large volumes of data to support decision-making. OLTP helps you manage and process real-time transactions.
Data source: OLAP uses historical and aggregated data from multiple sources. OLTP uses real-time and transactional data from a single source.
Data structure: OLAP uses multidimensional (cubes) or relational databases. OLTP uses relational databases.
Data model: OLAP uses star schema, snowflake schema, or other analytical models. OLTP uses normalized or denormalized models.
Volume of data: OLAP has large storage requirements: think terabytes (TB) and petabytes (PB). OLTP has comparatively smaller storage requirements: think gigabytes (GB).
Response time: OLAP has longer response times, typically in seconds or minutes. OLTP has shorter response times, typically in milliseconds.
Example applications: OLAP is good for analyzing trends, predicting customer behavior, and identifying profitability. OLTP is good for processing payments, customer data management, and order processing.
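A small sketch of the two workloads using Python's built-in sqlite3 module (the sales table and figures are invented; a real OLAP system would run on a dedicated warehouse rather than SQLite):

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (store TEXT, product TEXT, qty INTEGER, price REAL)")

# OLTP-style workload: short, frequent, real-time transactional writes
with conn:  # each purchase committed as a transaction
    cur.execute("INSERT INTO sales VALUES (?, ?, ?, ?)", ("Store-1", "Widget", 2, 9.99))
    cur.execute("INSERT INTO sales VALUES (?, ?, ?, ?)", ("Store-2", "Widget", 1, 9.99))
    cur.execute("INSERT INTO sales VALUES (?, ?, ?, ?)", ("Store-1", "Gadget", 5, 4.50))

# OLAP-style workload: read-heavy aggregate query over accumulated history
for row in cur.execute("""SELECT store, SUM(qty * price) AS revenue
                          FROM sales GROUP BY store ORDER BY revenue DESC"""):
    print(row)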
PART C (1*14 = 14 Marks)
8.a Discuss how data analytics techniques are used to analyse financial market data. (CO1 & CO2, An) (14)
Data analytics in finance is growing in importance. Globally, an increasing number of
businesses are using data analytics to improve internal operations. They also rely on
data analytics to help them understand their customers on a deeper level. This allows
organizational leaders to make informed decisions that promote better business
outcomes.
Wondering, "What is data analytics?" In short, data analytics is a practice that helps professionals make sense of raw data for the betterment of an organization.
"The chief aim of data analytics is to apply statistical analysis and technologies on data to find trends and solve problems," said Thor Olavsrud, contributor to CIO.
Data analysis can benefit organizations across all industries. This is especially true of
financial institutions, which often have a sea of raw data to sift through. Used correctly,
financial data, such as purchasing behavior, along with credit card data, can be
invaluable to these companies.
The Importance of Data Analytics in Finance
Is data analytics useful for finance? Absolutely. Data analysis is part of finance at this
point. No financial organization can afford not to make use of data analysis.
The coronavirus pandemic has caused a tremendous amount of uncertainty in the
finance sector.
"The lighthouse in this uncertainty is the ability to use advanced data analytics to better manage financials," said Bassem Hamdy, author of The Importance of Data Analytics in Finance. "When a company is able to masterfully forecast cash flow and execute on its strategic financial visions, it is empowered to serve its market and clients for decades to come."
Hamdy also explained that implementing a financial strategy starts with possessing an understanding of the true financial position of a company. This entails having the ability to answer questions using operational and financial data, not gut instinct alone. This is where a financial data analytics professional comes in.
Still curious about why data analysis is crucial to the success of financial
institutions? Check out Why Is Data Analytics Important?
How Data Analytics Is Revolutionizing the Finance Industry
Data analytics is revolutionizing the finance industry. One way it is accomplishing this is by reducing the component of human error from daily financial transactions. Here is why data analytics has transformed the finance sector:

 Data analytics enables finance executives to turn structured or unstructured data into insights that promote better decision making.
 Data analytics helps finance teams gather the information needed to gain a clear view of key performance indicators (KPIs). Examples include revenue generated, net income, payroll costs, etc.
 Data analytics allows finance teams to scrutinize and comprehend vital metrics, and detect fraud in revenue turnover. This is helpful since financial services experienced a huge increase in digital fraud activity in 2020.
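A minimal illustration of the KPI and fraud-screening points above in Python (pandas assumed; the transaction data and the z-score threshold of 2 are invented):

import pandas as pd

tx = pd.DataFrame({                      # invented transaction data
    "customer": ["A", "A", "B", "B", "B", "C"],
    "amount":   [120.0, 95.0, 40.0, 38.0, 2500.0, 60.0],
})

# KPI view: revenue generated per customer
print(tx.groupby("customer")["amount"].sum().rename("revenue"))

# Simple fraud screen: flag transactions far outside the typical amount
z = (tx["amount"] - tx["amount"].mean()) / tx["amount"].std()
print(tx[z.abs() > 2])                   # candidates for manual review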
Additionally, big data has improved the way stock markets work and has upgraded
investment-related decision making.
What Does a Finance Data Analyst Do?
Finance data analysts are professionals who help financial institutions utilize data to
make high-quality business decisions. One of their primary roles is examining financial
records. They do this for the purpose of preparing in-depth reports for a financial
organization.
Finance data analysts often are knowledgeable of and proficient in skills related to the
following topics:

 Data mining
 Financial analytics
 Understanding business models
 Financial forecasting
 Creating financial models
 Risk management
 Big data analytics
 Advanced analytics
 Data management
 Predictive analytics
 Microsoft Excel
 Algorithms and algorithmic trading
 Python
 Automation
 Data science
 Business intelligence
 Machine learning
 Artificial intelligence
 Real-time data flows
Financial analysts often work with key organizational leaders, such as chief financial
officers (CFOs). They help these professionals ensure the company makes sense of its
raw data and benefits from it.
The best candidates for a finance data analyst role are often junior analysts who support business functions such as marketing, finance, or operations. These individuals are usually asked to work closely with data to interpret and communicate what they find in it.
By earning one or more of the best data analytics certifications, these professionals can
prepare for a career in financial analysis.
The Future Role of Data Analytics in the Finance Industry
The future role of data analytics in finance is secure as data analysis is critical to the
success of financial institutions. After all, as the finance sector continues to digitize,
there will be more raw data for organizational leaders to interpret. Data analytics will
help them make use of the data.
Amazingly, just 0.5% of businesses make use of their data, according to Data and
Analytics in Financial Services. Those who practice financial data analysis can help
organizations make the most of the data they collect. You can get into data analytics in
finance with CompTIA Data+ training and certification.
To get your foot in the door to data analytics in finance, you'll need specialized skills.
CompTIA Data+ certification training provides the skills you need to work in finance
data analysis.
CompTIA Data+, which will be available in Q1 of 2022, offers a full training suite of
Official CompTIA CertMaster products. These products include:

 CertMaster Learn: CertMaster Learn provides comprehensive eLearning that prepares you for the CompTIA Data+ certification exam.
 CertMaster Labs: CertMaster Labs provides hands-on experience in real virtual environments.
 CertMaster Practice: CertMaster Practice is an online knowledge assessment and certification exam practice and preparation companion tool.
Once you receive training for the CompTIA Data+ certification, it will be time to take
the certification exam. Some of the topics and skills the certification exam covers
include:

 Mining data
 Manipulating data
 Applying basic statistical methods
 Analyzing complex data sets
(OR)
8.b Discuss how data mining techniques are used to analyse customer perception towards online shopping. (CO2 & CO3, An) (14)
E-commerce "basically stands for electronic commerce, which relates to a website that sells products or services directly from the site with the help of a shopping cart or shopping basket system, where payments can be made through cards, e-banking and cash on delivery. It helps customers to buy anything from a pen to an insurance policy from the comfort of their home or office and gift it to someone sitting miles apart, just by a click of the mouse. It offers various benefits to businesses, for example, easy reach to a fast-growing online community, unlimited shelf space for products and services, and merging global markets at low operating costs. Ease of access to the internet is the major factor in the rapid adoption of e-commerce. For the popularization of e-commerce in India, the essential factors are safe and secure payment modes. Even though there are various benefits in shopping online, just like every coin has two sides, there exist various reasons for not shopping online, for example lack of trust, security concerns, uncertainty about product and service quality, delay or non-delivery of goods, and lack of a touch-and-feel shopping experience." Mobile Commerce (M-commerce) is the subset of electronic commerce which includes all e-commerce transactions carried out using a mobile device. Basically, M-commerce is the way of doing business in a state of motion. "M-commerce depends on the availability of mobile connectivity. M-commerce offers multiple advantages like ubiquity, personalization, flexibility, distribution, instant connectivity and immediacy.
There are many ways in which businesses, government and people benefit from m-commerce, such as:
 Selling a product or service which is information based (delivered directly to mobile devices) or location based.
 Improving productivity by gathering time-critical information (reports, photographs) and SMS-based up-to-date information.
 The ability to access information on mobile at an affordable cost can change people's lives and livelihoods in rural areas (for example, the latest weather report or health services). It can be used as a medium to educate and create awareness among rural people. Usage of the Internet on mobile devices has led to information access overcoming geographical barriers and has removed the training cost of mobile technology.
Customer behavior analytics is based on consumer buying behavior, with the customer playing the roles of user, payer and buyer. The concern of many organizations is no longer the individual buyer but rather collective or organizational buying behavior, which helps in determining which customers are worth developing and managing by putting unique strategies in place in order to attract specific customers. Through analysis of customers' behavior, accurate profiles are generated by specifying needs and interests, allowing business to give customers what they want, when they want it, leading to better customer satisfaction and thereby keeping them coming back for more."
Consumer behavior includes the study of individuals, groups or organizations and their process of selecting, securing, using and disposing of products, services, experiences or ideas to satisfy needs, and the impact of these processes on the consumer and society.
While large-scale information technology has been evolving separate transaction and analytical systems, data mining provides the link between the two. Data mining software analyzes relationships and patterns in stored transaction data based on open-ended user queries. Data mining is the semi-automatic discovery of patterns, associations, changes, anomalies, and statistically significant structures and events in data.
Traditional data analysis is assumption driven in the sense that a hypothesis is formed and validated against the data. Data mining, in contrast, is data driven in the sense that patterns are automatically extracted from data.
Various studies on consumer purchasing behavior have been presented and used on real problems. Data mining techniques are expected to be more effective tools for analyzing consumer behavior. Data mining has quickly emerged as a highly desirable tool for using current reporting capabilities to uncover and understand hidden patterns in vast databases; these patterns are then used in models that predict individual behavior with high accuracy.
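As one possible illustration, customer perception data could be mined with clustering; a minimal sketch in Python (scikit-learn assumed; the survey features and scores are invented):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Invented survey data: [trust_score, perceived_convenience, price_sensitivity]
X = np.array([[4, 5, 2], [5, 4, 1], [2, 2, 5],
              [1, 3, 4], [5, 5, 2], [2, 1, 5]], dtype=float)

# Standardize features, then cluster shoppers into perception segments
X_std = StandardScaler().fit_transform(X)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_std)

# Each cluster is a candidate customer segment (e.g., trusting vs. skeptical shoppers)
print("segment labels:", kmeans.labels_)
print("segment centers (standardized):", kmeans.cluster_centers_.round(2))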

Prepared By Verified By -HOD Verified By -COE Approved By


DEPARTMENT OF INFORMATION TECHNOLOGY
INTERNAL ASSESSMENT TEST I - SET 2 - ANSWER KEY
COURSE CODE/SUBJECT CODE/NAME: C312/CCW331/Business Analytics DATE: 2.3.24
BRANCH / SEMESTER: IT / VI TIME: 9.45 AM to 11.15 AM
ACADEMIC YEAR: 2023 - 2024 MARKS: 50
CO1 Acquire the knowledge for the Analytics Life Cycle
CO2 Understand real-world business problems and models with analytical solutions
CO3 Identify the business processes for extracting Business Intelligence
CO4 Learn predictive analytics for business forecasting
CO5 Apply analytics for supply chain and logistics management
CO6 Use analytics for marketing and sales
BLOOM'S TAXONOMY
Remembering Applying Evaluating
Understanding Analyzing Creating

PART A (5 x 2 = 10 Marks)
1. What are the types of Data Analytics? (CO1, U) (2)
1) Descriptive Analytics: Describing or summarizing the existing data using existing business intelligence tools to better understand what is going on or what has happened.
2) Diagnostic Analytics: Focuses on past performance to determine what happened and why. The result of the analysis is often an analytic dashboard.
3) Predictive Analytics: Emphasizes predicting the possible outcome using statistical models and machine learning techniques.
4) Prescriptive Analytics: A type of predictive analytics that is used to recommend one or more courses of action based on analyzing the data.
2. Define Interpretation. (CO1, R) (2)
Interpretation is the act of explaining, reframing, or otherwise showing your own understanding of something. A person who translates one language into another is called an interpreter because they are explaining what a person is saying to someone who doesn't understand.
3. What are the steps to implement a Data Warehouse? (CO3, U) (2)
The steps in data warehouse implementation include the following:
1. Gather Information.
2. Ideate the Data Warehouse Solution.
3. Create a Project Roadmap.
4. Design the DWH architecture.
5. Develop & Test.
6. Deploy the DWH solution.
7. Provide Post-deployment Support.
4. What are the four kinds of Decision Support Systems? (CO3, U) (2)
DSS can be classified: 1. As per level of management: Operational, Middle and Top Management. 2. As per type: Data-driven DSS, Model-driven DSS, etc. Model-driven: these systems use mathematical models to analyze data and aid in decision-making. Data-driven: these focus on accessing and manipulating large volumes of data.
5. Differentiate OLAP and OLTP. (CO3, An) (2)
The main difference between OLAP and OLTP is in the name: OLAP is analytical in nature, and OLTP is transactional. OLAP tools are designed for multidimensional analysis of data in a data warehouse, which contains both transactional and historical data. OLTP is designed to support transaction-oriented applications by processing recent transactions as quickly and accurately as possible.

PART B (2*13 = 26 Marks)


6.a i) Discuss Deployment and Iteration, and their benefits and challenges. (7) (CO1, R) (13)
Deployment of Advanced Analytics insights includes all operations to generate reports and recommendations for end users, visualisation of key findings, self-service and data discovery functionalities for business users, and finally, depending on the size and scope of the analytical application, implementation of a scoring process or workflows that integrate analytical outputs (in real time or not) with custom, operational and core systems. During deployment, many iterations, enhancements and fine-tuning activities might be necessary to finalise the deployment of the system. Other activities necessary during deployment include Administration, Security and Authorisation, as well as finalising Documentation and Transferring Ownership to business and operations.
Iterative development is a cyclical methodology that promotes constant improvement. It is the nature of business analytics that once one project is complete it often spawns an understanding of new requirements and derivative solutions that start the iterative process once again. There are many reasons why you should use the iterative process in your research and development efforts. It can provide various benefits to you and your organization because it is:
Flexible: One reason to use the iterative process is its flexibility. A major benefit of the model is its ability to allow users to revise and refine their products or processes quickly. This can be especially beneficial if a company is still in the planning phase of product development and doesn't yet have a completed model available. Usually, companies who use the iterative process do so to help them evaluate and improve their current products or processes. They might also use it as a troubleshooting strategy to help them arrive at an effective solution. Regardless of how you choose to implement the process, the iterative approach adapts to many scenarios and business types. It allows for environmental or market changes and can aid your efforts to produce deliverables that match the needs of your customers.
Useful: The iterative process can also be a useful way for development teams to create new strategies and establish successful products. Because every iteration improves on the previous step, it's easy to understand what phase you're in with your product development. The iterative process often starts with a rough prototype that enters a testing phase to give you timely feedback as you work toward a completed project. It can also be useful for producing visible results early on. Each cycle or milestone represents significant improvements and changes that can optimize your timeline management.
Efficient: Some alternative development approaches, like the waterfall approach, rely on established steps to arrive at a desired result. When using these processes, external or internal changes can sometimes disrupt teams' ability to implement improvements quickly and stay on track for timelines and specific requirements. In contrast, the iterative process allows for deviations in the plan and for large changes mid-way through development. This can help companies stay on target and quickly recover as they implement their changes. Typically, iterative processes require an entire team's help as well. This can increase efficiency because the iterative process often encourages dispersed workloads and well-balanced teams.
Cost effective: Another reason companies choose to use the iterative process is its cost effectiveness. Compared to methods like the waterfall approach, the iterative process can accommodate changes to overall requirements and scope at lower costs. Again, this is because the process encourages teams to rethink their existing offerings. Change is both expected and necessary for this approach. Each cycle asks teams to evaluate their product using new feedback and to incorporate necessary changes for the next round. Traditional trial-and-error models can do this somewhat haphazardly. With the iterative process, however, teams plan and strategize their decisions beforehand to make sure they're optimizing their efforts. This can reduce overall development costs in the long term.
Accessible: The iterative process is a useful tool because it is highly accessible. It can encourage collaboration, clear communication and transparency. Because the process highlights inconsistencies and areas where teams can improve a project's design, code or ability to meet client specifications, it's easy to track certain movements and decisions. This feature can help eliminate misunderstandings. Presenting the results of the iterations to clients or stakeholders can also be easier with this approach because they can clearly visualize the product's evolution.
Buildable: The iterative approach allows companies to improve their existing offerings consistently and reliably. Each iteration cycle allows teams to evaluate areas for improvement and to implement the lessons they learned. That means every new iteration is typically better than the last. By improving the development process consistently, teams can create thoughtful products and carefully designed processes that possess guaranteed quality.
Low risk: One last reason many businesses and development teams choose the iterative approach is that it is relatively low risk. Often, teams address the higher-risk aspects of a product first. Gradually, as the process goes on, each iteration becomes more and more refined. This can reduce the risk of major discoveries near the end of the process because teams have had so long to address issues and concerns. The method can allow companies to identify and resolve risks early.
ii) Explain Hypothesis Testing. (6)
A hypothesis statement, or hypothesis, tries to explain why something happened or what may happen under specific conditions. A hypothesis can also help understand how various variables are connected to each other. These are generally compiled as if-then statements; for example, "If something specific were to happen, then a specific condition will come true, and vice versa." Thus, hypothesis testing is an arithmetical method of testing a hypothesis or an assumption that has been stated in the hypothesis.

Turning into a decision-maker who is driven by data can add several advantages to an
organization, such as allowing one to recognize new opportunities to follow and
reducing the number of threats. In analytics, a hypothesis is nothing but an assumption
or a supposition made about a specific population parameter, such as any measurement
or quantity about the population that is set and that can be used as a value to the
distribution variable. General examples of parameters used in hypothesis testing are
variance and mean. In simpler words, hypothesis testing in business analytics is a
method that helps researchers, scientists, or anyone for that matter, test the legitimacy or
the authenticity of their hypotheses or claims about real-life or real-world events.

To understand an example of hypothesis testing in business analytics, consider a restaurant owner interested in learning how adding extra house sauce to their chicken burgers can impact customer satisfaction. Or, you could also consider a social media marketing organization, where a hypothesis test can be set up to explain how an increase in labor impacts productivity. Thus, hypothesis testing aims to discover the connection between two or more variables in the experimental setting.

How Does Hypothesis Testing Work?

Generally, every research study begins with a hypothesis; the investigator makes a certain claim and experiments to prove that the claim is false or true. For example, if you claim that students drinking milk before class accomplish tasks better than those who do not, then this is a kind of hypothesis that can be refuted or confirmed using an experiment.
There are different kinds of hypotheses. They are:
 Simple Hypothesis: A simple hypothesis, also known as a basic hypothesis, proposes that an independent variable is accountable for the corresponding dependent variable. In simpler words, the occurrence of the independent variable results in the existence of the dependent variable. Generally, simple hypotheses are thought of as true, and they create a causal relationship between the two variables. One example of a simple hypothesis is "smoking cigarettes daily leads to cancer."
 Complex Hypothesis: This type of hypothesis is also termed a modal. It holds for the relationship between two independent variables that result in a dependent variable. This means that the amalgamation of independent variables results in the dependent variable. An example of this kind of hypothesis is "adults who don't drink and smoke are less likely to have liver-related problems."
 Null Hypothesis: A null hypothesis is created when a researcher thinks that there is no connection between the variables that are being observed. An example of this kind of hypothesis is "a student's performance is not impacted if they drink tea or coffee before classes."
 Alternative Hypothesis: If a researcher wants to disprove a null hypothesis, then the researcher has to develop an opposite assumption, known as an alternative hypothesis. For example, beginning your day with tea instead of coffee can keep you more alert.
 Logical Hypothesis: A proposed explanation supported by scant data is called a logical hypothesis. Generally, you wish to test your hypotheses or postulations by converting a logical hypothesis into an empirical hypothesis. For example, waking early helps one to have a productive day.
 Empirical Hypothesis: This type of hypothesis is based on real evidence, evidence that is verifiable by observation, as opposed to something that is correct in theory or by some kind of reckoning or logic. This kind of hypothesis depends on various variables that can result in specific outcomes. For example, individuals eating more fish can run faster than those eating meat.
 Statistical Hypothesis: This kind of hypothesis is most common in systematic investigations that involve a huge target audience. For example, in Louisiana, 45% of students have middle-income parents.
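A minimal sketch of a two-sample hypothesis test in Python (SciPy assumed; the productivity figures and the 0.05 significance level are chosen purely for illustration):

from scipy import stats

# Invented productivity scores before and after increasing labor
before = [22, 25, 21, 24, 23, 26, 22]
after  = [27, 29, 26, 30, 28, 27, 31]

# Null hypothesis: the two group means are equal (no effect)
t_stat, p_value = stats.ttest_ind(after, before, equal_var=False)

alpha = 0.05  # chosen significance level
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("Reject null hypothesis" if p_value < alpha else "Fail to reject null hypothesis")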

(OR)
6.b Explain in detail about the Business Analytics Lifecycle with an example. (CO1, U) (13)
6 Steps in the Business Analytics Process
Step 1: Identifying the Problem
The first step of the process is identifying the business problem. The problem could be an actual crisis, or it could be something related to recognizing business needs or optimizing current processes. This is a crucial stage in Business Analytics, as it is important to clearly understand what the expected outcome should be. When the desired outcome is determined, it is further broken down into smaller goals. Then, business stakeholders decide on the relevant data required to solve the problem. Some important questions must be answered in this stage, such as: What kind of data is available? Is there sufficient data? And so on.
Step 2: Exploring Data
Once the problem statement is defined, the next step is to gather data (if required) and, more importantly, cleanse the data. Most organizations would have plenty of data, but not all data points would be accurate or useful. Organizations collect huge amounts of data through different methods, but at times, junk data or empty data points would be present in the dataset. These faulty pieces of data can hamper the analysis. Hence, it is very important to clean the data that has to be analyzed. To do this, you must do computations for the missing data, remove outliers, and find new variables as a combination of other variables. You may also need to plot time series graphs as they generally indicate patterns and outliers. It is very important to remove outliers as they can have a heavy impact on the accuracy of the model that you create. Moreover, cleaning the data helps you get a better sense of the dataset.
Step 3: Analysis
Once the data is ready, the next thing to do is analyze it. To execute this, there are various kinds of statistical methods (such as hypothesis testing, correlation, etc.) involved to find the insights that you are looking for. You can use all of the methods for which you have the data. The prime way of analyzing is pivoting around the target variable, so you need to take into account whatever factors affect the target variable. In addition to that, a lot of assumptions are also considered to find out what the outcomes can be. Generally, at this step, the data is sliced, and comparisons are made. Through these methods, you are looking to get actionable insights.
Step 4: Prediction and Optimization
Gone are the days when analytics was used to react. In today's era, Business Analytics is all about being proactive. In this step, you will use prediction techniques, such as neural networks or decision trees, to model the data. These prediction techniques will help you find hidden insights and relationships between variables, which will further help you uncover patterns on the most important metrics. By principle, a lot of models are used simultaneously, and the models with the most accuracy are chosen. In this stage, a lot of conditions are also checked as parameters, and answers to a lot of 'what if...?' questions are provided.
Step 5: Making a Decision and Evaluating the Outcome
From the insights that you receive from your model built on target variables, a viable plan of action will be established in this step to meet the organization's goals and expectations. The said plan of action is then put to work, and the waiting period begins. You will have to wait to see the actual outcomes of your predictions and find out how successful you were in your endeavors. Once you get the outcomes, you will have to measure and evaluate them.
Step 6: Optimizing and Updating
Post the implementation of the solution, the outcomes are measured as mentioned above. If you find some methods through which the plan of action can be optimized, then those can be implemented. If that is not the case, then you can move on with registering the outcomes of the entire process. This step is crucial for any analytics in the future because you will have an ever-improving database. Through this database, you can get closer and closer to maximum optimization. In this step, it is also important to evaluate the ROI (return on investment). Take a look at the diagram below of the life cycle of business analytics.
7.a Explain the various strategic techniques used in implementing BI. (CO3, U) (13)
Business intelligence offers a myriad of benefits that can drastically transform an organization's strategy and decision-making process:

 Customer Insight: BI provides a detailed understanding of customer behavior, preferences, and trends. This allows organizations to tailor their offerings and communication strategies, meeting customer needs proactively, which, in turn, improves satisfaction and loyalty.
 Operational Efficiency: Through BI, companies can pinpoint inefficiencies and swiftly enact corrective measures. Whether it's supply chain management or internal processes, the visibility offered by BI leads to optimized operations and cost savings.
 Competitive Advantage: Gaining competitive intelligence through a robust BI strategy keeps organizations ahead of the curve. Understanding market trends and competitor actions sparks innovation and growth, ultimately fostering a competitive edge.
 Predictive Analysis: The advent of predictive analytics and AI technologies in BI helps forecast future trends and scenarios. This allows organizations to strategize and adapt, preparing them for potential market shifts.
 Profitability and Resilience: BI serves as a strategic asset, significantly boosting competitiveness and profitability when implemented effectively. It paves the way for not just surviving but thriving in an increasingly dynamic business environment.
Building a Business Intelligence Strategy
Building a business intelligence strategy requires meticulous planning and a deep
understanding of the organization's objectives. Here are the crucial steps to ensure the
success of a BI strategy.
Define Your Objectives
The first step in developing a BI strategy is to clearly define your objectives. Identify
the specific business challenges you want to address and the key metrics you need to
track. Whether it's improving marketing ROI, enhancing customer segmentation, or
optimizing campaign performance, setting clear objectives is crucial.
Data Assessment
Evaluate the existing data infrastructure. Identify what data is currently available, how it
is captured and stored, and whether it serves the organization's needs. Assess the gaps
and determine what additional data needs to be captured or what tools are required to
better utilize this data.
Data Extraction and Transformation
A robust BI strategy requires a streamlined data pipeline. Partnering with an advanced
marketing analytics platform like Improvado ensures seamless data extraction,
transformation, and normalization. This allows you to integrate data from multiple
sources, such as social media, advertising platforms, and CRM systems, into a
centralized and standardized format.
Data Visualization and Analysis
Effective data visualization is the cornerstone of successful business intelligence.
Utilize powerful BI tools like QlikView to create interactive dashboards and reports.
These visual representations enable you to explore data, identify trends, and
communicate insights effectively.
Promote a Data-driven Culture
For a BI strategy to be effective, it's essential to foster a culture that values data-driven
decision making. This involves training employees on the use of BI tools and promoting
the benefits of data-backed decisions.
Implementing Self-Service Analytics
Empower your marketing and analytics teams with self-service analytics capabilities.
Provide access to intuitive BI software that allows users to explore and analyze data
independently. Self-service analytics enhances collaboration, accelerates decision-
making, and reduces dependency on IT resources.
(OR)
7.b Describe in detail the data warehouse architecture and its components with a neat sketch. (CO3, C) (13)
A data warehouse is a heterogeneous collection of different data sources organised under a unified schema. There are two approaches for constructing a data warehouse: the top-down approach and the bottom-up approach, explained below.

The essential components are discussed below:


1. External Sources –
External source is a source from where data is collected irrespective of the type of
data. Data can be structured, semi structured and unstructured as well.
2. Staging Area –
Since the data extracted from the external sources does not follow a particular
format, it needs to be validated before being loaded into the data warehouse. For
this purpose, it is recommended to use an ETL tool (a minimal sketch follows this
list of steps):
1. E (Extract): data is extracted from the external data sources.
2. T (Transform): data is transformed into the standard format.
3. L (Load): data is loaded into the data warehouse after being transformed into
the standard format.
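A minimal sketch of these three steps, assuming a hypothetical CSV source file
sales.csv with order_date and amount columns, and using SQLite to stand in for
the warehouse:

import sqlite3
import pandas as pd

# Extract: read raw data from a hypothetical external source.
raw = pd.read_csv("sales.csv")

# Transform: validate and normalise into the warehouse's standard format.
raw["order_date"] = pd.to_datetime(raw["order_date"]).dt.strftime("%Y-%m-%d")
clean = raw.rename(columns={"amount": "amount_usd"}).dropna()

# Load: write the transformed rows into the central repository
# (an SQLite file stands in for the data warehouse here).
with sqlite3.connect("warehouse.db") as conn:
    clean.to_sql("fact_sales", conn, if_exists="append", index=False)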
3. Data Warehouse –
After cleansing, the data is stored in the data warehouse as the central repository.
It actually stores the metadata, while the actual data is stored in the data marts.
Note that in this top-down approach the data warehouse stores the data in its purest
form.
4. Data Marts –
A data mart is also part of the storage component. It stores the information of a
particular function of an organisation, handled by a single authority. There can be
as many data marts in an organisation as there are functions. We can also say that a
data mart contains a subset of the data stored in the data warehouse (a minimal
sketch follows).
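Continuing the SQLite sketch above, a hypothetical sales data mart can be exposed
as a function-specific subset of the central fact table:

import sqlite3

with sqlite3.connect("warehouse.db") as conn:
    # A data mart holds only the slice of the warehouse one function needs;
    # here, a view restricted to the columns the sales team uses.
    conn.execute(
        "CREATE VIEW IF NOT EXISTS mart_sales AS "
        "SELECT order_date, amount_usd FROM fact_sales"
    )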
5. Data Mining –
The practice of analysing the big data present in the data warehouse is called data
mining. It is used to find the hidden patterns present in the database or data
warehouse with the help of data mining algorithms (a minimal sketch follows).
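As one minimal illustration of such an algorithm, k-means clustering can group
customers by spend and purchase frequency so that previously hidden segments
emerge; the numbers are hypothetical:

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customer features pulled from the warehouse:
# [annual_spend, purchases_per_year]
X = np.array([[5200, 48], [300, 4], [4900, 52], [250, 3], [2600, 24]])

# Cluster the customers; the resulting segments are the "hidden patterns".
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(model.labels_)  # cluster id assigned to each customer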
This approach is defined by Inmon as follows: the data warehouse is a central
repository for the complete organisation, and data marts are created from it only
after the complete data warehouse has been created.
Examples of end-user access tools include:
1. Reporting and Query Tools
2. Application Development Tools
3. Executive Information Systems Tools
4. Online Analytical Processing (OLAP) Tools
5. Data Mining Tools
PART C (1*14 = 14 Marks)
Describe the legal and ethical issues involved in BI on Social Media.
CO2&CO3 An 8.a (14)
Consider a manager of a BI system who chooses to use cheaper data in his or her data
mining activities to save money. The data chosen involves personal credit score
reports, and the cheaper data sets have a 20% possibility of being incorrect. The
manager did not see it as an unethical decision when it was made, just a way to
continue generating close-to-accurate reports and save money.
The decision may impact 20% of the company's customers in various ways, as more
people are turned down for credit because of inaccurate reports. It is not a crime
to have implemented the inaccurate data sets, but it may seem an unethical practice
to others.
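A back-of-the-envelope calculation makes the scale of the risk concrete; the
customer volume below is a hypothetical figure, and only the 20% error rate comes
from the scenario:

# Expected number of customers harmed by 20%-inaccurate credit data.
customers_scored = 50_000  # hypothetical volume of credit decisions per year
error_rate = 0.20          # stated chance a cheap record is incorrect

print(customers_scored * error_rate)  # 10000.0 potentially wrongful decisions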
While it is important for managers to be able to make their own decisions, a
decision like this one should have involved more managers, since it affected the
whole business.
The manager's choice could bankrupt the company as users start to leave the business
for more accurate competitors. As the example points out, sometimes there is no
really clear answer as to whether an issue involves an ethical or a legal choice,
and each situation can be different. Trying to make decisions based on individuals'
beliefs when dealing with a company can lead to an intellectual stalemate, and
trying to come to a decision can be expensive and time-consuming.
Business Globalization
Today's society has come to the point where there are more solutions to problems
than ever before. What once was impossible can now be accomplished through the use
of BI and similar technologies. It is not going to stop; technology is going to keep
advancing, and what seems improbable now may be common in the near future.
Because of business globalization, there is also a larger separation between
companies and customers, and between companies and competitors, than there was when
everything was done locally in the past. This larger separation between companies
and the consumer has resulted in unethical and sometimes illegal business decisions,
such as data theft. Because of all the technology used in big business, and the
resulting exposure of unethical practices at some of the larger corporations such as
Enron, there is growing pressure on large companies to be free of unethical
practices.
Additionally, the general trust level of users has eroded to the point where trust
really has to be earned. Users are very aware of cases of identity information being
lost to theft, as well as other examples reported in the media. Users have adopted a
"show me or prove to me" attitude: companies must demonstrate that users and their
information are safe, or users will not do business with them.
IT Personnel in Ethics
It is easy for BI managers to sit behind their desks and manage the data on a
day-to-day basis thinking that ethical practices do not concern them. That is not
the correct attitude to have. Everyone employed in the information technology field
has an obligation to be part of company ethical policies and practices. It is not
just about creating schemas and data models; as IT managers, they have more of an
ethical decision to make than their employers.
The BI manager knows the most about emerging technology and has the best knowledge
of what the company's technological capabilities make possible, given all the work
that is done in an information system and everything that is involved in information
delivery and its business ethical dilemmas.
Ethical Issues in BI
While many ethical issues are obscure and hard to notice at the surface, there is
one number-one concern brought up by most users: according to Hackathorn (2005), the
best-known ethical issue in BI is the involuntary release of personal information,
which has led to identity theft.
The theft of personal information such as social security numbers, birth dates, and
credit card numbers has allowed technology-skilled criminals to walk away with
billions of dollars of innocent victims' money nationally.
(OR)
Discuss how data analytics techniques are used to analyse healthcare.
CO1&CO2 An 8.b (14)
Data analytics in health care is vital. It helps health care organizations to
evaluate and develop practitioners, detect anomalies in scans, and predict outbreaks
of illness, per the Harvard Business School. Data analytics can also lower costs for
health care organizations and boost business intelligence. The best way to discover
how data is used in health care is to look at real-life examples of big data
analytics in healthcare.
Preventative care is essential for health care systems and patients. It can help
prevent future illness and patient readmissions to health systems. It can also
promote better patient outcomes and lower health insurance costs. This is especially
true of high-risk patients and those with chronic diseases. Cancer screenings,
well-child visits, and counseling on smoking cessation are all examples of
preventative care. By identifying risk factors that could otherwise go unnoticed,
health care analytics can be used to promote better preventative care.
Lisa Miller, contributor to the Vie Healthcare Consulting blog, gives an example of how
health care analytics can promote preventative care through insurance companies.
Miller explains that in 2017, Blue Cross Blue Shield analyzed several years of
pharmacy and insurance data. The data was related to opioid abuse and overdose.
Through the analysis, Blue Cross Blue Shield was able to effectively identify almost
750 risk factors that can predict whether or not someone is at risk of abusing opioids.
"Gathering all of this data was only possible with the help of analytics experts and
the right software solutions," Miller said.
How Data Analytics in Health Care Improves Patient Care
One of the most amazing things about data analytics in health care is that it enables
health systems and clinicians to make better care decisions for patients.
"In health care, decisions often have life-altering outcomes—both for patients and
the population as a whole," said Catherine Cote in the Harvard Business School
article cited above. "The ability to quickly gather and analyze complete, accurate
data enables decision makers to make choices regarding treatment or surgery, predict
the path of large-scale health events and plan long-term."
Data analytics is helpful to health care professionals and organizations. Both
health care providers and health systems need health information and data that make
sense. Without accurate data, they can't make decisions that are in patients' best
interests. Data analytics provides these institutions with the data they need to
make decisions that lead to superior patient care. This not only improves patients'
quality of life, but can also extend their lives.
Health Care Data Analytics Helps with Population Health Management
Health care data analytics not only improves patient care, it also helps with population
health management. Population health management is the process of upgrading clinical
outcomes of a group of people via better care coordination. Improved patient
engagement is also a part of this process.
Health care data analytics can help with population health management by enabling
data scientists to build predictive artificial intelligence (AI) models. These
models enable health care organizations to manage initiatives in the health of
select populations, primarily by identifying the health care system's most
vulnerable patients.
"With these patients identified, organizations can perform outreach and
interventions to maximize the quality of patient care." This is another example of
how data analytics can improve patients' lives and maximize the efficiency of health
systems.
What Is the Future of Data Analytics in Health Care?
Like data analytics in all sectors, there is a solid future for data analytics in health care.
This is particularly true in light of the COVID-19 pandemic.
Data analytics in health care has grown in importance during the pandemic. Hundreds
of thousands of individuals around the world have required health care for treatment of
the coronavirus. Health care organizations have utilized data analytics to manage the
global health crisis and better treat patients.
The need for quality healthcare will remain constant. For this reason, data analytics in
health care will always be relevant, and jobs in this field will remain in demand.
Prepared By Verified By -HOD Verified By -COE Approved By