Decision modeling and analysis

Student’s name

Institutional affiliation

Course name

Instructor’s name

Data analysis is a core part of our day-to-day lives, especially in organizations and companies, and it is therefore highly useful in decision making. We live in a world dominated by technology, which has made it easy for people venturing into various businesses, or founding companies, to collect the vast amounts of information necessary for decision-making. After data is collected, technology also makes it easy to analyze that data and reach sound decisions. As the world advances, the growing volume of data makes manual collection and analysis increasingly tiresome, and software has been developed to perform these tasks with ease. This paper discusses the different forms of data analysis together with the various elements that data analysis entails, and looks into their real-life applications and how they are used to arrive at the best decisions.

There are various elements of data analysis that a student or a professional analyst should understand to arrive at the right decision. They include probability, distributions, uncertainty, sampling, statistical inference, regression analysis, time series, forecasting methods, optimization, and decision tree modeling. All of these are core elements of the data analysis process and are also applied in real life. Any analyst must know these elements well, because no analysis can be done without incorporating them.

Probability

This element can be described as the science of uncertainty: whenever we have one or more events whose occurrence is in doubt, probability is used to estimate how likely they are to occur. Probability involves calculations that yield a figure which helps us predict what will happen and how certain the event is to occur. It is expressed as a value between zero and one, or equivalently as a percentage, assigned to give a prediction of the future. For example, in forecasting whether it is going to rain, we can do the calculations and conclude that it is 20% likely to rain, which is the same as saying it is 80% likely not to rain.
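
The rain example above can be sketched in a few lines of Python using the complement rule, which says the probability of an event not occurring is one minus the probability that it occurs. The 20% figure is the essay's illustrative value, not real data.

```python
# Complement rule: P(no rain) = 1 - P(rain).
# The 0.2 value is hypothetical, echoing the example in the text.
p_rain = 0.2
p_no_rain = 1 - p_rain

print(f"P(rain) = {p_rain:.0%}, P(no rain) = {p_no_rain:.0%}")
```

The same rule applies to any pair of complementary events, which is why probabilities always sum to one across all possible outcomes.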

Since probability quantifies the certainty of events, there are practical examples in real life that rely on this element. Primary examples include modern-day traffic control, computer processors, and telephone exchanges. When mobile phones first came to light, few people owned them; because of their usefulness, probability was used to project a tremendous increase in mobile phone usage, which drove revolutions in the tech industry. Probability also has many further applications in the risk management of thriving businesses.

Distribution

Distribution is another core element in the analysis of data sets. It requires concentration and statistical reasoning to be applied effectively in data analysis. Distributions are divided into empirical and theoretical, based on the variations evident in the presented data, and it is usually straightforward to identify the kind of distribution when given the data. There are many distributions, such as the normal, binomial, and Poisson distributions. Comparing empirical and theoretical distributions, the theoretical one is often preferred, since it is believed to better reflect the applications of real-life events. A related aspect is the probability distribution, where the distribution plays a part in explaining the likelihood of obtaining particular values.
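
As a small sketch of a theoretical distribution at work, the snippet below uses Python's standard-library `statistics.NormalDist` to ask how likely a value from a normal distribution is to fall below some cutoff. The mean of 50 and standard deviation of 10 are hypothetical parameters chosen for illustration, not taken from the text.

```python
from statistics import NormalDist

# A hypothetical normal distribution of exam scores: mean 50, std. dev. 10.
exam_scores = NormalDist(mu=50, sigma=10)

# Probability that a randomly drawn score falls below 60 (one std. dev.
# above the mean), via the cumulative distribution function.
p_below_60 = exam_scores.cdf(60)

print(f"P(score < 60) = {p_below_60:.3f}")
```

The cutoff sits one standard deviation above the mean, so the probability is about 0.84, which matches the familiar shape of the normal curve.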

Distributions are very valuable once understood, and the different types are applied in real life in many aspects; measures such as skewness and kurtosis help a great deal in this sector. One real-life application is the income distribution of a country. For a government to implement policies effectively, it has to understand income distribution, which also helps in the spread and allocation of taxes to be charged. By understanding this element, governments have managed to put in place policies that reduce poverty. The distribution element is also applied effectively in stock markets to put rates in place, and in school performance, where administrations often intend for results to follow a normal distribution. It therefore has a vast range of real-life applications.

Uncertainty

Uncertainty analysis is the quantitative analysis of the output produced from the data subjected to analysis. The quantification process is mainly performed by estimating the statistical quantities we are interested in, such as the median and the mean, and these calculations determine the data outcomes. Various steps are used in performing this element: first, we identify the data to be input; then we establish the parameters to be used in the process and describe the data; finally, we draw the samples and apply statistical methods to complete the required analysis.
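
The steps above can be sketched with the standard `statistics` module: take the input data, then estimate the summary quantities of interest. The sample values are hypothetical, and the standard deviation stands in as a simple measure of the uncertainty around the estimates.

```python
import statistics

# Hypothetical sample data (the "identified input" from the steps above).
sample = [12, 15, 11, 14, 13, 16, 12]

# Estimate the statistical quantities we are interested in.
mean_estimate = statistics.mean(sample)
median_estimate = statistics.median(sample)
spread = statistics.stdev(sample)  # spread quantifies the uncertainty

print(f"mean={mean_estimate:.2f}, median={median_estimate}, stdev={spread:.2f}")
```

A wider spread relative to the mean would signal greater uncertainty in any decision based on this sample.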

The real-life applications of uncertainty are wide-ranging, since it is the process of identifying the possibility of error in the occurrence of events. In a business environment, understanding uncertainty is significant. In the suppliers' situation, business managers can collect data concerning the suppliers and decide which supplier to trust and which not to trust. In the process, the business avoids the risks encountered when selecting the wrong supplier, such as being supplied with inferior products that may tarnish its name. Uncertainty as an element thus plays a significant role in helping organizations and modern companies avert risks and thrive.

Sampling

Sampling is a process in which a certain proportion of the population is selected to represent the whole population. Several methods are used in this process; depending on the type of data analysis to be performed, the element is applied under given principles, such as random sampling or systematic sampling. It draws on other elements, such as probability, for the selection of the needed samples. Sampling designs are either probabilistic or non-probabilistic. Probabilistic sampling is the best since it avoids bias, while non-probabilistic sampling is prone to bias, because most of the sample selection depends on the judgment and ability of the researcher who collects the data. Once the samples are analyzed, the conclusions and decisions made from them are taken to represent the whole population the sample came from.
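
Probabilistic (simple random) sampling can be sketched with the standard `random` module, where every member of the population has an equal chance of selection. The population of 1,000 household IDs and the sample size of 50 are hypothetical.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical population: 1,000 household IDs.
population = list(range(1000))

# Draw 50 households without replacement; each has an equal chance.
sample = random.sample(population, k=50)

print(len(sample), sample[:5])
```

Because selection is driven by chance rather than the researcher's judgment, this design avoids the bias that non-probabilistic sampling is prone to.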

In real life, a nongovernmental organization may want to conduct research in order to give back to society, or to formulate new rules and regulations. The researchers cannot move around to every household to collect the needed information; instead, they will rely on sampling, which helps predict and give information about the whole population. A few samples will be collected, and all the conclusions drawn from them will be taken to apply to the whole population. Hence, the organization will be able to make decisions based on the results found from analyzing the samples.

Statistical inference

Statistical inference is where we use the analyzed data to infer the properties of the underlying probability distribution of the data. An example is hypothesis testing using the underlying data and the calculation of estimates: the data used is generated from a sample of the population, and the conclusions drawn are regarded as characteristics of the whole population. In most cases it is contrasted with descriptive statistics, whose conclusions come from analyzing the whole population. This element is mostly useful in solving problems that are significantly associated with statistical modeling, which makes them easier to analyze. It is sometimes referred to as the propositional mode of data analysis because of the point estimates and interval estimates usually associated with it. Inference models work under various assumptions, such as fully parametric, semi-parametric, and non-parametric assumptions about the distribution of the data, to reach their conclusions.

At its core, statistical inference means drawing conclusions based on sample data, or on the population as a whole. Besides supporting conclusions, it also drives hypothesis testing. For example, suppose we want to compare the performance of students in a class. We use the collected data to set up null and alternative hypotheses regarding the students' performance, then, through the inferential element, subject the data to a hypothesis test to either reject or fail to reject the null hypothesis. As a result, we can decide what new policy to implement in the school system regarding students' performance, for instance when comparing performance by gender.
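
The two-group comparison above can be sketched by computing a two-sample (Welch's) t statistic by hand with the standard `statistics` module. The scores for the two hypothetical groups and the rule-of-thumb cutoff of 2.0 are illustrative, not a full hypothesis-testing procedure with exact p-values.

```python
import statistics

# Hypothetical scores for two groups of students.
group_a = [72, 85, 78, 90, 66, 81]
group_b = [70, 74, 68, 72, 65, 71]

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
n_a, n_b = len(group_a), len(group_b)

# Welch's t statistic: difference in means over its standard error.
t_stat = (mean_a - mean_b) / ((var_a / n_a + var_b / n_b) ** 0.5)
print(f"t = {t_stat:.2f}")

# A large |t| (roughly above 2) suggests rejecting the null hypothesis
# of equal means at conventional significance levels.
reject_null = abs(t_stat) > 2.0
print("reject null hypothesis:", reject_null)
```

Here the difference in group means is large relative to its standard error, so the sketch would lean toward rejecting the null hypothesis of equal performance.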

Regression analysis

Regression analysis is an element of data analysis in which we examine the relationship between variables. It is a quantitative method of analyzing the collected data, relating a dependent variable to an independent variable. From the data, we obtain the variables and estimate the parameters, then generate the linear equation Y = a + bX representing the variables under study and the estimated parameters (Lu et al., 2016). We then perform the linear regression under assumptions such as linearity, no multicollinearity, homoscedasticity, and normally distributed errors to arrive at the desired conclusion. The relationship between the two variables then plays a critical role in arriving at a decision.
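
The equation Y = a + bX can be fitted by ordinary least squares in a few lines. The dosage and response numbers below are hypothetical, echoing the medical example that follows in the text.

```python
# Hypothetical observations: X is drug dosage, Y is the measured response.
xs = [1, 2, 3, 4, 5]
ys = [5, 7, 9, 11, 13]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Least squares: b = covariance(X, Y) / variance(X); a = mean(Y) - b*mean(X).
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
a = mean_y - b * mean_x

print(f"Y = {a:.1f} + {b:.1f}X")  # → Y = 3.0 + 2.0X
```

The data here lie exactly on a line, so the fitted slope and intercept recover it perfectly; with real measurements the line would only approximate the points.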

In hospitals, medics apply regression analysis in various fields. Before administering a medication, they first determine its side effects and understand how it may affect the patient. They first establish the relationship between the drug to be administered and the blood pressure level of the patient who is to take it. Knowing the relationship between the two variables gives them the ability to determine the level of dosage they have to administer. It is therefore evident that regression analysis is fundamental to making the right choices in our daily lives.

Time series

Time series analysis is a statistical technique that deals with data collected over a long period: time series data refers to observations recorded at particular points in time, usually at regular periods or intervals. Related categories of data are cross-sectional data, which is collected at a single point in time, and pooled data, which combines time series and cross-sectional data. There are different concepts associated with this element: dependence, which concerns the relationship between values of a variable at different points in time, and stationarity, which implies that the statistical properties of the data remain constant over time. This analysis method can predict current and future happenings based on past projections and the data at hand.
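
A time series is simply ordered observations, one per period. The sketch below uses hypothetical yearly figures and computes the lag-1 (year-over-year) change, the simplest way to see the dependence between adjacent points that the text mentions.

```python
# Hypothetical yearly observations, keyed by period.
series = {2019: 100, 2020: 108, 2021: 115, 2022: 121, 2023: 130}

years = sorted(series)

# Lag-1 differences: how each year's value depends on the previous year's.
changes = {y: series[y] - series[y - 1] for y in years[1:]}

print(changes)
```

Roughly constant year-over-year changes like these hint at a stationary pattern in the differences, which is what forecasting methods exploit.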

In the current world, time series has many application areas, the main one being associated with a country's economic forecasting. Past patterns are closely observed in order to project future happenings in the country. After comparing these with the real data on the ground, the authorities are entitled to make decisions once the relevant conclusions are drawn. Governments can also formulate policies concerning the country's economic status and hence put in place the proper implementations. In schools, time series enables teachers to track the trend in students' performance at different times and hence predict future performance.

Forecasting methods

Forecasting is one of the best techniques for predicting the future using the trend data we have currently collected. It is divided into various methods: the straight-line method, the moving average, simple linear regression, and multiple linear regression. These are the best-known methods in the field of forecasting. The straight-line and moving-average methods are based on historical data (Jeon et al., 2013), while simple and multiple regression are significant in comparing independent and dependent variables to draw conclusions and make decisions.
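
The moving-average method named above can be sketched in a few lines: the forecast for the next period is the average of the last k observations. The sales figures and the window size of three are hypothetical.

```python
# Hypothetical monthly sales history.
sales = [120, 132, 128, 140, 151, 149, 160]

def moving_average_forecast(history, window=3):
    """Forecast the next value as the mean of the last `window` values."""
    recent = history[-window:]
    return sum(recent) / len(recent)

forecast = moving_average_forecast(sales)
print(f"next-period forecast: {forecast:.1f}")
```

A larger window smooths out short-term noise but reacts more slowly to genuine changes in the trend, which is the main trade-off when choosing it.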

Forecasting can be applied by many businesses and companies to predict future trends in their current market. The primary example is predicting the yearly turnover of a specific company, basing the argument on data collected over more than ten years before the moment of calculation and prediction. Accurate predictions require a variety of data inputs. By being aware of the trends, the company gains the ability to make rational decisions and thrive in a competitive environment.

Optimization

This element of data analysis depicts the essence of collecting all the data we are exposed to and then selecting the best and most useful portion to draw exact conclusions. In analysis, optimization is a complex, goal-oriented extension: the process does not require focusing on one specific goal, but rather considers a range of objectives from which the most useful is selected for use in an organization's decision making.
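
The goal-oriented selection described above can be sketched as scoring each candidate option against an objective and keeping the best one. The plans and their scores are hypothetical.

```python
# Hypothetical candidate options, each scored against an objective
# (higher is better).
candidates = {
    "plan_a": 0.72,
    "plan_b": 0.91,
    "plan_c": 0.64,
}

# Optimization here means selecting the option that maximizes the score.
best = max(candidates, key=candidates.get)

print("selected:", best, "score:", candidates[best])
```

Real optimization problems add constraints and many more candidates, but the core idea of maximizing an objective over a set of options is the same.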

The optimization element is mostly applied in huge organizations, especially multinational ones. Whenever they want to conduct their analysis, they collect a great deal of data, because it comes from all over the world. They therefore apply data optimization, selecting the most critical data and using it for the analysis. This element plays a significant role in relieving organizations of many costs, since the analysis is performed on a small but optimal set of data.

Decision tree modeling

Decision tree modeling is an element used alongside other supervised data analysis methods, such as regression. The tree model uses leaf nodes to represent the various outcomes of the collected data, and it is a simple method aimed at solving analytic problems. The tree yields values, such as the variance at each split, which we use to draw conclusions and make final decisions (Zeng et al., 2019). The model is divided into different groups, namely bagging decision trees and boosting decision trees.
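
A decision tree can be sketched as nested rules: each internal node tests one feature, and each leaf holds a decision. The thresholds and labels below are hypothetical, written by hand rather than learned from data.

```python
def classify(income, credit_score):
    """Walk a tiny hand-written decision tree from root to leaf.

    The feature names, thresholds, and labels are illustrative only.
    """
    if credit_score < 600:       # root node: test the credit score
        return "reject"
    if income < 30_000:          # second split: test income
        return "review"
    return "approve"             # leaf reached: final decision

print(classify(income=45_000, credit_score=700))
print(classify(income=25_000, credit_score=650))
```

A trained tree chooses its splits automatically, by measures such as variance reduction, but the resulting structure reads exactly like this chain of if/else tests.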

Decision tree modeling is usually applied in handling nonlinear data, hence its effectiveness in decision making. Its variables can be of two types, continuous and categorical. In real life, the model can be applied in areas such as engineering to make crucial decisions, and in businesses to formulate the policies needed to run the business and thrive in the current competitive world.

In conclusion, all researchers have to be aware of the above elements of data analysis in order to make rational decisions from the data they have collected; correct conclusions cannot be reached if these elements are not followed. The final issue is how they are to be implemented. Schools and governments are urged to incorporate them into the syllabus in order to develop adequate human resources for the various sectors of the national economy, especially planning.



References

Jeon, H., Wilkening, M., Sridharan, V., Gurumurthi, S., & Loh, G. (2013, March). Architectural vulnerability modeling and analysis of integrated graphics processors. In IEEE 10th Workshop on Silicon Errors in Logic-System Effects (SELSE).

Lu, X., Zhou, M., Ammari, A. C., & Ji, J. (2016). Hybrid Petri nets for modeling and analysis of microgrid systems. IEEE/CAA Journal of Automatica Sinica, 3(4), 349-356.

Zeng, L., Guo, J., Wang, B., Lv, J., & Wang, Q. (2019). Analyzing sustainability of Chinese coal cities using a decision tree modeling approach. Resources Policy, 64, 101501.
