Running Head: Statistical Concepts
Statistical Concepts
Name of the Writer
Name of the Institute
Date
Statistical Concepts
Introduction
Statistics is the study of data. It applies the best available strategies to collect data, uses appropriate methods of analysis, and presents the outcomes accurately. It is pivotal to how scientific discoveries are made, how data-driven decisions are reached, and how forecasting is done, and it permits us to understand a subject much more deeply. Many statistical concepts are used to analyze various types of data. This paper explains five such concepts in detail, with real-life examples and a discussion of why each concept is important to study.
Normal Distribution
The first statistical concept chosen is the normal distribution. It is a probability function that describes how the values of a variable are distributed. It is a symmetrical distribution in which the great majority of observations cluster around a central peak, and the probabilities of values farther from the mean taper off equally in both directions. Extreme values in either tail are correspondingly unlikely. Normal distributions are the most widely used of the probability distributions because they correspond to many real-life phenomena such as heights, heart rates, measurement errors, and IQ scores. The normal distribution is also called the bell curve and the Gaussian distribution.
As with any probability distribution, the parameters of the normal distribution completely characterize its shape and probabilities. It has two parameters, a mean and a standard deviation. The shape of the normal distribution changes with the values of these parameters, so it does not have just one form.
Properties of Normal Distribution
Despite their different shapes, all normal distributions possess the following four features:
All normal distributions are symmetrical.
The mean, median, and mode are equal.
Half of the population is above the mean, and half is below it.
The empirical rule describes how the observations fall in relation to the mean.
Empirical Rule
The empirical rule for the normal distribution states that virtually all of the data fall within three standard deviations of the mean. The data can be categorized as follows:
68% of the data lie within one standard deviation of the mean.
95% of the data lie within two standard deviations.
99.7% of the data lie within three standard deviations.
The empirical rule is regularly used in statistics to forecast final results. Once the standard deviation is known, and before complete data have been collected, the rule can serve as a rough estimate of what the data will show. This is useful because gathering complete data, for instance on social questions, may take a long time or even be impossible. The empirical rule is also used as a simple check of whether a distribution is normal: if many data points fall beyond three standard deviations, this suggests that the distribution is not normal.
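As a quick illustration, the following Python sketch (assuming the NumPy library; the seed and sample size are arbitrary) draws a large simulated normal sample and checks what fraction of observations falls within one, two, and three standard deviations of the mean:

    import numpy as np

    rng = np.random.default_rng(seed=42)
    data = rng.normal(loc=100, scale=15, size=1_000_000)  # simulated normal data

    mean, sd = data.mean(), data.std()
    for k in (1, 2, 3):
        within = np.mean(np.abs(data - mean) <= k * sd)
        print(f"within {k} SD: {within:.3%}")  # about 68%, 95%, and 99.7%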
Real-Life Examples of Normal Distribution
Various events in real life follow the normal distribution, at least approximately. Tossing a coin, heights, rolling dice, stock market movements, and the income distribution of an economy all follow the concept of the normal distribution. Some real-life examples are:
Tossing a Coin
Tossing a coin is one of the oldest methods of settling disputes, and flipping a coin is common practice before every match. The apparent fairness of a coin flip is that both outcomes are equally likely: the probability of heads is 1/2, the probability of tails is the same, and together they sum to one. If we flip the coin many times, the proportion of heads settles around 1/2, and the total number of heads follows a binomial distribution that is well approximated by a normal distribution. Thus, tossing a coin is a real-life example of the normal distribution.
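A short Python sketch (assuming NumPy; the sample sizes are arbitrary) shows how the count of heads across many repeated experiments clusters around its mean in the bell shape described above:

    import numpy as np

    rng = np.random.default_rng(seed=1)
    flips = rng.integers(0, 2, size=(100_000, 100))  # 100,000 runs of 100 fair flips
    heads = flips.sum(axis=1)                        # number of heads in each run

    # For a binomial, mean = n*p = 50 and SD = sqrt(n*p*(1-p)) = 5;
    # the histogram of `heads` is approximately normal around these values.
    print(heads.mean(), heads.std())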
Height
The height of a population is a case of the normal distribution. Most people in a population are of average height. The groups of individuals who are taller or shorter than average are roughly equal in size, and very few people are exceptionally tall or exceptionally short. Height is not determined by a single factor, however; various hereditary and environmental elements influence it. In practice, therefore, height follows the normal distribution.
Rolling a Dice
Rolling a fair die is also an example of the normal distribution. Experiments have shown that over a moderate number of rolls, the observed share of 1s typically falls between 0.15 and 0.18. When a die is rolled 1,000 times, the share of 1s again settles near its expected value of 1/6. With two dice rolled at once there are 36 possible combinations, and the probability that a given die shows a 1 (six of those combinations) is 6/36, again 1/6. However many dice someone throws at once, the distribution of the total follows an approximately normal shape.
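A tiny Python sketch (assuming NumPy; the seed is arbitrary) simulates 1,000 rolls of a fair die and confirms that the observed share of 1s lands near 1/6, inside the 0.15-0.18 band mentioned above:

    import numpy as np

    rng = np.random.default_rng(seed=7)
    rolls = rng.integers(1, 7, size=1000)  # 1,000 rolls of a fair six-sided die
    print((rolls == 1).mean())             # close to 1/6, i.e. about 0.167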
Importance of Studying Normal Distribution
There are multiple reasons why the study of the normal distribution has gained significant importance over the last few decades. The biggest is that machine learning, around which so much of the modern world now revolves, has its roots in probability distributions, especially the normal distribution. To keep up with the world, one must seek knowledge of the normal distribution; otherwise one will be left far behind the growing technology.
One reason the study of the normal distribution is necessary is that numerous educational and psychological variables are approximately normally distributed. Measures of reading ability, introversion, job satisfaction, and memory are among the countless psychological variables that are roughly normal. Even though these distributions are only approximately normal, they are generally very close.
Another reason the normal distribution is important to understand is that it is mathematically tractable, which means that many kinds of statistical tests can be derived for normally distributed variables. Fortunately, these tests also work well when the distribution is only approximately normal, and some perform well even with significant departures from normality.
Z-Score
The Z-score is a numerical measure of a value's relationship to the mean of a group of values, expressed in units of standard deviations from the mean. If a Z-score is 0, the data point coincides with the mean. A Z-score of 1 indicates a value one standard deviation above the mean. A Z-score may be negative or positive: a positive value lies above the mean, and a negative value lies below it.
This statistical concept has applications in finance, where Z-scores measure an observation's variability and can help traders gauge market volatility. In finance the term also refers to the Altman Z-score, named after Edward Altman, who adapted the concept into a measure of corporate financial health.
The Z-score helps standardize the values of a normal distribution by converting them into z-scores, which lets analysts determine the probability of a score occurring within the standard normal distribution. It also enables us to compare two scores from different samples, which may have different means and standard deviations.
How to Calculate Z-Score
Z-values tell analysts and investors whether or not a result is typical for a given data set. Z-scores also allow experts to standardize scores from different data sets so that the results can be compared with one another more accurately.
The Z-score can be calculated as:
Z = (X – µ) / σ
Where,
X = raw score,
µ = population mean, and
σ = population standard deviation.
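A minimal Python sketch of this formula (the function name and the example numbers are illustrative only, not from any particular source):

    def z_score(x, mu, sigma):
        # Standard score: distance from the mean in units of sigma.
        return (x - mu) / sigma

    # Example: raw score 130 on a scale with mean 100 and SD 15.
    print(z_score(130, 100, 15))  # 2.0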
Edward Altman, a professor at New York University, developed and introduced the Z-score formula at the end of the 1960s in response to the laborious and somewhat confusing process financial analysts had to go through to judge how close a company was to bankruptcy. Altman's development of the Z-score formula gave investors a way to think about the overall financial health of an organization.
Altman continued to refine his Z-score over many years. He tested his formula on many organizations before declaring it a reliable statistical tool that companies can use to gauge how close they are to bankruptcy. Its accuracy turned out to range from 82% to 94%.
In 2012, Altman released a modified version of the Z-score, known as the Altman Z-Score Plus. It can be used to evaluate private and public companies, manufacturing and non-manufacturing firms, and both American and non-American organizations.
The Z-score is the output of a credit-strength test that helps gauge the bankruptcy likelihood of a publicly traded company. It is based on five key financial ratios that can be found in, and calculated from, a company's annual 10-K report. The Altman Z-score is calculated as follows:
Z = 1.2A + 1.4B + 3.3C + 0.6D + 1.0E
Where,
A = working capital / total assets,
B = retained earnings / total assets,
C = earnings before interest and taxes (EBIT) / total assets,
D = market value of equity / book value of total liabilities, and
E = sales / total assets.
In general, a score below 1.8 indicates that the organization is at risk of bankruptcy. On the other hand, organizations with scores above 3 are unlikely to go bankrupt.
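A small Python sketch of this calculation (the ratio values below are hypothetical, chosen only to illustrate the formula and cutoffs):

    def altman_z(a, b, c, d, e):
        # Altman Z-score from the five ratios defined above.
        return 1.2 * a + 1.4 * b + 3.3 * c + 0.6 * d + 1.0 * e

    # Hypothetical ratios for an imaginary firm.
    z = altman_z(a=0.15, b=0.20, c=0.10, d=1.5, e=1.1)
    print(round(z, 2))  # 2.79: above the 1.8 distress cutoff but below the 3.0 safe zone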
Real-Life Example of Z-Score
The Z-score has real-life applications in many areas, since it lets us calculate the likelihood of certain events. Below is one real-life case:
Weight
The Z-score helps determine the probability that a newborn weighs less than 6 pounds, given a mean of 7.5 pounds and a standard deviation of 1.25 pounds. The probability is found by first calculating the Z-score:
Z = (6 – 7.5) / 1.25
Z = –1.2
A Z-table is then required to obtain the probability that a newborn weighs less than six pounds. When reading the Z-table, the first step is to find the row, given by the digit to the left of the decimal point and the first digit to its right, here –1.2. The next step is to look at the digit two places to the right of the decimal point, here 0, which determines the column. The cell where the row and column intersect gives the likelihood; in this case the value is 0.1151. So the probability that the child weighs under 6 pounds is 0.1151.
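The same probability can be computed directly in Python (assuming SciPy is installed), replacing the manual Z-table lookup:

    from scipy.stats import norm

    # Probability that a newborn weighs less than 6 pounds,
    # given mean 7.5 pounds and standard deviation 1.25 pounds.
    z = (6 - 7.5) / 1.25   # -1.2
    print(norm.cdf(z))     # about 0.1151, matching the Z-table value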
Importance of Studying Z-Score
The Z-score has significant importance in today's world because of its ability to aid the analysis of data. Its use is essential not only in mathematics and finance but equally in biology, especially pediatric cardiology. Studying the Z-score is vital because it has applications in many areas: as discussed above, it indicates the likelihood that an organization will go bankrupt and helps it take measures to prevent insolvency. The methodology is also valuable in pediatric cardiology, where its use has been increasing. For example, the left ventricle grows in all children during development. If a patient with a leaking aortic or mitral valve is examined repeatedly over time, normal growth must be distinguished from abnormal enlargement of the left ventricle. Z-scores help locate pathological increases in left ventricular measurements, which are much larger than expected from typical development, by detecting a Z-score that rises over time.
These applications across different areas show that the Z-score is crucial for accurate analysis of data. Knowledge of the Z-score is therefore essential for increasing the share of correct interpretations in every field of life.
Standard Deviation
The standard deviation is a measure of the extent to which data points spread around the average value in a set of information. When the data points cluster closely and the bell curve is steep, the standard deviation is small. When the data points are spread far apart and the bell curve is relatively flat, the standard deviation is comparatively large.
The standard deviation, used alongside the mean, is not suitable for summarizing categorical data; it applies only to continuous data. Furthermore, the standard deviation, like the mean, is generally appropriate only when the data are not heavily skewed or otherwise abnormal.
Karl Pearson introduced the idea of the standard deviation in 1893. It is by far the most important and most commonly used measure of dispersion. Its significance lies in the fact that it is free from the defects that afflicted earlier measures and satisfies most of the properties of a good measure of dispersion. Root mean square deviation is another term for the standard deviation, since it is the square root of the mean of the squared deviations from the arithmetic mean.
How to Calculate Standard Deviation
The sample standard deviation can be calculated as:
s = √( Σ(x – x̄)² / (n – 1) )
where,
s = sample standard deviation,
∑ = sum,
x = each score,
x̄ = sample mean, and
n = total number of scores.
The standard deviation can also be calculated in the following three steps:
The mean is found by adding all the data points and dividing by the number of data points.
The variance is found by first taking each data point's deviation from the mean, then squaring each of those deviations and adding the results, and finally dividing that sum by the number of data points less one.
Take the square root of the result obtained in step 2 to get the standard deviation.
In the case of a discrete series, the following methods are appropriate for calculating the standard deviation:
Actual mean method
Assumed mean method
Step deviation method
The formula for the population standard deviation is:
σ = √( Σ(x – µ)² / n )
Where,
σ = population standard deviation,
∑ = sum,
µ = population mean, and
n = total number of scores.
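A brief Python sketch (using the standard library's statistics module; the scores are invented) shows the difference between the two formulas, namely dividing by n – 1 versus n:

    import statistics

    # Illustrative scores only.
    scores = [4, 8, 6, 5, 3, 7]

    print(statistics.stdev(scores))   # sample SD: divides by n - 1 (about 1.87)
    print(statistics.pstdev(scores))  # population SD: divides by n (about 1.71)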
Real-life Example of Standard Deviation
Standard deviation is a particularly valuable statistical concept in investing and trading, as it helps measure market and security volatility and anticipate performance trends. As applied to investing, for example, an index fund is expected to have a low standard deviation relative to its benchmark index, since the fund's goal is to replicate the index.
On the other hand, growth funds are expected to have a higher standard deviation relative to benchmark stock indices, because their portfolio managers make aggressive bets to achieve higher-than-average returns.
A lower standard deviation is not automatically preferable. It all depends on the investments one holds and how much risk one is willing to take. When investors weigh the level of variation in their portfolios, they must consider their tolerance for volatility and their overall investment objectives. More aggressive investors may be comfortable with a strategy that favors vehicles with above-average volatility, while more conservative investors may not.
Standard deviation is one of the most critical risk measures used by analysts, portfolio managers, and advisors. Investment firms report the standard deviation of their funds and their components. A large dispersion shows how far the return deviates from the expected normal return. Because it is a simple, easily understood measure, it is regularly reported to end clients and finance professionals. This example shows how standard deviation has made it easier for investors to identify risk and then invest based on decisions informed by this statistical concept.
The standard deviation has real-life applications in other areas as well. It is used in sports, where a consistently winning team shows a lower standard deviation in its results; in comparing incomes between different departments; in comparing grades among students; and so on. This statistical concept is broadly applicable whenever the results of specific scenarios are compared.
Importance of Studying Standard Deviation
Standard deviation is of significant importance in the analysis of data. Without knowing the standard deviation, one cannot judge the dispersion of aggregate data or whether an obtained result is good or bad. The standard deviation indicates how variable the data are, so anyone pursuing serious analysis and strategy must seek knowledge of it. It is vital for researchers because it tells them how much the data deviate and whether there are any outliers, and it helps them identify variations so that they can study why these deviations exist, leading to better presentation of their research.
The application of standard deviation to investment makes it all the more valuable, as people from every background seek investment opportunities. To take such an opportunity profitably, one must know the standard deviation in order to calculate the risk associated with the investment and decide whether or not to invest. The study of standard deviation has reduced failure in finance, as people now prefer calculated risks, which helps them direct their money where it is likely to be profitable. These applications show that it is valuable and essential to learn the standard deviation in order to analyze situations in one's best interest.
Correlation Coefficient
The correlation coefficient is a measure of the strength of the relationship between the relative movements of two variables. Its value ranges from –1 to 1. A magnitude greater than one means that an error has occurred in measuring the correlation. A coefficient of –1 indicates a perfect negative correlation, while a coefficient of 1.0 indicates a perfect positive correlation. A coefficient of 0 indicates no linear relationship between the movements of the two variables.
There are several types of correlation coefficients, but the most common is Pearson's coefficient (r). It assesses the strength and direction of the linear relationship between two variables. It cannot capture nonlinear relationships between the two variables, and it cannot distinguish dependent from independent variables.
A value of exactly 1 means a perfect positive relationship between the two variables: an increase in one variable is accompanied by a corresponding increase in the other. When the correlation coefficient is –1, there is a perfect negative relationship between the two variables: the variables move in opposite directions, so an increase in one is accompanied by a decrease in the other. If the correlation between the two variables is 0, there is no linear relationship between them.
The strength of the relationship varies with the magnitude of the estimated correlation coefficient. For example, a value of 0.2 shows a positive relationship between the two variables, but one that is weak and probably insignificant. Experts in some areas of research consider correlations meaningful only when they exceed 0.8. In any case, a coefficient of 0.9 or greater would indicate a very strong relationship between two variables.
How to Calculate the Correlation Coefficient
The Pearson product-moment correlation, the most used correlation coefficient, can be calculated by first determining the covariance of the two variables in question. The standard deviation of each variable is then determined, and the correlation coefficient is obtained by dividing the covariance by the product of the two standard deviations.
The standard deviation measures the dispersion of the data from the mean. The covariance measures how the two variables change together, but its magnitude is unbounded, which makes it difficult to interpret. Dividing the covariance by the product of the two standard deviations yields a standardized version of the statistic; this entire computation is the calculation of the correlation coefficient.
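The following Python sketch (assuming NumPy; the data pairs are invented) computes Pearson's r exactly as described, as the covariance divided by the product of the standard deviations, and checks it against NumPy's built-in function:

    import numpy as np

    # Invented paired observations.
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

    cov = np.cov(x, y)[0, 1]                    # sample covariance of x and y
    r = cov / (x.std(ddof=1) * y.std(ddof=1))   # divide by product of the SDs

    print(r)                        # Pearson's r computed by hand
    print(np.corrcoef(x, y)[0, 1])  # NumPy's built-in value; should match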
Assumptions of Karl Pearson's Correlation Coefficient
The assumptions and conditions for computing Pearson's correlation are as follows:
Each data set being compared should be approximately normally distributed. When the data are normally distributed, the data points tend to lie closer to the mean.
The word homoscedastic comes from Greek roots meaning "same scatter." Homoscedasticity means equal variances: the error term should be the same across all values of the independent variable. Homoscedasticity is violated when the error term is smaller for one set of values of the independent variable and larger for another. It can be checked visually with a scatter plot; the data are homoscedastic if the points lie roughly equally on both sides of the line of best fit.
The data should follow a linear relationship: if the data points lie along a line on a scatter plot, they satisfy this linearity condition.
The variables should be continuous. The data set must contain continuous variables to compute the Pearson correlation; if one of the data sets is ordinal, Spearman's rank correlation is the more appropriate measure.
The data points should come in pairs, so-called paired observations: for each observation of the independent variable there is a corresponding observation of the dependent variable.
The data should not contain outliers. If outliers are present, they can distort the correlation coefficient and make it unreliable. A point is regarded as an outlier if it lies beyond +3.29 or –3.29 standard deviations; a scatter plot is sufficient for checking this.
Properties of Correlation Coefficient
The correlation coefficient is symmetric with respect to the two variables.
The correlation coefficient is the geometric mean of the two regression coefficients.
It is independent of changes of origin and scale.
Its values lie between –1 and +1.
Real-life Example of Correlation Coefficient
Microeconomics, which studies individual consumers and businesses, offers many examples of positively correlated variables; one is the relationship between price and demand. In microeconomics and in introductory statistics courses, one of the first ideas a student encounters is the law of supply and demand and its effect on prices. The supply and demand curves show that if demand increases without an accompanying increase in supply, there will be a corresponding increase in price. Likewise, if demand for a good or service decreases, its price will also decrease.
The relationship between demand and price is a case of causation as well as positive correlation. An increase in demand leads to higher prices: the price of a popular good or service rises because more customers want it and are therefore willing to pay more. As demand falls, fewer people want the product, and sellers must lower prices to convince people to buy it.
By contrast, a negative correlation holds between supply and price. When supply increases with no observed change in demand, the price of the product falls: the same number of buyers now have more of the product available, which pushes prices down for any good that is plentiful in the eyes of buyers.
Importance of Studying Correlation Coefficient
Studying the correlation coefficient is essential alongside the other statistical concepts because it not only detects a relationship between two variables but also quantifies how strong that relationship is. It is therefore crucial to study the correlation coefficient so that people know how closely two variables move together. For accurate analysis of data, people should learn the correlation coefficient so that variables can be interpreted in detail. Its real-life applications make it well worth learning, since many everyday phenomena follow negative or positive correlations. The correlation coefficient is widely applicable in investments, stock markets, and elsewhere, which also makes it useful to learn for anyone who wants to succeed in analysis.
Independent-Samples t-Test
The independent-samples t-test, also known as the two-sample t-test or unpaired t-test, is an inferential statistical test that determines whether there is a statistically significant difference between the means of two unrelated groups.
The t-test's statistical significance and the t-test's effect size are its two essential outputs. Statistical significance indicates whether the difference between the sample means likely reflects a real difference between the underlying populations, while the effect size indicates whether that difference is large enough to be practically meaningful.
The one-sample t-test is similar to the independent-samples t-test, except that it is used to compare the mean of a single group against a fixed number. It is commonly accompanied by a confidence interval for the mean of that one set of data.
The paired t-test is used when each observation in one group pairs with a corresponding observation in the other group. In that case we take the difference within each pair and test whether the mean difference is different from zero.
The ranked independent-samples t-test is similar to the ordinary, run-of-the-mill test, but it is more robust: a few severe outliers can invalidate the results of the unranked t-test, while the ranked version withstands them.
The independent-samples t-test requires the following:
One categorical independent variable with two levels.
One continuous dependent variable.
The Hypotheses of the Independent-Samples t-Test
The null hypothesis (H0) and the alternative hypothesis (H1) for independent samples can be stated in two equivalent ways:
H0: µ1 = µ2 (the two population means are equal).
H1: µ1 ≠ µ2 (the two population means are not equal).
Or, alternatively:
H0: µ1 – µ2 = 0 (the difference between the two population means is 0).
H1: µ1 – µ2 ≠ 0 (the difference between the two population means is not 0).
Here µ1 and µ2 are the population means for group one and group two respectively. Note that the second pair of hypotheses can be obtained from the first by subtracting µ2 from both sides of each statement.
To carry out the test, one needs to set a significance level, also called alpha, which determines when the null hypothesis is rejected or retained. In general, this value is set to 0.05.
Assumptions of the Independent-Samples t-Test
The following assumptions are required for the independent-samples t-test:
The dependent variable should be approximately normally distributed within each group.
The observations should be independent.
The normality assumption matters mainly for small samples, roughly those with fewer than 25 units per group.
The standard deviation of the dependent variable should be equivalent in the two populations.
How to Calculate t-Test
Determining the t-test requires three pieces of information: the difference between the mean values of the two data sets (the mean difference), the standard deviation of each group, and the number of data values in each group.
The outcome of the t-test is the t-value. This calculated t-value is then compared against a critical value from a reference table (the t-distribution table). The comparison lets people decide how much of the difference could be due to chance alone and whether the observed difference goes beyond what chance would produce. The t-test thus addresses whether the difference between the groups represents a true difference in the study or is likely just random and meaningless variation.
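For reference, one common form of the test statistic, assuming equal variances in the two groups, is the pooled version:
t = (x̄1 – x̄2) / √( sp² (1/n1 + 1/n2) )
Where,
x̄1 and x̄2 = the two sample means,
n1 and n2 = the two sample sizes, and
sp² = ((n1 – 1)s1² + (n2 – 1)s2²) / (n1 + n2 – 2) is the pooled variance computed from the two sample variances s1² and s2².
Statistical software applies this formula, or an unequal-variance variant of it, automatically.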
Real-life Example of Independent-Sample t-Test
The independent-samples t-test is used to analyze data for large populations by drawing samples from them. Suppose we wonder whether New Yorkers and Washingtonians differ in how much they spend on the cinema each month. It is unrealistic to ask every New Yorker and every Washingtonian how much they spend on films, so instead we take a sample from each population, say 300 New Yorkers and 300 Washingtonians, and find that the average amounts they spend are $14 and $18. The t-test asks whether this difference probably reflects a real contrast between Washingtonians and New Yorkers in general, or whether it is merely a statistical coincidence.
The independent-samples t-test asks: if there were no difference between Washingtonians and New Yorkers in general, what are the chances that samples selected at random from these populations would differ as much as the samples we chose? For example, even if Washingtonians and New Yorkers spent the same amount of money on average, it would be unlikely for 300 randomly selected New Yorkers to average exactly $14 and 300 randomly selected Washingtonians to average exactly $18. Therefore, if the test yields these results, we can argue that the difference between the samples likely reflects a genuine difference between the populations.
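A Python sketch of this example (assuming SciPy is available; the spending figures are simulated around the hypothetical $14 and $18 averages, not real survey data):

    import numpy as np
    from scipy.stats import ttest_ind

    # Simulated monthly cinema spending for the two samples of 300 people.
    rng = np.random.default_rng(seed=3)
    new_york = rng.normal(loc=14, scale=5, size=300)
    washington = rng.normal(loc=18, scale=5, size=300)

    result = ttest_ind(new_york, washington)
    print(result.statistic, result.pvalue)  # a small p-value suggests a real difference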
This example shows the kind of scenario in which the independent-samples t-test is applicable in the real world.
Importance of Studying Independent-Samples t-Test
Studying the independent-samples t-test is useful because it helps identify whether two samples differ statistically. It makes the analysis of a large population easier to conduct, since a well-drawn sample from that population gives an accurate result. For precise analysis, studying the independent-samples t-test is vital, as it allows extensive data to be analyzed quickly in a few steps. The development of software such as SPSS has made analysis with the independent-samples t-test easier still, which has made this statistical concept even more applicable in the real world. Knowledge of the independent-samples t-test is essential for researchers and data analysts, as it gives more precise and faster results. The different variants of the t-test help analysts and researchers determine outcomes for many types of data and draw sound conclusions from the results obtained. This discussion shows that knowledge and awareness of the independent-samples t-test are crucial, primarily because of the growing role of data analysts and researchers; to be a useful and valuable analyst or researcher, one must seek knowledge of the independent-samples t-test.