A p-value hypothesis test does not necessarily make use of a preselected confidence
level at which the investor should reject the null hypothesis that the returns are
equivalent. Instead, it provides a measure of how much evidence there is to reject
the null hypothesis. The smaller the p-value, the greater the evidence against the
null hypothesis. Thus, if the investor finds that the p-value is 0.001, there is
strong evidence against the null hypothesis, and the investor can confidently
conclude that the portfolio's returns and the S&P 500's returns are not equivalent.

Although this does not provide an exact threshold for when the investor should
accept or reject the null hypothesis, it does have another very practical
advantage. P-value hypothesis testing offers a direct way to compare the relative
confidence the investor can have when choosing among multiple investments or
portfolios measured against a benchmark such as the S&P 500.

For example, for two portfolios, A and B, whose performance differs from the S&P
500 with p-values of 0.10 and 0.01 respectively, the investor can be much more
confident that portfolio B, with the lower p-value, will actually show consistently
different results.
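As a rough illustration (this sketch is not from the original article), the following Python snippet uses made-up monthly return series and SciPy's paired t-test, scipy.stats.ttest_rel, to produce a p-value for each portfolio relative to the benchmark; the return figures and portfolio labels are hypothetical.

```python
# Hypothetical illustration: comparing two portfolios' monthly returns with the
# S&P 500 using p-values. The return series are made-up numbers; only the
# scipy.stats.ttest_rel call (a paired t-test on matched monthly observations)
# is real library behavior.
from scipy import stats

# Made-up monthly returns (12 months) for the benchmark and two portfolios
sp500       = [0.021, -0.013, 0.034, 0.008, -0.002, 0.017, 0.025, -0.009, 0.012, 0.030, -0.004, 0.015]
portfolio_a = [0.019, -0.010, 0.036, 0.011, -0.001, 0.015, 0.027, -0.007, 0.014, 0.028, -0.002, 0.016]
portfolio_b = [0.035,  0.001, 0.049, 0.021,  0.010, 0.031, 0.040,  0.004, 0.026, 0.044,  0.009, 0.029]

# Null hypothesis: the portfolio's mean return equals the benchmark's mean return.
# The paired t-test compares the two series month by month and reports a p-value.
result_a = stats.ttest_rel(portfolio_a, sp500)
result_b = stats.ttest_rel(portfolio_b, sp500)

print(f"Portfolio A vs. S&P 500: p = {result_a.pvalue:.3f}")  # larger p-value, weaker evidence
print(f"Portfolio B vs. S&P 500: p = {result_b.pvalue:.3f}")  # smaller p-value, stronger evidence
```

The lower the printed p-value, the stronger the evidence that a portfolio's returns genuinely differ from the benchmark's, which is exactly the comparison described above.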



Standard Error of the Mean vs. Standard Deviation: The Difference



The standard deviation (SD) measures the amount of variability, or dispersion, from
the individual data values to the mean, while the standard error of the mean (SEM)
measures how far the sample mean (average) of the data is likely to be from the
true population mean. The SEM is always smaller than the SD.

KEY TAKEAWAYS
Standard deviation (SD) measures the dispersion of a dataset relative to its mean.
Standard error of the mean (SEM) measures how much discrepancy is likely between a
sample's mean and the population mean.
The SEM takes the SD and divides it by the square root of the sample size.
SEM vs. SD
Standard deviation and standard error are both used in all types of statistical
studies, including those in finance, medicine, biology, engineering, psychology,
etc. In these studies, the standard deviation (SD) and the estimated standard error
of the mean (SEM) are used to present the characteristics of sample data and to
explain statistical analysis results. However, some researchers occasionally
confuse the SD and SEM. Such researchers should remember that the calculations for
SD and SEM include different statistical inferences, each of them with its own
meaning. SD is the dispersion of individual data values.

In other words, SD indicates how accurately the mean represents sample data.
However, the meaning of SEM includes statistical inference based on the sampling
distribution. SEM is the SD of the theoretical distribution of the sample means
(the sampling distribution).
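To make that distinction concrete, here is a minimal Python sketch (an illustration added here, using a made-up population) that compares the SEM estimated from a single sample against the empirical SD of many sample means; the two numbers should come out close to each other.

```python
# A minimal sketch, assuming NumPy and a hypothetical population, showing that
# the SEM estimated from one sample approximates the SD of the sampling
# distribution of the mean.
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(loc=100, scale=15, size=1_000_000)  # made-up population
n = 50                                                      # sample size

# SEM estimated from a single sample: SD divided by the square root of n
sample = rng.choice(population, size=n, replace=False)
sem_estimate = sample.std(ddof=1) / np.sqrt(n)

# Empirical SD of many sample means (the sampling distribution)
sample_means = [rng.choice(population, size=n, replace=False).mean() for _ in range(5_000)]

print(f"SEM from one sample:      {sem_estimate:.3f}")
print(f"SD of 5,000 sample means: {np.std(sample_means, ddof=1):.3f}")  # similar value
```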

Calculating Standard Deviation


\begin{aligned}
&\text{standard deviation } \sigma = \sqrt{ \frac{ \sum_{i=1}^{n} \left( x_i - \bar{x} \right)^2 }{ n - 1 } } \\
&\text{variance} = \sigma^2 \\
&\text{standard error } \left( \sigma_{\bar{x}} \right) = \frac{ \sigma }{ \sqrt{n} } \\
&\textbf{where:} \\
&\bar{x} = \text{the sample's mean} \\
&n = \text{the sample size}
\end{aligned}

The formula for the SD requires a few steps:

First, take the square of the difference between each data point and the sample
mean, then sum those squared differences.
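As a hedged sketch of those steps (the sample values below are hypothetical, not from the article), the following Python snippet mirrors the formulas above: sum the squared differences from the mean, divide by n − 1 for the variance, take the square root for the SD, and divide by the square root of the sample size for the SEM.

```python
# A small sketch of the SD and SEM calculations, using made-up sample data.
import math

sample = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # hypothetical observations
n = len(sample)
mean = sum(sample) / n                               # sample mean (x-bar)

# Step 1: square each data point's difference from the mean and sum the results
squared_diffs = sum((x - mean) ** 2 for x in sample)

# Step 2: divide that sum by n - 1 to get the sample variance
variance = squared_diffs / (n - 1)

# Step 3: take the square root of the variance to get the SD
sd = math.sqrt(variance)

# SEM: divide the SD by the square root of the sample size
sem = sd / math.sqrt(n)

print(f"mean = {mean}, variance = {variance:.3f}, SD = {sd:.3f}, SEM = {sem:.3f}")
```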
