
WHY ZELL?

At Zell, our mission is to be India's foremost training center, revolutionizing careers by making them affordable and accessible to all. Our primary objective is to provide top-notch education and skill upgradation through partnerships with industry experts and by incorporating the best educational practices. We are committed to delivering high-quality learning experiences that empower individuals to thrive in their professional journeys.

OUR PROGRESS
Students' lives impacted · Hours of R&D on the experiential method · Experts and mentors · Industry experts as faculty · Corporate trainings

OUR ACHIEVEMENTS
Student satisfaction ratio · Assisted for placement · Students certified · Students who join us based on recommendation

Required Disclaimer
CFA Institute does not endorse, promote or warrant the accuracy or quality of Zell Notes. CFA® and
Chartered Financial Analyst® are registered trademarks owned by CFA Institute.

Table of Contents
Rates and Return
Interest Rates and Time Value of Money
Time Value of Money in Finance
Statistical Measures of Asset Returns
Measures of Central Tendency
Skewness, Kurtosis, and Correlation
Probability Trees and Conditional Expectations
Conditional Expectations and Expected Value
Portfolio Mathematics
Simulation Methods
Sampling and Estimation
Sampling Methods, Central Limit Theorem, and Standard Error
Hypothesis Testing
Parametric and Non-Parametric Tests of Independence
Introduction to Linear Regression
Linear Regression: Introduction
Goodness of Fit and Hypothesis Tests
Predicting Dependent Variables and Functional Forms
Introduction to Big Data Techniques
Formulas
Cumulative Z-Table
Student's T-Distribution
F-Table at 5% (Upper Tail)
F-Table at 2.5% (Upper Tail)
Chi-Squared Table

Foreword

Zell's CFA notes have been curated with a clear purpose: to fill the gap for individuals who are not comfortable studying from extensive study notes that demand special attention. At Zell Education, we are focused on upskilling a student professionally and personally, ensuring that a student not only clears a level but clears it with flying colours. With this in mind, we also ensure a student's skills remain in line with the professional demands of the industry.

At Zell, we don't fret about a student's background, prior knowledge, or preferences. The aim is simple: we ensure that every student is on the same page as their peers, with the relevant knowledge, so that after clearing a level a student can genuinely be called a professional.

With a state-of-the-art learning management system, we combine the highest quality standards of teaching with an excellent placement team, to ensure a student goes from training to applying the skills they have learned. We place experiential learning at the forefront of our training, taking a 360-degree approach so that every applicant is ready to perform at the highest level!

With all of the above in mind, we bring you these notes, created by us for you, to truly make a difference and turn your journey with us into a memorable one.

Authored by: Jainam Gada, Megha Hemdev, Esha Vora

Illustrated by: Ravi Gupta, Aayush Shah



Rates and Return

Interest Rates and Time Value of Money

LOS 1a: Interpret interest rates as required rates of return, discount rates, or opportunity costs

As covered in the pre-read material, the time value of money establishes the equivalence between cash flows occurring on different dates. An interest rate (or yield) is a rate of return that reflects the relationship between differently dated cash flows, and it can be thought of in three ways:

Required Rate of Return


Will you invest in a particular investment if you are not getting sufficient returns?
No, you won't. You have some expectation of the returns that you wish to earn.
The required rate of return is the minimum return at which investors and savers are willing to invest or save their money. It is the minimum amount needed to induce an investor to willingly invest, compensating them for the risk taken by sacrificing present consumption.
Discount Rates
As mentioned above, an interest rate reflects the relationship between differently dated cash flows, which is why interest rates are also called discount rates. Discount rates are the rates used to calculate the present value of future cash flows.

Opportunity Costs
Opportunity cost is the cost of opportunity foregone. It is the cost of not choosing a particular option.
For example, if you have $100 and have two choices to either spend them on video games or save and
earn a 10% interest. If you choose to spend it on video games, then 10% is your opportunity cost.

LOS 1a: Explain an interest rate as the sum of a real risk-free rate and premiums that compensate
investors for bearing distinct types of risk

From an investor's point of view, the interest rate 'r' is the real risk-free rate plus a set of premiums that together sum to the return an investor would require from investing in that asset. Therefore:
r = Real risk-free rate + Inflation premium + Default risk premium + Liquidity premium + Maturity premium
The real risk-free return is the theoretical rate of return on an investment that carries no risk of default or loss of principal. It represents the return an investor can expect to earn, above inflation, on an investment with no risk, and hence is often used as a benchmark for evaluating the potential returns of other investments: the riskier the investment, the higher the required rate of return over the risk-free rate must be to compensate investors for taking the additional risk.
The inflation premium refers to the additional return that investors require to compensate them for the
expected inflation rate over a specified period. This rate is added as inflation reduces the purchasing
power of the investors and they should be compensated for the same.
The default risk premium is the additional return or yield that investors demand in compensation for the
risk of default on a particular investment. It reflects the perceived likelihood that the issuer of a debt
instrument, such as a bond or a loan, may fail to make scheduled interest payments or repay the principal
amount in full.
The liquidity premium is the additional return or yield that investors require in compensation for the lack
of liquidity or ease of trading of a particular investment. It reflects the risk associated with the inability to
quickly and easily convert an investment into cash without incurring significant transaction costs or price
discounts.

Assets like small-cap stocks, real estate, and bonds with longer maturities would have a liquidity premium added to 'r'.

The maturity premium compensates investors for the increased sensitivity of the market value of debt to
a change in market interest rates as maturity is extended. For example: Since people are becoming
increasingly aware of the harm caused by fossil fuels, the long-term bonds of any company that operates
in this sector face a higher maturity risk. This is because there can always be chances that a viable
substitute becomes available in the market, making fossil fuels obsolete.
The sum of the real-risk free interest rate and inflation premium is the nominal risk-free interest rate.
Hence, the nominal risk-free rate accounts for the inflation.

(1 + nominal risk-free rate) = (1 + real risk-free rate) * (1 + inflation premium)
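As a quick numerical check (not from the curriculum text), the sketch below compares the exact compounding relationship above with the common additive approximation, using assumed rates of 3% real and 2% expected inflation:

# A minimal sketch, assuming a 3% real rate and a 2% inflation premium.
real_rate = 0.03          # assumed real risk-free rate
inflation_premium = 0.02  # assumed expected inflation

nominal_exact = (1 + real_rate) * (1 + inflation_premium) - 1
nominal_approx = real_rate + inflation_premium  # back-of-envelope version

print(f"Exact nominal rate:  {nominal_exact:.4%}")   # 5.0600%
print(f"Approx nominal rate: {nominal_approx:.4%}")  # 5.0000%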



LOS 1b: Calculate and interpret different approaches to return measurement over time and
describe their appropriate uses.
For investors, financial assets typically yield two sorts of returns. First, they may give regular income in
the form of cash dividends or interest payments. Second, the market value of a financial asset might rise
or fall, resulting in a capital gain or loss.
Some financial assets generate returns via only one of these sources. Non-dividend-paying stock investors,
for example, derive their return only from price movement. Other assets merely produce recurring
revenue. For example, defined benefit pension plans and retirement annuities provide income
distributions to beneficiaries throughout the course of their lives.
Holding Period Return
A holding period return (HPR) refers to the total return earned by an investor over the period in which an investment is held. It measures the percentage change in the value of an investment, including any income received (dividends or interest) and any change in the investment's price or value (capital gains).
HPR for a single period can be calculated as (Ending Value - Beginning Value + Income) / Beginning Value:
HPR = ((P1 - P0) + I1) / P0
A holding period return can be computed for a period longer than one year. For example, if an individual needs to know the return on an asset held for 4 years, he can calculate the one-year holding period return for each of the 4 years and then chain them together:
HPR = [(1+R1) * (1+R2) * (1+R3) * (1+R4)] - 1
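A minimal sketch of both HPR formulas; the prices and returns used here are illustrative assumptions, not figures from the text:

# Hedged sketch: single-period HPR and a multi-period chain of annual returns.

def hpr_single(p0, p1, income=0.0):
    """HPR = ((P1 - P0) + income) / P0."""
    return ((p1 - p0) + income) / p0

def hpr_chain(annual_returns):
    """Compound one-period returns: [(1+R1)(1+R2)...] - 1."""
    total = 1.0
    for r in annual_returns:
        total *= (1 + r)
    return total - 1

print(hpr_single(100, 105, 2))               # 0.07 -> 7% for one period
print(hpr_chain([0.10, -0.05, 0.08, 0.04]))  # a four-year holding period return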

Arithmetic or Mean Return


For comparing and understanding returns across different holding periods, it's essential to normalize them
to a common period. The most straightforward method is to compute a summary measure by taking the
simple arithmetic average of the holding period returns. The formula to calculate the arithmetic mean is:
Arithmetic Mean = (Sum of all values) / (Total number of values)
Geometric Mean Return
The arithmetic mean return assumes that the initial investment amount remains constant throughout
each period. However, in an actual investment portfolio, the base amount changes annually due to the
effect of compounding.
As the portfolio generates earnings in one year, these earnings are added to the starting value of the subsequent year's investment, which then compounds together with the new earnings, and this process continues year after year. As a result, the base amount constantly changes, impacting the overall performance of the
investment portfolio. The arithmetic mean return does not fully capture the compounding effect and may
not accurately reflect the actual returns earned in a portfolio over multiple periods. To address this, we
use geometric mean returns to account for the compounding of returns. A geometric mean return
provides a more accurate representation of the growth in portfolio value over a given time period than
the arithmetic mean return.
We can calculate the same using the formula:
Geometric Mean Return = [(1 + r1) * (1 + r2) * ... * (1 + rN)] ^ (1/N) - 1

Harmonic Mean
The harmonic mean is a statistical measure used to calculate the average rate or speed of values in a
dataset that have a reciprocal relationship. It is particularly useful when dealing with rates, ratios, or
speeds, where the influence of extreme values needs to be minimized.
The formula for calculating the harmonic mean of a set of values is as follows:
Harmonic Mean = n / [(1/x₁) + (1/x₂) + (1/x₃) + ... + (1/xₙ)]
The harmonic mean is a specialized concept of the mean that is suitable for averaging ratios. One common
application is seen in the investment strategy known as cost averaging, where a fixed amount of money
is periodically invested.
In cost averaging, the ratios being averaged represent prices per share at different purchase dates. These
ratios are then applied to a constant amount of money, resulting in a varying number of shares purchased
with each investment. The harmonic mean is used to effectively calculate the average price per share
over time, taking into account the varying number of shares acquired with each fixed investment amount.

Illustration
Let's consider a numerical example to understand the concept of cost averaging:
Suppose an investor decides to invest $1,000 in a particular stock at different dates over the course of
one year. The prices per share at each investment date are as follows:
1. First investment: $50 per share (investing $1,000)
2. Second investment: $40 per share (investing another $1,000)
3. Third investment: $60 per share (investing another $1,000)
To calculate the average price per share using the harmonic mean, we first find the reciprocal (1/x) of each price and then compute the harmonic mean:
Harmonic Mean = 3 / [(1/50) + (1/40) + (1/60)] = 3 / [0.02 + 0.025 + 0.01667] ≈ 3 / 0.06167 ≈ 48.65
So, the harmonic mean of the prices per share is approximately $48.65. This means that, on average, the investor purchased shares at a price close to $48.65 during the cost averaging period.

Illustration
Returns data set: 5%, 8%, 10%, 6%, 7%, 9%, 22%, 28%
To calculate the arithmetic mean return (AM):
AM = (5% + 8% + 10% + 6% + 7% + 9% + 22% + 28%) / 8 = 11.875%
To calculate the geometric mean return (GM):
GM = [(1 + 5%) * (1 + 8%) * (1 + 10%) * (1 + 6%) * (1 + 7%) * (1 + 9%) * (1 + 22%) * (1 + 28%)] ^ (1/8) - 1 ≈ 11.61%
To calculate the harmonic mean return (HM):
HM = 8 / [(1/5%) + (1/8%) + (1/10%) + (1/6%) + (1/7%) + (1/9%) + (1/22%) + (1/28%)] ≈ 8.63%
In this example, the arithmetic mean return is the highest, pulled up by the two outliers (22% and 28%). The geometric mean return provides a more stable representation of the overall investment performance, considering the compound effects of returns. The harmonic mean return is the lowest of the three, since taking reciprocals of the returns dampens the influence of the large outliers.
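The figures above can be verified with Python's statistics module. Note that the geometric mean is taken over the (1 + r) growth factors, while the harmonic mean is taken over the raw returns, mirroring the formulas in this section:

# Verification sketch for the illustration above (Python 3.8+).
from statistics import mean, geometric_mean, harmonic_mean

returns = [0.05, 0.08, 0.10, 0.06, 0.07, 0.09, 0.22, 0.28]

am = mean(returns)                                 # ~11.875%
gm = geometric_mean([1 + r for r in returns]) - 1  # ~11.61%
hm = harmonic_mean(returns)                        # ~8.63%

print(f"AM: {am:.3%}  GM: {gm:.3%}  HM: {hm:.3%}")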
In addition to arithmetic, geometric, and harmonic means, two other types of means can be used to
mitigate the impact of outliers in a dataset.
1. Trimmed Mean: The trimmed mean involves removing a small defined percentage of the largest and
smallest values from the dataset before calculating the mean. By discarding extreme values, the trimmed
mean provides a more robust average of the remaining observations.
2. Winsorized Mean: The winsorized mean replaces extreme observations in the dataset with the values
of their nearest observations. This process is done at both ends of the dataset to limit the influence of
outliers on the calculations. After replacing the extreme values, the mean is calculated by averaging the
remaining observations.

Both the trimmed and winsorized means offer alternative ways to handle outliers and produce more
reliable measures of central tendency in datasets with extreme values.

LOS 1c: Compare the money-weighted and time-weighted rates of return and evaluate the performance of portfolios based on these measures

Money-Weighted Return
This method of return calculation applies the concept of internal rate of return (IRR) to the investment
portfolios. This takes into account all cash inflows and cash outflows. Money-weighted return is a useful
performance measure when the investment manager is responsible for the timing of cash flows. This is
often the case for private equity fund managers.

A cash inflow is any item that brings money towards the investor, and a cash outflow is any item that takes
money away from the investor.

Let us consider the following example:

Assume an investor buys a share of stock for $200 at t = 0, and at the end of the year (t = 1), she buys an
additional share for $240. At the end of Year 2, the investor sells both shares for $260 each. At the end of
each year in the holding period, the stock paid a $4 per share dividend. What is the money-weighted rate
of return?

Solution:

Step 1: Determine the timing of each cash flow and whether it is an inflow (+) into the account or an outflow (-) from the account.

t = 0: Purchase of first share = +200 (inflow into the account)

t = 1: Purchase of second share = +240

Dividend from the first share = -4

Subtotal, t = 1: +236



t = 2: Dividends from two shares = -8

Proceeds from selling shares = -520

Subtotal, t = 2: -528

Step 2: Calculate the net cash flow for each period and set the PV of cash inflows equal to the PV of cash outflows.

PVinflows = PVoutflows

200 + 236/(1 + r) = 528/(1 + r)²

The intuition here is that we deposited $200 into the account at t = 0, then added $236 to the account at t = 1 (the $240 purchase price of the second share, net of the $4 dividend received), and ended with a total value of $528 ($520 of sale proceeds plus $8 of dividends).

Calculating Money-weighted returns on the financial calculator:

Key Strokes                      Display

[CF] [2ND] [CLR WORK]            CF0 = 0.000
200 [ENTER]                      CF0 = +200
[↓] 236 [ENTER]                  C01 = +236
[↓] 528 [+/-] [ENTER]            C02 = -528
[IRR] [CPT]                      IRR = 13.86%

The money-weighted rate of return for this problem is 13.86%. We can see that the return is the realized cash return that the investor takes from the investment account; the investment's value in the last period is treated as a cash amount that the investor holds.

It may not be an accurate representation of the portfolio's return when part of the gain is unrealized, since the calculation treats the ending value as if it were realized cash.
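For readers without a financial calculator, here is a rough sketch that solves the same IRR by bisection. The cash flows are stated from the investor's perspective (outflows negative), which yields the same root as the account-based convention above:

# Sketch: money-weighted return (IRR) of the example above via bisection.

def npv(rate, cash_flows):
    # NPV of cash flows indexed by period t = 0, 1, 2, ...
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0):
    # For this pattern (outflows first, inflow last) NPV decreases in the rate,
    # so bisect: if NPV(mid) > 0 the root lies above mid.
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# t=0: -200 (buy share); t=1: -236 (buy share for 240, receive 4 of dividends);
# t=2: +528 (sell 2 shares at 260 each plus 8 of dividends)
print(f"{irr([-200, -236, 528]):.2%}")   # ~13.86%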

Time-Weighted Rate of Return

Time-weighted rate of return measures compound growth. It is the rate at which $1 compounds over a specified performance horizon. Time-weighting is the process of averaging a set of values over time.

Here are the steps to follow to calculate the time-weighted return:

➢ Value the portfolio before any significant additions or withdrawals and break the evaluation
period into subperiods based on cash inflows and outflows.

➢ Compute the holding period return.

➢ Compute the annualized holding period return. For instance, if there are two subperiods, then we
must find the CAGR of these two periods. This is simply the geometric mean return.

Let us see the following example for a comprehensive understanding of time-weighted return:

Mr. Shah purchases one share for Rs. 500. At the end of the year, he buys another share of the same company for Rs. 520. At the end of the second year, Mr. Shah sells both shares for Rs. 530 each.

At the end of both years 1 and 2, the stock paid a dividend of Rs. 12 per share

Step 1: Break the evaluation into two periods based on the timing of cash flows.

Holding period 1

➢ Beginning value = Rs. 500


➢ Dividends paid = Rs. 12
➢ Ending value = Rs. 520

Holding period 2

➢ Beginning value = Rs. 1,040 (2 shares)


➢ Dividends paid = Rs. 24 (Rs. 12 per share)
➢ Ending value = Rs. 1,060 (2 shares)

Step 2: Calculate the HPR for each holding period.

HPR1 = [(520 + 12) ÷ 500] − 1 = 6.40%

HPR2 = [(Rs. 1,060 + 24) ÷ Rs. 1,040] − 1 = 4.23%

Step 3: Find the CAGR.

(1 + Time-Weighted Return)² = 1.0640 × 1.0423

Note that we raise the left-hand side to the power of 2 because there are 2 holding periods that we have identified based on the cash flows.

Time-Weighted Return = [(1.0640) × (1.0423)]^(1/2) − 1 ≈ 5.31%

This is easier to compute than the money-weighted return and is the preferred method for fund managers. Note that the XIRR function in Excel can be used to compute a money-weighted return when cash flows occur at irregular intervals.

If funds are invested in a portfolio just before a poor portfolio performance, the money-weighted rate of
return will be lower than the time-weighted rate of return. On the other hand, if funds are invested in a
portfolio just before a period of relatively high returns, the money-weighted rate of return will tend to be
higher than the time-weighted rate of return.
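A small sketch of the time-weighted calculation, generalized to any number of subperiods and applied to the two holding period returns from Mr. Shah's example:

# Sketch: time-weighted return as the geometric mean of subperiod HPRs.

def time_weighted_return(hprs):
    total = 1.0
    for r in hprs:
        total *= (1 + r)
    return total ** (1 / len(hprs)) - 1

print(f"{time_weighted_return([0.0640, 0.0423]):.2%}")  # ~5.31%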

LOS 1d: Calculate and interpret annualized return measures and continuously
compounded returns, and describe their appropriate uses

Returns can be calculated or earned over various time periods, and sometimes it is necessary to annualize
returns that were originally computed for shorter or longer durations than one year. Annualizing returns
allows for convenient comparison across different time frames, enabling a standardized basis for analyzing
investments or performance metrics with varying reporting periods.

Non-Annual Compounding
Non-annual compounding, also known as compound interest with a frequency other than once per year, is a method of calculating interest on an investment or loan where the interest is added more frequently within the year. In contrast to annual compounding, which adds interest only once per year, non-annual compounding can involve quarterly, monthly, daily, or even continuous compounding.
When interest is compounded more frequently, the investment or loan grows more rapidly because
interest is earned (or charged) on the initial principal as well as on any previously accumulated interest.
The formula for calculating the future value of an investment with non-annual compounding is:
Future Value = P(1 + r/n)^(n*t)
where P is the principal, r is the stated annual rate, t is the number of years, and n is the number of compounding periods per year (e.g., 4 for quarterly compounding, 12 for monthly).

Illustration
Suppose you invest $1,000 in a savings account that pays 5% interest annually, but it compounds monthly
(n = 12).
After one year (t = 1), the future value can be calculated as follows:
Future Value = $1,000 * (1 + 0.05/12)^(12*1)
Future Value = $1,000 * (1.0041667)^12 ≈ $1,051.16
So, after one year of monthly compounding at a 5% annual interest rate, your investment will grow to
approximately $1,051.16.
By compounding more frequently, the investment grows slightly more compared to annual compounding,
where the future value would be $1,050.00. This illustrates the impact of compounding frequency on the
growth of an investment over time.
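A brief sketch of the future value formula, reproducing the monthly-versus-annual comparison above:

# Sketch: future value with non-annual compounding.

def future_value(p, r, n, t):
    """FV = P * (1 + r/n)^(n*t)."""
    return p * (1 + r / n) ** (n * t)

print(round(future_value(1000, 0.05, 12, 1), 2))  # 1051.16 (monthly compounding)
print(round(future_value(1000, 0.05, 1, 1), 2))   # 1050.00 (annual compounding)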

Annualizing Returns
To annualize any return for a period shorter than one year, the return for the period must be compounded
by the number of periods in a year. A monthly return is compounded 12 times, a weekly return is
compounded 52 times, and a quarterly return is compounded 4 times. Daily returns are normally
compounded 365 times. For an uncommon number of days, we compound by the ratio of 365 to the
number of days.

The formula for annualizing compounding is as follows:


Annualized Rate = (1 + Periodic Rate)^n - 1
Where:
• Periodic Rate is the interest rate or investment return for the given compounding period
(expressed as a decimal).
• n is the number of compounding periods per year.

Illustration
Let us take the following example:
The following information has been provided regarding three different stocks:
➢ Stock 1: Weekly Return = 0.10%
➢ Stock 2: Quarterly Return = 2.00%
➢ Stock 3: Return Over 18 months = 9.10%

Which of the following stocks has the highest annualised return?


We can standardise the returns to annualised returns in the following way, depending on the time horizon provided:
➢ Stock 1: (1 + 0.10%)^52 - 1 = 5.33% (since there are 52 weeks in a year)
➢ Stock 2: (1 + 2.00%)^4 - 1 = 8.24% (since there are 4 quarters in a year)
➢ Stock 3: (1 + 9.10%)^(12/18) - 1 = 5.98% (since we want the 12-month return from an 18-month period)
Notice that when we increase the periods (from weekly to annual or quarterly to annual), we extrapolate the returns into the future. This assumes that the same level of return will continue over the period. However, when we consider a return that exceeds one year, we simply take the compounded return on a pro-rata basis.
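A short sketch reproducing the three annualizations above; the function simply implements (1 + periodic rate)^n - 1 with the periods-per-year figures from the text:

# Sketch: annualizing returns measured over different horizons.

def annualize(periodic_return, periods_per_year):
    return (1 + periodic_return) ** periods_per_year - 1

print(f"Stock 1: {annualize(0.0010, 52):.2%}")       # ~5.33%
print(f"Stock 2: {annualize(0.0200, 4):.2%}")        # ~8.24%
print(f"Stock 3: {annualize(0.0910, 12 / 18):.2%}")  # ~5.98%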

LOS 1e: Calculate and interpret major return measures and describe their appropriate
uses

Gross and Net Return


Gross return refers to the total return on a security portfolio before deducting fees for the management
and administration of the investment account. Net return refers to the return after these fees have been
deducted.
Pre-Tax and After-Tax Nominal Return
All forms of income (dividends, short/long-term capital gains) may be taxed differently. Pre-tax nominal return refers to the return before paying taxes. After-tax nominal return, as the name suggests, refers to the return after the tax liability is deducted.
For instance, consider a portfolio with an investment of Rs. 10 million. The end of period value is Rs. 13
million. The gross return is Rs. 3 million. Now assume that the fund manager charges an expense ratio
that costs the investor Rs. 500,000 from the gross return. The net return after the fee is deducted is Rs.
2.5 million. Also, assume that the dividends received were Rs. 100,000 over the holding period. Hence the
pre-tax nominal return is Rs. 2.6 million. If the tax rate is 20%, then the after-tax nominal return will be Rs. 2.08 million.
Real Return
Real return is the inflation-adjusted return. This form of return depicts the true return on investment
and gives an insight into the increase in purchasing power of the investor.
If:
(1 + Nominal Rate) = (1 + Real Rate) × (1 + Inflation)
Then:
(1 + Real Rate) = (1 + Nominal Rate) ÷ (1 + Inflation)
For example, if the return earned on the investment is 12.00% and inflation over the same period is 3.00%,
then the real return earned by the investor is:
(1 + Real Rate) = (1 + 12.00%) ÷ (1 + 3.00%)
(1 + Real Rate) = 1.0874
Real Rate = 0.0874 or 8.74%
How is this related to purchasing power?
Consider a Rs. 10,000 investment in an index fund which earns 12.00% by the end of the period, with no taxes and no fees. So, if the investor sells the investment at the end of year one, they will receive Rs. 1,200 as a gain plus the originally invested amount. The investor now has Rs. 11,200 of cash in hand at the end of year 1. However, inflation alone has reduced the purchasing power of cash: goods that cost Rs. 10,000 at the start of the year now cost Rs. 10,300.
Therefore, it is important that an investor’s wealth at least keeps up with inflation to maintain the
purchasing power and protect investor wealth.
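A small sketch of the exact real-return relationship and the purchasing-power check, using the 12% nominal return and 3% inflation figures above:

# Sketch: real return and purchasing power.
nominal, inflation = 0.12, 0.03

real = (1 + nominal) / (1 + inflation) - 1
print(f"Real return: {real:.2%}")          # ~8.74%

# Rs. 11,200 of year-end cash, deflated to beginning-of-year purchasing power:
print(round(11_200 / (1 + inflation), 2))  # ~10,873.79 in start-of-year rupees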

Leveraged Return
Leveraged return is a multiple of return on the underlying asset. An investment in derivative security,
such as a futures contract, produces a leveraged return because the cash deposited is only a fraction of
the value of the assets underlying the futures contract.

1) If an investment loses 4% of its value over 90 days, its annualized return is closest to:
A. −15.25%.
B. −14.5%.
C. −17.0%.
2) If a stock’s initial price is $50 and its price increases to $62, its continuously
compounded rate of return is closest to:
A. 13.64%.
B. 21.5%.
C. 15.00%.
3) The value of an investment increases 5% before commissions and fees. This 5%
increase represents:
A. the investment’s net return.
B. the investment’s gross return.
C. neither the investment’s gross return nor its net return.

Answers:
1) A is correct as Annualized Return = (1 − 0.04)^(365/90) − 1 = −15.25%
2) B is correct as Continuous compounding returns = LN(62/50) = 21.51%
3) C is correct as gross return is the total return after deducting commissions on trades and other costs
necessary to generate the returns, but before deducting fees for the management and administration of
the investment account. Net return is the return after management and administration fees have been
deducted.

Time Value of Money in Finance

LOS 2a: Calculate and interpret the present value (PV) of fixed-income and equity
instruments based on expected future cash flows

The timing of cash flows associated with financial instruments impacts their value, with cash received
sooner being valued more highly. This concept is known as the time value of money, which represents
the trade-off between cash flows received today versus those received in the future. It enables
comparisons between the current value (present value) of cash flows and those received at different
future times, taking into account an appropriate discount rate (r) that considers the type of instrument
and the timing and riskiness of expected cash flows.

The relationship between the present value (PV) and future value (FV) of a cash flow, where r is the
discount rate per period and t is the number of compounding periods, is generally expressed as follows:

FVt = PV * (1 + r)^t

This formula helps determine the future value of a cash flow based on the present value and the applicable
discount rate over a specific number of compounding periods.

Fixed-Income Instruments and the Time Value of Money

Fixed-income instruments are debt instruments, like bonds or loans, through which an issuer borrows
money from an investor, promising future repayment. The discount rate used for these instruments is an
interest rate, and the bond or loan's rate of return is commonly known as its yield-to-maturity (YTM).

Cash flows associated with fixed-income instruments generally follow one of three patterns:

1. Discount: An investor pays an initial price (PV) for the bond or loan and receives a single principal cash
flow (FV) at maturity. The difference (FV - PV) represents the interest earned during the instrument's life.

2. Periodic Interest: An investor pays an initial price (PV) for the bond or loan and receives periodic
interest cash flows (PMT) at predetermined intervals until maturity. The final interest payment and the
principal (FV) are paid at maturity.

3. Level Payments: An investor pays an initial price (PV) and receives uniform cash flows (A) at
predetermined intervals throughout the instrument's life, representing both interest and principal
repayment.

Discount instruments are a type of debt instrument where the investor purchases the instrument at a price that is lower than its face value or par value. In other words, the investor buys the instrument at a discount to its future repayment value.

The key characteristic of a discount fixed-income instrument is that it does not make periodic interest
payments like a traditional coupon bond. Instead, the investor typically buys the instrument at a price
lower than its face value, and upon maturity, the investor receives the full face value as the repayment
amount.

The return on the investment in a discount fixed-income instrument is derived from the difference between the purchase price (discounted price) and the face value received at maturity. This type of bond is often referred to as a zero-coupon bond, given the lack of intermediate interest cash flows, which for traditional bonds are usually referred to as coupons.

Coupon instruments are debt securities that pay periodic interest payments, known as coupon payments, to the bondholders. These instruments are typically issued by governments, corporations, or other entities as a way to borrow money from investors.

The term "coupon" refers to the fixed interest rate that the issuer agrees to pay to the bondholders. The
coupon rate is expressed as a percentage of the bond's face value or par value. For example, if a bond has
a face value of $1,000 and a coupon rate of 5%, it will pay $50 in annual interest to the bondholders
($1,000 x 5%).

Equity Instruments and Time Value of Money

Similar to fixed-income securities, the valuation of equity securities like common and preferred stock
involves assessing the present worth of their forthcoming cash flows. Notably, equities lack a maturity
date, and their cash flows may fluctuate over time. In the context of preferred stock, a predetermined
dividend is paid as a percentage of its par value (akin to a bond's face value).

Just as with bonds, we differentiate between the specified percentage that determines cash flows and the
discount rate applied to these cash flows. Equity investors demand a requisite return to warrant owning
a share, which serves as the discount rate for valuing equity securities.

Given that the constant dividend stream of preferred stock can be deemed perpetual, the perpetuity formula is employed to ascertain its value:

Preferred Stock Value = Dp / kp

where:
Dp = dividend per period
kp = the market's required return on the preferred stock

Common stock represents the residual claim on a company's assets once all other obligations have been
met. Unlike preferred stock, common stock typically doesn't guarantee fixed dividend payments. Instead,
dividend decisions for common stock are discretionary and made by the company's management. Given
the uncertainty of future cash flows, models are employed to estimate the value of common stock. In this
context, we will delve into three commonly used approaches, collectively known as dividend discount
models (DDMs), which we will explore further in the Equity Investments section. These models are as
follows:

1. Constant Future Dividend Assumption: Under this assumption, valuing common stock mirrors the
method used for preferred stock, employing the perpetuity formula.

2. Constant Growth Rate of Dividends Assumption: With this assumption, the constant growth DDM, often
referred to as the Gordon growth model, is applied. In this model, the value of a common share is
expressed as follows:
V0 = D1 / (ke − g)

where:
V0 = value of a share this period
D1 = dividend expected to be paid next period
ke = required return on common equity
g = constant growth rate of dividends

3. Changing Growth Rate of Dividends Assumption: Here dividends are assumed to grow at different rates over different stages of the company's life, and the share value is the sum of the present values of the dividends in each stage (a multistage DDM).
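A minimal sketch of the perpetuity and constant-growth formulas above; the Gordon-model inputs (D1 = $2.00, ke = 10%, g = 4%) are illustrative assumptions, not curriculum figures:

# Sketch: preferred stock and constant-growth (Gordon) common stock valuation.

def preferred_value(dp, kp):
    """Perpetuity: V = Dp / kp."""
    return dp / kp

def gordon_growth_value(d1, ke, g):
    """Constant-growth DDM: V0 = D1 / (ke - g); requires ke > g."""
    if ke <= g:
        raise ValueError("Required return must exceed the growth rate.")
    return d1 / (ke - g)

print(round(preferred_value(4.00, 0.09), 2))           # 44.44 (matches Q1 below)
print(round(gordon_growth_value(2.00, 0.10, 0.04), 2)) # 33.33 (assumed inputs)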

1) FIL Corporation preferred stock is expected to pay a $4 annual dividend in perpetuity. If the required rate of return on an equivalent investment is 9%, one share of FIL preferred should be worth:
A. $44.44.
B. $46.44.
C. $22.22.

2) Grover Company wants to issue a $10 million face value of 10-year bonds with an
annual coupon rate of 5%. If the investors’ required yield on Grover’s bonds is 6%,
the amount the company will receive when it issues these bonds (ignoring
transactions costs) will be:
A. less than $10 million.
B. equal to $10 million.
C. greater than $10 million.

Solutions:
1. A is correct as : 4 / 0.09 = $44.44

2. A is correct: as the required yield is greater than the coupon rate, the present value of the bonds is less than their face value. N = 10; I/Y = 6; PMT = 0.05 × $10,000,000 = $500,000; FV = $10,000,000; and CPT PV = −$9,263,991.

LOS 2b: Calculate and interpret the implied return of fixed-income instruments given
the present value (PV) and cash flows
Implied Return for Fixed-Income Instruments

The implied return for fixed-income instruments refers to the expected rate of return that investors
anticipate from holding such instruments. It is a forward-looking estimate based on the current market
price of the instrument and its future cash flows, including interest payments and principal repayment.

Investors can derive the implied return by solving for the discount rate or yield that equates the present
value of all expected cash flows from the fixed-income instrument to its current market price. In other
words, the implied return is the interest rate at which the discounted future cash flows match the
instrument's current price. This implied return rests on the assumption that coupon payments are made on time and are reinvested at the same YTM.

In the case of a discount bond or instrument, an investor receives only a single cash flow (the par value) at maturity, with the difference (FV − PV) representing the interest earned; the implied return is the discount rate that compounds PV up to FV over the instrument's life.

Unlike discount bonds, fixed-income instruments that pay periodic interest have cash flows throughout
their life until maturity. The uniform discount rate (or internal rate of return) for all promised cash flows
is the YTM, a single implied market discount rate for all cash flows regardless of timing as YTM assumes
an investor expects to receive all promised cash flows through maturity and reinvest coupon payments at
the same YTM.

LOS 2c: Explain the cash flow additivity principle, its importance for the no-arbitrage condition, and its use in calculating implied forward interest rates, forward exchange rates, and option values.
The cash flow additivity principle asserts that the present value (PV) of a cash flow sequence equals the
cumulative sum of the PVs of its individual cash flows. Whether dealing with two distinct cash flow series
or splitting a single series into segments, the combined PVs remain consistent. This principle holds true
even when aggregating cash flows due at the same time. In essence, the PV of the whole is equivalent to
the sum of its components, offering flexibility in segmenting or consolidating cash flows.
The cash flow additivity principle is a fundamental concept utilized extensively in various pricing models
covered in the Level I CFA curriculum. It serves as the foundation for the no-arbitrage principle, also known
as the "law of one price." This principle asserts that if two sets of future cash flows are identical under all
circumstances, they should have the same price today. If a disparity exists, investors will swiftly exploit
the opportunity by purchasing the lower-priced set and selling the higher-priced one, thereby equalizing
their prices.
Three instances of valuation hinging on the no-arbitrage condition are forward interest rates, forward
exchange rates, and option pricing through a binomial model. We will delve deeper into each of these
examples when we delve into related concepts in the Fixed Income, Economics, and Derivatives topic
areas. At this point, it's essential to focus on the application of the principle that equivalent future cash
flows necessitate equivalent present values.
Forward Interest Rates
A forward interest rate represents the interest rate applicable to a loan that will be initiated at a future
date. The notation for a forward interest rate must specify both the loan's duration and the point in the
future when the funds will be borrowed. For instance, 1y1y denotes the rate for a 1-year loan to be
obtained one year from the present, while 2y1y refers to the rate for a 1-year loan scheduled two years
from now. Similarly, 3y2y signifies the 2-year forward rate three years into the future, and so forth.

On the other hand, a spot interest rate pertains to the interest rate for a loan set to occur today. In this
context, we will represent a 1-year rate today as S1, a 2-year rate today as S2, and so forth.
The cash flow additivity principle is applicable here in the sense that borrowing funds for three years at
the 3-year spot rate, or borrowing for one-year periods across three consecutive years, should yield the
same cost today. This relationship is depicted as follows: (1 + S3)^3 = (1 + S1)(1 + 1y1y)(1 + 2y1y). In reality,
any combination of spot and forward interest rates encompassing the same time span should yield the
same cost. By employing this concept, we can deduce implied forward rates from observable spot rates
in the fixed-income markets.
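A one-line sketch of backing an implied forward rate out of spot rates; the 4% and 5% spot rates below are assumed for illustration:

# Sketch: implied 1y1y forward rate from (1 + S2)^2 = (1 + S1)(1 + 1y1y).
s1, s2 = 0.04, 0.05  # assumed 1-year and 2-year spot rates

fwd_1y1y = (1 + s2) ** 2 / (1 + s1) - 1
print(f"Implied 1y1y: {fwd_1y1y:.4%}")  # ~6.0096%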
Forward Currency Exchange Rates
An exchange rate denotes the value of one country's currency in terms of another country's currency. For
instance, an exchange rate of 1.416 USD/EUR signifies that one euro (EUR) has a value of 1.416 U.S. dollars
(USD). In the Level I CFA curriculum, the currency in the numerator (USD in this example) is referred to as
the price currency, while the one in the denominator (EUR in this example) is known as the base currency.
Similar to interest rates, exchange rates can be presented as spot rates for immediate currency exchanges
or as forward rates for currency exchanges at a later date.
The percentage disparity between forward and spot exchange rates generally mirrors the difference
between the interest rates of the two countries. This alignment arises due to the existence of an arbitrage
opportunity that offers riskless profit when this relationship is violated.
For spot and forward rates presented as price currency/base currency, the no-arbitrage relationship can be expressed as follows:

Forward / Spot = [1 + interest rate (price currency)] / [1 + interest rate (base currency)]
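A matching sketch of this relationship; the spot quote reuses the 1.416 USD/EUR example from the text, while the two interest rates are assumptions:

# Sketch: no-arbitrage forward FX rate, quoted as price/base (USD/EUR here).
spot = 1.416                  # USD per EUR
i_price, i_base = 0.03, 0.01  # assumed 1-year USD and EUR interest rates

forward = spot * (1 + i_price) / (1 + i_base)
print(f"Implied 1-year forward: {forward:.4f} USD/EUR")  # ~1.4440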

Statistical Measures of Asset Returns

Measures of Central Tendency

LOS 3a: Calculate, interpret and evaluate measures of central tendency and location to address
an investment problem.

Measures of Central Tendency

If you had a portfolio of 25 shares, would you recite the return of every share to anyone who asks?
No, you wouldn't!
You would simply tell them the average return you earned on that portfolio.
Measures of central tendency help us identify the center, average, or most frequent observation in a data set.
There are three measures of central tendency:
⮚ Mean
⮚ Median
⮚ Mode
Measures of central tendency help summarize an entire data set with a single value.

The Arithmetic Mean/ Mean:


It is simply the summation (addition) of all observations divided by the total number of observations.

Population Mean:
It is denoted by the Greek letter μ.

μ = (Σ Xi) / N, with the sum taken over all N values

The population mean takes into consideration all the values of the dataset.
A population has only one population mean.
N is the size of the population.

Sample Mean
It is denoted by x̄.
It is the average of all the observations in a sample.
We calculate the sample mean by adding all the observations and dividing by the total number of observations.

x̄ = (Σ xi) / n

n = sample size

The Median
The Median is the midpoint of a data set.
It divides the data into two equal parts. This suggests that half the data is above the median and half is
below the median.
To calculate the median, we must arrange the data in either ascending or descending order and then
select the middlemost value. The Median is not affected by outliers. If there are outliers in a data set, the
median is a better measure of central tendency.
The Mode
The most frequently occurring observation in a data set is called Mode.
It is possible to have more than one mode or no mode at all.
A distribution with 2 modes is called bimodal, with 3 modes is called trimodal.

Illustration

Calculate the mean, median, and mode for the following dataset.
4, 5, 10, 2, 6, 3, 8, 4, 2, 4, 9, 1, 5, 8, 1, 8, 3, 4, 7 and 6.

Solution
Sample size = n = 20

x̄ = (4 + 5 + 10 + 2 + 6 + 3 + 8 + 4 + 2 + 4 + 9 + 1 + 5 + 8 + 1 + 8 + 3 + 4 + 7 + 6) / 20 = 100 / 20 = 5

To calculate the median we first need to arrange the data in ascending order:
1, 1, 2, 2, 3, 3, 4, 4, 4, 4, 5, 5, 6, 6, 7, 8, 8, 8, 9, 10

Median = the (20 + 1)/2 th = 10.5th observation = (4 + 5) / 2 = 4.5

Mode = the most frequent observation = 4
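A quick verification of the illustration with Python's statistics module:

# Verification sketch for the central tendency illustration above.
from statistics import mean, median, mode

data = [4, 5, 10, 2, 6, 3, 8, 4, 2, 4, 9, 1, 5, 8, 1, 8, 3, 4, 7, 6]
print(mean(data), median(data), mode(data))  # 5, 4.5, 4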

Measures of Location
Having discussed measures of central tendency, let's now explore an approach to describing the location
of data by identifying values at or below which specific proportions of the data are distributed. For
instance, knowing the 25th, 50th (median), and 75th percentiles of the annual returns on a portfolio
provides a succinct summary of the distribution of those returns.
Statisticians use the term "quantile" to refer to any value at or below which a stated fraction of the data
lies. Quantiles help us understand the spread and dispersion of the data and provide valuable insights into
its distribution.
In addition to the median, we can establish other dividing lines to split a distribution of data into smaller
segments. Quartiles divide the data into four equal parts, quintiles into five equal parts, deciles into ten
equal parts, and percentiles into one hundred equal parts.
The interquartile range (IQR) measures the spread of the middle 50% of the data and is calculated as the
difference between the third quartile (Q3) and the first quartile (Q1). Mathematically, IQR = Q3 - Q1.
The IQR is useful for understanding the central dispersion of the data and is less sensitive to outliers
compared to the range or standard deviation.
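A short sketch computing the quartiles and IQR for the data set used earlier in this reading; note that quantile conventions differ slightly across methods and software:

# Sketch: quartiles and interquartile range (Python 3.8+).
from statistics import quantiles

data = [1, 1, 2, 2, 3, 3, 4, 4, 4, 4, 5, 5, 6, 6, 7, 8, 8, 8, 9, 10]
q1, q2, q3 = quantiles(data, n=4)  # default "exclusive" method
print(q1, q2, q3, "IQR =", q3 - q1)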
A box and whisker plot, also known as a box plot, is a graphical representation used to visualize the
distribution of data and to understand key statistics, such as the quartiles, range, and potential outliers.
It provides a clear and concise summary of the data's spread and central tendency.

Interpretation:
1. Box: The box in the plot represents the interquartile range (IQR), which is the range of the middle 50%
of the data. The bottom edge of the box corresponds to the first quartile (Q1), and the top edge represents
the third quartile (Q3). The height of the box (Q3 - Q1) shows the spread of the central 50% of the data.

2. Median: Inside the box, a line marks the median (Q2), which is the value that separates the lower 50%
from the upper 50% of the data. It represents the center of the data distribution.

3. Whiskers: The whiskers extend from the edges of the box to the minimum and maximum values within
a certain range (usually 1.5 times the IQR). They show the range of the data, excluding potential outliers.

4. Outliers: Data points that fall outside the whiskers are plotted individually as points. These are
considered potential outliers, which are data values that deviate significantly from the rest of the data
points.

Box plots are particularly useful when dealing with large data sets or when comparing multiple data distributions side by side, allowing for quick insights into the data's characteristics.

Quantiles in Investment Practice


Quantiles are widely used in portfolio performance evaluation and investment strategy development.
Investment analysts rank assets, indexes, and portfolios based on their performance. They also assess
investment managers' performance relative to their peers using percentiles. Quantiles are crucial in
investment research as well, where data is divided based on objective characteristics like sales or market
capitalization. This division allows analysts to compare performance across different groups, aiding in
investment decision-making.

LOS 3b: Calculate, interpret, and evaluate measures of dispersion to address an investment problem

Measures of Dispersion
Is it possible for 100 people to have the same income?
Is it possible for all shares listed on the stock market to give the same returns?
Is it possible for all the students in a school to have the same height?
The answer to all these questions is no, so we have to study variability, or dispersion.
In finance and investment, there is always a tradeoff between risk and reward: the higher the risk, the higher the expected reward. Measures of central tendency (mean, median, and mode) are measures of the expected reward.
What about the risk? What is the measure of the risk?
Dispersion is the measure of risk. A risk-averse investor will try to keep risk low, as they fear losing their money.

Range
Have you ever been shopping with your mom?
If yes, you might know the typical question she asks the vendor: "What is the range of the products they are selling?"
The vendor replies by stating the highest and lowest prices in the range.
By asking this simple question, she knows where the prices of all other products lie.
The range is a straightforward measure. It tells you about the difference between the extreme values, and when used with other measures such as quartiles it provides useful information.

Range = Maximum value - Minimum value



Illustration
Calculate the range for the following data.
1, 1, 2, 2, 3, 3, 4, 4, 4, 4, 5, 5, 6, 6, 7, 8, 8, 8, 9, 10

Solution
Maximum value = 10
Minimum value = 1
Range = 10 - 1 = 9
This means there is a difference of 9 between the extreme values: the wider the range, the higher the volatility and fluctuations.
Mean Absolute Deviation
The range only uses two observations of a given set: the maximum and minimum values.
The mean absolute deviation (MAD), on the other hand, uses all the observations in a data set.
We know that Σ(Xi − x̄) = 0 because the positive deviations cancel the negative deviations.
To overcome this issue, we take the absolute values of the deviations, i.e., we ignore the signs.

MAD = (Σ |Xi − x̄|) / n

Where:
x̄ = sample mean
n = total number of observations

The mean absolute deviation shows the average amount by which the observations vary from the mean. A significant drawback of the mean absolute deviation is that no further mathematical analysis is possible because absolute values are used.

Illustration

Calculate the mean absolute deviation for the following data set.
7, 5, 6, 1, 3, 9, 2

Solution
To calculate the mean absolute deviation, we first calculate the mean of the dataset:

x̄ = (7 + 5 + 6 + 1 + 3 + 9 + 2) / 7 = 33 / 7 = 4.71

MAD = (|7 − 4.71| + |5 − 4.71| + |6 − 4.71| + |1 − 4.71| + |3 − 4.71| + |9 − 4.71| + |2 − 4.71|) / 7
    = (2.29 + 0.29 + 1.29 + 3.71 + 1.71 + 4.29 + 2.71) / 7 = 16.29 / 7 = 2.327

This means that, on average, the observations deviate from the mean by 2.327.
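A quick verification sketch for the MAD illustration:

# Verification sketch: mean absolute deviation of the data set above.
data = [7, 5, 6, 1, 3, 9, 2]
m = sum(data) / len(data)                        # ~4.714
mad = sum(abs(x - m) for x in data) / len(data)  # average absolute deviation
print(round(mad, 3))                             # ~2.327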

Variance and Standard Deviation


The mean absolute deviation uses absolute values of the deviations to calculate dispersion. But this is also a significant drawback, as no further mathematical analysis is possible once absolute values have been used.
Squaring is a viable alternative, since the square of a negative number is positive. So instead of the absolute deviations, we now use the squared deviations.
Variance is the average of the squared deviations around the mean. Since squared deviations are used, variance can never be negative.
Variance is calculated in squared units: if we calculate the variance of the total distance traveled by a car, we might get an answer of, say, 6 square kilometers, which is hard to interpret. So we standardize it by taking the square root of the variance.
The square root of the variance is called the standard deviation.
Population Variance and Population Standard Deviation
If every value of the population is known, we can compute the population variance.
Population variance is denoted by σ² (read as sigma squared).

σ² = Σ(Xi − μ)² / N

Where:
μ = population mean
N = size of the population

The population standard deviation is calculated using the following formula:

σ = √[Σ(Xi − μ)² / N]

Sample Variance and Sample Standard Deviation

In situations where you don't have access to the whole population, we have to use samples.

Sample variance is denoted by s².

s² = Σ(Xi − x̄)² / (n − 1)

Where:
x̄ = sample mean
n = sample size

You might have noticed that the denominator of the sample variance is n-1
Sample statistics should always be an unbiased estimator of the population parameter.
By using n-1 as the denominator, we make it a better estimator, and it remains unbiased.

The sample standard deviation is computed using the following formula:

s = √[Σ(Xi − x̄)² / (n − 1)]
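A short sketch contrasting the population (divide by N) and sample (divide by n − 1) forms, using the same data set as the illustrations in this section:

# Sketch: sample vs. population variance and standard deviation.
from statistics import pvariance, pstdev, variance, stdev

data = [7, 5, 6, 1, 3, 9, 2]
print(variance(data), stdev(data))    # sample (n-1):  ~8.238, ~2.870
print(pvariance(data), pstdev(data))  # population (N): ~7.061, ~2.657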

Downside Risk
Variance is used as a risk measure. It captures the fluctuations of observations both above and below the mean value.
Logically, though, is it really risk if the returns are above average?
In some situations, we only measure the fluctuations that are below the mean. This is called downside risk.
Target downside deviation, or target semi-deviation, is one way to calculate such risk.
A target value B is set, against which all the observations are measured. We are only concerned with the values below the target.

S_target = √[ Σ over all Xi < B of (Xi − B)² / (n − 1) ]

Where:
B = the target
Example
Returns: 20%, 14%, 17%, 18%, 22%. Calculate the target downside deviation for
B = mean and for B = 19%.
x̄ = 18.2%
Return Deviation from mean Deviation from target return
20% 20 – 18.2 = 1.8 20 – 19 = 1
14% 14 – 18.2 = -4.2 14 – 19 = -5
17% 17 – 18.2 = -1.2 17 – 19 = -2
18% 18 – 18.2 = -0.2 18 – 19 = -1
22% 22 – 18.2 = 3.8 22 – 19 = 3

S_18.2% = √[((−4.2)² + (−1.2)² + (−0.2)²) / (5 − 1)] = 2.19%

S_19% = √[((−5)² + (−2)² + (−1)²) / (5 − 1)] = 2.74%
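A small verification sketch for the target downside deviation example above:

# Verification sketch: target semi-deviation (returns in percent units).
import math

def target_semideviation(returns, target):
    downside = [(r - target) ** 2 for r in returns if r < target]
    return math.sqrt(sum(downside) / (len(returns) - 1))

returns = [20, 14, 17, 18, 22]
print(round(target_semideviation(returns, 18.2), 2))  # ~2.19
print(round(target_semideviation(returns, 19.0), 2))  # ~2.74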

Coefficient of Variation
Can we make a direct comparison between two asset classes simply by knowing the average return and the risk involved?
No. A direct comparison of raw dispersion across two asset classes won't give us meaningful results.
So, to evaluate these kinds of cases and make meaningful comparisons, we use relative dispersion.
When we use the word relative, we are referring to dispersion with respect to some quantity.
Relative dispersion is the amount of variability in a distribution relative to a reference point or benchmark.

We use the coefficient of variation (CV) to calculate relative dispersion:

CV = Sx / x̄

Here,
x̄ is the mean
Sx is the sample standard deviation
The CV measures relative dispersion with respect to the mean: it computes variability per unit of mean, or risk per unit of expected return.

Illustration

Calculate the sample variance, sample standard deviation, and the coefficient of variation for the dataset given below: 7, 5, 6, 1, 3, 9, 2.

Solution

The mean for this data is 4.71.

Variance:
s² = [(7 − 4.71)² + (5 − 4.71)² + (6 − 4.71)² + (1 − 4.71)² + (3 − 4.71)² + (9 − 4.71)² + (2 − 4.71)²] / (7 − 1)
   = [(2.29)² + (0.29)² + (1.29)² + (−3.71)² + (−1.71)² + (4.29)² + (−2.71)²] / 6 = 49.4287 / 6 = 8.2381

Standard deviation: s = √8.2381 = 2.8702

Coefficient of variation:
CV = Sx / x̄ = 2.8702 / 4.71 = 0.6093

The above analysis reveals that the data deviates from the mean by 2.8702 on average (one standard deviation) and has a relative dispersion of 0.6093 with respect to the mean.

This also says that there is variability in data of 0.6093 per one unit of the mean.

Skewness, Kurtosis, and Correlation

LOS 3c: interpret and evaluate measures of skewness and kurtosis to address an
investment problem
Skewness
When we plot the data points on a graph or chart, we always get some unique curve on the graph.
Is there some way we can predict how the graph might look before plotting the points on the graph?
Yes, we can predict the shape of the graph by something called skewness.
Skewness is a measure of asymmetry.
A distribution is said to be symmetrical if its shape is equal on both sides of its mean.

Positively skewed (right tail)


A positively skewed distribution will have more data in the initial intervals and fewer data in the later
intervals.
Because of this, there is a hump shape formed on the graph followed by a tail.
If the returns of a particular asset are positively skewed, there are frequent losses of small magnitude and
a few significant gains.

Negatively skewed (left tail)


A negatively skewed distribution will have more data in the later intervals and fewer data in the initial
intervals.
Because of this, there is a tail shape formed on the graph followed by a hump.

If the returns of a particular asset are negatively skewed, there are frequent gains of small magnitude and
a few significant losses.

No Skew/ symmetric distribution


Here the data is distributed equally on both sides of the mean.

Here 50% of the data is on the right side of the mean, and 50% is on the left side of the mean.

The formula for skewness:

Sample skewness = (1/n) × Σᵢ₌₁ⁿ (Xᵢ − x̄)³ / s³

Sample skewness > 0 implies a positively skewed distribution.


Sample skewness < 0 implies a negatively skewed distribution.
Sample skewness = 0 implies the data is symmetrical.
Absolute values over 0.5 are considered to be significant.

Interpret kurtosis.
Kurtosis
Skewness tells us about the shape of distributions. Did you observe the tails of each distribution?
Let us ask ourselves: are the tails important, and if yes, why?
Tails are used to study extreme events; these events can make or break an investor.
This is why we study kurtosis; it helps us determine the heaviness of a distribution's tails.

Leptokurtic distribution
The curve of a leptokurtic distribution is peaked and taller than a normal distribution.
It has fatter and heavier tails than a normal distribution.
Investment strategies with a leptokurtic return distribution tend to be riskier. A large proportion of
observations lie very close to the mean, but a larger-than-normal proportion also lie far away from the
mean.
The range of the observations is higher, and hence they are more volatile.

Platykurtic distribution
The curve of a platykurtic distribution is less peaked (flatter) than a normal curve.
It has thinner and lighter tails than a normal distribution.
Investment strategies that have a platykurtic curve tend to be less risky. This is because the observations
are clustered around the mean. The range of the observations is generally narrow, and hence they are
less volatile.

Mesokurtic distribution
It is the same as a normal distribution. It has a measured kurtosis of 3.

Sample kurtosis = (1/n) × Σᵢ₌₁ⁿ (Xᵢ − x̄)⁴ / s⁴

A normal distribution has a kurtosis of 3

Excess kurtosis = Sample kurtosis -3


Excess kurtosis > 0 implies the distribution is leptokurtic
Excess kurtosis < 0 implies the distribution is platykurtic
Excess kurtosis = 0 implies the distribution is mesokurtic
Absolute values in excess kurtosis over 1.0 are considered significant.

Illustration

Calculate the Skewness and Kurtosis for the following dataset.


4, 5, 10, 2, 6, 3, 8, 4, 2, 4, 9, 1 and 5

Solution
To calculate the Skewness and Kurtosis, we need to compute the mean and the standard deviation of the
dataset.
x̄ = 63 / 13 = 4.8462

S_x = √[ Σ (Xᵢ − x̄)² / (n − 1) ] = √(91.692 / 12) = 2.764

X       (X − x̄)     (X − x̄)²     (X − x̄)³     (X − x̄)⁴


4 -0.8462 0.7160 -0.6058 0.5126
5 0.1538 0.0237 0.0036 0.0006
10 5.1538 26.5621 136.8971 705.5468
2 -2.8462 8.1006 -23.0555 65.6196
6 1.1538 1.3314 1.5362 1.7725
3 -1.8462 3.4083 -6.2922 11.6164
8 3.1538 9.9467 31.3705 98.9377
4 -0.8462 0.7160 -0.6058 0.5126
2 -2.8462 8.1006 -23.0555 65.6196
4 -0.8462 0.7160 -0.6058 0.5126
9 4.1538 17.2544 71.6723 297.7156
1 -3.8462 14.7929 -56.8958 218.8299
5 0.1538 0.0237 0.0036 0.0006
63 0.0000 91.6923 130.3669 1467.1971

Skewness = (1/13) × 130.3669 / 2.764³ = 0.4749

This value is below 0.5, which suggests that the skew is not significant and the data might be approximately
normally distributed.
Kurtosis = (1/13) × 1467.1971 / 2.764⁴ = 1.9337

Excess kurtosis = 1.9337 − 3 = −1.0663


This suggests that the data is platykurtic in nature, which means it has thinner tails compared to a normal
distribution.
The central tendencies of this data are:
Mean = 4.85
Median = 4
Mode = 4
Since the absolute value of skewness is below 0.5 (only values above 0.5 are considered significant), we can
infer that the data is approximately normally distributed.
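For readers who want to verify the table above, here is a short Python sketch of the 1/n skewness and
kurtosis estimators shown earlier (the function name is illustrative):

```python
def sample_skew_kurt(data):
    # Implements the simple 1/n estimators from the formulas above,
    # with s computed as the usual n - 1 sample standard deviation.
    n = len(data)
    mean = sum(data) / n
    s = (sum((x - mean) ** 2 for x in data) / (n - 1)) ** 0.5
    skew = sum((x - mean) ** 3 for x in data) / (n * s ** 3)
    kurt = sum((x - mean) ** 4 for x in data) / (n * s ** 4)
    return skew, kurt, kurt - 3  # skewness, kurtosis, excess kurtosis

data = [4, 5, 10, 2, 6, 3, 8, 4, 2, 4, 9, 1, 5]
print(sample_skew_kurt(data))  # ~(0.475, 1.933, -1.067)
```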

LOS 3d: Interpret correlation between two variables to address an investment problem

Covariance
Have you ever wondered what the relationship is between shares of Reliance industry and Nifty 50 or the
relationship between shares of HDFC Bank and Kotak Mahindra?
Variance tells us about the dispersion or volatility of only one variable. However, covariances tell us how
the values of one random variable depend on the other.
It calculates the interdependence of two random variables.

Cov(Rᵢ, Rⱼ) = E{[Rᵢ − E(Rᵢ)][Rⱼ − E(Rⱼ)]}


Here
Ri = return of asset i
Rj = return of asset j
E(Ri ) = expected value of Ri
E(Rj ) = expected value of Rj

The covariance of Rᵢ with itself is equal to the variance of Rᵢ:

Cov(Rᵢ, Rᵢ) = E{[Rᵢ − E(Rᵢ)][Rᵢ − E(Rᵢ)]} = Var(Rᵢ)


Covariance may have the following range
−∞ < 𝐶𝑜𝑣(𝑅𝑗 , 𝑅𝑖 ) < ∞

Covariance Matrix

Cov(A, B) = Cov(B, A)

Population covariance:
Cov(X, Y) = Σ (x − x̄)(y − ȳ) / n

Sample Covariance
S_{X,Y} = Σ [(Xᵢ − X̄)(Yᵢ − Ȳ)] / (n − 1)

Where,
𝑋𝑖 = an observation of variable X
𝑌𝑖 = an observation of variable Y
𝑋 = mean of variable X
𝑌 = mean of variable Y
𝑛 = number of periods

Correlation Coefficient
There is an inherent flaw in covariance: it can take extremely large values, which makes it difficult to
interpret.
To overcome this problem, we study another measure called the correlation coefficient, or correlation.
Correlation measures the strength of a linear relationship between two random variables.
It is denoted by the Greek letter rho, ρ_{x,y}, or by r_{xy}.

It is a unit-free measure and ranges from −1 to 1, making it easy to interpret the results.
−1 ≤ r_{xy} ≤ 1

Corr(Rᵢ, Rⱼ) = Cov(Rᵢ, Rⱼ) / [σ(Rᵢ) σ(Rⱼ)]

Drawbacks of Correlation
The correlation coefficient does not capture nonlinear relationships.
There may be random variables that are statistically related but not actually dependent.
For example, 100 days of Nifty 50 data and rainfall in Mumbai might show a strong correlation purely by
accident.

Presence of outliers
Extreme values in the data can distort the correlation coefficient.

As the graph above illustrates, removing an outlier from the calculation can change the computed
correlation significantly. Such data points should be investigated further to confirm whether they are valid
or merely caused by randomness in the data.

You are provided with the following information:

Σᵢ₌₁¹⁰ xᵢ = 52,  Σᵢ₌₁¹⁰ (xᵢ − x̄)² = 69.6,  Σᵢ₌₁¹⁰ (xᵢ − x̄)³ = −39.84,  and  Σᵢ₌₁¹⁰ (xᵢ − x̄)⁴ = 853.152

1. Calculate the sample standard deviation from the information provided.

A. 2.451
B. 2.638
C. 2.781

2. Calculate the coefficient of variation with the information provided above.

A. 0.5073
B. 0.4713
C. 0.5348

3. Calculate the sample skewness with the information provided above.

A. -0.27057
B. -0.1852
C. -0.21701

4. Calculate the sample kurtosis with the information provided above.

A. 1.097
B. 1.355
C. 1.818

Answers

1. We can calculate the sample standard deviation using the following formula:
s = √[ Σᵢ₌₁ⁿ (xᵢ − x̄)² / (n − 1) ] = √(69.6 / 9) = 2.781

2. The coefficient of variation can be calculated as
x̄ = 52 / 10 = 5.2

CV = s / x̄ = 2.781 / 5.2 = 0.5348

3. Sample skewness can be calculated as

(1/n) × Σ (xᵢ − x̄)³ / s³ = (1/10) × (−39.84) / 2.781³ = −0.1852

4. Kurtosis is calculated using the following formula:

(1/n) × Σ (xᵢ − x̄)⁴ / s⁴ = (1/10) × 853.152 / 2.781⁴ = 1.4263

Probability Trees and Conditional Expectations

Conditional Expectations and Expected Value

LOS 4a: Calculate expected values, variances, and standard deviations and demonstrate
their application to investment problems

Expected Value
The expected value of a Random Variable is the weighted average of all possible outcomes of a Random
Variable.
Here the weights used are the probabilities of the outcomes.

E(x) = 𝛴 P(xᵢ) xᵢ

= P(x1) x1 + P(x₂) x₂ + P( x₃) x₃ +....+ P( xₙ) xₙ

Similarly, if we use the conditional probability in the above formula, we get conditional expectation
E [X| Y] = P [ X₁ |Y] x₁ + P[ X₂ |Y] x₂ +.....+ P [Xₙ |Y] xₙ
Conditional expectations are calculated when the outcomes are contingent on some other event happening.
These are best understood using tree diagrams.

Example:

Calculate the expected EPS, given the joint probabilities from the probability tree: an EPS of $120 with
probability 0.42 and $105 with probability 0.18 (good economy), and an EPS of $90 with probability 0.12
and $50 with probability 0.28 (bad economy).

E[EPS] = Σ P(outcome) × EPS(outcome)

Expected EPS = 120 × 0.42 + 105 × 0.18 + 90 × 0.12 + 50 × 0.28
= $94.1

Variance and standard deviation measure the dispersion of outcomes around the expected value or
forecast.
The variance of a random variable is the expected value (the probability – weighted average) of squared
deviations from the random variable’s expected value.
σ 2(X) = E[X − E(X)]2

Variance is a non-negative number (greater than or equal to 0) since it is calculated as the sum of squared
terms. A variance of 0 signifies that there is no dispersion or risk in the data. In this case, the outcome is
certain, and the quantity X is not subject to randomness. On the other hand, a variance greater than 0
indicates the presence of dispersion in the outcomes. As the variance increases, the dispersion also
increases, assuming all other factors remain constant.

Standard deviation is the square root of variance. If the random variable is return in percent, standard
deviation of return is also in units of percent

LOS 4b: Formulate an investment problem as a probability tree and explain the use of
conditional expectations in investment application

Expected values or expected returns can be calculated using conditional probabilities, which are based on
the likelihood of specific events occurring. Conditional expected values are contingent on the outcome of
some other event, and they are used by analysts to revise their expectations when new information
becomes available.

For instance, let's consider the effect of an import tariff on oil on the returns of a domestic oil-producing
company's stock. The stock's conditional expected return, given that the government imposes the tariff,
will be higher than the conditional expected return if the tariff is not imposed.

Using the total probability rule, we can estimate the unconditional expected return on the stock by
summing the expected return given no tariff, multiplied by the probability that a tariff will not be enacted,
and the expected return given a tariff, multiplied by the probability that a tariff will be enacted. This
approach allows us to account for the uncertainties surrounding the tariff's implementation and its
potential impact on the stock's returns.

We can understand this better with the help of an example:


Assume a domestic oil-producing company's stock has two possible scenarios:

Scenario 1: No Import Tariff on Oil


Expected return given no tariff: 8%
Probability of no tariff: 70%

Scenario 2: Import Tariff on Oil


Expected return given tariff: 12%
Probability of tariff: 30%

Using the total probability rule, we can estimate the unconditional expected return on the stock as
follows:

Unconditional Expected Return = (Expected return given no tariff * Probability of no tariff) + (Expected
return given tariff * Probability of tariff)

Unconditional Expected Return = (8% * 70%) + (12% * 30%)


Unconditional Expected Return = 5.6% + 3.6%
Unconditional Expected Return = 9.2%

LOS 4c: Calculate and interpret an updated probability in an investment setting using
Bayes’ formula

Bayes' Formula
Can the arrival of new information affect a previously calculated probability?

Let's understand this with the help of an example. Say you learn from someone that the share price of
ONGC is going to increase.

Example:

The share price can increase in two scenarios: when the price of oil increases and when the price of oil
decreases.
We now know that the share price will rise, but will the probabilities of the events leading to the increase
in the share price remain the same?
No, the probabilities need to be updated given the arrival of new information.
What is the updated probability that oil prices decreased, given that the share price of ONGC increased?

P[oil ↓ | ONGC ↑] = P[ONGC ↑ ∩ oil ↓] / (P[ONGC ↑ ∩ oil ↓] + P[ONGC ↑ ∩ oil ↑])

= 56% / (56% + 12%)

= 56 / 68

= 82.35%

So, the updated probability that the prices of oil decrease given the share prices of ONGC increase is
82.35%
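A minimal sketch of this Bayes update in Python, using the joint probabilities assumed in the example:

```python
# Joint probabilities taken from the example's probability tree
p_ongc_up_oil_down = 0.56  # P[ONGC up AND oil down]
p_ongc_up_oil_up = 0.12    # P[ONGC up AND oil up]

# Bayes' formula: P[oil down | ONGC up] = P[ONGC up AND oil down] / P[ONGC up]
p_ongc_up = p_ongc_up_oil_down + p_ongc_up_oil_up
posterior = p_ongc_up_oil_down / p_ongc_up
print(posterior)  # 0.8235..., i.e., 82.35%
```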

Portfolio Mathematics
LOS 5a: Calculate and interpret the expected value, variance, standard deviation,
covariances, and correlations of portfolio returns
The expected return on the portfolio, E(Rp), is the weighted average of the expected returns on the
component securities, using their respective proportions of the portfolio (in currency units) as weights.
The expected value of the portfolio return is simply

E(Rp) = Σᵢ₌₁ⁿ Wᵢ E(Rᵢ)

Where,
Wᵢ = proportion of asset i
Rᵢ = return on asset i

E(Rp) = W₁R₁ + W₂R₂ + W₃R₃ + ... + WₙRₙ

Portfolio Variance
Returns on assets in a market are interdependent, and each also varies on its own. To calculate the
portfolio variance, we use the following formula:

Var(Rp) = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ Wᵢ Wⱼ Cov(Rᵢ, Rⱼ)

So, if we were to calculate the portfolio variance of the returns of assets A and B:

Var(Rp) = Wa·Wa·Cov(Ra, Ra) + Wb·Wb·Cov(Rb, Rb) + Wa·Wb·Cov(Ra, Rb) + Wb·Wa·Cov(Rb, Ra)

The last two terms in the formula are equal, since Cov(Ra, Rb) = Cov(Rb, Ra). Therefore,

Var(Rp) = Wa² Var(Ra) + Wb² Var(Rb) + 2 Wa Wb Cov(Ra, Rb)

= Wa² σ²(Ra) + Wb² σ²(Rb) + 2 Wa Wb σ(Ra) σ(Rb) ρ(Ra, Rb)



Illustration

Asset    Wi     Ri     S.d.

A        30%    20%    10%

B        70%    30%    8%

ρAB = 0.75
Calculate the expected portfolio return and the portfolio standard deviation.

Solution

E(Rp) = Wa Ra + Wb Rb
= 0.3 × 0.2 + 0.7 × 0.3
= 27%

Sd(Rp) = √[ Wa² σ²(Ra) + Wb² σ²(Rb) + 2 Wa Wb σ(Ra) σ(Rb) ρ(Ra, Rb) ]

= √(0.3² × 0.1² + 0.7² × 0.08² + 2 × 0.3 × 0.7 × 0.1 × 0.08 × 0.75)

= √0.006556
= 8.0969%
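The two-asset formulas above translate directly into code; a minimal Python sketch using the illustration's
inputs:

```python
import math

w_a, w_b = 0.30, 0.70    # portfolio weights
r_a, r_b = 0.20, 0.30    # expected returns
sd_a, sd_b = 0.10, 0.08  # standard deviations
rho = 0.75               # correlation between A and B

exp_return = w_a * r_a + w_b * r_b  # weighted average: 0.27
cov_ab = rho * sd_a * sd_b          # Cov(Ra, Rb) = rho * sd_a * sd_b
var_p = (w_a ** 2) * sd_a ** 2 + (w_b ** 2) * sd_b ** 2 + 2 * w_a * w_b * cov_ab
print(exp_return, math.sqrt(var_p))  # 0.27, ~0.0810 (8.10%)
```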

Illustration
Calculate E(X), E(Y), σx, σy, and r_{xy} for the given data.

Economy Probability GDP (X) Nifty return (Y)

Good 0.6 8 16%

Normal 0.3 6.5 12%

Poor 0.1 4 2%

Solution
E(X) = Σ P(X) · X
= 0.6 × 8 + 0.3 × 6.5 + 0.1 × 4
= 7.15
E(Y) = Σ P(Y) · Y
= 0.6 × 16% + 0.3 × 12% + 0.1 × 2%
= 13.4%
Var(X) = Σ P(X) [X − E(X)]²
= 0.6 (8 − 7.15)² + 0.3 (6.5 − 7.15)² + 0.1 (4 − 7.15)²
= 0.4335 + 0.12675 + 0.99225
= 1.5525
σx = √1.5525 = 1.24599
Var(Y) = Σ P(Y) [Y − E(Y)]²
= 0.6 [0.16 − 0.134]² + 0.3 [0.12 − 0.134]² + 0.1 [0.02 − 0.134]²
= 0.0004056 + 0.0000588 + 0.0012996
= 0.001764
σy = √0.001764
= 4.2%
For the calculation of r_{x,y} we need to calculate Cov(X, Y):

r_{xy} = Cov(X, Y) / (σx σy)

Cov(X, Y) = E{ [X − E(X)] [Y − E(Y)] }

= 0.6 (8 − 7.15) × (0.16 − 0.134) + 0.3 (6.5 − 7.15) × (0.12 − 0.134) + 0.1 (4 − 7.15) × (0.02 − 0.134)
= 0.01326 + 0.00273 + 0.03591
= 0.0519

r_{xy} = 0.0519 / (1.24599 × 0.042)
= 0.9918

Since the magnitude of r is high (close to 1), there is a strong linear relationship, and the positive sign
indicates that it is a positive relationship.
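The probability-weighted moments above can be checked with a few lines of Python (a sketch, not
curriculum code):

```python
probs = [0.6, 0.3, 0.1]
x = [8, 6.5, 4]         # GDP scenarios
y = [0.16, 0.12, 0.02]  # Nifty return scenarios

ex = sum(p * xi for p, xi in zip(probs, x))                 # 7.15
ey = sum(p * yi for p, yi in zip(probs, y))                 # 0.134
var_x = sum(p * (xi - ex) ** 2 for p, xi in zip(probs, x))  # 1.5525
var_y = sum(p * (yi - ey) ** 2 for p, yi in zip(probs, y))  # 0.001764
cov = sum(p * (xi - ex) * (yi - ey) for p, xi, yi in zip(probs, x, y))  # 0.0519
corr = cov / (var_x ** 0.5 * var_y ** 0.5)                  # ~0.992
print(ex, ey, var_x, var_y, cov, corr)
```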

Comparison of historical (sample) and probability-weighted (expected value) formulas:

              Historical formula           Expected value formula

Mean          x̄ = Σx / n                   E(X) = Σ P(x) · x

Variance      Σ (x − x̄)² / n               Var(X) = Σ P(x) [X − E(X)]²

Covariance    Σ (x − x̄)(y − ȳ) / n         Cov(X, Y) = E{[X − E(X)][Y − E(Y)]}

1. Calculate Cov(Ra, Rb) given the correlation coefficient ρab = 0.75, σ(Ra) = 25%, and σ(Rb) = 65%.

A. 12.19%
B. 13.18%
C. 16.14%

2. Calculate the expected value of the EPS in the scenario given below

Economy Probability EPS

Good 0.2 $5.75

Normal 0.6 $4

Poor 0.2 $2

A. $3.916
B. $3.95
C. $3.93

3. Calculate the portfolio standard deviation for the following scenario provided that 𝜌𝑎𝑏 =0.6.

Asset Wi Standard deviation

A 0.4 0.04

B 0.6 0.02

A. 5.15%
B. 2.52%
C. 4.27%

Solution

1. ρab = Cov(Ra, Rb) / [σ(Ra) × σ(Rb)]

Cov(Ra, Rb) = ρab × σ(Ra) × σ(Rb)

= 0.75 × 0.25 × 0.65

= 0.12187
= 12.19%
2. E(X) = Σ p(x) · X

E(EPS) = P[Good] × EPS_Good + P[Normal] × EPS_Normal + P[Poor] × EPS_Poor

= 0.2 × 5.75 + 0.6 × 4 + 0.2 × 2
= $3.95

3. Portfolio variance = W_A² σ_A² + W_B² σ_B² + 2 × ρ_AB × W_A × W_B × σ_A × σ_B

= 0.4² × 0.04² + 0.6² × 0.02² + 2 × 0.6 × 0.4 × 0.6 × 0.04 × 0.02

= 0.0006304

Portfolio standard deviation = √0.0006304

= 0.0251078

= 2.52%

LOS 5c: Define shortfall risk, calculate the safety-first ratio, and select an optimal portfolio
using Roy’s safety-first criterion.

A bank offers a 5% return on the amount deposited. If you instead invest your money in a risky asset, the
general logic is that the asset should provide a higher rate of return than the bank, i.e., you want the return
to be greater than 5%.
So, the risk you face is that returns fall below the 5% threshold: if the asset's return is below 5%, you have
effectively made a loss, since the bank would have provided a guaranteed 5% return.
What are the chances that your return falls below the threshold?
This is called shortfall risk. It is the probability that your investment's return falls below a particular
threshold.
P[ Rp < RL ]
So, answer this: do we want our investment strategies to have higher or lower shortfall risk?
Obviously lower, because the lower the shortfall risk, the higher the chances of earning more than the
threshold return.
An optimal investment strategy will have minimal shortfall risk.
Roy's safety-first criterion states that the optimal portfolio is the one that minimizes the probability of
shortfall.
The first step: calculate the SF ratio.

SF Ratio = [E(Rp) − RL] / σp

The second step: choose the portfolio that has the highest SF ratio (SFR).
E.g.,
Asset A provides a return that follows an approximately normal distribution with a mean of 7% and a
standard deviation of 2%.
Asset B provides a return that follows an approximately normal distribution with a mean of 10% and a
standard deviation of 12%.
The bank provides a return of 5%.
Determine which investment is most desirable using Roy's safety-first criterion.
For A
E(RA) = 7%
σA = 2%
RL = 5%

SF Ratio for asset A = (7 − 5) / 2 = 1

P(z < −1) = 0.15866

For B
E(RB) = 10%
𝜎𝐵 = 12%
SF Ratio for asset B = (10 − 5) / 12 = 0.4167

P(z < −0.4167) = 0.33845
We can see that the probability of shortfall for asset B is greater than that of asset A.

If we visualize these values on a graph, the shaded area for investment B (compared with investment A)
indicates a higher probability that the return will fall below 5%.
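Under the normality assumption used above, both the SF ratio and the shortfall probability can be
computed with Python's standard library; a minimal sketch:

```python
from statistics import NormalDist

def sf_ratio(exp_return, sd, threshold):
    return (exp_return - threshold) / sd

for name, mu, sigma in [("A", 7, 2), ("B", 10, 12)]:
    sfr = sf_ratio(mu, sigma, 5)
    shortfall = NormalDist().cdf(-sfr)  # P(return < threshold) under normality
    print(name, round(sfr, 4), round(shortfall, 5))
# A: SFR = 1.0,    shortfall ~0.15866  <- higher SFR, so A is preferred
# B: SFR = 0.4167, shortfall ~0.33846
```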

Simulation Methods
LOS 6a: Explain the relationship between regular and lognormal distributions and why the
lognormal distributions are used to model asset prices.
Log Normal Distribution
Is it possible for a share price to have a negative value? And, as a matter of fact, is it possible for the value
of any asset to be negative?
No, share prices can't go negative.
So, is it correct to use a normal distribution, which allows negative outcomes, to model share prices?
To take care of the negative values, we use exponents.
Compute the following values:
2⁻¹ = 0.5
3⁻² = 1/9
7⁻² = 1/49
We get a positive value no matter whether the exponent is positive or negative: a positive base raised to
any power stays positive.
Let's say
X ~ N(μ, σ²)

Then e^X ~ Lognormal(μ, σ²)
A lognormal distribution is positively skewed. Unlike a normal distribution, whose outcomes can range
from −∞ to +∞, a lognormal distribution is bounded below by 0, as asset prices cannot be negative.
So far, we have established that we use the normal distribution to model returns, but to model asset
prices, we use the lognormal distribution.

Price Relatives
There is a chance that modeled returns on a particular investment go below −100%, which would imply
that the asset price goes below 0.
We can avoid this by using the lognormal distribution to model price relatives.
A price relative is simply the ratio of the end-of-period price to the beginning-of-period price:
Price Relative = S₁ / S₀ = (1 + holding period return)
A price relative of 0 corresponds to a holding period return of −100%.
For e.g.,

Calculate the HPR if S₁ = 100 and S₀ = 80.

S₁ / S₀ = 1 + HPR

100 / 80 = 1 + HPR
HPR = 1.25 − 1
= 25%

LOS 6b: Describe Monte Carlo simulation and explain how it can be used in investment
applications
There can be several unforeseen events that can take place in a financial setting. We should be ready for
any such events.
But how can we assess such events even before they happen?
We run simulations to model how such events might play out.
Monte Carlo Simulation
This technique is based on the repeated generation of values of a security/share/asset by changing the risk
factors that affect the values of the securities.
Each risk factor is a random variable with a probability distribution function of its own, with parameters
that the analyst specifies.
Carrying out these calculations by hand is very tedious, so a computer is used to generate the random
values based on the assigned probability distribution functions.
These random values are used to price the security/asset/stock, etc.
This procedure is repeated many times, possibly 1,000 times or more.
After collecting the values of the securities generated from the various random draws, inferences are
drawn about the distribution of the simulated asset values, their expected value, and their variance.
Monte Carlo simulations are used to:
⮚ Value complex securities

⮚ Simulate pension fund assets/liabilities over time

⮚ Value portfolios

⮚ Calculate value at risk (VaR) estimates
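A minimal sketch of the procedure described above, with a single risk factor and hypothetical parameters
(the 8%/20% return assumptions and starting price are illustrative, not from the curriculum):

```python
import math
import random
import statistics

S0, mu, sigma, n_trials = 100.0, 0.08, 0.20, 10_000  # hypothetical inputs

random.seed(42)
terminal_prices = []
for _ in range(n_trials):
    r = random.gauss(mu, sigma)               # draw the risk factor (a cc return)
    terminal_prices.append(S0 * math.exp(r))  # exponentiation keeps the price positive

# Draw inferences about the distribution of the simulated asset values
print(statistics.mean(terminal_prices))
print(statistics.stdev(terminal_prices))
```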

Historical Simulation
Instead of specifying distributions for the risk factors, we use actual past changes in the risk factors over
time.
All the changes that occurred over prior periods are used.
Each iteration randomly selects past changes in the risk factors and uses them to calculate the asset's value.
The actual distribution of the risk factors is available, and hence it does not need to be estimated.

But ask yourself the question: are past changes in risk factors a reasonable estimation of what future
changes might be?
No, because there might be such rare events that might not have occurred in the past before.
This means it cannot answer a ‘what if’ question but a Monte Carlo simulation can.
The limitation of a Monte Carlo simulation is that it is tough to calculate the values as the methods are far
too complex. Also, if reasonable assumptions are not used, the values generated will be of no use.

Asset A B C
E[R] 5% 6% 7%
𝝈R 8% 4% 10%

1. Given a threshold of 5%, use Roy's safety-first criterion to determine the optimal asset.

A. A
B. B
C. C

2. For a lognormal distribution, calculate P[X < 0].

A. 0.5
B. 0.25
C. 0
3. Calculate Rcc if the price of a stock today is $150 and three years ago it was $25.

A. 21.45%
B. 59.72%
C. 81.71%

Answers
1.

Assets      A                B                  C
SF ratio    (5 − 5)/8 = 0    (6 − 5)/4 = 0.25   (7 − 5)/10 = 0.2

The SF ratio of asset B is greater than that of any other asset, so it is the optimal asset to invest in.

2. By definition, the lognormal distribution is bounded below by zero; hence P[X < 0] = 0.

3. S₁ / S₀ = 1 + HPR₃, so 150 / 25 = 1 + HPR₃

6 = 1 + HPR₃

ln(6) = Rcc × 3, so Rcc = ln(6) / 3 = 59.72%



Sampling and Estimation

Sampling Methods, Central Limit Theorem, and Standard Error

LOS 7a: Compare and contrast simple random, stratified random, cluster, convenience,
and judgmental sampling and their implications for sampling error in an investment
problem

If you want to know how the market performed on a particular day, will you study all the stocks that exist
in the world?
You won’t do that. Instead, you will analyze the market index to draw some conclusions.
All the stocks present in the market form a population, whereas stocks present in the market index form
a sample.
Studying the entire population can be impractical or impossible in some cases. So we study samples
collected from the population to draw inferences about the population parameters.
Simple Random Sampling
This is a method used to collect samples from any population. Here all the items or persons in a population
are equally likely to be included in the sample.
For example
You need a sample of 10 items from a population of 1000 items.
To collect the sample, you number each item from 1-1000.
After numbering, you use a random number generator and select the item.
You repeat this experiment 9 more times, and the result is a set of 10 items.
Systematic Sampling
Every nth item from a population is selected for the sample.
E.g., if you want a sample of 5 items out of a population of 50, you would collect every 10th item from the
population into your sample.
For example, an auditor may check every fifth invoice while auditing a client.
Stratified Random Sampling
Here we first divide the population into small groups (strata) so that each group shares some specific
characteristic.
After creating these strata, a random sample is selected from each group, and the results are then pooled.

The number of sample members drawn from each group is based on the relative size of the sub-group
with respect to the population.
E.g., If we want a sample based on the population of people in India, we will first make subgroups based
on the states of India.
Then we will select at random people from each state. The state with the highest population will have the
highest number of members in the sample, and the state with the lowest population will have the lowest
number of members.
Stratified random sampling is used in bond indexing because it is difficult to replicate the entire population
of bonds.
So the population of bonds is stratified into subgroups by factors like duration, maturity, coupon rate etc.
And a random sample is selected from each category and combined to form the final sample.
In random sampling, we select the observations in random order; on the other hand, in stratified random
sampling, we divide the population into subgroups and then select a set of observations from these
subgroups. By creating a sample using stratified random sampling, we make a sample representative of
the population.

Cluster Sampling
In this sampling technique the researcher gathers data from a large and diverse population. Instead of
selecting individual elements randomly from the entire population, cluster sampling involves dividing the
population into smaller groups, or clusters. These clusters are then randomly selected, and data is collected
from all the elements within the selected clusters.
If all the members of each sampled cluster are sampled, the sampling plan is referred to as one-stage
cluster sampling. If a subsample is randomly selected from each selected cluster, the plan is referred to as
two-stage cluster sampling.
Non-Probability Sampling

This method does not involve a fixed and random selection process like probability sampling methods.
Instead, they rely on the researcher's judgment and sample selection capabilities. Two major types of non-
probability sampling methods are introduced here:

1. Convenience Sampling: Convenience sampling involves selecting the most readily available individuals
or subjects to be part of the sample. This method is chosen for its ease of access and practicality rather
than its representativeness of the entire population. Researchers may choose participants based on their
proximity, availability, or willingness to participate, which may introduce bias into the sample.

2. Judgmental Sampling: Judgmental sampling, also known as purposive or subjective sampling, involves
the researcher using their expertise and judgment to handpick specific individuals or groups that they
believe will provide valuable insights or represent certain characteristics of interest. This method allows
the researcher to focus on specific traits or attributes, but it can also be subject to personal bias and may
not be representative of the entire population.

Sampling Error
When we create a sample from any population, the sample statistics, i.e., {x̄, s², s}, differ from the
population parameters.
This is because not all values of the population are included in the sample.
Sampling error is simply the difference between a sample statistic and the corresponding population
parameter.
For example:
Sampling error of the mean = x̄ − μ
Sampling Distribution
We can draw many samples from a population. If we create 100 equal-sized samples from a population,
there will be 100 means, 100 variances, and 100 standard deviations,
i.e., a sample space of means = {x̄₁, x̄₂, x̄₃, ..., x̄₁₀₀}

Sample space of standard deviations = {s₁, s₂, ..., s₁₀₀}

Sampling distribution of sample statistics is a probability distribution of all possible sample statistics
computed from a set of equal-sized samples randomly drawn from the same population.

LOS 7b: Explain the central limit theorem and its importance for the distribution and
standard error of the sample mean
This theorem states that if random samples of size n are drawn from a population with mean μ and
variance σ², then the sampling distribution of the sample mean, i.e., {x̄₁, x̄₂, x̄₃, ..., x̄ₙ},
approaches an approximately normal distribution as the sample size becomes large, with mean μ and
variance σ²/n.

In simple words:
Suppose samples of size n are drawn from a population. The sampling distribution of the mean will follow
an approximately normal distribution, given that n is large.
n is said to be sufficiently large if n ≥ 30. The mean of the sampling distribution is equal to the population
mean, μ, and the variance of the sampling distribution of the sample mean is σ²/n.

Calculate and interpret the standard error

The standard error is the standard deviation of the sampling distribution of the sample mean.

σ_x̄ = σ / √n, when the population standard deviation is known

s_x̄ = s / √n, when the population standard deviation is not known

σ_x̄ = standard error of the sample mean (σ known)

s_x̄ = standard error of the sample mean (σ unknown)

σ = standard deviation of the population

s = standard deviation of the sample

n = size of the sample

Illustration

The average marks scored by students appearing for a math test were 25 marks, with a population standard
deviation of 5 marks. Calculate the standard error of the sample mean for a sample size of 30.

The population standard deviation is known; therefore,

σ_x̄ = σ / √n = 5 / √30 = 0.912 marks

This means that if we were to collect many samples, each of size 30, from the population of students
appearing for math tests, the sampling distribution of the sample mean would have a mean of 25 marks
and a standard error of 0.912 marks.

Illustration
The average marks scored by students appearing for a math test were 25 marks, with a sample standard
deviation of 3 marks. Calculate the standard error of the sample mean for a sample size of 30. Since the
population variance is unknown, the standard error is calculated as

s_x̄ = 3 / √30 = 0.547 marks

This means that the sampling distribution of the sample mean will have a mean equal to 25 and a standard
error equal to 0.547. The standard error decreases as the sample size n increases, because we get more
accurate results by including more observations in the sample.

LOS 7c: Describe the use of resampling (bootstrap, jackknife) to estimate the sampling
distribution of a statistic.
There are two other ways to estimate the standard error of the sample mean. Both use resampling.

Jackknife method

Here, we calculate multiple sample means, each computed from the sample with one observation removed.

The standard deviation of these sample means can be used as the standard error of the sample mean.

It is a simple tool that can be used when the dataset at hand is relatively small.

Bootstrap method

Here we draw multiple samples of size n, with replacement, from the observed data. We then calculate
the standard deviation of the resulting sample means, which is an estimate of the standard error of the
sample mean.

This method is computationally demanding (the calculations are difficult to do by hand).

It can increase the accuracy of our estimates compared with using only one sample to estimate the
standard deviation.

It can also be used to calculate confidence intervals for statistics other than the mean, such as the median,
and to estimate the distribution of complex statistics.
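A minimal Python sketch of the bootstrap estimate of the standard error of the mean (the data values are
hypothetical):

```python
import random
import statistics

data = [12.1, 9.8, 11.4, 10.2, 13.5, 8.9, 10.7, 12.8, 9.5, 11.0]  # hypothetical sample
n = len(data)

random.seed(1)
boot_means = []
for _ in range(5_000):
    resample = [random.choice(data) for _ in range(n)]  # draw n values WITH replacement
    boot_means.append(statistics.mean(resample))

# The standard deviation of the resampled means estimates the standard error
print(statistics.stdev(boot_means))
# Compare with the analytical estimate s / sqrt(n):
print(statistics.stdev(data) / n ** 0.5)
```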

1) A simple random sample is a sample drawn in such a way that each member of
the population has:
A. some chance of being selected in the sample.
B. an equal chance of being included in the sample.
C. a 1% chance of being included in the sample.

2. To apply the central limit theorem to the sampling distribution of the sample
mean, the sample is usually considered to be large if n is at least:
A. 20.
B. 25.
C. 30.

3. Which of the following techniques to improve the accuracy of confidence intervals on a statistic is most
computationally demanding?
A. Jackknife resampling.
B. Systematic resampling.
C. Bootstrap resampling.

Answers:

1. B In a simple random sample, each element of the population has an equal probability of being selected.
The 1% chance answer option allows for an equal chance, but only if there are 100 elements in the
population from which the random sample is drawn.

2. C Sample sizes of 30 or greater are typically considered large.

3. C Bootstrap resampling, repeatedly drawing samples of equal size from a large dataset, is more
computationally demanding than the jackknife. We have not defined systematic resampling as a specific
technique.

Hypothesis Testing

LOS 8a: explain hypothesis testing and its components, including statistical significance,
Type I and Type II errors, and the power of a test.
Have you ever wondered how doctors concluded that the vaccines for COVID-19 were effective against the
virus?
How does a manager come to know that certain training exercises lead to improved staff efficiency?
These questions are answered by a statistical concept called hypothesis testing.

What is a Hypothesis?
A hypothesis is a statement we make about something, particularly about a population parameter.
For example: we believe that the mean annual return provided by stocks on the BSE is greater than 0%.
Hypothesis testing collects a representative sample and examines it to see if our statement/hypothesis
holds true.

Testing Procedure

1. State the hypothesis
2. Select the appropriate test statistic
3. Specify the level of significance
4. State the decision rule regarding the hypothesis
5. Collect a sample and calculate the sample statistic
6. Make a decision regarding the hypothesis
7. Make a decision based on the test result

The Null Hypothesis and Alternative Hypothesis

The tests that are carried out are based on the null and alternative hypotheses derived from the
hypothesis.
Null Hypothesis [H0]

A null hypothesis is the current state of knowledge or belief related to a population parameter. The null
hypothesis is the hypothesis the researcher wants to reject.
It is denoted H₀ and read as "H-naught." It is a simple statement about the population parameter. A null
hypothesis always contains the equality condition.
Alternative Hypothesis [Ha]
An alternative hypothesis is a hypothesis the researcher believes to be true.
It contradicts the null hypothesis.
It is usually the alternative hypothesis that is being tested.
An alternative hypothesis is denoted as 𝐻𝑎

CASE I
A researcher believes that the average rate of inflation is not 5.78%. Here the Null hypothesis and the
alternative hypothesis are:
𝐻0 : 𝜇 = 5.78% 𝐻𝑎 : 𝜇 ≠ 5.78%

CASE 2
A researcher believes that the average rate of inflation is greater than 5.78%. Here the null and
alternative hypotheses are:
𝐻0 : 𝜇 ≤ 5.78% 𝐻𝑎 : 𝜇 > 5.78%

CASE 3
A researcher believes that the average rate of inflation is less than 5.78%. Here the null hypothesis and
alternative hypothesis are:
𝐻0 : 𝜇 ≥ 5.78% 𝐻𝑎 : 𝜇 < 5.78%
Did you notice that the null hypothesis always had an equals condition embedded in it for all the 3 cases?

Distinguish between one tailed and two-tailed tests

Based on the hypothesis, we have to decide whether alternative hypotheses can be one-sided or two-
sided.
Consider Case 1 from the above example. The researcher simply believes that the inflation rate is not
5.78%.
This means he believes the inflation rate can take any value apart from 5.78%.
In cases where the research question is whether the population parameter is simply different from a given
value, we use a two-tailed test.
A two-tailed test for the population mean can be structured as (Case 1)
𝐻0 : 𝜇 = 𝜇0 𝐻𝑎 : 𝜇 ≠ 𝜇0
Since the alternative hypothesis allows the values to be above or below the hypothetical parameter, a
two-tailed test uses two critical points.
How to make decisions for a two-tailed z-distributed test statistic at a 5% level of significance:
1. We compute the test statistic.
2. We compute the critical values, or rejection points, at a 5% level of significance, i.e.,
Z_{α/2} = Z_{0.025} = ±1.96. These values are computed using the z-table.
3. We reject the null hypothesis if

test statistic > +Z_{α/2} or test statistic < −Z_{α/2}; in this case, test statistic > 1.96 or test statistic < −1.96.

We can visualize this in the graph below

Now consider the case 2 and 3 of the above example


The researcher believes the average inflation rate to be (Case 2) greater than or (Case 3) less than the
average rate of inflation.
We use a one-tailed test in cases where the researcher believes that the population parameter is greater
than or less than a said value.
One-tailed test for a population mean can be structured as:

Upper tail [Case 2]

H₀: μ ≤ μ₀    Hₐ: μ > μ₀
Lower tail [Case 3]
H₀: μ ≥ μ₀    Hₐ: μ < μ₀
How to make decisions for a one-tailed z-distributed test statistic at a 5% level of significance:
For the upper tail:
1. We compute the test statistic.
2. We compute the critical value, or rejection point, at a 5% level of significance, i.e.,
Z_α = Z_{0.05} = +1.645. This value is computed using the z-table.
3. We reject the null hypothesis if,

test statistic > 1.645

We can visualize this in the graph below

We can carry out the same operations to make decisions for a lower tail test.
The null hypothesis is rejected if
Test statistic < - Z
Or in this case
Test statistic < -1.645

We can visualize this in the graph below



LOS 8b: construct hypothesis tests and determine their statistical significance, the
associated Type I and Type II errors, and power of the test given a significance level

Test Statistic
A test statistic is a numerical summary of a data set that reduces the data into one value that can be used
to test the hypothesis.
Test Statistic = (Sample statistic − Hypothesized value) / (Standard error of the sample statistic)
The test statistic is a Random Variable
It can follow many different distributions based on the characteristics of the sample
We will look at four different distributions for test statistics.
The test statistic is the value against which the critical values are compared.

Type I error and Type II error

What are errors?


Errors are mistakes that lead to the wrong judgement.
There is always a chance that the sample might not represent the population accurately.
We must keep this in mind while testing a specific hypothesis. When we conduct a hypothesis test, we
make inferences about the population parameter based on data drawn from a sample.
So there is always a chance that errors might occur.
What are the types of errors that can occur?

There are 2 types of error that can occur


Type I error
It is the rejection of a null hypothesis when it's true.

Type II error
It is the failure to reject the null hypothesis when it is false.

Level of Significance

The level of significance is the probability of making a Type I error.


𝛼 = 𝑃[𝑇𝑦𝑝𝑒 𝐼 𝑒𝑟𝑟𝑜𝑟] = 𝑃[𝑅𝑒𝑗𝑒𝑐𝑡 𝐻0 |𝐻0 𝑖𝑠 𝑡𝑟𝑢𝑒]
So if 𝛼 = 10%, it means there are 10% chances that we might reject 𝐻0 given 𝐻0 is True.
The level of significance needs to be specified to calculate the critical values.

LOS 8c: Compare and contrast parametric and nonparametric tests, and describe
situations where each is the more appropriate type of test
Up until now, we carried out various tests. Did you notice that an assumption of normality was present in
all of them?
These tests are all parametric tests.
A non-parametric test is one that does not rely on a population parameter; it is used in situations where
parametric assumptions don't hold. This might happen if:
1. the population has a non-normal distribution
2. the data is ranked [ordinal scale]
3. the test does not involve a population parameter of the distribution
Uses of Non-Parametric Test
Nonparametric procedures are mainly used in four scenarios: (1) when data do not meet distributional
assumptions, (2) when there are outliers, (3) when data are presented in ranks or an ordinal scale, and (4)
when relevant hypotheses do not involve specific parameters.

1. Data Distributional Assumptions: Nonparametric methods are used when the data does not meet the
assumptions of a specific probability distribution, making traditional parametric tests inappropriate or less
reliable.
2. Outliers: Nonparametric techniques are effective when dealing with datasets that contain outliers, as
they are less sensitive to extreme values and can provide more robust results.
3. Ranked or Ordinal Data: Nonparametric tests are suitable for data presented in ranks or on an ordinal
scale, where the precise numerical values may not carry a meaningful interpretation, but the order or
ranking of the data is relevant.
4. Hypotheses Not Involving Parameters: Nonparametric tests are applicable when the hypotheses being
tested do not involve specific population parameters, and the focus is on comparing distributions,
medians, or other non-quantifiable aspects.

Parametric and Non- Parametric Tests of Independence


LOS 9a: Explain parametric and nonparametric tests of the hypothesis that the
population correlation coefficient equals zero, and determine whether the hypothesis
is rejected at a given level of significance
Correlation measures the strength of the linear relationship between two random variables. If ρ = 0, no
linear correlation exists. To address the question of whether the true correlation coefficient is equal to
zero, we conduct the following test:
H₀: ρ = 0 v/s Hₐ: ρ ≠ 0
(no relation exists)  [relation exists]
Test statistic

t = r √(n − 2) / √(1 − r²)  ~  t_{n−2}
Where
r = sample correlation
n = sample size
Example
Based on a sample of size 122, a researcher calculated the sample correlation to be 0.179. The researcher
firmly believes that the population correlation is not equal to zero. Carry out a suitable hypothesis test at
a 5% level of significance to test his theory.
H₀: ρ = 0 v/s Hₐ: ρ ≠ 0

Test statistic = r √(n − 2) / √(1 − r²) = 0.179 √(122 − 2) / √(1 − 0.179²) ≈ 1.99

The critical values = ±t_{120, 2.5%} = ±1.98

Since the test statistic lies outside the critical values, we reject H₀ at a 5% level of significance. So we can
conclude that a correlation is present between the two variables.

Spearman’s rank correlation.

This is a non-parametric test. This test is used to check whether 2 sets of rank are correlated.

Spearman's rank correlation is calculated as:

r_s = 1 − 6 Σ dᵢ² / [n (n² − 1)]

Where,

r_s = rank correlation

dᵢ = difference between the ranks

n = sample size

The test statistic remains the same, i.e.,

r_s √(n − 2) / √(1 − r_s²)

Distinguish between a parametric and nonparametric test.

As discussed earlier, parametric tests rely on assumptions about the population (such as normality) and
concern specific population parameters, while non-parametric tests are used when those assumptions do
not hold or when no population parameter is being tested.

Some examples of non-parametric tests are the sign test, Spearman's rank correlation, and the goodness-
of-fit test.

LOS 9b: Explain test of independence based on contingency table data.


We are aware that a contingency table illustrates the number of observations from a sample that have
two characteristics.

Here, we have a contingency table where the characteristics are gender (male, female) and political party
(BJP, Congress, others).

Gender      BJP    Congress    Others    Total

Male        26     13          5         44
Female      20     29          7         56
Total       46     42          12        100
We can use this data to test whether the choice of political party is independent of the gender of the citizen.

There are two rows: i = 1, 2.

There are three columns: j = 1, 2, 3.

So, 𝑂𝑖𝑗 means an observed value with row ‘i’ and column ‘j’.

𝑂1,2 = 13

Test statistic

This is a chi-square statistic:

χ² = Σᵢ₌₁ʳ Σⱼ₌₁ᶜ (Oᵢⱼ − Eᵢⱼ)² / Eᵢⱼ

𝑂𝑖𝑗 = the number of observations in cell i,j; row i, and column j.

𝐸𝑖𝑗 = the expected number of observation for cell i,j.

r = the number of row categories

c = the number of column categories

Calculation of Eᵢⱼ

Expected values contingency table:

Eᵢⱼ = (total of row i × total of column j) / (total of all rows and columns)

E₁,₂ = (44 × 42) / 100 = 18.48

Gender      BJP      Congress    Others

Male        20.24    18.48       5.28
Female      25.76    23.52       6.72

The degrees of freedom = (r − 1)(c − 1) = (2 − 1)(3 − 1) = 2

H₀: Political party is independent of gender.

Hₐ: Political party is dependent on (not independent of) gender.

Testing at a 5% level of significance:

α = 5%
Critical value = χ²_{2, 0.05} = 5.991

Test statistic
χ² = (26 − 20.24)²/20.24 + (20 − 25.76)²/25.76 + ... + (7 − 6.72)²/6.72

= 5.86

We don't have sufficient evidence to reject the null hypothesis at a 5% level of significance.

This means that the choice of political party is independent of the gender of the citizen.
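The whole test can be reproduced in a few lines of Python; a sketch using the contingency table above:

```python
observed = [[26, 13, 5],   # male
            [20, 29, 7]]   # female

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        e = row_totals[i] * col_totals[j] / grand_total  # expected count E_ij
        chi2 += (o - e) ** 2 / e

df = (len(observed) - 1) * (len(observed[0]) - 1)
print(chi2, df)  # ~5.86 with df = 2, below the 5.991 critical value
```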

1. The returns of assets X and Y are known to have the same variance. A sample of 5 returns
from asset X and a sample of 10 returns from asset Y have sample variances Sx² = 47% and Sy² = 12.6%.
Given that the returns follow a normal distribution, calculate the F-test statistic.

A. 3.730
B. 0.268
C. 1

2. A random sample consists of 10 observations from a normal distribution gives the following
values. 𝑥 = 12.02%; 𝑠 2 = 27.674.
Based on the sample, the researcher believes that this population's variance is different from
20. Test at 5% level of significance whether the population variance is 20.

A. Reject the null hypothesis at a 5% level of significance.


B. Fail to reject the null hypothesis at a 5% level of significance.
C. Accept null hypothesis at 5% level of significance.

3. The total profits earned by two similar-sized companies were collected over 10 years. The researcher
calculated the pooled variance s_p² = 12.7505; the difference between the sample means of the two
companies was −$1.9 million; and the computed test statistic has a value of −0.841. Test whether the
mean profits earned are equal at a 5% level of significance.

A. The population mean of both the companies is equal.


B. The population mean of both companies is not equal.
C. The information provided is insufficient.

Answers

1. The F-test statistic is F = S₁² / S₂², with the larger sample variance in the numerator.

So the F-test statistic = 47 / 12.6 = 3.730

2. We are testing the following Hypothesis


𝐻0 : 𝜎 2 = 20 V/S 𝐻𝛼 : 𝜎 2 ≠ 20

n = 10
df = 10 -1 = 9

Chi-squared test stat = (n − 1) S² / σ₀² = (9 × 27.674) / 20 = 12.453

The critical values are

χ²_{9, 0.975} and χ²_{9, 0.025}

2.70 & 19.02


Since the test statistic lies within the critical values, we have insufficient evidence to reject the null
hypothesis at a 5% level of significance.

3. Since we are testing the differences between the means of two populations where variances are
unknown, we assume them to be equal because we are provided with pooled variance.

We are testing the following hypothesis


𝐻0 ∶ 𝜇1 − 𝜇2 = 0 V/S 𝐻𝛼 ∶ 𝜇1 − 𝜇2 ≠ 0
The test statistic calculated by the researcher is -0.841

The degrees of freedom, in this case, is 10+10-2 = 18

The critical values = ±t_{18, 2.5%} = ±2.101

The test statistic is within the critical values; we have insufficient evidence to reject the null hypothesis
at a 5 % level of significance. So it is safe to conclude that the population mean of both the companies
is equal.

Introduction to Linear Regression

Linear Regression: Introduction

LOS 10a: Describe a simple linear regression model, how the least squares criterion is
used to estimate regression coefficients, and the interpretation of these coefficients

Regression is a statistical method used in finance, investing, and other disciplines to determine the strength
and character of the relationship between a dependent variable and an independent variable.

Why do we use regression?

The purpose of simple linear regression (SLR) is to explain the variation (changes) in a dependent variable
in terms of the variation in a single independent variable.

Simple linear regression is also used for forecasting and financial analysis.

Dependent Variable

It is the variable whose variation is explained by the independent variable. The dependent variable is also
referred to as the endogenous, explained, or predicted variable.

Independent Variable

It is the variable used to explain the variation of the dependent variable.

It is also called the explanatory, exogenous, or predicting variable.

For example,

if we want to predict stock returns based on GDP growth, which variable is dependent?

GDP growth is the independent variable, as it is used to predict stock returns. Therefore, stock returns are
the dependent variable and GDP growth is the independent variable.

Describe the least squares criterion, how it is used to estimate the regression coefficients, and their
interpretation.
Simple Linear Regression Model

𝑦𝑖 = 𝑏0 + 𝑏1 𝑥𝑖 + 𝐸𝑖 ; i = 1, 2, 3,…, n

The following linear regression model is used to describe the relationship between 2 variables, x and y.

𝑦𝑖 = ith observation of the dependent variable, y

𝑥𝑖 = ith observation of the independent variable, x

𝑏0 = regression intercepts term

𝑏1 = regression slope coefficient

𝐸𝑖 = residual term for the ith observation/ error term

This model best estimates an equation for a line through a scatter plot of the data that “best” explains
the observed value for the dependent variable y based on the independent variable x.

The regression line or line of best fit.

ŷ𝑖 = 𝑏ˆ0 + 𝑏ˆ1 𝑥𝑖 ; i = 1, 2, 3,.., n

ŷ𝑖 = estimated value of yi given 𝑥𝑖

𝑏ˆ0 = estimated intercepts term

𝑏ˆ1 = estimated slope coefficient

The regression line is the line that minimizes the sum of squared differences between the predicted values
(ŷᵢ) and the actual values (yᵢ). The sum of all these squared differences is called the sum of squared errors
(SSE).

Because the regression line minimizes the SSE, this approach is also called ordinary least squares (OLS)
regression, and the estimated values (ŷᵢ) are called the least squares estimates.

The slope coefficient (b̂₁)

The estimated slope coefficient (b̂₁) describes the change in y per one-unit change in x.

b̂₁ = Cov(x, y) / σ²ₓ
The intercept parameter (b̂₀)

It is the point where the regression line crosses the y-axis (the vertical axis), i.e., the value of y at x = 0.

b̂₀ = ȳ − b̂₁ x̄

This means that the regression line passes through the point whose coordinates are (x̄, ȳ).

Example

Compute the slope coefficient and the intercept term with the following information.

Covariance (Nifty 50, Reliance) = 0.00239

Variance of Reliance = 0.002857

Variance of Nifty 50 = 0.007881

Mean return of Reliance = 62.94%

Mean return of Nifty 50 = 40.48%


Since we are predicting Reliance returns (y) from Nifty 50 returns (x), the slope uses the variance of
Nifty 50:

b̂₁ = cov(Nifty 50, Reliance) / σ²_{Nifty 50} = 0.00239 / 0.007881 = 0.3033

b̂₀ = mean(Reliance) − b̂₁ × mean(Nifty 50) = 62.94% − 0.3033 × 40.48% = 50.66%

(These values are consistent, up to rounding, with the regression equation 50.64% + 0.30377 × Nifty 50
used later in this reading.)

In the above example we calculated the regression coefficients, i.e., b̂₀ and b̂₁.

We can now use these values to predict the return on Reliance:

Predicted Reliance = b̂₀ + b̂₁ × Nifty 50

When Nifty 50 = 73.5%:

Predicted Reliance = 50.66% + 0.3033 × 73.5% = 72.95%
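The least squares estimates are just as easy to compute from raw data; a minimal sketch with hypothetical
toy data (not the Reliance/Nifty series, which is not reproduced here):

```python
def ols_slope_intercept(x, y):
    # b1 = Cov(x, y) / Var(x);  b0 = ybar - b1 * xbar
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    cov_xy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / (n - 1)
    var_x = sum((xi - xbar) ** 2 for xi in x) / (n - 1)
    b1 = cov_xy / var_x
    b0 = ybar - b1 * xbar
    return b0, b1

x = [1.0, 2.0, 3.0, 4.0, 5.0]  # hypothetical index returns (%)
y = [2.1, 2.9, 4.2, 4.8, 6.1]  # hypothetical stock returns (%)
b0, b1 = ols_slope_intercept(x, y)
print(b0, b1)        # estimated intercept and slope
print(b0 + b1 * 6.0) # predicted y when x = 6
```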



Goodness of Fit and Hypothesis Tests

Analysis of Variance

In this part of the chapter we will study a statistical procedure for analyzing the variability of the
dependent variable.

Total sum of squares (SST): SST = Σᵢ₌₁ⁿ (yᵢ − ȳ)²

SST is the total variation in the dependent variable. It is calculated as the sum of the squared differences
between the actual y-values and the mean of y.

Regression Sum of Squares (RSS)

RSS = Σᵢ₌₁ⁿ (ŷᵢ − ȳ)²

RSS measures the variation in the dependent variable that is explained by the independent variable. It is
the sum of the squared distances between the predicted y-values and the mean of y.

Sum of Squared Errors (SSE)

SSE = Σᵢ₌₁ⁿ (yᵢ − ŷᵢ)²

SSE measures the unexplained variation in the dependent variable. It is also known as the sum of squared
residuals.

It is based on the differences between the actual y-values and the predicted y-values.

Total Variation = Explained Variation + Unexplained Variation

SST = RSS + SSE

After calculating these variations, we construct an ANOVA table.

Source of variation        df           Sum of Squares    Mean Sum of Squares

Regression (explained)     k            RSS               MSR = RSS / k
Errors (unexplained)       n − k − 1    SSE               MSE = SSE / (n − k − 1)
Total                      n − 1        SST

k is the number of slope parameters estimated.

The regression df = k

The error df = n − k − 1


Standard error of estimate (SEE)

𝑆𝐸𝐸 = √𝑀𝑆𝐸
It is the standard deviation of the residuals.

Lower SEE means that the model is a good fit.

Coefficient of Determination (R²)

The coefficient of determination is the proportion of the total variation (SST) in the dependent variable (y)
that is explained by the independent variable.

R² = RSS / SST = r² (for simple linear regression)

R² = 0.68 means 68% of the variation in the dependent variable is explained by the independent variable.

Illustration

Complete the ANOVA table for the Reliance regression example and calculate the R2 and the standard
error of the estimate (SEE)

Sources of Variation df Sum of Squares Mean sum of squares


Regression ? 0.03563 ?
Error ? 1.36437 ?
Total 1.4

Solution

n = 50

df for regression = k = 1

df for error = n − k − 1 = 50 − 1 − 1 = 48

MSR = RSS / k = 0.03563 / 1 = 0.03563

MSE = SSE / (n − k − 1) = 1.36437 / 48 = 0.028424

R² = RSS / SST = 0.03563 / 1.4 = 0.0254

SEE = √MSE = √0.028424 = 0.1686

F- Statistic

An F-test assesses how well a set of independent variables, as a group, explain the variation in the
dependent variable.

Here the null hypothesis and alternative hypothesis are

𝐻0 : 𝑏1 = 0 𝑣/𝑠 𝐻𝑎 : 𝑏1 ≠ 0
Please note that this is a one-tailed test, as the values of the F-distribution cannot be less than 0.

Here the test statistic is

F = MSR / MSE
Decision rule

Reject the null hypothesis if Test stat > Critical value.

For simple linear regression, there is only one independent variable, so the F-test is equivalent to a t-test.

Illustration:

Calculate and interpret F – statistic using the ANOVA table from the previous question.

Solution
F = MSR / MSE = 0.0356 / 0.0284 = 1.2536
df numerator = k = 1

df denominator = n - k – 1 = 48

The Null hypothesis

𝐻0 : 𝑏1 = 0 𝐻𝑎 : 𝑏1 ≠ 0
Critical value

F 0.05, 1, 48 = 4.0426

Test statistic = 1.2536

Test statistic < Critical value

1.25 < 4.04

Therefore, we fail to reject H₀ at a 5% level of significance.

This means we cannot rule out the possibility that the slope b₁ = 0.

LOS 10b: Explain the assumptions underlying the simple linear regression model, and
describe how residuals and residual plots indicate if these assumptions may have been
violated
Assumption underlying a linear regression model

Linear Relationship

A linear relationship exists between the dependent and the independent variable. A linear regression is
not appropriate if the relationship between the variables is non-linear.

Homoskedasticity

The variance of the error terms must be constant for all observations. Homoskedasticity refers to the case
where the prediction errors all have the same variance.

Heteroskedasticity refers to the case where this condition is violated.

Independence

The residual terms are independently distributed, i.e., each residual value should be independent of all
other residual values.

If the variables x and y are not independent across observations, their residuals will not be independent
either, making our estimates of the variance as well as the regression parameters incorrect.

Normality

The residual term are normally distributed. Because of this we can conduct the hypothesis test to
determine the goodness of fit for their model.

Outliers are observation that are far from the regression line. Outliers will have influence on the regression
line and the parameter estimate.

This means the OLS model will not be able to fit the other observation well.

Predicting Dependent Variables and Functional Forms

LOS 10c: Calculate and interpret measures of fit and formulate and evaluate tests of fit
and of regression coefficients in a simple linear regression

Predicted values for the dependent variable are values based on the estimated regression coefficients
(b̂₀, b̂₁) and a forecasted value of the independent variable.

ŷ = 𝑏ˆ0 + 𝑏ˆ1 𝑋𝑝

ŷ = predicted value of dependent variable

𝑋𝑝 = forecasted value of independent variable

Illustration

We are provided with the following estimated regression equation:


Predicted Reliance return = 50.64% + 0.30377 × (Nifty 50 return)
Calculate the predicted value of Reliance returns if forecasted Nifty 50 returns are 20%.

Solution
Predicted Reliance return = 50.64% + (0.30377 × 20%) = 56.72%

Confidence Interval of the predicted value

ŷ ± (tc × sf)

i.e. ŷ − (tc × sf) < y < ŷ + (tc × sf)



tc = the t critical value at the α/2 level of significance, with df = n − 2

𝑠𝑓 = standard error of forecast


sf² = SEE² × [1 + 1/n + (x − x̄)² / ((n − 1) × sx²)]

Where,

SEE² = variance of the residuals

sx² = variance of the independent variable

x = the value of the independent variable for which the forecast was made

x̄ = mean of the independent variable



Illustration

Calculate a 95% prediction interval for the predicted value of Reliance returns from the previous example.

Predicted Reliance return = 56.72%, SEE² = 0.0284, sx² = 78.81, x̄ = 40.48%, n = 50

sf² = SEE² × [1 + 1/50 + (20 − 40.48)² / (49 × 78.81)]

= 0.0284 (1.12861)

= 0.0320526

Sf = √0.0320526

= 0.1790

𝑡0.025,48 = ±2.01

So the prediction interval is as follows:

56.72% ± (2.01 × 0.1790)

= (20.741%, 92.699%)
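The same interval can be reproduced numerically. This is a minimal sketch assuming SciPy, with the inputs (SEE², sx², x̄, n) taken from the example above; note the example keeps the same mixed percentage units the text uses, which the comments flag.

from math import sqrt
from scipy.stats import t

# Inputs from the Reliance example in the text
y_hat = 56.72          # predicted value, in %
see2 = 0.0284          # variance of the residuals
s_x2 = 78.81           # variance of the independent variable
x_bar = 40.48          # mean of the independent variable, in %
x_p = 20.0             # forecasted Nifty 50 return, in %
n = 50

sf2 = see2 * (1 + 1 / n + (x_p - x_bar) ** 2 / ((n - 1) * s_x2))
sf = sqrt(sf2)                       # about 0.1790, read as 17.90%
t_crit = t.ppf(0.975, n - 2)         # two-tailed 5%, df = 48, about 2.01

lower = y_hat - t_crit * sf * 100    # sf converted to percentage points
upper = y_hat + t_crit * sf * 100
print(lower, upper)                  # roughly (20.74%, 92.70%)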

LOS 10d: Describe the functional forms of simple linear equations.


The simple linear regression model assumes that the relationship between x and y is linear.

What if the relationship is not linear?

In such cases we can transform one or both of the variables to establish a linear relationship.

Log – Lin model

We use this when the dependent variable is logarithmic and the independent variable is linear.

The regression model looks like this

ln(yᵢ) = b₀ + b₁Xᵢ + εᵢ
Here, the slope coefficient is interpreted as the relative change in the dependent variable for an absolute
change in the independent variable.

Lin – Log model

The model looks like this:

yᵢ = b₀ + b₁ ln(Xᵢ) + εᵢ

Here, the slope coefficient is described as the absolute change in dependent variable for a relative change
in the independent variable.

Log – Log model

Here our model looks like

ln(yᵢ) = b₀ + b₁ ln(Xᵢ) + εᵢ
Here the slope coefficient is interpreted as a relative change in dependent variable for a relative change
in the independent variable.
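All three transformations reduce to ordinary linear regression after applying logs. The sketch below, assuming NumPy and hypothetical data, shows the log-lin case; the lin-log and log-log cases follow by moving np.log to the x side or to both sides.

import numpy as np

# Hypothetical data with exponential growth, so a log-lin model fits well
rng = np.random.default_rng(1)
x = np.linspace(1, 10, 50)
y = np.exp(0.3 * x + rng.normal(0, 0.1, 50))

# Log-lin: regress ln(y) on x; the slope is the relative change in y
# per unit (absolute) change in x
b1, b0 = np.polyfit(x, np.log(y), 1)
print(b0, b1)   # b1 should be close to 0.3

# Lin-log would be np.polyfit(np.log(x), y, 1);
# log-log would be np.polyfit(np.log(x), np.log(y), 1)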

Introduction to Big Data Techniques

The increasing importance of fintech cannot be overstated. Terms like Big Data,
blockchain, and algorithmic trading have come into common use. Ensure that you are
familiar with the basic terminology and have an understanding of the applications of the
common fintech tools.

LOS 11a: Describe aspects of “fintech” that are directly relevant for the gathering and
analyzing of financial data.
Fintech literally means “financial technology”. It is a blanket term for the technological developments
that have emerged in the financial sector.
These developments assist the decision-making process by running complex algorithms, machine learning
tools, and automation processes.
The key areas of fintech include:
• Data management: large data sets must be stored, processed, and analyzed.
• Artificial intelligence (AI): Non-linear relationships can be captured from artificial intelligence
methods. Large and complex datasets can be understood in far less time than they would be if a
human tried to break the datasets down.
• Automation: Automated trading and automated advice, including robo-advisors. These result in
lower transaction costs, and they save time and effort.
• Financial recordkeeping: This may be required by the intermediaries.

What is Big Data?


Big Data refers to all the potentially useful information that is generated in the
economy. Traditional data sources include mass media, financial market data, analyst
conference calls, news and so on. Big Data includes non-traditional data sources.
Some of the non-traditional sources of data are:
• Usable data which individuals generate via social media posts, online logs and
reviews.
• Corporate exhaust, potentially useful information like bank records and retail
scanner data generated by various businesses.
• Data receptors, such as radio frequency identification chips, are present in
numerous devices such as smartphones and smart gadgets. The broad network of
such devices is referred to as the Internet of Things.

Characteristics of Big Data include:


• Volume.
• Velocity.
• Variety.

The volume of big data continues to grow by huge magnitudes. Data that was once
measured in megabytes is now measured in gigabytes, terabytes, or even petabytes
(1,000 terabytes).

Velocity is the speed at which data is communicated. Real-time data, like stock market
feeds, is said to have low latency. Data that is communicated periodically or with a lag
is said to have high latency.

Data exists in a variety of forms and degrees of structure, ranging from spreadsheets
and databases to photos and even web page code. It also includes unstructured forms
such as videos.

When Big Data is utilized for inference or prediction, a "fourth V" called veracity becomes relevant,
referring to the credibility and reliability of diverse data sources. Evaluating the trustworthiness and
dependability of these sources is a crucial aspect of any empirical investigation. However, veracity
becomes particularly critical in the context of Big Data due to the multitude of sources contributing to
these extensive datasets. The challenge of distinguishing quality from quantity is further amplified by the
scale and complexity of Big Data.

What are the various methods of processing and visualizing data?


The processing of data includes:
• Capture: Collecting and transforming data for use.
• Curate: Checking for bad or missing data and assuring its quality.
• Store: Archiving and accessing data.
• Search: Looking for important data within the total stored data.
• Transfer: Moving data from its storage or source to wherever it is needed.

Visualization techniques include the familiar charts and graphs that display structured
data. Mind maps, which display logical relationships amongst concepts, are a good example.

Other methods are used to visualize less structured data.

Taking advantage of Big Data is a huge challenge. Analysts must do enough due
diligence to ensure that the data they use are of high quality, checking for possible
outliers, bad or missing data, and sampling biases. The volume of data collected must
also be sufficient for its intended purpose.

Data must be processed before it is used, which can be problematic with qualitative and
unstructured data. This processing can be done with the help of artificial intelligence.
Neural networks are an example of artificial intelligence in that they are programmed to
process information in a way similar to the human brain.

LOS 11b: Describe Big Data, artificial intelligence, and machine learning
Machine learning is an important development in the field of artificial intelligence. A computer algorithm
is given data inputs with no assumptions about their probability distributions and may be given outputs
of target data. The algorithm is designed to learn without human assistance and to model the output data
based on the input data.

This typically requires huge data sets. A typical process of machine learning begins with a training dataset
where the algorithm looks for relationships. A validation dataset is then used to refine these relationship
models, which can then be applied to a test dataset to analyze their predictive ability.
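A minimal sketch of that three-way workflow, assuming scikit-learn and hypothetical arrays X and y: the model is fit on the training set, tuned against the validation set, and only then scored on the held-out test set.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge

# Hypothetical dataset
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 5))
y = X @ np.array([1.0, 0.5, 0.0, -0.5, 2.0]) + rng.normal(0, 0.5, 500)

# 60% train, 20% validation, 20% test
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Tune a hyperparameter on the validation set...
best = max(
    (Ridge(alpha=a).fit(X_train, y_train) for a in (0.01, 0.1, 1.0, 10.0)),
    key=lambda m: m.score(X_val, y_val),
)
# ...and report predictive ability on the untouched test set
print(best.score(X_test, y_test))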

What is supervised and unsupervised learning?

Supervised learning is when the input and output data are labelled; the algorithm learns to model the
outputs from the inputs.

Unsupervised learning means the input data is not labelled, and the machine learns to describe the data
structure itself.

Deep learning is a technique that uses layered neural networks to identify patterns, going from simpler
patterns to more complex ones. Applications include speech and image recognition; popular examples
are virtual assistants like Siri and Alexa.

Machine learning can produce models that can overfit and underfit the data:

➢ Overfitting occurs when the machine learns the input and output data too exactly and treats noise
as true parameters, identifying spurious patterns and relationships.
➢ Underfitting occurs when the machine fails to identify actual relationships and patterns, treating
true parameters as noise.

A further problem with machine learning is that it can form a black box, producing outcomes that are
not readily explainable. Overfitting and underfitting are illustrated in the sketch below.
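The sketch below, assuming NumPy and hypothetical noisy data, fits polynomials of increasing degree: the degree-1 fit underfits the curve, while the degree-15 fit chases the noise and generalizes poorly to fresh data.

import numpy as np

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(-1, 1, 30))
y = np.sin(3 * x) + rng.normal(0, 0.2, 30)        # true signal plus noise
x_new = np.sort(rng.uniform(-1, 1, 30))           # fresh data for validation
y_new = np.sin(3 * x_new) + rng.normal(0, 0.2, 30)

for degree in (1, 3, 15):
    coeffs = np.polyfit(x, y, degree)
    train_err = np.mean((np.polyval(coeffs, x) - y) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    # Degree 1 underfits (both errors high); degree 15 overfits
    # (training error tiny, test error blows up); degree 3 sits in between.
    print(degree, round(train_err, 4), round(test_err, 4))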

LOS 11c: Describe applications of Big Data and Data Science to investment management
The various applications of fintech in the investment management industry are:

Text Analytics

Text analytics is the analysis of unstructured data that is in text or voice form. In the finance industry,
text analytics has the potential to partially automate specific tasks, such as evaluating company
regulatory filings.

For instance, during results season, the companies roll out their results, and such fintech applications can
be used to analyze management commentary quickly and take positions.

Natural Language Processing

This refers to the use of computers and artificial intelligence to interpret human language. Speech
recognition and language translation are among the uses of natural language processing.

It can be used to check regulatory compliance, examine employee communications, or evaluate large
volumes of research reports. Risk governance requires an understanding of a firm’s exposure to a wide
variety of risks. Financial regulators require firms to perform risk assessments and stress testing.

Machine learning and other techniques related to Big Data can be useful in modelling and testing risk,
particularly if firms use real-time data to monitor risk exposures. Amazon Echo’s Alexa makes widespread
use of this feature of machine learning.

Algorithmic Trading

This refers to the computerized execution of security trading, which is based on a predetermined set of
rules. It can be used to place huge orders by determining the best way to divide orders amongst
exchanges. It can also be used for high-frequency trading, which takes advantage of intraday security
mispricing.

J.P. Morgan, Goldman Sachs, and various other bulge bracket banks have turned their focus to algorithmic
trading in the past few years to reduce trading costs.

High-frequency trading is another relatively recent emergence whereby small amounts of mispricing are
captured in bulk trades. This has significantly helped arbitrage funds because it is important to time the
mispricing quickly before it disappears.

Robo-Advisors

Robo-advisors are online platforms that provide automated investment advice based on a customer’s
answers to survey questions. The survey questions are designed to elicit the client’s financial position,
return objectives, risk tolerance, and constraints such as time horizon and liquidity needs. The survey
questions essentially serve the role of an IPS (investment policy statement).
This has disrupted the wealth management industry in two main ways:

• Fully Automated Digital Wealth Managers: There is no human intervention in the decision-making
process. The robo-advisor understands the client’s needs and builds a portfolio accordingly.
• Adviser-Assisted Digital Wealth Managers: A virtual financial adviser may offer portfolio planning
services and periodic reviews over phone calls. The platform is still largely led by fintech.

These services offer passively managed investments with low fees, low minimum account sizes, traditional
asset classes, and conservative recommendations. These factors attract a wider pool of investors and
encourage retail participation in financial markets.

However, the reasoning behind the robo-adviser’s recommendations may not be transparent. This is an
issue for wealthy investors because it is important to know the thought process and investment rationale
before trusting a wealth manager with large sums of money.

Risk Analysis

AI and machine learning techniques can be integrated in scenario analysis to assess the downside risk of
an investment. They may be used to forecast the risk of declining earnings, for example, before the event
actually happens.

Machine learning tools can assess even the quality of data. This is important because if the input itself is
not clean, then the output is not valuable regardless of the quality of the model.

LOS 11d: Describe financial applications of distributed ledger technology.


What is a distributed ledger?

A database that is shared on a network so that each participant has an identical copy. It has a consensus
mechanism to validate new entries, and only authorized personnel can access the data.

What is a blockchain?

It is a distributed ledger that records transactions sequentially in blocks and links these blocks in a chain.
Each block has a cryptographically secured “hash” that links it to the previous block. The consensus
mechanism requires some computers to solve a cryptographic problem. These are referred to as miners.
Mining requires use of supercomputers and vast resources of power. This imposes substantial costs on
any attempt to manipulate a historical record. For this reason, a blockchain is more likely to succeed with
a large number of participants in its network.
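The chaining mechanism itself is simple to illustrate. This is a minimal sketch using Python's standard hashlib (the block structure and helper names are our own): each block stores the hash of the previous block, so altering any historical block changes every hash after it and is immediately detectable.

import hashlib
import json

def block_hash(block: dict) -> str:
    """Cryptographic hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Build a tiny chain: each block records the previous block's hash
chain = [{"index": 0, "data": "genesis", "prev_hash": "0" * 64}]
for i, data in enumerate(["tx: A pays B 5", "tx: B pays C 2"], start=1):
    chain.append({"index": i, "data": data, "prev_hash": block_hash(chain[-1])})

# Verify integrity: every stored prev_hash must match a recomputed hash
def is_valid(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

print(is_valid(chain))                   # True
chain[1]["data"] = "tx: A pays B 500"    # tamper with history
print(is_valid(chain))                   # False: the chain detects the change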

Elon Musk once tweeted that cryptocurrency mining was unsustainable due to its energy
consumption, which sent large cryptocurrencies down by an average of 40%.

What is Defi in Fintech?

DeFi is an acronym for decentralized finance. It is an emerging technology that recreates financial services
with blockchain technologies, usually Ethereum and smart contracts. Like conventional banking, DeFi
allows users to perform financial transactions, such as transfers, lending, investing, trading or savings.

However, DeFi allows consumers to access these financial facilities without the presence of an
intermediary entity. There is no brokerage, exchange, or financial institution involved in the DeFi
ecosystem. This allows DeFi’s applications to go beyond conventional boundaries, making them accessible
across markets, regions, and different layers of society.

What are permissionless and permissioned networks?

Distributed ledgers can take either form. In a permissionless network, all participants can view all
transactions. There is no central authority and hence no single point of failure, which removes the need
for trust between the two parties.

Permissioned networks have different levels of access. For example, a permissioned network might give
government regulators permission to view the transaction history. This would increase transparency and
decrease compliance costs.

Financial Applications of Distributed Ledger Technology

Cryptocurrency

This electronic exchange medium allows participants to engage in real-time transactions without a
financial intermediary. These are usually on a permissionless network.

This has had a huge impact on companies, and a few companies have already held initial coin offerings,
in which they sell cryptocurrency for money or another cryptocurrency. Candidates should be aware that
there have been frauds with such coin offerings.

Matic (Polygon) is a cryptocurrency developed by Indian founders, who became India’s first crypto
billionaires.

Tokenization

Transactions in real estate, antiques, collectables, etc., can be verified through tokenization of these
assets. A token is essentially a digital proof of ownership. These tokens are transferable but cannot be
divided and sold separately from the asset.

Additionally, smart contracts are electronic contracts that can be programmed to self-execute based on
terms or rules agreed to by the counterparties. For instance, a stop-loss or a buy order can be programmed
to execute automatically if certain conditions exist in the market.

Post-Trade Clearing and Settlement

Distributed ledgers could automate many of the processes currently outsourced to third parties. The
technology has the potential to bring about real-time trade verification and settlement. This would also
reduce counterparty risk.

Compliance

Blockchain and cryptocurrencies are still in the nascent stage, and there is a lack of regulation around their
applications. Firms that use such technology must maintain a lot of real-time data, specifically for trading
activities and automation processes.

The ledger technology itself could store highly sensitive information, so the users of these technologies
must be aware of the anti-money laundering and know-your-customer regulations before using these
applications.

1. Fintech is most accurately described as:

A. The application of technology to the financial services industry.


B. The replacement of government-issued money with electronic currencies.
C. The clearing and settlement of securities trades through distributed ledger technology.

2. Text analytics is appropriate for applications to:

A. Economic trend analysis


B. Large, structured datasets
C. Public but not private information

3. A key criticism of robo-advisory services is that:

A. They are costly for investors to use.


B. The reasoning behind their recommendations can be unclear.
C. They tend to produce overly aggressive investment recommendations.

4. A factor associated with the widespread use of algorithmic trading is increased:

A. Market efficiency
B. Average trade sizes
C. Trading destinations

5. Which of the following statements about distributed ledger technology is most accurate?

A. A disadvantage of blockchain is that past records are vulnerable to manipulation.


B. Tokenization can potentially streamline transactions involving high-value physical assets.
C. Only parties who trust each other should carry out transactions on a permissionless network

Answers
1. A is correct. The application of technology to the financial services industry

2. A is correct. Economic trend analysis

3. B is correct. The reasoning behind their recommendations can be unclear

4. C is correct. Trading destinations

5. B is correct. Tokenization can potentially streamline transactions involving high-value physical assets.



Formulas

The Time Value of Money


● Nominal risk-free rate = 𝑟𝑒𝑎𝑙 𝑟𝑖𝑠𝑘 𝑓𝑟𝑒𝑒 𝑟𝑎𝑡𝑒 + 𝑒𝑥𝑝𝑒𝑐𝑡𝑒𝑑 𝑖𝑛𝑓𝑙𝑎𝑡𝑖𝑜𝑛 𝑟𝑎𝑡𝑒

● Required interest rate on a security = 𝑛𝑜𝑚𝑖𝑛𝑎𝑙 𝑟𝑖𝑠𝑘 𝑓𝑟𝑒𝑒 𝑟𝑎𝑡𝑒 + 𝑑𝑒𝑓𝑎𝑢𝑙𝑡 𝑟𝑖𝑠𝑘 𝑝𝑟𝑒𝑚𝑖𝑢𝑚 +
𝑙𝑖𝑞𝑢𝑖𝑑𝑖𝑡𝑦 𝑝𝑟𝑒𝑚𝑖𝑢𝑚 + 𝑚𝑎𝑡𝑢𝑟𝑖𝑡𝑦 𝑟𝑖𝑠𝑘 𝑝𝑟𝑒𝑚𝑖𝑢𝑚

● Effective annual rate = (1 + 𝑝𝑒𝑟𝑖𝑜𝑑𝑖𝑐 𝑟𝑎𝑡𝑒)𝑚 − 1

● Continuous compounding: 𝑒 𝑟 − 1 = 𝐸𝐴𝑅

● PV of a perpetuity = PMT / (I/Y)

● FV = 𝑃𝑉(1 + 𝐼/𝑌)𝑁
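These TVM relationships are easy to check numerically. A minimal Python sketch with hypothetical inputs (a 6% nominal rate compounded monthly, a 100-per-period perpetuity at 5%, and 1,000 invested at 8% for 10 years):

from math import e

periodic_rate = 0.06 / 12             # 6% annual, monthly compounding
ear = (1 + periodic_rate) ** 12 - 1   # effective annual rate, about 6.17%
ear_cc = e ** 0.06 - 1                # continuous compounding, about 6.18%

pv_perpetuity = 100 / 0.05            # PMT = 100, I/Y = 5%  ->  2000
fv = 1000 * (1 + 0.08) ** 10          # FV of 1000 at 8% for 10 years
print(ear, ear_cc, pv_perpetuity, fv)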

Organizing, Visualizing and Describing Data


● Population mean: µ = ΣXᵢ / N

● Sample mean: X̄ = ΣXᵢ / n

● Geometric mean return (RG): 1 + RG = [(1 + R₁) × (1 + R₂) × … × (1 + Rₙ)]^(1/n)

● Harmonic mean: X̄H = N / Σ(1/Xᵢ)

● Weighted mean: X̄ W = ∑𝑛𝑖=1 𝑤𝑖 𝑋𝑖


● Position of the observation at a given percentile y: Ly = (n + 1) × y/100

● Range = maximum value – minimum value

● Excess kurtosis = sample kurtosis – 3

● MAD = Σ|Xᵢ − X̄| / n

● Population variance: σ² = Σ(Xᵢ − µ)² / N

Where µ = population mean & N = number of possible outcomes

● Sample variance: s² = Σ(Xᵢ − X̄)² / (n − 1)

Where 𝑋̄ = sample mean & n = sample size

● Coefficient of variation: CV = sX / X̄ = standard deviation of X / average value of X

● Target downside deviation: Starget = √[ Σ(Xᵢ − B)² / (n − 1) ], summing over all Xᵢ < B

Probability Concepts

● Joint probability: P(AB) = P(A|B) × P(B)

● Addition rule: P(A or B) = P(A) + P(B) − P(AB)

● Multiplication rule (independent events): P(A and B) = P(A) × P(B)

● Total probability rule: P(R) = P(R|S₁) × P(S₁) + P(R|S₂) × P(S₂) + … + P(R|S_N) × P(S_N)

● Expected value: E(X) = ΣP(Xᵢ)Xᵢ = P(X₁)X₁ + P(X₂)X₂ + … + P(Xₙ)Xₙ

● Variance from a probability model: Var(X) = 𝐸{[𝑋 − 𝐸(𝑋)]2 }

● Correlation: ρ(Rᵢ, Rⱼ) = Cov(Rᵢ, Rⱼ) / [σ(Rᵢ) × σ(Rⱼ)]

● Portfolio expected return: E(R_P) = Σwᵢ E(Rᵢ) = w₁E(R₁) + w₂E(R₂) + … + wₙE(Rₙ)

● Portfolio variance: Var(R_P) = ΣᵢΣⱼ wᵢwⱼ Cov(Rᵢ, Rⱼ)

Where wᵢ = market value of investment in asset i / market value of the portfolio

● Bayes’ formula:

Updated probability = (probability of new information for a given event / unconditional probability of new information) × prior probability of event

● Combination (binomial) formula: nCr = n! / [(n − r)! × r!]

● Permutation formula: nPr = n! / (n − r)!
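Python's standard library computes these directly; a minimal sketch, using math.comb and math.perm and hypothetical values of n and r:

from math import comb, perm, factorial

n, r = 8, 3
print(comb(n, r))   # nCr = 56
print(perm(n, r))   # nPr = 336

# Same results from the factorial definitions above
print(factorial(n) // (factorial(n - r) * factorial(r)))   # 56
print(factorial(n) // factorial(n - r))                    # 336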

Common Probability Distributions


● Binomial probability: p(x) = [n! / ((n − x)! × x!)] × p^x × (1 − p)^(n−x)

● For a binomial random variable: E(X) = np; variance = np(1-p)

● For a normal variable:

o 90% confidence interval for X is 𝑋̄ − 1.65𝑠 𝑡𝑜 𝑋̄ + 1.65𝑠

o 95% confidence interval for X is 𝑋̄ − 1.96𝑠 𝑡𝑜 𝑋̄ + 1.96𝑠

o 99% confidence interval for X is 𝑋̄ − 2.58𝑠 𝑡𝑜 𝑋̄ + 2.58𝑠

● Z = (observation − population mean) / standard deviation = (x − µ) / σ

● SFRatio = [E(R_P) − R_L] / σ_P

● Continuously compounded rate of return: r_cc = ln(S₁/S₀) = ln(1 + HPR)


● For a uniform distribution: P(x₁ ≤ X ≤ x₂) = (x₂ − x₁) / (b − a)

Sampling and Estimation


● Sampling error of the mean = sample mean − population mean = x̄ − µ

● Standard error of the sample mean (known population variance): σx̄ = σ/√n

● Standard error of the sample mean (unknown population variance): sx̄ = s/√n

● Confidence interval: point estimate ± (reliability factor × standard error)

● Confidence interval for the population mean: x̄ ± z(α/2) × σ/√n
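A minimal numerical sketch of the confidence-interval formula, assuming SciPy and hypothetical sample statistics:

from math import sqrt
from scipy.stats import norm

x_bar, sigma, n = 10.5, 2.0, 64   # hypothetical sample mean, known sigma, n
se = sigma / sqrt(n)              # standard error of the sample mean = 0.25
z = norm.ppf(0.975)               # reliability factor for 95%, about 1.96
print(x_bar - z * se, x_bar + z * se)   # 95% CI: roughly (10.01, 10.99)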

Hypothesis Testing
● Test for population mean = µ₀: z-statistic = (x̄ − µ₀) / (s/√n)

● Test for equality of variances: F = s₁² / s₂², where s₁² > s₂²

● Paired comparison test: t-statistic = (d̄ − µd) / sd̄

● Test for difference in means:

o t-statistic = (X̄₁ − X̄₂) / √(s₁²/n₁ + s₂²/n₂)   (sample variances assumed unequal)

o t-statistic = (X̄₁ − X̄₂) / √(sp²/n₁ + sp²/n₂)   (sample variances assumed equal; sp² = pooled variance)

● Test for correlation: t = [r × √(n − 2)] / √(1 − r²)

● Regression slope: b̂₁ = Cov(X, Y) / σX²

● Coefficient of determination: R² = regression sum of squares / total sum of squares

● Standard error of estimate: SEE = √(mean squared error)
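A minimal sketch of the two-sample t-test with equal variances assumed, using hypothetical summary statistics; the pooled-variance step is the standard definition (not spelled out in the formula list above), and the result is cross-checked against SciPy.

from math import sqrt
from scipy.stats import ttest_ind_from_stats

# Hypothetical summary statistics for two samples
m1, s1, n1 = 5.2, 1.1, 40
m2, s2, n2 = 4.8, 1.3, 35

# Pooled variance, then the t-statistic from the formula above
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
t_stat = (m1 - m2) / sqrt(sp2 / n1 + sp2 / n2)

# Cross-check against SciPy's implementation
t_check, p_value = ttest_ind_from_stats(m1, s1, n1, m2, s2, n2, equal_var=True)
print(t_stat, t_check, p_value)   # the two t values should match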



Cumulative Z-Table

Standard Normal Distribution

P(Z ≤ z) = N(z) for z ≥ 0

z 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 0.5000 0.5040 0.5080 0.5120 0.5160 0.5199 0.5239 0.5279 0.5319 0.5359
0.1 0.5398 0.5438 0.5478 0.5517 0.5557 0.5596 0.5636 0.5675 0.5714 0.5753
0.2 0.5793 0.5832 0.5871 0.5910 0.5948 0.5987 0.6026 0.6064 0.6103 0.6141
0.3 0.6179 0.6217 0.6255 0.6293 0.6331 0.6368 0.6406 0.6443 0.6480 0.6517
0.4 0.6554 0.6591 0.6628 0.6664 0.6700 0.6736 0.6772 0.6808 0.6844 0.6879

0.5 0.6915 0.6950 0.6985 0.7019 0.7054 0.7088 0.7123 0.7157 0.7190 0.7224
0.6 0.7257 0.7291 0.7324 0.7357 0.7389 0.7422 0.7454 0.7486 0.7517 0.7549
0.7 0.7580 0.7611 0.7642 0.7673 0.7704 0.7734 0.7764 0.7794 0.7823 0.7852
0.8 0.7881 0.7910 0.7939 0.7967 0.7995 0.8023 0.8051 0.8078 0.8106 0.8133
0.9 0.8159 0.8186 0.8212 0.8238 0.8264 0.8289 0.8315 0.8340 0.8365 0.8389

1.0 0.8413 0.8438 0.8461 0.8485 0.8508 0.8531 0.8554 0.8577 0.8599 0.8621
1.1 0.8643 0.8665 0.8686 0.8708 0.8729 0.8749 0.8770 0.8790 0.8810 0.8830
1.2 0.8849 0.8869 0.8888 0.8907 0.8925 0.8944 0.8962 0.8980 0.8997 0.9015
1.3 0.9032 0.9049 0.9066 0.9082 0.9099 0.9115 0.9131 0.9147 0.9162 0.9177
1.4 0.9192 0.9207 0.9222 0.9236 0.9251 0.9265 0.9279 0.9292 0.9306 0.9319

1.5 0.9332 0.9345 0.9357 0.9370 0.9382 0.9394 0.9406 0.9418 0.9429 0.9441
1.6 0.9452 0.9463 0.9474 0.9484 0.9495 0.9505 0.9515 0.9525 0.9535 0.9545
1.7 0.9554 0.9564 0.9573 0.9582 0.9591 0.9599 0.9608 0.9616 0.9625 0.9633
1.8 0.9641 0.9649 0.9656 0.9664 0.9671 0.9678 0.9686 0.9693 0.9699 0.9706
1.9 0.9713 0.9719 0.9726 0.9732 0.9738 0.9744 0.9750 0.9756 0.9761 0.9767

2.0 0.9772 0.9778 0.9783 0.9788 0.9793 0.9798 0.9803 0.9808 0.9812 0.9817
2.1 0.9821 0.9826 0.9830 0.9834 0.9838 0.9842 0.9846 0.9850 0.9854 0.9857
2.2 0.9861 0.9864 0.9868 0.9871 0.9875 0.9878 0.9881 0.9884 0.9887 0.9890
2.3 0.9893 0.9896 0.9898 0.9901 0.9904 0.9906 0.9909 0.9911 0.9913 0.9916
2.4 0.9918 0.9920 0.9922 0.9925 0.9927 0.9929 0.9931 0.9932 0.9934 0.9936

2.5 0.9938 0.9940 0.9941 0.9943 0.9945 0.9946 0.9948 0.9949 0.9951 0.9952
2.6 0.9953 0.9955 0.9956 0.9957 0.9959 0.9960 0.9961 0.9962 0.9963 0.9964
2.7 0.9965 0.9966 0.9967 0.9968 0.9969 0.9970 0.9971 0.9972 0.9973 0.9974
2.8 0.9974 0.9975 0.9976 0.9977 0.9977 0.9978 0.9979 0.9979 0.9980 0.9981
2.9 0.9981 0.9982 0.9982 0.9983 0.9984 0.9984 0.9985 0.9985 0.9986 0.9986
3.0 0.9987 0.9987 0.9987 0.9988 0.9988 0.9989 0.9989 0.9989 0.9990 0.9990

CUMULATIVE Z-TABLE (CONT.)


Standard Normal Distribution

P(Z ≤ z) = N(z) for z ≤ 0

z 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
–0.0 0.5000 0.4960 0.4920 0.4880 0.4840 0.4801 0.4761 0.4721 0.4681 0.4641
–0.1 0.4602 0.4562 0.4522 0.4483 0.4443 0.4404 0.4364 0.4325 0.4286 0.4247
–0.2 0.4207 0.4168 0.4129 0.4090 0.4052 0.4013 0.3974 0.3936 0.3897 0.3859
–0.3 0.3821 0.3783 0.3745 0.3707 0.3669 0.3632 0.3594 0.3557 0.3520 0.3483
–0.4 0.3446 0.3409 0.3372 0.3336 0.3300 0.3264 0.3228 0.3192 0.3156 0.3121

–0.5 0.3085 0.3050 0.3015 0.2981 0.2946 0.2912 0.2877 0.2843 0.2810 0.2776
–0.6 0.2743 0.2709 0.2676 0.2643 0.2611 0.2578 0.2546 0.2514 0.2483 0.2451
–0.7 0.2420 0.2389 0.2358 0.2327 0.2297 0.2266 0.2236 0.2207 0.2177 0.2148
–0.8 0.2119 0.2090 0.2061 0.2033 0.2005 0.1977 0.1949 0.1922 0.1894 0.1867
–0.9 0.1841 0.1814 0.1788 0.1762 0.1736 0.1711 0.1685 0.1660 0.1635 0.1611

–1.0 0.1587 0.1562 0.1539 0.1515 0.1492 0.1469 0.1446 0.1423 0.1401 0.1379
–1.1 0.1357 0.1335 0.1314 0.1292 0.1271 0.1251 0.1230 0.1210 0.1190 0.1170
–1.2 0.1151 0.1131 0.1112 0.1093 0.1075 0.1057 0.1038 0.1020 0.1003 0.0985
–1.3 0.0968 0.0951 0.0934 0.0918 0.0901 0.0885 0.0869 0.0853 0.0838 0.0823
–1.4 0.0808 0.0793 0.0778 0.0764 0.0749 0.0735 0.0721 0.0708 0.0694 0.0681

–1.5 0.0668 0.0655 0.0643 0.0630 0.0618 0.0606 0.0594 0.0582 0.0571 0.0559
–1.6 0.0548 0.0537 0.0526 0.0516 0.0505 0.0495 0.0485 0.0475 0.0465 0.0455
–1.7 0.0446 0.0436 0.0427 0.0418 0.0409 0.0401 0.0392 0.0384 0.0375 0.0367
–1.8 0.0359 0.0351 0.0344 0.0336 0.0329 0.0322 0.0314 0.0307 0.0301 0.0294
–1.9 0.0287 0.0281 0.0274 0.0268 0.0262 0.0256 0.0250 0.0244 0.0239 0.0233

–2.0 0.0228 0.0222 0.0217 0.0212 0.0207 0.0202 0.0197 0.0192 0.0188 0.0183
–2.1 0.0179 0.0174 0.0170 0.0166 0.0162 0.0158 0.0154 0.0150 0.0146 0.0143
–2.2 0.0139 0.0136 0.0132 0.0129 0.0125 0.0122 0.0119 0.0116 0.0113 0.0110
–2.3 0.0107 0.0104 0.0102 0.0099 0.0096 0.0094 0.0091 0.0089 0.0087 0.0084
–2.4 0.0082 0.0080 0.0078 0.0076 0.0073 0.0071 0.0069 0.0068 0.0066 0.0064

–2.5 0.0062 0.0060 0.0059 0.0057 0.0055 0.0054 0.0052 0.0051 0.0049 0.0048
–2.6 0.0047 0.0045 0.0044 0.0043 0.0041 0.0040 0.0039 0.0038 0.0037 0.0036
–2.7 0.0035 0.0034 0.0033 0.0032 0.0031 0.0030 0.0029 0.0028 0.0027 0.0026
–2.8 0.0026 0.0025 0.0024 0.0023 0.0023 0.0022 0.0021 0.0021 0.0020 0.0019
–2.9 0.0019 0.0018 0.0018 0.0017 0.0016 0.0016 0.0015 0.0015 0.0014 0.0014
–3.0 0.0013 0.0013 0.0013 0.0012 0.0012 0.0011 0.0011 0.0011 0.0010 0.0010

Student’s T-Distribution

Level of Significance for One-Tailed Test


df 0.100 0.050 0.025 0.01 0.005 0.0005

Level of Significance for Two-Tailed Test


df 0.20 0.10 0.05 0.02 0.01 0.001
1 3.078 6.314 12.706 31.821 63.657 636.619
2 1.886 2.920 4.303 6.965 9.925 31.599
3 1.638 2.353 3.182 4.541 5.841 12.294
4 1.533 2.132 2.776 3.747 4.604 8.610
5 1.476 2.015 2.571 3.365 4.032 6.869

6 1.440 1.943 2.447 3.143 3.707 5.959


7 1.415 1.895 2.365 2.998 3.499 5.408
8 1.397 1.860 2.306 2.896 3.355 5.041
9 1.383 1.833 2.262 2.821 3.250 4.781
10 1.372 1.812 2.228 2.764 3.169 4.587

11 1.363 1.796 2.201 2.718 3.106 4.437


12 1.356 1.782 2.179 2.681 3.055 4.318
13 1.350 1.771 2.160 2.650 3.012 4.221
14 1.345 1.761 2.145 2.624 2.977 4.140
15 1.341 1.753 2.131 2.602 2.947 4.073

16 1.337 1.746 2.120 2.583 2.921 4.015


17 1.333 1.740 2.110 2.567 2.898 3.965
18 1.330 1.734 2.101 2.552 2.878 3.922
19 1.328 1.729 2.093 2.539 2.861 3.883
20 1.325 1.725 2.086 2.528 2.845 3.850

21 1.323 1.721 2.080 2.518 2.831 3.819


22 1.321 1.717 2.074 2.508 2.819 3.792
23 1.319 1.714 2.069 2.500 2.807 3.768
24 1.318 1.711 2.064 2.492 2.797 3.745
25 1.316 1.708 2.060 2.485 2.787 3.725

26 1.315 1.706 2.056 2.479 2.779 3.707


27 1.314 1.703 2.052 2.473 2.771 3.690
28 1.313 1.701 2.048 2.467 2.763 3.674
29 1.311 1.699 2.045 2.462 2.756 3.659
30 1.310 1.697 2.042 2.457 2.750 3.646

40 1.303 1.684 2.021 2.423 2.704 3.551



60 1.296 1.671 2.000 2.390 2.660 3.460


120 1.289 1.658 1.980 2.358 2.617 3.373
∞ 1.282 1.645 1.960 2.326 2.576 3.291

F-Table at 5% (Upper Tail)

F-Table, Critical Values, 5% in Upper Tail

Degrees of freedom for the numerator along top row

Degrees of freedom for the denominator along side row

1 2 3 4 5 6 7 8 9 10 12 15 20
1 161 200 216 225 230 234 237 239 241 242 244 246 248
2 18.5 19.0 19.2 19.2 19.3 19.4 19.4 19.4 19.4 19.4 19.4 19.4 19.4
3 10.1 9.55 9.28 9.12 9.01 8.94 8.89 8.85 8.81 8.79 8.74 8.70 8.66
4 7.71 6.94 6.59 6.39 6.26 6.16 6.09 6.04 6.00 5.96 5.91 5.86 5.80
5 6.61 5.79 5.41 5.19 5.05 4.95 4.88 4.82 4.77 4.74 4.68 4.62 4.56

6 5.99 5.14 4.76 4.53 4.39 4.28 4.21 4.15 4.10 4.06 4.00 3.94 3.87
7 5.59 4.74 4.35 4.12 3.97 3.87 3.79 3.73 3.68 3.64 3.57 3.51 3.44
8 5.32 4.46 4.07 3.84 3.69 3.58 3.50 3.44 3.39 3.35 3.28 3.22 3.15
9 5.12 4.26 3.86 3.63 3.48 3.37 3.29 3.23 3.18 3.14 3.07 3.01 2.94
10 4.96 4.10 3.71 3.48 3.33 3.22 3.14 3.07 3.02 2.98 2.91 2.85 2.77

11 4.84 3.98 3.59 3.36 3.20 3.09 3.01 2.95 2.90 2.85 2.79 2.72 2.65
12 4.75 3.89 3.49 3.26 3.11 3.00 2.91 2.85 2.80 2.75 2.69 2.62 2.54
13 4.67 3.81 3.41 3.18 3.03 2.92 2.83 2.77 2.71 2.67 2.60 2.53 2.46
14 4.60 3.74 3.34 3.11 2.96 2.85 2.76 2.70 2.65 2.60 2.53 2.46 2.39
15 4.54 3.68 3.29 3.06 2.90 2.79 2.71 2.64 2.59 2.54 2.48 2.40 2.33

16 4.49 3.63 3.24 3.01 2.85 2.74 2.66 2.59 2.54 2.49 2.42 2.35 2.28
17 4.45 3.59 3.20 2.96 2.81 2.70 2.61 2.55 2.49 2.45 2.38 2.31 2.23
18 4.41 3.55 3.16 2.93 2.77 2.66 2.58 2.51 2.46 2.41 2.34 2.27 2.19
19 4.38 3.52 3.13 2.90 2.74 2.63 2.54 2.48 2.42 2.38 2.31 2.23 2.16
20 4.35 3.49 3.10 2.87 2.71 2.60 2.51 2.45 2.39 2.35 2.28 2.20 2.12

21 4.32 3.47 3.07 2.84 2.68 2.57 2.49 2.42 2.37 2.32 2.25 2.18 2.10
22 4.30 3.44 3.05 2.82 2.66 2.55 2.46 2.40 2.34 2.30 2.23 2.15 2.07
23 4.28 3.42 3.03 2.80 2.64 2.53 2.44 2.37 2.32 2.27 2.20 2.13 2.05
24 4.26 3.40 3.01 2.78 2.62 2.51 2.42 2.36 2.30 2.25 2.18 2.11 2.03
25 4.24 3.39 2.99 2.76 2.60 2.49 2.40 2.34 2.28 2.24 2.16 2.09 2.01

30 4.17 3.32 2.92 2.69 2.53 2.42 2.33 2.27 2.21 2.16 2.09 2.01 1.93
40 4.08 3.23 2.84 2.61 2.45 2.34 2.25 2.18 2.12 2.08 2.00 1.92 1.84
60 4.00 3.15 2.76 2.53 2.37 2.25 2.17 2.10 2.04 1.99 1.92 1.84 1.75
120 3.92 3.07 2.68 2.45 2.29 2.18 2.09 2.02 1.96 1.91 1.83 1.75 1.66
∞ 3.84 3.00 2.60 2.37 2.21 2.10 2.01 1.94 1.88 1.83 1.75 1.67 1.57

F-Table at 2.5% (Upper Tail)

F-Table, Critical Values, 2.5% in Upper Tail

Degrees of freedom for the numerator along top row

Degrees of freedom for the denominator along side row

1 2 3 4 5 6 7 8 9 10 12
1 648 799 864 900 922 937 948 957 963 969 977
2 38.51 39.00 39.17 39.25 39.30 39.33 39.36 39.37 39.39 39.40 39.41
3 17.44 16.04 15.44 15.10 14.88 14.73 14.62 14.54 14.47 14.42 14.34
4 12.22 10.65 9.98 9.60 9.36 9.20 9.07 8.98 8.90 8.84 8.75
5 10.01 8.43 7.76 7.39 7.15 6.98 6.85 6.76 6.68 6.62 6.52

6 8.81 7.26 6.60 6.23 5.99 5.82 5.70 5.60 5.52 5.46 5.37
7 8.07 6.54 5.89 5.52 5.29 5.12 4.99 4.90 4.82 4.76 4.67
8 7.57 6.06 5.42 5.05 4.82 4.65 4.53 4.43 4.36 4.30 4.20
9 7.21 5.71 5.08 4.72 4.48 4.32 4.20 4.10 4.03 3.96 3.87
10 6.94 5.46 4.83 4.47 4.24 4.07 3.95 3.85 3.78 3.72 3.62

11 6.72 5.26 4.63 4.28 4.04 3.88 3.76 3.66 3.59 3.53 3.43
12 6.55 5.10 4.47 4.12 3.89 3.73 3.61 3.51 3.44 3.37 3.28
13 6.41 4.97 4.35 4.00 3.77 3.60 3.48 3.39 3.31 3.25 3.15
14 6.30 4.86 4.24 3.89 3.66 3.50 3.38 3.29 3.21 3.15 3.05
15 6.20 4.77 4.15 3.80 3.58 3.41 3.29 3.20 3.12 3.06 2.96

16 6.12 4.69 4.08 3.73 3.50 3.34 3.22 3.12 3.05 2.99 2.89
17 6.04 4.62 4.01 3.66 3.44 3.28 3.16 3.06 2.98 2.92 2.82
18 5.98 4.56 3.95 3.61 3.38 3.22 3.10 3.01 2.93 2.87 2.77
19 5.92 4.51 3.90 3.56 3.33 3.17 3.05 2.96 2.88 2.82 2.72
20 5.87 4.46 3.86 3.51 3.29 3.13 3.01 2.91 2.84 2.77 2.68

21 5.83 4.42 3.82 3.48 3.25 3.09 2.97 2.87 2.80 2.73 2.64
22 5.79 4.38 3.78 3.44 3.22 3.05 2.93 2.84 2.76 2.70 2.60
23 5.75 4.35 3.75 3.41 3.18 3.02 2.90 2.81 2.73 2.67 2.57
24 5.72 4.32 3.72 3.38 3.15 2.99 2.87 2.78 2.70 2.64 2.54
25 5.69 4.29 3.69 3.35 3.13 2.97 2.85 2.75 2.68 2.61 2.51

30 5.57 4.18 3.59 3.25 3.03 2.87 2.75 2.65 2.57 2.51 2.41

40 5.42 4.05 3.46 3.13 2.90 2.74 2.62 2.53 2.45 2.39 2.29
60 5.29 3.93 3.34 3.01 2.79 2.63 2.51 2.41 2.33 2.27 2.17
120 5.15 3.80 3.23 2.89 2.67 2.52 2.39 2.30 2.22 2.16 2.05
∞ 5.02 3.69 3.12 2.79 2.57 2.41 2.29 2.19 2.11 2.05 1.94

Chi-Squared Table

Values of χ2 (Degrees of Freedom, Level of Significance)

Probability in Right Tail

Degrees of freedom 0.99 0.975 0.95 0.9 0.1 0.05 0.025 0.01 0.005
1 0.000157 0.000982 0.003932 0.0158 2.706 3.841 5.024 6.635 7.879

2 0.020100 0.050636 0.102586 0.2107 4.605 5.991 7.378 9.210 10.597

3 0.1148 0.2158 0.3518 0.5844 6.251 7.815 9.348 11.345 12.838

4 0.297 0.484 0.711 1.064 7.779 9.488 11.143 13.277 14.860

5 0.554 0.831 1.145 1.610 9.236 11.070 12.832 15.086 16.750

6 0.872 1.237 1.635 2.204 10.645 12.592 14.449 16.812 18.548

7 1.239 1.690 2.167 2.833 12.017 14.067 16.013 18.475 20.278

8 1.647 2.180 2.733 3.490 13.362 15.507 17.535 20.090 21.955

9 2.088 2.700 3.325 4.168 14.684 16.919 19.023 21.666 23.589

10 2.558 3.247 3.940 4.865 15.987 18.307 20.483 23.209 25.188

11 3.053 3.816 4.575 5.578 17.275 19.675 21.920 24.725 26.757

12 3.571 4.404 5.226 6.304 18.549 21.026 23.337 26.217 28.300

13 4.107 5.009 5.892 7.041 19.812 22.362 24.736 27.688 29.819

14 4.660 5.629 6.571 7.790 21.064 23.685 26.119 29.141 31.319

15 5.229 6.262 7.261 8.547 22.307 24.996 27.488 30.578 32.801

16 5.812 6.908 7.962 9.312 23.542 26.296 28.845 32.000 34.267

17 6.408 7.564 8.672 10.085 24.769 27.587 30.191 33.409 35.718

18 7.015 8.231 9.390 10.865 25.989 28.869 31.526 34.805 37.156

19 7.633 8.907 10.117 11.651 27.204 30.144 32.852 36.191 38.582

20 8.260 9.591 10.851 12.443 28.412 31.410 34.170 37.566 39.997



21 8.897 10.283 11.591 13.240 29.615 32.671 35.479 38.932 41.401

22 9.542 10.982 12.338 14.041 30.813 33.924 36.781 40.289 42.796

23 10.196 11.689 13.091 14.848 32.007 35.172 38.076 41.638 44.181

24 10.856 12.401 13.848 15.659 33.196 36.415 39.364 42.980 45.558

25 11.524 13.120 14.611 16.473 34.382 37.652 40.646 44.314 46.928

26 12.198 13.844 15.379 17.292 35.563 38.885 41.923 45.642 48.290

27 12.878 14.573 16.151 18.114 36.741 40.113 43.195 46.963 49.645

28 13.565 15.308 16.928 18.939 37.916 41.337 44.461 48.278 50.994

29 14.256 16.047 17.708 19.768 39.087 42.557 45.722 49.588 52.335

30 14.953 16.791 18.493 20.599 40.256 43.773 46.979 50.892 53.672

50 29.707 32.357 34.764 37.689 63.167 67.505 71.420 76.154 79.490

60 37.485 40.482 43.188 46.459 74.397 79.082 83.298 88.379 91.952

80 53.540 57.153 60.391 64.278 96.578 101.879 106.629 112.329 116.321

100 70.065 74.222 77.929 82.358 118.498 124.342 129.561 135.807 140.170