
Walden University

COLLEGE OF MANAGEMENT AND TECHNOLOGY

This is to certify that the doctoral dissertation by

Zachary A. Smith

has been found to be complete and satisfactory in all respects,


and that any and all revisions required by
the review committee have been made.

Review Committee
Dr. Mohammad Sharifzadeh, Committee Chairperson,
Applied Management and Decision Sciences Faculty

Dr. William H. Brent, Committee Member,


Applied Management and Decision Sciences Faculty

Dr. Robert T. Aubey, Committee Member,


Applied Management and Decision Sciences Faculty

Chief Academic Officer

Denise DeZolt, Ph.D.

Walden University
2008
ABSTRACT

An Empirical Investigation of Initial Public Offering (IPO) Price Performance

by

Zachary Alexander Smith

M.B.A., Capella University, 2005


B.S., Old Dominion University, 2004

Dissertation Submitted in Partial Fulfillment


of the Requirements for the Degree of
Doctor of Philosophy
Applied Management & Decision Science

Walden University
August 2008
ABSTRACT

For decades, researchers have disagreed on the magnitude and predictability of abnormal

securities’ price performance generated by initial public offerings (IPOs). The problem

researched in this study was twofold: (a) whether IPOs generate abnormal price performances during their initial years of public trading and (b) whether, if IPOs systematically generate abnormal price performance, the identification of predictable abnormal price performance conflicts with the theory of market efficiency. The purpose of this study

was to identify the best specified and most powerful method of abnormal performance

detection and apply that method to examine the price performance of IPOs. Matching by size, industry, and book-to-market ratios, this study explored which of the resulting seven portfolio and matched-firm methods of abnormal performance detection produced the best specified and most powerful test statistics; additionally, it considered whether IPOs generate abnormal performance. The researcher used the event study approach for the research

design along with the buy and hold abnormal return (BHAR) method of aggregating

returns to conduct this analysis. The findings were that (a) all of the matched-firm

methods of abnormal performance detection were best specified and most powerful and

(b) the IPOs generated statistically significant abnormal price performances in all of the

hypotheses tested. These findings are in conflict with the theory of market efficiency and

imply the existence of social injustice in the IPOs’ premarket pricing and allocation

process. Revisions to the IPO premarket pricing and allocation method may lead to

significant positive social change; the revisions could prevent unfair advantages gained

by those investors with privileged premarket IPO allocations and could provide equal

opportunities for investors to benefit from the IPO process.


 
An Empirical Investigation of Initial Public Offering (IPO) Price Performance

by

Zachary Alexander Smith

M.B.A., Capella University, 2005


B.S., Old Dominion University, 2004

Dissertation Submitted in Partial Fulfillment


of the Requirements for the Degree of
Doctor of Philosophy
Applied Management & Decision Science

Walden University
August 2008
DEDICATION

I dedicate this dissertation to my children—Cooper and Harris. Without your

unconditional love and understanding, I do not think I would have completed this

process. Thank you and I love you.


ACKNOWLEDGMENTS

I would first like to thank my mother for giving me the drive to attempt to obtain

my Ph.D., and my father for being there for me throughout completion of my degree. In

addition, I would like to thank my dissertation committee for their assistance: Dr.

Mohammad Sharifzadeh, CFA (Faculty Mentor/Dissertation Committee Chair), Dr.

William Brent (Dissertation Committee Member), and Dr. Robert Aubey (Dissertation

Committee Member). Finally, I would like to thank my wife of eight years, Tamara, for

suffering through this and my other academic adventures.

TABLE OF CONTENTS

DEDICATION.................................................................................................................... ii

TABLE OF CONTENTS................................................................................................... iii

LIST OF TABLES............................................................................................................ vii

LIST OF FIGURES ......................................................................................................... viii

CHAPTER 1: INTRODUCTION TO THE STUDY...........................................................1


Introduction....................................................................................................................1
Background of the Study ...............................................................................................4
Problem Statement .........................................................................................................6
Purpose of the Study ......................................................................................................7
Research Questions and Hypotheses .............................................................................7
Section 1: Specification and Power ........................................................................ 8
Section 2: Initial & Subsequent Short-Term Performance ..................................... 9
Section 3: Longer-Term Performance .................................................................. 12
Section 4: The Quiet Period and Lockup Expiration ............................................ 12
Theoretical Base...........................................................................................................13
Definition of Terms......................................................................................................16
Assumptions.................................................................................................................18
Limitations ...................................................................................................................19
Significance of the Study.............................................................................................20
Social Change ..............................................................................................................21
Summary......................................................................................................................22

CHAPTER 2: LITERATURE REVIEW ...........................................................................23


IPO Process..................................................................................................................23
Anomalous IPO Events................................................................................................25
Initially Abnormally Positive Performance .......................................................... 25
Long-term Negative Performance......................................................................... 27
The Quiet Period ................................................................................................... 29
Lockup Expiration ................................................................................................ 30
Securities’ Pricing, Estimating Returns, and Efficient Markets ..................................31
Methodological debate.................................................................................................33
Measuring Normal Performance........................................................................... 33
Benchmark Construction ...................................................................................... 34
Benchmark Construction ................................................................................ 35
Measuring Abnormal Performance....................................................................... 40
BHAR vs. CAR............................................................................................... 41
Summary......................................................................................................................43

CHAPTER 3: RESEARCH METHOD .............................................................................45
Introduction..................................................................................................................45
Research Design and Methodology .............................................................................45
Justification for Using the Event Study Methodology.......................................... 45
Approach to Implementing the Event Study Methodology .................................. 49
Setting and Sample ......................................................................................................49
Canvassing a Significant Target Population ......................................................... 50
Data Collection ............................................................................................................51
Data Analysis ...............................................................................................................52
The Variables ........................................................................................................ 52
The Hypotheses of Research................................................................................. 53
Section 1: Specification and Power Analysis ................................................. 53
Section 2: Initial and Subsequent Short-term Performance............................ 56
Section 3: Longer-term Performance.............................................................. 58
Section 4: The Quiet Period & Lockup Expiration......................................... 59
Summary......................................................................................................................61

CHAPTER 4: RESULTS...................................................................................................63
Introduction..................................................................................................................63
Specification and Power ..............................................................................................63
Test of the Hypotheses.................................................................................................64
Section 1: Test for Specification........................................................................... 64
Section 2: Tests of Power ..................................................................................... 68
Section 3: Initial Performance .............................................................................. 74
IPO Performance (Pre-issuance)..................................................................... 75
Initial Day of Public Trading .......................................................................... 77
Section 3: Long-term Abnormal Performance...................................................... 78
Section 4: Quiet and Lockup Expiration............................................................... 80
Conclusions..................................................................................................................81

CHAPTER 5: SUMMARY, CONCLUSION, AND RECOMMENDATIONS ...............83

Introduction........................................................................................................................83
Overview......................................................................................................................83
Interpretation of findings .............................................................................................85
Specification and Power Analyses........................................................................ 86
Biases ............................................................................................................. 86
Matched-Firm vs. Matched-Portfolio Approach............................................. 87
Matched Firm—Specification and Power....................................................... 89
Short-term IPO Performance ................................................................................ 90
Long-term IPO Performance................................................................................. 92
Event-related IPO Performance ............................................................................ 93
Implications for Social Change....................................................................................95
Short-term IPO Performance and Income Disparity in the United States. ........... 95

Chinks in the Armor of Market Efficiency ........................................................... 97
Integration and Summary...................................................................................... 99
Validity and Reliability of the Research Results .......................................................101
Recommendations for Action ....................................................................................104
Recommendations for Further Study .........................................................................105
Conclusion .................................................................................................................107

REFERENCES ................................................................................................................109

APPENDIX A:.................................................................................................................115
Power Analysis—1-Year Event Horizon—n=50.......................................................115

APPENDIX B ..................................................................................................................116
Power Analysis—1-Year Event Horizon—n=500.....................................................116

APPENDIX C ..................................................................................................................117
Power Analyses—2-Year Event Horizon—n= 50.....................................................117

APPENDIX D..................................................................................................................118
Power Analyses—2-Year Event Horizon—n= 500...................................................118

APPENDIX E ..................................................................................................................119
Power Analysis—3-Year Event Horizon—n=50.......................................................119

APPENDIX F...................................................................................................................120
Power Analysis—3-Year Event Horizon—n=500.....................................................120

APPENDIX G..................................................................................................................121
Power Analysis—4-Year Event Horizon—n=50.......................................................121

APPENDIX H..................................................................................................................122
Power Analysis—4-Year Event Horizon—n=500.....................................................122

APPENDIX I ...................................................................................................................123
Average Initial Day IPO Performance by Year and Month.......................................123

APPENDIX J ...................................................................................................................126
Yearly Average of Pre-Trade Performance 1997-2007 .............................................126

APPENDIX K..................................................................................................................127
Yearly Abnormal Performance at the Expiration of the Quiet Period.......................127

APPENDIX L ..................................................................................................................128
Abnormal Performance & Lockup Expiration...........................................................128

APPENDIX M .................................................................................................................129

Popular Benchmarks used to Represent Normal Performance ..................................129

APPENDIX N..................................................................................................................130
Methods used in Long-term Event Studies................................................................130

CURRICULUM VITAE..................................................................................................131

LIST OF TABLES

Table 1. Specification analysis I ..................................................................................... 663


Table 2. Specification analysis II.................................................................................... 674
Table 3. Pre-public trading averaged monthly IPO performance ..................................... 73

LIST OF FIGURES

Figure 1. One-year power curve. ...................................................................................... 69


Figure 2. One-year power curve (n=500). ........................................................................ 70
Figure 3. Four-year power curve (n=50)........................................................................... 71
Figure 4. Four-year power curve (n=500)......................................................................... 72
Figure 5. Long-term abnormal performance..................................................................... 79
Figure 6. BHAR (IPO V. Matched-firm).......................................................................... 80

CHAPTER 1:

INTRODUCTION TO THE STUDY

Introduction

In this study, I have analyzed events that influence the price performance of

unseasoned initial public offerings (IPOs). Recently researchers have spent considerable

time debating whether IPOs produce abnormal performance (see Brav, Geczy, &

Gompers, 2000; Brown & Weinstein, 1985; Cheng, Chueng, & Po, 2004; Schultz, 2003).

Researchers have formed a substantial empirical base to support the hypotheses

concluding that IPOs have generated abnormal market performances (see Affleck-

Graves, Hedge, & Miller, 1996; Ibbotson, 1975; Loughran & Ritter, 2004; Reily &

Hatfield, 1969). In this study, I tested the specification and power of different

methodological approaches used to identify abnormal performance and the prevalence of

abnormal performance associated with events occurring in response to the issuance of

unseasoned IPOs. Both of these concepts are critical components of the research project.

Therefore, I will address them individually in the following paragraphs.

Researchers choose to use benchmarks or market proxies to model normal

performance when examining IPO performance because IPOs lack historical performance

data. Without historical performance data, researchers have limited ability to produce

reliable estimates of expected return. To construct a model of normal performance that

does not rely on the correlation of the market and event firms’ historical performance,

researchers need to find a method that captures systematic and idiosyncratic risks

involved in a security’s price performance. Researchers have substituted models based



upon firm-specific characteristics to gauge normal performance (referred to in this

document as the matched-firm approach). To ensure that researchers are using the best

criteria to model normal performance they must conduct specification and power

analyses on the methods they use to detect abnormal performance. The majority of the

problems affecting the results of event studies occur when the event horizon is

lengthened; however, Barber and Lyon (1997) and Lyon, Barber, and Tsai (1999)

provided examples of biases that influence the results of short-term analyses.

The biases that are likely to influence the results of longer-term analyses are, according to Lyon, Barber, and Tsai (1997), the rebalancing, skewness, and survivorship biases. I will discuss each of these biases in chapter 2 and make modifications to the models of normal performance in an attempt to minimize these biases' impact on the test

statistics’ accuracy. In addition, Lyon et al. (1997) found that “the analysis of long-run

abnormal returns is treacherous” (p. 198), because of these biases; therefore, the

importance of addressing each of these biases and ensuring that the statistical method

used accurately identifies abnormal performance is a critical component of research

projects testing market efficiency.

After I select a well-specified method that can accurately identify abnormal

performance, I will begin testing hypotheses related to events that occur as companies

issue unseasoned equity shares. The hypotheses related to unseasoned equity issuance

include (a) short-term outperformance, (b) longer-term underperformance, and (c)

abnormal performances surrounding the expiration of the quiet and lockup periods.

Researchers have tested all of these hypotheses using different samples and

methodologies. The reasons why I have reexamined these hypotheses include evidence of

biased benchmarks, empirical disagreement regarding the significance of the abnormal

performance, small-sample biases, and a need for researchers to update some of the tests.

Previous studies conducted on IPO performances did not combine tests of the

methodology used to detect abnormal performance and tests of the hypotheses previously

mentioned. This combination is necessary to evaluate the accuracy of the conclusions

derived from the analyses, according to Lyon et al. (1999).

The hypotheses that I have evaluated in this study are as follows. First, IPOs

produce substantially positive abnormal performances, generally between the offer of the

issue and close of trade on the first day of public trading (see Krigman, Shaw, &

Womack, 1999; McDonald & Fisher, 1972; Loughran & Ritter, 2004; Reily & Hatfield,

1969). The magnitudes of these gains are significant; Ritter (2003) found an average

initial IPO return of 18.4% for IPOs listed from 1960-2000. Second, IPOs, on average,

underperform standard benchmarks in studies of longer-term performance. Ritter (1991)

found that IPOs underperform standard market indices by 27.39% over a 3-year period;

this is equivalent to an investor underperforming the benchmark by 9.13% a year. Third,

events occur in the initial year of trading following an IPO that produce abnormal

performances. These events are the expiration of the quiet and lockup periods. For

example, Bradley, Jordan, and Ritter (2003) showed that the expiration of the quiet

period may cause a significant 4.1% positive performance movement in IPOs (p. 1), and

Field and Hanka (2001) calculated a significantly negative 1.5% return as the lockup

period expires (p. 471). In these cases, abnormal performances seem to occur

systematically; if I substantiate the systematic nature of these anomalous events, investors

should adjust their risk/return expectations for these offerings and for the events resulting from them.

Background of the Study

The historic price performances of IPOs traded on the New York Stock Exchange

(NYSE), National Association of Securities Dealers Automated Quotation (NASDAQ),

and the American Stock Exchange (AMEX) produce interesting questions regarding what

return investors should require for their invested capital. First, shortly after companies

offer the IPOs to the market, they generate abnormal returns; the magnitude is, according

to Ritter (2003), 18.4%. Performance of this magnitude indicates that the market is either

misvaluing IPOs by pushing the price above its fundamental value, or the underwriters of

IPOs are consistently undervaluing these issues; neither of these outcomes fits well

within the efficient market paradigm. Second, the mysterious performances of IPOs seem

to continue as they mature; researchers have shown that the IPOs, when compared against

standard benchmarks, obtain longer-term negative performances. For example, Ritter

(1991) found that if investors use a buy-and-hold investment strategy, investing in all

IPOs that companies have issued to the public and holding these issues for 3-years, they

would have obtained a -27.39% BHAR.

Along these same lines, at the conclusion of the quiet and lockup periods, IPOs

exhibit abnormal performances in a systematic manner, trending in one direction or

another (positive or negative performance). Bradley, Jordan, and Ritter (2003)

documented that at the expiration of the quiet period IPOs produce an abnormal return of

4.1%. Field and Hanka (2001) and Garfinkle, Malkiel, and Bontas (2002) found that at

the expiration of an IPO’s lockup period, firms produce negative performances ranging

from 1.5 to 4.47%. If investors know that at the end of the lockup period the trading

volume of a security increases because there is an increase in the liquidity provided for

the issue, then investors should incorporate the knowledge of this future event into the

value of the offering at the IPO’s issuance.

Researchers have attempted to document the anomalous nature of IPO

performance, but flaws in the statistical methodologies used to test for abnormal return

cause major disagreements in the interpretation of their findings and conclusions (see

Mitchell & Stafford, 2000; Barber & Lyon, 1997). The flaws in researchers’

methodologies are biases that are evident when researchers use different methods to

aggregate cumulative returns and different portfolio constructions to attempt to

approximate normal performance. Researchers contaminate portfolios because they fall

prey to the rebalancing, skewness, new issue, and survivorship biases, which I will

address briefly in chapter 2. Researchers also disagree on the methodology of aggregation

of abnormal returns generated by IPOs. The two competing methodologies are the BHAR

and the Cumulative Abnormal Return (CAR), which I will also examine in chapter 2. In

addition, the research base is plagued with small sample biases and fragmentation. To

begin to correct the preceding biases and potentially damaging methodological

constructions, I propose the following.



Problem Statement

The problems that I will address in this dissertation are threefold: (a) there is

fragmentation in IPO performance literature, (b) researchers have relied on small sample

sizes in examining IPO performance (quiet and lockup expiration), and (c) previous

analyses are discredited because the methodological approach used to detect abnormal

performance is misspecified and or not powerful. The questions addressed in this study,

using a sample of unseasoned IPOs issued from 1985-2002, are:

1. Does the pairing of the matched-firm benchmark approach with the BHAR

methodology produce well-specified test statistics for identifying

abnormal performance in event studies? This inquiry was based upon the

research findings presented in Barber and Lyon (1997), Mitchell and

Stafford (2000), and Gutts, Mills, and Roberts (1997).

2. Are the returns generated shortly after the unseasoned IPO's issuance positive and significant, and is this abnormal performance constrained to the period prior to public trading? This inquiry was based upon the results and conclusions reached in Ritter and Welch (2002) and Cheng, Cheung, and Po (2004).

3. Is the relationship between long-term negative abnormal performance and

IPO issuance real? If it is, is this negative abnormal performance

significant? I used an approach similar to Ritter's (1991) methodology to analyze longer-term abnormal performance.



4. Is there significant positive abnormal performance at the conclusion of the

quiet period? I based my inquiries upon the findings and conclusions

reached in Bradley, Jordan, and Ritter (2003) and Bradley, Jordan, Ritter,

and Wolf (2004).

5. Does a negative abnormal performance at the conclusion of the lockup

period exist? If it does, what is the significance of it? I have based the

analysis of abnormal performance at the expiration of the lockup period

upon the findings and conclusions generated in Field and Hanka (2001)

and Garfinkle, Malkiel, and Bontas (2002).

After I identified the best-specified approach for identifying abnormal returns, I tested the hypotheses and evaluated the hypothesized relationships using a consistently applied method.

Purpose of the Study

The purpose of this study is to examine the price performance of IPOs using a

well-specified and powerful method of detecting abnormal performance: (a) at their public offering and (b) at subsequent time intervals in response to this offering. The goals of this research project were to investigate theories related to the performance of IPOs and to test

these theories using historical pricing data for IPOs that were issued from January 1985

to 2002.

Research Questions and Hypotheses

In the ensuing section, I have outlined the hypotheses of this research project.

There are four sections: (a) specification and power analysis, (b) short-term performance,

(c) longer-term performance, and (d) events related to IPO performance. The results of

the first section influenced the remainder of the analyses. Therefore, readers should

evaluate the hypotheses in the order that they are presented and then collectively.

Section 1: Specification and Power

In this section, I have outlined how I executed the specification and power

analyses in this research project. Two different techniques functioned as measures of

normal performance for the event firm; these approaches were (a) the matched-firm

approach and (b) the portfolio-matching approach. Using both techniques, I matched the

event firms based upon a combination of the three different firm-specific variables: (a)

market capitalization, (b) industry affiliation, and (c) market-to-book ratios.
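To make the matching criteria concrete, the following is a minimal Python sketch (not the study's actual code) of how a single matched firm might be selected under these criteria; the DataFrame of candidate firms and its column names are hypothetical.

```python
import pandas as pd

def select_matched_firm(event_firm: dict, candidates: pd.DataFrame) -> str:
    """Pick the non-event firm closest to the event firm on market capitalization
    and market-to-book ratio, restricted to the event firm's industry.

    `candidates` is assumed (hypothetically) to hold the columns:
    ticker, industry, market_cap, market_to_book.
    """
    same_industry = candidates[candidates["industry"] == event_firm["industry"]]
    # Relative distance on the two continuous matching variables.
    distance = (
        (same_industry["market_cap"] / event_firm["market_cap"] - 1).abs()
        + (same_industry["market_to_book"] / event_firm["market_to_book"] - 1).abs()
    )
    return same_industry.loc[distance.idxmin(), "ticker"]

# Hypothetical usage:
candidates = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC"],
    "industry": ["Tech", "Tech", "Retail"],
    "market_cap": [900.0, 400.0, 850.0],
    "market_to_book": [3.1, 2.0, 2.9],
})
event = {"industry": "Tech", "market_cap": 800.0, "market_to_book": 3.0}
print(select_matched_firm(event, candidates))  # -> "AAA"
```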

After the specification tests were completed, I chose the best-specified benchmark

of normal performance and used this method to conduct the remainder of the analyses.

For my first hypothesis test, I took data from 1985-2002 and drew 10 nonrepeating random samples of 50 companies per year, yielding 180 samples distributed evenly across the 18 years. For each benchmark and sample, I matched the benchmark return against the simulated event firm and calculated the BHAR.

Hypothesis 1:

H0: Mean BHAR is zero.


H1: Mean BHAR is not zero.

After running the preceding tests on the 180 samples of 50 companies, I

calculated the Empirical Size (ES) statistic. I then compared the ES statistic against the

constructed confidence interval and checked the specification of the method. Next, I

combined the results of this analysis with the ensuing power analysis to select a

benchmarking method.
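As an illustration only, the empirical size calculation described above could be sketched as follows; the t-test at the 5% level and the binomial confidence band are standard choices, but the data structure (a list of per-sample BHAR arrays) is hypothetical scaffolding rather than the study's code.

```python
import numpy as np
from scipy import stats

def empirical_size(sample_bhars, alpha=0.05):
    """Fraction of no-event random samples in which H0: mean BHAR = 0 is
    rejected; a well-specified method rejects roughly `alpha` of the time.
    `sample_bhars` is a list of 1-D arrays of BHARs, one array per sample."""
    rejections = 0
    for bhars in sample_bhars:
        _, p_value = stats.ttest_1samp(bhars, popmean=0.0)
        rejections += p_value < alpha
    return rejections / len(sample_bhars)

# 95% band for the rejection rate under correct specification, assuming
# 180 independent samples (normal approximation to the binomial).
n_samples = 180
half_width = 1.96 * np.sqrt(0.05 * 0.95 / n_samples)
print(f"acceptable empirical size: {0.05 - half_width:.3f} to {0.05 + half_width:.3f}")
```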

In the power analysis, I took the same samples that were used in the specification

analysis and simulated abnormal performances at different intervals to determine when

each of the six methods reacted to the simulated abnormal performances of +/- 1, 5, 10,

15, 30, 50, and 75%.

Hypothesis 2:

H0: Mean BHAR is zero


H1: Mean BHAR is not zero

I then combined the results of the individual tests, 180 samples of 50 companies, and

calculated the Empirical Power (EP)—I calculated this statistic by dividing the number of

times I rejected the null hypothesis by the number of tests conducted.
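A companion sketch for the empirical power calculation: the same samples are reused, a constant simulated abnormal return is added to every firm's BHAR, and EP is the resulting rejection rate. The induced-return levels mirror those listed above; everything else is illustrative scaffolding, not the study's code.

```python
import numpy as np
from scipy import stats

def empirical_power(sample_bhars, induced_abnormal, alpha=0.05):
    """Add a constant simulated abnormal return to each firm's BHAR and report
    how often H0: mean BHAR = 0 is rejected across the samples."""
    rejections = 0
    for bhars in sample_bhars:
        shifted = np.asarray(bhars) + induced_abnormal
        _, p_value = stats.ttest_1samp(shifted, popmean=0.0)
        rejections += p_value < alpha
    return rejections / len(sample_bhars)

# Simulated abnormal-performance levels of +/- 1, 5, 10, 15, 30, 50, and 75%.
levels = [sign * pct for pct in (0.01, 0.05, 0.10, 0.15, 0.30, 0.50, 0.75)
          for sign in (1, -1)]
```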

Section 2: Initial & Subsequent Short-Term Performance

In section 2, I have evaluated whether IPOs, upon their initial offering, generate

substantial abnormally positive performance results when compared against the

expected return on the IPO. There are dissenting academic opinions regarding when the

initial abnormal IPO performance occurs, whether it occurs, and which statistical

methods can identify these abnormal performances. I have contrasted Loughran and

Ritter (2004) against Cheng, Cheung, and Po (2000) to provide a basis to explore the first

debate, and Mitchell and Stafford (2000) compared with Ritter (1991) and Barber and

Lyon (1997) to provide detailed information pertaining to the second debate.

In reference to when abnormal IPO performance occurs, Loughran and Ritter

(2004) claim that the initial abnormal performance is generated in the first day of trading,

whereas Cheng, Cheung, and Po (2000) conclude that the abnormal trading profits are

only available prior to the opening of public trading. I have provided evidence to quell

this debate and provided significant results that should illustrate whether initial

abnormally positive performance is obtainable between the offer and the first day of

trade, during the first day of trade, or during both periods.

When researchers attempt to provide evidence as to whether the abnormal

performance actually occurs, a disagreement sometimes arises. This disagreement

generally stems from the researcher’s choice of using the BHAR or the CAR method to

calculate the aggregated abnormal performance. Based upon the formulaic properties

used to calculate the two methods of aggregation, which I have discussed in chapter 2, I

have chosen to use the BHAR method over the CAR. I made this decision because the

BHAR better represents the abnormal return a buy-and-hold investor would obtain over an identified period; researchers compound BHARs in their calculations, but they do not compound CARs.
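To illustrate the compounding distinction, here is a toy sketch with made-up return series (not data from the study): BHAR compounds each series before differencing, whereas CAR simply sums the period-by-period abnormal returns.

```python
import numpy as np

def bhar(event_returns, benchmark_returns):
    """Buy-and-hold abnormal return: compound each return series over the
    horizon, then take the difference of the compounded values."""
    return (np.prod(1 + np.asarray(event_returns))
            - np.prod(1 + np.asarray(benchmark_returns)))

def car(event_returns, benchmark_returns):
    """Cumulative abnormal return: sum the period-by-period abnormal returns
    without compounding."""
    return np.sum(np.asarray(event_returns) - np.asarray(benchmark_returns))

# Hypothetical monthly returns for an IPO and its matched firm.
ipo_returns = [0.10, -0.05, 0.08]
match_returns = [0.02, 0.01, 0.02]
print(bhar(ipo_returns, match_returns))  # ~0.078
print(car(ipo_returns, match_returns))   # 0.08
```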

Researchers have shown the BHAR method of aggregating returns produces

overstated test-statistics. This occurs when researchers use portfolios as proxies for

normal returns, according to Barber and Lyon (1997). However, when researchers use

the matched-firm technique their results do not suffer from the same statistical problems.

I have illustrated how these biases influence the results of the study in chapter 2 and

designed a method in chapter 3 to test the power and statistical significance of the different

methods.

In Hypothesis 3, I have determined whether IPOs produce abnormal performance

in the period prior to public trading using the following hypothesis test.

Hypothesis 3:

H0: $R_{\mathrm{IPO,avg}}(\text{Offering} \rightarrow \text{Initial Trade}) \leq 0$

H1: $R_{\mathrm{IPO,avg}}(\text{Offering} \rightarrow \text{Initial Trade}) > 0$

In the second test of this section, Hypothesis 4, I have compared the aggregate average

monthly performance obtained investing in IPOs prior to public trading against the

average performance of the best-performing standard market index over this period. I

relied on standard indices to estimate normal performance because the performances

occur prior to public trade; I cannot match the event firm to a portfolio or matched-firm

prior to public trade.

Hypothesis 4:

H0: $R_{\mathrm{IPO,monthly}}(\text{Offering} \rightarrow \text{Initial Trade}) \leq R_{\mathrm{DJIA,monthly}}$

H1: $R_{\mathrm{IPO,monthly}}(\text{Offering} \rightarrow \text{Initial Trade}) > R_{\mathrm{DJIA,monthly}}$

By testing Hypothesis 5, I have determined whether the abnormal performances

that occur prior to the issuance of unseasoned equity continue after companies issue their

shares. In this analysis, I have matched the returns of the IPO against the best-specified

and most powerful benchmark identified in Hypotheses 1 and 2.

Hypothesis 5:

H0: $R_{\mathrm{IPO}}(\text{Day 1}) \leq R_{\mathrm{IndustryMatchedFirm}}$

H1: $R_{\mathrm{IPO}}(\text{Day 1}) > R_{\mathrm{IndustryMatchedFirm}}$

After Hypotheses 2 through 5 were tested, I had determined whether initial abnormally

positive performance was generally constrained to the period preceding the issuance of

shares.
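A minimal sketch of the kind of one-sided test Hypothesis 3 implies, using hypothetical offering-to-first-trade returns; the study's actual data and test construction may differ.

```python
import numpy as np
from scipy import stats

def reject_h0_mean_not_positive(returns, alpha=0.05):
    """One-sided test of H0: mean return <= 0 against H1: mean return > 0,
    mirroring Hypothesis 3 (offering price to initial trade)."""
    t_stat, p_two_sided = stats.ttest_1samp(returns, popmean=0.0)
    p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
    return p_one_sided < alpha

# Hypothetical offering-to-first-trade returns for a handful of IPOs.
pre_trade_returns = np.array([0.15, 0.02, 0.31, -0.04, 0.22, 0.08])
print(reject_h0_mean_not_positive(pre_trade_returns))
```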

Section 3: Longer-Term Performance

If the hypothesized relationships in Hypotheses 3 and 4 are correct, then using

either the offering price or the initial trade as the starting price will affect the actual

estimate of long-term abnormalities in IPO performance. Therefore, I did not include the

performance obtained in these two periods in the analysis of longer-term IPO performance. The

general question I was interested in answering was whether IPOs significantly

underperform my benchmarks, and when the underperformance occurs.

Hypothesis 6:

H0: $\mathrm{BHAR}(\text{Trading Days 2 to 350}) = 0$

H1: $\mathrm{BHAR}(\text{Trading Days 2 to 350}) \neq 0$

I chose to focus on three years of data because the metrics tested in Hypotheses 1 and 2

deteriorate significantly, in terms of power, as I increased the event horizon beyond 3 years (see Appendixes E-H).

Section 4: The Quiet Period and Lockup Expiration

In this section of the analysis, I was interested in determining whether IPOs

exhibit significant abnormal performance in the 5-day period occurring at the end of the

quiet and lockup expiration. I have used the five trading-day period surrounding the

expiration of the quiet and lockup periods as the measurement period. First, at the conclusion of the quiet period, analysts can issue opinions pertaining to the newly issued shares' fundamentals; the analysts' opinions are overwhelmingly positive and induce a significantly positive performance at the conclusion of this period.

Therefore, I proposed the following hypothesis test:

Hypothesis 7:

H0: $R_{\mathrm{Quiet}} \leq R_{\mathrm{IndustryMatchedFirm}}$
H1: $R_{\mathrm{Quiet}} > R_{\mathrm{IndustryMatchedFirm}}$

As the shares of the newly issued IPO approach the expiration of the lockup period, the

officers of the company and other powerful shareholders can sell their shares on the open

market. Prior to this expiration, they are constrained. Because of the increase in trading volume, the increase in the number of shares available for public trade, and the likelihood that powerful shareholders will tend to diversify their holdings, it was my conjecture that the shares would generate a significantly negative performance event.

Hypothesis 8:

H0: $R_{\mathrm{Lockup}} \geq R_{\mathrm{IndustryMatchedFirm}}$

H1: $R_{\mathrm{Lockup}} < R_{\mathrm{IndustryMatchedFirm}}$

By forwarding these hypotheses, I was interested in addressing whether or not the

markets react to events related to IPO issuance. I will now discuss how this concept of

market efficiency evolved, and how researchers have applied event studies to detect

market inefficiencies.
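For concreteness, here is a sketch of how the five-trading-day event-window abnormal return around a quiet-period or lockup expiration might be computed against an industry-matched firm; the window length follows the description above, but the code itself is illustrative only and not the study's implementation.

```python
import numpy as np

def event_window_abnormal_return(ipo_returns, matched_returns,
                                 event_index, half_window=2):
    """Buy-and-hold abnormal return over the five trading days surrounding
    the event (event day +/- 2 days), IPO minus industry-matched firm."""
    lo, hi = event_index - half_window, event_index + half_window + 1
    ipo_window = np.prod(1 + np.asarray(ipo_returns[lo:hi])) - 1
    match_window = np.prod(1 + np.asarray(matched_returns[lo:hi])) - 1
    return ipo_window - match_window

# Hypothetical daily returns with the expiration on day 5 of the series.
ipo = [0.01, 0.00, -0.02, 0.03, 0.01, -0.04, 0.02, 0.00]
match = [0.00, 0.01, 0.00, 0.01, 0.00, 0.01, 0.00, 0.01]
print(event_window_abnormal_return(ipo, match, event_index=5))
```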

Theoretical Base

The theoretical basis of this evaluation begins with the conceptual market

framework suggested by Bachelier (1900/1964), Cowles and Jones (1937), and



Alexander (1961) and defined by Fama (1965), which resulted in the concept of market

efficiency. Bachelier (1900/1964) wrote, “At any given time, the market believes in

neither a rise nor a fall of true prices” (p. 26). Bachelier’s work went relatively

unrecognized until the 1960s, but other researchers independently discovered the

sentiments of this theorist’s conjectures. Thirty-seven years later, Cowles and Jones

(1937) found that the aggregate monthly stock price changes exhibited specific patterns

from 1835-1935; namely, if the market rose (fell) in the preceding month, it had a 65%

chance of rising (falling) in the current month (pp. 282-283). From this, after some

careful analysis, Cowles and Jones (1937) concluded that there are systematic performances

obtained in the market that contrast with the fair-game market concept. Alexander (1961)

took this conclusion further; he illustrated that investors could generate profitable trading

strategies using knowledge of these patterns.

Fama (1965) agreed with some of the conclusions reached by Alexander (1961), specifically that securities' performances exhibit certain dependencies. However, Fama

claimed that when investors consider the profit derived from trading strategies used to

exploit the aforementioned departures from normality, the benefits are minuscule.

Moreover, Fama stated that Alexander presupposes that the investor applying his trading

rules would buy (sell) the securities at their lowest (highest) values; this is a faulty

assumption because securities markets are difficult, if not impossible, to time. As

researchers have discovered anomalies that do not fit within the theoretical boundaries of market efficiency, they have continuously questioned the concept of market

efficiency. However, this concept of market efficiency provides the fundamental basis

from which researchers base their evaluation of securities’ price performance; it would

suffice to state that it is the bedrock of modern finance.

In the current research project, I define Fama’s efficient market hypothesis (EMH;

Fama, 1976) as follows. First, at any given time, securities’ prices reflect their

fundamental value. Second, if information is available, the market (the market in this case

would be all investors) readily consumes and interprets the information correctly. In the

words of Fama (1970), the implied relationship between market efficiency and expected

returns rests on:

The assumption that the conditions of market equilibrium can be stated in


terms of expected returns and that equilibrium expected returns are formed
on the basis of (and thus “fully reflect”) the information set Φt have a
major empirical implication--they rule out the possibility of trading
systems based only on information Φt that have expected profits or returns
in excess of equilibrium expected profits or returns. (p. 384)
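Stated compactly, the fair-game condition Fama describes can be written as follows (a standard restatement of Fama's notation, not a formula reproduced from this dissertation):

```latex
% Excess return relative to the equilibrium expectation conditioned on the
% information set \Phi_t is a fair game: its conditional expectation is zero.
\[
  z_{j,t+1} = r_{j,t+1} - E\!\left(\tilde{r}_{j,t+1} \mid \Phi_t\right),
  \qquad
  E\!\left(\tilde{z}_{j,t+1} \mid \Phi_t\right) = 0 .
\]
```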

This hypothesis is highly controversial because researchers have illustrated that there

exist potential flaws in the theory of market efficiency (e.g., IPO performance). Since

Fama (1976) articulated the EMH, analysts and academics alike have been carefully

analyzing market efficiency, some finding trading rules and evidence against the efficient

market paradigm and some reporting results in support of this hypothesis.

Two subsets of research projects analyzing the efficient market quandary have

received a permanent position in the literature focused on testing market efficiency: (a)

those attempting to build an appropriate methodological platform from which they may

test market efficiency, and (b) those interested in testing market efficiency. Ball and

Brown (1968) pioneered the first group of research and Fama, Fisher, Jensen, and Roll

(1969) designed the application of the event study methodology to test market efficiency.

This event study methodology provided the framework to test different models that

attempt to measure abnormal returns. The second group of researchers has tested

whether events generate abnormal performance (see Affleck-Graves, Hedge, & Miller,

1996; Ibbotson, 1975; Loughran & Ritter, 2004; Reily & Hatfield, 1969 for examples of

tests run on IPO-related events). Because any tests of market efficiency are joint tests of a

model of normal performance and of market efficiency itself, it is appropriate

to deal with each separately. Market efficiency and models of normal performance form

the foundation from which this analysis will progress. In chapter 2, I will thoroughly

analyze IPO performance, considering elements from both subsets through the

following questions: (a) how should empiricists measure normal and abnormal

performance and (b) how might IPOs generate abnormal performance?

Definition of Terms

Efficient market hypothesis: “A market in which prices always ‘fully reflect’

available information is called ‘efficient’” (Fama, 1970, p. 383). Fama proceeded to rank

the forms of market efficiency in terms of the information that concern analysts. In the

weak form of this hypothesis, the information component is historical stock prices. In the

semi-strong form, the information the analyst is concerned with is whether the market

adjusts to other information that is available to the public. Finally, in the strong form,

researchers testing market efficiency are concerned with certain investors who possess

monopolistic access to certain information (p. 383).



Initial public offering (IPO): The IPO is an event in the history of a company

where a private company first sells its shares to the public and thus changes its ownership

structure from a private enterprise to a publicly traded company.

Issuance (of the IPO): After the 20-day ‘waiting period’, which is given to

investors to analyze the registration statement (Simon, 1989, p. 297), the SEC allows

companies to issue their shares to the public and list on a designated exchange. The

issuance occurs the day of the IPO, or when shares begin to trade on their public

exchange.

Lockup period (of the IPO): An agreement between the underwriter and the issuer

that ensures that any shares held by insiders (management, directors, employees, and

affiliated parties) after the offering will not be sold until 180 days after the registration

statement becomes effective1.

Offering (of the IPO): The offering price of a security is the price that an initial

investor must pay to receive an allocation of shares in the offering2. This occurs after the

registration statement has been filed with the SEC and occurs on the day prior to or the

day of the public offering.

Quiet period (of the IPO): The quiet period is a period of time that extends from

the time a company files its registration statement with the Securities and Exchange

Commission (SEC) to the time the SEC declares the statement is effective. During this

period the SEC places restrictions on how the newly issued company's staff and its

1. see: http://www.sec.gov/investor/pubs/analysts.htm
2. see: http://moneycentral.hoovers.com/global/msn/index.xhtml?pageid=1954

representatives are to communicate with the public3; such restrictions allow the market and

investors enough time to determine the value of the security without influence from the

firm’s management or analysts (Bradley, Jordan, Ritter, & Wolf, 2004, p. 3).

Underwriter (underwriting syndicate): The underwriting syndicate potentially comprises a lead underwriter, managers or comanagers, and book-runners, although,

for the purpose of this study, unless noted, it is not important to distinguish who

underwrote the IPO, just that the company issued shares.

Assumptions

In this research project, I have attempted to refrain from imposing any undue

restrictions or assumptions on the data analyzed or tools used to analyze the data.

Although there are obvious underlying assumptions, I have addressed the majority of

these in order of importance. A standard assumption of the normative approach used to

analyze securities' price performance is that the relationship between a security's risk and return is linear. In addition, researchers

generally assume that a security’s returns are identically and independently distributed

and drawn from a normal distribution. Under these assumptions, investors trading on technical trading rules should expect the market to distribute their returns evenly through time.

The independence and normal distribution concepts are the basic assumptions

underlying the normative approach to securities’ analysis. These concepts also explain

the performance behavior an investor would likely expect if markets were efficient.

3. see: http://www.sec.gov/answers/quiet.htm

These assumptions allowed me to illustrate the differences between how the world actually works and how it should work. I have imposed mechanisms to model normal performance (i.e., preferring the matched-firm to the matched-portfolio approach to measuring abnormal returns) that attempt to correct for departures from this neat

interpretation of the world.

I have also generalized the results of sensitivity analysis run on a randomly

sampled population to a nonrandom sample. I have concluded that canvassing the

population of IPOs has provided a more appropriate measure of abnormal performance

than could have been obtained by conducting a random sample of this population. I decided to

canvass this population because the data is difficult to obtain, and one of the goals of this

research project was to increase the size of samples used to test for abnormal

performance. Therefore, I have assumed that if a testing procedure produces test statistics

that are well specified and powerful in random samples they will behave similarly in

nonrandom samples.

Limitations

I have restricted these analyses to a limited set of data, including only those

companies that issued unseasoned shares from 1985 to 2002 on the U.S. financial

exchanges. Because I have chosen to focus solely on analyzing unseasoned equity

offerings, I have not attempted to generalize the conclusions reached in this analysis to all

public offerings--specifically seasoned equity offerings (SEOs). Researchers have

provided evidence that SEOs underperform standard benchmarks as much as unseasoned

issuances (see Jegadeesh, 2000; Spiess & Afflect-Graves, 1995). I have decided not to

broaden the scope of this analysis to include SEOs. However, I believe by combining the

outcomes of studies attempting to identify abnormal performances generated by SEOs

and IPOs, researchers may find better mechanisms to analyze longer-term performance and find greater linkages between the anomalies than currently exist.

When analyzing the results and significance of conclusions reached in conducting

event studies, researchers should consider the sample size as either a limitation or

strength. It would be optimal if the researcher analyzed the performance of all

unseasoned IPOs listed on the U. S. financial markets historically. A study of this

magnitude would take an extraordinary amount of time to execute, but the rationale

behind not executing this strategy is access to data. Even access to bulk data on IPOs prior to 1995 is costly and difficult to obtain. Therefore, I have chosen to use the data that were readily attainable for this analysis.

Significance of the Study

This study is significant to the academic and professional communities because

the results will provide insight as to why researchers have labeled IPO performance as an

anomaly. Studies that have preceded the current research project used different samples,

event horizons, and methods to examine the abnormal performance of IPOs. This

fragmentation has generated considerable disagreement among the authors and theorists.

Studies that attempt to explain these anomalies need to integrate the tests of methods and

procedures used to identify abnormal performance with actual tests on IPO performance

to help interested parties assess the accuracy of the conclusions reached.



The intent of this study is to reintegrate tests of methods and tests of abnormal

performances related to unseasoned IPO issuance, while using a significant sample of

existing data (1985-2002). The concepts that will be tested are that (a) IPOs produce

abnormally positive performance prior to trading publicly, (b) in the longer-term they

produce abnormally negative performance, (c) IPOs produce abnormally positive

(negative) performance at the expiration of their quiet (lockup) periods and perform

abnormally over 1-, 2-, and 3-year time horizons.

Social Change

Abnormal IPO performance affects investors and management or firm affiliates in

different ways. In this paragraph, I will briefly outline the social problems abnormal IPO

performance may generate. The first problem is that the process of IPO issuance does not

allow investors equal opportunity to economic outcomes. Specifically, the underwriting

syndicate, whose purpose in the issuance process is to allocate shares of the offering,

selectively allocates shares of the issue to privileged clients, without allowing the market

to bid for the offering. Second, the actions taken by the underwriting syndicate stifle the

efficiency of the issuance process in the following ways: (a) selective allocation of

shares, (b) determining a price without reference to market forces, and (c) creating ensuing positive abnormal performances that I have compared to fixing a price ceiling on an

asset or commodity. I have developed these two social change concepts in chapter 5 and

they have serious implications for how investors think about market efficiency.

Revisions to the IPO premarket pricing and allocation method could eliminate the abnormal IPO performances enjoyed by a few investors and provide equal opportunities for

all investors to benefit from the IPO process. This could lead to significant positive social change in the distribution of wealth and income in society.

Summary

Based on the assumptions embedded in the normative financial theory, the

financial markets are efficient and, therefore, at any time a security’s price reflects its

fundamental or intrinsic value. To value a security, analysts use various sources of

information; this information set provides the basis for valuation. If information, based

upon historical analyses of securities’ price performances, provides evidence that the

market will react to a preplanned event in a specific manner, causing the price of a

security to trend in one direction or another, the market should integrate this information

into the price of a security. Thus, the security’s price should reflect this information when

the information becomes available. Therefore, if investors are aware that significant

performance movements will occur in the future, they should adjust the current price by

discounting the future value of the firm. In terms of IPO pricing and performance, it does

not seem as though this adjustment is being made by analysts or investors.

The focus of this study is to evaluate the legitimacy of anomalies uncovered by

researchers in regards to IPO performance. In chapter 2, I have reviewed literature related

to: (a) the actual process of issuing shares, (b) anomalous events related to IPO

performance, (c) securities pricing, and (d) the methodology used to detect abnormal

performance. In chapter 3, I have formulated the research methodology for testing the

hypotheses presented in the preceding chapter. In chapter 4, I have presented the results

of the study, and in chapter 5, I summarized and concluded the analysis.


CHAPTER 2:

LITERATURE REVIEW

In this chapter, I have examined the fundamental concepts embedded within a

model of abnormal performance and identified key events that occur because of the

unseasoned issuance of shares on the U.S. financial exchanges. I have accomplished this

in two stages: by examining the IPO process and events related to this process and by reviewing research addressing methodological difficulties that arise when researchers attempt to measure abnormal returns. In this section, I have initially examined events that researchers have historically suggested generate abnormal IPO performance and

described various models that have been used to measure abnormal performance.

Examinations of these two areas of research provide the theoretical basis from which I

have conducted the remainder of the analyses.

IPO Process

The process of issuing unseasoned securities is complex. First, there are conflicts

of interest between the organization’s managers, investors, and underwriters. Second,

there is a complex underlying structure (legal and informational), which needs to be

adhered to when the company issues its shares. The conflicts and interrelationships

between the aforementioned policies and parties and their influence on the price

formulation of IPOs are beyond the scope of this section of this paper.

The process of equity issuance begins with the construction of a letter of intent,

which is a document outlining the agreement pertaining to the process of equity issuance

between the issuer and the underwriting syndicate. This document outlines the

underwriters’ compensation schedule, provides a price range for the IPO, and

approximates the number of shares available for issue (see Dalton, Certo, & Daily, 2003).

The underwriters’ services are obtained for numerous reasons during the IPO process; the

initial reason is to draft a preliminary prospectus, which is also referred to as the Red

Herring or the S-1 Statement (per SEC forms; Dalton et al., 2003, p. 292). In addition, the

underwriters offer services related to share distribution, market stabilization, and market-

making activities for the securities offered.

Twenty days after the filing of the preliminary prospectus, this document becomes

the official prospectus; at this time, the SEC has another 20 days to review the official

prospectus (see Ellis, Michaely, & O’Hara, 1999). Once the prospectus is official, the

underwriting syndicate plays a more traditional role of the investment banker and

conducts a road show. In this road show, the underwriting syndicate attempts to solicit,

create, and evaluate the public’s interest in the shares of their offering; typically, this

process takes three to four weeks (see Ellis et al., 1999). Once the prospectus is effective

and approved, the SEC invokes another 20-day waiting period, which the SEC may waive at its discretion (see Ellis et al., 1999).

After the SEC has approved the waiver or once the issuer’s 20-day waiting period

has elapsed, the underwriter and the issuer can begin to place their offering. The issuing

company and the underwriters generally meet between the close of trade on the day prior to the intended date of issuance and the time the company issues its shares to decide on the exact date of issuance. At this time, the underwriter and the issuer reach a firm agreement about all of the details of the offering (e.g., the number of shares offered and the price of each share).

From the filing of the initial registration to the offering and issuance, through a

period ranging from 25 to 40 days after the offering occurs, the officers and analysts

affiliated with the company are not to make any forward-looking statements or issue

guidance other than the guidance contained in the official prospectus (see Ritter, 1998).

In addition, the underwriter and the issuer normally agree on a lockup period, which

typically, lasts for 180 days after the company issues the security (see Wiggenhorn &

Madura, 2005). Both of these periods allow market forces to determine an appropriate

market value for the security without interference from market professionals or corporate

insiders.

Anomalous IPO Events

Having provided a synopsis of the events IPOs face in their initial year of seasoning, in this section I have shifted the attention to anomalies found within this period and

thereafter. I have examined the following in this study: (a) the performance IPOs obtain

from offering to their initial trade, and from their initial trade to close of trade on the first

day of trading, (b) longer-term performance, and (c) performance occurring at the

expiration of the quiet and lockup periods. Now I will turn to examining the empirical

conjectures pertaining to the actual IPO events.

Initially Abnormally Positive Performance

The most visible abnormality that currently exists in studies of IPO performance

is that IPOs tend to produce extremely abnormally positive performance results a short duration after going public. This excess abnormal return occurs either in the preissuance period or in the 1-day performance of the post-offering period (see Krigman, Shaw, & Womack, 1999; Loughran & Ritter, 2004; McDonald & Fisher, 1972; Reilly & Hatfield, 1969). Miller and Reilly (1987) found that the extent of this underpricing was

approximately 9.87% (p. 34), and Ibbotson, Sindelar, and Ritter (1994) reiterated this

sentiment by concluding that “first–day returns average 10-15%” (p. 66).

Cheng, Cheung, and Po (2004) found, while studying IPO price performance on

the Hong Kong financial market, that no trading profits were obtainable once IPOs begin trading publicly (p. 853); this finding contrasts with those reached by Miller and Reilly (1987) in an analysis of IPOs listed in the U.S. markets. Historically, researchers seem to have assumed that IPOs obtained profits in the first trading day. Alternatively, perhaps, they have ignored the negative social and process implications attached to an empirical finding that the positive IPO performance is constrained to the pretrading period. If the abnormal performance is constrained between the offer and issuance, then the distribution of shares, and to whom the shares are distributed, becomes a more fundamental question regarding the fairness of the distribution of these shares. This question is relevant because the

underwriting syndicate holds an unfair informational advantage over the majority of the

investing public.

The pertinent questions are the following: does the underwriting syndicate exploit this informational advantage, and is the initial abnormally positive performance the result of an underpricing / rebating scheme (see Ritter & Welch, 2002)? Alternatively, the investing public may create the abnormal performance by acting irrationally when attempting to value these IPOs. This irrational analysis may occur because investors know about the historical pricing anomaly (short-term abnormally positive performance); in turn, demand for new issues is exacerbated and unwarranted optimism in the post-issuance performance capability of these securities increases, which pushes the share

price away from its fundamental value in the aftermarket (see Garfinkle, Malkiel, &

Bontas, 2002).

Case in point: Purnanandam and Swaminathan (2004) found, in a study of 2,000 IPOs that went public from 1980 to 1997, that the median IPO was actually overvalued when compared against firms matched by operating characteristics--they estimated the overvaluation at approximately 14% to 50% (p. 812). Since the investment bankers, who

are underwriting these offerings, are considered market experts, it seems that the market

would consider a systematic underpricing a disadvantageous finding for the EMH. If

investors are responsible for this abnormal performance then researchers should seriously

question market efficiency because the markets are irrationally pricing individual

securities in a reaction to systematic events related to the process of issuing unseasoned

equity shares.

Long-term Negative Performance

Researchers have also provided evidence in support of the theory that IPOs suffer

from long-term price underperformance when measured against standard benchmarks

(see Affleck-Graves, Hedge, & Miller, 1996; Ibbotson, 1975; Loughran & Ritter, 1995; Ritter, 1991). Ritter (1989) found that, in his sample of IPOs issued from 1975-84, IPOs' 3-year holding period returns (HPR) underperformed portfolios matched based upon market capitalization and industry characteristics by 27.39% (p. 4); Ibbotson, Sindelar, and Ritter (1994) found similar results analyzing IPO data from 1970-1990. Ritter (1989) and Ibbotson et al. (1994) suggested that on average IPOs underperform standard benchmarks

from the end of the initial trading day to at least the firm’s five-year publicly traded

anniversary.

Recently, a debate about how researchers apply the event study methodology to

measure long-term abnormal performance has instigated questions about whether the IPO

process actually generates abnormal performance. The fight for the concept of market

efficiency is alive and well. Researchers attempt to dismiss anomalies found that run

counter to the conceptual framework of the EMH by conducting significance and power

analyses on the methods previously used to test for abnormal performances--thus,

criticizing the applicability of tests applying the event study methodology to measure

longer-term performance. Explicitly, Schultz (2003) illustrated how the number of IPOs issued increases as the market peaks; he conjectured that the managers of private firms use pseudo-timing to time their issuances.

This pseudo-timing phenomenon illustrates how IPOs that are initially priced in a hot market can become overpriced in the aftermarket. For example, suppose the fair intrinsic value of the offering of ABC Corp. is $6 million, but the market has inflated this value to $6.5 million because supply is constrained and the company does not allow investors to bid for allocations of the company. Due to the limited supply, the excess demand for shares of ABC pushes the price past its fundamental value; as the market recognizes its error, there is an increased probability that it will correct this error. According to Schultz

(2003) the probability of a decrease in stock prices following the issuance of unseasoned

equity shares at a market peak is significantly greater than 50%, and the probability of an

increase in the new issues stock price is significantly less than 50%. Therefore, because

the market bids up the value of the share in the short-term, the probability that the shares

will decrease in value increases because the market has pushed the shares away from

their fundamental value due to excess demand. In a complementary study, Brav, Geczy,

and Gompers (2000) analyzed long-term IPO performance and hypothesized that the

underperformance found in the long-term performance of these issues is not an anomaly.

Brav et al. (2000) reasoned that IPO underperformance is not an IPO phenomenon, but

part of a broader systematic movement based upon a firm’s size and book-to-market ratio

(p. 246) or idiosyncratic risks. These two theories are representative of research projects that have focused on attempting to reconcile the potential anomalous behavior of longer-

term IPO performance with the theory of market efficiency.

The Quiet Period

The quiet period is the market's terminology for SEC regulation #5180, enacted in 1971, which states that companies are not to issue forecasts or predictions related to revenues, income, and earnings per share, or publish "opinions concerning values" (see

Bradley, Jordan, & Ritter, 2003, p. 5). The rationale behind the enactment of the quiet

period is that, according to Bradley, Jordan, Ritter, and Wolf (2004) it provides investors

with the necessary time to value the company without insider interference or influence.

The quiet period facilitates the investor’s search for a fair value of the underlying assets

owned by the company.



At the conclusion of the quiet period, the SEC allows investment firms to initiate

coverage of a security. The reason why this period is so interesting is that Bradley,

Jordan, and Ritter (2003) have found that from 1996-2000, for all IPOs issued, analysts

initiated coverage on 76% of firms, and of these 76%, analysts initiated coverage on 96%

of these issues as a strong buy or a buy (p. 33). This is not what I would expect;

structurally, I would prefer to see a distribution in which, from a probabilistic standpoint, rated firms would be just as likely to receive a negative rating as a positive rating. According to

Bradley et al. (2003) when analysts initiate coverage, immediately after the quiet period,

the IPOs affected by this event experienced a significantly positive abnormal return of

4.1% in a five-day window surrounding the event (p. 33). If analysts left the newly issued

IPOs uncovered at the conclusion of their quiet period, firms experienced an insignificant

abnormal return of 0.1% (see Bradley et al., 2003, p. 33). Bradley, Jordan, Ritter, and Wolf (2004) attempted to expand this study to include IPOs that went public

from January 2001 through July 2002; the impact of the expiration of the quiet period

during this time horizon was insignificant (p. 11). In this dissertation, I endeavored to answer why the results of the two research projects differ and to analyze whether abnormal performance is significant during the expiration of the quiet period.

Lockup Expiration

Researchers, in the past, have not built a solid case to declare that abnormal

performance occurs as the lockup period expires. However, Field and Hanka (2001) found that from 1988 to 1997, during the expiration period, investors experienced a three-day abnormally negative performance of 1.5% (p. 471). The results from Garfinkle, Malkiel, and Bontas (2002) were in agreement with Field and Hanka (2001), although Garfinkle et al. (2002) found that the negative performance experienced during the expiration of the lockup period was 4.47%. The two different percentages vary remarkably and

the methods that the researchers used to calculate abnormal returns are questionable. It is

my goal to add clarity and specificity to this potential anomaly.

Securities’ Pricing, Estimating Returns, and Efficient Markets

In contemporary finance literature, analysts estimate the value of securities based

upon two sets of information: (a) the price of the security at a historical reference point

and (b) information that may affect a firm’s valuation from that reference point until the

time that the investor is interested in valuing the firm. This information set contains a

combination of events or states of the world; events included in this information set could

be public, economic, firm specific, industry affiliation, and international variables. Using algebraic notation, and the most basic assumptions, the market arrives at prices as follows; according to Fama (1975), the "joint distribution for security prices at time t" (p. 167) is:

f_m(p_{1,t+r}, ..., p_{n,t+r} | Φ_{t-1}^m)                (1)

Φ_{t-1}^m : the information that the market uses to determine a security's price at time t.
p_{1,t+r}, ..., p_{n,t+r} : the different possible prices for a given security at time period t+r, where r > 0. (Fama, 1976, p. 134)

This formula illustrates that the market evaluates information received prior to attempting

to value a security; once investors analyze the pertinent information contained in the information set, they then determine the appropriate price using probabilistic assessments.

Because I am interested in the performance of securities, I need to change emphasis from the pricing of securities to the returns obtained by investing in securities. Since I have focused on estimating the ex post expected market return on IPOs, it is appropriate to comment on the general model of estimation before discussing error terms and the actual calculation of abnormal returns in regards to IPO performance. The

following is the general market model used to estimate expected return on a security (see

Campbell, Lo, & MacKinlay, 1997):

R_{it} = α_i + β_i R_{mt} + ε_{it},   with E[ε_{it}] = 0                (2)

α_i : the intercept
β_i = cov(R_{it}, R_{mt}) / σ²(R_{mt})   (Fama, 1976, p. 67)
R_{mt} : the return on the market proxy
R_{it} : the return on security i in time period t
ε_{it} : the estimated error of the forecast

By adjusting this equation, analysts can construct a model to evaluate abnormal returns

against normal returns. I define this relationship according to Fama (1976):

ε_{it} = R_{it} − (α_i + β_i R_{mt})                (3)

Under normal conditions, analysts can use this model or a more complicated factor model

to estimate the normal expected return on a security. However, without the historical data required to use these general formulas, I used a more complicated model to estimate these

parameters—this is where the matched-firm and portfolio-matching strategies come into

consideration.
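To illustrate the mechanics behind Equations 2 and 3, the following Python sketch estimates α and β from a series of historical returns and then computes an abnormal return for a new period. The function name and the return series are hypothetical, and the sketch is offered only as an illustration of the general market model, since IPOs lack the historical data this estimation requires.

    import numpy as np

    def market_model_abnormal_return(r_security, r_market, r_security_new, r_market_new):
        """Estimate alpha and beta from historical returns (Equation 2) and
        return the abnormal return for a new observation (Equation 3)."""
        r_security = np.asarray(r_security, dtype=float)
        r_market = np.asarray(r_market, dtype=float)
        beta = np.cov(r_security, r_market, ddof=1)[0, 1] / np.var(r_market, ddof=1)
        alpha = r_security.mean() - beta * r_market.mean()
        expected = alpha + beta * r_market_new          # E(R_it)
        return r_security_new - expected                # epsilon_it, the abnormal return

    # Hypothetical daily returns used only for illustration.
    hist_security = [0.012, -0.004, 0.007, 0.001, -0.009]
    hist_market = [0.010, -0.002, 0.005, 0.002, -0.006]
    print(market_model_abnormal_return(hist_security, hist_market, 0.015, 0.004))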

Methodological Debate

Event studies were made popular by Ball and Brown (1968) and Fama, Fisher,

Jensen, and Roll (1969); the intended purpose of these studies was to test market efficiency. The event study methodology is the historically accepted method used when attempting to analyze an IPO's performance over both short- and long-term event windows (see Bradley, Jordan & Ritter, 2003; Ibbotson, 1975; Ritter, 1991). In the present research project, I define an event study as an analysis of the reaction of a security's price or

performance to an event or information (e.g. dividend changes, IPOs, SEOs, M&A, etc.).

Researchers that have chosen to focus on analyzing the accuracy of methodological

processes involved in conducting event studies have been remarkably active within the

last decade. The reasons for the increasing skepticism in the results of studies that attempt

to measure abnormal returns are problems with the following: (a) the use of biased

benchmarks and (b) the aggregation of IPO performance across time.

Measuring Normal Performance

Because IPOs do not possess historical returns from which a firm’s idiosyncratic

risk may be measured, researchers need to create proxies to estimate their expected

performance. Campbell, Lo, and MacKinlay (1997) provided a possible solution: to use a market-adjusted-return model, which takes the formula in Equation #2 and sets the intercept, α, to 0 and the beta coefficient, β, to 1. However, by setting β = 1, researchers would be expecting securities to produce a return equivalent to the market's return, and setting α = 0 implies that the risk-free rate of return is 0%. These two assumptions will cause the error term of this equation to capture firm-specific risk, and the percentage rate of return that is obtainable by investing in a risk-free security, in addition to abnormal

performance. Researchers choose to avoid this method of measuring abnormal

performance because it is ineffective and inefficient. In the remainder of this section, I

have provided an evaluation of methods and biases generated by researchers using

different approaches to model normal performance.

Benchmark Construction

Researchers have shown that models used to measure the long-term performance

of stock returns are prone to misspecification (Lyon, Barber, & Tsai, 1999, p. 165). The

cause of misspecification differs depending on the methods researchers use to model

normal performance. Researchers can approximate how poor a model of normal

performance is by taking samples of historic performance data and conducting

specification and power analyses on the model to test whether the model accurately

identifies abnormal performance in simulations. Studies related to methodological tests of this sort have made remarkable strides lately. Initially, researchers compared IPO returns

to standard benchmarks (e.g. Russell 3000 Index, or S&P 500), but this was ineffective

when they attempted to analyze IPO performance because IPOs lack the historical

performance data that would be used to measure the strength of the relationship between

a firm and the matching index. To create an accurate benchmark, without using historical

data, researchers have constructed portfolios or matched the event firm to a non-event

firm. The portfolio- and firm-matching approaches are more accurate than simply

matching the event firm to a standard market index, and reducing the beta coefficient to

one and the intercept to zero.




There are many different benchmarks researchers use to attempt to simulate

normal performance. The basic elements that differentiate these benchmarks are: (a)

whether researchers construct the benchmark using a portfolio or a matched-firm

approach, (b) what factors researchers use to model normal performance, and (c) how

researchers weight securities in portfolios (market- or value-weight). The popular

benchmarks used in recent research have been: (a) traditional indices, (b) the Fama and

French Three-Factor Model (this method is used when researchers choose to use the CAR

method instead of the BHAR method to detect abnormal performance), (c) reference

portfolios, and (d) matched-firm approaches. In this research project it was unnecessary to make complex adjustments to portfolio-matching techniques because the firm-match

approach was well specified and powerful. Therefore, I will only explain the portfolio-

and firm-matching strategies used in this research project and the biases that affect the

accuracy of these benchmarking techniques.

Skewness bias. The skewness bias is prevalent in each of these matching

techniques; however, this bias has a more pronounced influence on the specification and

power analyses conducted on the matched-portfolio approach to benchmarking.

According to Barber and Lyon (1997) when reviewing a population of longer-term

securities’ returns researchers will find more returns above 100% than they would find in

excess of -100%. This occurs because, when an investor holds a long position in a

security, it is impossible to lose more than 100% of the investment. When using standard

tests researchers expect as many observations to fall beneath the -100% threshold, as they

would expect to observe over the 100% threshold. The existence of extreme positive

observations, in excess of 100%, will increase a sample’s standard deviation; these

extreme events are then overrepresented if we assume the underlying distribution is

normal (see Barber et al., 1997, p. 347). Barber et al. (1997) determined that in

simulations with observations drawn from a distribution with a known distributional

mean, in 6.6% of samples the researcher would conclude that the population mean is

significantly less than its known mean because of this skewness bias (p. 348).
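A small simulation can make the skewness bias concrete. The Python sketch below is only an illustration under assumed parameters: it uses a lognormal-style distribution as a stand-in for positively skewed long-horizon returns, repeatedly applies a conventional one-sample t-test against the known population mean, and tabulates rejections in each tail. With positively skewed returns, the lower-tail rejection rate tends to exceed the nominal level, which is the pattern Barber and Lyon (1997) described.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical long-horizon returns: a lognormal draw minus 1 gives a positively
    # skewed distribution bounded below by -100%, as returns on long positions are.
    mu, sigma, n, trials = 0.0, 0.6, 50, 2000
    true_mean = np.exp(mu + sigma**2 / 2) - 1

    lower, upper = 0, 0
    for _ in range(trials):
        sample = rng.lognormal(mu, sigma, n) - 1
        t, p = stats.ttest_1samp(sample, true_mean)
        if p < 0.05:
            if t < 0:
                lower += 1
            else:
                upper += 1

    print("lower-tail rejections:", lower / trials)   # typically above the nominal 2.5%
    print("upper-tail rejections:", upper / trials)   # typically below the nominal 2.5%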

Matched-portfolios. The matched-portfolio approach used to act as a benchmark

for normal performance in event studies is the most popular. It is popular because, once the researcher determines how the portfolios should be matched, there are, typically,

few portfolios to construct. In contrast, when using the matching firm technique

researchers have to match each event firm to a matched firm; for portfolios, researchers

would have an index based upon deciles of market capitalization or industry affiliation.

The matching portfolio approaches to benchmarking are traditionally prone to

misspecification due to the following benchmark biases: (a) new-listing, (b) rebalancing,

(c) survivorship and (d) skewness biases. The new-listing bias occurs when recently

issued securities are included in the portfolios used to gauge normal performance.

Empirical studies have shown that new issues produce positive (negative) abnormal

performances at different times during their maturation process, which leads researchers

to either underestimate (overestimate) abnormal returns experienced by the event firm

(see Lyon, Barber, & Tsai, 1999, p. 169). The rebalancing bias inflates the returns of the

matching portfolio, therefore, creating a negative impact on the event firm’s abnormal

performance (see Lyon et al., 1999). The survivorship bias may cause researchers to

underestimate (overestimate) abnormal performance experienced by an event firm if the

researcher determines that the event firm outperforms (underperforms) the matching

portfolio.

Matched-firm. Barber and Lyon (1997), Lyon, Barber, and Tsai (1999), and Ang

and Zhang (2004), examined whether the 3-Factor, 4-Factor, carefully constructed

reference portfolios, and matched-firm approach are well specified and powerful methods

used to identify abnormal performance. The results of the aforementioned researchers’

statistical tests indicate that the matched-firm approach is better specified and more

powerful than the competing methods.

Currently, researchers are using some combination of a firm’s market

capitalization, book-to-market ratio, industry affiliation, or correlation to detect abnormal

performance using the matched firm approach (Ang & Zhang, 2004; Barber & Lyon,

1997; Lyon, Barber, & Tsai, 1999). To implement the matched-firm approach, I needed to

consider how to identify the most applicable benchmark. When researchers use different

methods to identify their benchmarks, the results of their studies may differ vastly. For

example, if market capitalization and industry affiliation are the variables used to create

the benchmark, then researchers need to identify which filter (size or industry) to run first

and the boundaries placed around the filter. The results of analyses will vary depending

on which firm attribute researchers used first to sort the matched firms—market

capitalization or industry affiliation.



Historically researchers have successfully employed filtering techniques to the

preceding variables to measure the abnormal performance of a security with one

exception--matching by industry. Matching by industry is problematic because

researchers typically accompany this method by initially sorting the event-firms based on

other characteristics--namely, market capitalization. Ritter (1991) found that, for IPOs listed from 1975-1984, only 36% of IPOs could be matched based on their three-digit

Standard Industrial Classification Code (SIC), 21% were matched based upon their two-

digit SIC, and the remainder of the sample had to be matched to complementary or like

industries at Ritter’s discretion.

Loughran and Ritter (1995) explained why researchers would choose not to match

firms based upon the industry criteria. Loughran et al. (1995) reasoned that if firms go

public to take advantage of an industry-wide misvaluation, then matching firms based on

industry affiliation hinders the metric’s ability to uncover abnormal performance (p. 27).

Researchers have shown that this objection is generally faulty, and that the waves of

public offerings are not industry-specific. Normally, there exists a broader underlying

market trend. In addition, Loughran et al. (1995) felt that there are too few publicly

traded firms occupying the same industry to incorporate further filtering mechanisms. For

example, researchers may have problems attempting to filter first based upon market

capitalization and then industry affiliation, which is a reasonable pairing of variables.

Researchers need to conduct further research on how to better account for the

industry effect. Spiess and Affleck-Graves (1995) found that the industry-effect accounts

for one-third of long-term underperformance generated by SEOs. Since SEOs generate



abnormal performances comparable to IPOs I will use matching firm and portfolio

techniques based upon industry affiliation in the ensuing specification and power

analyses. Researchers that attempt to identify longer-term abnormal performance have to

overcome many obstacles when using the matched-firm approach sorted by industry

affiliation, but as Spiess et al. (1995) concluded, the trouble involved in overcoming this

obstacle may be worth the effort.

It is important to note that the majority of specification tests using the control firm

matching technique have focused on using the market capitalization and book-to-market

ratio firm attributes to estimate normal performances by sorting. Barber and Lyon (1997),

Lyon, Barber, and Tsai (1999), and Ang and Zhang (2004) all used market capitalization

and market-to-book ratios to detect abnormal performance, while Ang et al. (2004)

additionally examined using an event firm’s correlation with a non-event firm. Again,

researchers need to conduct further specification and power tests on different matching

techniques to evaluate whether additional factors lead to statistically better specified and

higher power tests of abnormal performance in random and non-random testing

situations. These additional factors include, according to Lyon, Barber, and Tsai (1999)

return performance, earnings surprises, and a company’s price-to-earnings ratio.

One criticism of the matched-firm approach was raised by Brav, Geczy, and

Gompers (2000); they reasoned that, if researchers match firms that have not had an

unseasoned issuance (see Loughran & Ritter, 1995) within the five-year period preceding

the event, studies using data pre-dating 1978 may include long-term loser stocks in their

matched-firms. I define this long-term loser hypothesis as follows: long-term winners


will underperform long-term losers by 25% in the 3-year period following the date of calculation, using securities' performance data from 1926 to 1982 (De Bondt & Thaler, 1985, p.

804). The criticism has many assumptions supporting it; two important ones that make

this criticism less relevant to the current study are: (a) the matched-firms have to be long-

term losers and (b) I am not analyzing securities’ performances that pre-date 1980.

To summarize, Barber and Lyon (1997) concluded that researchers minimize the

new listing, rebalancing, and skewness biases by using the BHAR technique and the

matched-firm approach to benchmarking. The matched-firm approach corrects for these

biases in the following ways: (a) researchers can eliminate new issues from the control

group, (b) there is no rebalancing, and (c) since the researcher uses a single-firm

benchmark, it is as likely to have a skewed distribution as the event firm is (Barber et al.,

1997, p. 354). Simulation techniques used to measure the independent variables' ability to

identify abnormal performance have shown that this seemingly simple method is,

statistically, the best-specified and most powerful method used to measure abnormal

performance. Interestingly enough, researchers have applied the event study methodology to measure abnormal performance actively since approximately 1969, and yet the use of the matched-firm approach seems to be in its infancy. The essential question left for me to answer is which variables best estimate normal performance and are most accurate.

Measuring Abnormal Performance

After deciding on a benchmark to simulate normal performance, researchers have

to consider how to aggregate returns to determine whether an event firm has experienced an abnormal event. Currently, there exists substantial debate on how

researchers should approach measuring this abnormal performance. Is the CAR or the

BHAR method the best-suited model to use to identify abnormal performance? In this

section, I have explained the differences between the BHAR and CAR methods of

aggregation and why I chose to use the BHAR method in my analyses of IPO

performance.

BHAR vs. CAR

When researchers attempt to aggregate abnormal returns they generally use one of

two popular methods, the BHAR or CAR technique. Both of these methods have their

advantages and disadvantages. I can illustrate the differences between these two methods

effectively by citing specification analyses. In one specification analysis, Barber and

Lyon (1997) found that when comparing the results of the CARs and BHARs researchers

arrive at conflicting conclusions in 3.7% of their analyses, and if samples consisted of

primarily small firms this rate increases to 4.7% (p. 368).

To aggregate and analyze abnormal returns using the CAR method, according to

Barber and Lyon (1997), researchers use the following calculation:

AR_{it} = R_{it} − E(R_{it}), where                (4)

CAR_{iτ} = Σ_{t=1}^{τ} AR_{it}, and                (5)

R_{it} : the return on asset i in period t
E(R_{it}) : the estimated return on asset i in period t
AR_{it} : the abnormal return produced by security i in period t

To understand the potential differences of the two aggregation techniques I will now

introduce the BHAR calculation (Barber et al., 1997, p. 344):

BHAR_{iτ} = Π_{t=1}^{τ} [1 + R_{it}] − Π_{t=1}^{τ} [1 + E(R_{it})]                (6)

According to Sharifzadeh (personal communication; December 30, 2006), the calculation

of the BHAR can be thought of as the return obtained by a day trader that moves in and

out of a security at the opening of and close of the trading day every day. Analysts use

the geometric method of aggregation to calculate the BHAR, whereas researchers use the arithmetic method to calculate the CAR. The difference is that the BHAR calculation is a more precise measure of a security's returns.
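The arithmetic difference between the two aggregation techniques can be illustrated with a short Python sketch that computes the CAR (Equation 5) and the BHAR (Equation 6) for a hypothetical series of event-firm and benchmark returns; the numbers are invented solely for illustration.

    import numpy as np

    def car(event_returns, benchmark_returns):
        """Cumulative abnormal return: arithmetic sum of period-by-period abnormal returns."""
        ar = np.asarray(event_returns) - np.asarray(benchmark_returns)
        return ar.sum()

    def bhar(event_returns, benchmark_returns):
        """Buy-and-hold abnormal return: difference of compounded holding-period returns."""
        return (np.prod(1 + np.asarray(event_returns))
                - np.prod(1 + np.asarray(benchmark_returns)))

    # Hypothetical monthly returns for an event firm and its matched benchmark.
    event = [-0.50, 1.00, 0.02]
    bench = [0.00, 0.00, 0.02]
    print("CAR: ", round(car(event, bench), 4))   # arithmetic aggregation
    print("BHAR:", round(bhar(event, bench), 4))  # geometric aggregation

In this invented example the event firm falls 50% and then doubles, so a buy-and-hold investor ends exactly where the benchmark does (a BHAR of zero), while the arithmetic aggregation reports a large positive CAR.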

Sharifzadeh (personal communication; December 30, 2006), also stated that for

any holding period the average periodic return calculated using the geometric method is

smaller than the average periodic return calculated using the arithmetic method. For

example, Schaeffer (2005) claimed “assuming market returns are flat, a 50% loss in one

month followed by a 100% return the following month results in a CAR of 25%, despite

the fact that the stock is now trading at its initial price” (p. 5).

As Barber and Lyon (1997) illustrate, the BHAR more accurately reflects an

investor’s investment experience, because the returns are compounded. Mitchell and

Stafford (2000) contend that this procedure is faulty, because not all investors are

interested in comparing their returns against investors who base their strategy for investing on a buy-and-hold investment strategy (p. 296). This argument is



understandable, although in this research project I am only concerned with the

performance of IPOs measured against the buy and hold investor’s return experience.

Therefore, henceforth I will assume that researchers can obtain the best measure of an investor's experience of investment performance using the BHAR methodology.

Barber and Lyon (1997) mentioned that CARs are just biased predictors of

BHARs. Barber et al. (1997) illustrate this finding using a 12-month CAR and running a

regression against the 12-month BHAR calculation for a random sample of 200,000

observations. Barber et al. (1997) provided regression results that produce an intercept

and slope coefficients of -.013 and 1.041, with an adjusted R-Squared value of 77.6% (p.

346). According to Barber et al. (1997) if the CAR methodology were unbiased the slope

and intercept coefficients would not be significantly different from 0 and 1, respectively.

Additionally, according to Sharifzadeh (personal communication, December 30, 2006), the Institute of Chartered Financial Analysts (CFA Institute) recommends that investment professionals use the geometric (BHAR) methodology to calculate returns; in accordance with this recommendation, the SEC has issued guidance to mutual funds and

portfolio managers advocating the use of the geometric approach to calculate returns

when reporting their performance.

Summary

In this section, I have examined the issuance processes of IPOs, defined event horizons for events that may produce abnormal performance, provided a structure for analyzing IPO performance, and reviewed the methodologies used to measure abnormal performance. Possibly the most important decision I have reached in this section is to use the BHAR method over the CAR method. I based my decision primarily on the theory that CARs are biased

predictors of BHARs. I supported this decision by reasoning that BHARs measure

investor returns, which more accurately reflects an investor’s buy and hold performance.
CHAPTER 3:

RESEARCH METHOD

Introduction

In this section, I have focused upon elements of the research methodology that:

(a) are used to conduct event studies, (b) test the hypotheses contained in the IPO

literature, and (c) present a methodological approach for evaluating abnormal

performances related to IPO issuance. In the beginning of this chapter, I will explain how

I will evaluate the hypotheses tested in my dissertation using the event study design.

Next, I will describe the sample size and frame used in this study. In addition, I

developed the rationale for canvassing a broad population. Then, I will introduce the

reader to the database from which I obtained the data needed for testing of the majority of

the hypotheses. Finally, I fully developed all of the hypotheses tested in this dissertation.

At the conclusion of this chapter, I will have provided the reader with a comprehensive

review of the elements of the research method and defended the use of the selected

research method.

Research Design and Methodology

Justification for Using the Event Study Methodology

I have chosen to use the event study methodology in the current research project. I

decided on this method because in this dissertation I have analyzed the change in the

performance of securities' price formation as systematic events occur. I described

the event study design as a subset of the existing data research methodologies.

Historically, researchers have applied the event study methodology to measure



performance reactions to a significant number of corporate events (e.g. stock splits,

earnings releases, mergers and acquisitions, dividend changes, etc). Campbell, Lo, and

MacKinlay (1997) stated that Dolley presented the first published event study in financial

literature in 1933. Dolley studied the effects of stock splits on an organization’s common

stock. Two papers, in particular, have provided a more recent rationale for the wide

acceptance of this method. Ball and Brown (1968) used this method to analyze how a

change in income affects the performance of a security, and how quickly the market

adjusts to this change. Similarly, Fama, Fisher, Jensen, and Roll (1969) used this method

to document how the securities' markets adjust to stock splits and a security's dividend history. The preceding pioneers of this method created the event

study methodology to measure the impact of information on the performance of a

security.

The reasons for choosing the event study approach to examine abnormal

performance changes in response to IPO issuance are as follows. First, I chose the event

study methodology to examine some specific events that had already occurred, that is,

changes in the performance of a security presumably resulting from information

pertaining to a specific event or occurrence. Second, judging by the number of research

projects that have applied the methodology, the event study methodology is the preferred

method employed by researchers attempting to examine questions related to market

efficiency. Moreover, because I am questioning whether the markets efficiently price and

adjust for information embedded in the issuance process, it is appropriate to use the event

study methodology to conduct my analysis of market efficiency with regard to IPO



performance. Third, the event study methodology has been around for more than 75

years, which would suggest that the method is not in its infancy and is a reasonable tool

of measurement. Finally, as I have argued in the next three paragraphs, the event study is

the most suitable methodology for my research as compared to survey methodologies

(longitudinal, cross-sectional, and time series), correlational or causal comparative methodology, case study approach, and experimental design methodology.

Survey methods seemed to be a reasonable approach when I began thinking about

how to analyze potential abnormal performance occurring throughout the IPO maturation

process. However, when I drilled down into the details of how researchers should apply

each method it became apparent that the event study framework would be better suited to

examine abnormal performance occurring throughout the IPO process. For cross-

sectional approaches, researchers look at occurrences at one point in time; in this research project, I used historical data—I took observations over a substantial time horizon. If I chose to use the longitudinal approach, I would have to use the same sample throughout

the entire study. Because my sample spanned a 22-year period, during which new

companies went public every year, and given that IPO performance has the potential to

change over time—the longitudinal approach would not be suitable for this type of

analysis.

The event study approach is also more suitable than the causal comparative

methodology for this research. According to Rumrill et al. (2004), when researchers use

the causal comparative research method, they are interested in comparing, “differences

between derived or intact groups on theory-driven dependent measures” (p. 258).



Rumrill et al. further argue that the causal comparative approach is suitable for the study of

naturally occurring groupings, such as, race, gender, and socioeconomic strata (p. 258).

However, my research is about measuring the abnormal performances of IPOs resulting

from information transmitted into the market and as such, the event study approach is a

more suitable approach than the causal comparative approach.

I also considered other research methodologies before deciding on the event study

methodology, they were: (a) case study, (b) correlational, and (c) experimental. After

reviewing the description of the case study approach found in Leedy and Ormrod (2005),

I felt that one of the overarching focuses of the case study is to pay particular attention to the contextual clues that influence an event's occurrence. In my opinion, by using a case study, I would be focusing more on describing why IPOs are performing abnormally and placing less emphasis on the question of whether IPOs produce abnormal performance, and it

is the latter question I was interested in answering. In correlational studies, researchers

are interested in describing a relationship between two or more variables, which is the

focus of this research project; however, I am interested in describing how different the

performances are or how much the returns on IPOs differ from expectations. The identification of a correlation between two of my variables would not enable me to state

that there is a potential anomaly, only that I have found a correlation between these

issues. Therefore, the event study methodology is preferred to correlational methodology.

Finally, when researchers conduct experiments, they need to randomly assign participants

into the control and experiment groups and I would need to manipulate some factors in

one group and not in the other. Because I am using historical data and cannot manipulate

events that have already occurred, experimental design methodology is not suitable for

this study.

Approach to Implementing the Event Study Methodology

When conducting event studies researchers have to focus on numerous elements

of the methodology. According to Campbell, Lo, and MacKinlay (1997), the outline of an

event study is as follows: (a) event definition, (b) selection criteria, (c) identifying normal

and abnormal performance, (d) estimation procedures, (e) testing procedures, (f)

empirical results, and (g) interpretations and conclusions (pp. 151-2). In the hypothesis

subsection of the data analysis section of this chapter, I have addressed steps a-e; in addition, I have evaluated steps f and g in subsequent chapters. Because IPOs have no

pre-event returns, I am unable to estimate the IPO’s correlation with the benchmark.

Therefore, I replaced my estimation period with a simulation and sensitivity analysis (see

Hypothesis 1).

For the testing procedure, I used the BHAR in favor of the CAR method to

identify abnormal performance. To estimate the return on the benchmark, I have,

executed a specification and power analysis and decided what combination of factors

produce the best-specified estimates of abnormal performance. With the proceeding

elements of my design of this event study specified, I will turn to a discussion of the

setting and sample from which I will make inferences about abnormal performance.

Setting and Sample

I could not obtain data throughout the entire sample to analyze IPO performance

occurring prior to public trade and on the first day of public trading. My initial sample

was comprised of all securities traded on major U.S. exchanges that went public from

1985 to 2002. I used this sample for the analyses conducted on: (a) the specification and

power analyses, (b) the longer-term performance of IPOs, and (c) the analyses conducted

at the conclusions of their quiet and lockup periods. However, data from the CRSP

database, used for the aforementioned analyses, are not readily available for the two short-

term analyses. The sample shrank from approximately 5583 observations for longer-term

analyses, to 5529 observations used to evaluate performance around the expiration of the

quiet and lockup periods, to 2143 observations for IPOs first day of trade performance,

and to 1876 observations for the analysis conducted on pre-trade performances. In the

analyses associated with the long-term, quiet, and lockup analyses I used all available

data on IPOs that went public from 1985-2002. This time horizon shrank considerably to

January 1, 1997 to December 22, 2005 for tests conducted on the initial day of public

trading and to April 12, 1996 to January 28, 2008 for tests conducted on pre-trade

performance. I am accepting this shrinkage in sample size because I can support the results of the tests whose sample size shrank with complementary findings; for the results that are somewhat controversial, the sample size remained large.

Canvassing a Significant Target Population

The execution of my research design included canvassing the entire population of

unseasoned equity securities issued from 1985-2002. I based my rationale for canvassing

the entire population on the fragmentation of research projects in which researchers

analyzed different events related to IPO performance. This concept is critically important

to the significance of this research project--I have tested the hypotheses both through time and across a significant number of observations. To illuminate the problem, Bradley, Jordan, and Ritter (2003) found that from 1996 to 2000 IPOs experienced a 3.1% positive abnormal performance event in the 5-day window of time surrounding the expiration of the quiet period. Bradley et al. (2003) quickly reversed their position; Bradley, Jordan, Ritter, and Wolf (2004) found that between January of 2001 and July of 2002 the abnormality disappeared. The second research project examined just 37.5% of the observations analyzed in the original study. In addition, the period analyzed in the second study was only 1.5 years, whereas the original period was 4 years; I believe that researchers need to expand both of these studies and that the finding presented in the second study may be a result of a small sample bias.

Data Collection

The data collection process also may prove to bias the research project. In this

research project, since I am interested in evaluating existing data, I have pulled and

pooled data from a variety of sources to produce the most accurate reflection of the

population as possible. The potential sources that I have used to obtain data are the Center for Research in Security Prices (CRSP) database, Standard & Poor's Compustat

database, Thomson Financial, The Wall Street Journal securities’ pricing databases,

Google Finance, Hoovers IPO database, the IPO Reporter, Edgar Online IPO, and various

governmental and financial web sites. The first goal of this research project was to

identify all IPOs that have gone public in the 18-year period from 1985-2002.

For the 1985 to 1996 period, I will use the Field-Ritter dataset of company

founding dates, as used in Loughran and Ritter (2004; as noted on



http://bear.cba.ufl.edu/ritter/foundingdates.htm). This data set provides me with a

substantial listing of IPOs, but as Ritter notes on his website, it is not a comprehensive IPO list; I excluded those IPOs that do not have reliable founding dates from the analyses. I used the firm name to query the CRSP/Compustat merged database to find historical pricing data for IPOs that issued shares from 1985 to 2002. If, after searching through the preceding resources for pricing and other data, I could not obtain the data using the identified resources, I decided to drop the IPOs from the analysis.

Data Analysis

The Variables

R_IPOAverage(Offering→InitialTrade) : The average return on unseasoned IPOs in the pre-public trading period.

R_DJIAMonthly : The average monthly return for the Dow Jones Composite Index.

R_IPOMonthly(Offering→InitialTrade) : Pre-trade returns aggregated in month cohorts.

R_IPO(Day 1) : The return obtained by IPOs on their initial day of trade.

R_IndustryMatchedFirm : The return obtained by the matched-firm benchmark, utilizing industry affiliation as the matching criterion.

BHAR(TradingDay 2-750) : The average BHAR calculated from trading day 2 until trading day 750.

R_Quiet : The average return obtained by an IPO during the 5-day period surrounding the expiration of the quiet period.

R_Lock-up : The average return obtained by an IPO during the 5-day period surrounding the expiration of the lock-up period.

The Hypotheses of Research

In the ensuing section, I have outlined the hypotheses of this research project.

There are four sections, which are as follows: (a) specification and power analysis, (b)

short-term performance, (c) long-term performance, and (d) events related to IPO

issuance. The results of some of these sections influence other analyses. Specifically, the

selection of a well-specified model in section 1 has influence over each of the subsequent

analyses. Therefore, the reader should evaluate each of the hypotheses individually and

then collectively.

Section 1: Specification and Power Analysis

Researchers have recently begun to debate the validity of tests that identify

abnormal performance in long-term event studies. Barber and Lyon (1997), Lyon, Barber, and Tsai (1999), and Purnanandam and Swaminathan (2004) derived

remarkably different conclusions from their analyses. After a thorough review of the

three preceding documents, it is apparent that any application of a method used to detect

abnormal performance needs to be accompanied by a specification and power analysis.

Therefore, I ran a specification analysis and simulated abnormal performance using the

Brown and Warner (1985) methodology, which calls for 250 samples of 50 simulated

events (p. 6). I drew these samples using the stratified sampling technique to ensure I

gave each year of performance an equal weighting. I used companies comprising the

Russell 3000 Index, in each year of the sample period, as the population securities

available for matching.

I took ten random samples of 50 securities each year. For the specification

analysis, I have taken these data and calculated the average BHAR, the standard deviation, and the empirical size (ES) statistic. Also using this sample, I have imposed abnormal performance and analyzed whether this abnormal performance is identified given the model of normal performance, whether theoretical rejection levels are adhered to (power analysis), and whether the model fails to identify this performance from trading day 2 until trading day 750 (approximately 3 years).

The specific methodologies used to test for abnormal performance that were

analyzed in this analysis are the reference portfolio and the matched-firm approach,

which were matched based upon the market capitalization, industry affiliation, and book-

to-market ratios. When using the matched-firm approach sorting by market capitalization

and book-to-market ratios I first sorted firms based upon market capitalization,

identifying the closest 25 firms when compared against the event firm, and then identified the firm with the closest book-to-market ratio. To sort using the firm's industry affiliation

and market capitalization, I first sorted by industry affiliation and then by market

capitalization. Finally, I sorted the matched-firm approaches using just the market

capitalization or industry characteristic based on the given characteristics. After this sort,

using the market capitalization approach, I scanned the sample to find the closest match or

using the industry approach I grouped the sample by industry affiliation and then

conducted a random sample to obtain a match. If I could not find a 4-digit SIC Code

industry match I moved to a 3-digit match, and if I could not identify a 3-digit match, I

removed the firm from the subsequent analysis. Similarly, if no book-to-market match

could be identified the firm was removed for the analysis. If data from a matching

company is missing, I replaced the missing data with a new matched-firm.

For Hypothesis 1, I have taken data from 1985 to 2002 and conducted 10 random

nonrepeating samples of 50 companies—overall, I distributed the 180 samples evenly

across the 18-year period. For each benchmark and each sample, I matched each randomly sampled company against the benchmark return (the matched firm or the matching portfolio) and then calculated the BHAR.

Hypothesis 1:

H0: BHAR = 0
H1: BHAR ≠ 0

After running the preceding test on the 180 samples of 50 companies, I calculated the ES

statistic. I then compared this statistic against a confidence interval to determine the

specification of each method. The results of this analysis were combined with the ensuing power analyses to select a benchmarking method.

In the power analysis, I took the same sample that was mentioned in the

specification analysis and simulated abnormal performance at different intervals to

determine when each of the seven methods of abnormal performance detection reacted to

simulated abnormal performances of +/- 1, 5, 10, 15, 30, 50, and 75%.

Hypothesis 2:

H0: BHAR = 0
H1: BHAR ≠ 0

Next, I combined the results of the individual tests, and calculated the Empirical Power

(EP)—I calculated the EP by taking the number of times I rejected the null hypothesis

divided by the number of tests conducted. I then used the results of the specification and

power analyses to decide which benchmarking metric was the best specified.
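As a minimal illustration of how the empirical size (ES) and empirical power (EP) statistics can be tabulated from simulated samples, the Python sketch below counts null-hypothesis rejections using a two-sided t-test. The simulated BHARs, the 5% significance level, and the injected abnormal return are assumptions made only for the example.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def rejection_rate(samples, alpha=0.05, injected_abnormal=0.0):
        """Share of samples in which H0: mean BHAR = 0 is rejected (two-sided t-test)."""
        rejections = 0
        for bhars in samples:
            _, p = stats.ttest_1samp(np.asarray(bhars) + injected_abnormal, 0.0)
            if p < alpha:
                rejections += 1
        return rejections / len(samples)

    # Hypothetical simulated BHARs: 180 samples of 50 firms each, mean zero under H0.
    samples = [rng.normal(0.0, 0.40, 50) for _ in range(180)]

    print("Empirical size (no abnormal return):", rejection_rate(samples))
    print("Empirical power (+10% injected):    ", rejection_rate(samples, injected_abnormal=0.10))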

Section 2: Initial and Subsequent Short-term Performance

In this round of tests I analyzed whether and when IPOs generate substantial

abnormally positive returns. The initial test of abnormal IPO performance is constrained

to the period prior to public trading, from the offering to the initial public trade. I

conjecture that, as expected in the efficient market theory, on average, speculators or

investors using systematic trading schemes cannot obtain abnormal profits after the issue

begins public trading. However, the results of analyses attempting to test this hypothesis

have presented evidence that investors can obtain abnormal performance either prior to

public trading or on the first day of public trade—most analyses do not differentiate

between the two trading periods. Therefore, in my hypothesis tests I have assumed that no abnormal performance occurs after issuance.

To test Hypothesis 3, I used data from various sources--Google Finance, Hoovers

IPO database, The IPO Reporter, Edgar Online IPO--to obtain each IPO's offering price,

the initial opening price on the first day of trade, and industry affiliation. To generate a

sample for this analysis, I gathered IPO data using the aforementioned search tools and compiled a canvassed sample of 1,876 IPOs; companies issued these shares from April 12, 1996 to January 29, 2008. First, I determined whether the performances produced in this period were significantly different from zero (Hypothesis 3), and then, in Hypothesis 4, I determined whether the average monthly performance of the IPOs in the pre-public trading period was significantly greater than the best performing benchmark.

Hypothesis 3:

H0: R_IPOAverage(Offering→InitialTrade) ≤ 0
H1: R_IPOAverage(Offering→InitialTrade) > 0

To conduct the analysis found in Hypothesis 4, I calculated the average

monthly return for the IPO group, and then assumed that an investor obtains an allocation

of one share of the issuing firm’s stock at its offering and sells it at the opening price on

the initial day of public trading. The individual performances were then averaged and

grouped by offering month. I grouped these firms so that I could compare the returns to a

standard benchmark. The conjectured relationship was as follows:

Hypothesis 4:

H0: R_IPOMonthly(Offering→InitialTrade) ≤ R_DJIAMonthly
H1: R_IPOMonthly(Offering→InitialTrade) > R_DJIAMonthly
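A minimal sketch of the aggregation behind Hypotheses 3 and 4 appears below. The offer-to-open returns, the offering months, and the Dow Jones monthly returns are hypothetical placeholders, and the one-sided t-test is included only to illustrate the form of the comparison.

    from collections import defaultdict
    import numpy as np
    from scipy import stats

    # Hypothetical (offering_month, offer_to_open_return) observations.
    ipo_obs = [("1997-01", 0.12), ("1997-01", 0.05), ("1997-02", 0.20),
               ("1997-02", -0.01), ("1997-03", 0.08), ("1997-03", 0.15)]
    # Hypothetical DJIA monthly returns for the same months.
    djia = {"1997-01": 0.02, "1997-02": 0.01, "1997-03": 0.03}

    # Average the pre-trade returns within each offering-month cohort.
    cohorts = defaultdict(list)
    for month, ret in ipo_obs:
        cohorts[month].append(ret)
    ipo_monthly = {m: np.mean(r) for m, r in cohorts.items()}

    diffs = [ipo_monthly[m] - djia[m] for m in sorted(ipo_monthly)]
    t, p_two_sided = stats.ttest_1samp(diffs, 0.0)
    p_one_sided = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2
    print("mean monthly excess return:", round(float(np.mean(diffs)), 4),
          "one-sided p:", round(float(p_one_sided), 4))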

In Hypothesis 5, I determined whether the abnormal performance that occurred

prior to the issuance of unseasoned equity offerings continued to occur when the shares

begin public trading. In this analysis, I have matched the returns of the IPOs against the

best-specified and most powerful method of benchmarking—identified in Hypothesis 1

and 2. The sample in this analysis was obtained using the list of IPOs found in the Field-Ritter dataset of company founding dates previously mentioned; the sample was canvassed and compared against the CRSP database to identify the opening and closing prices on the first day of trading for the IPOs used in this sample. The CRSP database started tracking these data in 1997; therefore, the tests were constrained to the 1997 to 2005 period, which yielded 2143 observations.

Hypothesis 5:

H0: R_IPO(Day 1) ≤ R_IndustryMatchedFirm
H1: R_IPO(Day 1) > R_IndustryMatchedFirm

After Hypotheses 2 through 5 were tested, I had determined whether initial abnormally

positive performance was generally constrained to the period preceding the issuance of

shares, and provided a detailed account of short-term IPO performance.

Section 3: Longer-term Performance

If the hypothesized relationships in Hypotheses 3 and 4 are correct, using either the offering price or the initial trade as the starting price would have influenced the estimate of long-term abnormal IPO performance. Therefore, I decided not to include the performance obtained in these two periods when analyzing longer-term IPO performance. The general questions I was interested in answering were whether IPOs significantly underperform the chosen benchmarks and when long-term underperformance occurs. To answer these questions, I took the list of IPOs found in the Field-Ritter dataset, focusing on the 1985-2002 period.

I then calculated the IPO’s and the selected benchmark’s compounded returns for

each holding period from trading day 2 through 750. I then subtracted the benchmark
59

returns from the IPO’s returns; the resulting value, which was the BHAR, I then averaged

the BHAR across the sample, grouped by trading day.
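The sketch below illustrates this calculation under an assumed data layout (it is not the original code): daily returns for each IPO and its matched firm are compounded, differenced, and averaged across the sample for each trading day of seasoning.

import numpy as np

def average_bhar_by_trading_day(ipo_returns, match_returns):
    # ipo_returns, match_returns: (n_firms, n_days) arrays of daily returns,
    # aligned on trading days of seasoning
    ipo_wealth = np.cumprod(1.0 + np.asarray(ipo_returns), axis=1)      # compounded IPO return
    match_wealth = np.cumprod(1.0 + np.asarray(match_returns), axis=1)  # compounded benchmark return
    bhar = ipo_wealth - match_wealth                                    # firm-level BHAR per horizon
    return bhar.mean(axis=0)                                            # average BHAR for each trading day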

Hypothesis 6:

H0: BHAR(TradingDay 1-750) = 0
H1: BHAR(TradingDay 1-750) ≠ 0

I chose to focus on only three years of data because the power of the metrics tested in Hypotheses 1 and 2 deteriorates significantly as the event horizon increases beyond three years (see Appendices E-H).

Section 4: The Quiet Period & Lockup Expiration

In this round of the analysis, I was interested in determining whether IPOs exhibit significant abnormal performance within the 5-day period surrounding the expiration of the quiet and lockup periods. The quiet period is a period that begins after an issuing company submits its initial registration to the Securities and Exchange Commission, and it lasts for 25 to 40 days after the company issues its shares to the public. In this period, the SEC prohibits the company and the company's agents from issuing forward-looking statements about the future prospects of the company. The lockup period typically ends 180 days after the company goes public, but the underwriters can alter this date. I used the five-day trading period surrounding the expiration of the quiet and lockup periods as the measurement period to determine whether abnormal performance occurred at these expiration points.

The samples canvassed to conduct these analyses were the same as those previously mentioned in the analysis of long-term performance. However, if I identified problems with any of the IPOs (e.g., the dates did not match as expected or a firm traded prior to its issuance), they were excluded from this portion of the analysis. I calculated the compounded holding period return (HPR) for the IPO and the benchmark for the five-day window surrounding the conclusion of the quiet and lockup periods. I then subtracted the benchmark's HPR from the IPO's HPR and averaged the differences over the entire sample to determine the results of the following tests.

Hypothesis 7:

H0: BHAR_Quiet ≤ BHAR_IndustryMatchedFirm
H1: BHAR_Quiet > BHAR_IndustryMatchedFirm
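For illustration only, the following sketch computes the five-day event-window BHAR described above for a single IPO and its matched firm; the price series, event date, and nearest-date lookup are assumptions, not the study's actual implementation.

import pandas as pd

def event_window_bhar(ipo_prices: pd.Series, match_prices: pd.Series,
                      event_date, window: int = 2) -> float:
    # Compounded holding period return of the IPO minus that of the matched firm
    # over the five trading days centered on the expiration date (window = 2 on each side)
    idx = ipo_prices.index.get_indexer([pd.Timestamp(event_date)], method="nearest")[0]
    lo = max(idx - window, 0)
    hi = min(idx + window, len(ipo_prices) - 1)
    hpr_ipo = ipo_prices.iloc[hi] / ipo_prices.iloc[lo] - 1.0
    hpr_match = match_prices.iloc[hi] / match_prices.iloc[lo] - 1.0
    return hpr_ipo - hpr_match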

When I analyzed the performance around the expiration of the lockup, the sample and methodology were the same; however, instead of the positive abnormal performance predicted at the expiration of the quiet period, I predicted that there would be a negative performance result as the lockup period expired. I forwarded this conjecture because, as the shares of the newly issued IPO approach the expiration of the lockup period, the restriction on insider shares expires and the underwriting syndicate allows the officers of the company and other powerful shareholders to sell their shares on the open market. Prior to this expiration, they were constrained from selling these shares. Because of the increase in the volume of securities available for public trade and the likelihood that powerful shareholders will tend to diversify their holdings, it was my conjecture that the shares would experience a significantly negative performance event.



Hypothesis 8:

H0: BHAR_Lockup ≥ BHAR_IndustryMatchedFirm
H1: BHAR_Lockup < BHAR_IndustryMatchedFirm

By forwarding the preceding hypotheses, I was interested in addressing whether the market reacts to systematically occurring events related to IPO issuance and generates abnormal performance. These tests, and the substantial period over which I tested whether these events produce abnormal performance, either counter or substantiate the claims forwarded by researchers.

Summary

The preceding section provided the fundamental relationships that I was interested in evaluating in my dissertation and described how I endeavored to test them. Of particular interest were how well the different benchmarks estimated normal performance and how well specified and powerful the tests I ran to identify abnormal performance were. Currently, researchers seem to be focusing on identifying and examining different methodological applications used to conduct event studies; therefore, the specification and power tests are critically important to the academic interpretation of the results of this study. At the conclusion of the proposed analysis, I have provided the academic and professional communities with direct tests analyzing whether events associated with an organization's issuance generated abnormal performances. Given that I have used daily data in this analysis, investors and other interested parties will be able to visualize the performance of the average IPO from its issuance to its three-year anniversary. Focusing on the entire three-year data set should enable investors to improve their expectations regarding long-term IPO performance.


CHAPTER 4:

RESULTS

Introduction

In this chapter of the dissertation, I have reported the results obtained from the

tests of eight hypotheses. In the first section, I have reported the results of the power and

specification analysis. I used the best performing benchmarking method, identified by

specification and power analyses, to test for abnormal IPO performance. In the next

section, I have analyzed whether unseasoned IPOs generate abnormal performances prior

to public issuance and on the first day of public trading. I followed this analysis with an evaluation of longer-term IPO performance. Finally, I have evaluated whether IPOs perform abnormally after the expiration of their quiet and lockup periods. I have addressed each of these sections in their respective order in the following paragraphs.

Specification and Power

The purpose of this section was to determine which method of benchmarking was more effective in testing for abnormal performance of the IPOs. Based upon the review of

literature, I employed two broad methodological strategies to conduct specification and

power analyses--the matched-portfolio and the matched-firm approaches. I performed the

portfolio-matching strategies using daily return data from Dr. Kenneth French's web site (http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/index.html) based upon three characteristics: (a) market capitalization, (b) market capitalization and book-to-market ratios, and (c) industry affiliation. In addition, I implemented the matched-firm approach by taking the characteristics of the randomly selected firm and matching the firm with a similar firm from the pool of Russell 3000 Stock Index constituents based upon (a) market capitalization, (b) market capitalization and book-to-market ratios, and (c) industry affiliation.
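The matching step can be illustrated with the following sketch, which selects, for an event firm, the same-industry firm with the closest market capitalization from the Russell 3000 pool; it is an assumed implementation, not the original one, and the column names (ticker, industry, market_cap) are hypothetical.

import pandas as pd

def match_firm(event_firm: pd.Series, pool: pd.DataFrame) -> pd.Series:
    # Candidates share the event firm's industry (excluding the event firm itself)
    candidates = pool[(pool["industry"] == event_firm["industry"]) &
                      (pool["ticker"] != event_firm["ticker"])]
    if candidates.empty:
        candidates = pool[pool["ticker"] != event_firm["ticker"]]
    # Pick the candidate with the smallest market-capitalization gap
    gap = (candidates["market_cap"] - event_firm["market_cap"]).abs()
    return candidates.loc[gap.idxmin()]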

I randomly drew 50 companies, without replacement, from the Russell 3000 list of companies each year from 1985 to 2002, and repeated this procedure 10 times each year. I matched each company from the draw of 50 companies to portfolios and firms based upon the aforementioned characteristics. I then obtained 4 years of performance data from the CRSP database for the randomly selected firms and their matched-firm counterparts, relying on the previously mentioned portfolio data for the portfolio benchmarks. I then calculated BHARs for each of the benchmarks for 1- through 4-year time horizons. When researchers attempt to use a portfolio-matching strategy to conduct event studies that rely on the BHAR method to calculate abnormal performance, they will often conclude that abnormal performance is evident when no abnormal performance has occurred.

Test of the Hypotheses

In this section, I have provided the results from the tests of the hypotheses used to

detect abnormal performance. In the initial rounds of this analysis, I selected the method

used to detect abnormal performance based upon specification and power analyses. In the

subsequent hypotheses tested, I transitioned to testing whether IPOs perform abnormally

throughout their maturation process.

Section 1: Test for Specification

I conducted this analysis based upon the methodological approach documented by Ang and Zhang (2004). The purpose of the specification analysis was to determine which methodology had the least Type I error, that is, which minimized the probability of concluding that there had been an abnormal return when in reality abnormal performance had not occurred. To test whether a method was well specified, Ang and Zhang (2004) used the empirical size (ES) statistic. In these tests, I conducted simulations on a sample of data and calculated whether the ES was close to "the pre-specified significance level at which the test is conducted" (see Ang & Zhang, 2004, p. 261); if this occurred, I could claim that the test was well specified.

The hypothesis test for specification was as follows:

Hypothesis 1:

H0: BHAR is zero


H1: BHAR is not zero

To determine which benchmarking methodology led to well-specified test statistics, I randomly selected companies from the lists of the yearly Russell 3000 Index. I randomly drew event firms from the pool of Russell 3000 companies each year and paired them with matched firms and matched portfolios based upon their market capitalization, industry affiliation, and book-to-market ratios. The percentages reported below show the percentage of rejections, or identifications of abnormal performance, across the entire set of samples. In each of the 18 years, I generated 10 samples of 50 observations and evaluated these samples using the Barber and Lyon (1997) t_BHAR test statistic. With this statistic, I calculated whether abnormal performance was significant in each of these samples. If I identified abnormal performance (rejected the null), the rejection was given a value of one; if the null was not rejected, the sample was given a value of zero. I then summed the results and divided the sum by 180, the number of samples. Therefore, the percentages found in Table 1 illustrate how many times I found statistically significant abnormal performance when none was expected. The following table presents the results of the specification analyses:

Table 1

Specification Analyses I

Empirical Size - Specification


Matched Firm Matched Portfolio
Years MCap Ind Ind -> MCap MCap -> BE/MC MCap MCap -> BE/MC Ind
1 5.00% 3.89% 4.44% 3.33% 43.89% 44.44% 46.67%
2 2.78% 1.67% 3.33% 2.78% 31.67% 25.56% 56.67%
3 2.22% 1.67% 2.22% 1.11% 33.33% 28.89% 65.56%
4 5.56% 3.89% 3.89% 2.78% 47.22% 36.67% 79.44%
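For readers who wish to reproduce the logic behind these percentages, the sketch below shows how an empirical size could be computed from simulated samples of BHARs, together with the binomial confidence band around the nominal 5% level discussed next; it is an illustrative reconstruction under assumed inputs, not the study's code.

import numpy as np

def t_bhar(bhars):
    # Conventional t statistic for the mean BHAR (in the style of Barber and Lyon, 1997)
    bhars = np.asarray(bhars)
    return bhars.mean() / (bhars.std(ddof=1) / np.sqrt(len(bhars)))

def empirical_size(samples, t_crit=1.96):
    # Share of simulated samples in which the null of zero abnormal performance is rejected
    rejections = [1 if abs(t_bhar(s)) > t_crit else 0 for s in samples]
    return sum(rejections) / len(samples)

def es_confidence_band(alpha=0.05, n_samples=180):
    # Binomial band around the nominal significance level; approx. (.018, .082) for 180 samples
    half_width = 1.96 * np.sqrt(alpha * (1 - alpha) / n_samples)
    return alpha - half_width, alpha + half_width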

At a 5% level of significance, the ES interval ranges from .01816 to .08184. I calculated this confidence interval using the following formula:

.05 ± 1.96 × √(.05(1 − .05)/180)                                (7)

All of the matched-firm approaches failed to reject the null, declaring that no abnormal performance had occurred; I had not anticipated this, but it is an extremely positive result. However, each of the portfolio-matching strategies rejected the null hypothesis, identifying abnormal performance even though I had not simulated any abnormal performance. Given the results of the preceding analysis, it is obvious that the matched-firm approaches produced better-specified results than the matched-portfolio technique.

To attempt to differentiate between the different matched-firm approaches used to identify abnormal performance, I tested for abnormal performance using an entire year's sample (n = 500 firms). Table 2 exhibits how different the ES statistic was when the sample size was increased from 50 to 500. The confidence interval, due to the increase in observations and the decrease in the number of samples, increased in range; any sample that identified abnormal performance between 0% and 15.06% of the time was not significantly different from a 5% level of significance.

Table 2

Specification Analysis II

Empirical Size - Specification


n=500
Time Horizon (Yrs) MCap Ind (Rand) Ind -> MCap MCap -> BE/MC
1 0.00% 0.00% 0.00% 0.00%
2 0.00% 0.00% 0.00% 5.56%
3 11.11% 0.00% 0.00% 5.56%
4 0.00% 0.00% 5.56% 0.00%

As the sample size increases from 50 to 500, it is obvious that the observed percentage of rejections moves closer to the theoretical or expected rejection level. This occurs because, without simulated abnormal performance, researchers would expect to detect no abnormal performance. As the sample size increased, the ES statistic approached zero; to verify this conclusion, compare the ES statistics found in Table 1 with those in Table 2.

Given the preceding results, I believe it is evident that the matched-firm approach is a better-specified method of abnormal performance detection than the methods using portfolio-matching strategies. However, I could not find significant statistical evidence to support my affirmation that the matched-firm approach using industry affiliation as the matching criterion produces better-specified test statistics than the other matched-firm approaches. When samples contain 50 firms, the matched-firm approach using market capitalization and book-to-market ratios performs a little better than the industry benchmarks; on the other hand, when samples contain 500 firms, the matched-firm approach using industry affiliation was better specified than the competing methods, but again, the differences are not statistically significant.

Section 2: Tests of Power

The purpose of the power analysis was to determine which test method had the least Type II error and, equivalently, the highest power. To run the power test, I calculated the empirical power (EP) statistic by simulating abnormal performance to determine at which level of abnormal performance the various methods of detection identified the simulated abnormal performances. I based the method used to calculate the EP statistic upon the method advanced by Ang and Zhang (2004); however, I simulated abnormal performance at levels of +/- .01, .05, .10, .15, .20, .30, .50, and .75. In essence, to calculate the EP statistic I forced the average abnormal performance away from zero. The hypothesis was as follows:

Hypothesis 2:

H0: BHAR is zero


H1: BHAR is not zero
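The following sketch illustrates, under assumed inputs, how the EP statistic could be computed: a fixed level of abnormal performance is added to each simulated sample's BHARs and the rejection rate of the t test is recorded. It is not the original simulation code.

import numpy as np

def empirical_power(samples, shift, t_crit=1.96):
    # Inject a fixed level of abnormal performance and record the rejection rate
    rejections = 0
    for bhars in samples:
        shifted = np.asarray(bhars) + shift          # force the average BHAR away from zero
        t = shifted.mean() / (shifted.std(ddof=1) / np.sqrt(len(shifted)))
        rejections += int(abs(t) > t_crit)
    return rejections / len(samples)

# Levels of simulated abnormal performance used in this study
levels = [-.75, -.50, -.30, -.20, -.15, -.10, -.05, -.01,
          .01, .05, .10, .15, .20, .30, .50, .75]
# power_curve = {lvl: empirical_power(samples, lvl) for lvl in levels}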

To continue the examination of the seven potential methods, I conducted power analyses by simulating abnormal performance using the same number of samples and sample sizes as in the specification analysis. By adding or subtracting a given percentage of performance from the average performance to simulate abnormal performance, I identified significant differences between the matched-firm and portfolio-matching strategies. Figure 1 illustrates how quickly each of the seven methods of abnormal performance detection responded to simulated abnormal performance.

Figure 1. One-year power curve.

In Figure 1, all of the matched-firm approaches have a defined power curve (the traditional U or V shaped curve), and the power curves are centered approximately on zero, the point where no abnormal performance is simulated. In comparison, the portfolio-matched benchmarks had no defined structure, or at least not the structure needed to make credible inferences pertaining to the power of the benchmark. Again, the portfolio benchmarks failed to approach the acceptable standards necessary to judge a benchmark's ability to detect abnormal performance; the portfolio techniques were not included in the remaining analyses because I did not consider them to be meaningful alternatives to the matched-firm approach.

Figure 1 shows that even if the event firms produced 15% abnormal performance, the competing matched-firm approaches rejected the null hypothesis (identified abnormal performance) in only approximately 30% of the samples. This seems rather low; researchers would reject simulated abnormal performances of greater than 30% in 70% of the tests. Comparing these numbers with the results found in Figure 2, where I expanded the number of firms per sample from 50 to 500, the matched-firm approach rejected 80% of the samples when I imposed 10 percentage points of abnormal performance, compared with approximately a 17% rejection rate using the smaller sample. Therefore, as I increased the sample size, the power curve narrowed, making the employed methodology more appropriate.

Figure 2. One-year power curve (n=500).

There was still no statistically significant difference between the different matched-firm approaches to benchmarking. When conducting these tests, I was also concerned with the speed at which the metric deteriorates: as I increased the event period, for example when analyzing long-term abnormal performance, the method's ability to detect abnormal performance decreased. I presented evidence of this in Figure 3; using a four-year time horizon with a simulated abnormal performance of 10%, I identified abnormal performance 6% of the time, which was a third as powerful as the one-year time horizon. However, the relative power differences between the two time horizons were not consistent throughout the entire power curve; sometimes the methods used to test the four-year time horizon were 1/6th as powerful as the one-year time horizon, and near the extremes (+/- 75%) the differences decreased to 6/9ths.

Figure 3. Four-year power curve (n=50).

It is apparent that, as researchers lengthen a study's time horizon, the concern about falling prey to Type II statistical error (not rejecting the null even though abnormal performance has occurred) increases. As the sample size is increased, this error should decrease and the power of the statistical tests used in the analysis should increase. While reviewing the changes in the shapes of the power curves in Figures 3 and 4, I noticed significant changes in the power curve as the sample size was increased. However, I could not detect any simulated abnormal performance under 15% with the BHAR matched-firm approach to benchmarking using a four-year time horizon. It is my subjective assessment that researchers need to be skeptical of any analysis using an event horizon in excess of 3 years. Even a three-year event window may be too long a horizon to lead to meaningful results.

Figure 4. Four-year power curve (n=500).

In this power analysis, as the sample size increased (50 companies → 500 companies), the number of samples decreased immensely (180 samples → 18 samples). The bands used to evaluate whether I could state with 95% confidence that the metric was powerful enough to detect abnormal performance at given levels of performance widened as the number of samples used in the study decreased. I employed the following formula to construct the confidence interval:

.95 ± 1.96 × √(.95(1 − .95)/18)                                (8)

The resulting confidence bounds were 100% on the upper end and 85% at the lower bound. This suggested that if abnormal performance in excess of positive or negative 10% is experienced, using a one-year time horizon, I can be 95% confident that abnormal performance actually occurred. As the time horizon of the event increases to two (three) years, the simulated abnormal performance necessary to induce 95% confidence in the outcome of the analysis exceeds 20% (30%). I have presented a more comprehensive review of my power analyses in Appendices C through H.

The purpose of the tests of Hypotheses 1 and 2 was to determine which of the seven benchmarking methods selected in the literature review would provide the best balance between specification and power, or which method most appropriately balanced Type I and II errors. As in most analyses, I was more concerned with Type I errors than Type II; more succinctly, it is better to err on the side of caution, failing to conclude that abnormal performance occurred when in actuality it had, than to conclude that abnormal performance occurred when in reality it did not. In the sensitivity analysis, the only benchmarking method that did not generate well-specified test statistics was the matched-firm approach based upon market capitalization alone. After taking more time to analyze the differences between the competing methods, it is evident that, although not statistically significant, the matched-firm strategies based upon industry affiliation and upon market capitalization combined with the book-to-market ratio were the best specified of the competing benchmarking techniques, which implies that these methods are better than the portfolio approaches.

Even though I could not conclude that the matched-firm approach to benchmarking using industry affiliation was statistically better than the matched-firm approach based upon market capitalization and book-to-market ratios, from a qualitative perspective this conjecture made sense. In the preceding specification analysis, using both a sample size of 50 and of 500, the results of my analysis show that both metrics produce well-specified test statistics, and in the power analyses both metrics performed similarly. However, the additional information needed to obtain data for the matching strategy using market capitalization and book-to-market ratios makes this technique less preferable. Using IPO data from my list and querying the CRSP database for data on 82 IPOs, I found that I could match all queries based upon industry affiliation. However, only 24 of these IPOs generated all of the data needed to calculate either market capitalization or the book-to-market ratios; this means only 29% of the IPOs would have survived the initial sort. Continuing this argument, I attempted to obtain matches for at least 90% of the original sample based upon the appropriate characteristics; if I matched only 29% of the sample using CRSP data, this would leave 61% of the sample still needing a match. The only effective alternative matching procedure I identified was conducting a Google search using the company's ticker or name, but the Google search results are less reliable and should be used as a last resort. Therefore, I have used the matched-firm approach with industry affiliation as the matching criterion in the following analyses.

Section 3: Initial Performance

In the following section, I have focused on the initial trading period. The main questions posited in this section were whether unseasoned IPOs produced abnormal performances in the time preceding public trading and whether this abnormal performance continued into the first day of public trading. I first reported the results of the analysis carried out on the period prior to public trade and then analyzed whether IPOs produce abnormal performance on their first day of trade.

IPO Performance (Pre-issuance)

I now turn to the results of analyses conducted on IPO performances obtained prior to public trading, from the offering to the initial trade. I used standard indexed benchmarks to gauge normal performance because the pre-issue performance period was generally constrained to a period when the U.S. markets were not trading; the company issues its shares on the night preceding its initial day of trade.

Hypothesis 3:

H0: R_IPOAverage(Offering→InitialTrade) ≤ 0
H1: R_IPOAverage(Offering→InitialTrade) > 0

The t test is as follows: t = (X̄ − 0)/(S/√n). The sample average was 11.74%, with a sample standard deviation of 31.16% and 1,876 observations taken from April 12, 1996 to January 29, 2008. The resulting t statistic was 16.32, which exceeds the critical value of 1.645 for a one-tailed t test at a 5% level of significance. On average, from April 12, 1996 to January 29, 2008, unseasoned IPOs produced an 11.74% return; this result was statistically different from zero at a 95% level of confidence, and I therefore rejected the null hypothesis in favor of the alternative hypothesis.
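As a quick arithmetic check of the reported statistic, the following snippet recomputes the t value and the one-tailed critical value from the summary figures above.

import math
from scipy import stats

mean, sd, n = 0.1174, 0.3116, 1876
t_stat = (mean - 0.0) / (sd / math.sqrt(n))   # approximately 16.3
t_crit = stats.t.ppf(0.95, df=n - 1)          # approximately 1.645 for a one-tailed 5% test
print(round(t_stat, 2), round(t_crit, 3))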

The preceding analysis illustrated the difference between the performance obtained by IPOs in pre-public trading and an expectation of zero abnormal performance. Because this is the pre-public trading period, there is no specific way to pair individual IPO performance with a benchmark. Therefore, I aggregated the returns into monthly IPO performances; these performances assume the investor obtains shares of the IPO at the offering and sells them at the initial trade on the first day of public trading. In Appendix J, I have illustrated how abnormally IPOs perform in pre-public trading. To accomplish this goal, I calculated the average monthly performance of the IPOs versus those of the DJIA, Russell 3000, and NASDAQ Composite Indices.

Table 3

Pre-Public Trading Averaged Monthly IPO Performance

Comparison of Average Monthly Performances


IPO DJIA Russell 3000 Nasdaq
Sample Average Return 8.96% 0.55% 0.46% 0.23%
Standard Deviation 0.14 0.04 0.04 0.08
Count 139.00 139.00 139.00 139.00
T-value 7.81 1.60 1.22 0.33

As the numbers in Table 3 indicate, at a 5% level of significance for a one-tailed t test (critical t of 1.66), I rejected the null hypothesis for only the IPO sample, implying that the

IPO group experienced significant abnormal returns. None of the benchmark indices

produced abnormal returns. However, the t statistic obtained for DJIA was 1.60, which is

quite close to the critical value.

Given that the DJIA was the best performing benchmark, I continued my analysis by testing the averaged monthly pre-public-trading IPO performance against the monthly performance obtained by the DJIA.

Hypothesis 4:

H0: R_IPOMonthly(Offering→InitialTrade) ≤ R_DJIAMonthly
H1: R_IPOMonthly(Offering→InitialTrade) > R_DJIAMonthly

I then carried out a paired t test to conduct this analysis. I calculated the resulting t value using the method mentioned earlier in this section. The average difference between the IPO group and the DJIA was 8.41%, with a sample standard deviation of 13.86% and observations occurring over 139 months; the computed t statistic was 7.15. Again, at a 95% level of confidence for a one-tailed test, the critical value of t is 1.66; therefore, I rejected the null hypothesis and determined that significant abnormal performance occurred during the pre-public trading period when compared against standard indices.

Initial Day of Public Trading

In this round of the analysis, I attempted to answer whether IPOs generate abnormal performance on the first day of public trading. To answer this question, I used IPOs issued to the public from January 1, 1997 to December 22, 2005; the sample contains 2,143 observations, and I conducted the following analysis.

Hypothesis 5:

H0: R_IPO(Day 1) ≤ R_IndustryMatchedFirm
H1: R_IPO(Day 1) > R_IndustryMatchedFirm

Using a standard t test, I uncovered the following: the average return across the IPOs in this analysis was 3.44%, and the average performance of the matched-firm benchmark was 0.13%. The sample standard deviation was 16.27%; therefore, I calculated a t value of 9.423, which, when compared to a critical value of 1.645 at a 95% level of confidence, indicated that the IPOs' abnormal returns on the first day of trade were statistically significant. I rejected the null hypothesis in favor of the alternative hypothesis, which stated that the returns of IPOs on the first day of trade are significantly greater than the returns obtained for the matched-firm benchmark.

Section 4: Long-term Abnormal Performance

In this round of the analysis, I turned to evaluating whether significant abnormal performance occurs after the short-term abnormal performances. To accomplish this, I canvassed the population of IPOs issued in the U.S. from January 1, 1985 to December 31, 2002. I used 5,583 IPOs in the analysis and matched each of these IPOs to a firm by industry. I then calculated the product of the IPO's HPRs and subtracted the product of the matched-firm benchmark's HPRs. Finally, I used a derivation of the BHAR formula as per the following:

BHAR = ∏(1 + r_IPO) − ∏(1 + r_IndustryMatch)                                (9)

Next, after each trading day, I averaged the individual BHARs and calculated the standard deviations across the entire sample. Therefore, the output, which encompasses trading days 2 through 750, is the averaged BHAR across the entire sample over each specified time horizon. Next, I conducted two-tailed t tests for all 749 time horizons.

Figure 5. Long-term abnormal performance.

Analysis of the data provided in Figure 5 shows that during trading days 5 through 12, IPOs significantly underperformed the matched-firm benchmark; at day 17 the trend turned positive, and it remained significantly positive until trading day 120 (with one insignificant reading on day 33). At day 120 the BHAR was 1.934%. The averaged BHAR continued along insignificantly, but positive, until reaching trading day 161. However, the BHAR did not become significantly negative until it reached 201 trading days of seasoning. The BHAR remained significantly negative through the rest of the analysis.

Figure 6. BHAR (IPO v. industry matched firm): cumulative average BHAR (5% to -25%) plotted against the number of trading days of seasoning (1 to 730).

In Figure 5, I provided the t values that I calculated when comparing IPO

performance to the matched-firm returns. Figure 6 shows that the trend of the BHAR was

unmistakably negative. Overall, there is an abnormally positive performance of

approximately 3% occurring within the first year, and at the end of year three the maximum abnormally negative performance occurred, which was -22.41%.

Section 5: Quiet and Lockup Expiration

To construct a test for abnormal performance at the expiration of the lockup and

quiet periods I canvassed the same population of IPOs that I used in the long-term

analysis. The number of observations for the quiet and lockup period analyses was 5,529. To carry out these analyses, I calculated the 5-day BHAR surrounding the date on which the quiet period ended and the date on which the lockup period expired.



Hypothesis 7:

H0: BHAR_Quiet ≤ BHAR_IndustryMatchedFirm
H1: BHAR_Quiet > BHAR_IndustryMatchedFirm

For the analysis of performance surrounding the expiration of the quiet period, the sample average BHAR was 1.64% for the five-day period surrounding the event, and the sample standard deviation was 13.9%. Using these data, I calculated a t statistic of 8.75. At a 95% level of confidence, the critical value was 1.645; I rejected the null hypothesis and concluded that at the conclusion of the quiet period, IPOs produce a significantly positive abnormal performance.

Hypothesis 8:

H0: BHAR_Lockup ≥ BHAR_IndustryMatchedFirm
H1: BHAR_Lockup < BHAR_IndustryMatchedFirm

In the analysis of the performance resulting from the expiration of the lockup period, I identified a significantly negative performance of -1.42%. In addition, the sample standard deviation was 34.4%; therefore, the resulting t test produced a test statistic of -3.08, and at a 95% level of confidence the critical t value was, again, -1.645. Therefore, I again rejected the null hypothesis and concluded that the -1.42% performance is significantly negative.

Conclusions

In the preceding section, I have (a) presented a well-specified and powerful method of identifying abnormal performance when conducting event studies, (b) shown that short-term abnormal IPO performance is positive, (c) illustrated that events occurring throughout the IPO process instigate abnormal performances, and (d) provided a description of IPO performance over the initial three years of seasoning. The analyses related to event-specific performances, the abnormal performances occurring at the expiration of the quiet and lockup periods, generated significant but not substantial abnormal performance. However, initial abnormal performance of roughly 11% in the pre-offering period and 3% on the initial trading day, together with long-term underperformance of IPOs in excess of 30%, seems to suggest that substantial performance abnormalities occur when firms issue unseasoned equity shares.


CHAPTER 5:

SUMMARY, CONCLUSION, AND RECOMMENDATIONS

Introduction

In this chapter, I have reviewed the findings of the analyses conducted in the

preceding chapters, provided an interpretation of these findings, and summarized the

conclusions reached in the preceding sections. In addition, I discussed the implications of

the analyses for social change and advanced recommendations. The outline of this

chapter is as follows: (a) overview, (b) interpretation of findings, (c) implications for

social change, (d) recommendations, and (e) conclusions.

Overview

The overarching goal of this dissertation was to evaluate whether IPOs produce

predictable abnormal performances around specific events and over specific time

horizons. I have divided the analyses into four groups: (a) specification and power

analyses, (b) short-term IPO performance, (c) longer-term IPO performance, and (d)

event-related IPO performance. In the following paragraphs, I have provided an overview

of the results of the analyses conducted to evaluate the hypotheses presented in the four

hypothesis testing sections.

The results of the specification and power analyses quelled some methodological concerns I had mentioned in the literature review. All of the matched-firm approaches used as benchmarks for normal performance (matched using market capitalization; industry and market capitalization; industry; and market capitalization and book-to-market ratios) produced well-specified test statistics. However, I found misspecified test statistics for each method of benchmarking based upon portfolio construction (using primarily the same matching characteristics). This trend continued in the power analyses. For the matched-firm approaches, the power curves centered around zero and maintained a U or V shape as I simulated abnormal performance, while the power curves of the portfolio-based matches were not centered on zero and produced no definite form. At the conclusion of the analysis, I determined that there was little difference in specification and power between the numerous matched-firm approaches, but all of these approaches were preferable to the portfolio-based matching strategies.

In the analysis of short-term IPO performance, I found that both prior to issuance and on the first trading day after issuance, IPOs produced significantly positive abnormal performances. The magnitude of the performances differed substantially: for the preoffering period, IPOs, on average, generated 11.74% of abnormal performance. For the first day of public trading, IPOs produced a return of 3.44%, a significant abnormally positive performance when compared against the matched-firm benchmark. I also aggregated and combined the monthly pre-IPO returns, compared them against the average monthly return on the DJIA, and discovered that they produced a significantly positive abnormal monthly performance of 8.41%.

Next, I turned to analyzing long-term IPO performance, from trading day 2 through day 750 (approximately 3 years of daily data). Initially, IPOs underperformed their matched-firm comparisons significantly, but shortly thereafter they trended positive and had significantly positive abnormal performances until trading day 120. At the 161st trading day the IPOs' BHARs, on average, began to produce negative performances, and on trading day 201 the negative performance became significant. After this significant abnormal performance reading, approximately at the end of the first year of trading, IPOs continuously and significantly underperformed their matched-firm benchmarks up until their 3-year anniversaries. The magnitude of the long-term underperformance is evident; the 3-year BHAR was a significant -22.41%.

In the final set of analyses, I analyzed whether significant abnormal performances

occur at the expiration of the quiet and lockup periods. When testing for abnormal

performance at the conclusion of the quiet period I discovered a significantly positive

abnormal performance of 1.64%. Similarly, at the conclusion of the lockup period, I

discovered a significantly negative performance result of -1.42%. In this section, I have

summarized the major findings generated in this analysis of IPO performance; I will now

turn to discuss how the reader should interpret these results.

Interpretation of findings

In this section of the dissertation, I have attempted to integrate the findings of the results section with the conjectured relationships presented in the literature review. Again,

I have addressed each of the four research sections separately. Therefore, I have

evaluated the findings of each of the four sections in the following order: (a) specification

and power analyses, (b) short-term IPO performance, (c) longer-term IPO performance,

and (d) event-related IPO performance.



Specification and Power Analyses

The results of the specification and power analyses were very surprising. First, I spent a great deal of time developing ideas for dealing with misspecified tests and biases that plague many models of normal performance; in hindsight, this time expenditure seems unnecessary. Second, I was struck by the severe differences in the specification and power of the two main methods of benchmarking, the matched-firm and portfolio-construction approaches. Third, it is important to highlight that all of the matched-firm approaches to benchmarking produced similar specification and power results. I will now address each of these issues, respectively, in the following sections.

Biases

In my opinion, Barber and Lyon (1997) and Lyon, Barber, and Tsai (1999)

forwarded the best and most recent attempts to evaluate the specification and power of

methods used to model normal performance in event studies. In Barber and Lyon (1997), the researchers spent a great deal of time discussing biases related to methods used to detect abnormal performance, but stated that all of these biases were overcome when using the matched-firm approach combined with the BHAR method. However, in Lyon et al. (1999), the researchers continued the investigation of methods used to test for abnormal performance and attempted to correct the portfolio-based metrics using bootstrapped t statistics and by creating pseudo-portfolios to calculate empirical p values.

The explanation of the bootstrapping procedures and the calculation of the empirical p values are beyond the scope of this analysis, but all of this work seems unnecessary: even after researchers made these complicated adjustments to the competing portfolio methods, the matched-firm approach still produced better-specified test statistics than either method. The complicated procedures implemented in Lyon et al. (1999) do improve the results and the distribution of the power tests of the portfolio methods, by as much as a 20% improvement in power given a +/- 10% simulation of abnormal performance. Although the curves are still unevenly distributed, the empirical p value approach was the more powerful method of detecting negative abnormal performance, and the bootstrapped t statistic was the more powerful technique for identifying abnormally positive performance. When I compared these to the symmetric power curves obtained using the matched-firm approach, the complicated adjustments seemed unnecessary. Using the matched-firm approach to benchmarking seems to be the correct choice.

Matched-Firm vs. Matched-Portfolio Approach

When discussing the event study literature, I believe it is imperative to point out the magnitude of the differences between the matched-firm and matched-portfolio approaches to benchmarking. In an important study analyzing event study methodologies, Brown and Warner (1985) make a convincing argument against conducting event studies using daily returns; again, the researchers focus on how the defined benchmarks are biased, and other researchers build their cases against conducting event studies using daily returns around the conclusions reached in Brown and Warner (1985). However, taking the Brown and Warner (1985) analysis and attempting to project it onto present-day analyses would not be relevant, because the researchers were concerned solely with analyzing the ability of portfolio-based metrics to identify abnormal performance. Brown and Warner (1985) found similar problems with the portfolio approach as Barber and Lyon (1997); these findings seem to encourage additional skepticism toward the results of any long-term event study, regardless of the method used to aggregate the returns. Researchers should approach with skepticism any event study using basic portfolio-matching strategies. However, this skepticism should not transcend the portfolio population and infect the matched-firm approaches without evidence of problems resulting from the use of the matched-firm benchmarking technique. Researchers should first choose the metric and then evaluate it.

In the hypothesis tests, I found that all of the matched-firm approaches were well specified and seemed to be reasonably powerful. In these analyses, I was more concerned with how well specified the given metric was, minimizing false positives, than with concluding that no abnormal performance had occurred when in actuality it had. Portfolio-matching techniques generated poorly specified test statistics in all situations and across all samples. The reason that the portfolio-matching techniques perform poorly is that the portfolio construction approach minimizes idiosyncratic risk and the matched-firm approach does not. When researchers calculate the BHAR using an event firm and the portfolio approach to benchmarking, they categorize the idiosyncratic risk associated with the event firm as abnormal performance. Contrarily, when the matched-firm approach is used, there is no attempt, by construction, to minimize either the matched firm's or the event firm's idiosyncratic risk; researchers allow both to perform abnormally, and there is no normalizing agent. Portfolio methods are, theoretically, poor predictors of abnormal performance.

Matched Firm—Specification and Power

Many researchers have attempted to discredit the results of analyses that illustrate that IPOs generate abnormal performance. Researchers pursuing this line of inquiry generally discredit some element of the statistical methodology used to measure the abnormal performance. There are many approaches researchers can use to attack methods constructed to detect abnormal performance; these researchers can discredit (a) the method of calculating abnormal returns (CAR v. BHAR), (b) the statistical method used to identify abnormal performance (specification and power analyses), or (c) the benchmark's construction (portfolio- v. matched-firm, or the quip that the abnormality is a size/industry phenomenon). After constructing arguments against abnormal performance based upon the aforementioned concepts, the argument then seems to regress to problems that have plagued evaluations of securities' performances for decades, or since their inception: the distributions of returns or BHARs seem to be skewed, peaked, and fat-tailed. These are problems that plague every analysis of securities' performance, and if someone were to generate a solution to this problem, the solution would already have been integrated into statistical tests of securities' performance (whether researchers use a different distribution when conducting analyses of securities' performance, Gaussian v. non-Gaussian, or something totally original).

The point I am attempting to raise is that no models are accurate reflections of reality; they are normative models of finance used in research projects. Even if the same problems that plague the majority of models, economic and statistical assumptions, also plague the data examined in this analysis, the procedure of detecting abnormal performance using the matched-firm approach has produced well-specified results throughout this and other specification analyses. I would have liked to generate better results in the power analyses; however, given the shape and structure of the power curves across each of the matched-firm approaches, I am satisfied with the outcomes of this analysis. The matched-firm approach to benchmarking normal performance seems to work very well; determining which firm characteristic to base the matched-firm approach upon is a more difficult problem to solve.

Short-term IPO Performance

The important question I attempted to answer in this section of the analysis was whether short-term abnormal IPO performance is constrained to the period prior to public trade. Researchers have previously shown that the abnormal performance is significantly positive in the short term (between the offer and the close of trade on the first trading day); however, some have questioned whether the abnormal performance occurs prior to or after the IPO begins public trading. The outcome of this analysis surprised me.

Historically, researchers seem to have categorized the first-day returns as

occurring from the offer to the close of trade on the first day of trade. Recently, Cheng,

Cheung, and Po (2001) found that in a study of the Hong Kong IPO market, abnormal

returns occurred only in the period prior to public trade. This conclusion led me to

question whether conclusions reached in previous analyses of the IPO market in the U.S.

were accurate. If abnormal performance is constrained to the period prior to public trade,

then questions about market efficiency are not valid because this concept of market

efficiency does not apply to interactions between the underwriting syndicate and the

prospective initial purchasers of IPO shares. Generalization of the conclusion reached in

Cheng et al. (2001) to IPO performances in the U.S. would have been a critical finding.

However, the results from my analysis illustrated the opposite: abnormal performances occurred in both periods--prior to public trade and on the first day of public
trading. The results were both positive and statistically significant: (a) 11.74% abnormal

return for the period prior to trading, and (b) 3.31% abnormal return for the initial day of

public trading.

Looking at the trends of the performances of these two abnormalities, it seems that the risk-adjusted anomaly has been increasingly pronounced in the post-bubble period, or at least after 1998 (see Appendix J), and that there is a correlation between the volume of IPOs issued in a month and their initial performances on the first day of trading. Even though the regression run on the first-day returns was significant, I believe that the statements in this paragraph are subjective; therefore, researchers need to run further tests (expanding the sample) before producing reliable results. However, when looking at the preissuance period, I found that the relationship between risk and return seems to be suspect in the post-bubble period. Reviewing the heightened values of the test statistics, it seems as though companies offered investors a greater amount of return on IPOs for less risk than in the period during and prior to the bubble.

The other interesting finding is that, for the first day of trading, there is a relationship between the number of IPOs issued in any given month and the performance of IPOs issued in that month. By running a regression of the monthly initial-day IPO performance against the number of IPOs issued in that month, I produced the following statistics: (a) R2 = 25.5%, (b) n = 107, (c) alpha = 6.67, with p value = .019, (d) beta = 121, with p value = .000, (e) F critical = 34.25, and (f) the significance of F = 6.15E-8.
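A regression of this form could be estimated as in the following sketch; the data frame layout and column names are hypothetical, and this is illustrative rather than the estimation code used here.

import pandas as pd
import statsmodels.api as sm

def issuance_volume_regression(monthly: pd.DataFrame):
    # monthly: one row per month with columns "avg_first_day_return" and "ipo_count"
    X = sm.add_constant(monthly["ipo_count"])
    model = sm.OLS(monthly["avg_first_day_return"], X).fit()
    return model.params, model.rsquared, model.f_pvalue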

These two findings fit theories attempting to explain current trends in IPO performance. For example, if markets produce greater returns in times of greater issuance, the relationship between performance and speculation seems to be important (more is good), not one based upon fundamental value. Likewise, the deterioration of the relationship between risk and return in the period following the 2000 bubble seems to imply that investors did not understand the magnitude of the risks they were taking in order to obtain the returns they were obtaining. It seems as though one of the two parties, the company (underwriting syndicate) or the investor, was being either overly generous or overly greedy. It will be interesting to see how, or if, this risk/return relationship changes in 2008 as liquidity dries up substantially.

Long-term IPO Performance

The academic community has reviewed analyses of long-term IPO performance critically. However, the skeptics have not been unreasonably critical; methodological problems, small sample sizes, and a lack of integrated analyses of the various components of the long-term performance quandary have plagued this line of research. Results presented in the specification and power analyses illustrate how effective the matched-firm approach to detecting long-term abnormal performance is. The real question, when considering the longer-term analysis of IPOs using this matched-firm technique, was whether the method would be powerful enough to identify abnormal performance.

The results and the trends seem to conform with traditional theories pertaining to longer-term IPO underperformance. Brav, Geczy, and Gompers (2000) focus on a five-year analysis of IPO performance and seem to conclude that IPOs really do not produce significant long-term underperformance. However, I found, using the matched-firm approach by industry, a significant three-year IPO underperformance of -22.41%. This finding is similar to the result found in Ritter (1989) of -27.39% at the end of the first three years of seasoning. The conjecture remains that IPOs tend to produce abnormally negative performances in the first three years of seasoning, and somewhere between years 4 and 5 these negative performances seem to normalize.

Event-related IPO Performance

In this section, I have discussed the results reached in the analyses of the expiration of the quiet and lockup periods. The results of both of these analyses add credibility to comparable papers and stand in contrast with analyses that produce contrary results. I will begin by discussing the results presented in the analysis of the quiet period and follow this with the results found for the lockup period.

Analyzing the performance around the expiration of the quiet period, I found a significantly positive abnormal performance of 1.64%, using a sample of IPOs that went public from 1985 through 2002. Bradley, Jordan, and Ritter (2003) found that from 1996 to 2000, IPOs produced a significant cumulative five-day average return of 3.1%. However, Bradley, Jordan, Ritter, and Wolf (2004) followed up this analysis by taking a sample of 96 IPOs issued in 2001 through 2002 and found that IPOs no longer generated significant abnormal performance at the expiration of the quiet period. The results of both of these analyses make sense: in the 1996-2000 period I obtained an average BHAR of 3.29%, and in the 2001-2002 period the average return around the expiration of the quiet period was -.71%. From 1985 to 2002, on average, IPOs produced significant and substantial abnormal performances; however, these abnormalities are not as large as previously thought.

After conducting an analysis of the lockup expiration, I found that IPOs produce a significant abnormally negative performance of -1.42% when compared to firms matched by industry. Using a sample from 1988 to 1997, Field and Hanka (2001) found similar results, a significant abnormal performance of negative 1.5%; however, Garfinkle, Malkiel, and Bontas (2002), using a sample of IPOs issuing shares from July 1997 to December 1999, identified a significantly negative 4.47% abnormal performance. Garfinkle et al. (2002) compared the returns generated on an investment in the NASDAQ market index against the event firms, and Field and Hanka (2001) used the CAR method of abnormal return aggregation, benchmarking with the CRSP Value-Weighted Market Index. After reviewing Appendix L, the yearly abnormal performance data surrounding the lockup expiration, I am not sure how to account for the vast differences between the results presented in the Garfinkle et al. (2002) analysis and the results obtained in this sample and those from Field and Hanka (2001).

Substantial discrepancies exist between what I have found and analyses conducted by more seasoned researchers in the field; however, my conclusions are as follows. Even though IPOs do generate significant abnormal performances in the five-day period surrounding the expiration of the quiet and lockup periods, the magnitude of these abnormal performances is minuscule, 1.64% and -1.42%. However, the investor only exposes his or her capital to the investment for the five-day period, and if investors repeat this investment strategy 61 to 521 times each year, the gains could justify the time and effort it would take to employ investments using these strategies.

Implications for Social Change

In this section, I have integrated the findings related to abnormal IPO

performance and market efficiency with concepts that researchers could use as catalysts

for social change. Some of the connections are obvious, but in some instances, they may

be somewhat opaque. The main headlines of this section are: (a) short-term IPO

performance is a factor that increases income disparity in the U.S., and (b) this research project has uncovered problems with the concept of market efficiency; that is, abnormal IPO performance provides evidence that markets are inefficient.

Short-term IPO Performance and Income Disparity in the United States

In a speech given by the Chairman of the Federal Reserve Board of Governors,

Bernanke (2007) states that although Americans should not be guaranteed equality of

outcomes, we should guarantee equality of opportunity to these outcomes. Chairman

Bernanke concludes that those at the top of the income distribution have been gaining

more wealth in percentage terms than those in the middle and lower portions of the income distribution curve, and that a similar relationship exists when comparing the middle to the lower portions of the distribution. The reader can link this conversation to the short-

term analyses of IPO performance conducted in the preceding sections. To reiterate the

findings, in the pre-public issuance period IPOs generate abnormally positive

performances of 11.74%, and on the initial day of public trade, these shares produced a

significantly positive performance of 3.31%. The underwriting syndicate, whose role in

this process is complicated, discretionally allocates these shares, while one of their major

objectives is to incite interest in the offering.

It makes more sense for the promoters of these offerings to court investors with higher incomes or access to more capital than it would to court the average investor. Even if a mechanism existed that the underwriting syndicate could use to transfer ownership of the company's shares, these underwriters would refrain from using an alternative means to allocate the shares, because a portion of their compensation is their ability to generate and sustain interest in the issue. However, the Dutch auction process of issuance is an efficient means to allocate shares, and this mechanism provides the underwriter and the company with an approximate fair value for the company at the time of offering without offering investors discounts on the price of the issue. Nevertheless, this method of issuance has not become the standard; the reasons for this are complex.

First, consider the reason behind the transfer of a portion of ownership in the corporation; there could be different reasons for this transfer. Is the issuing company attempting to extend the life of the organization, does the organization want to transfer its risks, or is it some combination of the two objectives? If the owners are interested in the continuation of the enterprise and have confidence in the future prospects of the organization, then they will most likely hold onto a significant proportional share of ownership in the company. Alternatively, if they are interested in a transfer, they will attempt to liquidate their stake and obtain the maximum payoff for their organization; this process may include the owners' attempts to deceive the shareholders. Owners who are interested in transferring some ownership, but who also believe that they can obtain longer-term success, could sacrifice the initial 15% of abnormal performance to promote goodwill and then conduct an SEO once their longer-term initiatives pay off to finish the transfer.

Returning to Chairman Bernanke’s commentary, economic opportunity should be equivalent across classes, even though the actual economic profits obtained by individuals need not be. It is unjust to allow a privileged class of investors unfettered access to profit opportunities while restraining other economic agents from participating in the process. Again, the Dutch auction process seems to be a good solution to this problem. Investors also need to ask the following question: What is the reason for the equity issuance, succession planning or risk transfer?

Chinks in the Armor of Market Efficiency

Empirical research projects focused on detecting abnormal IPO performance attack the EMH somewhere between the weak and semi-strong forms of the hypothesis. Performances obtained prior to public trading have nothing to do with discrediting market efficiency, but if researchers consider that companies issue these new shares at or above their intrinsic value relative to their peers, they may question the efficiency of the issuance process. In a sample of more than 2,000 IPOs issued from 1980 to 1997, Purnanandam and Swaminathan (2004) found that IPOs at their offer prices are typically overpriced by 15% to 50% relative to their non-IPO counterparts, depending on the matching criteria.

If the issues were, on average, initially overpriced, then why would these same overpriced issues increase in value by more than 11.74% prior to public trading? And why, on the initial day of trade, do these shares obtain another increase of roughly 3%? These findings do not fit well together and do not support the EMH. Purnanandam and Swaminathan (2004) presented further evidence that helps to build a case against market efficiency: they found that when the initial overvaluation of IPOs is most excessive, the ensuing initial performance is greater than when IPOs are either appropriately valued or undervalued.

Should an efficient market recognize when a systematic misvaluation occurs and correct this inefficiency rapidly? In longer-term analyses of IPO performance, after the initial outperformance, IPOs underperform their benchmarks by a substantial amount until either the fourth or the fifth year. Should it take four to five years for the market to adjust for an initial misvaluation of shares? The expirations of the quiet and lockup periods likewise produce significant abnormal performances; these performances range from 1% to 2%, just minor chinks in the armor of the concept of market efficiency. Abnormal performances and misvaluations of substantial proportions found in short- and long-term analyses of IPO performance, however, may be considered more than a chink in the armor; they represent a significant deterioration in the quality and effectiveness of the concept of market efficiency.



Integration and Summary

The two arguments developed in the preceding section are very important

elements of economic justice and understanding. First, as we allow the markets to

operate, we allow an unequal distribution of economic outcomes throughout the

population; however, we have to ensure that every citizen has an equal opportunity to

participate in economic activities. There is no concept of equality of opportunity

embedded in the process of issuing unseasoned shares of equity to the public.

In the global economy, investors are now seekers of information; the underwriting syndicate no longer needs to sell investors on a concept or idea, because investors can evaluate the information contained in the offering prospectus and determine whether the issue should be included in their investment strategy. The underwriting syndicate can still play a role in advising corporations on how to structure their organizations and their issuance processes, and in providing an initial valuation of the entity, but is its interference with market forces still relevant given the advances in communication methods present in today’s financial systems? Market forces will generally make a more accurate assessment of a firm’s prospects than a group of privileged investors or, for that matter, the underwriting syndicate. Freeing up the shares through an auction process may stifle the irrationally positive response to an issue that is already overvalued by 10% to 30%. The positive 11.74% abnormal performance between the offer and the initial trade, followed by a 3.34% abnormal performance obtained on the first day of trade, seems to be an irrational response to newly issued shares.



I will now explain why the actions taken by the underwriting syndicate hinder market efficiency: (a) the syndicate selectively allocates shares, (b) it determines a price without reference to market forces, and (c) the ensuing positive abnormal performances resemble the effects of fixing a price ceiling on an asset or commodity. If the underwriters impose a ceiling on new issues, the issuer obtains a payment that is less than it could have obtained without the ceiling, the quantity demanded is greater than the quantity supplied, and a nonmarket entity (the underwriting syndicate) rations the shares. As the shares hit the market, those kept out of the bidding process in the premarket force the price of the security to its market-clearing price. If companies allowed the markets to operate in the preoffer period, the process would operate in a more efficient manner.
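The analogy can be made concrete with a deliberately simple numerical example (the linear demand and supply curves below are arbitrary assumptions, not estimates from my data): holding the offer price below the market-clearing level leaves quantity demanded above quantity supplied, and the price jumps toward the clearing level once public trading begins.

def demand(price):
    # hypothetical linear demand curve: shares investors want at a given price
    return 1_150_000 - 25_000 * price

def supply(price):
    # hypothetical linear supply curve: shares the issuer is willing to sell
    return 25_000 * price

offer_price = 20                       # syndicate-set price, acting like a ceiling
clearing_price = 1_150_000 / 50_000    # price at which demand(p) == supply(p); here 23

excess = demand(offer_price) - supply(offer_price)
print(f"excess demand at the offer price: {excess:,} shares")                             # 150,000
print(f"implied jump toward the clearing price: {clearing_price / offer_price - 1:.0%}")  # 15%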

The market efficiency questions are easier to address: researchers either believe the magnitude of an anomaly is significant enough to question market efficiency, or they require more proof. Different researchers may interpret the magnitudes of the anomalies found in this analysis differently. Abnormal performances occurring around the expirations of the quiet and lockup periods are statistically significant, but 1% to 2% is not substantial. However, a significant 10% abnormal performance for an IPO that was already overpriced, followed by a 4- to 5-year adjustment period in which performance trends back towards normality, does not fit within the concept of market efficiency. Again, a problem that affects both investors and the firm seems to result from the underwriting syndicate’s interference in the price discovery process, an interference that is unnecessary given how rapidly information is disseminated among all parties.



Revision of the IPO premarket pricing and allocation process could eliminate the abnormal IPO performances enjoyed by a few investors and provide equal opportunities for all investors to benefit from the IPO process. This could lead to significant positive social change in the distribution of wealth and income in society.

Validity and Reliability of the Research Results

In this section, I have presented my views on the validity and reliability of the

results obtained in this research study. In the following paragraphs, I have addressed the

sample sizes used in each analysis of abnormal performance, the power and specification

results, the reliability of this analysis, and the appropriateness of using the event study

methodology to conduct this analysis. I will first address validity and then turn to reliability.

When researchers attempt to determine whether the results of this analysis have external validity, they should review the findings and conclusions reached in the specification and power analyses and the sample size of each of the analyses, and they should evaluate whether the event study methodology was an appropriate research methodology for identifying abnormal performance. All of the matched-firm approaches to benchmarking produced well-specified test statistics and were reasonably powerful; however, the portfolio-matching strategies produced misspecified test statistics and poor power results. These specification and power analyses were tested on companies that fall within the Russell 3000 Index; Russell Investments claims that this index currently represents “approximately 98% of the investable U.S. equity markets” (Russell Investments, 2008), a large portion of the U.S. financial markets. In addition, I conducted these analyses over a substantial time horizon of 22 years, using a non-repeating sample of 10 observations of 50 companies per year and a non-repeating sample of 1 observation of 500 companies per year. Thus, given the simulated specification and power results and the sample sizes, the results obtained from the matched-firm approaches have external validity.
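As a rough illustration of the simulation logic behind such specification and power results, the sketch below (a simplification that uses synthetic, normally distributed benchmark-adjusted returns rather than the actual security data; the volatility, trial count, and significance level are my assumptions) estimates how often a simple t-test rejects the null hypothesis after a known level of abnormal performance is injected into random samples of 50 and of 500 firms. A rejection rate near 5% at zero injected performance indicates a well-specified test, and the rate at which rejections rise with the injected level traces out the power curve.

import numpy as np
from scipy import stats

def rejection_rate(injected_abnormal, n_firms, n_trials=1_000, noise_sd=0.40, alpha=0.05):
    # Fraction of simulated samples in which a t-test rejects H0: mean abnormal return = 0.
    # At zero injected performance this estimates the empirical size of the test;
    # at nonzero levels it estimates the test's power.
    rng = np.random.default_rng(0)
    rejections = 0
    for _ in range(n_trials):
        sample = rng.normal(injected_abnormal, noise_sd, n_firms)
        _, p_value = stats.ttest_1samp(sample, 0.0)
        rejections += p_value < alpha
    return rejections / n_trials

for level in (0.0, 0.10, 0.20):
    print(level, rejection_rate(level, n_firms=50), rejection_rate(level, n_firms=500))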

In my power and specification analyses using the matched-firm approach, samples of 50 companies produced well-specified and reasonably powerful results; when I increased the sample size to 500, the specification results improved and the slopes of the power curves increased substantially. Therefore, to obtain results that can be generalized, researchers should use large sample sizes when applying this methodology. My sample sizes were as follows: (a) pre-trade analysis, 1,876 observations; (b) first-day-of-trade analysis, 2,143 observations; (c) long-term performance analysis, 5,583 observations; and (d) quiet and lockup expiration analyses, 5,529 observations. The list of IPOs used for the quiet and lockup expiration analysis contained 6,525 IPOs identified over the entire time horizon analyzed (1985-2002); therefore, 15% of the original list failed to meet the requirements of the quiet and lockup period analysis, and 14% failed to meet the requirements of the long-term analysis. The time horizon for my analysis of performance during the first day of trading ran from January 1997 to December 2005; during this period, I identified 2,289 IPOs from the list for which all of the required data could be obtained, and 6% of these IPOs failed to meet the requirements for inclusion in the analysis. For the preissuance trading performance, I relied on the Hoover’s IPO and Edgar IPO databases; combined, these databases produced a sample of 1,876 IPOs that had both offer and initial trade data and went public from April 1996 to January 2008, and 100% of these IPOs were included in the analysis. The samples used to test for abnormal performance were well in excess of the sample size of 500 observations needed to produce a well-specified and powerful test of abnormal performance in the simulation analysis.

To complete the discussion of validity, I conclude by explaining why I chose the event study methodology for this analysis. In chapter 3, I devoted a few pages to explaining why competing research methodologies would have been unsuitable substitutes for the event study methodology when researchers attempt to identify abnormal performance. Experimental designs, case studies, correlational analyses, surveys, and other research methodologies are unsuitable for this purpose for the numerous reasons explained in chapter 3, pages 46-49. I could have used a causal-comparative approach in this analysis; however, researchers can better apply that method to research in educational and social settings, whereas researchers have consistently applied the event study methodology to examine whether information flows or events cause abnormal performances. My choice of research methodology for this analysis was the event study methodology, and I think researchers would have a difficult time demonstrating that another methodology is better suited to analyzing abnormal securities’ price performance.

Leedy and Ormrod (2005) define reliability as “the consistency with which a measuring instrument yields a certain result when the entity being measured has not changed” (p. 29). If I were to recalculate the results of this analysis using the same matched-firm and portfolio approaches on the same event firms, I would obtain the same results; because the calculations in this event study are deterministic, reliability in this sense is not a concern.

Recommendations for Action

I have one take-home message or recommendation for action, and that is to let the market control the process of unseasoned equity issuance. Let the market set the initial value for the offering, let it stabilize the price of the security itself, and let it decide where to redeploy the valuable intellectual capital that is currently used to interfere in the IPO market. The rationale for maintaining the current system is that the market needs some force to generate demand for an issue and some force to stabilize the price of a new issue after its issuance. I will explain here why this change is unlikely to occur and who needs to demand it.

Allowing market forces to determine prices would initially affect issuers and investment bankers in a negative manner. Underwriters initially overestimate the value of the new issue compared to similar companies that are not IPOs, and they constrain supply, creating excess demand that forces up prices in the secondary market; why would issuers want this to change? In this process, the underwriters conduct road shows to drum up demand, prepare the prospectuses and other complicated documentation, determine the value of the offering, and provide stability in the aftermarket, and I am sure the underwriting syndicate performs additional tasks that are unknown to me. However, it seems that these market professionals are so sophisticated that they have found a way to interfere legally in the markets in order to obtain a proportion of the value of the offering. So are there any incentives for the issuers and the underwriters to change the system? If such incentives do exist, they are negligible in comparison to the incentive to accept the status quo.

Because the issuers and the underwriters have no serious incentive to change, some other governing force needs to promote change in the financial markets. Legislative solutions are very seldom the best way to provoke change, especially in a market system. What, then, would the solution be? I believe that the free market would have to determine that the process of share issuance is inefficient and that changes are necessary. This implies that the average market participant needs exposure to information about these abnormalities, and the wider the dispersion of that information, the better; the most efficient way to change the system seems to be a collective whisper from the general populace, call it a nudge from the average citizen. In my opinion, this process is generally the most effective method of accomplishing almost anything, although it is probably also the most difficult.

Recommendations for Further Study

There are many potential recommendations for future study that could naturally flow from the results, or the interpretation of the results, of this analysis; however, I would like to focus on the most important ones. Prior to this analysis, it seemed to me that researchers questioned the results of studies conducted on IPO performance because of (a) the methodological approaches used to evaluate the anomalies, (b) the fragmentation of the analyses, or (c) inadequate sample sizes. In this analysis, I conducted specification and power analyses that generated acceptable results, used the best-specified and most powerful method to conduct the analyses, lengthened the time horizons of studies that researchers previously executed with small samples, and integrated the analyses. Therefore, in my opinion, these anomalies are real.

The first recommendation for further analysis is that researchers should conduct more research on the matched-firm approach to benchmarking using the BHAR method. Many researchers avoid this method of analysis when conducting event studies and prefer to make complicated adjustments to portfolio-matching strategies, even when conducting studies in event time. It would be interesting to see whether researchers could use different firm characteristics to match control firms with no pre-event abnormal performance to event firms, maintaining the specification results derived from this analysis while increasing the power of these methods of analysis. Increasing the power of the BHAR approach to evaluating abnormal performance, matched by some firm-specific characteristic, is the next critical step for researchers conducting analyses of the methodological approaches to event studies.
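To make the underlying calculation explicit, the buy-and-hold abnormal return for an event firm is the difference between its compounded holding-period return and that of its matched control firm over the same horizon; the sketch below (my own illustration using hypothetical monthly returns) shows the computation.

import numpy as np

def bhar(event_returns, matched_returns):
    # Buy-and-hold abnormal return: compound each return series over the event
    # horizon and take the difference (event firm minus matched control firm).
    event_bh = np.prod(1 + np.asarray(event_returns)) - 1
    matched_bh = np.prod(1 + np.asarray(matched_returns)) - 1
    return event_bh - matched_bh

# Hypothetical monthly returns over a 12-month event horizon.
ipo_firm   = [0.05, -0.02, 0.01, 0.03, -0.04, 0.02, 0.00, 0.01, -0.03, 0.02, 0.01, -0.01]
match_firm = [0.02,  0.01, 0.00, 0.01, -0.01, 0.01, 0.01, 0.00,  0.00, 0.01, 0.00,  0.01]
print(f"BHAR over the horizon: {bhar(ipo_firm, match_firm):.2%}")

The matching criterion (size, industry, or book-to-market) determines only which control firm enters this calculation; the aggregation itself is unchanged, which is what makes the approach natural to extend with new matching characteristics.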

The second recommendation is to determine whether investors initially overprice IPOs relative to their peers. The results presented in Purnanandam and Swaminathan (2004) are astounding when considered in tandem with analyses of the actual performances of IPOs subsequent to issuance. If IPOs are initially overpriced by 10% to 30%, followed by an average of 11% positive performance in the pre-offering period and then another 3.5% positive performance on the first day of trading, it is hard to suggest that markets are efficiently valuing these shares. Researchers should attempt to use different metrics to evaluate whether these shares are actually overpriced when issued.
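One way to operationalize such a metric, in the spirit of Purnanandam and Swaminathan’s (2004) matching approach, is to compare a valuation multiple computed at the offer price with the same multiple for an already-listed matched peer; the sketch below uses a price-to-sales comparison with entirely hypothetical inputs.

# Hypothetical price-to-value comparison: a ratio above 1 suggests the IPO is
# priced above its matched, already-listed peer on this particular multiple.
def price_to_value(offer_price, shares_outstanding, sales, peer_market_cap, peer_sales):
    ipo_multiple = (offer_price * shares_outstanding) / sales   # IPO price-to-sales at the offer
    peer_multiple = peer_market_cap / peer_sales                # matched non-IPO peer's multiple
    return ipo_multiple / peer_multiple

ratio = price_to_value(offer_price=15, shares_outstanding=10_000_000, sales=60_000_000,
                       peer_market_cap=180_000_000, peer_sales=90_000_000)
print(f"price-to-value ratio: {ratio:.2f}")  # 1.25 -> priced 25% above the matched peer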

Researchers attempting to prove or disprove whether markets operate in an efficient manner have generated a substantial base of evidence. The efficient market proposition is a flawed model of reality, and its flaws have been identified in numerous analyses. However, the skeptics of market efficiency really need to present a meaningful alternative. Of course, any new way of thinking about market interactions will be viewed with a substantial amount of skepticism and will have to overcome substantial hurdles to dethrone the concept of market efficiency, but the process of scientific exploration is full of intriguing challenges; embrace them.

Conclusion

In this dissertation, I have (a) presented a well-specified and powerful method of identifying abnormal performance when conducting event studies, (b) shown that short-term abnormal IPO performance is positive, (c) shown that events occurring throughout the IPO process instigate abnormal performances, and (d) provided a description of IPO performance over the initial three years of seasoning. The results presented in this dissertation seem to be at odds with the concept of market efficiency. The analyses related to event-specific performances, the expirations of the quiet and lockup periods, generated significant but not substantial abnormal performance. However, initial abnormal performance of 11% in the preoffer period and 3% on the initial day of trade, together with long-term underperformance of IPOs in excess of 30%, combined with a study that concluded that IPOs are overpriced in the preoffering period, casts serious doubt on the concept of market efficiency. Of course, further research needs to be conducted on some of the conclusions reached in this analysis, and an alternative hypothesis needs to be offered to supplant the concept of market efficiency, but given the results of the preceding analysis, I have significant doubts pertaining to the concept of market efficiency.
REFERENCES

Affleck-Graves, J., Hegde, S., & Miller, R. E. (1996). Conditional price trends in the
aftermarket for initial public offerings. Financial Management, 25(4), 25-40.

Alexander, S. S. (1961). Price movements in speculative markets: Trends or random walks. Industrial Management Review, 2, 7-26.

Ang, J. S., & Zhang, S. (2004). An evaluation of testing procedures for long horizon event studies. Review of Quantitative Finance and Accounting, 23, 251-274.

Bachelier, L. (1900/1964). Theory of Speculation. In Cootner, P. H. (Ed.), The Random Character of Stock Market Prices (pp. 17-78). Cambridge, MA: MIT Press.

Ball, R., & Brown, P. (1968). An empirical evaluation of accounting income numbers.
Journal of Accounting Research, 159-178.

Barber, B. M., & Lyon, J. D. (1997). Detecting long-run abnormal stock returns: The
empirical power and specification of test statistics. Journal of Financial
Economics, 43, 341-372.

Barron, D. P. (1982). A model of the demand for investment banking advising and
distribution service for new issues. The Journal of Finance, 37(4), 955-976.

Beatty, R. P., & Ritter, J. R. (1986). Investment banking, reputation, and the underpricing
of initial public offerings. Journal of Financial Economics, 15, 213-232.

Bernanke, B. S. (2007). The Level and Distribution of Economic Well-Being. Speech presented on February 6, 2007, to the Greater Omaha Chamber of Commerce, Omaha, Nebraska. URL: http://www.federalreserve.gov/newsevents/speech/bernanke20070206a.htm

Bernstein, P. L. (1998). Against the gods: The remarkable story of risk. John Wiley &
Sons, Inc.: New York, NY.

Bhabra, H. S., & Pettway, R. H. (2003). IPO prospectus information and subsequent
performance. The Financial Review, 38, 369-397.

Booth, J. R., & Smith, R. (1986). Capital raising, underwriting and the certification
hypothesis. Journal of Financial Economics, 15, 261-281.

Bradley, D. J., Jordan, B. D., & Ritter, J R. (2003). The quiet period goes out with a bang.
The Journal of Finance, 58(1), 1-36.

Bradley, D., Jordan, B., Ritter, J., & Wolf, J. (2004). The quiet period revisited. Journal
of Investment Management, 2(3), 1-11.

Brav, A., Geczy, C., & Gompers, P. A. (2000). Is the abnormal return following equity
issuances anomalous? Journal of Financial Economics, 56, 209-249.

Brown, S. J., & Warner, J. B. (1980). Measuring security price performance. Journal of
Financial Economics, 8, p. 205-258.

Brown, S. J., & Warner, J. B. (1985). Using daily stock returns: The case of event
studies. Journal of Financial Economics, 14(1), 3-31.

Brown, S. J., & Weinstein, M. (1985). Derived factors in event studies. Journal of
Financial Economics, 14(3), 491-5.

Campbell, J. Y., Lo, A. W., & MacKinlay, A. C. (1997). The econometrics of financial
markets. Princeton, NJ: Princeton University Press.

Carter, R. B., Dark, F. H., & Singh, A. K. (1998). Underwriter reputation, initial returns,
and the long-run performance of IPO stocks. The Journal of Finance, 53(1), 285-
311.

Cheng, W., Cheung, Y. L., & Po, K. (2004). A note on the intraday patterns of initial
public offerings: Evidence from Hong Kong. Journal of Business Finance &
Accounting, 31 (5-6), 837-860.

Cowles, A. (1933). Can stock market forecasters forecast? Econometrica, 1(3), 309-324.

Cowles, A., & Jones, H. E. (1937). Some A Posteriori probabilities in stock market
action. Econometrica, 5(3), 280-294.

Dalton, D. R., Certo, S. T., & Daily, C. M. (2003). Initial public offerings as a web of
conflicts of interests: An empirical assessment. Business Ethics Quarterly, 13(3),
289-314.

De Bondt, W. F. M., & Thaler, R. (1985). Does the stock market overreact? The Journal
of Finance, 40(3), 793-805.

Ellis, K., Michaely, R., & O’Hara, M. (1999). A guide to the initial public offering
process. Corporate Finance Review, 3, 14-18.

Fama, E. F. (1970). Efficient capital markets: A review of theory and empirical work.
The Journal of Finance, 25(2), 383-417.

Fama, E. F. (1976). Foundations of Finance: Portfolio Decisions and Securities Prices. New York, NY: Basic Books, Inc.

Fama, E. F. (1991). Efficient capital markets: II. The Journal of Finance, 46(5), 1575-
1617.

Fama, E. F. (1998). Market efficiency, long-term returns, and behavioral finance. Journal of Financial Economics, 49, 285-306.

Fama, E. F., Fisher, L., Jensen, M. C., & Roll, R. (1969). The adjustment of stock prices to new information. International Economic Review, 10(1), 1-21.

Fama, E. F., & French, K. R. (1992). The cross-section of expected stock returns. The Journal of Finance, 47(2), 427-465.

Fama, E., & French K. (1993). Common risk factors in the returns on stocks and bonds.
Journal of Financial Economics. 33, 3-56.

Fama, E. F., & French K. R. (1996). The CAPM is wanted, dead or alive. The Journal of
Finance, 51(5), 1947-1958.

Field, L. C., & Hanka, G. (2001). The expiration of IPO share lockups. The Journal of
Finance, 56(2), p. 471-500.

French, C. W. (2003). The Treynor capital asset pricing model. Journal of Investment
Management, 1(2), 60-72.

Garfinkle, N., Malkiel, B. G., & Bontas, C. (2002). Effect of underpricing and lock-up
provisions in IPOs: Implications of a classic case of supply and demand. The
Journal of Portfolio Management, 28(3), 50-58.

Gompers, P. A., & Lerner, J. (2003). The really long-run performance of Initial Public
Offerings: The pre-NASDAQ evidence. The Journal of Finance, 57(4), 1355-
1392.

Helwege, J., & Liang, N. (2004). Initial public offerings in hot and cold markets. Journal
of Financial and Quantitative Analysis, 39(3), 541-569.

Henderson, G. V. (1990). Problems and solutions in conducting event studies. Journal of Risk and Insurance, 57(2), 282-306.

Ibbotson, R. G. (1975). Price performance of common stock new issues. Journal of Financial Economics, 2, 235-272.

Ibbotson, R. G., & Jaffe, J. F. (1975). “Hot issue” markets. The Journal of Finance,
30(4), 1027-1042.

Ibbotson, R. G., Sindelar, J. L., & Ritter, J. R. (1994). The market’s problems with the pricing of initial public offerings. Journal of Applied Corporate Finance, 7(1), 66-74.

Jegadeesh, N. (2000). Long-term performance of seasoned equity offerings: Benchmark errors and biases in expectations. Financial Management, 29(3), 5-30.

Jegadeesh, N., & Titman, S. (1993). Returns to buying winners and selling losers:
Implications for stock market efficiency. The Journal of Finance, 48(1), 65-91.

Jegadeesh, N., Weinstein, M., & Welch, I. (1993). An empirical investigation of IPO returns and subsequent equity offerings. Journal of Financial Economics, 34, 153-175.

Kothari, S. P., Shanken, J., & Sloan, R. G. (1995). Another look at the cross-section of expected stock returns. The Journal of Finance, 50, 185-224.

Kothari, S. P., & Warner, J. B. (2006). Econometrics of event studies. Working paper, Tuck School of Business at Dartmouth.

Krigman, L., Shaw, W. H., & Womack, K. L. (1999). The persistence of IPO mispricing and the predictive power of flipping. The Journal of Finance, 54(3), 1015-1044.

Leedy, P., & Ormrod, J. (2005). Practical research: Planning and design (8th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.

Lintner, J. (1965). The valuation of risk assets and the selection of risky investments in
stock portfolio and capital budgets. Review of Economics & Statistics, 47(1), 13-
37.

Loughran, T., & Ritter, J. R. (1995). The new issues puzzle. The Journal of Finance,
50(1), 23-51.

Loughran, T., & Ritter, J. R. (2004). Why has IPO underpricing changed over time?
Financial Management, 33(3), 5-37.

Lowry, M., & Schwert, G. W. (2002). IPO market cycles: Bubbles or sequential learning?
The Journal of Finance, 57(3), 1171-1200.

Lyon, J. D., Barber, B. M., & Tsai, C. L. (1999). Improved methods for tests of long-run
abnormal stock returns. The Journal of Finance, 44(1), 165-201.

McDonald, J. G., & Fisher, A. K. (1972). New-issue stock price behavior. The Journal of
Finance, 27(1), 97-102.

Miller, R. E., & Reilly, F. K. (1987). An examination of mispricing, returns, and uncertainty for initial public offerings. Financial Management, 33-38.

Perfect, S. B., & Peterson, D. R. (1997). Day-of-the-week effects in the long-run performance of initial public offerings. The Financial Review, 32(1), 49-70.

Purnanandam, A. K., & Swaminathan, B. (2004). Are IPOs really underpriced? Review of
Financial Studies, 17(3), 811-848.

Reilly, F. K. (1977). New Issues Revisited. Financial Management, 6(4), 28-42.

Reilly, F. K., & Hatfield, K. (1969). Investor experience with new stock issues. Financial
Analysts Journal, 25, 73-80.

Ritter, J. R. (2003). Differences between European and American IPO markets. European
Financial Management, 9(4), 421-434.

Ritter, J. R. (1984). The “hot issue” market of 1980. Journal of Business, 57(2), 215-240.

Ritter, J. R. (1988). Initial public offerings. Contemporary Finance Digest, 2(1), 5-30.

Ritter, J. R. (1991). The long-run performance of initial public offerings. The Journal of
Finance, 45(1), 3-27.

Ritter, J. R., & Welch, I. (2002). A review of IPO activity, pricing, and allocations. The
Journal of Finance, 57(4), 1795-1828.

Rock, K. (1986). Why new issues are underpriced. Journal of Financial Economics, 15, 187-212.

Ross, S. (1977). Risk, Return, and Arbitrage. In Friend, I. (Eds.), Risk and Return in
Finance (pp. 198-218). Cambridge, MA: Ballinger Publishing (1977).

Roll, R. (1977). A critique of the asset pricing theory’s tests. Journal of Financial
Economics, 4, 129-176.

Rumrill, P. (2004). Non-manipulation quantitative designs. Work, 22, 255-260.

Russell Investments (2008). Russell 3000 Index: Fact Sheet. URL: http://www.russell.com/indexes/documents/Fact_Sheets/3000.pdf. Accessed on July 8, 2008.

Samuelson, P. (2000). Honoring founding fathers of modern finance economics. Financial Management, 29(2), 92-93.

Schaeffer, J. (2005). A link between IPO underperformance and long-term performance. University of Chicago, Dissertation, 1-58.

Schultz, P. (2003). Pseudo market timing and the long-run underperformance of IPOs.
The Journal of Finance, 58(2), 483-517.

Sharpe, W. F. (1964). Capital asset prices: A theory of market equilibrium under conditions of risk. The Journal of Finance, 19(3), 425-442.

Sharpe, W. F., Alexander, G. J., & Bailey, J. V. (1995). Investments, Sixth Edition.
Englewood Cliffs, NJ: Prentice Hall.

Simon, C. (1989). The effect of the 1933 securities act on investor information and the
performance of new issues. The American Economic Review, 79(3), 295-318.

Spiess, D. K., & Affleck-Graves, J. (1995). Underperformance in long-run stock returns following seasoned equity offerings. Journal of Financial Economics, 38, 243-267.

Tinic, S. M. (1988). Anatomy of initial public offerings of common stock. The Journal of
Finance, 43(4), 789-822.

Treynor, J. (1962). Toward a theory of market value of risky assets. Unpublished manuscript.

Wiggenhorn, J., & Madura, J. (2005). Impact of liquidity and information on the
mispricing of newly public firms. Journal of Economics and Finance, 29(2), 203-
220.
APPENDIX A

Power Analysis—1-Year Event Horizon—n=50

Power Analysis
Empirical Size - 1 Year Performance - n=50
MC Ind MC & Ind MC & BtoM MC MC & BM Ind
-.75 99% 99% 98% 97% 100% 100% 99%
-0.5 94% 97% 96% 89% 97% 97% 82%
-0.3 75% 82% 82% 72% 81% 84% 61%
-.20 46% 58% 56% 47% 69% 74% 53%
-.15 30% 37% 39% 30% 65% 63% 50%
-.10 16% 22% 20% 17% 54% 56% 46%
-.05 8% 11% 9% 6% 50% 49% 44%
-.01 6% 6% 4% 3% 46% 44% 47%
0 5% 4% 4% 3% 43% 43% 47%
+.01 4% 4% 6% 3% 42% 42% 48%
+.05 8% 5% 8% 8% 40% 33% 53%
+.10 19% 19% 17% 21% 38% 34% 63%
+.15 36% 42% 42% 36% 40% 41% 72%
+.20 51% 60% 57% 59% 51% 51% 79%
+0.3 78% 79% 82% 83% 67% 70% 89%
+0.5 95% 96% 96% 96% 91% 92% 96%
+.75 97% 97% 98% 98% 98% 97% 98%

Note. Power Analysis conducted on 180 samples of 50 randomly selected firms spanning the 18-

year period from 1985 to 2002. The first column of the Table provides the amount of simulated

abnormal performance (MC=Market Capitalization; Ind=Industry; BtoM=Book-to-Market

Ratio). Columns two through five give the results from the matched-firm approaches. Columns six

through eight convey the results of the power test run on the portfolio-matched approaches.

APPENDIX B

Power Analysis—1-Year Event Horizon—n=500

Power Analysis
Empirical Size - 1 Year Performance - n=500
MC Ind MC & Ind MC & BtoM MC MC & BM Ind
-.75 100% 100% 100% 100% 100% 100% 100%
-0.5 100% 100% 100% 100% 100% 100% 92%
-0.3 100% 100% 100% 100% 100% 100% 92%
-.20 92% 100% 100% 100% 92% 92% 92%
-.15 92% 92% 100% 100% 77% 92% 85%
-.10 85% 92% 92% 69% 85% 85% 92%
-.05 23% 54% 54% 23% 100% 92% 92%
-.01 0% 0% 0% 0% 92% 85% 69%
0 0% 0% 0% 0% 77% 77% 69%
+.01 8% 0% 0% 0% 69% 69% 69%
+.05 54% 46% 23% 69% 69% 62% 69%
+.10 92% 92% 100% 100% 92% 85% 92%
+.15 100% 100% 100% 100% 92% 85% 92%
+.20 100% 100% 100% 100% 77% 77% 92%
+0.3 100% 100% 100% 100% 92% 92% 100%
+0.5 100% 100% 100% 100% 100% 100% 100%
+.75 100% 100% 100% 100% 100% 100% 100%

Note. Power Analysis conducted on 18 samples of 500 randomly selected firms spanning the 18-

year period from 1985 to 2002. The first column of the Table provides the amount of simulated

abnormal performance (MC=Market Capitalization; Ind=Industry; BtoM=Book-to-Market

Ratio). Columns two through five give the results from the matched-firm approaches. Columns six

through eight convey the results of the power test run on the portfolio-matched approaches.

APPENDIX C

Power Analyses—2-Year Event Horizon—n= 50

Power Analysis
Empirical Size - 2 Year Performance - n=50
MC Ind MC & Ind MC & BtoM MC MC & BM Ind
-.75 92% 96% 92% 88% 88% 93% 66%
-0.5 77% 78% 79% 74% 67% 71% 48%
-0.3 41% 52% 48% 40% 51% 54% 49%
-.20 18% 28% 28% 21% 45% 44% 50%
-.15 11% 14% 18% 12% 37% 38% 55%
-.10 7% 8% 10% 7% 32% 31% 56%
-.05 5% 4% 6% 4% 31% 29% 56%
-.01 3% 2% 4% 3% 32% 28% 59%
0 3% 2% 3% 3% 32% 28% 59%
+.01 3% 2% 3% 3% 31% 28% 61%
+.05 6% 6% 4% 5% 36% 29% 63%
+.10 11% 11% 9% 10% 37% 31% 66%
+.15 18% 17% 18% 22% 44% 38% 74%
+.20 27% 32% 31% 32% 52% 48% 79%
+0.3 49% 54% 56% 54% 67% 63% 86%
+0.5 76% 82% 83% 79% 86% 79% 91%
+.75 89% 91% 91% 91% 90% 90% 93%

Note. Power Analysis conducted on 18 samples of 50 randomly selected firms spanning the 18-

year period from 1985 to 2002. The first column of the Table provides the amount of simulated

abnormal performance (MC=Market Capitalization; Ind=Industry; BtoM=Book-to-Market

Ratio). Columns two through five give the results from the matched-firm approaches. Columns six

through eight convey the results of the power test run on the portfolio-matched approaches.

APPENDIX D

Power Analyses—2-Year Event Horizon—n= 500

Power Analysis
Empirical Size - 2 Year Performance - n=500
MC Ind MC & Ind MC & BtoM MC MC & BM Ind
-.75 100% 100% 100% 100% 100% 100% 92%
-0.5 100% 100% 100% 100% 85% 92% 92%
-0.3 100% 100% 100% 92% 69% 69% 85%
-.20 92% 92% 100% 77% 77% 77% 85%
-.15 69% 92% 77% 62% 92% 85% 85%
-.10 46% 54% 62% 31% 100% 85% 77%
-.05 0% 15% 8% 0% 92% 85% 85%
-.01 0% 0% 0% 8% 77% 77% 77%
0 0% 0% 0% 8% 69% 69% 85%
+.01 0% 0% 0% 8% 69% 69% 85%
+.05 8% 0% 8% 31% 62% 69% 85%
+.10 46% 69% 54% 77% 62% 69% 100%
+.15 85% 85% 85% 92% 85% 69% 92%
+.20 92% 92% 92% 92% 77% 77% 92%
+0.3 100% 100% 100% 100% 92% 85% 92%
+0.5 100% 100% 100% 100% 100% 100% 100%
+.75 100% 100% 100% 100% 100% 100% 100%

Note. Power Analysis conducted on 18 samples of 500 randomly selected firms spanning the 18-

year period from 1985 to 2002. The first column of the Table provides the amount of simulated

abnormal performance (MC=Market Capitalization; Ind=Industry; BtoM=Book-to-Market

Ratio). Columns two through five give the results from the matched-firm approaches. Columns six

through eight convey the results of the power test run on the portfolio-matched approaches.

APPENDIX E

Power Analysis—3-Year Event Horizon—n=50

Power Analysis
Empirical Size - 3 Year Performance - n=50
MC Ind MC & Ind MC & BtoM MC MC & BM Ind
-.75 84% 84% 84% 78% 72% 74% 47%
-0.5 55% 64% 63% 56% 59% 56% 48%
-0.3 31% 31% 33% 21% 50% 39% 58%
-.20 13% 15% 16% 9% 39% 37% 62%
-.15 9% 9% 10% 5% 32% 32% 62%
-.10 6% 4% 7% 4% 32% 31% 66%
-.05 4% 2% 2% 1% 33% 29% 68%
-.01 3% 2% 3% 1% 34% 29% 68%
0 2% 2% 3% 1% 33% 29% 69%
+.01 2% 2% 3% 2% 34% 31% 69%
+.05 3% 3% 3% 4% 35% 34% 73%
+.10 6% 6% 7% 6% 38% 38% 78%
+.15 12% 14% 9% 8% 45% 41% 82%
+.20 17% 21% 17% 16% 49% 46% 88%
+0.3 29% 35% 33% 31% 64% 54% 92%
+0.5 61% 68% 70% 69% 81% 78% 94%
0.75 82% 83% 87% 83% 89% 87% 97%

Note. Power Analysis conducted on 180 samples of 50 randomly selected firms spanning the 18-

year period from 1985 to 2002. The first column of the Table provides the amount of simulated

abnormal performance (MC=Market Capitalization; Ind=Industry; BtoM=Book-to-Market

Ratio). Columns two through five give the results from the matched-firm approaches. Columns six

through eight convey the results of the power test run on the portfolio-matched approaches.

APPENDIX F

Power Analysis—3-Year Event Horizon—n=500

Power Analysis
Empirical Size - 3 Year Performance - n=500
MC Ind MC & Ind MC & BtoM MC MC & BM Ind
-.75 100% 100% 100% 100% 77% 85% 85%
-0.5 100% 100% 100% 100% 92% 85% 77%
-0.3 77% 92% 100% 62% 85% 69% 92%
-.20 62% 77% 77% 46% 92% 85% 92%
-.15 46% 46% 38% 31% 92% 92% 85%
-.10 31% 15% 23% 8% 92% 85% 92%
-.05 0% 8% 0% 0% 85% 85% 92%
-.01 0% 0% 0% 8% 77% 85% 92%
0 8% 0% 0% 8% 69% 85% 92%
+.01 8% 0% 0% 8% 69% 85% 85%
+.05 8% 0% 8% 23% 69% 77% 85%
+.10 23% 31% 23% 46% 77% 69% 92%
+.15 46% 46% 69% 85% 69% 69% 92%
+.20 85% 77% 85% 85% 77% 69% 100%
+0.3 100% 92% 92% 92% 77% 77% 100%
+0.5 100% 100% 100% 100% 100% 100% 100%
+.75 100% 100% 100% 100% 100% 100% 100%

Note. Power Analysis conducted on 18 samples of 500 randomly selected firms spanning the 18-

year period from 1985 to 2002. The first column of the Table provides the amount of simulated

abnormal performance (MC=Market Capitalization; Ind=Industry; BtoM=Book-to-Market

Ratio). Columns two through five give the results from the matched-firm approaches. Columns six

through eight convey the results of the power test run on the portfolio-matched approaches.

APPENDIX G

Power Analysis—4-Year Event Horizon—n=50

Power Analysis
Empirical Size - 4 Year Performance - n=50
MC Ind MC & Ind MC & BtoM MC MC & BM Ind
-.75 68% 68% 69% 65% 64% 62% 57%
-0.5 41% 48% 47% 38% 56% 53% 67%
-0.3 16% 19% 21% 14% 46% 36% 76%
-.20 11% 11% 11% 7% 43% 30% 77%
-.15 7% 7% 9% 6% 41% 27% 78%
-.10 4% 7% 8% 3% 42% 28% 81%
-.05 4% 5% 6% 2% 43% 34% 82%
-.01 4% 6% 4% 3% 44% 37% 84%
0 4% 4% 4% 3% 47% 37% 84%
+.01 4% 4% 4% 3% 47% 37% 84%
+.05 4% 6% 3% 3% 48% 41% 86%
+.10 6% 8% 6% 7% 51% 44% 88%
+.15 9% 11% 9% 8% 54% 48% 89%
+.20 12% 13% 16% 12% 59% 52% 89%
+0.3 23% 29% 27% 22% 66% 61% 94%
+0.5 41% 52% 56% 52% 78% 77% 96%
+0.75 72% 77% 76% 73% 88% 86% 97%

Note. Power Analysis conducted on 180 samples of 50 randomly selected firms spanning the 18-

year period from 1985 to 2002. The first column of the Table provides the amount of simulated

abnormal performance (MC=Market Capitalization; Ind=Industry; BtoM=Book-to-Market

Ratio). Columns two through five give the results from the matched-firm approaches. Columns six

through eight convey the results of the power test run on the portfolio-matched approaches.

APPENDIX H

Power Analysis—4-Year Event Horizon—n=500

Power Analysis
Empirical Size - 4 Year Performance - n=500
MC Ind MC & Ind MC & BtoM MC MC & BM Ind
-.75 100% 100% 100% 85% 92% 77% 85%
-0.5 92% 92% 85% 77% 92% 92% 85%
-0.3 62% 54% 77% 46% 85% 92% 100%
-.20 46% 38% 23% 31% 85% 77% 92%
-.15 23% 15% 15% 8% 92% 77% 92%
-.10 0% 0% 8% 0% 85% 77% 92%
-.05 0% 0% 0% 0% 85% 69% 92%
-.01 0% 0% 0% 0% 69% 62% 92%
0 0% 0% 0% 0% 69% 62% 92%
+.01 0% 0% 0% 0% 69% 62% 92%
+.05 8% 0% 0% 8% 69% 62% 92%
+.10 8% 23% 23% 23% 69% 62% 92%
+.15 31% 38% 23% 54% 69% 62% 100%
+.20 46% 62% 54% 62% 69% 77% 100%
+0.3 77% 85% 85% 100% 85% 77% 100%
+0.5 92% 92% 100% 100% 92% 92% 100%
+.75 100% 100% 100% 100% 92% 92% 100%

Note. Power Analysis conducted on 18 samples of 500 randomly selected firms spanning the 18-

year period from 1985 to 2002. The first column of the Table provides the amount of simulated

abnormal performance (MC=Market Capitalization; Ind=Industry; BtoM=Book-to-Market

Ratio). Columns two through five give the results from the matched-firm approaches. Columns six

through eight convey the results of the power test run on the portfolio-matched approaches.

APPENDIX I

Average Initial Day IPO Performance by Year and Month

Average 1st Day IPO Return by Month & Year


Year Year/Month Average % Return StDev Count T-Value
1997 Year 1.23% 10.92% 446 2.38
Jan -0.20% 8.76% 25 -0.11
Feb 0.76% 8.24% 43 0.61
Mar 1.12% 7.69% 27 0.76
Apr 5.33% 17.04% 20 1.40
May 1.20% 13.85% 27 0.45
Jun 0.58% 8.69% 49 0.47
Jul 1.84% 9.49% 43 1.27
Aug 3.03% 8.97% 33 1.94
Sep 2.26% 8.47% 34 1.55
Oct 1.11% 10.28% 53 0.78
Nov -1.50% 14.99% 63 -0.79
Dec 3.46% 11.10% 28 1.65
1998 Year 1.60% 10.27% 275 2.58
Jan 2.07% 11.35% 12 0.63
Feb -0.67% 5.93% 39 -0.70
Mar -0.19% 6.35% 28 -0.16
Apr 0.59% 8.95% 32 0.37
May -0.27% 6.71% 33 -0.23
Jun 2.83% 12.60% 48 1.56
Jul 4.08% 9.17% 35 2.63
Aug 7.71% 13.17% 16 2.34
Sep -4.23% 8.09% 2 -0.74
Oct -0.27% 1.09% 2 -0.35
Nov -1.07% 13.45% 13 -0.29
Dec 3.82% 17.77% 15 0.83
1999 Year 6.07% 23.78% 464 5.49
Jan 6.31% 14.36% 11 1.46
Feb 4.84% 21.34% 28 1.20
Mar 4.69% 23.07% 18 0.86
Apr 6.61% 17.38% 32 2.15
May 3.59% 15.94% 48 1.56
Jun 7.16% 23.50% 53 2.22
Jul 2.75% 15.50% 58 1.35
Aug 17.54% 41.50% 38 2.61
Sep 3.87% 27.42% 40 0.89
Oct 3.89% 24.97% 53 1.13
Nov 9.01% 23.08% 49 2.73
Dec 3.71% 21.68% 36 1.03
2000 Year 6.05% 20.62% 369 5.64
Jan 16.70% 22.68% 11 2.44

Feb 4.60% 21.53% 54 1.57


Mar 3.28% 20.81% 50 1.12
Apr 11.79% 22.83% 32 2.92
May 3.68% 16.18% 21 1.04
Jun 2.05% 15.78% 32 0.74
Jul 11.52% 21.93% 41 3.36
Aug 5.00% 25.05% 59 1.53
Sep 2.67% 18.48% 24 0.71
Oct 6.97% 10.93% 18 2.71
Nov 3.16% 13.44% 19 1.03
Dec 12.44% 17.51% 8 2.01
2001 Year 2.46% 9.57% 82 2.33
Jan 8.69% 21.63% 4 0.80
Feb 1.15% 6.86% 8 0.48
Mar -0.17% 8.02% 9 -0.06
Apr -0.75% 12.24% 2 -0.09
May 3.91% 12.51% 9 0.94
Jun -0.22% 8.05% 13 -0.10
Jul 4.69% 9.01% 7 1.38
Aug 3.47% 14.20% 4 0.49
Sep
Oct 0.13% 5.17% 7 0.07
Nov 5.18% 9.31% 10 1.76
Dec 3.26% 7.45% 9 1.31
2002 Year 0.01% 8.56% 70 0.01
Jan -1.54% 2.77% 3 -0.96
Feb 4.86% 11.25% 8 1.22
Mar -0.99% 3.52% 5 -0.63
Apr -1.89% 11.46% 7 -0.44
May 1.17% 6.00% 15 0.76
Jun -3.66% 8.49% 10 -1.36
Jul -3.30% 9.66% 6 -0.84
Aug
Sep
Oct 0.82% 9.83% 5 0.19
Nov 1.08% 7.42% 9 0.44
Dec 1.06% 10.38% 5 0.23
2003 Year 1.67% 10.59% 73 1.35
Jan
Feb -4.34% 4.80% 3 -1.56
Mar
Apr
May 3.26% 4.43% 2 1.04
Jun -6.37% 0.00% 1
Jul -1.15% 10.21% 8 -0.32
Aug 4.96% 3.72% 4 2.67
Sep 3.33% 6.29% 7 1.40

Oct 4.63% 9.33% 10 1.57


Nov -1.47% 8.33% 16 -0.71
Dec 3.55% 14.71% 22 1.13
2004 Year 2.24% 9.06% 179 3.30
Jan 6.65% 4.83% 2 1.95
Feb 1.85% 10.21% 17 0.75
Mar 1.91% 8.41% 10 0.72
Apr 6.26% 7.60% 9 2.47
May 4.63% 6.82% 11 2.25
Jun 2.59% 11.29% 23 1.10
Jul 2.76% 6.74% 21 1.88
Aug -2.61% 7.75% 13 -1.21
Sep -1.60% 9.46% 11 -0.56
Oct 3.02% 10.78% 22 1.31
Nov 3.68% 9.36% 14 1.47
Dec 1.74% 8.19% 26 1.08
2005 Year 1.82% 10.71% 185 2.31
Jan -2.77% 9.34% 8 -0.84
Feb 1.25% 5.96% 21 0.96
Mar -2.37% 17.41% 7 -0.36
Apr 0.60% 1.87% 7 0.85
May 0.85% 8.97% 13 0.34
Jun 1.49% 5.99% 23 1.19
Jul 2.56% 6.74% 15 1.47
Aug 5.87% 18.56% 30 1.73
Sep 0.92% 11.13% 16 0.33
Oct 1.29% 7.91% 11 0.54
Nov 1.98% 9.00% 18 0.93
Dec 1.31% 7.53% 16 0.69

Note: The preceding table shows the average yearly and monthly performance of IPOs on their first day of trade. The sample included firms that issued unseasoned equity shares from 1997-2005; 2,143 observations could be included in the sample.



APPENDIX J

Yearly Average of Pre-Trade Performance 1997-2007

Averaged Yearly IPO Performance from 1997-2007


Test: H0: mean ≤ 0 versus H1: mean > 0; tcritical = 1.796 (df = 11)

Year Average Return Standard Deviation Observations T-value Decision
2007 0.12 0.06 12.00 6.52 Reject
2006 0.10 0.05 12.00 6.73 Reject
2005 0.10 0.05 12.00 6.73 Reject
2004 0.09 0.04 12.00 7.77 Reject
2003 0.07 0.07 12.00 3.70 Reject
2002 0.06 0.04 12.00 4.68 Reject
2001 0.08 0.06 12.00 4.29 Reject
2000 0.35 0.33 12.00 3.68 Reject
1999 0.05 0.07 12.00 2.47 Reject
1998 0.00 0.04 12.00 -0.06 Accept
1997 0.02 0.02 12.00 2.49 Reject

Note: The preceding table illustrates the average yearly performance obtained by IPOs in their pre-IPO trading period. The sample period ranged from April 12, 1996, until January 29, 2008, and there were 1,876 observations.
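Assuming the t-values in the table come from a standard one-sample t-test on the 12 monthly averages within each year (testing H0: mean ≤ 0 against H1: mean > 0 at the stated critical value), the computation is sketched below; because the table reports rounded means and standard deviations, recomputing from those rounded figures only approximates the reported t-values.

from math import sqrt

def one_sample_t(mean_return, std_dev, n):
    # t-statistic for H0: mean <= 0 versus H1: mean > 0
    return mean_return / (std_dev / sqrt(n))

# 2007 row, using the rounded summary values shown above; the result (about 6.93)
# only approximates the reported 6.52 because of rounding in the inputs.
t = one_sample_t(0.12, 0.06, 12)
print(round(t, 2), "Reject" if t > 1.796 else "Accept")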



APPENDIX K

Yearly Abnormal Performance at the Expiration of the Quiet Period

Yearly Quiet Period Expiration


Year Performance Standard Deviation Sample Size T-Value
1985 -0.55% 7.42% 230 -1.12
1986 -0.07% 8.68% 445 -0.18
1987 -0.30% 15.01% 353 -0.37
1988 -1.47% 10.45% 148 -1.71
1989 1.13% 8.48% 136 1.56
1990 1.19% 12.85% 125 1.04
1991 1.11% 14.85% 275 1.24
1992 1.10% 10.09% 375 2.11
1993 ~2.00% ~10.00% 515 4.46
1994 0.88% 9.04% 328 1.76
1995 2.27% 10.40% 405 4.39
1996 2.25% 12.68% 521 4.05
1997 1.14% 11.32% 431 2.09
1998 3.80% 14.95% 271 4.18
1999 5.84% 22.54% 435 5.40
2000 3.40% 24.79% 349 2.57
2001 -2.40% 12.50% 77 -1.68
2002 0.99% 8.78% 66 0.91

Note: The preceding table shows the 5-day abnormal performance surrounding the conclusion

of the quiet period. The sample consisted of IPOs that were issued from 1985-2002, which

generated 5,529 observations.



APPENDIX L

Abnormal Performance & Lockup Expiration

Yearly Lockup Expiration


Year Performance Standard Deviation Sample Size T-Value
1985 -0.18% 8.61% 237 -0.33
1986 -0.40% 10.45% 445 -0.80
1987 -0.24% 12.47% 353 -0.36
1988 -0.32% 7.76% 148 -0.51
1989 -0.52% 12.34% 136 -0.49
1990 0.06% 9.71% 125 0.07
1991 -1.90% 13.50% 274 -2.33
1992 -0.19% 9.88% 375 -0.36
1993 -0.48% 9.64% 515 -1.12
1994 0.99% 15.75% 328 1.13
1995 -1.34% 12.82% 405 -2.10
1996 -6.17% 103.91% 521 -1.36
1997 -0.68% 12.36% 431 -1.14
1998 -0.88% 15.74% 271 -0.92
1999 -2.70% 18.56% 435 -3.04
2000 -3.65% 21.13% 349 -3.23
2001 -2.33% 11.26% 77 -1.82
2002 -0.70% 15.89% 65 -0.35

Note: The preceding table shows the 5-day abnormal performance surrounding the conclusion

of the lock-up period. The sample consisted of IPOs that were issued from 1985-2002, which

generated 5,529 observations.



APPENDIX M

Popular Benchmarks used to Represent Normal Performance

Portfolio-Based
• Portfolios Matched Based Upon:
• Size
• Size & Book-to-Market
• Size & Industry

Matched-Firm
• Size
• Size & Book-to-Market
• Size & Industry

APPENDIX N

Methods used in Long-term Event Studies

Table of Research Based Specifically on Measuring Long-term IPO Performance

Author(s) and Date of Publication Subject Area Chosen Benchmark
Ritter, J. (1991) Performance CRSP v-w NASDAQ
CRSP v-w AMEX-NYSE
Matched Firms - industry & size
Smallest size decile - NYSE
Loughran, T., and Marietta, J. (2005) Timing Listing of NYSE CRSP Equally Weighted
FF - Three Factor Model
Gompers, P., and Lerner, J. (2003) Performance CRSP v-w
Portfolios - size & book-to-market
FF - Three Factor Model
Bhabra, H., and Pettway, R. (2003) Performance Matched firm - Size
Matched firm - Size & Industry
Matched firm - Book-to-Market
Matched firm - Book-to-Market and Industry
Schultz, P. (2003) Performance CRSP v- and e-w
Carter, R., Dark, F., and Singh, A. (1998) Performance V-W CRSP (NYSE/AMEX/Nasdaq)
Brav, A., and Gompers, P. (1997) Performance S&P 500
NASDAQ v-w
NYSE/AMEX v-w
NYSE/AMEX e-w
Portfolios - size & book-to-market
FF - Industry Portfolios
Perfect, S., and Peterson, D. (1997) Performance Matched firm - industry & size
CURRICULUM VITAE

Zachary A. Smith
507 Winchester Road
Jacksonville, NC 28546

Phone: (910) 388-0573 Email: zacharyasmith@gmail.com


Cellular: (757) 553-1692

EDUCATION

Undergraduate: Old Dominion University, Norfolk, Virginia; B.S., 2003

Graduate: Capella University, Minneapolis, Minnesota; M.B.A., 2005


Walden University, Minneapolis, Minnesota; Ph.D., 2008
Dissertation: An Empirical Investigation of Initial Public Offering (IPO)
Price Performance

WORK EXPERIENCE:

Merrill Lynch Internship 07-04 through 10-04


Wachovia Securities Analyst (Trading) 02-04 through 04-04
Primerica Financial Services Internship 08-02 through 12-03
United States Marine Corps Analyst (Operations) 08-97 through 08-01

RESEARCH INTERESTS:

List: Asset Pricing Models, Event Studies, Market Efficiency, Heuristics & Biases,
Investor Decision-Making, Financial Market Anomalies

TEACHING INTERESTS:

List: Investments, Capital Markets, Financial Decision-Making, Personal Finance,


Behavioral Finance
