
MARKET

TECHNICIANS
ASSOCIATION
JOURNAL
Issue 21
May 1985
MTA Journal/May 1985 1
Editor: James M. Yates
Bridge Data Company
10050 Manchester Road
St. Louis, Missouri 63122
Contributors: David R. Aronson
Barbara B. Diamond
Ralph Fogel
David Holt
William R. Johnston
George C. Lane
Steve Leuthold
J. Curtis Shambaugh
Jim Tillman
Bronwen Wood
Publisher: Market Technicians Association
70 Pine Street
New York, New York 10005
©Market Technicians Association 1985
MTA JOURNAL - MAY, 1985
TABLE OF CONTENTS

Title                                                                    Page

MTA OFFICERS ............................................................... 6
MEMBERSHIP AND SUBSCRIPTION INFORMATION .................................... 7
STYLE SHEET FOR SUBMISSION OF ARTICLES ..................................... 8
MTA LETTER FROM THE EDITOR ................................................. 9
     James M. Yates
TECHNICAL ANALYSIS IN THE UNITED KINGDOM, DOMESTIC AND INTERNATIONAL ..... 11
     Bronwen Wood
HOW CYCLETREND CHANNELS HELP DETERMINE TURNING POINTS
     FOR STOCKS AND THE MARKET ............................................ 27
     Jim Tillman
THE POWER OF THE YIELD CURVE .............................................. 29
     J. Curtis Shambaugh
RELATIVE STRENGTH ......................................................... 35
     A Workshop on Relative Strength Moderated by Steve Leuthold
LANE'S STOCHASTICS: THE ULTIMATE OSCILLATOR ............................... 37
     George C. Lane
A VIEW FROM THE FLOOR ..................................................... 43
     William R. Johnston
A THREE YEAR FOLLOW-UP ON THE ENIGMATIC STOCK OPTION -
     A CONSTANT CHANGE .................................................... 47
     David Holt
A VIEW FROM THE FLOOR ..................................................... 61
     Ralph Fogel
CENTERFOLD ................................................................ 64
OPTIMIZATION - SOFTWARE REVIEW WORKSHOP ................................... 67
     Barbara B. Diamond
ARTIFICIAL INTELLIGENCE/PATTERN RECOGNITION APPLIED TO
     FORECASTING FINANCIAL MARKET TRENDS .................................. 91
     David R. Aronson
1984-85 MARKET TECHNICIANS ASSOCIATION
PRESIDENT
Richard Yashewski
Butcher & Singer
516/627-1600
VICE PRESIDENT
John Greeley
Greeley Securities
212/227-6900
VICE PRESIDENT (SEMINAR)
Gail Dudack
Pershing/Div. DLJ
212/902-3322
OFFICERS
PROGRAMS
David Krell
212/623-8533
NEWSLETTER
Robert Prechter
404/536-0309
JOURNAL
James Yates
314/821-5660
CERTIFICATION
Charles Comer
212/825-4367
MEMBERSHIP
Phil Roth
212/742-6535
LIBRARY
Ralph Acampora
212/747-2355
SECRETARY
Cheryl Stafford
Wellington Management
617/227-9500
TREASURER
Robert Simpkins
Delafield, Harvey, Tabell
609/924-9660
COMMITTEE CHAIRPERSONS
ETHICS & STANDARDS/PUBLIC RELATIONS
Tony Tabell
609/924-9660
PLACEMENT
John Brooks
404/266-6262
EDUCATION
Fred Dickson
212/398-8489
COMPUTER SPECIAL INTEREST GROUP
John McGinley
203/762-0229
FUTURES SPECIAL INTEREST GROUP
John Murphy
212/724-6982
SAN FRANCISCO TECHNICAL SOCIETY SPECIAL INTEREST GROUP
Henry Pruden
415/459-1319
MARKET TECHNICIANS ASSOCIATION
MEMBERSHIP AND SUBSCRIPTION INFORMATION
REGULAR MEMBERSHIP - $75 per year plus $10 one-time application fee.
Receives the MTA Journal, the monthly MTA Newsletter, invitations to all meetings, voting member status, and a discount on the Annual Seminar fee. Eligibility requires that the emphasis of the applicant's professional work involve technical analysis.
SUBSCRIBER STATUS - $75 per year.
Receives the MTA Journal and the MTA Newsletter, which contains shorter articles on technical analysis. The subscriber receives special announcements of the MTA meetings open to The New York Society of Security Analysts and/or the public, plus a discount on the Annual Seminar fee.
ANNUAL SUBSCRIPTION TO THE MTA JOURNAL - $35 per year.
SINGLE ISSUES OF THE MTA JOURNAL (including back issues) - $15.
The Market Technicians Association Journal is scheduled to be published three times each fiscal year, in approximately November, February, and May.
An ANNUAL SEMINAR is held each Spring.
Inquiries for REGULAR MEMBERSHIP and SUBSCRIBER STATUS should be directed to:
Membership Chairman
Market Technicians Association
70 Pine Street
New York, New York 10005
STYLE SHEET FOR THE SUBMISSION OF ARTICLES
MTA Editorial Policy
The Market Technicians Association Journal is published by the Market Technicians Association, 70 Pine Street, New York, New York 10005, to promote the investigation and analysis of price and volume activities of the world's financial markets. The MTA Journal is distributed to individuals (both academic and practitioner) and libraries in the United States, Canada, Europe, and several other countries. The Journal is copyrighted by the Market Technicians Association and registered with the Library of Congress. All rights are reserved. Publication dates are February, May, and November.
Style for the MTA Journal
All papers submitted to the MTA Journal are requested to have the following items as prerequisites to consideration for publication:

A short (one-paragraph) biographical presentation for inclusion at the end of the accepted article upon publication. Name and affiliation will be shown under the title.

All charts should be provided in camera-ready form and be properly labeled for text reference.

All tables should be properly labeled and in camera-ready form.

Papers should be submitted typewritten, double-spaced, in completed form on 8½ by 11 inch paper. If both sides are used, care should be taken to use sufficiently heavy paper to avoid reverse-side images. Footnotes and references should be placed at the end of the article.

Greek characters should be avoided in the text and in all formulae.

One submission copy is satisfactory.

Manuscripts of any style will be received and examined, but upon acceptance, they should be prepared in accordance with the above policies.
MTA LETTER FROM THE EDITOR
The Journal's deepest appreciation goes to the contributors for the notes, text, and exhibits in this Seminar Journal. A lot of hard work went into their preparation, and it is evident in the contents.

The MTA Seminar Indicator is once again included for your inspection and interpretation. Good weather and lots of hot air are predicted at Hilton Head in May, 1985, so we hope everyone enjoys and takes advantage of it.

The editor's compliments for their valued assistance in the production go, as usual, to Sally Ruppert and Pam Hollrah. The Seminar issue is always close to the wire and could not occur without their competent and cheerful help.
James M. Yates
EDITOR
This page left intentionally blank for notes, doodling, or writing articles and comments for the MTA Journal.
TECHNICAL ANALYSIS IN THE UNITED KINGDOM, DOMESTIC AND INTERNATIONAL
Bronwen Wood
INDICATORS AND STATISTICS NOT AVAILABLE IN THE LONDON STOCK MARKET
1) Due to the British paranoia for secrecy there is no volume data for individual stocks. The
only volume figures for sectors are monthly. There is no breakdown of any kind of the one daily
volume figure, which is total equity bargains.
2) There are no satisfactory figures on institutional liquidity. Official figures are months out of
date. Private sampling suggests that absolute levels of liquidity move in up and downtrends,
so that a shortage of cash so great that the market cannot rise further was signalled at five percent between 1976 and 1980, whereas the level at which the supply of cash exceeded the
supply of stock and, therefore, caused the market to rise, trended down from fourteen percent
to five and one-half percent over the same period. There was then a readjustment, and the
new trend has high liquidity sloping from eight percent to five and one-quarter percent and low
liquidity from five percent to two and one-half percent. Another adjustment is in the making at
the moment, with possibility of an uptrend developing over the next few years.
3) The only sentiment indicator available is the put/call ratio, which is so new that its usefulness cannot yet be established. In fact, the options market is so little used that it may turn out not to be a very good indicator.
4) There are no margin debt figures as margin trading in stocks and bonds is not permitted
in London.
5) There are no specialist, member or odd-lot, short sales figures. This is basically because
short sales are not permitted at all by many stockbrokers and are generally not admitted to
when they do occur. Rolling a short position over from one two-week accounting period to the
next is rarely permitted, even if your broker will knowingly allow clients to go short. Even if short
sales figures were collected, the all-pervading secrecy would never allow the data to be broken
down into specialist, member and odd-lot, bargains.
6) There is no formal measure of contrary opinion. There is not a big enough private client base seeking investment advice to allow more than a small number of market letters to prosper, and these mostly take the form of tip sheets. There is therefore no way of assessing professional advisory sentiment objectively.
INDICATORS AND APPROACHES WHICH ARE HELPFUL IN LONDON
7) Overbought/Oversold Indicators. I calculate three for the equity market, the gilt (or government bond) market, and the gold mining market.
a) The daily net figure of how many more of the last ten trading
days were up than down. At plus six or greater, the market is
showing signs of being overextended.
b) The fourteen day R.S.I., using seventy and thirty as overbought
and oversold levels. This is sometimes helpful as a divergence indicator.
c) The ten-day moving total of the net rises and falls for all
stocks in the relevant market. For short-term corrections
this is the best of these indicators. For medium-term moves,
it is a good non-confirmation indicator. (See Chart 1:
gilt overbought/oversold in May and December, 1984;
equity overbought/oversold December, 1983, and July, 1984.)
These indicators are particularly useful when they are all clearly overbought or oversold together, which they frequently are not. In addition, they are good for confirming a major change in the direction of the market when, for example, they become overbought but the market refuses to correct (e.g., Chart 1): the period in the gilt market after November, 1982, when the market was embarking on a prolonged period of sideways trading after a long bull leg, and August, 1984, in the equity market, when a huge new bull move (+ thirty-five percent) had just started.
d) Five-day momentum is a good short-term indicator. It is
overextended at plus three percent.
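The four measures in items a) through d) can be sketched in code. This is an illustrative reconstruction from the descriptions above, not the author's own calculations: the function names are assumptions, and the RSI uses the common Wilder-style smoothing, which the text does not specify.

```python
def net_up_days(closes, window=10):
    """a) Daily net of up days minus down days over the last `window` days."""
    changes = [1 if b > a else -1 if b < a else 0
               for a, b in zip(closes, closes[1:])]
    return [sum(changes[i - window:i]) for i in range(window, len(changes) + 1)]

def rsi(closes, period=14):
    """b) Fourteen-day R.S.I., read against seventy/thirty bands."""
    gains = [max(b - a, 0) for a, b in zip(closes, closes[1:])]
    losses = [max(a - b, 0) for a, b in zip(closes, closes[1:])]
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    out = []
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
        rs = avg_gain / avg_loss if avg_loss else float('inf')
        out.append(100 - 100 / (1 + rs))
    return out

def moving_total(net_advances, window=10):
    """c) Ten-day moving total of net rises minus falls for all stocks."""
    return [sum(net_advances[i - window:i])
            for i in range(window, len(net_advances) + 1)]

def momentum(closes, days=5):
    """d) Five-day percent momentum; overextended near plus three percent."""
    return [100 * (b - a) / a for a, b in zip(closes, closes[days:])]
```

In a steadily rising market, `net_up_days` pins at its window maximum and the RSI rides above seventy, which is exactly the "overextended" reading described in item a).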
8) Cumulative advance/decline works well, though not invariably. For example, it gave a good sell signal in May, 1984 (see Chart 1). It gave clear, but not always early, signals at all major tops and bottoms from 1971 onwards.
9) Annual momentum is most useful in London at major tops and bottoms. It gives good advance warning that absolute peaks and lows are about to be hit and has signalled well for both equities and gilts over the past fifteen years. (See Chart 1: gilts end 1982; Chart 2: equities mid-1970 to February, 1971; Chart 3: equities March/May, 1972.)
10) Volume rarely has anything to add to an understanding of the London market. Occasionally, one finds volume starvation (e.g., Chart 3: May/June, 1972, and September/October, 1972).
11) Another divergence indicator, which sometimes works extremely well but is also sometimes confusing, is illustrated for the gilt market on Chart 4. The top line is a three-day moving average, and the bottom line a five-day moving average, of the percentage difference between the index and its thirty-eight-day moving average.
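The construction in item 11 can be sketched as follows. This is an illustrative reconstruction under one assumption the text leaves open: the percentage difference is taken on the day each thirty-eight-day average completes.

```python
def moving_average(series, window):
    """Simple moving average; one value per completed window."""
    return [sum(series[i - window:i]) / window
            for i in range(window, len(series) + 1)]

def deviation_oscillator(closes, ma_days=38, smooth=5):
    """Smoothed percentage deviation of the index from its 38-day MA.

    The chart's top line would use smooth=3, the bottom line smooth=5.
    """
    ma = moving_average(closes, ma_days)
    aligned = closes[ma_days - 1:]  # close on the day each MA completes
    pct_dev = [100 * (c - m) / m for c, m in zip(aligned, ma)]
    return moving_average(pct_dev, smooth)
```

The divergence reading comes from plotting the two smoothings together: when price makes a new extreme but the smoothed deviation does not, the move is losing force.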
12) As a lagging indicator, the five-day moving averages of new highs and new lows often work well. When a change of primary trend looks likely, it remains unconfirmed until the two lines have crossed dramatically and been able to stay crossed. See Chart 1, 1983.
13) The quotient of the All-Share Index divided by the Government Securities Index has given excellent long-term trends for many years. The trend only breaks when top- or base-building in the equity market is well advanced or has been completed. See Chart 5.
OTHER ASPECTS OF TECHNICAL ANALYSIS IN BRITAIN
14) While indicators such as these are a major part of the technician's armoury in the London market, it must be admitted that, severally and together, they have let us down quite often in the past five years. The most recent memorable occasion was in July, 1984, when most indicators
suggested that the bull market was coming to an end. See Chart 1. A pattern which looked as though it could well be a major head-and-shoulders top developed (Chart 6). However, in August, relief that interest rates were falling, and that the sterling crisis was over, seems to have been enough to make a nonsense of all the technical indicators, and the market began a rise which added thirty-five percent to the All-Share Index in under six months.
The only way to get the market right, and be useful to one's clients, for quite a long time now has been to concentrate on sectors and individual shares rather than on market indices and indicators.
15) London has become so volatile that moving averages have often set traps by breaking, rolling over and crossing just as the share or the market concerned is about to reverse direction. I find them too unreliable to be useful except as confirmation, and sometimes not even then.
16) Relative strength on shares and sectors, however, is one of the most useful tools in London, particularly for support and resistance levels and trendlines. For sector selection in particular, relative strength lines are invaluable.
17) The government securities market in London is extremely important. I keep my three overbought/oversold indicators, the oscillator, annual momentum, interest rate charts, futures charts and subsector charts for gilts. The London gilt market is one of the biggest and most flexible in the world and attracts enormous overseas business. It outperforms the equity market quite often, and not just in bear markets. Given the end of the bear market in sterling and the probability of interest rates falling, a lot of international money will probably be attracted to United Kingdom gilts during the next eighteen months. Contrary to prejudice, technical analysis works extremely well in our gilt market.
18) Because London fund managers invest enormous sums abroad, it is necessary for us to keep abreast of overseas markets. My solution is to be aware of the position of the major indices for each national market, but only to look at individual shares as requested. The only source of reliable charts for individual stocks over a wide range of countries that I am aware of is the Chart Analysis International Book. It is my contention that for top-quality technical as well as fundamental information on individual stocks, experts in the country of origin should be used, due to the greater depth of data available to them. However, for an overall view of foreign markets it is often possible to be surprisingly successful by using market indices, figures for which are available in the Financial Times.
19) Being internationally orientated, gold bullion and currencies are very important in London. I use a combination of long-term and short-term charts for both. In particular, I like one-box-reversal point and figure charts on an insensitive scale for currencies. This allows history and sensitivity to both be apparent on the same sheet of paper. See Charts 7a, b, and c. For bullion, I use an insensitive long-term point and figure chart and a sensitive bar chart, the former for perspective and the latter for estimating trading moves. See Charts 8 and 9.
20) The various chart books and services available from Chart Analysis and Investment Research are excellent for London equities and gilts, overseas markets, currencies, and commodities. They stand head-and-shoulders above anything else available so far from England, including Datastream, whose charts, though more numerous, are not so reliable.
[Full-page charts, pages 14-18 of the original issue; not legibly reproduced]
CHART 6
ALL-SHARE INDEX 1967-1985
Scale: 2 points per box, 5 box reversal
[Chart not legibly reproduced]
[Full-page charts, pages 20-24 of the original issue; not legibly reproduced]
BIOGRAPHY

Bronwen Wood joined Rowe & Pitman twelve years ago and is currently in charge of Technical Research, covering the U.K. stock and bond markets, commodities, currencies and overseas markets for a largely institutional clientele.

Bronwen was educated at Bristol and London Universities and the Central London Polytechnic, where she completed a post-graduate diploma in management studies. She first joined a stockbroking firm as a fundamental analyst. Finding technical analysis to be more effective, she gradually switched over and moved to Chart Analysis, the well-respected technical consultancy firm. From there she joined Rowe & Pitman, which is due to become part of one of the new financial conglomerates that will come into existence sometime in October, 1986. Its United States operations have already been merged into S. G. Warburg, Rowe and Pitman, Akroyd, Inc.
HOW CYCLETREND CHANNELS HELP DETERMINE TURNING
POINTS FOR STOCKS AND THE MARKET
Jim Tillman
Having written a market letter based on cycles for over ten years, I have tried various methods of presentation to convey my views correctly. Dealing with a subject as conceptually difficult as cycles, and the way they combine within the market, has been difficult at best for many readers and total frustration for others. Only after adding cycle channels a few years ago did the total picture come into focus for the average reader.
These charts of the Dow Industrial Average on a weekly, daily, and hourly basis show how the concept may be helpful no matter what time parameters one may choose. Of course, this particular time period (February 15, 1985) was clearly saying the market was ready to come down and would need to pick up channel support before being ready to advance again.
At the Market Technicians Association annual conference, I will show where we are currently
in the Cycletrend channels, illustrate how channels may be used on individual stocks, and make
projections for the market based on current dominant cycles. I look forward to seeing you there.
DOW JONES INDUSTRIAL AVERAGE INDEX:
Daily, weekly, and hourly charts
BIOGRAPHY
Jimmie E. Tillman is Vice President, Interstate Securities, Institutional Department, Charlotte, North Carolina. He is also the author of Cycletrend, an institutional cycle timing service for the stock market and stock groups. Married, with four children, Mr. Tillman is a native South Georgian, educated at Clemson University and a self-taught market technician for twenty-five years.
THE POWER OF THE YIELD CURVE
J. Curtis Shambaugh
The most frequently observed phenomenon that all capital market participants utilize in making decisions is the term structure of interest rates. The U. S. Treasury yield curve, which appears in all financial publications as well as being available on-line in numerous information retrieval devices, represents the sum total of all participants' transactions in the capital market, whether they be borrowers or lenders, hedgers or investors, taxable or non-taxable, individual or institutional, or domestic or foreign. Any alteration of the slope of the yield curve reflects changes occurring somewhere else in the financial markets, induced by governmental policies, the supply and demand of credit, or even fear or greed.
As a result of the past half-decade's rapid deregulation of interest rates and the onset of very liquid hedging devices in futures and options, a huge market in interest rate swaps has developed. Consequently, a greater proportion of the U. S. economy is now more sensitively attached to the level of, and change in, shorter-term interest rates, particularly in adjustable rate mortgages for housing and automobile loan rates.
Academicians studying the Treasury yield curve describe a positive yield curve as a forecast of rising interest rates when utilizing rolling-horizon analyses. A simple example of this method would be that if a one-year security yielded nine percent and a two-year security yielded ten percent, then, in theory, one year later a one-year security could yield eleven percent and result in a total return equivalent to that of the original two-year security.
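The arithmetic behind this example can be checked by compounding the two total returns; the function name here is an assumption, not part of the original text.

```python
def implied_forward_rate(one_year, two_year):
    """Forward one-year rate f such that (1 + y1) * (1 + f) == (1 + y2)**2,
    i.e. rolling the one-year security matches the two-year total return."""
    return (1 + two_year) ** 2 / (1 + one_year) - 1

# The example in the text: 9% one-year, 10% two-year.
f = implied_forward_rate(0.09, 0.10)
print(round(100 * f, 2))  # 11.01 -- the "eleven percent" of the text
```

A flat curve implies no change: with both yields at ten percent, the implied forward one-year rate is ten percent as well, which is why only a sloped curve carries a rate forecast under this analysis.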
However, yield curves have evidenced long periods of positive or negative character before
such forecasts come to pass. Also, interest rates have come down a number of times when
the yield curve was positive or have even come down when the yield curve was less positive.
Most rises in interest rates have occurred in periods of negative yield curves.
Over the past half-decade, changes in the slope of the yield curve have evidenced a high correlation with subsequent moves in the capital markets, as will be more evident in the following charts. The reasons these changes in slope are predictive are more easily explained in the visceral terms of fear and greed. Most simply, as the yield curve gets more positive, the investing world is induced to extend in maturity (read: risk), and when the yield curve becomes less positive, such incentive is reduced. Since the price of anything is most affected by the marginal transaction, if the marginal transaction is a purchase, it seems logical that the price should rise.
As can be seen in Chart 1, there have been numerous changes of significance over the past five years. This chart is very simple: just the ratio of the active long-term treasury bond yield divided by the bond-equivalent yield of the six-month treasury bill. After testing more complex ratios and/or different maturities, I have found this chart worked best and was extraordinarily correlative to subsequent moves in monthly total returns of long-term treasury bonds (Chart 2), even leading changes in equity indices that are not overweighted by market price or capitalization. (Chart 3 is the monthly average of the industrial component of the Value Line Index.)

We will discuss each of these charts in more depth at the seminar, and which moving averages seem to add even greater predictive value to these events.
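The Chart 1 series can be sketched from two yield series as follows. The function names and the ten-day smoothing window are assumptions for illustration only; the text deliberately leaves the moving-average choice for the seminar discussion.

```python
def slope_ratio(long_yields, bill_bey):
    """Chart 1 ratio: active long-term treasury bond yield divided by
    the bond-equivalent yield of the six-month bill, day by day."""
    return [l / s for l, s in zip(long_yields, bill_bey)]

def smoothed(series, window=10):
    """Simple moving average overlay (window length is illustrative)."""
    return [sum(series[i - window:i]) / window
            for i in range(window, len(series) + 1)]

# A ratio above 1.0 marks a positive (upward-sloping) curve. Per the
# text, a rising ratio steepens the curve and induces maturity extension;
# a falling ratio withdraws that incentive.
```

For example, a 10% long bond yield against an 8% bill bond-equivalent yield gives a ratio of 1.25; the interpretive content lies in how that ratio changes over time.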
CHART 1
[not legibly reproduced]

CHART 2: LONG TERM BOND RETURNS, 1/29/80 to 3/26/85
The First Boston Corporation, Fixed Income Research
[not legibly reproduced]

CHART 3: PRICE, VALUE LINE INDEX
The First Boston Corporation
[not legibly reproduced]
BIOGRAPHY
J. Curtis Shambaugh has been with First Boston Corporation as Vice President, Taxable Fixed Income - Strategist, since January, 1983. Prior to that he was with Alliance Capital Management and its predecessors in a variety of positions. Going backwards in time, Mr. Shambaugh has been a portfolio manager of equity accounts and fixed income accounts; manager of the discretionary fixed income department (originated in 1970); investment counselor; member of Moody's Investors Rating Committee (Industrial Specialist); and at Moody's Manuals. Prior to 1981, he was employed by Edwards & Hanley as a registered representative; Permatex Corp. as a laboratory chemist; and the U. S. Weather Bureau.

Mr. Shambaugh was educated at Massachusetts Institute of Technology and C. W. Post College.
RELATIVE STRENGTH
A Workshop on Relative Strength Moderated by Steve Leuthold
PANELISTS:
Jim Bohan
Merrill Lynch, Pierce, Fenner & Smith, Inc.
Richard Gala
Batterymarch
Ed Nicoski
Piper, Jaffray & Hopwood, Inc.
David Upshaw
Waddell & Reed
QUESTIONS:
What are the relative strength tools that you use, and how are they calculated? Oscillators? Percentiles? Charts?
What are the strengths of relative strength analysis?
What are some pitfalls of relative strength analysis?
Why does it not always pay to buy positive relative strength and sell negative relative strength?
How important are relative strength considerations in your overall analytical approach?
What is the best market proxy to use in calculating relative strength? S&P 500? NYSE Composite? Unweighted indices such as Value Line or Indicator Digest? Other? Why?
Should an individual stock's volatility characteristics be factored into its relative strength calculation (beta-adjusted relative strength)? Why or why not?
Do you think relative strength is a less useful tool today than it was ten years ago or is it about
the same? If less effective, how do you explain it?
Now, what about the future: do you expect relative strength to become more or less useful? Why?
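For readers new to the beta-adjusted variant raised in the questions above, here is one common construction, sketched in Python. The covariance-over-variance beta estimate and the cumulative-sum form are assumptions for illustration, not any panelist's actual method.

```python
def relative_strength(stock, index):
    """Plain relative strength line: stock price divided by a market proxy."""
    return [s / m for s, m in zip(stock, index)]

def beta(stock_rets, index_rets):
    """Beta estimated as covariance of returns over variance of index returns."""
    n = len(stock_rets)
    ms = sum(stock_rets) / n
    mi = sum(index_rets) / n
    cov = sum((s - ms) * (i - mi)
              for s, i in zip(stock_rets, index_rets)) / n
    var = sum((i - mi) ** 2 for i in index_rets) / n
    return cov / var

def beta_adjusted_rs(stock_rets, index_rets):
    """Cumulative stock return in excess of beta times the market return,
    so a high-beta stock gets no credit for merely amplifying the market."""
    b = beta(stock_rets, index_rets)
    out, cum = [], 0.0
    for s, i in zip(stock_rets, index_rets):
        cum += s - b * i
        out.append(cum)
    return out
```

A stock that simply tracks the index has a beta of one and a flat beta-adjusted line, while its plain relative strength line would be flat too; the two measures diverge only when a stock's moves are out of proportion to its volatility.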
BIOGRAPHY
Steve Leuthold is an investment strategist and researcher, actively involved in various phases of investment and economic research for over twenty years. He is the managing director of The Leuthold Group, an investment research organization headquartered in Minneapolis, Minnesota. From 1977 through 1981, prior to forming his own firm, Mr. Leuthold served as an officer and portfolio manager for two mutual funds -- Pilot and Industries Trend Fund. From 1969 through 1981, he was also associated with Piper, Jaffray & Hopwood as an investment strategist.
LANE'S STOCHASTICS: THE ULTIMATE OSCILLATOR
George C. Lane
In 1954, I joined Investment Educators as a junior analyst. In reality, I was a go-fer, running the projector, carrying the luggage. But I also kept up the charts, learning the art of technical analysis by doing.

Investment Educators was then an eight-year-old educational school, teaching charting, moving averages, and the Elliott Wave in a series of three classes--all on the stock market. In those days, the stock market had periods of drifting without much to interest potential clients, so we soon added commodities courses to our fare. I taught them.
After I joined the six-man, no-pay research staff, we discovered oscillators. We researched and experimented with over sixty applications, with the result that we found about twenty-eight that had predictable values. In charting our cumulative oscillators, we found they were running all over the chart paper. Soon, we had chart paper running all over the walls. So, we struck upon the technique of reducing these oscillators to a percentage. We used the alphabet to differentiate one from the other: %A, %B, etc. Each one was reduced to a percentage indicator primarily so we could manage to keep them workable on the chart paper!
As a result of all the hard work (the 14-hour, mostly-by-hand, no-pay days), we decided that the most reliable indicator was %D, for "% of Deviation." The basic premise of %D is that momentum leads price. It makes its top before price, and it makes its bottom before price. Momentum is a leading indicator. %D is a momentum oscillator.
I quote from Welles Wilder's book, New Concepts in Technical Trading Systems:
One of the most useful tools employed by many technicians is the momentum oscillator. The momentum oscillator measures the velocity of directional price movement. When the price moves up very rapidly, at some point it is considered to be overbought; when it moves down very rapidly, at some point it is considered to be oversold. In either case, a reaction or reversal is imminent. The slope of the momentum oscillator is directly proportional to the velocity of the move. The distance traveled up or down by the momentum oscillator is proportional to the magnitude of the move.
For those of you who would like a detailed mathematical description of the theory and functioning of momentum and oscillators, I refer you to Perry Kaufman's book, Commodity Trading Systems and Methods.
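The article assumes familiarity with the construction of %K and %D. For reference, the commonly published form is sketched below; the window lengths are the conventional fourteen and three, not necessarily those used in the trade described later.

```python
def stochastics(highs, lows, closes, k_period=14, d_period=3):
    """Commonly published form of Lane's %K and %D.

    %K locates each close within the high-low range of the last
    k_period bars; %D is a d_period simple moving average of %K.
    """
    k = []
    for i in range(k_period - 1, len(closes)):
        hi = max(highs[i - k_period + 1:i + 1])
        lo = min(lows[i - k_period + 1:i + 1])
        # Guard against a zero range (a completely flat window).
        k.append(100 * (closes[i] - lo) / (hi - lo) if hi != lo else 50.0)
    d = [sum(k[i - d_period + 1:i + 1]) / d_period
         for i in range(d_period - 1, len(k))]
    return k, d
```

Readings near 100 mean the close sits at the top of its recent range (overbought territory), near 0 at the bottom (oversold); the divergences discussed below arise when price makes a lower low but %D makes a higher low.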
Let us now turn to the practical application of Lane's Stochastics (%K and %D). We are using U. S. Treasury Bond futures for illustration. Using Elliott Wave analysis, we had exhausted the downside in 1981, when we reached the 55-00 level. The period of 1981 to 1984 can be analyzed as a double bottom formation. (See Chart A.) Our analysis begins in 1983 (see Chart B). We are short and, as we follow the downward pattern of T-bonds, we are aware that our short-side prosperity must, someday, come to an end. But when?
In late May, we noticed the long-term interest rates were making a pointed top, and at the same time, T-bond futures had accelerated their decline, pushing their way through the downside of their previous channel. (See Chart C.) We drew a parallel channel of the same width below
it, and on Thursday, May 28, 1984, T-bonds just touched the bottom of that channel and rebounded. There is a loose five-week cycle in T-bonds, so we bracketed a period of six weeks (allowing for ten percent deviation on either side, as taught by Walt Bressert) in advance. T-bonds returned back inside their original channel, rallied up to the top of it and turned down. History has taught us that, if this is truly to be the bottom in the futures (the top in interest rates), the channel should contain the downmove. We, therefore, now had a window of price and time (a technique taught by Jake Bernstein): price, 58-16 to 59-16; time, the week of June 29 to July 5, 1984. Five weeks after their first top, long-term interest rates, which had declined, rallied and made an attempt at a new high. This attempt failed, and by July 3 to 5, 1984, we could speculate that interest rates had topped out--and T-bond futures had made bottom! (The municipal bonds topped a week earlier than the corporate and treasury instruments. It just goes to show you: the bond dealers are smarter than the government and corporations!)
Question: Did we have a major bottom? Could we cover our shorts and reverse our position
in the face of so much adverse, contrary professional and public opinion?
We now turned to our computers (not that we hadn't been haunting them, checking the printouts in the wee hours, in the weeks previous!). (See Chart D.) What did we find?
A. Volume showed a large increase at the first bottom - a selling climax. But volume dried up at the second bottom - classic volume action at a double bottom: bullish!
B. Open interest had begun growing in April and continued to do so right through the double
bottom: bullish!
C. Lane's Stochastics gave us a preliminary buying signal (1) in May and (2) in June, when price made lower lows but %D made higher lows, a divergence caused by deviation from the former rate of descent. The final buying signal (3) in July (completing the 1-2-3 buying signal pattern) showed enormous upward strength, barely managing to touch the oversold band: bullish! As far as we were concerned, the bottom was in!
D. Lane's Serial Differencing, a tool we use to complement and confirm Lane's Stochastics, gave us the same 1-2-3 buying signal in May - June - July: bullish confirmation that the second leg of the double bottom had been completed.
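The values behind signals like these can be computed from a short price history. A minimal sketch using the standard definitions of %K and %D (raw %K over n bars, %D as a 3-period average of %K), plus the lower-low/higher-low divergence test described above; the sample data in the test is made up for illustration:

```python
def percent_k(highs, lows, closes, n=5):
    """Raw %K: where each close sits, 0-100, within the high-low range of the last n bars."""
    out = []
    for i in range(n - 1, len(closes)):
        hh = max(highs[i - n + 1:i + 1])
        ll = min(lows[i - n + 1:i + 1])
        out.append(100.0 * (closes[i] - ll) / (hh - ll))
    return out

def percent_d(k_values, m=3):
    """%D: an m-period simple moving average of %K."""
    return [sum(k_values[i - m + 1:i + 1]) / m for i in range(m - 1, len(k_values))]

def bullish_divergence(price_lows, d_lows):
    """The divergence described above: price makes a lower low while %D makes a higher low."""
    return price_lows[-1] < price_lows[-2] and d_lows[-1] > d_lows[-2]
```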
As we went through our printouts, we found that six out of seven of our other indicators con-
firmed our analysis. This was the buy week! So, we did!
To profit in trading futures, be it gold, T-bonds, or the stock indexes, we have a simple but effective approach. We use conventional charting techniques, augmented by Elliott Wave and cycles, to determine a window of time and price. Within that window, we use our own Lane's Stochastics to determine when the major change in trend occurs. By your bank balance, you, too, will swear it works!
[Chart A: T-Bonds, CBT, Chicago]

[Chart B]

[Chart C: weekly bars, February through August]

[Chart D: Stochastics and Serial Differencing]
BIOGRAPHY
George C. Lane's educational background is in Political-Military Science, Medicine, Finance, Security Analysis, and Investment Management. He attended Drake University, Washington and Lee University, Northwestern University, The Academy, The Citadel, William & Mary, The New School, Baruch School of Finance, E. F. Hutton, Chicago Board of Trade Institute, and Chicago Mercantile Exchange School.
The majority of his working life since 1957 has been spent teaching bankers, investors, agricultural producers, brokers and market analysts alike the mechanics of Hedging and the Science - and Art - of Technical Analysis. As President of Investment Educators, the oldest technical commodities school in the United States, George writes a weekly market letter with a daily Hotline update and teaches commodities seminars on a regular basis. He is currently working on a book detailing Lane's Stochastics and its variations; publication Fall, 1985.
A VIEW FROM THE FLOOR
William R. Johnston
MTA's theme for this 10th anniversary seminar, "Looking Back - Looking Ahead," is certainly appropriate as we celebrate, or mourn (depending on one's perspective), the tenth anniversary of May Day. This forum permits me to reflect on the past and the future from my vantage point as a specialist, a role, I might add, which many thought would be non-existent long before May Day's tenth birthday.
In the relatively short ten years since the unfixing of commission rates, our business has lit-
erally been transformed into an industry whose participants are as diverse as the products it
now offers.
May Day marked much more than the departure from the 183-year tradition of fixed rates at the New York Stock Exchange (NYSE). It meant the end of the business as we knew it. It meant future performance would be under a microscope. May, 1975, was the end of an era and the birth of an industry: the financial services industry.
The deregulation which has occurred over the last decade has shaped an industry which is unlike the one it replaced. Today's financial services business is one driven by customer service, product diversity, and a competitive edge gained through innovation and technology.
In my end of the business, the pace of change has been equally quick. Whereas ten years ago there were almost one hundred units on the NYSE, today there are fewer than sixty. At the same time, capital in today's specialist community has grown almost one hundred percent.
In a broad context, as I look ahead, even greater change is in store for the specialist business.
Recent rule changes at the NYSE will permit hedging by specialists in their registered issues,
as well as open the way for diversified firms' entry into the specialist business. As these de-
velopments, and others on the near horizon, begin to impact the way our industry operates,
the future of the specialist business will be determined by capital, talent, and competitive tech-
nology. But unlike the last ten years, these ingredients for success will be in greater demand
than ever before. Upstairs risk positioning to accommodate a progressively more volatile and
short-term oriented trading scene will mean greater levels of risk to brokers and specialists,
thereby creating new channels to spread those risks. And use of derivative products by our
customers will place ever increasing demands upon the dealer community to quickly and ef-
fectively respond to major shifts by institutional investors.
I would like to focus a bit this morning on three major elements of change affecting the specialist business: technology, capital, and customer service.
Today, the trading floor at the NYSE looks nothing like it did a few short years ago. If you were
to visit the Floor, and I hope each of you will consider this an open invitation to do so, you would
find fully electronic trading posts. Those old stanchions of oak and mahogany which served
the Exchange so well between 1929 and 1980 have all vanished. In fact, most were preserved
and now grace the halls of prominent museums and universities around the United States. In
their place, we have built the most efficient electronic trading arena the securities world has
ever seen.
Beginning with our automated order routing network, known as SuperDot 250, a customer can
walk into a branch office of a member firm and place a market order for 1,000 shares of any
listed stock. That order will route electronically from branch office to point of sale on the Trading
Floor, be exposed to the auction market, executed and reported to the office of entry in less
than 80 seconds. And SuperDot's capability continues to grow. We are currently providing automatic executions (up to 1,000 shares) in several hundred stocks with 1/8 point markets. Most importantly, the entering firm's own floor personnel never touch these orders. The significance: systemized orders, now defined as 1,099 shares or less at the market/30,099 shares or less with limited prices, may be electronically routed, efficiently handled, given exposure to auction
market principles, executed and reported to originating office in the time it once took an
average order clerk to type it for transmission. The retail customer is efficiently served, and the
broker-dealer's own floor staff is free to concentrate on the high end (block) business, where
professional agent representation is critical.
The routing network (SuperDot 250) was only the beginning of technological enhancement to
our market. We followed SuperDot 250 with an opening assist program known as OARS, which
stands for Opening Automated Reporting System. This system allows firms to electronically
enter orders prior to the opening each day in a master electronic file. The specialist cues the
file shortly before the opening to determine the supply/demand picture in a stock. Using con-
ventional methods for opening a stock, he then enters the opening price in OARS which au-
tomatically triggers instant reports to all orders in the file. In the past, large openings could cause
significant delays in reports to customers. Today, delays caused by an influx of orders at the
opening are virtually nonexistent. Most importantly, all trades entered in the OARS file are clocked
and guaranteed clean; that is, no "don't knows" or questioned trades which cause administrative headaches and significant expense.
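As described, OARS is essentially a pre-opening file that nets buys against sells so the specialist can see the supply/demand picture before choosing an opening price, after which every order in the file is reported at once. A toy sketch of that idea (the function names and order tuples are our own illustration, not the Exchange's system):

```python
def opening_imbalance(order_file):
    """Net the pre-opening order file: positive means excess demand (more to buy)."""
    buys = sum(qty for side, qty in order_file if side == "buy")
    sells = sum(qty for side, qty in order_file if side == "sell")
    return buys - sells

def report_opening(order_file, opening_price):
    """Once the specialist enters the opening price, every order in the
    file is reported at once at that single price -- no delayed reports."""
    return [(side, qty, opening_price) for side, qty in order_file]
```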
The technology express moved to high gear in recent months with the elimination of the paper
books for specialists at eleven locations on the Floor. In their place are electronic limit order
files that accept, store, monitor, display and report electronically delivered limit orders (up to
30,099 shares). As an integral part of SuperDot 250, once these limit orders are executed, a single input automatically triggers execution reports on such orders. This system will continue to expand floor-wide.
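The electronic limit-order file replacing the paper book can be pictured as a price-keyed store that accepts and holds limit orders until a single execution input fills and reports everything priced at or better than the trade. A deliberately simplified sketch (buy side only; the class and its conventions are our own, not the Exchange's design):

```python
from collections import defaultdict

class LimitOrderFile:
    """Toy electronic limit-order file: accept and store buy limit orders
    by price, then report every order filled by a single execution input."""
    def __init__(self):
        self.buys = defaultdict(list)  # limit price -> list of share quantities

    def accept(self, price, shares):
        self.buys[price].append(shares)

    def execute_at(self, trade_price):
        """One input fills and reports all stored buy limits priced at or above the trade."""
        reports = []
        for price in sorted(self.buys, reverse=True):
            if price >= trade_price:
                for shares in self.buys.pop(price):
                    reports.append((price, shares))
        return reports
```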
A totally paperless touch-trade system using personal computers with touch screens is the
way of our future. Touch-trade will perform all reporting, trade and quote dissemination tasks
for market and limit orders that had traditionally been handled manually. Thus, a single touch
executes a trade, reports it to the tape, sends reports to the entering firm, and enters the trans-
action into the comparison system. There are six such systems in operation on the Floor today.
Post-trade reconciliation also bears mention. Each trade executed through SuperDot 250 is
automatically submitted to the comparison cycle on a locked-in basis. This guarantees that all
systematized orders are processed error-free with a complete audit trail. This process will ul-
timately lead to a much streamlined post-trade process, reducing the current five-day cycle to
overnight processing.
We are also developing voice-recognition technology. The potential applications are enor-
mous. Suffice it to say that in the future all trade data, execution reports, tape prints, post-trade reconciliation, audit trail, etc., will be captured at the point of sale from the broker's spoken words.
Let me refocus briefly on specialists' capital. Considering the prospect of diversified firms entering the specialist business and the competitive factors which that implies, financial capital commitment in our business will inevitably grow. Human capital will also increase dramatically. The sure judgment and expertise required to efficiently utilize dealers' dollar commitments, in an increasingly volatile marketplace, continues to force a change in the specialist community.
One measure of the ongoing competition for expert market-makers is the fact that the NYSE
specialist community is a much younger, more aggressive group than you would have found
only a few years ago. And that trend continues.
The third leg of our future is a clearly focused commitment to customer service. NYSE specialists will soon have a new job description built on that principle. Periodic interface with both our listed corporate community and with our direct customers (the broker/dealers) will be required. As the wizards of merger and acquisition continue to chip away at our list, the specialist community can barely stay even as new equity listings are brought on board. At the NYSE, allocation of new listings utilizes customer evaluation of each specialist's performance as the key criterion. Thus, exceptional specialist performance in meeting customers' needs is the only
sure way to enlarge our business.
Let me sum up by saying that unrelenting focus on our customers' needs, and fulfilling those needs in a rapidly changing environment, is the cornerstone of our future. One element that
you can be certain will not change is the Exchange's commitment to providing a marketplace
where all public investors are afforded an equal opportunity to compete and interact, regard-
less of the size of their orders. From the largest institution to the 100 share purchaser, the Ex-
change has been and will always be a trading market which serves to ensure a high level of
participation in equity investing for the broadest possible customer base.
I thank you for the opportunity to be here today and would welcome any questions you may
have.
BIOGRAPHY
Mr. Johnston is presently serving as Chairman of the Board & Chief Executive Officer of Agora
Securities. Prior to August, 1980, he was Senior Vice President and Director of Mitchum, Jones
& Templeton, Inc. Other business activities include: NYSE floor official (second term), NYSE
Competitive Review Committee and NYSE Specialist Evaluation Committee, Board Member
and Treasurer of Specialist Critical Issues Organization, Chairman of Education Committee of
SCIO (Reverse FACTS) and Director of North American Bank Corporation.
Mr. Johnston graduated from Washington & Lee University with attainments in commerce in
1961.
A THREE YEAR FOLLOW-UP ON THE ENIGMATIC STOCK OPTION
A CONSTANT CHALLENGE
David Holt
PREFACE
The theme of the 1982 Market Technicians Association (MTA) Annual Meeting in Princeton, New Jersey, was "Challenges for the '80s." As the author pointed out in his presentation at that meeting, the theme was especially apropos for exchange-listed options. Because of their unique qualities, options resisted and, in some cases, totally repelled conventional rules of technical analysis.
During the three intervening years the listed options market has undergone a tremendous met-
amorphosis; and yet, the more it changes, the more it stays the same, at least as far as tech-
nical analysis is concerned.
Our objective in presenting this paper hopefully meets the appropriate requirements of Article II of the MTA Constitution, to wit: "B. Educate the public and the investment community (includes MTA members) to the uses and limitations of technically oriented research and its value in the formulation of investment decisions. C. Foster the interchange of material, ideas and information for the purpose of adding to the knowledge of the membership."
To reach these objectives, we will first review the idiosyncrasies of options that create the in-
compatibilities with conventional technical analytical techniques, both as they existed three years
ago and as they are now. We then will present some ideas and information that, hopefully, will
add to the knowledge of the members who review this presentation.
Perhaps the best way to start this discussion is to touch on several of the unique features of
stock options that severely restrict conventional technical analysis techniques.
VOLUME
Unlike equity securities, the volume of stock option contracts is all but useless as raw data for the application of conventional technical analysis. More correctly stated, it's the unique aspect of option volume, as well as the method used in reporting option volume, that stymies the technician.
One of the primary objectives of analyzing volume is to determine the amplitude and bias of
any imbalance between demand and supply. With equities, this is a rather straightforward anal-
ysis and has been quite useful for a number of years. Options, however, are another story be-
cause of the unique situation where a transaction can either be supply, demand, or both. When
a closing trade occurs between two parties who are both exiting, you have supply. However,
when one side of either transaction is opposite to the other, you have both supply and demand
which tends to neutralize their pressures. To compound the problem, the various option exchanges (through their control of the Options Clearing Corporation (OCC)) continue in their refusal to release opening and closing volume on a timely basis. To their credit, they did throw
a crumb to the technicians (who had been grinding them for this type of data for years) in 1984
when they started releasing opening and closing statistics for customer orders. However, the
data is so delayed in its availability that its usefulness has been reduced to a minimal level.
The OCC still contends that firm and market-maker orders, which consistently run over sixty percent of the total, cannot be marked opening or closing for competitive reasons. We readily admit that a large neon sign flashing "opening" or "closing" on every ticket a market maker activates would unduly restrict his (or her) floor activities. However, in this high-technology age of the 1980s, there are undoubtedly multiple ways in which orders could be identified as opening or closing without creating the real and implied threats to the security of privileged information of those in front of or behind the various posts on the trading floors.
If properly motivated, we feel the OCC could easily release opening and closing volume for all orders each day, along with the other statistics created by their overnight clearing activities.
Until that happens, volume statistics will continue to perplex and frustrate technicians attempt-
ing to use them in conventional ways.
Whether option volume is used in its more traditional role as a confirmation of price movement
or, in what has become quite a fad with the introduction of broad-based cash-settlement op-
tions, as a foundation for put/call ratios, option volume can be a useful tool for technicians who
can break out the supply and demand portions (i.e., brokerage firms who can tabulate both their own and their customers' opening and closing volume) on a timely basis. Other than that
relatively small application, we submit that technical indicators using option volume are, at the
best, marginally efficient and, at the worst, dangerous.
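One concrete way to see the classification problem is a small tally that signs volume only where opening/closing flags exist. The convention below (an opening buy is fresh demand, an opening sell, a new short, is fresh supply, and any unflagged firm or market-maker side is left unclassified) is our own bookkeeping for illustration, not the author's exact scheme:

```python
def tally_option_flow(trades):
    """Each trade: (contracts, buyer_flag, seller_flag), where a flag is
    'opening', 'closing', or None for the unmarked firm/market-maker volume."""
    demand = supply = unclassified = 0
    for qty, buyer, seller in trades:
        if buyer == "opening":
            demand += qty           # a new long position: fresh demand
        if seller == "opening":
            supply += qty           # a new short position: fresh supply
        if buyer is None or seller is None:
            unclassified += qty     # cannot be signed without OCC flags
    return demand, supply, unclassified
```

With most volume unflagged, the unclassified bucket dominates, which is exactly why the author calls conventional volume indicators on options marginal at best.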
BREADTH
Because of having a set life span, which in the spectrum of investments is relatively short-term,
options naturally have a built-in downside bias as their time value evaporates. Thus, if you are
attempting to work with advance/decline data in the conventional sense, you must first elimi-
nate the downside bias. But that is easier to say than do. We have expended a lot of man and
computer hours analyzing various option series in an effort to find a consistent pattern of ero-
sion. When we first started, we felt it would be a task easily and quickly disposed of, as the
severe downside bias should be in the weeks immediately prior to expiration. However, in the
real world, where expanded position limits, new products, and a sharp increase in the sophistication level of the players exploded the number and size of hedging and arbitrage programs, a predictable erosion curve has proven as elusive as a feather in a windstorm.
We started with the hypothesis that a set of option series would adopt a pattern of eroding time
value dictated by the characteristics of the underlying stock and general-market psychological
pressures. Perhaps it was merely a case of being naive, but we felt, as long as the logic was
there, the reality of it would follow. What we failed to anticipate was a change in the basic forces
brought about by proliferation and unique external pressures, such as straight and reverse
conversions, illiquidity, and huge arbitrage programs that employ options. In our early work we
did not allow for the almost unbelievably high level of sophistication that would be achieved by
the market professionals on the floor(s) that would in turn create an unprecedented level of
efficiency. Fortunately, our learning curve allowed us to adjust to the highly efficient market that
evolved, even though, in the process, we had to scrap most of our initial programs.
The erosion of time value, which produces the downside bias in breadth indicators, is still fairly
consistent for calls when monitored as a group (i.e., expiration series, exchange, in/out-of-the-money, etc.). However, puts are relatively erratic even when smoothed by the use of large universes (see Charts A-1 and A-2).
After extensive computerized cross-screens of the corresponding aspects of option series, we
have come to the conclusion that the relatively erratic erosion curve of puts is a direct result
of the liquidity quotient. This hypothesis is supported by the application of simple logic.
The one thing puts and calls do have in common in this area is the almost total unpredict-
ability of individual contracts, even on the same stock and same cycles. On the surface, it would
appear that the erosion patterns are controlled to a large degree by the dictates of the market
professionals based on what part that particular contract plays in their overall position/hedge
strategy.
Even though none of the foregoing, either separately or collectively, represent an insurmount-
able hurdle in achieving penetrating technical analysis of the options market, they do present
challenges worth pursuing with more sophisticated analytical processes.
One of the unique features of listed stock options that may well end up being the foundation of a truly historic breakthrough in technical analysis is . . .
REMAINING TIME VALUE
In the interest of brevity, we will summarize our conclusions on time value by saying it is the sum and total of all supply and demand pressures that are at work on the price structure of an option contract at any particular point in time. It is the bottom line of an option's financial statement, revealing which pressure is in excess and to what degree.
If you are a writer, you want all the time value you can get, because it represents your potential gain. If you are a buyer, you don't want to pay anything over intrinsic value if you don't have to. As a matter of fact, you would like to be able to buy the contract you want at a discount if you could, and quite often can if it is far enough in-the-money. NOTE: In-the-money options do not necessarily go point-for-point with their underlying stocks, as the attached tabulations for April 11, 1985, show. (See Tables I and II.)
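Time value as used here is simply the option's premium less its intrinsic value, and, as the note observes, a deep in-the-money option trading at a discount has negative time value. A minimal sketch; expressing it as a percentage of the stock price is one way to get a common denominator across stocks, though the article does not spell out its exact divisor:

```python
def intrinsic_value(kind, stock_price, strike):
    """What the option would be worth exercised immediately."""
    if kind == "call":
        return max(stock_price - strike, 0.0)
    return max(strike - stock_price, 0.0)

def time_value(kind, option_price, stock_price, strike):
    """Time value = premium minus intrinsic value; a negative result is
    the in-the-money 'discount' noted above."""
    return option_price - intrinsic_value(kind, stock_price, strike)

def pct_time_value(kind, option_price, stock_price, strike):
    """Time value as a percentage of the stock price, for cross-stock comparison."""
    return 100.0 * time_value(kind, option_price, stock_price, strike) / stock_price
```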
As a consequence of this, we use time value as one of the primary screening devices for the
selection of options both for writing (primary) and purchase (secondary).
It seems logical, therefore, that time value would be an excellent base for constructing timing
indexes for the overall options market.
The application of logic tells you time value for calls should increase during a market uptrend
as enthusiastic buyers in their exuberance bid up prices. Conversely, the time value for calls
should decrease as a market correction unfolds and buyers become more and more reluctant
to be on the long side. The opposite to the above should be the pattern of time value for puts.
Apparently, this logic is faulty, because that is not how time value equates to overall price structures, at least not consistently enough to be of value. In a very loose interpretation, the time value for puts does go in the opposite direction from their underlying stocks, but time value for calls does not correlate with their stocks even loosely. (See Chart B.)
This lack of correlation is undoubtedly a direct result of the high degree of efficiency achieved
by the proliferation of sophisticated hedging and arbitrage programs during recent years.
Getting back to logic for a minute, it would seem that comparing time value of puts to calls
(on a percentage basis so you have a common denominator) would produce a very mean-
ingful ratio. When the ratio gets high, the price structure of the stocks should be overextended,
and thus, a top could be expected to form. When the value is low, prices should be in the pro-
cess of bottoming as the corrective process comes to a conclusion.
Based on the above, we programmed our computer to calculate this data so we could con-
struct a put/call ratio of percentage time value. We only used stocks that had both puts and
calls (starting in the summer of 1977 with twenty-five) so there would not be any distortion in
the ratio with only one side of the equation (i.e., calls only). Our thoughts were that it would be a superior indicator to one using volume (we couldn't factor in opening and closing volume), prices (distorted by the imbalance of in- or out-of-the-money contracts) or other criteria.
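The construction reduces to a simple computation: for each stock carrying both puts and calls, take the percentage time value on each side, then ratio the put total to the call total across the universe. A minimal sketch, with a second helper for the low-level-then-sharp-rise pattern the author later describes (the averaging detail and the threshold/factor parameters are our assumptions):

```python
def put_call_tv_ratio(universe):
    """universe: (avg_put_pct_time_value, avg_call_pct_time_value) per stock,
    restricted to stocks that have both puts and calls listed."""
    put_total = sum(p for p, c in universe)
    call_total = sum(c for p, c in universe)
    return put_total / call_total

def sharp_rise_from_low(ratio_history, low_level, rise_factor=1.5):
    """The confirming pattern: the ratio sits at a relatively low level,
    then experiences a sharp and substantial increase."""
    prev, last = ratio_history[-2], ratio_history[-1]
    return prev <= low_level and last >= prev * rise_factor
```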
The resultant put/call ratio is depicted on Chart C. We have indicated most of the intermediate-term tops and bottoms in price with the tie-lines. Up until the early 1980s, the put/call ratio, at the best, could be labeled interesting, provocative, or enigmatic. It most certainly was not the historic breakthrough we were looking for, even though we felt quite strongly that time value reflects all internal and external pressures being brought to bear on prices.
However, during recent years one characteristic has become extremely reliable in confirming
a major market advance:
When the put/call ratio is relatively low and experiences a sharp and substantial
increase, a major advance in the overall market is virtually assured. (See Chart C.)
In all candor, we must admit the logic of why this occurs escapes us. Our logic tells us it should
be just the opposite; time values for calls should expand rapidly as a major uptrend is launched,
not puts. However, here again the cause of this effect is undoubtedly the result of massive
hedging and arbitrage programs which utilize the purchase of puts. This excess demand, which
is anticipatory, produces a sharp expansion in the time value for puts while the expansion of
demand for calls is reactionary pressure produced by the normal lag-time sequence of trend
following decision making processes.
Thus, by utilizing one of the unique features of options, you can develop a relatively consistent
timing device for the price structure of the underlying stocks. This tool can, therefore, be added
to the technicians arsenal of conventional market timing tools to arrive at an even stronger
conviction as far as impending market behavior is concerned.
The next logical search takes you in quest of a method to effectively use this unique charac-
teristic of options in an efficient screening process.
AVERAGE (AVR) PUT AND CALL PERCENTAGE TIME VALUE FOR INDIVIDUAL STOCKS
A SCREENING TECHNIQUE
Let's start with the assumption that you have recognized and accepted the fact that time value is a valuable piece of information you can use to increase your performance, regardless of the option strategies you employ. Now, how do you obtain and put this information to use?
Our experience has taught us that the most productive sequence in screening any type of data is to start with the stock and end up with the option. Once you accept the validity of time value,
it becomes easy to understand why some stocks consistently command relatively high and
others relatively low time values.
A high-velocity, highly volatile stock that has a lot of sex appeal to investors is, naturally, going to have large time values for both its puts and calls. Stocks in the area of technology are obvious examples, as well as "swinger" stocks that are the favorites of professional traders because they can get a lot of action out of them--in both directions.
On the other side of the equation, you have low beta, slow moving, pachyderm-type stocks
that consistently have relatively low time values. This is primarily due to low demand from
traders who cannot make any money on stocks that are asleep and from investors who, by
the very nature of the stocks, do not feel compelled to use their options to hedge their stock
positions. Utilities are the most common among these types of stocks.
Because of the basic desire of option buyers to be "cheap" and option writers to be "greedy," you would naturally expect writers to be attracted to the former and buyers to the latter, which they are. But here is where actual strategies must be fitted with the correct merchandise (options). As an example, income writers, as a general rule, require (desire) very stable stocks, and, thus, they would not be interested in the normally high-beta, high time value stocks, whereas the speculative and aggressive writers would feel right at home with these "swingers," as would investors who write options as hedging techniques.
By compiling a relative strength (rank) of the percentage time value of the almost four hundred
underlying stocks, you have a logical screening technique for both buyers and writers of op-
tions. (See Tables III and IV).
First, a general explanation of the printouts. There are four (4) different listings of forty stocks which are appropriately labeled. The various columns are self-explanatory, except you should be aware that the last column (NO) refers to the number of put or call options for that stock, depending on the heading of that list.
As you glance through these four lists, you can see the pattern of stocks we described earlier (i.e., highest time value = electronic and computer stocks; lowest time value = utilities and banks). There will, naturally, be some that don't fit the mold, but that's because they are there for special-situation reasons or are, in reality, different than they are generally perceived to be so far as velocity and volatility are concerned. In general, however, you will be able to accept the placement of most of the stocks in each category.
There are several cross-screens of these lists that are very fruitful exercises. First, you have the stocks that appear in the highest average percentage time value lists for both puts and calls. These are the high-velocity, highly volatile stocks whose options were consistently in demand, at least for the week tabulated, enough to produce extremely high time values. These time values reflect the excess demand better than do most other numbers you could come up with. Fifteen of the forty stocks were in both lists in the previous week. As you would expect, the names pretty well fit the mold and contained some exceptionally large betas, which confirmed their high degree of volatility.
The next cross-screen was, logically, the stocks in the lowest AVR percentage time value lists for both puts and calls. The eight names on Table IV contain a low-beta utility as well as stocks like Western Company of North America, Lehman Corporation, and Tri-Continental. The options on these stocks are, currently at least, out of favor for both buyers and writers.
The third list, and quite frankly the one that most piqued our imagination, was the stocks that were in the highest AVR percentage time value list for calls and the lowest AVR percentage time value list for puts.
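These cross-screens reduce to ranking and set intersection. A minimal sketch over a universe of tickers (the tickers in the test and the helper names are made up; the top-forty cutoff follows the tables):

```python
def ranked_set(pct_time_value, n=40, highest=True):
    """The top (or bottom) n tickers ranked by average percentage time value."""
    ordered = sorted(pct_time_value, key=pct_time_value.get, reverse=highest)
    return set(ordered[:n])

def cross_screen(puts, calls, n=40, puts_highest=True, calls_highest=True):
    """Tickers appearing on both lists; flip the flags to build the other
    screens, e.g. highest calls combined with lowest puts."""
    return ranked_set(puts, n, puts_highest) & ranked_set(calls, n, calls_highest)
```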
If you locate a stock whose puts and calls consistently have one of the highest AVR percentage time values out of all three hundred seventy-four underlying stocks, you know you have a high-velocity, highly volatile stock. Because the options on this stock, both puts and calls, are under heavy demand, you know you are going to be required to pay a hefty premium (i.e., time value) if you want to buy them. If you are a writer, you know you are going to receive a hefty bonus for being on the supply side of a transaction. How you can use this information to your advantage is relatively straightforward, especially for those utilizing writing strategies.
Now, let's take a look at a totally different breed of cat. Let's say you isolate a stock whose calls have one of the largest AVR percentage time values of all underlying stocks, while its puts carry one of the smallest. What does this tell you, and how can you use this information to increase the performance of your option strategies? The first logical conclusion is either the calls are overvalued, the puts are undervalued, or both. It is an obvious case of unbalanced supply and demand pressures on the options caused by any number of possible reasons.
Your strategy is, therefore, relatively apparent: you would want to go short the calls and go
long the puts. This strategy, of course, takes into consideration the point we made earlier that
all other considerations must be equal, or at least neutralized. In other words, you should not
necessarily act on the AVR percentage time values and disregard all the other factors that
could affect your results (i.e., overall market conditions, the underlying stock's technical condition,
status of the individual option, etc.). However, the point we want to make is that such a strategy
could be viable enough to employ independent of, or in conjunction with, your normal strategies.
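The call-rich, put-cheap screen described above can be sketched as a simple ranking. The `avr` figures and stock names below are illustrative placeholders, not values taken from the tables.

```python
# Hypothetical screen for the situation described in the text: rank stocks
# by (average call % time value) minus (average put % time value) and
# surface the extremes. Data is illustrative only.
avr = {
    # stock: (avg call % time value, avg put % time value)
    "STOCK A": (4.9, -0.5),
    "STOCK B": (3.2, 2.9),
    "STOCK C": (0.8, -3.1),
}

def call_put_spread(entry):
    name, (call_avr, put_avr) = entry
    return call_avr - put_avr

# The largest spread marks calls carrying rich premiums while puts trade
# near (or below) intrinsic value -- a candidate for writing calls and
# buying puts, subject to the other considerations noted in the text.
ranked = sorted(avr.items(), key=call_put_spread, reverse=True)
for name, (c, p) in ranked:
    print(f"{name}: call {c:+.1f}%  put {p:+.1f}%  spread {c - p:+.1f}")
```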
In conclusion, we must confess we have experienced a great deal of success in applying proven
technical analysis techniques, such as first and second derivatives of price, to stock options
and are not in the least deterred in our efforts. We are, however, intrigued by the challenges
presented by the items mentioned in this article and will continue to pursue them, even if
a successful conclusion is reached by someone other than ourselves. Indeed, we would be
extremely thrilled to learn that these challenges have already been overcome, if the conquerors
are willing to share the results.
[Chart: "% TIME VALUE BY EXPIRATION SERIES" -- call and put percent time values plotted across the JAN 1982, FEB 1982, and MARCH 1982 expiration series.]
CHART A-1 (JAN 1985)
CHART A-2
CHART B
CHART C
LOWEST % TIME VALUE (CALLS)
[Table I: for each expiration series from APR through DEC, roughly the ten call options with the lowest percent time value. Columns: stock, option, stock price, option price, % in/out, % time value. For example, LEHMAN CP AUG 10: stock 13.75, option 3.62, 37% in, -0.9% time value.]
TABLE I
LOWEST % TIME VALUE (PUTS)
[Table II: the companion list for puts, by expiration series. Columns: stock, option, stock price, option price, % in/out, % time value. For example, LEHMAN CP AUG 20: stock 13.75, option 5.75, 31% in, -3.6% time value.]
TABLE II
AVERAGE PERCENT TIME VALUES for OPTION UNDERLYING STOCKS
Run on: 11-Apr-85   RUN #: 1099
[Table III: the stocks with the highest average (AVR) % time value. Ranked according to calls, the list is led by MOHAWK DAT, GERBER SCI, ALLIS CHAL, C B S INC, and L T V CORP, with group averages of 4.91 (calls) and 1.88 (puts). Ranked according to puts, it is led by MOHAWK DAT, VALERO ENG, and AMERADA HS, with group averages of 3.77 (calls) and 2.97 (puts). Columns: stock, AVR call, AVR put, and a count (NO); most numeric entries are illegible in the source.]
TABLE III
LOWEST AVR % TIME VALUE, RANKED ACCORDING TO CALLS and RANKED ACCORDING TO PUTS
[Table IV, first part: the stocks with the lowest average (AVR) % time value. The call ranking is led by STORG TECH, VERBATIM, LEHMAN CP, WSTR CO NA, GLOBAL MAR, WSTR UNION, READNG-BAT, BETHLHM ST, PARKR DRLL, and SKYLINE CO; the put ranking includes WSTR CO NA, DUKE POWER, CONS EDISO, SHELL OIL, CONTL TEL, CROWN ZELL, SOUTHRN CO, and AM ELEC PW. The numeric columns are largely illegible in the source.]
DUAL LISTINGS
HIGHEST CALLS / HIGHEST PUTS:
MOHAWK DAT, GERBER SCI, L T V CORP, COMMDR INT, ALLI STORE, CRAY RSRCH, TELEX CORP, ASARCO INC, VALERO ENG, NTL SEMICD, CHRYSLER, GENRAD INC, VIACOM INT, COASTAL CP
LOWEST CALLS / LOWEST PUTS:
WSTR CO NA, PARKR DRLL, SHELL OIL, AM ELEC PW, VERBATIM, LEHMAN CP, TRI-CONTL, INTL FLAVO
HIGHEST CALLS / LOWEST PUTS:
ALLIS CHAL, STORG TECH, MGM UA ENT, GLOBAL MAR
HIGHEST PUTS / LOWEST CALLS:
CROWN ZELL, CARTER H H, AVON PROD
TABLE IV
BIOGRAPHY
After completing his formal education at UCLA, David Holt joined a Certified Public Accounting
firm in Southern California, where he specialized in Municipal Auditing. He joined a NYSE member
firm in 1961 as a registered representative. After several years, he went into private business,
where he continued to gain experience as an investor. In January, 1972, he joined Trade Levels
as Director of Advanced Planning. He is now President of T L Communications, Inc. and Editor
of the nationally known Trade Levels Report and the Trade Levels Option Report.
This page left intentionally blank for notes, doodling, or writing articles and comments for the MTA Journal.
A VIEW FROM THE FLOOR
Ralph Fogel
The primary qualification for being an effective trader is experience. Experience is what the
trader uses to define the three most important factors that underscore the decision-making
process. Those factors are risk and reward, competitive edges, and the environment.
Risks and rewards vary, depending on the trader's area of responsibility. For example, a
specialist generally tries to keep his inventory small so that he can take advantage of the times
when there is extreme buying or selling going on in his stocks. The specialists gauge their
risks and rewards on the movement of stocks, balancing longs against shorts, opting for some-
what smaller profits in light of assuredly smaller losses.
On the other hand, option traders aim to set up positions with as little risk as possible for a wider
play. The options trader takes advantage of the different options within the stock that he or she
is trading, often basing risk decisions on the values of the individual options. These decisions
are also based on order flow and on the value of any given spread within the many options
of a security.
Just as options traders draw on much different criteria than do specialists or off-the-floor trad-
ers when considering risks and rewards, so do different criteria pave the way when the traders
consider their competitive edges. Specialists have an edge in that they handle the same stocks
daily; therefore, they gain a familiarity with the ranges of their stocks. They have a feel for
any movement. They sell market strength and buy market weakness. Specialists have access to
the ticker tape. For traders concerned with moment-to-moment transactions within a security,
the ticker tape is the most important source of information, providing an edge over those
who do not watch the tape on a moment-to-moment basis.
Unlike specialists, options traders are not interested in the movement of stocks. They try to
maximize the fact that they are buying and selling value. These traders try to buy spreads
undervalued and sell them overvalued. The options trader's competitive edge is that he sees
order flow in the various strikes within the options, which off-the-floor traders do not see.
Prior to a recent rule, off-the-floor traders had a more difficult time than others when it came
to establishing a competitive edge. However, since the advent of the Clearing Member Trade
Agreement (CMTA), these traders now pay to see order flow. They also have an edge in that
they do not have to be on two sides of a market at all times and do not have to make specific,
standard required allotments of inventory on every transaction. In many ways, off-the-floor traders
are not as limited by the rigid guidelines imposed on specialists and options traders. However,
it could be argued that what the off-the-floor traders gain by having less stringent guidelines,
they lose in environmental deprivation.
The floor of the exchange is an environment like no other. Being on the floor gives traders a
feel for the market. There is more to it than the ticker tape and order flow. Traders can
almost feel the surge of orders--the tempo increases, the noise level increases, and the
movement on the floor increases. It is the wordless sounds of excitement that inform traders
of a turn in the market or of a rally.
Specialists and options traders see the ticker tape and actually see the individual sales taking
place--not just the accumulation of them as seen on a bar graph. Being stationed in that
environment, those traders see more than an end-of-day chart showing the market's range,
its highs and lows by closing time. They are provided access to information on a firsthand
basis. The more information that the traders abstract from the environment, the better able
they are to make profitable decisions.
It is the experienced trader who learns to view various communication situations as environ-
ments. Even the off-the-floor traders realize that the market chatter on a bus or train ride to
work, the street noise on their way to get coffee, and the news media's coverage of rumors
are all valuable communication environments that just may hold the key to where a stock is
headed on any given day.
In conclusion, it is not any single factor (risk and reward, competitive edges, the environment)
that influences a trader's decisions. It is all the aforementioned factors being processed
simultaneously that provide the experienced trader with the information needed to make the best
possible decisions.
BIOGRAPHY
After graduating from Brooklyn College, Mr. Fogel was employed by Spear, Leeds, where he
became vice-president in 1977. In 1980, he became a general partner of Spear, Leeds and
Kellogg, where his duties included all trading operations on the American Stock Exchange Floor.
In April, 1984, Mr. Fogel was appointed a floor official of the American Stock Exchange. Ralph
Fogel is currently a specialist for the XMI options and senior partner on the American Stock
Exchange Floor.
CENTERFOLD:
THE SEMINAR INDICATOR
1983 MTA
[Chart: QSXC (BF) compared to QSXC (BF) -- the seminar indicator, weekly, overlaid on the S&P 500. Summary box: LAST -21.2281; ABOVE 17% 8%; .0000 0% 0%; BELOW 83% 92%; TOTALS 100 4838. Produced 14-MAY-83.]
Above is the result as produced on 14-May-83 at 4:55 P.M., eastern time. It represents the ratio
of the five-week change in share liquidity to the five-week change in value of the Standard and
Poor's 500 Index. The result of the ratio of these changes is arithmetically smoothed on a five-
week basis again. The center of the chart, for presentation purposes, is put at 50 units on the
left-hand scale (indicator scale). The Standard and Poor's 500 weekly value, the O's, is overlaid
using the right-hand scale.
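The construction just described can be sketched in a few lines. The function and series names are hypothetical, and the published indicator comes from the authors' own charting system, so this is only an illustration of the stated arithmetic.

```python
# A sketch of the seminar indicator as described in the text: the ratio of
# the five-week change in share liquidity to the five-week change in the
# S&P 500, then arithmetically smoothed over five weeks. Assumes the
# five-week S&P change is never zero.
def seminar_indicator(liquidity, sp500, span=5):
    raw = [(liquidity[i] - liquidity[i - span]) / (sp500[i] - sp500[i - span])
           for i in range(span, len(sp500))]
    # arithmetic (simple moving average) smoothing over the same span
    return [sum(raw[i - span + 1:i + 1]) / span
            for i in range(span - 1, len(raw))]
```

With steadily trending inputs the ratio is constant, which makes the smoothing easy to verify by hand.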
Below is the chart of the indicator as of April 24, 1984, about a year later and just prior to the
1984 MTA Seminar.
At the top of the facing page is the chart of the indicator as of April 27, 1985. Below it is the
daily (rather than weekly) variation of the seminar indicator chart.
[Chart: the seminar indicator as of 24-Apr-84, weekly. Summary box: LAST 37.5361; ABOVE 6% 2%; .0000 0% 0%; BELOW 94% 98%; TOTALS 99 3627.]
[Chart: the seminar indicator as of 27-Apr-85, weekly. Summary box: LAST -35.3257; ABOVE 27% 13%; TOTALS 100 3717.]
[Chart: the daily variation of the seminar indicator. Summary box: BELOW 66% 71%; TOTALS 145 24714.]
OPTIMIZATION
Software Review Workshop
Barbara B. Diamond
PRE-OPERATION DECISIONS
ALL SYSTEMS:
I. Natural Cycle Test
A. Relative Strength Index 5, 7, and 9
B. Fourier Analysis
II. Order of Analysis
A. Optimize Individual Studies
B. Combine Studies for Systems Analysis
III. Length of Data
IBM - 510 Data Points - approximate equivalents
A. Intra Day - 80 days, 1 hour
B. Daily - 2 years
C. Weekly - 10 years
D. Monthly - 42 years
APPLE - 240 Data Points - approximate equivalents
A. Intra Day - 40 days, 1 hour
B. Daily - 1 year
C. Weekly - 4 years
D. Monthly - 20 years
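The equivalences above follow from trading-calendar arithmetic. A quick check, assuming roughly 253 trading days and 52 weeks per year and about 6.5 hourly bars per session (those counts are assumptions, not figures from the workshop):

```python
# Sanity check of the data-capacity table: how far back a fixed number of
# data points reaches at each bar size.
for points, label in [(510, "IBM"), (240, "APPLE")]:
    print(f"{label}: {points} data points")
    print(f"  1-hour bars : about {points / 6.5:.0f} trading days")
    print(f"  daily bars  : about {points / 253:.1f} years")
    print(f"  weekly bars : about {points / 52:.1f} years")
    print(f"  monthly bars: about {points / 12:.1f} years")
```

The results land close to the table's round figures (510 daily bars is almost exactly 2 years; 510 monthly bars is 42.5 years).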
PRODUCT REFERENCE
Product: Compu Trac    Market: Stock/Futures
Compu Trac, Inc.
Box 15951
New Orleans, LA 70175
Product: Profit Optimizer    Market: Stock/Futures
Micro Vest
Box 272
Macomb, IL 61455
Product: ProfitTaker    Market: Futures
Distek, Inc.
Box 1108
Lake Mary, FL 32746-9990
Product: TechniFilter    Market: Stock
RTR Software Systems, Inc.
444 Executive Center Boulevard
El Paso, TX 79902
Natural Cycle Test - Fourier Analysis
[Chart: Fourier analysis output used for the natural cycle test.]
FIGURE 1 - INTRODUCTION
Formulas :
C1 : C                @CLOSE
C2 : CY1              @CLOSE YESTERDAY
C3 : CA14             @14 DAY MOV AVG
C4 : CG7              @7 DAY RSI
C5 : CG14             @14 DAY RSI
C6 : J                @POSITIVE VOLUME INDICATOR
C7 : P                @PRICE VOLUME TREND
C8 : ((H-L)/C)R100    @PERCENT DAILY VOLATILITY
Conditions :
1 : C1>C2    @BUY
2 : C1<C2    @SELL
3 : C1>C3    @BUY
4 : C1<C3    @SELL
5 : C4>C5    @BUY
6 : C4<C5    @SELL
7 : C6>100
8 : C8>1.50
FIGURE 2 - TECHNIFILTER
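Conditions 5 and 6 compare the 7-day and 14-day relative strength index (the G command). For reference, a minimal sketch of Wilder's RSI; the seed-and-smooth details below are the standard Wilder construction, not necessarily TechniFilter's exact implementation.

```python
# Wilder's relative strength index over n periods: seed the average gain
# and loss with a simple average, then apply Wilder's recursive smoothing.
def wilder_rsi(closes, n):
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[:n]) / n
    avg_loss = sum(losses[:n]) / n
    for g, l in zip(gains[n:], losses[n:]):
        avg_gain = (avg_gain * (n - 1) + g) / n
        avg_loss = (avg_loss * (n - 1) + l) / n
    if avg_loss == 0:
        return 100.0  # no losses in the window: RSI pegs at 100
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```

A screen like condition 5 would then compute `wilder_rsi(closes, 7) > wilder_rsi(closes, 14)` for each stock.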
Formula Set : SAMPLE    Date : 02-22-1985    Ordered By Column 5
[Table: a TechniFilter report on 40 stock symbols (,TXN, ,GRA, ,DGN, ... ,MD), giving each stock's value under formulas 1-8 of Figure 2 -- close, yesterday's close, 14-day moving average, 7-day RSI, 14-day RSI, positive volume indicator, price volume trend, and percent daily volatility -- sorted by column 5, the 14-day RSI.]
FIGURE 3 - TECHNIFILTER
Summary Of Results : SAMPLE    Date : 02-22-1985
[Table: for the same 40 symbols, the rank (1-40) of each stock under each of the eight formulas (FORMULA RANK), together with the conditions (1-8) each stock satisfies.]
FIGURE 4 - TECHNIFILTER
Formula Set : SAMPLE    Symbol : ,LUV
[Table: four TechniFilter optimization runs for ,LUV, stepping the formula parameters (e.g., values 8 through 24) and reporting, for each parameter combination, the long-side and short-side gain and number of trades.]
FIGURE 5 - TECHNIFILTER
command    function
( )An      Today's n-day simple moving average of ( ).
B          Negative Volume Indicator (NVI).
C          Today's closing price.
( )F1      Sum of all values of ( ) from the beginning of the data base to the present.
( )Gn      The n-day Welles Wilder relative strength of the quantity ( ).
H          Today's high price.
( )I1      Indicates if ( ) is positive, zero, or negative by assigning 1.00, 0.00, or -1.00 respectively.
J          Positive Volume Indicator (PVI).
K          On Balance Volume (OBV).
L          Today's low price.
( )Mn      Largest value of ( ) over the last n days.
( )Nn      Smallest value of ( ) over the last n days.
P          Price Volume Trend (PVT).
Q          Daily Volume Indicator (DVI).
( )Rn      Multiply ( ) by the positive number n.
( )Sn      Sum of the last n values of the quantity ( ).
( )T       Copy the quantity ( ).
( )U1      Absolute value of ( ).
V          Today's trading volume.
( )Wn      Slope of the Least Squares line of the last n values of ( ).
( )Xn      Today's n-day exponential moving average of ( ). Here, n must be bigger than two.
Zn         Use the computation in an earlier formula.
( )n       Compute the nth power of ( ), for n positive.
FIGURE 6 - TECHNIFILTER
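Commands B and J above are the Negative and Positive Volume Indicators. A common construction follows (a sketch: the base value and scaling are assumptions, and TechniFilter's exact definition may differ): the PVI accumulates the percent price change only on days when volume rises, the NVI only on days when volume falls.

```python
# Positive and Negative Volume Indicators, built in one pass over the
# close and volume series. Each starts at an arbitrary base of 100.
def volume_indicators(closes, volumes, base=100.0):
    pvi, nvi = [base], [base]
    for i in range(1, len(closes)):
        pct = (closes[i] - closes[i - 1]) / closes[i - 1]
        if volumes[i] > volumes[i - 1]:
            pvi.append(pvi[-1] * (1 + pct))  # volume up: PVI tracks price
            nvi.append(nvi[-1])
        elif volumes[i] < volumes[i - 1]:
            pvi.append(pvi[-1])
            nvi.append(nvi[-1] * (1 + pct))  # volume down: NVI tracks price
        else:
            pvi.append(pvi[-1])              # volume unchanged: both hold
            nvi.append(nvi[-1])
    return pvi, nvi
```

Condition 7 of Figure 2 (C6 > 100) then reads as "the positive volume indicator is above its base value."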
ProfitOptimizer Summary Report for [SF]    System Date [06-27-1984]    Profit Set [Print All]    Page 1
[Table: optimizer output stepping three parameters over the values 3-5, 8-10, and 45-47, with Open entry and exit throughout, reporting for each of the 27 combinations the % win, cumulative P/L, number of trades, max loss, max drawdown, and profit factor.]
Total Successful Criteria Printed [ 27]
Total Failed Criteria not Printed [ 0]
Total of all Tested Criteria..... [ 27]
Total Test Time [00:04:55]    End of Optimizing
FIGURE 7 - PROFIT TAKER
ProfitOptimizer Summary Report for [SF]    System Date [06-27-1984]    Page 1
[Table: a second pass holding the 5/9/45 parameter set fixed and stepping the long and short sensitivity parameters, again with Open entry and exit.]
Total Successful Criteria Printed [ 27]
Total Failed Criteria not Printed [ 8]
Total of all Tested Criteria..... [ 35]
Total Test Time [00:06:21]    End of Optimizing
FIGURE 8 - PROFIT TAKER
The ProfitAnalyst Indicators: Status, Timing Filter, Short Direct, Long Direct, L Sensitivity, S Sensitivity
[Table: ProfitOptimizer summary report for the 5/9/45 parameter set, testing the four Open/Close combinations of entry and exit.]
FIGURE 9 - PROFIT TAKER
TOTAL CLOSED OUT TRADES .............................. 17
LONG WINNING TRADES .................................. 3
SHORT WINNING TRADES ................................. 9
TOTAL WINNING TRADES ................................. 12
LONG LOSING TRADES ................................... 3
SHORT LOSING TRADES .................................. 2
TOTAL LOSING TRADES .................................. 5
TOTAL BREAKEVEN TRADES ............................... 0
% WINNING TRADES ..................................... .705
% LOSING TRADES ...................................... .294
% BREAKEVEN TRADES ................................... 0
TOTAL REALIZED PROFITS ............................... 25596
TOTAL REALIZED LOSSES ................................ -7801
CUMULATIVE PROFIT OR LOSS ............................ 17795
RATIO CUMULATIVE PROFIT TO TOTAL REALIZED LOSSES ..... 2.281
MAXIMUM WINNING TRADE ................................ [illegible]
MAXIMUM LOSING TRADE ................................. [illegible]
AVERAGE WINNING TRADE ................................ 2133
AVERAGE LOSING TRADE ................................. -1560.200
RATIO AVERAGE WINNING TO AVERAGE LOSING TRADE ........ 1.367
AVERAGE PROFIT OR LOSS PER TRADE ..................... 1046.764
MAXIMUM NUMBER CONSECUTIVE LOSING TRADES ............. 1
MAXIMUM DRAWDOWN - CONSECUTIVE TRADES ................ [illegible]
MAXIMUM DRAWDOWN - CLOSED OUT TRADES ................. [illegible]
PROFIT FACTOR ........................................ 3.281
SHARPE RATIO ......................................... [illegible]
T-BILL RATE .......................................... [illegible]
LEVERAGE FACTOR ...................................... [illegible]
COMMISSIONS - CLOSED OUT TRADES ...................... [illegible]
EXECUTION SLIPPAGE ................................... [illegible]
RATIO COMM AND SLIP TO TOTAL PROFIT OR LOSS .......... [illegible]
RATIO COMM AND SLIP TO CUM NET REALIZED PROFIT ....... [illegible]
TOTAL UNREALIZED PROFITS ON OPEN TRADE ............... [illegible]
TOTAL UNREALIZED LOSSES ON OPEN TRADE ................ [illegible]
TOTAL TRADING DAYS ................................... 638
TOTAL HOLIDAYS ....................................... 16
TOTAL DAYS IN FILES .................................. 1515
TOTAL [illegible] .................................... 8
CONVERSION FACTOR .................................... 2
POINT [illegible] .................................... 12.500
DAILY LIMIT .......................................... 150
END OF HISTORY TEST FOR RANGE: 1
FIGURE 10 - PROFIT TAKER
Strategies Options
0. Move to Index Menu
1. Buy or Sell Only (Hedging)
2. Select a Stop Strategy
3. Combine Two Indexes
4. Implement Pyramiding Strategy
5. Trade Toward Cash Price
6. Define Your Entry Points
Indexes
 1 - STD. MOV. AVG.        2 - REL. STR. IND.
 3 - PERCENT R             4 - HI/LO OSCILLATOR
 5 - VAR. OSC.             6 - PAR. TIME PRICE
 7 - VOLATILITY            8 - SWING INDEX
 9 - G-PERCENT            10 - VAR. G-PERCENT
11 - MA. VAR. G%          12 - VOL. OP/INT IND.
13 - REGRES. SLOPE        14 - ACCUM/DIST.
15 - MA OSCILLATOR        16 - WHT. MOV. AVG.
17 - EXP. MOV. AVG.       18 - DIR. MOV. IND.
19 - MA OSCL. CROS.       20 - MOV. AVG. BANDS
LENGTH OF MA #1 3
INCR OF MA #1 3
END OF MA #1 12
LENGTH OF MA #2 6
INCR OF MA #2 4
END OF MA #2 18
LENGTH OF MA #3 9
INCR OF MA #3 6
END OF MA #3 27
FIGURE 11 - PROFIT OPTIMIZER
CNTRCT/STK . . . . JUN 84 T-BILLS INDEX #1 . . . . 16 DAY RSI
COMMISSION . . . . 50 SHORT AT . . . . 75
TRADING . . . . . BUYING ONLY LONG AT . . . . 25
TAKING PROFITS . . 30 INDEX #2 . . . . 10 DAY %R
TWO INDEXES . . . SAME SIGNALS SHORT AT . . . . 20
RE-ENTER . . . . . REVERSE SIGNAL LONG AT . . . . 80
# OF TRADES . . . . . . 2 # OF LOSING TRADES . . . 0
COMMISSIONS . . . . . . 100 LARGEST LOSING TRADE . . 0
LARGEST UNREAL LOSS . . -24 # OF WINNING TRADES . . 2
TOTAL PROFIT OR LOSS . 1400 LARGEST WIN. TRADE . . . 700
# OF DAYS IN MARKET . 33 AVG. DAILY GAIN/LOSS . . 42
PROFIT DISTRIBUTION
[Histogram, 20 bins; axis labels -3.523 and 21.33; bar detail lost in
this reproduction.]
PROFIT DISTRIBUTION
[Histogram, 12 bins; axis labels -4.355 and -10.34; bar detail lost in
this reproduction.]
FIGURE 12 - PROFIT OPTIMIZER
Pass Amount
P/L    - Maximum    5    [illegible]
Equity - Maximum    5    [illegible]
-------------------------------------------------------------------------------
24-Jan-85 13:39                SMA.SCH : SMA.TSK                        Page: 1

Max P/L         0.0823     0.0871     0.0827
  ..Date        841121     841023     841023
Min P/L        -0.0109    -0.0097    -0.0109
  ..Date        830406     830405     830406
Max Equity      0.0950     0.1016     0.1016
  ..Date        841107     850114     850114
Min Equity     -0.0097    -0.0109    -0.0109
  ..Date        830405     830406     830406

Statistics
Periods             127        348        509
Trades               12         10         22
# Profitable          3          6          9
# Losing              9          4         13
% Profitable      25.00      60.00      40.91
% Losing          75.00      40.00      59.09

Results
Commission       0.0000     0.0000     0.0000
Slippage         0.0000     0.0000     0.0000
Gross P/L       -0.0048     0.0871     0.0823
Open P/L         0.0000     0.0000     0.0000
P/L             -0.0048     0.0871     0.0823
Equity          -0.0048     0.0871     0.0823
FIGURE 13 - COMPU TRAC
FIGURE 14 - COMPU TRAC
A
Relation Elements
A - and
B - or
C - increasing
D - decreasing
E - previous
F - next
G - min
H - max
I - average
J - entry-price
B
Function Elements
A - +    add
B - -    subtract
C - *    multiply
D - /    divide
E - <    less than
F - >    greater than
G - =    equal to
H - (    open parenthesis
I - )    close parenthesis
C
THE STUDIES
Advance-Decline .......................
Commodity Channel Index ...............
Commodity Selection Index .............
Demand Index ..........................
Detrend ...............................
Directional Movement Index ............
H/L Momentum Index ....................
Haurlan Index .........................
McClellan Oscillator ..................
Momentum ..............................
Moving Average ........................
Moving Average Convergence/Divergence.
Open Interest .........................
Oscillator ............................
Parabolic (SAR) .......................
Point & Figure ........................
Rate of Change ........................
Ratio .................................
Relative Strength Index ...............
Short Term Trading Index ..............
Spread. ...............................
Stochastic ............................
Volume ................................
Weighted Close ........................
Williams %R ...........................
FIGURE 15 - COMPU TRAC
FIGURE 16 - COMPU TRAC D
FIGURE 17 - COMPU TRAC D
Summary of Available Arithmetic Operators
key   definition                  example
+     add                         see "EXAMPLE", below
-     subtract                    high - low
*     multiply                    see "EXAMPLE", below
/     divide                      "       "
=     equal to                    close = high
^=    not equal to                close ^= low
>     greater than                rsi > 30
<     less than                   rsi < 60
>=    greater than or equal to    close >= 350
<=    less than or equal to       close <= 300
&     and                         rsi > 30 & oscl > 0
|     or *                        rsi > 30 | oscl > 0
(     open parenthesis            see "EXAMPLE", below
)     closed parenthesis          "       "
[x]   offset **                   close > close[1]

*  The sign for "or" is generated by typing <SHIFT><\>.
   This character is NOT a colon.
** For example, "close[5]" yields the value of the close 5 days previous
   to the current close. The number enclosed by [] (open & closed square
   brackets) must be a positive constant in the range 1...x, not an
   expression. "x" is the maximum number of dates accumulated for the item.
   This number depends on the format of the data diskette.

EXAMPLE: (high + low + 2 * close) / 4
Conditions You Can Set

key   definition                  example
=     equal to                    close=high
^=    not equal to                close^=low
>     greater than                rsi>70
<     less than                   rsi<30
>=    greater than or equal to    close>=350
<=    less than or equal to       close<=300
&     and                         rsi>70&oscl>0
|     or                          rsi>70|oscl>0

NOTE: The sign for "or" is generated by typing <SHIFT><\>.
This character is NOT a colon.
FIGURE 18 - COMPU TRAC D
Resident Analysis Routines
Moving Average ...............................
Relative Strength Index ......................
Spread .......................................
Ratio ........................................
Oscillator ...................................
Momentum .....................................
Weighted Close ...............................
Commodity Channel Index ......................
Commodity Channel Index: General Information.
Rate of Change ...............................
Stochastic ...................................
Stochastic: General Information ..............
Directional Movement Index ...................
Moving Average Convergence/Divergence ........
Advanced TradePlan Operations
Enter long when....
Close greater than previous close and            --- close > close[1] &
High greater than previous high and              --- high > high[1] &
RSI greater than previous RSI and                --- rsi > rsi[1] &
RSI greater than 30 and                          --- rsi > 30 &
Oscillator greater than previous Oscillator and  --- oscl > oscl[1] &
Oscillator greater than zero                     --- oscl > 0

Exit long when....
Close less than previous close and               --- close < close[1] &
etc.....
The TRADE Function

a) Trade     Enter a set of trading rules for each of four potential
             market positions: long entry, long exit, short entry,
             short exit. The trading rules are based on the perform-
             ance and interaction of other elements in the TradePlan.
             For example, you could tell the system to take a long
             position when the Relative Strength Index reaches 70 and
             to exit the long position when the RSI comes back to 70.
             There are many such possibilities.

b) Open-PL   Open-PL yields a profit/loss figure for all open
             positions - positions you have entered but have not
             exited.

c) Close-PL  Track the profit/loss for all closed positions -
             positions which have been entered AND exited.

d) Trade-PL  Mark profit/loss each time a trade is exited. This is
             not a cumulative measure.

e) Return    Track annualized "return on investment" by relating
             profit/loss to margin requirements.
FIGURE 19 - COMPU TRAC D
FIGURE 20 - MITRONIX II
BIOGRAPHY
Barbara Diamond is President of Diamond Services Group, Ltd., established in 1978, a con-
sulting company for market data/software, international communications and broker services.
She also serves as a Director for Brokerage Support Services, Pty., Ltd., a firm specializing in
services to the Pacific Basin area. Ms. Diamond is a charter member of the International Mon-
etary Market and the Singapore International Monetary Exchange.
ARTIFICIAL INTELLIGENCE / PATTERN RECOGNITION
APPLIED TO
FORECASTING FINANCIAL MARKET TRENDS
David R. Aronson
Abstract
Current use of computers by most financial market analysts (FMAs) barely scratches the sur-
face of what is possible. Computers have been used primarily to speed up tasks done by hand
and desk-top calculator in the pre-computer era. Such tasks include graphing data, calculation
of indicators, testing humanly derived trading rules, and fitting simplistic models. A new and
powerful application of computers to the FMA's domain is in its infancy: exploiting artificial in-
telligence and pattern recognition (AI/PR) to improve the accuracy of price-trend forecasting
and security selection. AI/PR is a computer-intensive data analysis and modeling process that
enables a computer to generate predictive models from histories of numerous indicator vari-
ables. AI/PR employs automated inductive inferencing to make associations and discover
complex relationships (i.e., conditional probabilities) between sets of indicators and subse-
quent market trends. Complex relations can escape traditional computer-based modeling ap-
proaches.
Forecasting financial markets is extremely challenging. Valid predictive models are hard to de-
velop, and they are likely to have complex structure. This is explained by several factors in-
cluding limitations in human intelligence, the efficiency of financial markets and the complexity
of the price setting mechanism. Although computers can help, traditional methods often pro-
duce models that fit the historical data well but perform poorly when put to use on new data.
Such model failures are usually due to a misapplication of the computer and inherent weak-
nesses of a given modeling methodology. However, AI/PR when properly applied can produce
effective forecasting models by avoiding the pitfalls of traditional modeling methods. Traditional
assumptions are relaxed by virtue of a much more intense use of the computer.
Forecasting models synthesized from hundreds of candidate indicators have been produced
with an AI/PR system called PRISM (Pattern Recognition Information Synthesis Modeling).
Directional forecast accuracy on out-of-sample (new data) has been statistically significant. With
AI/PR approaches there is a significant opportunity for synergy between the FMA and the com-
puter, as their information processing capacities are different and complementary.
This article is organized into six parts. Part I discusses a new role for the computer in the work
of the FMA. Part II considers the role of the FMA as an investigator of historical data to discover
predictive laws. Part III discusses difficulties associated with the discovery of valid predictive
laws as well as why they are likely to be complex. Part IV examines a number of common fore-
casting procedures in light of how well they address the difficulties discussed in Part III. Part
V introduces the AI/PR approach and how it deals with the difficulties raised in Part III. Part VI
discusses a number of applications of an AI/PR system called PRISM.
PART I
A NEW ROLE FOR THE COMPUTER
A. Old Roles and Old Views
Financial market analysts (FMAs) are failing to exploit computers fully and properly in their ef-
forts to forecast price-trends of equities, bonds, commodities, currencies, etc. Most computer
applications merely speed up what had been done by hand and desk-top calculator in the pre-
computer era. Tasks such as creating graphs, calculating indicators, and testing theories and
strategies are common uses of the computer. Some are starting to use it to optimize trading
strategies and fit various models to historical data including exponential smoothing, Box Jen-
kins (ARIMA), and multiple regression. In many cases the models are over-optimized or over-
fitted (i.e., they fit historical data well but predict poorly on new data).
However, computers can be used more effectively. For the last twenty years researchers in
various scientific and engineering fields have been utilizing the computer in a different way to
produce models and amplify their intelligence. By using the computer more creatively and in-
tensely, they have been able to escape the limitations of pre-computer era methods. Specialized
software has enabled the computer to perform both inductive and deductive reasoning, infer patterns
in data, detect complex relationships and generate effective prediction models. When cast into
this new role computers have equaled and sometimes exceeded the performance of human
experts, thus prompting the term artificial intelligence. In this article we will discuss a partic-
ular type of artificial intelligence called pattern recognition (AI/PR). It is particularly applicable
to the domain of the FMA.
The failure to fully exploit the computer stems from an outdated view of its potential. Typical is
the comment of a senior vice-president of a large money management firm (Wall Street Computer
Review, 3/85):
What you get out of a computer is only as good as what you put into it, and it is a
very subjective approach.
We contend this view fails to see that computers can be programmed to discover relationships
and patterns buried in masses of data. This discovered information (i.e., a predictive model)
is extremely valuable, and exists by virtue of the computer's ability to transform the raw input
data in a very useful way. We are not, however, getting something for nothing, nor is this the
latest addition to a line of perpetual motion machines. The information gleaned by the com-
puter comes at a significant expense over and above the input data. First is the effort that went
into the design of the AI/PR software and secondly, a large expenditure of computational re-
sources required to perform the analysis.
B. Artificial Intelligence and Heuristic Problem Solving
We define artificial intelligence (AI) to mean capacities programmed into a computer that, if
displayed by a human, would be described as intelligent. This includes the ability to recognize
and assess situations, perform significant logical steps, make decisions, and learn from prior
experience. One type of AI, called heuristic searching, solves a problem by trial and error.
Success or failure on any given trial is fed back to the program, helping it make an intelligent
choice as to what to try next. Thus, not all possible choices are tried, only those that lead in
the direction of a good solution. Of course, this requires that the program be given an explicit
definition of what constitutes a good solution.
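The loop just described can be sketched in a few lines. This is purely illustrative (the scoring function, neighborhood, and optimum below are invented for the example, and the language post-dates the article), but it shows trial, feedback, and an explicit definition of a good solution working together:

```python
import random

def hill_climb(score, start, neighbors, steps=1000, seed=0):
    """Heuristic search: propose a trial, feed its score back, and keep
    only trials that improve on the best solution found so far."""
    rng = random.Random(seed)
    best, best_score = start, score(start)
    for _ in range(steps):
        trial = rng.choice(neighbors(best))  # a trial...
        s = score(trial)                     # ...and its success/failure fed back
        if s > best_score:                   # explicit "good solution" criterion
            best, best_score = trial, s
    return best, best_score

# Toy criterion: closeness to a hidden optimum at 42.
score = lambda x: -abs(x - 42)
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb(score, start=0, neighbors=neighbors))
```

Because only improving trials are kept, the search walks toward the optimum without examining every possible candidate.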
C. Pattern Recognition
Pattern recognition is a specific type of heuristic program that was developed in the 1960s. The
objective of AI/PR software is to develop a model that can be used to classify a sample, object,
event, etc., based on the sample's important attributes. The method is based on inductive logic,
the type of reasoning that derives a general rule from a study of specific cases. When there
are only two possible classes, we have the fundamental problem in pattern recognition: two
class discrimination.
For example, a college admissions office may wish to classify (predict) high school graduates
as to their ability to succeed in college. A pattern recognition approach to the problem would
start with a large sample of college students including some who did well and some who did
poorly (i.e., the outcome is already known). Each student is described by numerous quanti-
fiable attributes available prior to entering college. This is the data available to an admissions
officer and might include SAT scores, high school average, high school rank, etc. The job of
the AliPR program is to find a group of similar patterns, called a pattern-class that tends to be
associated with those who ultimately did succeed. This task includes identifying the relevant
attributes of the pattern, as well as value ranges, for each attribute.
To an FMA the term pattern might mean a visual pattern seen on a price chart such as the
famous head and shoulders, or a certain oscillator configuration. However, pattern in the
context of pattern recognition means something very specific. It refers to a set of measure-
ments that describes a single sample. Each measurement relates to a different attribute. For
example, a three-attribute pattern that might be used to describe a person is: height, age, and
weight. For these attributes, the author's pattern is: height = 5'8", age = 39, and weight =
165. Other terms synonymous with attribute are factor, variable, feature, or indicator.
The basic assumption of pattern recognition is that samples from the same class will tend to have
similar patterns. The mathematical formalism used in AI/PR is to represent a pattern as a point
in a multi-dimensional space, where each axis of the space represents one attribute. Thus,
samples from the same class will tend to cluster or clump in the same regions of the multidi-
mensional space. However, clumping will take place if, and only if, the attribute axes are rel-
evant (i.e., they have useful information). In addition, irrelevant attributes must be avoided for
they can dilute the information of the useful ones. Hence, the heart of the pattern recognition
problem is isolating the relevant attributes from a larger initial list.
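A minimal sketch of that formalism: each known sample is a point in attribute space, each class a cluster of points, and a new pattern is assigned to the class whose cluster center lies nearest. The two classes and their height/weight measurements below are invented for illustration (they anticipate the jockey-versus-basketball-player example in the text):

```python
def centroid(points):
    """Average point of a cluster of equal-length attribute tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(pattern, classes):
    """Assign a pattern to the class whose centroid is nearest (squared
    Euclidean distance). classes: dict of class name -> list of patterns."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    cents = {name: centroid(pts) for name, pts in classes.items()}
    return min(cents, key=lambda name: dist2(pattern, cents[name]))

# Hypothetical samples: (height in inches, weight in pounds).
classes = {
    "jockey":     [(62, 110), (64, 115), (63, 112)],
    "basketball": [(79, 220), (81, 230), (78, 215)],
}
print(classify((65, 120), classes))
```

The clustering, and hence the classification, works only because both attributes are relevant to the distinction; an irrelevant axis such as eye color would add distance that carries no class information.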
The relevance of an attribute depends entirely on the classification task we wish to perform.
We know, without the benefit of a computer, that height will be a useful attribute in placing
an individual into either the class of jockeys or the class of basketball players. The attribute
eye color will not. The problems for which the important class-qualifying attributes are known
don't require AI/PR.
AI/PR is usefully applied to complex problems whose relevant attributes are poorly under-
stood, the problems where even experts make poor judgements and predictions. Examples
would include the geological attributes of a site that is likely to contain oil or the indicator at-
tributes of significant bottoms in the gold market. An interesting paradox is that to the extent
most FMAs know the attributes of bottoms in gold they cease to be such. Thus, the real attri-
butes can only be known to a few FMAs. In such cases, AI/PR analysis can be and has been
useful.
Using heuristic searching, the program tries to identify the most useful combination of attri-
butes (indicators) within a much larger set. With only twenty candidate indicators, there are
over six thousand possible combinations. Thus, an exhaustive search of each one would be
prohibitive, even for super computers. To deal with this, AI/PR programs use rules of intelligent
searching to pare down the domain of the search. Humans do this, for example, when looking
for a pair of lost eyeglasses. Few would consider the brute force approach of starting at the
North Pole and working south in expanding concentric circles. Most people would be intelligent
enough to start searching where they last remembered having them.
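One simple rule of intelligent searching is greedy forward selection: grow the indicator set one member at a time, always adding whichever candidate improves the subset score most, and stop when nothing helps. The sketch below is illustrative only; the indicator names and the "usefulness" criterion are invented (a real AI/PR system would score subsets by historical forecast accuracy):

```python
def greedy_select(pool, usefulness, max_size=4):
    """Score on the order of len(pool) * max_size subsets rather than
    all 2**len(pool) - 1 of them."""
    chosen, best = [], usefulness([])
    while len(chosen) < max_size:
        scored = [(usefulness(chosen + [i]), i) for i in pool if i not in chosen]
        s, ind = max(scored)
        if s <= best:            # no remaining candidate improves the subset
            break
        chosen.append(ind)
        best = s
    return chosen, best

# Hypothetical criterion: reward relevant indicators, penalize noise.
relevant = {"breadth", "sentiment", "rates"}
usefulness = lambda sub: (sum(1 for i in sub if i in relevant)
                          - 0.5 * sum(1 for i in sub if i not in relevant))
pool = ["breadth", "eye_color", "sentiment", "volume", "rates"]
print(greedy_select(pool, usefulness))
```

Greedy search can miss combinations whose members are useful only together, which is why practical systems layer additional heuristics on top of it.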
Financial market forecasting lends itself to the AI/PR approach. First, forecasting can be easily
translated into a classification problem. Is today a bottom and, therefore, a time to recommend
buying? Is a particular stock likely to out-perform the market? Is the trend up or down? Second,
there is an ample supply of historical samples of known class. For example, we know August 12,
1982 was a bottom day. Third, FMAs have numerous indicators that describe the state of the
market on a continual basis. Fourth, they are interested in determining how to best use those
indicators to forecast.
Readers familiar with multi-variate linear discriminant analysis may note a good deal of simi-
larity between it and AI/PR. Although these approaches have similar purposes, there are sig-
nificant differences in their assumptions, methodology, and robustness that will be addressed
in a later section.
PART II
THE FMA'S ROLE AS FORECASTER AND HISTORIAN
A. The Multi-Indicator Approach
An important, on-going task of the FMA is accurately forecasting price trends in financial mar-
kets. Accuracy is of particular importance when the current price-trend is about to reverse di-
rection (i.e., the transition from bull market to bear market and vice versa).
A common approach to trend forecasting and reversal detection is the multi-indicator ap-
proach. The FMA considers the current readings on a multitude of indicators that measure var-
ious characteristics of the market. Among the dozens that may be found in an FMA's workbook
are indicators of price and volume change, market psychology, monetary and interest rates,
institutional liquidity, etc. Each is evaluated as to its current implications or signal (bullish, bear-
ish or neutral). Finally, the FMA arrives at a forecast, which is a consensus of all the separate
indicator signals. This seems entirely reasonable. However, producing a forecast by properly
integrating the signals from a multitude of separate indicators is far more complex than it appears.
B. Rules Derived From Historical Data
Before considering the production of a forecast from numerous indicators, let's consider how
an FMA typically derives a forecast from just a single indicator. In other words, how is the current
level of a given indicator interpreted to be either bullish, bearish, or neutral? Such interpreta-
tions are based on rules derived from historical data. A rule is a distillation of historical prec-
edents and is, in fact, a simple prediction model. A typical rule might be: if indicator x has a
level greater than 1.20, grade it bullish; otherwise, grade it neutral. Therefore, the FMA must
adopt the role of market historian or study the works of other historians before taking on the
role of forecaster.
A fundamental assumption of FMAs is that historical data series contain discoverable rules
(models) that can be used to make predictions from current data. However, FMAs vary widely
in the rigorousness of their methods to derive the rules.
Thus, indicator interpretation rules are really empirical laws that have been inductively gen-
eralized from historical data. Inductive generalization is the process by which a law or model
is derived from numerous past observations of a phenomenon. If an observer notes that on one
hundred occasions when condition A was observed, event B followed eighty times, in-
ductive inference permits the leap to the more general statement: whenever condition A oc-
curs, then event B is predicted to occur with a probability of 0.8. This type of reasoning
underlies the historical research of FMAs, as well as the scientific method in general. As with
all reasoning, it must be carried out properly to arrive at valid conclusions.
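The counting behind such an inference is elementary, as a short sketch shows (the history below is fabricated to match the 80-of-100 illustration above):

```python
def conditional_probability(history):
    """Estimate P(event B | condition A) from paired observations.
    history: list of (condition_A_observed, event_B_followed) booleans."""
    followed = [b for a, b in history if a]
    return sum(followed) / len(followed) if followed else None

# 100 occasions with condition A, of which event B followed 80 times,
# plus 50 irrelevant occasions without condition A.
history = [(True, True)] * 80 + [(True, False)] * 20 + [(False, False)] * 50
print(conditional_probability(history))
```

The hard part, as the text goes on to argue, is not the arithmetic but ensuring the generalization is valid rather than an artifact of the sample.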
For example, consider how a single indicator model (rule) based on the PSSR (Public to Spe-
cialists Short Sale Ratio), a well-known indicator of stock market sentiment, might be induced
from historical data. Assume that an FMA has the past thirty years of weekly readings on PSSR,
as well as the history of the Standard and Poors 500 (the item to be forecast). The FMA ex-
amines the history of the PSSR data series, noting its level at specific points in time as well as
what the market did subsequently. If it is seen that on many prior occasions when the PSSR
exceeded a value of 0.60 (a relatively high level) prices subsequently trended up, an
interpretation rule (model) for the PSSR can be inductively generalized:
If PSSR exceeds 0.60, expect prices to trend up.
The PSSR rule is a simple model that translates a current indicator level into a forecast of the
trend.
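In code, such a one-indicator model is a single conditional (the 0.60 threshold is the one quoted in the text; the function name is ours):

```python
def pssr_signal(pssr):
    """Single-indicator model: a high Public to Specialists Short Sale
    Ratio is read as bullish; any other level is graded neutral."""
    return "bullish" if pssr > 0.60 else "neutral"

print(pssr_signal(0.72))  # above the threshold
print(pssr_signal(0.41))  # below the threshold
```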
C. Multi-Indicator Models
However, few FMAs rely on the predictive power of a single indicator and rightly so. In an effort
to get greater predictive accuracy, most practitioners wish to combine the signals of numerous
indicators to produce a forecast. Attempting this more ambitious approach creates a number
of problems. First is confusion. Most of the time indicators are giving conflicting signals. Some
are bullish, some are bearish and some are neutral. Resolving this conflict without hedging is
extremely difficult. Second, deriving a forecast from numerous indicators is far more difficult
than single indicator forecasting. Third, the vast difference in difficulty is not readily apparent.
The subtle, but substantial, complexity of multi-indicator forecasting leads many efforts astray.
For example, one common approach attempts to derive a forecast from a consensus voting
of numerous single indicator signals (i.e., algebraically summing the signals where
bullish = + 1, bearish = - 1, and neutral = 0). Such an approach incorrectly assumes that each
indicator has equivalent, valid, non-redundant, independent information. The fact is a good
forecast that truly resolves indicator conflict requires a multi-indicator model that takes into ac-
count indicator interrelationships. This includes their relative importance or lack thereof, pos-
sible redundancy, and possible non-additive (i.e., non-linear) interactions. Data-modeling methods
that attempt to take such effects into account are termed multi-variate. The AI/PR approach
to be discussed is one multi-variate method that is particularly powerful when modeling the
behavior of complex systems.
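The consensus-voting scheme criticized above is easy to write down, and writing it down makes its hidden assumption visible: every signal gets the same weight, whether or not the indicators are redundant. The indicator signals below are hypothetical:

```python
SIGNAL_VALUE = {"bullish": 1, "bearish": -1, "neutral": 0}

def consensus(signals):
    """Naive multi-indicator forecast: algebraic sum of +1/-1/0 votes."""
    total = sum(SIGNAL_VALUE[s] for s in signals)
    if total > 0:
        return "bullish"
    if total < 0:
        return "bearish"
    return "neutral"

# Two redundant breadth indicators outvote one independent monetary
# indicator, regardless of how much information each actually carries.
print(consensus(["bullish", "bullish", "bearish"]))
```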
The complexity of producing a multi-indicator model results from the combinational explosion
that takes place when considering numerous indicators. With only twenty indicators to consider,
there are over six thousand possible indicator models (i.e., combinations): all combinations of
indicators taken 19 at a time, plus all combinations taken 18 at a time, plus all combinations
taken 17 at a time, etc. With that many possibilities relative to the small number of historical
data samples (for example, there have been approximately forty significant, intermediate term
turning points in the stock market over the last thirty years), there is a large danger of con-
triving a law that fits the past perfectly, but is truly devoid of predictive power. This problem,
called over-fit, will be discussed in the next section.
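The size of the model space depends on how many indicators a model may contain. Counting subsets of twenty candidates, models of two to four indicators already number in the thousands (a reading consistent with the "over six thousand" figure cited above), while allowing every subset size gives over a million:

```python
from math import comb  # binomial coefficients; Python 3.8+

n = 20  # candidate indicators
small_models = sum(comb(n, k) for k in range(2, 5))    # 2-, 3-, 4-indicator models
all_models = sum(comb(n, k) for k in range(1, n + 1))  # every non-empty subset
print(small_models, all_models)
```

Either way, the count dwarfs the roughly forty historical turning points available as samples, which is the root of the over-fit danger.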
Harry Truman said the only thing new in the world is the history we don't know. Although this
is an exaggeration, the former president knew that much that surprises us could be anticipated
by better study of history. In the next section we will look at the problems that hamper the FMA's
search for predictive laws. Not only are they difficult to unearth, but the most useful ones are
likely to be complex.
PART III
WHY FORECASTING FINANCIAL MARKETS IS SO DIFFICULT
Why is the FMA's task of developing accurate forecasting models so difficult, and why are the
models likely to be so complex (i.e., contain numerous variables related in highly non-linear
ways)? Several factors account for this.
1) Human limits in configural thinking and pattern induction
2) Efficiency of financial markets (The Efficient Market Hypothesis)
3) Inherent complexity of the price setting mechanism in financial
markets (Cybernetics and the Law of Requisite Variety)
4) Limitations and pitfalls in traditional computerized data analysis
A. Human Limits in Configural Thinking and Pattern Induction
Cognitive Psychology, a field concerned with studying the brain as an information processing
machine, has discovered limits in man's ability to effectively analyze numerous variables
simultaneously and detect relationships among them. Herbert Simon, the Nobel Laureate, called
it the principle of bounded rationality: "The capacity of the human mind for formulating and
solving complex problems is very small compared with the size of the problems whose solution
is required for objectively rational behavior in the real world, or even for a reasonable approx-
imation to such objective rationality."(1)
In addition, it has been discovered that under certain conditions, the mind tends to imagine
patterns in data known to be random. Both discoveries imply that the FMA relying solely on
brain power will experience great difficulty.
An FMA performing multi-indicator analysis is engaging in configural thinking, a known weak
link in man's intellectual capacities. A configural thought task is one that requires the mind to
grasp the significance of a set of facts or variables, in a holistic fashion. In such problems, the
key variables produce their effect in a highly interdependent way rather than acting as single
agents. Therefore, to predict an outcome, the analyst must keep all the variables in mind at
the same time rather than considering each one in isolation. We point out below that both the
efficiency and the complexity of financial markets support the contention that the FMAs must
engage in configural thinking to accurately forecast trends.
FIGURE 1
The Strained Brain of the F.M.A.
(Cartoon labels: "Data Over-Load"; "Confusion")
Configural thinking limits make it difficult for the FMA to perform multi-indicator forecasting.
Research indicates man's abilities are strained when four or more variables need to be considered
configuratively.{2} Thus, if there is a valid predictive law (i.e., a multi-indicator model)
involving four or more factors, it will be difficult for FMAs to detect. The problem is compounded
when the FMA is searching through a list of twenty-plus indicators for valid laws composed of
a subset. In such a case, even two- or three-indicator laws are likely to be missed among the
several thousand possible combinations.
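The scale of that search is easy to quantify. As a quick illustration (the count of twenty indicators comes from the text; the subset sizes of two through four are assumptions), the standard library can count the candidate combinations directly:

```python
from math import comb

# Number of ways to choose small indicator subsets from a list of 20.
indicators = 20
pairs = comb(indicators, 2)      # two-indicator combinations
triples = comb(indicators, 3)    # three-indicator combinations
quads = comb(indicators, 4)      # four-indicator combinations

print(pairs, triples, quads)     # 190 1140 4845
print(pairs + triples + quads)   # 6175 candidate subsets to sift through
```

Even before considering thresholds or logical connectives among the chosen indicators, several thousand subsets exist, consistent with the claim above.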
There is yet another intellectual roadblock. The mind sometimes erroneously perceives patterns
and relationships in data that don't really exist. Research indicates that the mind is displeased
by chaos and will try to impose patterns on sensory data, even when none exist. In an
experiment done at Stanford University, subjects were presented with patterns that were generated
randomly (a randomly moving light source over a photographic plate). They did not know
the patterns were random. Each subject was asked to classify each photo into one of two classes,
based on perceived similarity of pattern. After each classification, the subject's choice was described
as correct or incorrect by the experimenter on a random basis. In other words, it was
a totally random situation. Yet each subject thought they had discovered two distinct pattern
classes. Even after they were told of the hoax, the subjects maintained that the two pattern
types were real.{3} Similar experiments with random sequences of colored lights showed similar
results: subjects insisted they could predict the next color to appear even after being told
they had been observing a random process.
B. Efficiency in the Financial Markets
According to the Efficient Market Hypothesis (EMH), the financial markets are examples of nearly
efficient markets. The hypothesis implies the impossibility of forecasting price trends such
that above-average returns, adjusted for risk, can be earned. This is because an efficient market
is one in which current prices fully reflect (i.e., already take into account) all known and
knowable information. Such efficient pricing results from many intelligent, rational, and well-
informed participants trading in the market. As soon as some influencing information becomes
known, or even knowable, the buying and selling of these knowledgeable participants will push
prices quickly to a level that discounts that information. Attempting to analyze the market's fundamental
and/or technical information to gain a predictive edge is, therefore, pointless.
We agree that EMH has validity, up to a point. Forecasting market trends is extremely difficult,
and one ought to be suspicious of simplistic approaches that promise to forecast accurately.
Even if a simple system worked initially, its simplicity means it could be easily reproduced and
followed by enough adherents to dull the edge it once had.
But EMH has some possible loopholes, and significant opportunities may exist for the FMA
with superior analytical tools. We contend the market is efficient only to the degree that people
are able to properly analyze the information and understand its implications. Clearly, not all participants
are equally able to analyze the masses of data. Good research is expensive, so those
with the most money have an advantage. But superior research funding is not enough; the
analytic methodology must be superior as well. To the extent that data analysis methods, whether
man- or machine-based, fail to capture and utilize all relevant information, the market remains
inefficient. This creates the opportunity for the analysts with the better data analysis methods.
Where might these opportunities lie? We saw in the previous section that human intelligence
has difficulty detecting complex multi-indicator patterns in data. In addition, we shall see that
many existing computerized data analysis methods have limitations that cause them to miss
certain types of complex multi-variate information as well (Part IV, Section D). Thus, undiscovered
predictive laws (models) are likely to be based on a multitude of complexly interrelated
indicators.
In an efficient market, to the extent that a predictive law is easily discoverable, it is without value.
The truly valuable information must always lie beyond the grasp of most market analysts and
modeling methods.
C. Complexity of the Price Setting Mechanism & The Law of Requisite Variety
An additional reason for believing that only relatively complex forecasting models can be successful
comes from the field of Cybernetics. Cybernetics is the interdisciplinary study of complex
living and non-living goal-seeking systems. It is concerned with the ways in which a
system absorbs and utilizes information from its environment to achieve its goal. Norbert Wiener
coined the term in titling his landmark work, Cybernetics: or Control and Communication
in the Animal and the Machine. Wiener was the first to notice that complex
living and machine systems were alike in many ways, and that their complexity was such that
a new discipline was required to understand them better. Actually, the field draws on many existing
disciplines, including mathematics, biology, economics, and statistics.
The term system here is used in its most general sense: an arrangement of elements so related
as to form an organic, interrelated whole. The attributes common to complex systems are:
1) Has a goal or purpose.
2) Self-awareness of how well it is achieving the goal through
feedback, and the ability to make adjustments.
3) Complex interrelationships between the various parts of the
system, not easily modeled by traditional mathematical models.
4) Marked dependence on receiving adequate information from its
environment to achieve its goal.
5) Viability that results from a cooperative interlocking of its
various parts (synergy): the whole is much more than the sum
of its parts.
6) The ability to adapt to changes in the environment even if
such changes were not anticipated in its original design.
One of the principles of Cybernetics that is of particular significance to FMAs is the Law of Requisite
Variety.{5,6} The Law supports our twofold contention:
1) That viable market prediction models must be complex.
2) That simplistic approaches based on too few factors, linked in
overly simplistic ways, are likely to fail.
The Law states that problems involving complex systems require complex solutions. More specifically,
attempts to control or predict the behavior of complex systems will be successful only
to the degree the control or prediction mechanism approaches the complexity of the system
itself. The Law of Requisite Variety implies that a successful market forecasting system must
be based on a relatively large variety of information (different types of indicators about different
aspects of the market). In addition, those indicators must be integrated in a way that reflects
the interrelational complexity of the market price setting mechanism (i.e., highly non-linear relationships).
The financial markets are examples of complex systems. The price setting mechanism is a
complex sub-system. Its operational complexity cannot be captured by traditional methods of
analysis and modeling. Neither the human mind nor traditional statistical modeling is adequate
to the task of controlling or predicting the behavior of such systems.
Of most interest to the FMA is the price setting mechanism. Its goal is to set a price where
buyers and sellers are temporarily in balance. As new developments occur to change supply
or demand, the price changes in an effort to bring offers and bids back into equilibrium. The
interacting influences that cause and describe price changes are so varied and so
complexly linked with each other that the process defies the descriptions offered by simplistic
models.
The implications of the Law of Requisite Variety are consistent with the implications of the Efficient
Market Hypothesis, as well as the findings of Cognitive Psychology: successful forecasting
models will be complex and difficult to develop.
D. Limitations of Traditional Computerized Data Modeling
It would seem that the computer holds the answer to the FMA's problem. Used properly, the
computer can assist greatly; used improperly, it produces its own set of problems.
The most serious problem plaguing computerized data modeling is overfit. Overfitted models
fit the historical data very well but give poor predictions when applied to new data. Overfit results
from taking too much freedom and using too much force in conforming the model to the historical
data. A second problem, underfit, results from the preconceived notions underlying
the given data analysis methodology. Such models will have sub-optimal power as well.
The objective of sound data modeling should be to capture all available information in a given
set of data while at the same time rejecting the noise (random effects) present. The result is a
model that is neither underfit nor overfit.
1. Overfitting the Model to Data
This is the trap that most data analysts fall into, whether they use eye and brain or a computer. In an
overzealous and misguided attempt to gain perfect predictive accuracy, the analyst takes whatever
liberty is necessary to force the model to fit the historical data. The power of the computer
often seduces the analyst into this trap. It is simply too easy to keep revising a model to increase
fit. This applies both to models that are functional relations, such as regression, and to rule-
based trading systems, which are sets of conditions connected with various logical operators.
With a finite amount of data, a perfectly fitted model can be derived if it is permitted to include enough
complex rules, free parameters, etc. The model becomes a detailed description of the analyzed
data, including the random effects. Thus, fit may not be the best criterion, nor is it what
we really want. The real objective is predictive power on new data (i.e., on samples that have
not been analyzed in developing the model). Yet most existing data analysis methods, whether
by machine or by man, strive to fit the data.
Let's see how a well-intended analysis can lead to the absurdity of overfit. Consider the example
of an FMA attempting to derive a rule-based model for signaling intermediate-term stock
market bottoms. Because his favorite indicator is the PSSR ratio (public-to-specialist short
sale ratio), he starts by studying it. After some study he notices a rather effective rule:
If PSSR exceeds 0.60 then buy.
Although this rule works fairly well, some false signals are given and some bottoms are missed.
So the FMA revisits the data to develop a more refined model. After much additional study, the
FMA is quite pleased at having produced a set of rules that give no false signals and call every
bottom. The model that resulted is presented below:
If PSSR exceeds 0.60
or the Short Term Trading Index (TRIN) 10-day moving average has
been over 1.20 within the last twelve days
or T-Bill rates have fallen below and have remained below a 13-week
moving average
or Mutual Fund Cash is greater than 9.0%
Then Buy, unless
it is a leap year
or the current Miss America is from a state south of the Mason-
Dixon Line and a Democratic president is in office or Jupiter
is in the constellation Orion and the planet Pluto........
No doubt Jupiter's position in Orion or Miss America's home state are conditions that permit
the fit to be perfect. But these factors are clearly only randomly related to the behavior of the stock
market. In fact, if the fit to a process known to contain at least some noise is too good, one should
be suspicious. For unless the movements of financial markets are completely free of random
movement, and not even the most extreme opponent of the EMH would assert this, perfect fit
should be impossible.
Optimum fit (complexity) occurs when a model has captured the real information in a sample
but not its noise. One way to achieve this is to keep some historical data set aside as an independent
sample for testing models generated on another portion of the data. This method
is called cross-validation (see Part V, Section E), and it seeks to impose the rigor of the scientific
method on the process of data modeling.
Consider the following: An FMA notes that for the period 1965 to 1975, whenever the T-Bill rate
falls below its five-week moving average, the stock market is about to start an advance. This
set of observations is the basis for generalizing a tentative rule-based prediction model:
If T-Bill rates fall below their 5-week moving average, it is a bullish signal.
According to the Scientific Method, this statement should not yet be considered a valid law. It
is only a hypothesis awaiting validation, and that requires testing it on an independent set of
data. If the hypothesis has validity in the data sample 1976 to 1984, then we have some reason
to believe in its forecasting power, and may elevate it to the status of an empirical law.
If additional rules are added in an attempt to improve accuracy (i.e., the model is made more
complex), the revised model must also be tested on the reserved data set. If the more complex
version is valid, the predictive accuracy on the reserved data will improve. However, when rules
that are really descriptions of random effects are added (overfit), the test on reserved data will
show a decline in performance. Thus, cross-validation provides the FMA with feedback that
can help prevent overfit.
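The cross-validation discipline described above can be sketched in a few lines. This is a minimal illustration, not the article's actual procedure: the data are synthetic stand-ins, and the rule (a rate series falling below its 5-week moving average) is simply scored separately on a fit sample and a reserved sample.

```python
import random

def moving_average(series, window):
    """Trailing moving average; None until enough history exists."""
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(series[i - window + 1 : i + 1]) / window)
    return out

def rule_hit_rate(rates, market_up, window=5):
    """Fraction of 'rate below its moving average' signals followed by an up market."""
    ma = moving_average(rates, window)
    signals = [i for i, m in enumerate(ma) if m is not None and rates[i] < m]
    if not signals:
        return 0.0
    return sum(market_up[i] for i in signals) / len(signals)

random.seed(1)
# Synthetic stand-ins for weekly T-Bill rates and subsequent market direction.
fit_rates = [random.gauss(8.0, 0.5) for _ in range(520)]
fit_up = [random.random() < 0.5 for _ in range(520)]
reserved_rates = [random.gauss(10.0, 0.5) for _ in range(468)]
reserved_up = [random.random() < 0.5 for _ in range(468)]

print(rule_hit_rate(fit_rates, fit_up))            # accuracy on the fit sample
print(rule_hit_rate(reserved_rates, reserved_up))  # accuracy on reserved data
```

A genuine law should hold up on the reserved sample; a rule describing noise will not. (Here both samples are random, so neither hit rate should differ reliably from one half.)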
2. Underfit
Another problem plaguing computerized data modeling is underfit. It is the opposite of overfit.
While overfitted models mistake random effects for true phenomena, underfitted models fail
to capture all the real information in the data. Some analysis methods can produce models that
suffer both effects at the same time.
Many data modeling approaches assume that a specific type of model is appropriate prior to
starting the analysis. For example, linear regression assumes the model is of the linear form.
The equation below illustrates this simple model structure.
Y = w1X1 + w2X2 + w3X3 + ... + wnXn + wn+1
Figure 2 below depicts a linear regression model employing two predictors, X1 and X2. The
altitude of the surface is the dependent variable (Y) and can be thought of as the probability
that a point on the plane below is a BUY DAY. The greater the altitude of the regression surface,
the more likely a BUY DAY. Notice that the regression plane is flat (i.e., linear). This shape is
fixed by an assumption of the linear regression method. Flexibility is limited to the angle of ascent
along each axis. In fact, the fitting of a regression model is concerned with finding the
best angles (i.e., values for the w's) for the plane such that it most evenly slices through the
sample data.
FIGURE 2
A Linear Regression Model
(Regression surface: Y = w1X1 + w2X2 + w3)
The regression surface of a linear model structure is restricted to a flat shape. Only the slope of the surface can
be adjusted to best estimate the relationship between Y and the two predictor variables X1 and X2.
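A minimal numerical counterpart to Figure 2, using synthetic data (the coefficients and sample size are invented for illustration): ordinary least squares finds the w's that best angle the plane Y = w1X1 + w2X2 + w3 through the sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sample: Y is generated from a true plane plus a little noise.
n = 200
X = rng.normal(size=(n, 2))                  # predictors X1 and X2
true_w = np.array([1.5, -0.7])
y = X @ true_w + 2.0 + rng.normal(scale=0.1, size=n)

# Append a column of ones so w3 acts as the intercept of the plane.
A = np.column_stack([X, np.ones(n)])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

print(np.round(w, 2))   # close to [1.5, -0.7, 2.0]
```

The fitting step adjusts only the angles and height of a flat plane; it cannot bend the surface, which is exactly the restriction the text describes.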
A linear model structure implies that all variables are of the first degree and are combined by
an additive operator. Obvious non-linear effects can be dealt with to a limited degree by certain
univariate transformations and explicit variable interaction terms. All this requires deep insight
on the part of the analyst.
In the diagram below one can get a visual sense of the limitation imposed by assuming a linear
model. Assume that the true underlying phenomenon is represented by the hilly surface in Figure
3. Many important effects are brushed aside by forcing the flat surface through the more
complex, non-linear phenomenon. Obviously, predictive accuracy will be less than optimal.
FIGURE 3
A Linear Model Trying to Represent a Non-Linear Phenomenon
The linear model (the plane) can't begin to capture the true non-linear relationship between Y (dependent variable)
and the two predictor variables (X1 and X2). However, advanced non-linear methods can, if given enough
sample data.
One of the main assumptions of linear models is that the predictor variables (i.e., the indicators)
are independently related to the dependent variable (i.e., the market trend). Our suspicion, based
on cybernetic considerations, is that financial markets are examples of complex systems. The
behavior of such systems is generated by many complexly interrelated variables. Thus, data
modeling procedures that pre-assume linear relationships, or any other specific relationship, may fail to
capture much useful information. Accurate modeling of complex systems requires a methodology
that is flexible enough to capture important relationships regardless of their complexity
(i.e., shape).
PART IV
CURRENT DATA MODELING AND FORECASTING APPROACHES
In this section we will review a number of approaches FMAs are using to analyze data, generate
models, and do forecasting. Our purpose is to consider some potential weaknesses in light
of the difficulties outlined in Part III.
A. Subjective Multi-Indicator Analysis
In this approach the computer plays a relatively minor role. It is used to calculate and display
the indicators, but it is not called upon extensively as a historical research tool. The FMA eval-
uates the current indicator levels in light of past experience. Forecasting is not based on rig-
orously researched statistical models or specific rule sets. Rather, rules of thumb and subjective
criteria derived from examination of historical data are the basis for forecasting. The analyst
relies on mental power to weigh and evaluate the various indicators to arrive at a forecast. A
virtue of this approach is that it can attempt to deduce the consequences of unique or infre-
quent events not easily incorporated into historical data models.
Given human limitations in configural reasoning, the experience base of the analyst is likely to
be missing multi-indicator relationships with predictive power. Although the mind can deal with
as many as three variables in a configural sense, significant three-variable laws are still likely
to elude the FMA. This is so because sifting out a two- or three-indicator rule (model) from a
larger number of potential indicators will involve considering many more than three variables
at a time. Thus, the FMA will have great difficulty discovering laws that are obscure enough to
circumvent the efficiency of the market or complex enough to comply with the Law of Requisite
Variety.
B. Computer-Based Trend Following System
Some FMAs, notably managers of several large commodity trading funds, have chosen the
objectivity of computerized price-trend-following systems. Trend-following models use various
kinds of price-based indicators such as moving averages, recent price ranges (channels),
oscillators, etc. An example might be: if the price exceeds a 20-day moving average by a certain
percentage, an up-trend criterion is met and a long position is taken. Trend persistence is
the underlying assumption.
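The 20-day moving-average rule mentioned above can be sketched as follows; the 1% breakout margin is an assumed parameter for illustration, not one taken from the text.

```python
def trend_signal(prices, window=20, breakout=0.01):
    """Return 'long', 'short', or 'flat' from a simple moving-average rule.

    A sketch of a generic trend-following system: go long when the latest
    price exceeds the trailing moving average by `breakout` (1% here),
    go short when it falls below by the same margin, otherwise stay flat.
    """
    if len(prices) < window:
        return "flat"
    ma = sum(prices[-window:]) / window
    last = prices[-1]
    if last > ma * (1 + breakout):
        return "long"
    if last < ma * (1 - breakout):
        return "short"
    return "flat"

print(trend_signal([100.0] * 19 + [105.0]))  # prints "long"
```

The simplicity that makes such rules easy to test in bulk is also, as the text goes on to argue, what makes them easy for the whole market to replicate.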
The computer has been used extensively in the development and optimization of such sys-
tems. Because they are relatively simple to implement and test, thousands of variations on the
basic trend-following theme have been investigated.
Although such systems have produced significant profits when markets are in well-defined trends,
they generate many false signals and cause severe capital erosion during choppy or trendless
markets. Thus, the investment returns show significant volatility (risk). Undercapitalized or undisciplined
traders tend to abandon the system before trends and profits materialize.
The low signal reliability and attendant risks are not surprising. First, most trend-following systems
tend to be highly similar (i.e., buy on strength and sell on weakness), and there are many
of them. Above-average risk-adjusted returns in an efficient market require a degree of uniqueness.
But unique they are not. Second, because such strategies are based only on historical
price data analyzed in simple ways, they are not in conformity with the Law of Requisite Variety.
Third, many trend systems are overfitted. This results from efforts to improve the poor signal
accuracy of the basic trend-following system (usually correct 40 to 50 percent of the time). Overfit
results when numerous qualifying rules are added and many free parameters are optimized
for a given set of historical data. Validation on independent data is rarely done. Often a
newly developed trend-following system shows excellent results in historical simulation but
produces actual results that are much worse, a sure-fire symptom of overfit. After some hard
knocks in the real world, many trend system developers are found back at the drawing board,
re-optimizing and refitting their systems.
C. Optimization of Rule Based Models
This approach makes heavy use of the computer, but frequently it is a misuse. Optimization
refers to the progressive development and refinement of a set of trading rules by repeated revisits
to the same body of data. After each pass, rules are added or changed to improve
signal accuracy. Of course, results seem to get better after each pass, but this is simply because
the model is being forced to fit the data closer and closer. Overfit is the inevitable result
unless precautionary measures are built into the optimization process. Predictions on new data
are usually quite disappointing.
A helpful measure would be to reserve some data for testing each newly revised version of
the model. So long as signal accuracy in the reserved data improves, the modifications make
sense. When the model becomes overfit, the optimization process would be stopped.
Another potential weakness is the FMA's reliance on his configural thinking to propose rule sets
to test. The computer is used only to test what the analyst hypothesizes. Configural thinking
limits prevent efficient searching of the large number of possible rule sets when the number
of indicators exceeds just a few.
D. Indicator Voting
This is a common approach to synthesizing a forecast from many indicators. Each indicator is
graded as to its current bullish, bearish, or neutral implications. Then bullish and bearish votes
are summed algebraically, creating a composite score. A bullish indicator is given a rating of +1,
and a bearish one is given a -1. If seven indicators are graded as bullish, and ten are graded as bearish
(+7 - 10 = -3), a bearish prediction is given.
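The voting scheme reduces to an algebraic sum. A sketch, with invented indicator readings:

```python
def vote_forecast(grades):
    """Sum +1 / -1 / 0 indicator grades into a composite forecast."""
    score = sum(grades.values())
    if score > 0:
        return "bullish", score
    if score < 0:
        return "bearish", score
    return "neutral", score

# Hypothetical readings: seven bullish (+1) and ten bearish (-1) indicators.
readings = {f"bull_{i}": +1 for i in range(7)}
readings.update({f"bear_{i}": -1 for i in range(10)})

print(vote_forecast(readings))   # ('bearish', -3)
```

Note that the sum treats every indicator as equally weighted and independent, which is precisely the assumption criticized below.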
This approach is subject to a number of weaknesses. First, combining indicators by addition
or subtraction creates a linear model and ignores the possibility of more complex non-linear
indicator relationships. The Law of Requisite Variety tells us to expect complex relationships
among variables when modeling a complex system such as a financial market. Second, the
Efficient Market Hypothesis implies that the voting model is unlikely to achieve high levels of
predictive power because many analysts can create such simplistic composite models. An ex-
ception would be if an FMA had some powerful indicators not known to others. Third, it as-
sumes that each indicator has equivalent, relevant, and non-redundant predictive information.
Summing two indicators that are relevant but redundant amounts to double counting. Including
irrelevant indicators adds noise that can dilute or destroy the information of good indicators.
On the other hand, if the FMA attempts the task of selection and weighting, he falls prey to the
mind's limited configural thinking abilities.
E. Multiple Discriminant and Multiple Regression Models
By far the most intelligent use of the computer by FMAs has been the application of computer-
based multi-variate modeling methods. Discriminant analysis produces models that distinguish
one class of items from another, such as BUY-DAYS and NON-BUY-DAYS. Regression analysis
produces models that estimate a continuous variable (for example, the percent change in
the Standard and Poor's 500 over the next thirty days). The mathematical basis of both methods
is quite similar.
These multi-variate approaches create models that are weighted combinations of variables (indicators)
that best classify or predict the variable of interest (the dependent variable). The weights are determined by the
computer so that the most important variables get the most weight. In addition, careful attention
is paid to avoid including variables that are redundant. One version, called step-wise
regression (discriminant), builds a model in steps, searching for variables that increase the fit
of the model. This approach starts with a relatively large number of indicators and selects from
them the indicator with the highest linear correlation to the dependent variable. The second variable
added to the model is the one that, in conjunction with the first, provides the biggest gain
in correlation. The process continues as long as each new variable is significant according to the
selection criteria used to build the model. At least two organizations have produced models
with this procedure, and real-time predictions (ex-ante) since 1975 appear to be good for predicting
trends over time frames as short as three months.{7,8}
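A compact sketch of forward step-wise selection as described above. The data are synthetic, and the gain criterion (improvement in R-squared) and stopping threshold are simplifying assumptions, not the procedure used by the organizations cited.

```python
import numpy as np

def forward_stepwise(X, y, min_gain=0.01):
    """Greedy forward selection: repeatedly add the variable giving the
    largest improvement in R-squared until no candidate clears `min_gain`."""
    n, p = X.shape
    chosen, best_r2 = [], 0.0
    while True:
        gains = {}
        for j in range(p):
            if j in chosen:
                continue
            # Fit least squares on the chosen variables plus candidate j.
            A = np.column_stack([X[:, chosen + [j]], np.ones(n)])
            w, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ w
            gains[j] = 1 - resid.var() / y.var()
        if not gains:
            break
        j_best = max(gains, key=gains.get)
        if gains[j_best] - best_r2 < min_gain:
            break
        chosen.append(j_best)
        best_r2 = gains[j_best]
    return chosen, best_r2

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 8))
y = 2.0 * X[:, 3] - 1.0 * X[:, 5] + rng.normal(scale=0.5, size=300)

chosen, r2 = forward_stepwise(X, y)
print(chosen)   # indices 3 and 5 should be picked first
```

Because each step evaluates variables one at a time against a linear fit, the procedure inherits the structural assumptions criticized in the next paragraph.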
Despite the virtues, these modeling approaches rely on a number of assumptions that are likely
to be overly simplistic and restrictive for financial market predictive modeling. Thus, complex
relationships with significant forecasting power may get overlooked.
Both regression and discriminant modeling attempt to fit an equation to data. A potential weakness
in any methodology that attempts to fit equations to data is the assumption that the correct
structural form of the equation (e.g., linear, quadratic, cubic, etc.) is known by the analyst
prior to analyzing the data. In the sciences, there is often good theory to indicate the proper
form. However, when complex, poorly understood problems are being analyzed, this is often
not the case. In such cases, the analyst often assumes a linear structure and hopes that non-
linear effects can be treated by transforming some variables and/or including cross-product
terms. However, the Law of Requisite Variety tells us to expect an almost unlimited variety of
complex relational forms. Thus, the analyst is on the horns of a dilemma. Given the known
limitations of human intellect, it is unreasonable to expect an analyst to have sufficient insight
to choose a correct model equation for a complex system. Yet these modeling methods require
the analyst to choose. The choice is usually convenient but too simple.
Any data modeling procedure based on a fixed and pre-assumed form is likely to miss im-
portant but complex relationships when analyzing financial market data. The Efficient Market
Hypothesis implies valuable forecasting information may lie buried among complex relation-
ships, for that is where most analysts are unable to look.
While underfitted with respect to structure, traditional regression models tend to be overfitted
with respect to the number of variables they include. Most practitioners do not reserve data for
testing the predictive power of the fitted model. Thus, there is a tendency to allow too many
variables into the model relative to the amount of sample data. By including enough predictor
variables relative to the number of samples analyzed, the model can be made to fit the data very closely
even if the assumed form is linear. Each time a new variable is added, the space within which the
samples are projected increases by one dimension. Each additional dimension gives the
hyper-plane of the model a new independent direction in which to angle itself. This permits ever
greater degrees of fit, though not necessarily greater degrees of predictive power.
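The dimension effect just described is easy to demonstrate. In this sketch the data are purely random, an assumption made deliberately so that every improvement in fit is spurious: each added random predictor can only reduce the least-squares residual, even though none carries real information.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 30
y = rng.normal(size=n)                 # a purely random "dependent variable"
X = rng.normal(size=(n, n - 1))        # purely random candidate predictors

prev = float("inf")
for p in (1, 5, 10, 20, n - 1):
    A = np.column_stack([X[:, :p], np.ones(n)])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    sse = float(((y - A @ w) ** 2).sum())
    assert sse <= prev + 1e-9          # fit never worsens as dimensions grow
    prev = sse
    print(p, round(sse, 4))
```

With 29 predictors plus an intercept against only 30 samples, the residual collapses to numerically zero: a perfect fit with zero predictive power, which is the overfit-by-dimension trap the text warns against.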
F. Towards a More Robust Methodology
In Part III, we outlined a number of difficult issues that must be confronted when developing
financial market predictive models from historical data. In this section we have considered several
methodologies in current use and pointed out how they fail to adequately deal with one or
more of the difficulties.
In light of these considerations, we contend that robust data analysis and modeling methods
should meet the following objectives.
1) Ability to detect subtle information that has likely been
ignored by most FMAs; more specifically, predictive laws
that involve three or more complexly interrelated indicators.
2) Ability to detect highly non-linear relationships without
having to specify those relations in advance of the analysis.
In other words, the analysis discovers the form of the model
expressed in the data.
3) Ability to deal with large numbers of candidate indicators,
yet perform the analysis in a reasonable amount of time, by
utilizing intelligent search methods.
4) Ability to extract maximum information from data without
committing the sin of overfitting. This will require some
type of feedback that lets the method know when the model
is being forced to fit noise rather than information.
5) No need to assume the correct statistical distribution is known.
In the next section we outline the AI/PR data analysis methodology as one possible approach
to meeting these objectives.
PART V
ARTIFICIAL INTELLIGENCE / PATTERN RECOGNITION
A. Vector Spaces to Represent Patterns
A pattern is a set of measurements that describe a given sample. Each measurement quantifies
one attribute of the sample. A sample could be a day in the history of the stock market,
and its attributes could be the levels of various indicators on that day. A class of samples of interest
to the FMA is all days that were low points prior to the inception of an uptrend. AI/PR can
help us determine if there is an indicator pattern common to that class.
In order for a computer to perform AI/PR, the patterns must be represented in a way that can
be utilized by a digital computer. A common way to represent a sample's pattern is by the location
of a point in a multi-dimensional grid or space. Such a space is known as a vector space
or attribute space. Each dimension or axis of the space represents one measurable attribute
of the sample.
The simplest space, or grid, is composed of one dimension (1-D) and is exemplified by a straight
line. Any point in a 1-D space can be uniquely specified by a single number representing its
distance from the origin (0 value) of the space. If a sample is characterized by only a single attribute,
such as height, a 1-D space suffices (see Figure 4). However, if we wish to display
more information about a sample, a higher dimensional space is required. A two dimensional
(2-D) space or grid is exemplified by a piece of graph paper. To uniquely specify a location in
a 2-D space requires two numbers representing distances along two mutually perpendicular
axes (see Figure 5). A three dimensional (3-D) space is required to display samples characterized
by three measurable attributes (see Figure 6). Though we can't visualize them, hyper-
space grids (i.e., spaces containing more than three dimensions) are used when available
samples are described by many attributes. Readers may recognize that an attribute space is
nothing more than a Euclidean space of one or more dimensions, with each axis mutually
perpendicular to all others.
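Representing samples as points in an attribute space is straightforward in code. A sketch, in which the indicator names and values are hypothetical:

```python
import math

def distance(a, b):
    """Euclidean distance between two patterns in an N-D attribute space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Each day is a point; each axis is one indicator reading (values invented).
day_a = (0.62, 1.25, 9.1)   # e.g. PSSR, TRIN 10-day MA, mutual fund cash %
day_b = (0.61, 1.30, 9.0)
day_c = (0.20, 0.80, 4.5)

print(distance(day_a, day_b) < distance(day_a, day_c))  # True: a is nearer b
```

The same function works unchanged for 20-D patterns, which is exactly the advantage the computer holds over the eye.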
FIGURE 4
A One Dimensional Attribute Space
The location of the point at 5'8" ("author's pattern") indicates the author's height. The point is a 1-D pattern.
FIGURE 5
A Two Dimensional Attribute Space
The location of the point in 2-D space gives additional information about the author. The 2-D pattern specifies both
his height and weight. Note: the axes are perpendicular, but don't appear so due to the perspective.
FIGURE 6
A Three Dimensional Attribute Space
The 3-D pattern shown indicates the author's height, weight, and age by its location in the 3-D space. Note: the
three axes are mutually perpendicular. People can't visualize spaces composed of more than three dimensions,
but computers can perform calculations for spaces of N dimensions.
It is interesting to note a possible connection between our inability to visualize spaces containing
more than three dimensions and the number of facts that we can effectively process in
configural thinking (i.e., a maximum of three or four). Computers, on the other hand, can easily
construct vector spaces containing many dimensions. A 20-D space is no more difficult than
a 3-D space. This makes it a simple matter for a computer to detect degrees of similarity among
samples characterized by numerous attributes. Thus, the question "what attributes are really
common to BUY DAYS?" becomes answerable.
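To make this concrete, the similarity calculation a computer performs is the same Euclidean distance formula regardless of the number of dimensions. A minimal modern sketch (the sample values are invented for illustration):

```python
import math

def distance(a, b):
    """Euclidean distance between two points (patterns) in an N-D attribute space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# A 3-D pattern (height in inches, weight in lbs, age) is handled exactly
# like a 20-D pattern; only the length of the tuple changes.
p3 = (68.0, 150.0, 40.0)
q3 = (77.0, 210.0, 25.0)
print(distance(p3, q3))

p20 = tuple(float(i) for i in range(20))
q20 = tuple(float(i + 1) for i in range(20))
print(distance(p20, q20))  # every coordinate differs by 1, so this is sqrt(20)
```

The closer two points are by this measure, the more similar the samples they represent.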
B. Discrimination
Discrimination in this context refers to the task of classifying a sample whose true class is uncertain,
but whose attributes are known. Thus, an enhancement in the ability to discriminate
is equivalent to a reduction in uncertainty. FMAs don't know with certainty if today is a BUY DAY
(i.e., an uptrend is about to occur), but the levels of various technical indicators are known with
certainty. We wish to reduce our uncertainty about today's class (BUY DAY or NON-BUY DAY)
based on those known pieces of indicator information.
In vector space terms, discrimination is possible when samples from each class cluster in distinct
regions of the attribute space. In other words, the clump(s) of BUY DAYS is far removed
from the clump(s) of NON-BUY DAYS. Obviously, for this to occur, the attribute axes must contain
class-separating information: in a good attribute space, birds of a feather
flock together, but not with others. When attempting to classify a sample of unknown class,
its known attributes locate a unique point in vector space. If the immediate vicinity is dominated
by known BUY DAY samples, then by the fundamental axiom of pattern recognition, the mystery
sample is inferred to be a BUY DAY as well. For in pattern recognition, vector space proximity
is equivalent to class similarity. Since there can be varying degrees of proximity, the
classification is stated as a probability rather than an absolute yes or no.
This concept is illustrated in figures 7, 8, and 9. Consider the 1-D height space.
[Figure: two distinct clumps along the 1-D height axis, o's (jockeys) at the short end and x's (basketball players) at the tall end, with the unknown sample falling inside the x cluster.]
O = JOCKEY   X = BASKETBALL PLAYER
FIGURE 7
Jockeys and Basketball Players in Height Space
Samples from the two classes are located in 1-D height space. The classification power of this attribute is evidenced
by the fact that jockeys clump in a height range that is very distinct from the region occupied by the basketball
player cluster. Classification of the unknown with a height of 6'5" is easy, as it lands in a region dominated
by basketball players. Computers use a measure of distance to determine which cluster is closer to the unknown.
Figure 7 shows samples of known jockeys (symbol o) and known basketball players (symbol
x) located in height space. The two classes tend to cluster in distinct regions of attribute space.
This is highly desirable, as it allows easy classification of a sample of uncertain class. This is
equivalent to saying that the attribute height contains extremely useful class-separating
information. For most problems, one attribute is not sufficient.
Thus, the task of the AI/PR program is to determine which combination of attributes displays the
best class separation. The AI/PR process starts with an ample supply of samples whose class
is already known. Each time a new combination of attributes (i.e., vector space) is evaluated,
the program can measure how well the goal of class separation is being achieved.
Now consider a slightly more difficult problem, one requiring two attributes. We wish to dis-
criminate between football players and basketball players. See figure 8 below.
[Figure: football players (o) and basketball players (x) intermixed along the 1-D height axis, with the unknown falling in the overlap region.]
O = FOOTBALL PLAYER   X = BASKETBALL PLAYER
FIGURE 8
Basketball Players and Football Players in Height Space
Height, by itself, is not a sufficiently informative attribute to separate the two classes, as both tend to occupy similar
height ranges. Classification of the unknown is uncertain, as samples from both classes are found in its immediate
vicinity. More information is needed.
Since both classes tend to be tall, we get an ambiguous situation in height space. The two
clusters (classes) have a degree of overlap. If given the problem of classifying an unknown
whose height is 6'5", no definite conclusion could be reached, because samples from both classes
are found in that region. The attribute height, by itself, is not a sufficiently informative indicator
to produce a good pattern recognition model. It's a more complex problem than the basketball
versus jockey problem, thus requiring more information. However, when we cleverly add weight
to the space and project our samples into the 2-D height/weight space, we get good class
separation. See Figure 9 below.
[Figure: in 2-D height/weight space the two clusters separate, football players (o) at higher weights and basketball players (x) at lower weights for the same heights, and the unknown lands inside the basketball cluster.]
O = FOOTBALL PLAYER   X = BASKETBALL PLAYER
FIGURE 9
Height and Weight Separate Classes
The samples of football players and basketball players from the prior illustration are shown in 2-D height/weight
space. The additional information provided by weight causes the two classes to separate, permitting classification
of the unknown. The pattern height = 6'5", weight = 210 lbs. locates a point in a region dominated by
basketball players. Conclusion: the unknown plays basketball.
With the addition of weight as an indicator, a good AI/PR program would sense class separation
had been achieved and cease the pattern induction process. With this model in hand,
classification of a person of unknown class, but known height and weight (height = 6'5",
weight = 210 lbs.), is possible. The pattern of the unknown is clearly that of a basketball player,
as its 2-D pattern (i.e., coordinates) is a point in height/weight space that lands squarely in
the basketball player cluster.
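This nearest-cluster classification can be sketched in a few lines. The training samples below are hypothetical, and assigning the unknown to the class with the nearest cluster centroid is one simple distance rule among several a real AI/PR program might use:

```python
import math

# Hypothetical training samples: (height in inches, weight in lbs).
football   = [(74, 250), (72, 280), (75, 260), (73, 240)]
basketball = [(78, 205), (80, 215), (77, 200), (79, 210)]

def centroid(samples):
    """Mean point of a cluster in attribute space."""
    n = len(samples)
    return tuple(sum(coord) / n for coord in zip(*samples))

def classify(unknown):
    """Assign the unknown to the class whose cluster centroid is nearest."""
    d_fb = math.dist(unknown, centroid(football))
    d_bb = math.dist(unknown, centroid(basketball))
    return "football" if d_fb < d_bb else "basketball"

# The article's unknown: 6'5" (77 inches), 210 lbs.
print(classify((77, 210)))  # prints "basketball"
```

With these samples the football centroid sits near (73.5, 257.5) and the basketball centroid near (78.5, 207.5), so the unknown falls squarely in the basketball cluster.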
Incidentally, it's fortunate that we were smart enough to select weight as the second attribute.
If we had been foolish enough to pick zodiac sign or hair color, we would not have gotten such
excellent class separation. Worse still would have been selecting an attribute that, due to
random effects, worked on this particular sample but had no general validity. For example, if
all football player samples were from New York and all basketball players were from Los Angeles
(due to poor sampling), a home-town attribute would cause separation, but clearly for the wrong
reasons.
Real world pattern recognition problems are rarely this clear, due to inherent complexity and
high levels of noise. Often the difference in information content between the best and worst
attribute spaces is small. Sharply defined clusters are never seen. However, the sensitive
measurements of AI/PR software, combined with the power of the computer, can detect useful
information in complex phenomena contaminated by high levels of randomness. Some interesting
problems that have been successfully approached with AI/PR include disease diagnosis,
searching for oil, weather prediction, economic forecasting, and financial market
prediction.
C. Steps in Building a Model
The process of building a model with AI/PR takes place in a number of steps. They are:
1) Defining the two classes
2) Proposing candidate indicators
3) Division of historical data into training and testing sets
4) Reducing the number of candidate indicators
5) Model Construction
6) Ex-Ante Testing
7) Adaptation
1) Defining the Two Classes: Top Days and Bottom Days
The first step is to identify for the AI/PR program the days in the past that belong to the two classes
of interest. For this example, we have chosen to construct a model that will classify each day
as a TOP DAY (i.e., we are at an intermediate term top) or a BOTTOM DAY (i.e., we are at an
intermediate term bottom). Let's define the criterion for an intermediate term trend as a price
move of at least 15%.
With hindsight, using eye or computer, we go back over our historical database and identify all
days that were highs in intermediate moves (e.g., trends in which prices moved at least 15%)
and label them TOP DAYS. Then we do the same for all lows and label them BOTTOM DAYS.
We now have a two-class pattern recognition problem. See figure 10 below.
[Figure: schematic S&P 500 price history over time, with TOP days (o) marked at intermediate-trend highs A, C, E, G and BOTTOM days (x) at lows B, D, F, H.]
O = TOP DAY   X = BOTTOM DAY
FIGURE 10
Historical Samples for a Top/Bottom Discrimination Model
The historical sample data is the raw material for the development of an AI/PR model. Samples A, C, E, and G
are known TOP days; B, D, F, and H are known BOTTOM days. Indicator levels for those days are known as well.
Discovering an indicator space that will cause the two classes to truly separate is the task given to AI/PR.
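The hindsight labeling described above can be sketched as a simple zigzag-style routine. The 15% threshold comes from the text; the starting-direction assumption and everything else in the code are illustrative:

```python
def label_turning_points(prices, threshold=0.15):
    """Hindsight labeling: an index is a TOP (BOTTOM) day once price
    subsequently falls (rises) by at least `threshold` without first
    making a new extreme."""
    labels = [None] * len(prices)
    pivot = 0        # index of the current candidate extreme
    direction = +1   # assume we begin by looking for a top
    for i in range(1, len(prices)):
        if direction == +1:                              # tracking a rising swing
            if prices[i] >= prices[pivot]:
                pivot = i                                # new candidate high
            elif prices[i] <= prices[pivot] * (1 - threshold):
                labels[pivot] = "TOP"                    # 15% decline confirms the top
                pivot, direction = i, -1
        else:                                            # tracking a falling swing
            if prices[i] <= prices[pivot]:
                pivot = i                                # new candidate low
            elif prices[i] >= prices[pivot] * (1 + threshold):
                labels[pivot] = "BOTTOM"                 # 15% advance confirms the bottom
                pivot, direction = i, +1
    return labels

# The high at index 2 and the low at index 4 qualify as turning points:
print(label_turning_points([100, 110, 120, 100, 95, 115, 120]))
```

A real implementation would also have to decide how to treat the unresolved swing at the end of the data, which this sketch simply leaves unlabeled.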
2) Proposing a Set of Candidate Indicators
This step is performed by the FMA. Because a computer cannot be creative, a knowledgeable
human must give it a list of candidate indicators thought to contain useful information. The FMA's
experience and intelligence are crucial here, and the success of the analysis rests on a good
starting set of indicators. A list of several hundred can easily be created, as each variation of
an indicator is considered a separate candidate. For example, the five-week change in T-Bill
rates is one; a ten-week change is a second. AI/PR will determine which is better for a given
problem.
The first step in generating a set of candidate indicators is to identify raw data series thought
to contain some useful information. In the case of the stock market, the list may include market
price data, advance-decline data, total volume, short sales volume, interest rates on T-Bills,
odd-lot volume, put-call statistics, etc. However, raw data is generally not useful in building
AI/PR models. It must be transformed in various ways to amplify its information content.
What are commonly known as technical indicators are created by transforming raw market
data with various mathematical operations, such as moving averages, ratios, differences, etc.
Most indicator transformations attempt to normalize the raw series in some way. Normalization
can mean removal of measurement units, removal of a trend (i.e., stabilizing its mean),
stabilizing its variance, etc. For example, the PSSR (Public to Specialists Short Sales Ratio)
indicator is created by taking the ratio of the raw public shorting volume to specialists shorting
volume, and then smoothing the figure with a moving average of 4 to 10 weeks. Raw advance/
decline (A/D) data can be transformed into the well-known A/D cumulative line, or a 10-day net
A/D oscillator.
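Two of the transformations just mentioned, a moving-average-smoothed ratio in the spirit of the PSSR and a 10-day net advance/decline oscillator, can be sketched as follows (function names and the leading-None convention are illustrative choices, not part of the original method):

```python
def moving_average(series, n):
    """Simple n-period moving average; None until enough history exists."""
    return [None if i < n - 1 else sum(series[i - n + 1:i + 1]) / n
            for i in range(len(series))]

def smoothed_ratio(numer, denom, n):
    """Ratio of two raw series smoothed with an n-period moving average,
    e.g. public short-sale volume over specialist short-sale volume."""
    ratio = [a / b for a, b in zip(numer, denom)]
    return moving_average(ratio, n)

def net_ad_oscillator(advances, declines, n=10):
    """n-day sum of net advances (advances minus declines)."""
    net = [a - d for a, d in zip(advances, declines)]
    return [None if i < n - 1 else sum(net[i - n + 1:i + 1])
            for i in range(len(net))]

print(moving_average([1, 2, 3, 4], 2))            # [None, 1.5, 2.5, 3.5]
print(net_ad_oscillator([10, 20, 30], [5, 5, 5], n=2))  # [None, 20, 40]
```

Each output series is a candidate indicator; the choice of smoothing length (4 weeks versus 10 weeks, say) makes each variation a separate candidate.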
The better the indicators, the better the final model. Unless the initial set of candidate indicators
contains some useful information, the best AI/PR program will fail to produce a good predictive
model. The human is very much in the loop.
3) Division of Historical Data Into Training and Testing Sets
A fundamental aspect of the scientific method is the requirement that a hypothesis be validated
on data other than that which gave rise to it. In this spirit, the AI/PR method holds aside a portion
of the historical sample data. Usually the data is divided into two sets, each containing half
of the samples. Each time a new indicator combination is examined, the first portion, called
the training set, is analyzed for potential class-separating power. If some is evident, the second
portion, called the testing set, is used to validate or invalidate the suspected classification
utility of the space. This procedure is called cross-validation. It is explained and illustrated later
on.
Let's assume that our entire historical database consists of one hundred samples: fifty BOTTOM
days and fifty TOP days. Half of the BOTTOM samples and half of the TOP samples would
be put into the training set. The remaining samples would be placed in the testing set.
If the training set shows PSSR to be a good indicator (i.e., bottoms and tops segregate in PSSR
1-D space), the AI/PR program will then check the testing set. If the indicator shows similar
discrimination power in the testing set, the program will retain it for testing in combination with
other indicators. Cross-validation is an extremely important defense against overfit when the
number of possible indicator combinations is large. If a candidate set of indicators is large (twenty
or more), the number of possible indicator combinations relative to the number of samples makes
it highly likely that false discrimination power will show up in many instances. This is particularly
so in high-dimension combinations (i.e., four or more indicators). Figures 12a, 12b, 13a, and
13b illustrate the cross-validation concept.
4) Reduction and Compression of Candidate Indicator List
Because complex processes can involve so many potential indicators (the candidate set) and
AI/PR is compute-intensive, there is a need to reduce the size of the initial set prior to model
building. There are two things that can be done.
First, irrelevant and redundant indicators can be eliminated from further consideration. This
step is not simple, as some indicators that appear without value on a stand-alone basis are
extremely valuable in multi-indicator combination. Thus, there are trade-offs to be made.
Second, the information of several indicators can be compressed (i.e., projected) onto a single
new indicator. Typically, this is accomplished by rotating the axes of a multi-indicator space into
a more favorable position, thus enhancing class separation, indicator independence, or variance
explanation.
Mass indicator screening and reduction is carried out with pattern recognition methods that are
less refined, but more rapid, than those used for model construction. Certain types of AI/PR
are useful for screening candidate sets containing up to five hundred indicators.
The subject of variable screening, dimensionality reduction, and information compression is a
large one and well beyond the scope of this paper. The important point is that much can and
should be done in the way of reducing the size of the initial set of indicators.
5) Model Construction
When the candidate set has been reduced to approximately thirty indicators, model construction
starts. There are numerous schemes for generating a model, and all involve trade-offs.
The most ambitious would be to allow the AI/PR program to consider all possible combinations
of indicators taken one at a time, two at a time, three at a time, up to all thirty of them.
Although this approach guarantees that the best model will be found, it is feasible only with the
largest and fastest computers and very large research budgets.
Good, though not the best, models can be found using step-wise procedures. Such approaches
limit the number of possible indicator combinations searched. First, all indicators are
considered on a stand-alone basis. The one with the highest predictive power is selected. Assume
indicator 26 was the 1-D winner. Then all two-indicator combinations involving indicator
26 are tried. The pair with the highest predictive power is selected. Assume that pair was 26
and 7. Next, all possible 3-D models using 26, 7, and each of the remaining variables are tested
for predictive power. The tri-indicator set with the highest power is selected, and an attempt is
made to add a fourth indicator. An interesting aspect of the selection process is that indicators
selected after the first one often look useless on a stand-alone basis, but have significant
information when acting in concert with several other indicators. These kinds of synergistic
effects are not visible to data analysis methods that assume each indicator has independent
predictive value.
The program stops when the addition of another indicator results in a decline in predictive power.
It may seem counter to common sense that more indicators could produce a decline in predictive
accuracy, but in practice it does. One reason is that the number of historical samples
is limited. As a vector space grows in dimensionality, its volume expands rapidly. Consider how
much more room there is in a one-foot cube than in a one-foot square. The finite number of
samples becomes sparser and sparser, until the clusters of like-class samples dissipate and
get lost in the noise. This phenomenon is known as Bellman's "curse of dimensionality."
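The step-wise search and its stopping rule can be sketched as a greedy loop. Here `score` is only a stand-in for the program's cross-validated class-separation measure, and the weights in the usage example are invented to show the synergy effect:

```python
def forward_selection(indicators, score):
    """Greedy step-wise search: start with the best single indicator,
    then repeatedly add whichever remaining indicator most improves
    score(subset); stop when no addition improves it."""
    selected = []
    best = float("-inf")
    remaining = list(indicators)
    while remaining:
        trial = {ind: score(selected + [ind]) for ind in remaining}
        cand = max(trial, key=trial.get)
        if trial[cand] <= best:
            break                    # another indicator would hurt, so stop
        best = trial[cand]
        selected.append(cand)
        remaining.remove(cand)
    return selected, best

# Toy scoring function: indicators 26 and 7 carry information, each
# added dimension costs 0.5 (a crude stand-in for sample sparsity).
weights = {26: 3.0, 7: 2.0, 4: 0.4}
def score(subset):
    return sum(weights.get(i, 0.0) for i in subset) - 0.5 * len(subset)

print(forward_selection([4, 7, 26, 13], score))  # ([26, 7], 4.0)
```

With these weights the search picks 26, then 7, and then stops, because any third indicator adds less information than its dimensionality cost, which is the behavior the article describes.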
Let's consider how the AI/PR program measures the predictive power of a given combination of
indicators. Recall that in an indicator space composed of relevant attribute axes, the samples
from the class of BOTTOM days will form one or more clusters that are distinct from clusters
of TOP days. Each time a different indicator space is considered, the program measures two
criteria of goodness (predictive power). First, the training samples are examined to see if the
BOTTOM DAYS and TOP DAYS separate to some degree. If class separation is not present
(i.e., the classes appear randomly intermixed), the indicator combination is rejected. (See figure
11.) But if class separation appears, the program then determines if that separation is also evident
in the testing samples. Classification power in both sets is required to rate the indicator
combination as worthy of further consideration. See the diagrams below.
[Figure: BOTTOM days (x) and TOP days (o) randomly intermixed throughout a 2-D indicator space.]
X = BOTTOM DAY   O = TOP DAY
FIGURE 11
A Poor 2-D Indicator Space - Classes Intermixed
If samples from the training set show no class separation, as in the illustration, the AI/PR program ceases its
investigation of that space.
[Figure: training samples. BOTTOM days (x) and TOP days (o) form well-separated clusters in the X1, X2 indicator space.]
FIGURE 12a
Training Set Shows Good Top/Bottom Separation
[Figure: testing samples in the same X1, X2 space. BOTTOM days (x) and TOP days (o) again form well-separated clusters.]
X = BOTTOM DAY   O = TOP DAY
FIGURE 12b
Cross-Validation: Testing Set Confirms Good Class Separation
In the training set BOTTOMS and TOPS separate in the X1, X2 indicator space. But before accepting X1, X2 as
a good 2-D model, its separation power must be validated in the test set samples. Classes separate there as well,
so the AI/PR program grades the indicator combination favorably.
[Figure: training samples. The two classes appear well separated in the X4, X21 indicator space.]
FIGURE 13a
Training Set Shows Good Class Separation...
[Figure: testing samples in the same X4, X21 space. BOTTOM days (x) and TOP days (o) are intermixed, showing poor separation.]
X = BOTTOM DAY   O = TOP DAY
FIGURE 13b
But Testing Set Does Not Confirm It!
The training samples seem to indicate indicators X4 and X21 have good conjoint classification power, but cross-validation
shows the effect was false, as BOTTOMS and TOPS fail to separate in the test set. The spurious class
separation in the training set was due to chance, a frequent occurrence when examining thousands of indicator
combinations and relatively few samples.
6) Ex-Ante Testing
The ultimate test of an AI/PR model is its ability to predict or classify on future data. Ex-ante
or out-of-sample data is from a period of time that was in neither the training samples nor the test
samples. For example, if our training samples and testing samples came from the period 1970
to 1980, we might use 1981 through 1984 as the ex-ante sample. This will indicate if the patterns
found in the 1970-1980 time period continue to have validity.
7) Adaptation
If the process under study is suspected to evolve over time, there is a need to allow the model
to adapt. There are a number of adaptive techniques. For example, as new samples take on
known class membership, they can be incorporated into the model, the oldest samples can
be deleted, or more recent samples can be given higher weights. There are many things
that can be done to permit gradual adaptation of the model.
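One of the weighting schemes mentioned, giving more recent samples higher influence, might look like this exponential-decay sketch; the half-life parameter and the function itself are illustrative assumptions, not part of the original method:

```python
def recency_weights(n_samples, half_life=50):
    """Exponentially decaying sample weights for gradual adaptation:
    the newest sample gets weight 1.0, a sample half_life observations
    older gets 0.5, one twice that age gets 0.25, and so on."""
    return [0.5 ** ((n_samples - 1 - i) / half_life) for i in range(n_samples)]

w = recency_weights(101, half_life=50)
print(w[-1], w[50], w[0])  # 1.0, 0.5, 0.25
```

Such weights could multiply each sample's contribution to the class-separation measure, letting old patterns fade without being discarded abruptly.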
D. Linear and Non-Linear Pattern Boundaries
The EMH, as well as the Law of Requisite Variety, leads us to suspect that pattern boundaries
(i.e., the surfaces that separate the classes) will be complex. Thus, pattern recognition methods
that assume the pattern classes to be linearly separable (e.g., with a straight line, plane, etc.)
may not be able to provide accurate discrimination. Fisher's Linear Discriminant Analysis is the
best-known method of this type. In the figures below we see one problem that is solvable by
linear methods, and a more complex one that is not.
[Figure: two classes separated cleanly by a single straight line in a 2-D indicator space.]
FIGURE 14a
Classes Are Linearly Separable
[Figure: the two classes interleave so that no straight line can separate them; a curved boundary is required.]
X = BOTTOM DAY   O = TOP DAY
FIGURE 14b
Non-Linear Boundary Required
In Figure 14a, the classes can be separated by a classical linear discriminant model. However, in Figure 14b
a flexible non-linear method is required to find the true class boundary. Complex systems display an unlimited
variety of non-linear effects. See Figures 16 and 17.
The most recent advances in AI/PR have been toward the development of methods that can
separate classes that are not linearly separable. This is desirable, as it is a more general approach
that can solve linear as well as non-linear problems. The spirit of such approaches is to let the
data express its own message rather than force-fit a pattern boundary whose shape was conceived
prior to the analysis. Flexible non-linear methods permit the true shape of the surface,
such as that in figure 3, to be approximated.
E. Overfitted Patterns
A potential danger of the more flexible methods (i.e., non-parametric, non-linear) is the generation
of overfitted pattern boundaries. Figure 15a shows a 2-D indicator space devoid of class-separating
information, but figure 15b shows how a contrived pattern boundary can achieve
apparent class separation in that same space. Clearly, the pattern boundary is spurious.
[Figure: BOTTOM days (x) and TOP days (o) scattered randomly throughout a 2-D indicator space.]
X = BOTTOM DAY   O = TOP DAY
FIGURE 15a
A 2-D Space That Should Be Rejected
BOTTOMS and TOPS are scattered randomly throughout this space. But if men or machines fit freely enough,
over-fitted pattern boundaries can emerge... (see 15b).
[Figure: the same random scatter as in 15a, with a highly convoluted boundary drawn to enclose a contrived TOP DAY REGION.]
X = BOTTOM DAY   O = TOP DAY
FIGURE 15b
An Overfitted Classification Boundary
Without the feedback provided by cross-validation, the computer will define highly contrived and false class
boundaries, even in random data. A test of this boundary on independent data would likely reveal its lack of
validity.
Although there are a number of approaches to avoiding overfit, the cross-validation approach
is the most conservative. The highly contrived pattern in the illustration would be seen to be
false when cross-validated on the test set. Thus, robust AI/PR methods can adapt the pattern
boundaries to something that approximates their true nature, while avoiding the trap of modeling
the noise in the data.
F. Expert Systems Versus AI/PR Models
Much attention in the artificial intelligence area has been on expert systems. Such systems
incorporate the rules and knowledge of experts, organized into a knowledge base. An inference
engine operates on the knowledge base to give the kinds of conclusions and explanations that
an expert would give when asked for advice. Successful expert systems have been developed
to aid in medical diagnosis, find mineral deposits, fix diesel motors, and offer certain kinds of
financial advice.
The effectiveness of an expert system is highly dependent on the accuracy of the knowledge
base. In situations where even experts don't have successful rules, the results may not be
satisfactory. Financial markets and other complex processes are characterized by extremely
complex workings not easily understood by people. Thus, an expert system alone is not likely to
forecast market trends well. On the other hand, expert systems can incorporate aspects of market
behavior that are difficult to quantify. Elliott Wave analysis may be a type of financial market
forecasting that would lend itself to a rule-based expert system.
The AI/PR approach has the ability to infer rules and laws from data that may not be apparent
to even the best experts. But it is confined to quantifiable indicators and rules for which extensive
histories can be made available. So, a combined AI/PR system and expert system
might make sense.
G. Limitations of the AI/PR Method
1) Requires large amounts of historical data
AI/PR induces predictive models from numerous observations (the more, the better). The more
complex the phenomenon, the more examples are needed. Thus, unique or infrequent events cannot
easily be incorporated into AI/PR models. Wars, strikes, government policy changes of an
infrequent nature, or indicators that signal once a generation are examples. Yet, these events
may be extremely important.
2) Information Must be Quantified
AI/PR can digest information if it can be quantified and extensive histories made available. Many
aspects of the FMA's work can be quantified, but some cannot. Information that is non-quantifiable
cannot be used in an AI/PR system.
3) Patterns Assumed to Remain Valid or Change Slowly
The pattern attributes and boundaries are assumed to have some durability. If either is subject
to large and abrupt changes, the models will not be able to adapt, and predictive accuracy
will be poor. This applies to any predictive method based on historical analysis.
4) Costs to Develop AI/PR Models Are High
Computer running costs for AI/PR are high. The process requires large computers or special
purpose computers designed for vector processing. Computer resources for an AI/PR analysis
may be ten to one hundred times those of a traditional regression analysis done on the same
problem. A mitigating factor is that some AI/PR programs can sense early in an analysis if any
worthwhile information exists in the candidate inputs.
5) A Good Candidate List of Indicators is Required
AI/PR can mine information from a database only if the information exists there. Some of the candidate
indicators must contain useful information. Thus, intelligent and experienced people are needed
to create the initial list. If the candidates are weak, the analysis is doomed from the start.
PART VI
PRISM: PATTERN RECOGNITION INFORMATION SYNTHESIS MODELING
PRISM is a non-parametric, non-linear AI/PR system developed by Raden Research Group,
used for generating multi-variate classification, estimation, and prediction models. The design
philosophy was to make intensive use of the computer in order to confront the complexities
associated with complex, unstructured problems for which large amounts of data exist.
1) PRISM can accept up to five hundred candidate variables.
2) PRISM makes no a priori assumptions about the structural form of the models or the distributions
of the variables. There is no need for variables to be linearly related to the dependent
variable, normally distributed, or uncorrelated with each other. PRISM models can take on highly
non-linear structure, if such is indicated by the data.
3) If the data is random or the candidate variables contain insufficient information to generate
a predictive model, PRISM will indicate such at an early stage of the analysis.
4) PRISM makes extensive use of cross-validation to avoid overfit.
5) Candidate variables can be of a scalar, binary, ordinal or categorical type.
Since the completion of PRISM (version 1.0) in 1982, it has been applied to developing prediction
models for a variety of financial market applications and Department of Defense applications.
Below we describe some of the models.
A. Application to Dow Jones Prediction
The dependent variable was defined as the percent change in the Dow Jones Industrial Average
over the next sixteen weeks. The candidate list of variables consisted of approximately two
hundred indicators designed by an analyst not associated with Raden Research Group. The
entire historical database extended from 1964 through 1983. Data from 1964 through 1978
was used to develop the model (i.e., training set and testing set samples came from this period).
A model composed of three variables was produced.
The model was then tested on data from 1979 through 1983. During 1979, predictions in the
trading range market of that period were marginally profitable. In 1980, several large trends were
correctly predicted just prior to or just after trend turning points. For example, a sharp decline
between 2/15/80 and 4/22/80 was predicted by negative forecasts that persisted between
2/18/80 and 4/24/80. The model also stayed positive during a strong uptrend that began in April
of that year, but turned negative somewhat early (August 20, 1980). The bear market in the
summer of 1981 was correctly predicted, as was the rally in the fall of that year. The large bull
market starting August 14, 1982, was preceded by bullish forecasts starting in June of 1982.
The model estimated from 1964 to 1978 data was left unchanged throughout the 1983 period
(i.e., no adaptation was permitted). In 1983, two variables in the model went to levels never
seen in the historical data, and the forecasts became inaccurate. In general, when variables
go out of historical ranges, the mathematics of the PRISM model pushes the forecast back to
the grand mean of the entire data set. In sum, the model did provide accurate predictions at
major and intermediate turning points for four of the five years of ex-ante data.
B. Soybean Model: Raden In-House Research
The dependent variable of the model was defined as the slope of a linear regression ten days
into the future. Training set and testing set data were taken from the time period 1977 through
1980. Raw data series were limited to the price, volume, and open interest of the soybean futures
market. Thus, the model was based only on technical indicators. Each of the three raw data
series was transformed into twenty technical indicators, for a total of sixty candidate inputs.
Thus, there were twenty indicators based on price data, twenty based on volume, and twenty
based on open interest. In general, the indicators were of the oscillator type (i.e., bandpass
filter outputs). PRISM produced a model composed of three of the sixty candidate indicators.
Interestingly enough, all three of the selected indicators were derived from the open interest
data. This is in marked contrast to most commodity trading models, which are based on price
data.
Ex-ante testing on data from 1981 through 1983 indicated that the model had the ability to detect
important trend reversals, though its signals were sometimes early. This also contrasts with
trend-following models, which detect trend changes with a lag. In 1983, a 12-month trading test
commenced using the mini-contract of soybeans on the MidAmerica Commodity Exchange. A
profit exceeding one hundred percent on capital was earned. Four of five signals were correct,
which was consistent with earlier levels of signal accuracy.
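The two ingredients of the soybean model described above can be sketched as follows: the dependent variable (the slope of a least-squares line fitted to the next ten days of prices) and one oscillator-type candidate indicator (here a simple difference of moving averages, a common bandpass-style transform). The function names and the particular oscillator are illustrative assumptions, not Raden's actual code.

```python
import numpy as np

def forward_slope(prices, horizon=10):
    """For each day t, fit a least-squares line to prices[t:t+horizon]
    and return its slope -- the model's dependent (target) variable.
    The last `horizon` days have no future window and are left as NaN."""
    x = np.arange(horizon)
    slopes = np.full(len(prices), np.nan)
    for t in range(len(prices) - horizon):
        slopes[t] = np.polyfit(x, prices[t:t + horizon], 1)[0]
    return slopes

def oscillator(series, fast=5, slow=20):
    """A simple bandpass-style oscillator: the difference of a fast and a
    slow moving average, which passes mid-frequency swings while damping
    both long trend and short-term noise."""
    def sma(s, n):
        return np.convolve(s, np.ones(n) / n, mode="valid")
    f, s = sma(series, fast), sma(series, slow)
    return f[-len(s):] - s  # align the two averages on their common span

# Example: an open-interest series transformed into one candidate indicator
open_interest = np.cumsum(np.random.default_rng(0).normal(size=250)) + 100.0
osc = oscillator(open_interest)
target = forward_slope(open_interest)
```

In the article's setup, twenty such transforms per raw series (price, volume, open interest) would yield the sixty candidate inputs from which PRISM selected three.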
C. CyberTech Research Partnership
In 1984, Raden Research Group was engaged by the CyberTech research and development
partnership to develop a series of short-term prediction models for twelve different futures
markets. For each model, five to ten years of historical data were used. The candidate indi-
cators were based on data related to the futures market being modeled, as well as on exogenous
data series.
Ex-ante testing for each model was done on the most recent two years of data. Two measures
of predictive power were used. First, the predictions were correlated with actual outcomes.
Second, the fraction of times the forecast was directionally correct was noted. In general, the
forecasts were found to contain significant information when they exceeded a threshold; in other
words, forecasts close to zero were no better than random. The test of directional ac-
curacy lends itself to the binomial test for nonrandomness. (See "Significance: What Is It?"
by Arthur Merrill, MTA Journal, February 1981.)
A model to forecast 5-day changes in the S&P 500 Index was directionally correct over seventy
percent of the time when the forecast exceeded a specified threshold. This level of accuracy
is significant at the ninety-nine percent level. Other models were correct on direction often enough
to be significant at the 99.9 percent level.
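The binomial test of directional accuracy mentioned above can be sketched directly: it asks how likely a given number of correct calls would be if each forecast were merely a coin flip. The sample size of 100 below is an illustrative assumption, not a figure from the study.

```python
from math import comb

def binomial_p_value(correct, n, p=0.5):
    """One-sided probability of getting at least `correct` directional
    calls right out of n forecasts if each call were a 50/50 coin flip."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(correct, n + 1))

# Illustrative: 70 correct directional calls out of 100 thresholded forecasts
pv = binomial_p_value(70, 100)
print(pv < 0.01)  # well beyond the ninety-nine percent significance level
```

A p-value this small means seventy-percent directional accuracy on a sample of that size could hardly arise from random forecasts, which is the sense in which the article calls the result significant.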
D. Other Examples
Prior to the development of PRISM, one of its designers investigated the application of AI/PR
to major trend forecasting and stock selection. These models were developed with techniques
that are relatively primitive by today's standards, yet the results were encouraging. Figures
16 and 17 are two-dimensional cross sections of the two models.
[Figure 16 symbol key: major market bottoms; neutral trend; major market tops]
FIGURE 16
Stock Market Forecasting Model
Two macroeconomic variables (X1, X2) were selected from over 100 by an AI/PR system for their ability to classify
major market tops and bottoms. Note the non-linear boundaries. The dark region at the top of the 2-D indicator space
is where bottoms occurred, and the white region at the right has been associated with market tops. Developed
for a financial institution in the 1970s.
[Figure 17 symbol key: best performing stocks relative to market; second best; third best; underperforming the market]
FIGURE 17
Stock Selection Model
Balance sheet and income statement data were used to construct candidate variables for a model to forecast rel-
ative price performance of stocks (versus the S&P 500). The figure is a 2-D cross section of a model that is composed
of four indicators. The model has been in real-time use since the mid-1970s by the institution for which it was originally
developed. The best performing stocks are found in the darkest regions; stocks in the white region are likely to
underperform the market.
The Future
There is a significant opportunity for synergy between the FMA and the computer, for each
has unique and complementary talents. The FMA uses experience and creative intellect to de-
rive new indicators and improve existing ones (computers have no such ability). Computers
programmed with AI/PR are exploited for their ability to analyze many factors simultaneously,
detect complex relationships, and generate multi-indicator forecasting models, a task beyond
the intellectual powers of man.
Although the first phase of the Industrial Revolution saw the development of machines to am-
plify man's physical powers, a second phase is starting that will result in machines to amplify
man's intellectual powers. Semantic debates that question a computer's ability to think as hu-
mans do miss the point. The real issue is whether or not such machines can be of practical
value to man. Initial indications are that, indeed, they can.
FOOTNOTES
1. Simon, Herbert A., Models of Man (New York: John
Wiley, 1957), p. 198.
2. Hayes, J. R., Human Data Processing Limits in Decision
Making, Report ESD-TDR-62-48 (Massachusetts: Air Force
Systems Command, Electronics Systems Division, 1962).
3. Rivett, Patrick, Model Building for Decision Analysis
(New York: John Wiley, 1980), pp. 14-15, discussing work
of Bavelas at Stanford University.
4. Wiener, Norbert, Cybernetics: or Control and Communication
in the Animal and the Machine (Massachusetts:
MIT Press, 1948).
5. Ashby, W. R., An Introduction to Cybernetics (New York:
John Wiley, 1963).
6. Felsen, Jerry, Cybernetic Approach to Stock Market Analysis
(Hicksville, New York: Exposition Press, 1975), p. 50. Book is
available from Raden Research Group, New York, New York.
7. Two firms that have developed market forecasting models with
multi-variate linear discriminant and/or regression analysis are:
The Institute for Econometric Research, Inc., 3471 North Federal
Highway, Fort Lauderdale, Florida 33306; and William Finnegan
Associates, Inc., 21235 Pacific Coast Highway, Malibu, California
90265.
OTHER REFERENCES
1. Felsen, Jerry, Decision Making Under Uncertainty: An
Artificial Intelligence Approach (Jamaica, New York:
CDS Publishing Company). Book is available from Raden
Research Group, New York, New York.
2. Fosback, Norman G., Stock Market Logic, (Fort Lauderdale,
Florida: The Institute for Econometric Research, ninth
printing 1985)
3. Bellman, Richard, An Introduction to Artificial Intelligence:
Can Computers Think? (San Francisco: Boyd & Fraser, 1978).
4. Batchelor, Bruce G., Pattern Recognition Ideas in Practice,
(New York: Plenum Press, 1981).
5. Kaufmann, Arnold, The Science of Decision Making,
(New York: World University Library/McGraw Hill, 1968).
6. Albus, James S., Brains, Behavior and Robotics, (Peterborough,
New Hampshire: Byte Books/McGraw Hill, 1981).
BIOGRAPHY
Currently the President of Raden Research Group Incorporated, David R. Aronson began his
career in business as an Account Executive with Merrill Lynch in 1973. After four years, he
conducted research on the performance and trading strategies of several hundred commodity
trading advisors for AdvoCom Corporation. In 1978, he began the Raden project to determine
the feasibility of applying advanced data analysis and modeling techniques to forecasting the
behavior of financial markets. Mr. Aronson also began the development of PRISM (Pattern
Recognition Information Synthesis Modeling), a multi-variate pattern recognition system
employing automated inductive inferencing, heuristic programming, and cybernetics.
Mr. Aronson received his BA in Philosophy from Lafayette College in 1967.