




C. Powell




Throughout human history there have been a number of attempts to come

to grips with the relationships between the events we observe and the meanings

we attach to these observations. Ever since Plato suggested that the essence

of Reality resided in the IDEA of

the object, we have ranged the gamut of

possible suggestions about how we come to "know" our world.

This present paper represents yet another attempt to elucidate this

complex and baffling problem. The approach to be put forward here owes a

substantial debt to the writings of Karl Popper and his followers, who have

laid the cornerstone for the "Constructionist" position I will enunciate.

Part of what motivates me to be daring enough to attempt to present

yet another "new" position has been the years of fascinating research I have

conducted into educational practice. Not since John Dewey has an educator

dared to venture into the domain of the philosopher. However, my encounter

with Chris Argyris and his fertile concept of "double-loop" learning has

prompted me to step forward. Another factor which prompts my boldness has

been that most of the writers I have read concerning issues in the philosophy

of knowledge (or science) seem to me to be approaching these issues from the

position of first-hand knowledge about philosophy, and second-hand knowledge

about education as it pertains to knowing or science as it pertains to doing.

It is within this frame of reference that I, as an educator who has

devoted years of research to the study of the acquisition of "knowledge" now

see fit to venture out of my domain to address the philosophical issues my

research has encountered.

The second issue I propose to discuss is the nature of current

educational practice in the light of these philosophical issues which have

been uncovered by my research. Most criticism of education at the present

time seems to focus upon the

content of the activity; the "What is to be

taught?" issues. The issues I propose to address, as these translate from

the theory of knowledge I will be proposing, are more concerned with the process

of education; the "How to teach?" issues.

In the third part of

the paper I will address the educational

implications I infer from the constructionist position. These implications

also come from the results of my research.

In the proposal for an alternative

approach to education, then, I have both philosophical and empirical grounds.

It may also be of considerable importance that the empirical findings preceded

the formulation of the philosophical position.

In total, the combination of

the philosophical model and the instructional model represents my attempt to

make sense from some rather surprising research findings; namely that the

wrong answers seem to be more educationally meaningful than the right ones.

Such a counterintuitive finding has required considerable rethinking on my part,

and this paper is my attempt to share this thinking with others.



This section of this paper will consist of three parts.

It will

begin with a definition of the constructionist position and a brief explanation
of the central assumptions made within this position.

The second section will deal with the

issue of the nature of

physical reality, both as viewed historically and from the constructionist

position. My reasons for taking this position will be explained and my

indebtedness to others acknowledged as the discussion proceeds.

The third part will deal with

the nature of knowing from both the

historical and the constructionist perspective as well. My debt to the

work of Piaget will become particularly evident in this section.

I will

discuss the relative functions of the three primary sources for knowledge;

tradition, exploration and intuition. As an important part of the discussion

I will put forward a theory about how knowledge is increased. I will conclude

the discussion with an attempt to put the constructionist position into

symbolic terms.


A. The Constructionist Position Defined.


The basis for the constructionist position is neither new nor

particularly startling. It is founded upon the assumption that direct

knowledge about the true nature of the world in which we live is not available

to us.

Instead, what we know is constructed by a rather elaborate process

which involves chipping away at the unknown in ways which enable us to

provide some sort of meaningful structure to the otherwise confusing buzz of

objects and events which surround us.

The major departures from some of the current thinking which this

position presents are in the assumed nature of the physical reality about

which we are learning, and in the way in which I formulate how this learning

takes place.

For a variety of reasons which I will discuss later, I describe the

physical reality around us upon the basis of the indeterminacy postulate.


Within this postulate it is assumed that the likelihood of an event occurring is

statistical rather than absolute. This assumption is equivalent to saying

that absolute truth does not exist with the educational implication that

teaching for "right" answers may be both futile and irrelevant.

The second assumption which is distinctive within this position is

that the relationship between knowledge and reality is dependent in only one

direction, from reality to knowledge and not the other way. Knowledge,

therefore, represents the accumulative ability to predict outcomes.


Knowledge accumulates by refinement as well as by disconfirmation, by the successful

prediction of an event before it is demonstrated as well as by the failure to

observe an expectation.

In contrast to Popper, then,

I take the position

that the critical element in scientific discovery is demonstration (of which

failure to demonstrate is an important part) rather than disconfirmation in

isolation. Such demonstrations are always statistical in nature at a

probability level of less than +1 (or greater than -1 in the mutually

exclusive case) with an improvement in the explained variance always possible

from a better formulation of the theory.

With absolute truth unavailable, knowledge becomes relative and

contingent upon our ability to know in advance an established part of the

total outcomes of an action by means of repeated demonstration of this

portion of the effects. The meaning of this action-outcome relationship is

constructed by us into a theory, which is only as good as the amount
of the total consequences of an action it is able to explain.


No theory is either true or false.
Instead, all theories contain a knowable amount of

"error" in the sense of those parts of the outcome NOT explained.

This approach has two advantages. First, we can determine precisely

which of two theories is "better" upon the basis of either the stability or

the amount of explained outcome it predicts. Second, with reality indeterminate,

all outcomes are changeable knowing the appropriate contingencies. This

conclusion implies that the limitations upon human endeavour arise from lack

of knowledge rather than lack of possibilities. The educational implications

for this conclusion are that we should focus upon how knowledge is generated

in our instructional practice rather than (or at least in addition to) attending

to the transmission of what is already "known." We start from the known

and progress to the next level of the knowable, concentrating upon the development
of the knowledge extension skills rather than upon the known by itself.

Similar proposals have, of course, been made before. However, these

proposals have tended to lack either the empirical basis or the philosophical
basis, or both, which they needed for effective implementation.

B. The Nature of Physical Reality.

One of the striking features encountered when reading philosophical

writing in contrast with scientific writing is that the philosopher seems to

be trying to deal in absolutes while the scientist seems to prefer to surround

his or her statements with hedging and conditionals. Of course the scientist

has used statistical procedures to achieve the findings being reported, which

demand conditional and probabilistic statements in the reporting and the

philosopher is not encumbered by such constraints. In addition, philosophers

are usually more concerned with process than with product, while the scientist

starts, of necessity, with outcome.

There was a time, however, when the scientist tended to talk in

absolute terms as well, although that time has been gone now for about a

century. This difference leads me to wonder whether or not philosophers are

assuming an ABSOLUTE TRUTH may ultimately be available from scientific activity

in a deterministic and closed reality. Certainly, if this assumption is being

made, it is not explicitly being stated. Since the consequences of the

distinction between a determined and an indeterminate reality are so great,

this may be a very fundamental issue.

Certainly, the typical scientist of the 19th Century was looking for

Cause-Effect relationships which could be formulated into Laws.

With the advent of the discovery that the process of measurement

introduced imprecision into observation, the search for absolutely definitive

results in science very quickly lost ground. As soon as statistical procedures

were introduced to scientific exploration, the substantial increase in

the precision of the resulting observations soon revealed that the then

current theories did not account for the totality of the observations.

It was Darwin's evolution theory which was the first clear statement

of the possibility that a single process might produce more than a single

product. Such a possibility can only happen in a contingent environment.


In a deterministic environment, a single process should always produce the same
product. Freezing water will always yield ice.

However, there is more than one

kind of ice, each of which is contingent upon different temperature and pressure

conditions at the time of freezing.

It was the realization of this conditionality present in physical
events which led Heisenberg to state his famous Uncertainty Principle.


It is this latter concept, that observation interferes with an event, making the

event itself uncertain, which is the doorway between 19th Century Deterministic

Science and 20th Century probabilistic science.

However, the old concepts are slow to die. The fact that scientists

began to realize that ABSOLUTE TRUTH may be inaccessible did not immediately

lead them to the conclusion that ABSOLUTE TRUTH might not exist!

In fact, this

latter conclusion still may not be of common currency, even though certain

other major theories such as Relativity Theory might achieve better formulation

if the indeterminacy possibility for reality were true.

Another problem with which both current science and current

philosophy must deal is Church's Hypothesis which suggests that there may be

properties within any logical system which cannot be deduced from within

that system. Since this hypothesis can be related both to Popper's concept

of refutation and mine of demonstration, I will return to it later.

In any event, the current interpretation of REALITY from the physical

sciences seems to have two properties. First, it is based upon "common

sense;" and second, it is "local."

By a "common sense" reality, we mean that it is:

1. Independent from MIND.

2. Inferable by abstraction from observation through
deductive/inductive reasoning,

3. Based upon a probabilistic multiple-cause/multiple-

effect patterning.

Reality is "local" in the sense that:

1. Events are proximally and not distantly linked.



2. Information can be transferred no faster than the
speed of light.

The concept of independence from mind is related to whether or not

events occur because of our desires rather than or in addition to our actions.

This is the classic "body-mind" dilemma of Greek antiquity. The apparent

extrasensory ability of certain individuals to bend spoons, move distant

objects through mental concentration or influence the growth of plants with

"good" thoughts would be classified as involving some form of trickery by

this approach. Of course, physical scientists have been working with only five
forms of force: mechanical, "strong", electromagnetic, gravitational, and "weak". The

possibility that weaker forces, undetectable with current measurement technology,
may exist has not been given serious consideration.

The need for abstractive inference comes from the Uncertainty Principle.

Our only available source for direct information, according to this view,

comes from observation. All other "knowledge" and specifically all MEANING

is a product of inference.

As such it is subject to inaccuracy from two


There seems to be some indication that the time-lag characteristic of radio
signals, caused by the speed of light, does not occur with ESP phenomena.

Many other instances of psychic events also seem to indicate "instantaneous"

communication. However, the distances thus far reached for such experiments

are not great enough to be conclusive. A radio signal from the moon

takes only about one and a quarter seconds. On the other hand, if communication

speeds greater than light actually occur, then at least in this sense,

our environment may not be completely local. That is, something could

happen remotely which would affect our environment before we could

detect the event which caused it by our present detection methods.

Typical of this sort of situation is the fact that we often need to look

in front of the source of the noise to see the jet aircraft.


This phenomenon occurs because light travels faster than sound.

There have been a number of occasions in the physics laboratories
themselves, which might be interpreted as challenging the

assumption of the localization of reality. Some particles seem to

disappear and reappear rather than move directly across the intervening space.

In high-energy experiments, sequences of changes sometimes would

make better theoretical sense if time went backward. These problems

may be rectified by better theories, but it is also possible these events

are the leading edge of as yet unknown energy forms with velocities

higher than light. They may represent the gateway to undiscovered
existences: existences which, when known and understood, explain psychic
and religious phenomena, and which link the Universe into a single unit in
the process of evolutionary transcendence of itself.

If this latter possibility turns out to be the case, as seems

to be implied by psychic phenomena in particular, then the Scriptural


"miracles" would have been and would continue to be a natural part of this


If reality is open, the possibilities may be unrestricted.

What we would need is to learn how to contact these forces and to use them.

For the best reason in the world, that I myself am a clairvoyant,
I am assuming that the non-localized openness I have just described is a
property of reality.

2. The Constructionist Conception of Reality

Personal experience from such an admittedly "unscientific"

source does not "prove" this openness. Fortunately, however, I do not

need to prove my convictions to use them in a philosophical sense in

order to explore the consequences these assumptions may imply.

Many aspects of the constructionist position can hold without

invoking mysticism. On the other hand, this concept of openness is going

to be very useful when we come to deal with inductive (or intuitive)

reasoning. The principal problem of the philosophy of knowledge (or

science) has been to explain how a thinker can escape the confining

constraints of any deductive system in order to discover "new knowledge."

Such an event is not logical, but there is no question about the fact

that it happens.

If some of us learn how to contact some unifying force

which exists outside of the confines of our "local" reality, then this

event is easily explained. If this ability, like singing, is a talent

but can be learned to some extent by most people, then the implications

of this possibility for education are tremendous.

Thus upon the basis of both logical considerations, that is,

the need to explain how it is possible for discovery sometimes to transcend

logic, and upon "frontier" evidence which is admittedly open to alternative
interpretation if not outright challenge, I can now present my "constructionist"
view of "reality."


I am assuming that the reality in which we live is a complex
web of interconnected force fields. I am also assuming that it is somewhat
indeterminate, and not entirely a closed system. In addition, some

process in addition to "common-sense" is needed to explain how we get to

know this reality, a process, I am assuming, in which the "mind" and
"reality" are not entirely independent. These exceptions to the current

conception reported above are probably marginal, and in ordinary conditions

represent "weak" forces which may not yet have been detected by current

instrumentation technology. On the other

hand, the implications of these

assumptions make possible an alternative conception of our existence

with possibilities which are vast.

To utilize these possibilities, we need an approach to the

generation of knowledge which makes them available to us. We can now,

therefore, turn to our discussion on the nature of knowing.

C. The Nature of Knowing

The problem of knowing can be considered to refer to

the way in which meaning is derived from our attempts at the observation

of events.

The distinction which needs to be made is between the actual

occurrence in the environment and the manner in which we interpret that
occurrence.

To illustrate this problem, let us consider ourselves sitting

in front of our fireplace watching a log blazing away merrily. As we

enjoy the warmth and the dancing light patterns, we observe the flames

swirling outward from the log to lick upward toward the chimney. Sparks

fly off as little glowing missiles with a heart-warming pop and crackle.

Slowly the log diminishes in size, being reduced finally to a small pile

of light white ash.


Obviously, the process of burning causes the log to become smaller

and lighter.

On the other hand, if we talk to a chemist, he or she will

tell us that these perceptions are deceiving. The log actually vaporizes and

oxidizes, which means that the end products of the burning of the log are

larger in volume (as invisible gasses in the atmosphere) and heavier in mass (since oxygen has been added to the original mass.)

If we realize that the popping and crackling and the flying

sparks are the result of the uneven vaporization of the irregular structure

of the wood, then we can "see" the chemist's interpretation. These events

may be regarded as small explosions.

Thus the "change of state and

structure" interpretation becomes the BETTER interpretation, even when

the "vanishing" interpretation is more visually compelling.

The point to be made from this example is that it may be the

specific aspects of an event which a person chooses to observe which

determines the interpretation applied. Other examples abound. The

average person puts up their hands to protect their face, knocking the

volley-ball down, instead of stepping back in order to strike the ball

from below. It is quite possible that most of our "natural" interpretations
and responses could be improved with specific training. However,

to achieve this improvement of performance the specific properties of

the BETTER performance must be known, AND the appropriate teaching

procedures must also be known. The breaking of the four-minute mile
may be more a matter of improved training procedures than of athletic ability.


The advent of the scientific revolution which produced the

Industrial Revolution had its origins in the realization that appearances

may be deceiving. Systematic observation designed to help researchers to

select the most appropriate possibilities from a set of alternative


interpretations has proven to be far more effective in the derivation

of viable interpretations than the more commonly used direct interpretation

method. The discovery that "common wisdom" was often incorrect produced

a great deal of excitement among these early scientists. It seemed that

the TRUE NATURE of reality might be revealed using these new methods. The

tendency of scientists to talk in terms of Laws and Principles is evidence

of this belief that they were revealing God's Processes.

This replacement of the classical viewpoint in which reason was

seen to be more useful than observation led to considerable rethinking of

the philosophy of knowing. The need to test opinions using observation

became clear. The regularity of patterns which were being observed helped

to maintain the value of deduction as a basis for structuring knowledge.

The deductive model in use was binary (statements were either True or

False). Hence it was tacitly assumed

that science was in search of the

"right" answers. This tacit assumption was equivalent to assuming that

the underlying reality was pre-determined and closed. Such was the status

of science in the middle of the 19th Century when Darwin published his

On the Origin of Species.

As already indicated, the evolution theory produced the surprising

result that a single process (natural selection) could produce more than

one effect (all of the living and extinct species.)

This outcome was not

the only attack which was occurring on the Absolutist position. Experiments
into how much metals expand with different amounts of heating

showed inconsistent irregularities. It was found that these patterns

become very near to straight lines if the irregularities could be considered
to be produced by errors in measurement. When the results were


averaged, the averages were even closer to straight lines. This

improvement seems to support the assumption that measurement error contributed
to the problem. Measurement errors were being found elsewhere.

Galton found these in systematic differences in the response times of

people. The interesting thing about all of these "errors" was that

their distribution pattern closely approximated a new curve recently

discovered by Gauss, which has since become known as the "normal"

(or bell shaped) curve. With this discovery, mathematical statistics

was born.


From this point on, scientists have used the procedures which

arose from this finding to try to fit mathematical models to sets of

observations. The overall variability of the observations can be

determined before and after the model

is used.

The amount that the

variability is reduced when the model is used is considered to be the

"explained variance". The rest is considered to be unexplained, with a

part of this unexplained variability to be taken as "measurement error."
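The before-and-after comparison described above can be sketched numerically. This is a minimal illustration with invented data, not the author's own procedure: the variability of a set of observations is computed, a straight-line model is fitted by ordinary least squares, and the fraction of variability the model "explains" falls out of the residuals.

```python
import random
import statistics

# Hypothetical observations: a linear relationship plus "measurement error".
random.seed(0)
xs = [i / 5 for i in range(50)]
ys = [2.0 * x + 1.0 + random.gauss(0, 1.5) for x in xs]

total_var = statistics.pvariance(ys)  # variability BEFORE the model is used

# Fit a straight-line model y = a*x + b by ordinary least squares.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
sxx = sum((x - mx) ** 2 for x in xs)
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
a = sxy / sxx
b = my - a * mx

# Variability AFTER the model: what remains in the residuals is "unexplained",
# part of which would be attributed to measurement error.
residuals = [y - (a * x + b) for x, y in zip(xs, ys)]
residual_var = statistics.pvariance(residuals)

explained_fraction = 1 - residual_var / total_var
```

With data this clean, the explained fraction is high; for the educational models criticized below, the same computation yields roughly one fourth.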

It is not enough that a model fits a data set very well.


The model must also fit other experiments of the same sort fairly well too.

In addition, it is necessary that the language statements in explanation

of the model fit the pattern of that model.

In addition, the explanation

must also explain results from other experiments which may not be directly

related to these. And finally, the resulting explanation must be able

to predict outcomes not yet observed, but which occur nearly as expected

when the appropriate experiments are conducted.

The present generally accepted model for education (the use

of the general linear model upon aggregates of answers deemed to be

"correct") typically explains about one fourth of the variability and

does not meet the other criteria just given very well either.


In response to these dramatic changes in view-point, philosophers

of knowledge (or of science) have taken a variety of positions over the

last 100 years. The first big change was the adoption of an evolutionary

stance, then relativism was added. Noting that diversity seemed to enrich

outcome prospects, a movement toward pragmatism developed. The realization

that the methods of verification were a critical element in interpretation

led to the positivism movement. Even more recently, partly in reaction to

the apparent mechanical aspect of mathematical models and partly in

response to the recognition of the intimate link between the contents of

an observation and its interpretation, the existential movement emerged.

Each of these movements was found to be wanting in some way.

The evolutionary model seemed to explain fairly well the diversity of

cultures among people but ran into problems with both the statistical

probability issues and with the sometimes sudden changes which have occurred

when the history of a single culture is considered. The relativist

approach ran into criteria problems.

What, for instance, is "good" in a

relativist position? The pragmatists also had the criteria problem.

Positivists ran afoul of the inability to obtain complete verification.

And the existentialists had difficulty separating the actual event from

its interpretation which gave them validation problems.

In all of these approaches, some form of movement from the

observation to its implication was required. Although the final interpretations
could be stated in such a way that a deductive chain downward

to the observation could be established, getting to this interpretation

initially was difficult to treat as a deductive process.

With the development of communications theory, it was realized

that the exceptional event contained more "meaning" than does the "normal" event.



This concept is a little puzzling until we realize that the

common event on a telephone line is "background noise" and the exceptional

events are the systematic irregular modulations of the human voice. As

a parallel development to this, Popper realized that the sudden leaps

which the evolutionists could not account for, the criteria problems and

the verification problems could all be dealt with DEDUCTIVELY in an event

which REFUTED current theory. The rules for verification did not seem to

have universal application, but refutation was always complete and absolute.

These events in the history of science have been both exceptional and

very powerful. A new philosophy of science, built upon disconfirmation

rather than verification was born.

Popper and his followers seem to believe that they have resolved

the deductive-inductive controversy as well because the disconfirmation

process is clearly a deductive one. However, I disagree.

In order to

produce an hypothesis which is disconfirmable, it must have particular

properties which reflect the dilemma proposed in the Church Hypothesis.

In order to produce such an hypothesis we must step outside of the

mutually reinforcing characteristics of a normal deductive system to an

event in the system which cannot be explained by the system, or to an

event which should occur in the system but does not.

To take this step

requires an inductive process in the formulation of the hypothesis to

be tested, in the same way that all hypotheses are generated inductively.

What it does do is to remove the inductive overburden from the interpretation
process of the results.

In mathematical terms, if reality is a lattice, then the disconfirmation
approach speeds the process of finding the boundaries of

the various subsets. Popper's model has not eliminated the need for

creative insight as a totally deductive model would have done. What it

has done is to provide


us with a very powerful approach to the testing

of creative inventions for their level of insight without the danger of

getting trapped into a set of self-confirming hypotheses. This procedure

involved testing the outliers on the presumption that exceptional events

carry the most meaning.
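The boundary-finding image above can be made concrete with a toy model. This sketch is my construction, not Popper's: candidate hypotheses about a hidden threshold form a set, and each observation that refutes some candidates narrows the admissible subset, without any self-confirming step.

```python
# Candidate hypotheses: the hidden threshold is some integer in [0, 100].
candidates = set(range(0, 101))
hidden_threshold = 42  # the unknown "true" boundary (assumed for illustration)

# Each trial applies a stimulus; the event occurs iff stimulus <= threshold.
stimuli = [10, 60, 30, 50, 42]

for stimulus in stimuli:
    occurred = stimulus <= hidden_threshold  # the observed outcome
    if occurred:
        # Any hypothesis placing the threshold BELOW this stimulus is refuted.
        candidates -= set(range(0, stimulus))
    else:
        # Any hypothesis placing the threshold AT OR ABOVE it is refuted.
        candidates -= set(range(stimulus, 101))
```

Each refutation is deductive and final, but choosing which stimulus to try next, ideally an outlier near the suspected boundary, remains the inductive, creative step the text argues cannot be eliminated.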

1. The Constructionist Theory of Knowledge.

More than a dozen years ago I attempted to set up a mathematical

model for the concept "experience." At

that time I was not particularly

successful, but many of the ideas I had then still seem to me to be valid.


I propose here to rework those original concepts. In this

section I will describe my

ideas in words and in the next section I will

attempt to do the same thing symbolically. Since this is the first major attempt

to present this theory publicly, I would appreciate constructive criticism

of both the ideas and the symbolic structure.

To begin with, the approach is cast into the "reality model"

already proposed.

Space as we know it

seems to be filled to varying

densities with a web of interconnected force fields. The location of

points of density would fit a probability pattern rather than being

precisely defined. Aggregates of the more dense of these fields would

be what we perceive as objects.

Some of these objects can emit certain energies in various

ways, some can transmit energies with greater or lesser efficiency

(in the sense that efficient transmission involves low distortion)

others can reflect energies and still others absorb energies. Put more

simply, perhaps, the physical properties of the objects in the Universe

are diverse rather than uniform.

We will consider the emission of energy to be an action,



and the results of that emission to be an event.

The most common form of

event involves some form of change in the energy as it travels, and often

changes in objects occur as well,

I will try to clarify these

concepts with an example. We have

a ball at the top of a flight of stairs.

A baby gives it a push and it rolls to the edge and bounces down the stairs.

From the stationary

position, the first cause is the mechanical energy of the push from the

baby. In this case the ball receives a directional momentum which causes

it to move.

The friction between the

ball and the floor absorbs some of
this momentum, and the result is that we observe the ball rolling

(changing its attitude or physical orientation in space) as it moves.

At the edge of the top stair it loses its friction and the

under-plane which prevented the action of gravity. It does not, however,

lose its spin.

Now in a falling attitude, gravity adds

to its momentum

and it drops to the first tread.

Upon impact a new set of forces manifests,
derived from the elasticities of the impacting surfaces. Each is

deformed by the impact, absorbing some of the energy and using the rest

to recoil into its original shape.

This recoil projects the ball into

the air nearly as high as the original level. The forward momentum

carries it far enough that upon the second bounce it has further to fall.

With further to fall, the forward momentum has longer to work, so it may

miss the second step completely and bounce on the third.

And so on to

the bottom where it will continue to bounce about

until it is stopped

in some way or until both the vertical and the horizontal momentum have

been used up.
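The bounce sequence just described can be sketched with a few lines of arithmetic. The numbers here are assumptions for illustration only: each impact absorbs part of the vertical energy, so the ball recoils to "nearly as high as the original level," a little lower each time.

```python
# Toy model of successive bounce heights (illustrative values, not measured).
restitution = 0.9  # fraction of vertical speed retained per impact (assumed)
height = 1.0       # height of the first drop, in metres (assumed)

heights = [height]
for _ in range(5):
    # Rebound height scales with the SQUARE of the retained speed,
    # since h = v**2 / (2g) and v_after = restitution * v_before.
    height *= restitution ** 2
    heights.append(height)
```

After one bounce the ball reaches 81% of its starting height; the remaining 19% of the vertical energy was absorbed in deforming the surfaces, exactly the "absorbing some of the energy and using the rest to recoil" described above.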

We do not usually think about a ball bouncing down the stairs

in terms as complex as this.

In fact, if the absorption of the momentum


involves the breaking of a favorite vase, the thoughts we have may be

deflected from the ball completely.

Nonetheless, action-object-event relationships tend to be

formed in the manner just described, and sometimes large portions of

them can be structured into an interrelated whole using mathematical

equations. If the direction and initial thrust upon the ball are precisely

known beforehand, the pathway can be fairly accurately calculated and
predicted. The further along the trajectory, the less accurate will be

our initial prediction. It is just such a combination of momentum,

spin and bounce which the champion billiards player uses to win.

Another important aspect of this issue is that changes in

the circumstance will change the outcomes. If the stairs were replaced

by a ramp, the

ball might not bounce in the vertical direction at all.

In effect, the outcomes are contingent upon the interacting properties

of the structure of the system in which the

event is occurring. This

qualification gives us a fourth dimension to consider: the dimension of

the conditional or contingent relationships among the other three.


It is this fourth dimension with which science is most concerned.

In fact,

scientists tend to write the physical properties of the actions and the

objects into the contingency statement and then attempt to predict the

event from the contingency vector.

In so far as these properties and their effects can be described

in mathematical equations, these predictions can be successful. More

important, for the purpose of controlling outcomes, the degree of accuracy

of prediction can be known. As already indicated, we achieve this latter

accomplishment by comparing the actual observations with the predicted

expectations and can, using statistical procedures, determine the degree


of accuracy of our mathematical model from the estimates of explained

variance these procedures give to us.
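As a sketch of this comparison, the proportion of explained variance can be computed directly from predictions and observations; the model and data below are invented for illustration.

```python
# Explained variance: the share of the variability in the observations
# that the model's predictions account for. Invented example data.

def explained_variance(observed, predicted):
    """V = 1 - (residual sum of squares / total sum of squares)."""
    mean = sum(observed) / len(observed)
    ss_total = sum((y - mean) ** 2 for y in observed)
    ss_resid = sum((y - p) ** 2 for y, p in zip(observed, predicted))
    return 1 - ss_resid / ss_total

observed  = [2.0, 4.1, 6.2, 7.9, 10.1]   # what we actually measured
predicted = [2.0, 4.0, 6.0, 8.0, 10.0]   # what the model said we would see
V = explained_variance(observed, predicted)
```

Here V comes out a little under 1.0: the model is strongly, but not absolutely, verified.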

In a deterministic and closed reality, absolute verification

should be possible, once we know all of the operating factors. The fact

that verification has proven to be very difficult, if not impossible in

our advancement of knowledge, has either the meaning that we are not

yet near absolute knowledge in any area, or it could mean that reality

is indeterminate.


In this latter case, absolute verification is not possible. On the other hand, in both cases, not only is RELATIVE verification possible, it already exists within the concept of explained variance.


It is more tidy logically to assume an indeterminate

reality, since this endows meaning to the concept of relative verification

without the complications of the requirements of "True-False" logic.

In addition to this, although disconfirmation is absolute for

the particular hypothesis tested, and this result may require a restatement of several other (and sometimes many other) hypotheses, the refutation does not disconfirm the variability already explained, although

the results may lead to an alternative explanation. Instead, the new

formulation must include what is already known, either as a special case

within a more comprehensive and powerful theory, or in a reformulation

which provides for a better explanation. Thus existing theories are

not merely replaced with "new" theories, they are replaced with BETTER theories.


In the situation where we admit relative verification, the

definition of BETTER is easily obtained. There are two properties of

the explained variance to consider. The amount of this variance differs

from situation to situation, not just because of measurement error, but


also because the impact of specific actions varies somewhat as the

contingencies are changed.

Thus a map of the theory showing all of its

language level components, will have differing average values of explained

variance for each part of the theory.

This set of values will have both

its own average and its own variability pattern. The average will be

greater than zero and less than one.

The position of this average in

this range, I will refer to as

the "strength" of the theory.

A weak

theory will explain relatively little variance and a strong theory will explain a good deal of the variance. As examples, I would cite current

education theory, which explains about one fourth of the variance as a

weak theory, and the oxidation/reduction theory for chemical reaction

as a strong theory in which nine tenths

or more of the variance is explained (at least for the environment of Earth).

The second property is the way these values spread out around

the average.

I will refer to

the size of this spread as the "efficiency"

of the theory. An inefficient theory will have a broad variability and

as such will admit both strong and weak examples. An efficient theory

will have a narrow variability. The same examples also illustrate this property.
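These two properties can be computed from a set of explained-variance values. The numbers below are invented to mirror the examples just given: a weak, inefficient education-like theory averaging about one fourth, and a strong, efficient chemistry-like theory averaging about nine tenths.

```python
# "Strength" = average explained variance across the components of a theory;
# "efficiency" = how narrowly those values spread. Invented V-sets.

from statistics import mean, pstdev

def strength(v_set):            # average V; always between zero and one
    return mean(v_set)

def efficiency_spread(v_set):   # smaller spread = more efficient theory
    return pstdev(v_set)

education = [0.10, 0.20, 0.25, 0.30, 0.40]   # weak and inefficient
chemistry = [0.88, 0.90, 0.91, 0.92, 0.89]   # strong and efficient
```

On these figures the chemistry-like theory is both stronger (higher average) and more efficient (smaller spread) than the education-like one.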


The relationships between two theories at the same level of abstraction will map at zero where they are unrelated, and positive for mutually exclusive hypotheses where these displace each other. Since

explained variance is a squared number, it is always positive, however

the mapping would probably be more easily interpreted if negatives were

used for the mutually exclusive relationships.

From this discussion it becomes clear how the concept of a

lattice might apply.

It appears that a specific set of contingencies


will apply within a rather confined zone in the lattice, with greater

applicability in its most powerful region and then fading off toward the

boundaries. Changes in the contingencies result, in many cases, in

some changes in the output events.

What science has been doing, then, is mapping this lattice.

As a result we can define scientific knowledge as representing a set of statistically supported logical propositions about the contingent relationships among actions, objects and events. This set is known not to account for all events and not to account for all of the properties of the events

which it does explain.

This last statement leads us to the central problem in the philosophy of knowledge (or science).

This problem is how to add to the

set of propositions in order to improve them. This problem, which is

the one I believe Popper was trying to address with his insightful

suggestions, arises because, as a deductive system, the existing set of

propositions is always mutually reinforcing and closed upon itself. To

add to the system, we must step outside of the system!

It is possible,

from within the system to add support to it by showing that certain events

which have not been observed but should be are, in fact, observed.


The best recent examples of this, to my knowledge, are the Josephson Effects related to superconductive phenomena at very low temperatures.

And of course, the exploration of the boundaries of known effects can

reveal unknown effects or unexplained changes in known effects.

In this

latter case, however, the development of propositions which explain

these events requires some sort of creative invention which then must

show that it is better by accounting satisfactorily for the observations

and improving explanation (by increasing either strength or efficiency


or both) elsewhere in the set of propositions.

By the very nature of this step outside of the deductive system, it can not be a deductive process. The term inductive has generally been used, but I would prefer to use the term "Intuition" for the subset

of inductive (or creative) inventions which are successful. My reason

for preferring intuition as the term is that, as a clairvoyant, it seems

reasonable to me to suggest that intuition follows one or more of the

remote lineages in this open reality I am assuming, to find an appropriate configuration for the necessary contrasting proposition(s). In layman's

terms, I am suggesting that new discovery is "inspired." I have already

suggested that I thought intuition was a talent like singing in which

most people can learn at least some skill.

I have another reason for using the term intuition. There is

a vast area of human experience which does not qualify as being part of

our scientific knowledge. Many of these experiences are related to areas

where theory from science is weak or non-existent but which, nonetheless,

have their own propositional systems and degree of event support. In

general these systems seem to involve beliefs which have not been subjected

to the same rigorous testing as have the scientific propositions.

It is

possible that some of them, particularly as related to the "psychic" may

involve energy forms which are not detectable at recognizable levels

using current scientific instrumentation. There is no reason to suppose,

however, that intuition has not operated in these areas as well.

In most

of these cases the theory base is usually very weak, often being only

at the early classification stage of knowledge development.

In this sense we may have a large segment of human knowledge

which is not part of the propositional set of scientific knowledge. The


scientist who disclaims this knowledge in contradiction of someone else's personal experience runs the risk of alienating that individual.

The experience is undeniable; it is the interpretation of it which

needs to be questioned in the same way that scientists now proceed. To

discount an experience on the grounds that it does not fit, or that

current theory would need to be changed to make it fit seems to me to

be the height of pedantry. Within the context of the constructionist

theory being put forward here, such an attitude would not be an admissible

proposition. The fundamental assumption (that of an indeterminate

reality) leads to the proposition that we can not "know" everything.

To insist that "science" is wrong on the grounds of tradition (such as

the Scriptures) or on the grounds of Incompleteness is equally inadmissible,

since right and wrong (true and false) in any absolute sense do not exist.


In either case, once we have agreed that the measure of the

strength of a proposition is in the variability it explains, the rejoinder to either confrontation is to be asked to be shown a BETTER alternative.

Within the context of the constructionist position, we design

our own reality, as the existentialists would have it, but we can also

know how accurate this reality design may be using systematic testing

procedures, the most powerful of which involve attempts at disconfirmation.

Thus we solve the criteria problem by applying a relativistic criterion

with known mathematical properties. This theory also proposes a solution

to the transitive dilemma by proposing that knowledge is deductive and

statistical within itself (rather than deterministic) and Intuitive with

respect to the process which adds successfully to this knowledge.


Intuition is not deductive, and may be either nonlogical or translogical.


I will reserve the term illogical for unsuccessful inductions. It may

have its own structural properties, some of which may already be known.

With respect to the educational implication of the constructionist position, a full treatment will be given in the last part of this paper. It is sufficient to say here that the current practice of

evaluating learner status upon the basis of accumulated "right"

answers would be seen as invalid because such "right" answers do not necessarily represent knowledge.


In addition, the current practice of concentrating upon the

transmission of current "knowledge" at the expense of teaching the

learner how to add successfully to his or her own knowledge would be

seen as inappropriate because this procedure would be expected to lock

the learner into the logic of the current systems by providing insufficient

practice in the exit procedures.

I will now provide a short attempt to express these same ideas

symbolically.

E. A Statement in Symbols of the Constructionist Position

We begin with our definition of "knowledge" as being an

hypothesis (h) in which a particular action (a) taken in the presence of

a particular object (o)

should produce a particular event (e) . These

hypotheses are at a level of direct linkage to observables, hence we

will consider them to be first order hypotheses. This relationship can

be expressed as follows:

    h: (a, o) → e
This hypothesis is then translated into a mathematical statement

which expresses a functional relationship (f) between or among particular

aspects of the action and the objects:

    h: e = f(a, o)
The mathematical model h can either be derived from the verbal

statement of the theory, or, more commonly today, from the data patterns

of the observations. In this latter case the natural language version

is derived inductively from the mathematical model.

In either case, the


model fits the observation to a determinable level (r²). For several

reasons, including the fact that explained variance estimates may not

come from regression equations, I will use the symbol V for explained variance.

The hypothesis is now tested, and if it meets the necessary

statistical criteria it will be supported to the extent that it explains

the variance in the observation data (V). We can, therefore, indicate

its degree of verification in our statement:

    h: f(a, o) → e, V

A second experiment in the same domain in which some aspect of the action (a1) is modified can now be conducted.

This new experiment yields a new hypothesis (h1) which can be expressed as follows:

    h1: f(a1, o) → e1, V1

In this case our new hypothesis is supported at a different level, with a modification of output. This second hypothesis can be "better" than the first one in several ways. To begin with, it may be that V1 > V, so that procedure a1 explains more of the variance in the outcome than procedure a. A similar statement will hold for the modification of outcome e1 from e, and gains in one may off-set losses in the other, whatever may be the source of the difference. Similar procedures can be used to map the effects of modifications of o or e. Eventually a whole set of actions which have a number of properties in common and a number of properties which vary will have been conducted upon object o, to yield a corresponding set of hypotheses and a corresponding set of verification values (V), or a theory about object o (h").

The strength of this theory will be the average (y) of the set of V values. This value will be greater than zero and less than one, and its relative magnitude in this range tells us how well the theory (h") accounts for the properties of the object o.


The set of explained variances (V) will have its own variance (s²), which will indicate how uniform in strength the propositions about object o tend to be. A large s² will imply that there is a broad variability in the strengths of various parts of the theory pertaining to object o. Such a theory would be inefficient. An efficient theory will admit exemplars of about equal strength.

Similar theories to h" can be generated for a or e.

In addition to this, third and higher order combinations of

hypotheses (h) can be generated about the relationships among the more

specific theories at the second level of hypotheses. In this case,

mutually exclusive event systems can also be accommodated by using the

symbol -V. These higher order meta-theories would be expected to integrate

and differentiate among components of theories to produce a lattice map

of tested and supported propositions from the first order level.


The determination of improvements in any theory H would be relatively easy using statistical procedures, since the tests for improvement involve tests of the comparative properties of the distributions of V and V1. The simplest tests would involve comparisons between means and variances of these two sets:

    Is y1 > y and/or s1² < s²?



Thus a theory of knowledge at the theory level can be translated

into statistical analysis among the verification (V) sets. Better theories

will be stronger and/or more efficient than their predecessors.

In addition

to this, this procedure may give us an approach to an analytical "theory

about theories."
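The "simplest tests" can be sketched directly. This is my own illustration, with invented verification sets; a serious comparison would use formal t- and F-tests rather than raw comparisons.

```python
# Comparing two theories through their verification sets V and V1:
# the successor is BETTER if it is stronger (higher mean) and/or more
# efficient (smaller variance). Data invented for illustration.

from statistics import mean, pvariance

def better(v_old, v_new):
    stronger  = mean(v_new) > mean(v_old)            # is y1 > y ?
    efficient = pvariance(v_new) < pvariance(v_old)  # is s1^2 < s^2 ?
    return stronger or efficient

v_old = [0.20, 0.30, 0.25, 0.15, 0.35]   # predecessor theory's V set
v_new = [0.40, 0.45, 0.42, 0.38, 0.44]   # successor theory's V set
```

On these figures better(v_old, v_new) holds and better(v_new, v_old) does not.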

Similarly, the quality of the meta-theories will be reflected

in the quality of the lattice maps they produce. The mathematics of this

higher order analysis is less clear. In fact, any of several procedures (decision theory, fuzzy set theory, etc.) may be useful, and in so far

as their outcomes are different, may reflect differing properties of

meta-theories.

Several conclusions are obvious, however. The ability of a theory to generate a hypothesis which is supported also supports the theory.

In a similar manner, the ability of a meta-theory to help to generate a

theory by identifying a "hole" in the map, in so far as this theory is

supported, will also support the meta-theory. Thus the internal logic

of the existing map

of our scientific knowledge can be (and is being)

used to lead to the refinement of this knowledge. Although such knowledge is "new" in the sense that the necessary experiments to support

the possibilities had not been conducted before, and such experiments

can create events which had not occurred before, or which had not occurred

in precisely this manner before; all such refinements are cosmetic.


It is at this point that the true power of Popper's position

becomes evident. This statement can be made because his position seems

to me to be about how to step outside of the logic of the map to discover

NEW knowledge which could not have been deduced from the map and which

will require its restructuring.

It is this sort

of knowledge which will

increase the amount of explained variance at the event level.

In this

sense, Popper's theory may not be so much a theory about knowledge as a theory about the discovery of new knowledge.


The most likely place to find points in our knowledge map

which contain events, which can be used to disconfirm a theory and through

it to require restructuring of the related meta-theories, is among the

outliers. These are the events in the system which are least like their

neighbours in the map. Thus we are back to the property of communication

theory mentioned earlier in which the most information is to be found

among the most exceptional events.
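This search strategy can be sketched as a simple screen for outliers: standardize each event's deviation from its neighbours and flag the extremes. The cut-off and residuals below are invented.

```python
# Flag the events least like their neighbours in the map: those whose
# residual lies furthest (in standard-deviation units) from the rest.

from statistics import mean, pstdev

def outliers(residuals, z_cut=2.0):
    """Indices of events more than z_cut standard deviations from the
    mean residual; these are the most promising disconfirmation targets."""
    m, s = mean(residuals), pstdev(residuals)
    return [i for i, r in enumerate(residuals)
            if abs(r - m) > z_cut * s]

resid = [0.1, -0.2, 0.0, 0.15, -0.1, 3.0, 0.05]   # one exceptional event
```

Here only the sixth event is flagged; in the terms used above, it is the point in the map carrying the most information.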

We can thus improve our search for new knowledge by using this deductive search strategy upon a well drawn map, but we will still need intuition to generate the alternative hypothesis we design to disconfirm the theory, and, in the process, generate an alternative (and BETTER) theory.


This venture into the symbolic approach to the propositions of constructionist theory seems both to clarify the concepts and to indicate the ways in which this model might be useful for the explanation of the generation of knowledge.

It would now be beneficial to present an illustration of how

this constructionist approach may work.

In this example, I should admit,

the theory under illustration is post hoc. I have designed it to account


for the paradoxical results of my research into the learning processes.

It is to this example that we now turn.



The schooling process is neither monolithic nor singular.

However, there are a number of features which most school settings seem

to have in common. Unlike the clinic or the sales

setting, the school is

usually characterized by a many-to-one rather than a one-to-one service relationship.


In addition, the person giving the service is assumed to

be different from

the clients in a number of important ways.

He or she

will have been selected to be a representative of the social and cultural

authority of the community to an extent greater than any other helper, except the police, is assumed to fulfill. Except for prisons and

asylums, the custodial role of the teacher will be greater than for any

other person engaged in social service. Finally, his or her role will

generally be more ambiguous than for any other social servant.

All of these features of schooling provide a curious mixture

of autonomy and constraint to the teacher's position.

This perplexing situation has developed over thousands of years

of tradition, with each new view of society injecting its own special

features, and each new advocacy group trying to hold the school accountable

for the group's interests rather than those of the clients or the school

itself. To steer a middle course in this situation has proven to be both

amazingly difficult and surprisingly easy. In the end, most advocacy

groups have either started their own schools, or have settled for the

teaching of the version of what is "right" which has been held by the

dominant culture of the day. The school has been left with a greater

responsibility for exposition of the culture, when such interests clash, than for its imposition.

It has become more important for students to be able to recite the current public doctrine than it is for them to understand it or to be able to use it.


The success of "science" over the last two centuries has

been the one single force

to have the most profound effect upon the

content of schooling over this same time period. Strangely, however,

science has not had the same impact upon the process of schooling. To account for this paradox, we need some historical perspective.

A. Schooling from the 19th Century Viewpoint

As has already been indicated, the view of science at least

until the latter

half of the 19th Century was that reality was both

deterministic and closed. This viewpoint was a carry-over from the

theological viewpoint of centuries earlier. Scientists saw their work

as revealing the Creation process, and the schools continued to use the

Scriptures as the basic textbook. As the secularization of the schooling

process progressed, books other than the Scriptures and religious tracts

began to be used in the schools. As we might expect, these new materials

were treated in the same context as though their contents were Absolute

Authority, in the same manner as were the previous religious materials.

It is not surprising, then, that the introduction of the new

scientific concepts which seemed to challenge either religious or social

authority would be denounced so vehemently. Nor is it surprising to find

that the same slavish insistence upon precise recitation of content should

continue into the use of these new materials.

If God did not say the

answer in the book was RIGHT then Science said it was, so that in a

procedural sense, the change of content made little difference. Building

upon a misunderstanding of the concept from the Humanists of Mental

Discipline, the content of schooling was seen as "exercising the mind".

The idea

that a


learner should be able to use what he or she

learned was

a separate issue, and usually reserved for the vocational school.

Another force was operative in North America. The settlers

who flocked from Europe in the closing decades of the last Century, and

the first decade of the present one, saw the absence of an hereditary aristocracy and the presence of education as an opportunity for them to

convert their children into a new aristocracy in this new land. For

this reason, trades training, which was seen as a route to the Middle

Class in Europe, was seen as a "second best" here. The Horatio Alger

version of the American Myth was that a person's ability to do was inherent; all he or she needed was an Academic education.

Hence the measure

of educational accomplishment in North America became the "ability to

pass tests."


B. Changes in the 20th Century

With the entry of the United States into the Great War, the

need to determine quickly and easily those people who might be officer

material became imperative. Under contract with the army, Otis developed a new type of test, the "multiple-choice" test. This type of test was an ingenious development.

It represented the forefront of psychological

thinking of the day. We must remember that Pavlov's famous dogs were

barely out of the news at that time, and Watson was doing amazing things

with his puzzle boxes. The Associationist theory was in its heyday. What

more natural, then, than to determine which "correct associations" were in place by supplying these in two parts, with the second part hidden

among a jumble of possibilities.

The person who had the correct associations available, would,

of course, recognize them immediately. Those who did not would need to


resort to "trial and error" and would be expected to get the answer

"correct" in this latter case by accident only.

Except for the use of

a "correction-for-guessing" the "wrong" answers could be ignored. For

the purposes for which these tests were intended, they worked amazingly well.


North America, always a lover of gadgets, was launched upon a new

technology of the assessment of educational achievement.
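The "correction-for-guessing" mentioned here is conventionally computed as rights minus wrongs divided by the number of distractors; the sketch below uses invented scores on four-choice items.

```python
# Classical correction for guessing on k-choice items: a purely random
# guess is right 1/k of the time, so wrong answers are assumed to signal
# guesses spread over the (k - 1) distractors.

def corrected_score(right, wrong, choices=4):
    return right - wrong / (choices - 1)

score = corrected_score(30, 12)   # 30 right, 12 wrong on 4-choice items
```

A student with 30 right and 12 wrong is credited with 26: the 12 wrong answers imply roughly 16 guesses, 4 of which would have landed on the right answer by accident.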

As we shall see shortly, the logical assumption that these tests

were measuring associations has turned out to be an incorrect description

of the psychology of test taking behaviour. However, this invalidity

was not realized, and the convenience of these tests led to their

proliferation. It was soon discovered that there was a general tendency for scores on these tests to increase with age, and that careful construction of these tests tended to lead to cross-sectional score patterns

which approximated the "normal" distribution. This latter observation

suggested that the statistical procedures which had proven so useful in

physics and were now doing amazing things toward a more scientific approach

to agriculture, could be applied in education as well.

From the fact that both achievement and intelligence seemed to

be normally distributed, Dewey inferred that score differences implied

differential learning rates, and the Progressive Education Movement (i.e.

that learners should move through schooling at their own pace) was born.

The lower than desirable correlations between intelligence scores and

achievement scores was seen to be "measurement error" in a situation (with

human subjects) which was notoriously diverse and unpredictable.

For the past 40 years large quantities of educational research

have been conducted, with amazingly little progress in the improvement

in the process of schooling. Differences among various teaching procedures,

which would be expected to show differential effects, have not demonstrated


these effects in these research projects.

Class size did not, as would

be expected from "common wisdom", seem to have any effect upon learning

outcomes. Similar negative results have come from special intervention programs, from the number of years of teacher training, from the methods of teacher

training, and on and on.

As all of this was going on, these "new" tests were becoming

more and more ingenious, with methods for exploring very complex tasks

being devised. But when we tried to open the educative process to a focus upon more than memorization, the results seem to be a decline,

rather than the expected improvement, in test scores. And unless the

objectives for instruction were very carefully defined and very

carefully taught, the best we seem to be able to achieve is an

average explained variance (y) of about 25 percent. Whatever we seem to be doing, we seem to have a very weak theory about the

educational process.

With the advent of computers, and their easy availability, and

with the rapid improvement of mathematical procedures into complex multivariate approaches, refinements to current theory seem to be leading to

the strengthening of this level somewhat.

This example illustrates very well how a deductive system is closed upon itself, and why it is necessary to step outside of the system

to improve such a theory substantially. My research has made the culprit

very clear.

The problem with our theory is that we are still using the

scoring procedure invented by Otis.

We assume that the right answers possess the only meaning, that

all wrong answers and some right answers are selected upon a "trial-and-

error" basis, and

since we

do not know which

right answers were achieved

by "guessing", the total number of right answers will be more stable and,


therefore, more meaningful than which questions were answered "correctly"

or even which answers were selected. We are apparently using all of

these complex and powerful statistical procedures upon a data set (the

total-correct scores) which is constructed upon a set of invalid assumptions.

Otis is not to be faulted for this amazing event, since he


used the very best available psychological theories from the turn of the

Century, which was when he did his work. Nor, apparently, can we find fault with the few other people who tried, occasionally, to challenge these assumptions. Most of them have been locked into the "True-False" logic we are discussing in this paper, and so they tried to modify the way we achieve a TOTAL score; or, when they tried to look at wrong answers as independent (nominal) decision events (as Darrell Bock did), they made inappropriate selections of items to test their assumptions, inappropriate for making a "breakthrough".


This is why the intuitive leap is

so important.

The present theory will be a special case of the new theory,

if we choose material in our testing of the new theory which is more

representative of the "special case" than of the general situation, then

we will not get conclusive results. With each of us working on several

agendas, it is easy to discontinue a search prematurely.

It has taken more than 15 years of determined exploration before

I could unlock this door.

Now that it is unlocked, the information revealed has profound implications to education, to learning theory, and

through learning theory to the philosophy of knowing.

C. The Constructionist Theory of Learning

From the constructionist point of view there are two sources

for knowledge. These are:

1. Tradition


2. Exploration

I will use the term Tradition to include both current "scientific"

knowledge AND our accumulated "conventional wisdom". The essential

difference between knowledge which is "scientific" and knowledge which is "conventional" is that for the scientific knowledge the V (verification)

levels are known, and for conventional knowledge these levels are unknown.

Since we are assuming that ABSOLUTE TRUTH does not exist, both types of

knowledge must necessarily contain "error" (unexplained variability.)

Convention is weaker than science only in the sense that it is untested

by scientific methods.

Its "strength" in the terms of this position

merely awaits testing. Experience is not "denied" in this context, instead

it needs to be explained.

It may turn out that the traditional interpretation has a V level of the "vanishing log" theory of combustion. It

also may turn out to have

the status which led to the discovery of

digitalis. Beforehand we can not make a valid judgement, after the

validation experiments we have added to our knowledge.

The concept of exploration I will reserve for most current scientific activity and for all other pursuits of curiosity which might

reasonably be called "systematic." I will suggest that there are two

kinds of exploration. These are:

1. Assimilation

2. Accommodation
I have borrowed these terms from Piaget because his model for

these processes seems to fit the concepts I am proposing. In the assimilation type of exploration, the reasoning is assumed to be essentially

deductive, with little or no inductive or intuitive component.

In its

simplest form the Plans for the TOTE (test-operate-test-exit) suggested years ago by Miller et al. seems to fit.

In the example they use, a person wishes to nail a board into place.

He tests the present

situation to determine whether it conforms to the end state desired.


If not, he nails it in place (operates) and then tries the observation test again.


If the desired condition is now satisfied, he goes on to the next

task (exits). Of course, there are smaller TOTE units possible.

He can, for instance, hit the nail once; if it did not go all the way, he will hit it again, and so on.

These smaller units form a series of internal transitional loops as the process develops from beginning to end. That

is, the TOTE units are

recursive (repeated over and over on new data) and

are single looped.

It is this kind of behaviour to which I believe

Argyris and Schon are referring when they talk about "single loop"

learning.
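The single-loop character of a TOTE unit is easy to see in code. The sketch below is my own rendering of the nail-hammering example; the depths and the increment are invented.

```python
# A single-loop TOTE (test-operate-test-exit) unit: test the mismatch
# between status and goal, operate, test again, and exit when they match.
# The loop never steps outside its own criterion of success.

def tote(nail_depth, target_depth, hit_strength=0.3):
    """Drive the nail until it reaches the target depth; return the
    number of operations (hammer blows) performed."""
    hits = 0
    while nail_depth < target_depth:   # Test: does status match the goal?
        nail_depth += hit_strength     # Operate: hit the nail once more
        hits += 1                      # ...then loop back to Test
    return hits                        # Exit: go on to the next task
```

Whatever happens, the unit can only hammer; it cannot notice that a different tool, or a BETTER procedure, is available. That is its self-sealing character.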

Single loop behaviour will accomplish a vast number of tasks.

It has a weakness, however. The operation step functions only upon a

very limited set of data, the observed mismatch between current status

and goal.

In this sense it is "self-sealing" in that it remains within

the deductive system and the perceptual system wherein it was formulated.

It will do the job it is intended to do within the V level of the establish

ed procedure.

It will not lead to a BETTER approach.

Of course, in hammering nails into boards a "better" approach may not be needed. The V level of that action is already high.

It also seems to me that this pattern is equivalent in its essential details to Piaget's concept of assimilation. Piaget discusses assimilation in terms of the resolution of equilibrium mismatch by a procedure already well established in the learner's schema (or cognitive map). The TOTE unit can be used in "purposeful" behaviour where disequilibrium may not be involved, at least in an organic sense.


Another similarity which strikes me is the procedure which decision theorists refer to as "direct process" solutions. The best example of this concept I have encountered relates to air traffic control. The radar scan and strings of coordinate numbers both will give the controller the same information about the location of aircraft in the approach pattern to an airport. However, the scan already organizes these

data into a two dimensional visual display. This latter advantage makes

direct process solutions to traffic flow problems easier. Mismatch is

relatively easy to identify and the TOTE unit can be employed without the

need to reorganize these data, or otherwise requiring an intermediating

step in the solution.

A TOTE unit can be used to "discover" new knowledge when the

possibility to achieve this knowledge already exists within the perceptual/

deductive system into which the unit is embedded. More likely, however,

would be its usefulness in the refinement of existing knowledge, particularly

about procedures.

On the other hand, if we agree to the constructionist position that it is the nature of the observation procedures we use which controls the data we collect, and it is the structural properties of the deductive system we use which determine how we interpret these data, it becomes easy to recognize how the assimilation process might lead to the self-sealing property of Model I behaviour which Argyris and Schon discuss.

In fact, research into perception reveals that it is highly selective, and

research into hypnotism (which may be a state of hyper-concentration)

reveals that we can quite literally "see" things which are not present and

"not see" things which are present.

In this case, all interpretations

we formulate which we allow to go untested carry the potential for high

levels of error. When it comes time to step outside of our current


perceptual/deductive systems, the assimilation process will not be of help to us.

To deal with this issue, Piaget suggested a second equilibration

process which becomes employed when the mismatch requires a change in the

perceptual/deductive frame of reference being employed. He gave an

insightful analysis of several protocols of children's oral language

from which he inferred that the driving force for what he calls the

"decentration" process (in which the child learns to identify reality

systems other than his or her own) is the discovery that others think

differently. This discovery, said Piaget, creates the need for the child

to defend his or her own thinking, a process which culminates in the

beginnings of two-way communication and the ability to begin to use

"logic" in reasoning instead of the "intuitive" processes of the "egocentric"

child. This second process Piaget called "accommodation."

I am defining the term "intuition" in a different way than

Piaget, hence the "Discovery" would be intuitive, in my usage, and the

"Intuitive-thought" (or Preoperational Thought) stage he refers to would

be seen here as the "rational" consequences of a set of undefined and

untested "animistic" assumptions which the child constructs in the early

stages of abstracting events from the observed transformations of objects.

In any case, the descriptive properties of these events would be the

same, whatever the terminology.

If we return to the TOTE unit, it now becomes evident that it needs to be augmented to fit this new concept. After the recognition of mismatch, but before operation in that context, we must take a lateral step into an alternative loop. This loop is investigatory, either for external additional information or for some directly observable data which was NOT part of the first perceptions from which the "mismatch" was identified.
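This augmented, double-looped unit can be sketched as follows. The sketch is my own construction, not a published algorithm: when the inner single loop stalls, control takes the lateral step into an investigatory routine which may return a revised test and operation, that is, a new perceptual/deductive frame, before the inner loop is re-entered.

```python
def double_loop_tote(test, operate, investigate,
                     inner_cycles=100, max_revisions=10):
    """TOTE augmented with a lateral, investigatory loop.

    The inner loop is the ordinary single-loop TOTE. If it fails to
    reach the goal, `investigate` is consulted for additional data:
    it may return a revised (test, operate) pair, or None to give up."""
    for _ in range(max_revisions):
        for _ in range(inner_cycles):         # inner single-loop TOTE
            if test():
                return True                   # Exit: goal satisfied
            operate()
        revised = investigate(test, operate)  # lateral, investigatory step
        if revised is None:
            return False
        test, operate = revised               # re-enter with a new frame
    return False

# A toy mismatch the original frame cannot resolve: the operation
# moves the state away from the goal until investigation revises it.
state = {"x": 0}
goal_reached = lambda: state["x"] >= 3

def wrong_way():
    state["x"] -= 1                           # self-sealing: never succeeds

def reconsider(test, operate):
    # Investigation reveals the operation was inverted; revise it.
    def right_way():
        state["x"] += 1
    return test, right_way

result = double_loop_tote(goal_reached, wrong_way, reconsider)
print(result, state["x"])
```

The single loop alone would repeat its faulty operation forever; only the lateral, investigatory loop allows the frame itself to be revised.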

























[...]

within a deductive system using only deductive procedures. I may not

have presented a counter-proposal which will convince him and others of

the need for this second process.


In either case, I am indebted to his

insights particularly with respect to the need to look at the exceptional

cases for the counter evidence needed for a "break-through".

I need not go too much further into the problems inherent in trying to solve problems with a technology which was not designed for them. Argyris and Schon have done this admirably in their development of the concept of a Model O-I learning system.

In it they suggest that a "Win-Lose Ethic" (in my terms, operation upon the assumption that a "right answer" exists) combined with an "unwillingness to offend others" has tended to stop the flow of information which is vital to the identification of the short-comings of current practice. Since we are inclined to rely upon tradition rather than science in our interpersonal relationships (that is, we have tended to operate without testing or even identifying our assumptions) substantial problems seem to be occurring in areas of our lives where

interdependences exist. The tendency of our assimilative behaviour to restrict the range of our perceptions and to distort our interpretations, coupled with our tendency to hyper-concentrate under stress, and for the ethic we live in to withhold vital information, can easily be seen as the source of such things as bureaucratic foul-ups.

Contrary to Peter's fascinating proposal concerning managerial incompetence, it may simply be impossible to manage a bureaucracy with the strength of the theories in current management science. The culprit may not be conspiracy or camouflage, but an attempt by everyone involved to be "right" when the concept of a "right answer" itself may be a fallacy. About a quarter century ago, Dexter referred to this property of our interpersonal ethic as "contempt for stupidity."


It is within this context that I will now turn to my research into the learning process as an example of how the constructionist philosophy may fit "reality" and may be used to help us to find a "better way."

D. The Constructionist Theory Applied to Education.

As I have already indicated, current educational theory has shown itself to be weak by the standards put forward for a "good" theory in our earlier discussion. What I now propose to do is to discuss my

research into the selection reasons for "wrong" answers as an illustration

of the constructionist position and as an explanation of how this position

has been developed.

I also indicated that the current scoring procedure for multiple-

choice tests reflects the early Associationist position held by the

inventor of this type of test. This procedure has remained virtually

unchanged for nearly three-quarters of a century. What I hope to suggest

is that: first, this procedure may be appropriate as a special case within a more general theory; second, that there exists in the same observational field a set of directly observable data which has been overlooked because

of the logic of the observational procedures in use, and third, that these

"exceptional events" serve to support the constructionist model both in

content and in impact.

More than 20 years ago I took a course in the design and construction of teacher-made tests. I started to apply this knowledge in

my secondary-school classrooms. Instead of putting an "X" for wrong answers into the hand-prepared tables for item analysis, as instructed to do, I put the letter of the specific "wrong" choice. This change in

procedure was not merely fortuitous, since I made the change quite

deliberately upon the basis that I might get more information if the

specific selections were not obliterated by the "X" convention. This was


the first of several "intuitive" steps I took as I proceeded.

While copying alternatives into my analysis sheets, I noticed

that there sometimes seemed to be patterns for selection among these

"wrong" alternatives. Most typical is the pattern I called "clustered." By this term I meant that in a well-defined item, when the people were arranged from low to high total score, I might get a string of "A's", then a string of "D's", then a string of "B's", and finally the "right answer" (the "C's", left blank in the table). There would be some, but not much, over-lap from left to right among the members of these strings. Much

more common, of course, was to have three of the answers clustered in this way and to have the fourth one mixed in, in what looked like a "random" pattern.
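The difference between the two recording conventions can be shown with a small invented table. The data below are hypothetical, shaped only to display the clustering; the real tables were hand-prepared.

```python
from itertools import groupby

# One item's selections, with the students already arranged from
# lowest to highest total test score. "C" is the keyed answer.
choices_by_rank = list("AAAADDDDBBBBCCCC")

# The traditional convention obliterates the pattern: every wrong
# selection becomes an undifferentiated "X".
x_convention = ["" if c == "C" else "X" for c in choices_by_rank]

# Recording the specific letter preserves the strings of selections.
runs = [(letter, len(list(group)))
        for letter, group in groupby(choices_by_rank)]
print(runs)
```

Under the "X" convention the table shows only a block of X's trailing into blanks; with the letters retained, the strings of A's, D's and B's become visible.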

In any case, I observed this phenomenon so frequently (I tried

to give a test in each

of my classes at least bi-weekly) that I was soon

convinced that these clusterings were not accidental. I will suggest

that this pattern recognition step was also "intuitive." The next step I

took was clearly deductive. I approached several of the people who gave

a particular wrong answer to find out why they chose it. As a Math/Science

teacher, it was easy to recognize from their explanation that they had

made a particular error in their solution of the problem. In effect,

specific wrong answers seemed to have DIAGNOSTIC value.
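A hypothetical arithmetic item (my own invention, not one of the original test items) illustrates how a specific wrong answer can carry diagnostic information when each distractor is the product of a known faulty procedure:

```python
from fractions import Fraction

# Item: 3/4 + 1/2 = ?  The keyed answer is C; each distractor is the
# result of a specific, recognizable procedural error.
a, b = Fraction(3, 4), Fraction(1, 2)

alternatives = {
    "A": Fraction(a.numerator + b.numerator,
                  a.denominator + b.denominator),  # added tops and bottoms
    "B": a * b,                                    # multiplied instead of adding
    "C": a + b,                                    # correct procedure
    "D": Fraction(a.numerator + b.numerator,
                  a.denominator),                  # ignored the unlike denominators
}

diagnosis = {
    "A": "added numerators and denominators",
    "B": "multiplied instead of adding",
    "C": "correct",
    "D": "added numerators over the first denominator",
}

# A learner who marks "A" has not merely missed the item; the choice
# identifies the faulty procedure to address in instruction.
print(alternatives["C"], diagnosis["A"])
```

In an item built this way, the pattern of letters chosen across a class locates the "tough spots" directly, which is the diagnostic use described above.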

Once I began to understand why certain "errors" were being made,

and which "tough spots" to deal with in my instruction, my students began


to average in the top third in the school. This ability I was developing, to identify why a learner was having problems, did not go unnoticed by the school's administrators. I was soon in charge of the school's special

education program. This was the first clear evidence I had that the

educational theory I was in the process of discovering was "stronger" than


current theory. That is, learners made larger gains when I used their

"wrong" answers to help me to determine how they were learning than those

in the classes of my colleagues who were content to stay within tradition

and to evaluate learner progress solely upon the number of right answers

they gave.

It should be noted that the diagnostic use of wrong answers remains within the context of a theory of reality in which "right" answers may exist. The "errors" I was detecting were genuine, being either procedural or informational.* A less common source of error, as my ability to write items improved, was errors of interpretation. At that time, I followed the traditional approach of assuming that frequent interpretation errors in an item were a sign of a "poor" item.

At this point I began my Doctorate, and freed from the content restrictions of Math/Science and the minutely programmed structure for special students, I decided to shift my attention to multiple-choice items in the Language Arts area. This deliberate choice proved to be my next "intuitive" step. In the language area I "discovered" that "wrong" answers could be selected, when the basis for selection was examined, for alternative reasons which were quite valid, but which DID NOT reflect upon the quality of the items in question. I also found procedural errors, and a small amount of low-level error like misreading.

The eye-opener, however, was the prospect that for those who were "over-reading" the questions, answers classified as "wrong" on an arbitrary basis from the deductive system implicit in the test could actually be logically correct when viewed from outside of that system.

It was at this time, also, that I first encountered Popper's writings; the relationships between his perception of the history of science and the observations I was then making were not missed. Further support for the value of wrong answers was found in this same study when I observed that the generalized quality of the reasoning which linked several different (factorially common) wrong answers was not only internally consistent within one group but replicated in another group to a level of nearly two thirds (.64) of the answers. This was the first time I had seen the explained variance go above 50 percent in several years of reading of educational research.

* I later began to use a "higher order" organization by observing the changes in frequency of the different types of procedural error, and used these data to sequence topics for instruction into a "developmental" order.

Using the learners' reasoning to classify answer patterns

instead of trying to formulate my own interpretation of the factors proved

to be another "intuitive" step.

So now I had a single finding, in the sense that I had obtained V levels for the first time, which showed, perhaps, that the V level might be improved by including "wrong answers" in the analysis, and that the arbitrary classification of at least some "wrong" answers as being ALWAYS WRONG might be invalid.

I then approached my Dissertation topic with more confidence.

I designed a test which had the distracters constructed upon

the basis of several logical principles.

In it I gave several reading selections and asked my students to display their verbal reasoning ability in this context. The test was thoroughly validated using a variety of procedures before my major study. Some of this validation work is in the literature.

In my major study, I used nearly 300 subjects randomly assigned

to two groups for cross-validation purposes. Once again using a variety

of multi-variate procedures to predict scores on two separate achievement

tests (one concurrent and the other delayed) I surpassed the 50 percent


explained variance on the concurrent test with the analytic group. Wrong answers were better predictors than right answers consistently throughout the study for all predictions and cross-validations. However, the concurrent cross-validation, though still respectably higher than .25, was nowhere near the analytic result.

Also, the internal classification I had set up so carefully on Freshman classes of about 500 students, using three separate administrations, fell apart with the summer school (adult) group who were my subjects for my final study. My examiners, for this and other reasons, raised the legitimate question that I may have been processing the "noise" in the system, as is typical of frontier research in any area. Here was a reasonable alternative explanation, particularly when also considering the fact that most of my cross-validations dropped by about one third of the explained variance. The very kindest thing which could be said about these findings was that the multi-variate procedures I was using were inefficient, producing a wide range of V levels for whatever inherent structure might be present among these data.

One other finding worth noting, which was out of pattern to the rest, was that when I arranged the right and wrong answers into a composite hierarchy and used that to predict the achievement tests, although explained variance was lower than for the rest of the study, efficiency was much higher.

If I had left it there, as is common for Dissertation topics, what I am now saying would never have been written. The test I wrote for the college level was too difficult even for high-school students, so in the next study I returned to the broad-spectrum comprehension test I had used in my first venture into science as I now define it. This time I used children from grades 3 to 8 inclusive (550 in all).


The fact that I had good stability of cross-validation with the composite scale suggested to me that there might be curved-line events in these data. To this end I clustered the wrong answers on the test by their modes of age level of selection and then used a version of the simplex to get a scale for the right- and the wrong-answer subtests in combination. To my amazement, the scaling gave me an order which exactly followed the age sequence, even though there were several subtests at more than one age level, so that the possibility of getting an exact distribution of this sort among 14 subtests is astronomically small.

Once again, wrong answers were better predictors than right answers of scores on a test taken 8 months earlier. When I went to curved-line predictions I came, once again, close to a V level of .50. Internal consistency estimates jumped from .76 for the right answers alone to .94 for combined answers. Using a non-linear procedure of plotting the change patterns for each of the subtests, I drew a developmental pattern, which was supported by the curved-line patterns from the predictions of the independent test.

However, the most startling of the findings did not come from the statistical analyses, but from the logical analyses of the reasoning protocols collected to help to determine the meanings of the sub-tests developed analytically. The most powerful single influence in the selection of "wrong" answers for this age group was the ways in which they interpreted the questions! The interpretations followed a pattern which was strongly reminiscent of the sequence described by Piaget. Many of the "wrong" answers were actually CORRECT within the thought schema of the age groups involved.

The second most powerful influence upon answer selection was the strategy of first choice used by the various individuals to solve these problems. The information base upon which these questions were built came in a distant third as an influence in answer selection.

Thus this replication of the Dissertation study not only supported the general findings with a different age group and a different test, it supported the possibility of an underlying curvi-linearity, which would explain the inefficiency of the linear multi-variate procedures used in the Dissertation study. Those findings were apparently NOT noise.

On the other hand, for this age group, the focus of instruction

upon information instead of interpretation seemed to be inappropriate.

It is not sufficient, however, to make such a revolutionary conclusion

upon the basis of only a single study, even one which replicated the

principal finding of another study in this manner. Hence, I embarked

upon a set of confirmatory studies with another administration of the

same test (with two administrations to verify by cross-validation) with

more than 2500 subjects, extending the age range to include secondary

schools and using non-parametric procedures.

To conduct these confirmatory studies, Popper's suggestion of

using contrasting theories was employed. The pattern which would be expected were the Otis scoring procedure valid was used as the base (linear) model.

Two curved-line models, one which assumed a single

developmental pathway, and the other which assumed multiple pathways

were contrasted. The multiple pathway model was clearly superior to

the single pathway model and the linear model was completely out of the

picture, showing only partial support from two out of 27 hypotheses.

As we can gather from these findings, and from the finding that

diversity increased with age, we have very interesting support for the

possibility that people design their own realities. These conclusions

are supported by the work of others from many areas of science: artificial intelligence studies, discourse analysis, and the theory-of-action perspective from Argyris and Schon. Since multiple interpretations are also inferable from these several sources, it seems reasonable that the concept of a "right" answer in any absolute sense, at least within the present level of our knowledge, should be abandoned in favour of a more mutually supportive and less personally threatening concept of searching for "better" answers.

There is another aspect of the overall pattern of the findings

which needs comment. If we include Bock's findings with vocabulary items, the improvement in explained variance shows an interesting progression. In Bock's study, which could be considered as involving items in which memory was the primary factor being tested, the improvement was negligible

for learners above the median. With the comprehension test I have used

for most of my studies, the improvement seems to be by a factor of two or

three. It may be that as much as three-quarters of the variance might

become available from better procedures than I am now using, but at the

moment we are not witnessing that much. Complex analytical processes as

included in my Dissertation seem to start with an even lower explained

variance for the linear case and to increase to nearly the same level with

the more complex analysis. This progression of deteriorating linear

patterns and increased improvement for non-linear approaches suggests that

the Otis system for scoring multiple-choice tests might be a special case which applies only when memory and/or recognition types of item are being employed.

If this is the case, then total-correct scores may well be

reasonable for a strictly information transmission approach to education.

There may still be some diagnostic value in the "wrong" answers even in

this case.

As we set items which are higher than the knowledge level of Bloom's Taxonomy, we may be creating the likelihood of producing items

which have more than one reasonable interpretation. In this case, the

more complex the item, the greater may be the curvi-1inearity.

There is still not enough evidence to assure this speculation,

but if true, the use of total-correct scores may be reinforcing teaching practices favouring memorization in order to get the most stable test scores. It could be possible that the Otis procedure not only has prevented

researchers from tapping more effective information sources, but it may have helped to maintain the Associationist theory of education in practice after its usefulness had been diminished by the development of more powerful theories, such as the one of Piaget, or those now coming from studies of artificial intelligence. I do not mention Skinner's work, because he seems to have ignored the non-target behaviours in the same way that other Associationists have done, so that his efforts represent a refinement of existing theory rather than an improved theory.

This self-sealing possibility for practice as well as theory development would further support the constructionist proposal. If memorization is the circularly reinforced procedure in our schools, with "being right at all costs" the tacit operator in education, then the "win-lose ethic" would have a powerful ally in the current schooling practice, and people would be getting far less practice in accommodation and far more practice in assimilation than would be appropriate for healthy or effective cognitive development. Model I as described by Argyris and Schon would be the logical outcome from this combination of forces.*

* Having taught and having worked with teachers for a number of years now, I have observed that teachers expect the tests to confirm the rank order of the class members which they have inferred from the day-to-day classroom responses. Teaching for "thinking" and measuring for "remembering" seems to produce incompatible orderings.

E. An Alternative Approach to Educational Practice

Let us assume the validity of the constructionist position. If two people hold differing views of a situation, and both positions have equal V values, the differences in perception which led to the different views are likely to mean that some aspects of the explained variances in the two differing perceptions may be expected to differ as well. As a result, the combination of the two positions should lead to a refinement of both positions.


If we look at the exceptional properties rather than the common properties of the two positions, and the reasons for these exceptionalities, we may be able to generate an alternative interpretation which accounts for much more of the situation. However, to do this we need to get the two people communicating about the tacit assumptions they are making about the situation.

Such an exploration should expose the valid parts of the inferences, and also expose the misinformation, misperceptions and inappropriate strategies used by both parties. This exposure will not be complete, nor without disagreements. But disagreements can also be resolved by exploration (including experimentation). In this way the participants can learn how to surface these tacit assumptions and to combine them, study them, test them. This approach can be generalized to accommodate alternative search strategies, explicit practice in the inductive processes and in testing them. That is, we could explicitly teach the skills of accommodation.

Of course, we must begin with the fundamental symbolic and communication skills, since to be able to explore concepts means that the learners need to be able to acquire concepts from print, pictures, tapes, etc., because class time will be occupied in working with concepts rather than acquiring them. My research seems to show that, at least at present, "right" answers are more important than "wrong" ones below the age of 9 years. By helping teachers who are not already using "wrong" answers diagnostically in these younger years to strengthen their teaching with this refinement of practice, we should have most learners reading fluently and spontaneously by the time they enter Grade 4. Part of the encouragement to read should be for the fun of it, and part of it could be encouraged by practicing the beginnings of these exploratory skills.

Children tend to come to school very curious. Concentration upon being right seems to discourage this curiosity by the age of 10. Part of this discouragement comes from the teacher's need to have the children sitting still while the "telling," the "reading" and the "filling in the blanks" is going on. It is unfortunate, but the debilitating force in our present system may be the need of the present approach to education to contain the energy of the young child to a level which the teacher can tolerate and still be in full control of all of the diverse activities in a classroom. For most children this need seems to have the effect of turning them off, even though it takes many of them several years to achieve this effect.

I am not recommending that children be allowed to run wild. On the other hand, we are faced with a real choice along with this dilemma. In learning to explore ideas with safety, children will also learn how to control themselves. In learning how to learn from teacher-dominated activities, the children need to learn how to accept and to be controlled by others.

In addition to this we currently get children to feel that they

must "be right", whereas the exploration of ideas can lead the child into

the skills needed to GENERATE KNOWLEDGE; to try to "become better."


The need to be right, to be controlled, and to be in control may combine to produce what Argyris and Schon call Model I behaviour. Whether or not the opportunity to explore ideas, to learn to create testable hypotheses and to test them, to translate these outcomes into effective behaviour productions, and to evaluate these productions will actually lead to Model II learning as Argyris and Schon suggest is not yet certain. Nor is it certain that replacing the "right-wrong" concept in education with a "search for better answers" approach will reduce the current "win-lose ethic" among the children exposed to this alternative.*

It is clear, however, that the definition for a "break-through" developed as part of the constructionist model is being satisfied by my "wrong answer" research. Demonstration that these findings actually represent a BREAKTHROUGH, and that the other inferred outcomes represent good "preliminary estimates" of the outcomes of a new and more powerful technology of education, may help to extend the philosophy of knowledge (hopefully made somewhat more explicit by the present attempt) as well.



In this paper I have put forward a philosophical position which

I have derived from several sources, but mostly from my research findings.

To begin with, I have discovered that a fair number of the answers selected on multiple-choice tests, which by current procedures are considered "wrong," are actually CORRECT. These paradoxical answers occur from differences in interpretation between the test taker and the test maker. These differences can arise from developmental patterns, from cultural differences, from profound understanding causing the "over-reading" of the question, and from differences in outcome from the solution strategies applied.

* Within-group rivalry seems to be less characteristic of other "social animals," particularly among the primates, than it appears to be among humans. In this context acquisitiveness seems to be "instinctive," but its human off-shoot of "win-lose" may be a learned behaviour.

Since these items do not show the typical patterns which are characteristic of "poor items," the patterns found suggest that the use of these so-called "wrong answers" may represent a powerful alternative technology for education. In addition, these findings raise serious questions about current scoring procedures and produce the inference that the concept of "right answers" as a singular viable goal for education may itself be a fallacy.


Several sources of evidence to support the idea that "absolute right answers" may not exist, in addition to this paradoxical research finding, are cited. The main evidence used was the concept of explained variance, which scientists currently use to estimate the amount of confidence we can put in the outcomes of a particular piece of research and the theories derived from such research. Since both measurement error and other factors may prevent perfect prediction, it is assumed that this level of support may be unattainable. There is also some evidence, of as yet uncertain status, that a small degree of indeterminacy may be a property of Reality itself.

From these several sources, the underlying Reality was assumed to be indeterminate. In this case, the knowledge we have cannot be absolute, and must be both relative and a construct built from the observation of objects as they transform. The contingencies for these events will be inferred and stated either as hypotheses, when the observations are systematic and made in controlled conditions, or as postulates, when systematic testing is not attempted.

The advantage of the hypothetical approach is that the level of explanation in this necessarily relativistic system can be known.


Existing knowledge, whether scientific or traditional, tends to be organized into deductive systems which purport to explain the "experiences"

of the events we have lived. Experience and reality may not be equivalent

because of selective

perception and because the deductive system will be mutually reinforcing to the perceptual strategies which give us the data

upon which our conception of "reality" is founded. Thus, the reality we

live is seen to be a construction of our experiences, our inferential and

our observational skills, with a known connection to actual events only in

the case of scientific knowledge. This statement is not to say that non-scientific "knowledge" has any less claim to being descriptive of experience,

only that it has not been subjected to the same rigorous testing.


Some of these experiences may not yet be testable because the necessary measurement technology is still missing.

It was also inferred from the tendency for the deductive structure

of knowledge to be mutually reinforcing to the perceptual content from

which it was derived, that some mechanism other than deduction is needed

to be able to step outside of such a system to generate "new" knowledge

as contrasted with refinement of existing knowledge. This process was assumed to be generally inductive in nature, and is called "intuition" when successful.

The evidence from psychic events was taken to suggest that there may be weak forces, which may exceed the speed of light, which serve to link remote points in space and which can be "read" in certain circumstances to provide information that is not available from immediate observation until after the concept has been formed and what to look for has been recognized.

A breakthrough in science was defined as an intuition which leads

to an alternative perception and which increases the explained variance or

narrows the range of the explained variance or both for the members of a

theory. Of the two inferred processes, assimilation and accommodation, the latter was the only one seen as capable of inducing a breakthrough. The two processes were described as being "single loop" (TOTE) and "double loop" learning, respectively.
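The single-loop/double-loop distinction can be sketched in code. This is a hypothetical illustration of the two control structures, not Argyris and Schön's own formulation; the function names and the bound on attempts are my assumptions:

```python
def single_loop(state, goal, operate):
    # TOTE (Test-Operate-Test-Exit): adjust behaviour until the fixed
    # goal is met; the goal itself is never questioned.
    while state != goal:           # Test
        state = operate(state)     # Operate
    return state                   # Exit

def double_loop(state, goal, operate, revise_goal):
    # Double-loop learning: when repeated operation fails to reach the
    # goal, step outside the loop and revise the governing goal itself.
    for _ in range(10):            # bounded attempts (an assumption)
        if state == goal:
            return state, goal
        state = operate(state)
    return state, revise_goal(state)

# Single loop: a reachable target is simply attained.
print(single_loop(0, 3, lambda s: s + 1))                  # prints 3

# Double loop: an unreachable target forces revision of the goal.
print(double_loop(0, 99, lambda s: s + 1, lambda s: s))    # prints (10, 10)
```

In these terms, assimilation corresponds to the inner correction against a fixed goal, and accommodation to the outer revision of the goal itself.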


Several examples of the related processes were given; most particularly, I described my work in "wrong answer" interpretation, which both seems to qualify as a "breakthrough" as defined by the constructionist philosophy and supports many of the substantive conclusions within this position.

Particular mention of my indebtedness to the work of Piaget, Popper, and Argyris and Schön was made, as this is related to the conclusions drawn herein.

The current status of these several ideas seems to be that the findings of my research point toward a substantial improvement in

the observational technology upon which educational practice may be based.

It has been general throughout history for major improvements in observational technology to be followed by corresponding advances in the related instrumental technologies. It remains to be seen whether these results will be followed by such advances. If they are, the conclusion drawn that these findings represent a "breakthrough" will have been supported.

As for the broader implications of the constructionist meta-theory, it is to be expected that it will be found to be mutually reinforcing to

the hypotheses derived from the educational context from which it has been

formulated. As a "better" meta-theory, it remains to be demonstrated that

this position can lead to the restructuring of theories in other areas

which also lead to an improvement in the explained variance. To these

ends comments and criticisms are welcomed.











Box 247, Boston University Station
Boston, Massachusetts 02215, U.S.A.
Telephone (617) 353-2578

Professor Jay Powell
408 May Avenue
Windsor, Ontario
N9A 2N4

Dear Professor Powell:




I am sorry to inform you that your paper, "Toward the Application of the

Constructionist Philosophy to Educational Practice," will not be published

by the FORUM. We operate under severe space limitations and we also

think that the FORUM is not the most appropriate journal for your

paper. These two reasons combined led us to our decision.

Should you care to submit any further work, I will be glad to review it.

Thank you for considering the FORUM.


Marx W. Wartofsky