Accting., Mgmt. & Info. Tech., Vol. 7, No. 1, pp. 1-19, 1997
1997 Elsevier Science Ltd. All rights reserved
Printed in Great Britain
0959-8022/97 $17.00 + 0.00
PII S0959-8022(97)00001-5
ACCOUNTING INFORMATION SYSTEMS AND ORGANIZATION LEARNING: A SIMULATION
Aris M. Ouksel
Roosevelt University
Peter Chalos
INTRODUCTION
Accounting Information Systems (A.I.S.) play a central role in organizational learning,
prompting claims that "the aim of the design of A.I.S. is quite simply to improve organizational
learning" (Emmanuel, Otley & Merchant, 1990, p. 371) and "to activate learning and
experimentation" (Senge, 1990, p. 253). Our interpretation of firm learning builds on three
classical observations of behavior (Levitt & March, 1995). The first is that a great deal of
organizational learning is based on routines (Cyert & March, 1963; Nelson & Winter, 1982)
which are environmentally conditioned (Fiol & Lyles, 1985). The second observation is that
organizations are goal orientated. Their behavior depends on the relationship between targeted
and achieved outcomes. The third observation is that organizational actions are historical.
Output goals are tempered by interpretations of the past, as organizations adapt incrementally
and learn from goal attainment in response to feedback about outcomes (Den Hertog &
Wielinga, 1992). The emphasis on routines and the ecology of learning distinguishes this
approach from alternative theories and closely resembles paradigms of individual learning found
in the accounting literature (Hammond & Deane, 1973; Harrell, 1977).
An interactive learning process of periodic planning, measurement, feedback and performance appraisal characterizes many A.I.S.: (1) goals are determined; (2) environmental
information is coded, stored and subsequently retrieved; (3) deviations of actual outcomes from
predetermined goals are recorded for future corrective action; and (4) goal attainment is
rewarded (Flamholtz, Das & Tsui, 1985, p. 39). Recent evidence from the accounting literature
supports this view of organizational learning. The most commonly cited applications include
project management reports such as path analysis for production scheduling; profit planning
systems for lines of business which report actual to forecast revenue and expense comparisons;
revenue budgets which analyze market share, volume, price, etc.; reports used to monitor
competition; and human resource planning budgets (Simons, 1995, p. 108).
A number of factors are critical in A.I.S. design. Organizational memory, whether tacit or
formalized, may be systematically coded but information may not always be readily available.
Organizations vary in the emphasis they place on formal routines. Goals and feedback, by
definition, are strongly conditioned by the environment in which the firm operates. Firms
operating in uncertain environments face challenges in implementing A.I.S. designed to promote
routinized organizational learning. As environmental uncertainty increases, organizations need
to adapt their A.I.S. in order to promote learning (Ouchi, 1977).
The impact of organizational complexity on learning is also critical in A.I.S. design
(Galbraith, 1977; Weick & Roberts, 1993). Organizational hierarchies and multi-divisional
responsibility centres shape information routines. The complexity of organizational design
usually precludes complete dissemination of accounting information to all divisional sub-units.
Typically, information is aggregated or compressed as it flows vertically up the organizational
hierarchy. Laterally, information is often selectively distributed across sub-units. The ecological
structure of the organization complicates modeling of the learning process. Senge (1990, p. 289)
argues that hierarchies often impede organizational learning, while teams more effectively
promote learning. Recent initiatives to flatten organizational structures, establish cross-functional teams, and increase electronic information sharing are in part motivated by a desire
to improve organizational learning.
In this study, we replicate and extend previous computational organization theory research
(Carley, 1992) within an A.I.S. framework. Specifically, we measure previously unexamined
effects of environmental information characteristics on team and hierarchical information
processing across segregated and overlapping A.I.S. Outcome effectiveness is measured in terms
of both accuracy and speed of organizational learning. As this is extremely difficult, if not
impossible, to capture in an organizational setting, a simulation is used. While the simulated task
is admittedly stylized, the methodology lays the groundwork for a set of empirically testable
propositions for future experimental and field research.
THEORETICAL DEVELOPMENT
Organizational learning
Lant and Mezias (1992) propose a learning model which accounts for patterns of change at the
level of the organization. Argyris and Schon (1978) distinguish between single- and double-loop
learning. This distinction can be thought of in terms of operational (know-how) and strategic
(know-what) learning. Single-loop learning occurs at the operational level because it concerns
processes, while double-loop learning occurs at the strategic level because it concerns the
definition of goals.
The effect of individual learning on organizational learning can be seen by analyzing the
impact of persistence of shared models on the performance of organizations. Others emphasize
the importance of individual learning only in so far as personal skills, insights and knowledge
become "embodied in organizational learning routines, practices, and beliefs that outlast the
presence of the original individual" (Attewell, 1992). This persistence of conceptual knowledge
is consistent with the notion of organizational learning adopted in this paper, namely "the
encoding of inferences from history into routines that guide future behavior" (Levitt & March,
1988).
Organizational learning, however, is more than the sum of individual learning of constituent
members. Rather, it is the ability to store data (in some type of memory), to recognize patterns
and rules from this data, and to build shared mental models across individuals and divisions of
an organization. A.I.S. play a critical role in shaping these routines. The notion of shared mental
models is central to work on continuous organizational improvement (Senge, 1990).
Organizational learning in this case is viewed as "improving actions through better knowledge
and understanding" (Fiol & Lyles, 1985) or as "increasing an organization's capacity to take
effective actions" (Kim, 1993) by using discovered patterns and rules to guide future decision
making. This requires constant monitoring of information at the individual as well as at the
organizational level, a role for which A.I.S. are ideally suited.
Environmental uncertainty and A.I.S. coordination
The environment facing the firm has been shown to have a significant impact on A.I.S.
design. Khandwalla (1972) was one of the first researchers to emphasize the importance of
environmental competition on the firm's information system. A similar conclusion was reached
by Otley (1980) who determined that budgetary systems were strongly influenced by the firm's
environment. Gordon and Miller (1976) found that heterogeneity of product lines was a primary
determinant of A.I.S. reporting frequency and sophistication. Another study, pertaining to
environmental uncertainty (Govindarajan, 1984) found no connection between budgetary
accounting systems and organizational learning effectiveness until the mediating effect of
uncertainty was considered. This considerable body of work relating environmental predictability to A.I.S. has been summarized in such works as Galbraith (1977), Mintzberg (1979) and
Pfeffer (1982) but "has yet to be fully incorporated into accounting systems research"
(Emmanuel et al., 1990, p. 62).
Earl and Hopwood (1981) distinguished environmental uncertainty in diagnostic cause-effect relationships from uncertainty in strategic objectives. When objectives are clear and
cause-effect relationships are well understood, decisions are programmable. Examples of this
might include an optimization function when constraints are known, or a discounted cash flow
analysis under relative environmental certainty. Rarely however do A.I.S. provide perfect
decision predictability. As the environment becomes less certain, Earl and Hopwood (1981)
argue that the accounting system "serves as a learning machine" from which probabilistic
inferences are made based upon system inputs. This would be the case for example when
budgetary revenues are compared to actual revenues for a new product, or when a post audit of
an investment in an uncertain environment is performed.
clustered across sub-units, learning how to recognize and integrate this diverse information
becomes problematic. Horizontal information transmission is much more vulnerable than vertical
transmission to the degree of local sub-unit expertise. The impact of a single front line analyst is
not diluted by the filtering action of layers in the hierarchy. Operating under much narrower spans
of control, hierarchies have the potential to recognize highly diagnostic sub-unit information to a
greater degree than flatter organizational structures with much wider spans of control.
Hierarchical information transmission in uncertain environments also directly affects the
speed of organizational learning. In flat organizations with relatively uniform sub-unit
environments, simple decision rules are presumably much faster than hierarchical decisions.
While decision models such as simple majority rule of sub-units are simple and quick, learning
how to distinguish where more important sub-unit information resides is much more
complicated. In such cases, under a flat information structure, the speed of organizational
learning will decrease in proportion to the number of sub-units. With different sub-unit
information, the wider the span of control, the slower the organizational learning. A hierarchy
with more levels, but progressively smaller spans of control at each level, should learn more
quickly albeit, as noted, less accurately in many cases.
Our study examined the relative speed and efficiency of organizational learning under flat and
hierarchical A.I.S. Within the flatter organizational structure, two decision rules were employed,
majority and expert teams (defined later). The information system was conditioned by its
environment. Three types of probabilistic information were examined: uniform, dispersed and
clustered information weights. In the uniform case, each organizational sub-unit's information
had the same importance relative to overall problem resolution. In the clustered case,
information varied in importance and information bits of similar weights were grouped together
by sub-unit. In the dispersed case, information bits also had varying weights of importance but
were randomly distributed across sub-units.
The success or failure of organizational learning when combining different information cues
to determine organizational outcomes depends critically, as argued, upon the coordination of
environmental information across sub-units. Organizational learning under flat information
coordination was posited to differ significantly from hierarchical learning as a function of the
environment. While previous research (Carley, 1992) has examined accuracy and speed of
learning under uniform binary bit information, the effect of clustered and dispersed probabilistic
environmental information upon organizational outcomes has not been examined. Accordingly,
the following general propositions were examined:
Proposition 1: Majority and expert teams learn more accurately than hierarchies under uniform or dispersed information, but not under clustered information.
Proposition 2: Majority teams learn faster than hierarchies, which in turn learn faster than expert teams, under uniform or dispersed information.
Proposition 3: Organizational learning under overlapping A.I.S. is more accurate than learning under segregated A.I.S. with clustered information, but there is no difference under uniform or dispersed information conditions.
Proposition 4: Overlapping A.I.S. decrease the rate of organizational learning relative to segregated A.I.S.
SIMULATION
Two aspects of A.I.S. are addressed: management reporting structure, "who reports to whom".
and task decomposition, "who has access to what information". Our approach is an extension of
Carley's (1990, 1992) experiential learning model (ELM) research framework for examining
organizational learning performance under various design constraints and operating conditions.
In previous work, Carley (1992) examined the role of hierarchy and overlapping information
under uniform information conditions. We replicate this and extend the model to environmental
uncertainty. Specifically, we examine how A.I.S. design is contingent upon the nature of
differential information available to individual agents.
Before explaining the model, the following terminology must be defined: (1) a bit of evidence: a single binary input which the organization gathers from its external environment, where each bit represents the presence or absence of some feature of the environment; (2) a task: the set of bits of evidence for which a single organizational decision must be rendered; (3) a subtask: a subset of the input bits of evidence; (4) an agent: a decision maker within the organization hierarchy, who converts input evidence from below in the organization structure into an output decision recommendation, which is passed up the structure; (5) an event: the process by which an input task is converted into a single organizational decision.
The model assumes that: (1) organizational decision making behavior is historically based;
(2) organizational learning depends on the bounded rationality of individual decision makers
who make up the organization; (3) individual decision makers analyze subsets of the overall
task; (4) subordinates condense their input data into output recommendations to their superiors,
and this information compression occurs at each node in the structure; (5) overall organizational
decisions do not require that a consensus be reached (e.g. a legitimate policy might be to let the
majority opinion rule); (6) the overall organizational decision is two valued (e.g. go/no go); (7)
each intermediate decision is similarly two valued; and (8) the organization faces quasi-repetitive integrated decision making tasks.
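The terminology and assumptions above map naturally onto simple data structures. The following sketch is our own illustration, not the paper's code; the names `Agent` and `bit_indices` and the memory layout are assumptions:

```python
from dataclasses import dataclass, field

# A sketch of the model's vocabulary: a task is a tuple of binary bits;
# a subtask is the slice of bits one agent sees; an agent maps bit
# patterns to a two-valued recommendation.

Task = tuple  # e.g. (1, 0, 1): presence/absence of environmental features

@dataclass
class Agent:
    """A boundedly rational decision maker who sees only a subtask."""
    bit_indices: list  # which positions of the overall task this agent reads
    memory: dict = field(default_factory=dict)  # pattern -> [count H0, count H1]

    def subtask(self, task: Task) -> Task:
        return tuple(task[i] for i in self.bit_indices)

task = (1, 0, 1, 1, 0, 0, 1, 1, 1)    # n = 9 bits of evidence
agent = Agent(bit_indices=[0, 1, 2])  # a bottom-level agent sees 3 bits
print(agent.subtask(task))            # -> (1, 0, 1)
```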
Quasi-repetitive tasks are similar but not identical to previous tasks. An integrated task means
the task is too complex for a single decision maker to handle. Firm examples include periodic
divisional budgets and performance reports, and multi-product scheduling. Sub-decisions of
multiple agents must be combined in some fashion, depending on organizational design, to reach
an overall decision. The tasks of interest here are assumed to be non-decomposable, meaning
that combining the correct solutions to each sub task may not always yield the correct solution
to the overall task.
In this model, each time the organization faces an input task and determines what its
organizational response will be, a decision making event is said to have occurred. Each of the
input tasks that the organization faces is represented by a binary string of n bits, which we denote by b = (b1, ..., bn). Each bit bi represents one element of the input task. We can thus consider each bit as a feature of the input task and the string of bits as input evidence that the
organization must analyze to determine an appropriate response. This is not uncharacteristic of
frequently modeled accounting decision environments such as audit or managerial judgments in
which decision makers are presented with either sequential or simultaneous probabilistic
information bits and asked to reach a binary decision such as investigate variances/do not
investigate variances, invest/do not invest, sample further/do not sample further, or qualify
opinion/do not qualify opinion.
The organization is represented by a number of agents (the sub-decision makers) each of
whom has access to a subtask (subset) of the n bit string, bi, bi+1, ..., bj where 1 ≤ i ≤ j ≤ n.
Each agent examines his/her local memory of prior tasks (bit patterns) and the corresponding
past decision outcomes in an attempt to learn what the decision on the current task ought to be.
That is, they try to learn from their past experience, for example whether or not past cash flow
projections were accurate or what was the cause of high defect rates. To study organizational
learning via this model, it is assumed that the organization initially knows nothing about the bits
of evidence that comprise each task other than the fact that each bit is two-valued (0 or 1) and
that the overall decision to be reached is similarly two-valued (0 or 1). The constructs of the
general model and their interrelationships are described in Ouksel and Mihavics (1995).
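The experiential learning procedure described here can be sketched as a tally of pattern-outcome co-occurrences. Tie-breaking and the zero-history default below are our assumptions, since the text does not specify them:

```python
from collections import defaultdict

class PatternLearner:
    """Experiential learning sketch: tally how often each observed bit
    pattern co-occurred with outcome 0 vs outcome 1, then recommend the
    historically more frequent outcome. The agent initially knows
    nothing; ties (including no history) default to 1 here, which is
    an assumption of this sketch."""
    def __init__(self):
        self.counts = defaultdict(lambda: [0, 0])  # pattern -> [n_H0, n_H1]

    def learn(self, pattern, true_outcome):
        self.counts[pattern][true_outcome] += 1

    def recommend(self, pattern):
        n0, n1 = self.counts[pattern]
        return 1 if n1 >= n0 else 0

learner = PatternLearner()
for outcome in (1, 1, 0):             # three past tasks with this pattern
    learner.learn((0, 1, 0), outcome)
print(learner.recommend((0, 1, 0)))   # outcome 1 seen twice, 0 once -> 1
```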
Three different stylized organizational structures are studied. These structures can all be viewed
as being hierarchical in architecture, but composed of varying numbers of levels (one, two, or
three). The first is a single level structure which Carley referred to as a "majority team". The
second is a two level structure referred to as an "expert team," and the third is a three level structure
known as a "hierarchy" (Carley, 1990). Figure 1 shows the structures used in the simulation.
Twenty-seven bits (n = 27) of evidence are available per input task and each bottom level agent
handles three bits. A 27 bit task is used here since it is the smallest number of bits which allows at
least a 3 level hierarchy where each decision maker has an odd number of subordinates. An odd
number of bits is chosen so that a unique correct decision can always be directly determined (i.e. so
the majority rule decision mechanism would never result in a tie).
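The sizing argument can be checked arithmetically: three levels with the smallest workable odd span of control (three) require 3 * 3 * 3 = 27 bits, and an odd bit count guarantees a strict majority always exists:

```python
# Sizing check (our arithmetic, following the text): with three levels
# and an odd span of control of 3 at every level, the task needs
# 3 ** 3 = 27 bits of evidence.
levels, span = 3, 3
bits = span ** levels
print(bits)           # 27
print(bits % 2 == 1)  # True: an odd bit count rules out ties
```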
Each bottom level agent examines his/her past experience with the 3 bit data pattern
encountered and probabilistically determines which organizational response (0 or 1) is more
likely to be correct. The agent bases this decision upon the historical accuracy of the bit pattern
encountered relative to the actual outcome, i.e. the hit rate.

[Fig. 1. The three organizational structures used in the simulation, each built from agents reading bits of evidence: the hierarchy (final decision = leader's judgement), the expert team (final decision = leader's judgement) and the majority team (final decision = majority vote).]

In the case of the majority team,
these predictions become "votes" and the organizational response is determined by majority
rule, whereas in the other structures the predictions become recommendations to the agents'
superiors. Each superior in turn treats his/her subordinates' recommendations as input data to
his/her own decision making process, following the same probabilistic historical learning
procedure as employed by the subordinates (i.e. pattern matching). An example of this would be
sales forecasts submitted by product line managers which are used to determine overall
divisional sales estimates. These estimates are then reviewed at a higher level and consolidated
into a corporate sales budget.
The basic model assumes that the actual environmental classification function, which
associates a correct decision with each task, is based on a simple majority rule. That is, if more
than n/2 data bits have values of 1, then the true outcome is 1 (also denoted H1). Otherwise, the true outcome is 0 (also denoted H0). We assume n is odd to avoid "ties" or inconclusive data, such as a bit pattern of 0 0 1 1. Suppose n = 9 and there are 3 bottom level "agents" or decision
makers reporting to a single manager. After a number of task trials, a typical decision maker's
memory might resemble that shown in Table 1.
Further suppose the current input task was modeled as 1 1 1 0 1 0 0 0 1. That is, agent #1 sees 1 1 1, agent #2 sees 0 1 0 and agent #3 sees 0 0 1. After matching on the input pattern 1 1 1, agent #1's decision would be a 1; with pattern 0 1 0, agent #2's decision is 0; and with 0 0 1, agent #3 decides to recommend a 0 (Table 1). The manager's input pattern is thus 1 0 0 and after matching this to local memory, he/she sets the overall organizational decision equal to a 0. However, he/she is incorrect. The true outcome is a 1 since the input task has more 1's than 0's
(5 vs. 4). This type of task, where completely correct sub-decisions at each stage do not
necessarily lead to a correct overall decision, is what is referred to as a "non-decomposable
decision task" (Carley, 1992).
"Task decomposition scheme" is defined as the set of subtasks assigned to the agents at the
lowest level of the organizational structure. The "segregated" scheme is one in which each data
bit can be analyzed by one and only one bottom level agent (i.e. the subtasks do not overlap).
The "overlapping" task decomposition scheme is one in which a data bit can be analyzed by
more than one agent. Figure 2 shows the two different task decomposition schemes that were
studied. This overlapping scheme was included to study the effect of data redundancy on
organizational learning.
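A minimal sketch of the two decomposition schemes, assuming evenly spaced windows; the function names and the wrap-around window arithmetic are ours, shown here for a 9 bit task with 3 agents:

```python
def segregated(n_bits, n_agents):
    """Each bit analyzed by one and only one agent: disjoint slices."""
    width = n_bits // n_agents
    return [list(range(a * width, (a + 1) * width)) for a in range(n_agents)]

def overlapping(n_bits, n_agents, width):
    """Each agent's window is wider than the stride, so adjacent
    agents can analyze the same bit."""
    step = n_bits // n_agents
    return [[(a * step + k) % n_bits for k in range(width)]
            for a in range(n_agents)]

print(segregated(9, 3))      # [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
print(overlapping(9, 3, 5))  # agent 0 sees bits 0-4; bit 3 is shared with agent 1
```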
Table 1. A typical bottom-level decision maker's memory: for each 3 bit pattern encountered, the number of past tasks on which the true outcome was H0 or H1

Pattern   H0   H1
0 0 0     57    7
0 0 1     42   22
0 1 0     42   22
0 1 1     22   42
1 0 0     42   22
1 0 1     22   42
1 1 0     22   42
1 1 1      7   57

[Fig. 2. The two task decomposition schemes, mapping bits of evidence to agents E1 through E9: segregated (disjoint subtasks) and overlapping (bits shared across agents).]

This simulation captures the notion of "weights" of evidence and accords with environmental reality. That is, not every bit of evidence is necessarily equally important in determining the correct decision for an input task. The potential for non-uniform weights of evidence is modeled via the inclusion of integer coefficients for each bit of evidence. Thus each binary variable, bi, is multiplied by its weight coefficient, wi, and the summation of these products is then compared to some threshold value to determine whether the correct decision is H0 or H1.
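The weighted classification rule can be sketched as follows. The threshold of half the total weight is our assumption (the text says only "some threshold value"), chosen because it reduces to simple majority rule under uniform weights:

```python
def weighted_outcome(bits, weights):
    """H1 if the weighted sum of evidence exceeds half the total weight,
    else H0. With all weights equal this is simple majority rule."""
    score = sum(w * b for w, b in zip(weights, bits))
    return 1 if score > sum(weights) / 2 else 0

uniform   = [1] * 9
clustered = [9, 9, 9, 5, 5, 5, 1, 1, 1]  # similar weights grouped by sub-unit

bits = [1, 1, 1, 0, 0, 0, 0, 0, 0]
print(weighted_outcome(bits, uniform))    # only 3 of 9 ones -> 0
print(weighted_outcome(bits, clustered))  # score 27 > 45/2 -> 1
```

The same evidence can thus flip the true outcome once weights are non-uniform, which is why sub-unit importance matters for learning.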
As mentioned earlier, it is reasonable to suspect that organizational learning performance
depends on the "goodness of fit" between an organization's structure and its environment. In
addition, it seems intuitive, due to the law of averages over a large sample of simulation runs,
that non-uniform weights distributed across input data bits would not yield results different from
the uniform weights case.
SIMULATION RESULTS
Design of virtual experiment
The design was a 3 × 2 × 3 factorial. Three different organizational structures were studied: majority
teams, expert teams, and hierarchies (see Fig. 1). Using a Monte-Carlo simulation, two different
task decomposition schemes were contrasted, segregated and overlapping (see Fig. 2), as were
three types of weight distributions, clustered (similar weights adjacent to each other), dispersed
(dissimilar adjacent weights), and uniform (all weights were equal). For example, 9 9 9 5 5 5 1 1 1
is an example of clustered weights, whereas 9 1 5 9 1 5 9 1 5 is dispersed. Thus, in total 18
different organizational case manipulations were studied. The operationalization of these
manipulations was described in the Simulation section above. The model output provided two dependent measures: organizational learning performance (accuracy) and speed of learning.
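The 18 cells of the design can be enumerated directly (labels taken from the text):

```python
from itertools import product

# The 3 x 2 x 3 virtual experiment grid.
structures     = ["majority team", "expert team", "hierarchy"]
decompositions = ["segregated", "overlapping"]
weights        = ["uniform", "clustered", "dispersed"]

cells = list(product(structures, decompositions, weights))
print(len(cells))  # 18 organizational case manipulations
```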
Results on learning performance and speed are summarized in Tables 2 and 3. To better
visualize the simulation results, Figs 3-5 are included to show graphs of learning performance
improvement (under a segregated task decomposition scheme).
Figure 3 shows that under conditions of uniform weights of evidence the flattest structure, the
majority team, learned both faster and ultimately better than the hierarchy. The improvement in
speed was due to decreased cognitive load. The majority team only had to learn 2^3 bit patterns per agent, whereas the hierarchy had two levels of 2^3 learning. The majority team's advantage in terms of learning speed is even more pronounced when compared to the intermediate structure, the expert team. The cognitive load of the expert team was much greater, involving 2^9 bit patterns of learning.
However, given a long enough time horizon, the expert team eventually catches up to a majority
team's learning proficiency level.
Figure 4, with clustered evidence weights, shows that under these conditions the majority
team structure suffers a large penalty in its organizational learning potential, as a one vote/one
person decision rule does not discriminate between analysts with different predictive
information. Figure 5, on the other hand, reveals similar trends with dispersed weights among
the three organizational structures studied to those exhibited under conditions of uniform
weights (see Fig. 3).
In the case of uniform weights, majority teams achieved a final performance level (84.8% for segregated decomposition and 82.8% for overlapping) significantly above that of hierarchies, which suffered information compression (80.0% for segregated decomposition and 78.8% for overlapping). In addition, with uniform weights, majority teams learned significantly faster than hierarchies (18.6 vs. 43.9 decision periods required for segregated decomposition and 21.2 vs. 50.9 for overlapping), consistent with the results reported by Carley (1992) for uniform weights.

Table 2. Final organizational learning performance (percentage of correct decisions)

                     Segregated decomposition          Overlapping decomposition
                     Majority   Expert                 Majority   Expert
                     team       team      Hierarchy    team       team      Hierarchy
Uniform weights      μ = 84.8   μ = 82.3  μ = 80.0     μ = 82.8   μ = 81.9  μ = 78.8
                     σ = 1.7    σ = 1.9   σ = 1.8      σ = 2.0    σ = 1.9   σ = 2.0
Clustered weights    μ = 70.9   μ = 83.5  μ = 79.4     μ = 76.2   μ = 83.5  μ = 80.9
                     σ = 4.2    σ = 1.8   σ = 2.0      σ = 1.9    σ = 1.5   σ = 1.8
Dispersed weights    μ = 82.6   μ = 80.9  μ = 78.2     μ = 83.4   μ = 82.1  μ = 78.2
                     σ = 2.2    σ = 1.7   σ = 2.3      σ = 1.8    σ = 1.7   σ = 2.1

Notes: Each cell has n = 50 simulation runs. The final performance figures reported are the percentage of correct organizational decisions achieved during the last 100 tasks (i.e. tasks 2401 through 2500).

1 A sample size of 50 was chosen since the binomial probability distribution approaches a normal distribution when both n*p and n*q are > 5. Since our proportions of success (p) range from 0.70 to 0.85, an n = 50 was deemed sufficiently large.
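The rule of thumb in the sample-size footnote can be verified for the reported range of success proportions:

```python
# Normal approximation to the binomial is deemed adequate when both
# n*p and n*q exceed 5; check the endpoints of the reported p range.
n = 50
for p in (0.70, 0.85):
    q = 1 - p
    print(p, n * p > 5 and n * q > 5)  # True for both endpoints
```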
Since the two dependent variables were not significantly correlated, two separate univariate analysis of variance tests (ANOVA) were used rather than a single multivariate analysis of variance (MANOVA).2 Using Cochran's C and Bartlett's Box F tests, homogeneity of variances assumptions were found not to be violated for performance data but were violated for speed data. A Lilliefors test further indicated that Table 2 was from a normal distribution. In Table 3,
Table 3. Speed of organizational learning (decision periods to reach a 69% smoothed performance level)

                     Segregated decomposition          Overlapping decomposition
                     Majority   Expert                 Majority   Expert
                     team       team      Hierarchy    team       team      Hierarchy
Uniform weights      μ = 18.6   μ = 88.1  μ = 43.9     μ = 21.2   μ = 77.2  μ = 50.9
                     σ = 5.6    σ = 19.2  σ = 9.2      σ = 6.2    σ = 16.8  σ = 14.7
Clustered weights    μ = 36.0   μ = 54.7  μ = 27.4     μ = 24.4   μ = 62.0  μ = 30.0
                     σ = 25.5   σ = 14.5  σ = 8.2      σ = 9.4    σ = 13.6  σ = 6.2
Dispersed weights    μ = 19.7   μ = 87.2  μ = 41.6     μ = 20.3   μ = 72.1  μ = 45.3
                     σ = 5.5    σ = 19.3  σ = 10.2     σ = 5.9    σ = 15.2  σ = 10.1

Notes: Each cell has n = 50 simulation runs. The speed of learning figures reported were calculated by dividing the # of decision tasks encountered (until a 69% exponentially smoothed performance level was attained) by 10. See Figs 3-5.

2 Only one of the 18 cells showed a significant correlation coefficient at the α = 0.05 level, and no cells showed significance at the α < 0.01 level.
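The speed measure described in the table notes can be sketched as follows. The smoothing constant and the starting level are our assumptions, since the paper does not report them:

```python
def speed_of_learning(correct_flags, alpha=0.05, target=0.69):
    """Exponentially smooth the per-task hit/miss record and report
    (first task reaching the target level) / 10, per the table notes.
    alpha and the agnostic 0.5 starting level are assumptions."""
    level = 0.5
    for t, hit in enumerate(correct_flags, start=1):
        level = (1 - alpha) * level + alpha * hit
        if level >= target:
            return t / 10.0
    return None  # target performance level never attained

# A toy record: 40 early misses, then consistently correct decisions.
flags = [0] * 40 + [1] * 460
print(speed_of_learning(flags))
```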
[Fig. 3. Organizational learning performance over time (in periods) under uniform weights: majority team, expert team and hierarchy.]
however, the test cast some doubt as to the assumption of normality. Non-parametric tests for differences in medians were therefore used for speed of learning.
More formally, our proposition 1 (P1) posited that majority and expert teams would
outperform hierarchies when information weights were uniform or dispersed. As can be seen in
Figs 3 and 5, mean final performance of both types of teams was better than for hierarchies
across overlapping and segregated information distribution schemes for both uniform and
dispersed conditions (F = 390; P = 0.00),3 confirming P1. Hierarchies suffered performance degradation because they encountered information compression losses across two levels of decision making.
Proposition 1 also predicted that hierarchies would not be the best performer when weights
were clustered. This conjecture was also corroborated. Hierarchies performed significantly
better than majority teams (F = 197; P = 0.00) but significantly worse than expert teams across
overlapping and segregated information distribution schemes under the clustered weight
condition (F = 166; P = 0.00). Looking at the figures shown in Table 2, we see that when
weights are clustered, the majority teams' final performance is significantly worse than either
hierarchies or expert teams. This is also seen in Fig. 4.
A simple majority rule, where each analyst's opinion counted equally towards the organizational decision, was ineffective when some decision makers had access to more important information than others: a one person/one vote rule is simply not appropriate when some agents have access to more important evidence than do others. The real world analog for this situation is that, with majority teams, the agents who really "know what they are talking about" have no more influence on the final decision than those who may be merely guessing. Both expert teams and hierarchies had some means by which to learn which analysts were more reliable, and thus came to weigh their recommendations more heavily. The hierarchy was still subject to greater information compression loss than the expert teams, however, and thus did not perform as well.

[Fig. 4. Organizational learning performance over time (in periods) under clustered weights: majority team, expert team and hierarchy.]

[Fig. 5. Organizational learning performance over time (in periods) under dispersed weights: majority team, expert team and hierarchy.]

3 In each F test, the degrees of freedom for the numerator (between groups variance) was 1, while the d.f. for the denominator (within groups variance) was 98.
Proposition 2 was also confirmed. As indicated in Table 3, due to the increased cognitive learning of two levels of processing, it took hierarchies considerably more time to learn than majority teams across overlapping and segregated information distribution schemes when information weights were uniform or dispersed (χ2 = 283; P = 0.00). This occurred because the majority teams' learning rate was dependent only on each analyst's speed of learning 2^3 different bit patterns, whereas hierarchies also had to wait for managers to learn (2(2^3) patterns). However, hierarchies needed significantly less time to learn than did expert teams, again under uniform or dispersed information (χ2 = 237; P = 0.00). The reason for this was that the leader within an expert team had to learn 2^9 different bit patterns. In other words, expert teams were significantly slower in learning than either hierarchies or majority teams because the expert had 512 different possible input data patterns to learn. This reflects the common notion of cognitive overload.
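The pattern-space arithmetic behind this argument is easily checked:

```python
# Cognitive load in pattern-space terms: a bottom-level analyst with
# 3 bits faces 2**3 patterns, while an expert-team leader reading
# 9 binary recommendations faces 2**9.
print(2 ** 3)  # 8 patterns per bottom-level analyst
print(2 ** 9)  # 512 patterns for the expert team's leader
```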
Proposition 3 was confirmed since a significant improvement in organizational performance
was found with overlapping information systems relative to segregated systems across teams
and hierarchies under clustered information weights (F = 67; P = 0.00), but not under the other
information conditions. Here we see that the expansion of each analyst's data access from 3 bits
of evidence to 5 bits helped reduce the number of analysts whose votes were based on
unimportant information. In the uniform and dispersed information conditions, overlapping
information had little effect, as anticipated.
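The two task decomposition schemes can be sketched as follows. The exact bit assignment and the wraparound overlap rule are our own assumptions for illustration, assuming a 27-bit problem split among nine analysts (3 bits each segregated, 5 bits each overlapping):

```python
# Hypothetical sketch of segregated vs overlapping task decomposition
# (our illustration, not the study's implementation).

PROBLEM_BITS = 27  # assumed: 9 analysts x 3 bits under the segregated scheme

def segregated(analyst: int) -> list[int]:
    """Analyst i sees bits 3i..3i+2: disjoint 3-bit windows."""
    return [(3 * analyst + k) % PROBLEM_BITS for k in range(3)]

def overlapping(analyst: int) -> list[int]:
    """Widen each window by one bit on either side (with wraparound),
    so neighbouring analysts share evidence: 5 bits each."""
    return [(3 * analyst + k) % PROBLEM_BITS for k in range(-1, 4)]

for i in range(3):
    print(i, segregated(i), overlapping(i))
```

Under clustered weights, the wider windows make it likelier that an analyst's view includes some of the important bits, which is why overlap helps only in that condition.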
The only proposition not borne out by the results was P4. No significant differences were
found when comparing learning speeds for overlapping versus segregated distribution schemes
for majority teams facing uniform or dispersed information weights (χ² = 0.048; P = 0.83). The increase in task complexity of learning 2^5 patterns versus 2^3 was not significant. Had the overlapping distribution scheme been wider (say 7 bits per analyst), significant differences might have begun to appear. On the other hand, perhaps the enhanced quality of the analysts' predictions (since each could view a larger proportion of the total problem) served to mitigate the increasing complexity of learning more bit patterns.
Additionally, Table 3 seems to indicate a possible increase in learning speed when information is clustered, both for hierarchies and for expert teams. This is due to the fact that with clustered weights, some analysts' recommendations (the ones with little useful information) can be safely ignored. The reduction in complexity which results once such analysts can be identified seems to more than offset the time needed to learn which analysts these are.
It is also interesting to note that the final performance levels for hierarchies are not
significantly different across the information conditions or between the task decomposition
schemes. The real world analog here seems to be that hierarchies are fairly stable, consistent
performers. They underperform majority teams when weights of evidence are evenly distributed
among decision making agents, but outperform them when important evidence is concentrated
in certain areas. Similar results were found when comparing expert teams to majority teams.
DISCUSSION
A.I.S. are designed to activate learning across a range of management accounting activities.
Current corporate concerns include, among others, operating budgets, capital investments,
productivity measures, new product development, value added activity analysis, target costing,
outsourcing and process re-engineering. In each of these areas, financial and non-financial goals
are determined. Actual performance is measured relative to these goals as managers learn
through feedback. An empirical question is the extent to which A.I.S. design and performance
in these operational areas have been affected by recent trends in flatter informational structures,
team decision making and lateral information sharing networks. Our study provides preliminary
theoretical evidence with respect to these issues.
Forecast accuracy is particularly important for certain accounting decisions such as budgetary
profit projections, capital budgeting cash flow estimates and target cost projections. Consistent
with Carley (1992) and Williamson (1975), our results suggest that as vertical information
compression is decreased, lateral A.I.S. perform well relative to hierarchical A.I.S., provided the
informational environment is relatively uniform across agents. This suggests that for fairly
routinized accounting projections such as project management scheduling or operating budgets
for mature product lines, flatter A.I.S. with only local agent feedback should be adopted.
Majority teams follow a one person/one vote decision rule and so have no way of learning to
discriminate between analysts with good predictive information. Assuming relatively uniform
weights of evidence for operating budgets and project management, there is no need for such
discrimination since all analysts have access to equally important information. Thus the majority
team structure is well suited to this type of environment.
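The contrast between the one person/one vote rule and learned expert weighting can be sketched as follows. This is our own illustration, not the paper's implementation, and the vote and weight values are hypothetical:

```python
# Sketch of the two decision rules discussed in the text (illustrative only).
# A majority team counts one vote per analyst; an expert leader applies
# learned reliability weights, so informed analysts count for more.

def majority_decision(votes: list[int]) -> int:
    """One person / one vote: predict 1 iff a strict majority votes 1."""
    return 1 if sum(votes) * 2 > len(votes) else 0

def weighted_decision(votes: list[int], weights: list[float]) -> int:
    """Expert rule: weight each vote by the analyst's learned reliability."""
    score = sum(w * (1 if v else -1) for v, w in zip(votes, weights))
    return 1 if score > 0 else 0

votes = [1, 1, 0, 0, 0, 0, 0, 1, 1]              # four informed "1" votes
weights = [3, 3, 0.1, 0.1, 0.1, 0.1, 0.1, 3, 3]  # informed analysts up-weighted

print(majority_decision(votes))           # 0 - guessers outvote the informed
print(weighted_decision(votes, weights))  # 1 - expert weighting recovers it
```

With clustered weights the two rules diverge, as in this example; with uniform weights every analyst is equally informed and the simple majority rule loses nothing.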
When environmental information of differential importance is clustered within agents, lateral
A.I.S. no longer indiscriminately outperform hierarchies. With clustered information, majority
teams are more vulnerable to uninformed analysts' errors than hierarchies. In hierarchies,
multiple levels of information processing buffer the organization from analyst error by reducing
the number of cases in which a single analyst can affect the overall organizational decision
outcome (Carley, 1990). These hierarchical levels on the other hand also introduce information
compression and loss of accuracy. In clustered environments, expert teams outperform both
majority teams and hierarchies. Managers learn to differentially weight their subordinates'
recommendations. As found empirically (Gordon & Miller, 1976; Govindarajan, 1984;
Khandwalla, 1972), environmental contingencies affect accounting reporting. In more uncertain
environments, where the relative importance of local information differs widely between agents,
expert learning is necessary. A.I.S. should be designed to track agents and differentially weight
multi-agent information.
Capital investments for example demand multi-agent expertise. A.I.S. should be designed to
monitor cash flow components of project forecast accuracy over repeated capital budgeting
decisions. This reinforces learning of the relative importance of subsets of accounting
information such as technology transfers, quality assurance and taxation issues which are jointly
vital to the success of the investment. Institutional memory of these events can be captured and
transferred to newly formed capital budgeting teams in order to mitigate the risks associated
with environmental uncertainties.
Overlapping agent A.I.S. reporting only improves performance when information is clustered
within majority teams. This accords with intuition. Sharing uniform or randomly distributed
accounting information does not improve performance. Nor is sharing clustered agent
information necessary for an expert team that learns where diagnostic information resides. For
example, performance of process re-engineering does not depend on overlapping production or
human resource information between team members. Since individual expertise is recognized
by the team, there is no point in duplicating this expertise between agents.
The findings related to the value of overlapping A.I.S. in majority teams have implications for
ad hoc cross-functional teams. For example, teams constituted solely for the purpose of
decreasing targeted production costs, developing product prototypes and launching new
products are increasingly common. In order to attain a target cost at Toyota, marketing provides
sales projections for alternative engineering designs under various cost scenarios associated with
these designs. In newly constituted teams, with no possible history of environmental prediction
expertise, a majority rule decision may well prevail. Our results suggest that an A.I.S. designed with overlapping clustered information between agents significantly improves performance.
In today's rapidly changing technological environment, not only accuracy but also the speed of organizational learning is a vital consideration in A.I.S. design. A.I.S. must reflect the fact
that the firm's value chain from product development through customer delivery is very time
dependent. Product development times and life cycles are much shorter; supplier linkages are
Just-in-Time; and production throughput and product delivery are faster. When information is
not clustered between agents, our results indicate that majority teams are not only more accurate
than hierarchies but perform significantly faster as well. This corroborates the intuition that hierarchical A.I.S. have a slower response time than flatter information structures. Routinized
decisions such as raw material deliveries, production scheduling and local performance
reporting feedback are well suited to flatter non-overlapping A.I.S. design. In such
environments, majority teams minimize information compression and thus tend to learn better
than hierarchies, and also more quickly than expert teams.
More complex management accounting decisions such as a value chain analysis of a product
line or outsourcing decisions may require greater agent expertise. Information of differential
importance exists between team members. While expert teams can lead to improved
performance, they do so at a cost: time. Expert team leaders need time to learn the relative abilities of their subordinates. Managers within hierarchies also face this problem, but the reduced span of control afforded by increased levels of management limits the complexity of learning (i.e. 2^3 possible input patterns per manager in the hierarchy versus 2^9 for the expert team leader). As a result, hierarchies also incur speed of learning delays, but to a much lesser extent than the expert team.
The implication for A.I.S. design is the cost-benefit tradeoff of speed versus accuracy in
clustered information environments such as capital investments and value chain analysis. When
speed is a predominant concern, hierarchies should be used. When accuracy is essential, expert
teams should be employed. These findings are summarized in Fig. 6, a contingency model for
choosing A.I.S. support based on the parameters identified in this study. When faced with a high
degree of uncertainty with respect to task environmental characteristics, hierarchies seem to
represent a conservative choice since they provide the most stable accuracy and speed of
learning performance across all combinations of environmental parameters in the study. These
findings extend Carley's (1990, 1992) earlier results regarding the robustness of hierarchies to
uncertain environments.
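The contingency model summarized in Fig. 6 can be expressed as a simple lookup. The function and key names here are our own illustration of the figure's mapping from environmental parameters to the suggested A.I.S. structure:

```python
# Lookup sketch of the Fig. 6 contingency model (names are ours, not the
# paper's). Keys: (weights distribution, critical performance factor);
# values: the A.I.S. structure the study's results favour.

CONTINGENCY = {
    ("clustered", "time"): "hierarchy",
    ("clustered", "accuracy"): "expert team",
    ("dispersed/uniform", "time"): "majority team",
    ("dispersed/uniform", "accuracy"): "majority team",
}

def choose_structure(weights: str, critical_factor: str) -> str:
    """Return the suggested A.I.S. structure for the given environment."""
    return CONTINGENCY[(weights, critical_factor)]

print(choose_structure("clustered", "time"))              # hierarchy
print(choose_structure("dispersed/uniform", "accuracy"))  # majority team
```

Note that majority teams dominate both cells of the dispersed/uniform row, while the clustered row splits on whether speed or accuracy is the binding concern.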
While the task is an abstraction of the organizational learning process, it does provide a set of
empirically testable propositions for future research. Of interest are issues of information
asymmetries and how problems of agency might impede the learning process. In addition,
organizational learning becomes more complex in dynamic environments. In such cases,
Bayesian learning models might be of interest. The optimal tradeoff between the number of
analysts and breadth of information is particularly important in "rightsized" managerial
environments. Finally, it would be of interest to examine alternative information structures and
Fig. 6. A contingency model for choosing A.I.S. support.

Weights Distribution     Critical Factor: Time    Critical Factor: Accuracy
Clustered                Hierarchy                Expert Team
Dispersed and Uniform    Majority Team            Majority Team
the cost of information relative to organizational outcomes (Malone, 1987). Alternative methodological approaches such as field studies, organizational surveys and controlled experimental tests of the simulation results need to be performed to either substantiate or refute the preliminary findings of this study.
Acknowledgements--The helpful comments of the editor, an anonymous reviewer and workshop participants at the
Fifth Biannual Accounting Research Conference at the University of New South Wales and the City University of Hong
Kong are gratefully acknowledged.
REFERENCES
Argote, L., Beckman, S. & Epple, D. (1990) The persistence and transfer of learning in industrial settings. Management
Science, 36, 140-154.
Argyris, C. & Schon, D. (1978) Organizational learning: A theory of action perspective. Reading, MA: Addison-Wesley.
Ashton, R. H. & Brown, P. R. (1980) Descriptive modeling of auditors' internal control judgments: replication and extension. Journal of Accounting Research, Spring, 1-15.
Attewell, P. (1992) Technology diffusion and organizational learning: The case of business computing. Organization Science, 3, 1-19.
Carley, K. (1992) Organizational learning and personnel turnover. Organization Science, 3, 20-46.
Carley, K. (1990) Coordinating for success: trading information redundancy for task simplicity. In Proceedings of the
23rd Annual Hawaii International Conference on Systems Sciences, Kona, Hawaii.
Cyert, R. M. & March, J. G. (1963) A behavioral theory of the firm. Englewood Cliffs: Prentice Hall.
Den Hertog, F. & Wielinga, C. (1992) Control systems in dissonance: the computer as an ink blot. Accounting,
Organizations and Society, 17, 103-127.
Earl, M. J. & Hopwood, A. G. (1981) From management information to information management. In H. C. Lucas et al.
(Eds) The information system environment (pp. 123-156). Amsterdam: North-Holland.
Emmanuel, C., Otley, D. & Merchant, K. (1990)Accounting for management control. London: Chapman and Hall.
Fiol, C. M. & Lyles, M. A. (1985) Organizational learning. Academy of Management Review, 10, 803-813.
Flamholtz, E. G., Das, T. K. & Tsui, A. S. (1985) Toward an integrative framework of organizational control.
Accounting, Organizations and Society, 10, 35-50.
Galbraith, J. R. (1977) Organizational design. Wokingham: Addison-Wesley.
Gordon, L. A. & Miller, D. (1976) A contingency framework for the design of accounting information systems.
Accounting, Organizations and Society, 1, 39-70.
Govindarajan, V. (1984) Appropriateness of accounting data in performance evaluation: An empirical examination of
environmental uncertainty as an intervening variable. Accounting, Organizations and Society, 9, 125-135.
Hammond, K. R. & Deane, D. H. (1973) Negative effects of outcome feedback in multiple cue probability learning. Organizational Behavior and Human Performance, February, 30-44.
Harrell, A. M. (1977) The decision making behavior of air force officers and the management control process. The
Accounting Review, October, 833-841.
Hutchins, E. (1990) The technology of team navigation. In J. Galegher, R. E. Kraut & C. Egido (Eds) Intellectual Teamwork (pp. 191-220).
Khandwalla, P. N. (1972) The effect of different types of competition on the use of management control. Journal of
Accounting Research, 10, 275-285.
Kim, D. H. (1993) The link between individual and organization. Sloan Management Review, 34, 37-50.
Lant, T. K. & Mezias, S. J. (1992) An organizational learning model of convergence and reorientation. Organization
Science, 3, 47-71.
Lawrence, P. R. & Lorsch, J. W. (1967) Differentiation and integration in complex organizations. Administrative Science
Quarterly, 12, 1-48.
Levitt, B. & March, J. G. (1988) Organizational learning. Annual Review of Sociology, 14, 212-243.
Levitt, B. & March, J. G. (1995) Chester I. Barnard and the intelligence of learning. In O. E. Williamson (Ed.) Organization Theory (pp. 11-37). Oxford: Oxford University Press.
March, J. G. & Olsen, J. P. (1976) Ambiguity and choice in organizations. Oslo: Universitetsforlaget.
Malone, T. W. (1987) Modeling coordination in organizations and markets. Management Science, 33, 1317-1330.
Mintzberg, H. (1979) The structuring of organizations. New Jersey: Prentice Hall.
Monge, P. R. (1987) The network level of analysis. In C. R. Berger & S. H. Chaffee (Eds) Handbook of Communication Science (pp. 61-87). Newbury Park: Sage.
Nelson, R. R. & Winter, S. G. (1982)An evolutionary theory of economic change. Cambridge MA: Harvard University
Press.
Otley, D. T. (1980) The contingency theory of management accounting: Achievement and prognosis. Accounting, Organizations and Society, 5, 194-208.
Otley, D. T. & Berry, J. (1979) Risk distribution in the budgeting process. Accounting and Business Research, 9, 325-337.
Ouchi, W. G. (1977) The relationship between organizational structure and organizational control. Administrative
Science Quarterly, 22, 95-113.
Ouksel, A. M. & Mihavics, K. (1995) A general model of organizational learning (UIC CRIM Technical Report 1995-01). The University of Illinois at Chicago.
Pfeffer, J. (1982) Organizations and organization theory. New York: Pitman.
Senge, P. M. (1990) The fifth discipline: the art and practice of the learning organization. New York: Doubleday.
Simons, R. (1995) Levers of control. Boston: Harvard Business School Press.
Weick, K. E. & Roberts, K. R. (1993) Collective mind in organizations: heedful interrelating on flight decks.
Administrative Science Quarterly, 38, 357-381.
Williamson, O. E. (1975) Markets and hierarchies: analysis and antitrust implications. New York: Free Press.