
Machine Learning Activity-Based Costing: Conceptual Test

Brian D. Knox
Boise State University



Machine Learning Activity-Based Costing: Conceptual Test

Abstract
Activity-based costing (ABC) can provide more accurate overhead allocations than traditional
absorption costing methods, but its initial and ongoing cost studies are labor-intensive and
therefore expensive. In this paper, I adopt a design science approach and conceptually test a
variation of ABC that automates, through machine learning, the major components of the ABC
process to reduce the need for expensive ABC cost studies. I conduct three numerical
experiments using simulated data sets to illustrate the theoretical space for a combination of
these two relatively widespread tools—ABC and machine learning—into something novel. I
simply call this combination machine learning activity-based costing (MLABC). I find that
MLABC can be more accurate than ABC, while avoiding the latter’s expensive cost studies, if
(1) the data set includes longitudinal correlations between cost drivers and cost resources, (2)
some correlations between cost drivers and cost resources have interactions, and (3) avoiding
ABC’s cost study does not leave the firm ignorant of a cost driver that accounts for a substantial
amount of variance between cost drivers and cost resources. I also find preliminary evidence
that, within a certain subset of the simulated decision space, MLABC can facilitate active
experimentation with the firm’s cost function to learn more about it.



1. Introduction

An overhead cost cannot be directly traced to individual product units and often cannot

be directly traced to larger-scale cost objects such as jobs, batches, customers, product lines,

processes, departments, or regions. Because many important decisions hinge upon the

profitability of these cost objects, firms often expend considerable resources to approximate how

much overhead cost belongs to these cost objects (referred to as overhead allocation). More

accurate overhead allocation allows firms to make a variety of decisions more profitably.

Activity-based costing (hereafter ABC) is a system of allocating overhead that has been found to

be more accurate than other absorption costing systems (see Ittner et al. 1997; Babad and

Balachandran 1993). Under ABC, overhead cost categories recorded in the financial reporting

system, known as cost resources, are re-sorted into several activity cost pools. Each activity cost

pool is driven by a cost driver that can be traced to cost objects. For example, the financial

system records dollar amounts for supervisor wages and maintenance costs. These figures might

be apportioned between two activity cost pools, such as machining and testing, from which costs

are allocated to individual product units based on those product units' actual consumption of each activity cost pool's cost driver, such as machine hours and batches.

ABC is costly, though, which limits its adoption. These costs can include labor-intensive

start-up and upkeep as well as human-driven error and latency (I discuss these costs further in

Section 2). In this paper, I use a design science approach to conceptually test a variation of ABC

that combines ABC principles with machine learning technology, potentially mitigating at least

some of ABC’s costs. O’Leary (2009) explains that a design science approach is a common

mode of inquiry for embryonic and emerging technologies, listing “design science” as the first of

several common research methodologies for technologies in both the embryonic and emerging



maturity levels (see his Table 4). “Design science describes and evaluates constructs, models,

methods, and prototypes of tools that provide a novel solution to difficult problems” (Sedbrook

2012, 48). Accordingly, in this paper I seek to illustrate and conceptually test a novel

technology-centered solution to the difficult problem of accurate overhead allocation.1

The design science approach includes an array of techniques and “allows many types of

quantitative evaluation of an IT artifact, including […], analytical simulation” (Hevner et al.

2004, 77). I use a specific type of analytical simulation sometimes called a numerical experiment

(see Balakrishnan and Penno 2014). I conduct three numerical experiments that use simulated

data and experiment-like hypothesis testing. Numerical experiments can be especially useful for

early technologies with few recorded cases from practice. In this instance there are no prior cases

from practice, to my knowledge, even though ABC and machine learning are used separately by

many modern firms.

Machine learning is a form of artificial intelligence that can be defined as the use of

statistical analysis and algorithms to complete, with limited direct human guidance, tasks such as

sorting, classifying, and estimating (e.g. Chapelle et al. 1999; see also Michie et al. 1994).

Machine learning leverages computers’ ability to “crunch numbers” and extract relationships that

are otherwise hard to articulate or perceive, and to do so in a manner that is almost costless at the

margins. I simply refer to my variation of ABC as machine learning activity-based costing

(hereafter MLABC). I test ABC and MLABC using a series of simulated special order decisions.

I gather each costing system’s estimate of a cost object’s true cost (given actual cost driver

1 Many Journal of Emerging Technologies in Accounting articles have used a design science approach (e.g. Chou et al. 2018; Appelbaum and Nehmer 2017; Lombardi and Dull 2016). Furthermore, Muehlmann et al. (2015) explain that design science research and artificial intelligence are a leading method and topic for research published in the Journal of Emerging Technologies in Accounting between 2004 and 2013.



consumption by that cost object) and compare how many times those estimates produce the

correct answer at various customer price ceiling levels.

ABC can benefit anyone who profits from the success of the firm (e.g. shareholders,

banks, managers, workers, community members, etc.), since its increased accuracy in overhead

allocations makes profitable decisions more likely. MLABC should have the same types of

beneficiaries, assuming it can produce ABC-like accurate overhead allocations but at a lower

cost than ABC. MLABC’s potential for decreased implementation costs also might mean broader

adoption. Firms that would benefit from ABC’s improved decision-making but cannot justify its

cost might be able to justify MLABC’s cost. Furthermore, improved costing decisions could

allow broader adoption of ABC principles in the not-for-profit and governmental sectors (e.g. see

Kennett et al. 2007).

That said, there may be some obstacles to MLABC directly replacing ABC. Reducing the

costs of ABC’s labor-intensive start-up and upkeep means not engaging in the human effort of

initial and ongoing ABC cost studies. ABC’s initial cost study provides two types of knowledge.

First, an ABC cost study provides knowledge about which cost drivers are correlated with cost

resources, potentially uncovering cost drivers that were previously unknown. Second, an ABC

cost study provides numerical estimates of the correlations between cost drivers and cost

resources. While MLABC likely can substitute for the second type of knowledge ABC provides

(i.e. estimates of cost-driver-to-cost-resource correlations), it seems unlikely that MLABC will

discover previously unknown cost drivers. Thus, MLABC likely must learn more than ABC

from cost drivers that are previously known to be correlated with cost resources.

My first numerical experiment demonstrates how MLABC can learn more from

previously-known cost drivers, specifically when the relationships between cost drivers and cost



resources involve (1) longitudinal correlations wherein the cost drivers in one period are

correlated with the cost resources of another period and (2) interactions among the correlations

between cost drivers and cost resources. I expect longitudinal correlations and interactions to

occur in a variety of contexts, including quality costs and long-term focused costs.

I then directly incorporate ABC’s potential to use unique cost drivers (discovered during

ABC’s initial cost study) in my second numerical experiment. I also incorporate the possibility

that some cost drivers that are otherwise intractable for an ABC system could be used by an

MLABC system. That is, I give ABC a unique cost driver that it can observe but MLABC

cannot, and I give MLABC a unique cost driver that it can observe but ABC cannot (I discuss

examples of cost drivers that could be unique to MLABC in Section 3). Then I vary the

informativeness of ABC’s unique cost driver to determine at what point the two costing systems

produce equivalently accurate decisions. I find that, in my simulated decision space, ABC’s

unique cost driver must be set to about ten times the explanatory power of the MLABC-unique

cost driver for the ABC and MLABC systems to provide equally accurate cost estimates. This

exact numerical result is specific to my simulated scenario and a ten to one relationship may not

be exactly the point at which parity would be observed in practice. But the result's overall message strongly suggests that theoretical space exists for MLABC to provide the decision-making benefits of ABC while avoiding its labor-intensive cost studies.

In my third numerical experiment, I consider one novel potential feature of MLABC:

active experimentation. MLABC could suggest exogenous manipulations to cost driver

consumption to help optimize how much it learns from subsequent cost data. I train an MLABC

neural network on an initial set of data and then determine which cost driver is most strongly

correlated with output error. This is the cost driver the neural network finds least informative and



should be the most beneficial target for active experimentation. Then I, acting on this

information from the MLABC system, exogenously manipulate the level of this least-understood

cost driver in the subsequent data added to the data set. I find that an actively experimenting

MLABC can produce significantly improved predictions compared to an MLABC system that

receives additional data that is not exogenously manipulated in this way. However, this result

only obtains among a subset of my analyses. Active experimentation does not produce

improvements toward one edge of the simulated decision space. While my results from this third

numerical experiment are limited, they help provide an initial view into the opportunities and

challenges of implementing this potential feature of MLABC.

This paper’s results generally contribute to the emerging accounting technologies

literature by illustrating a novel combination of two techniques that are already widely used, a

combination that could have a significant effect on profitable decision-making. The paper also

contributes to the ABC and machine learning literature, at the very least by helping to bring them

into the orbit of one another. ABC is among a handful of the most transformative modern

advancements in managerial accounting thought (or accounting thought in general), and this

paper suggests a refinement to ABC that could further broaden that impact. Finally, these results

add a new line of inquiry within the growing library of robotic process automation and

accounting automation research, topics that many believe will be the next sea change in

accounting practice.

2. Background

The Design Science Approach



A paper that uses the design science approach generally completes at least a subset of six

main design science activities (see Appelbaum and Nehmer 2017). I summarize these activities

below.

(1) Identifying the problem (and motivation)

(2) Defining solution objectives

(3) Designing and/or developing an artifact to meet at least some solution objectives

(4) Demonstrating the solution

(5) Evaluating the solution

(6) Communicating the problem and solution

Although this paper performs all six of these activities in at least some capacity, I most

emphasize designing, demonstrating, and evaluating an artifact that solves the problem. Sections

1 and 2 do identify the problem and solution objectives, and the paper as a whole does constitute

the sixth activity, with the objective of promoting future development and research in this area.

The problem, as described in Section 1, starts with the fact that overhead costs cannot be

directly traced to individual product units or many other cost objects. This is a fundamental

axiom of managerial accounting.2 Profitable decisions are easier when cost responsibility is

clear, and the profitability of different alternatives hinges upon accurately allocating overhead

costs within those alternatives. This includes allocations made to cost objects that include more

than one individual product unit, such as jobs, batches, customers, product lines, processes,

departments, and regions. In the 1980s, ABC offered a new approach to the overhead problem.

ABC differs in how it applies overhead cost to cost objects. ABC uses multiple cost drivers and,

crucially, uniquely derived activity cost pools (see Cooper and Kaplan 1991; 1988). ABC would

2 Or, at least, a fundamental axiom of traditional absorption costing.



be largely indistinguishable from a job-order costing system that uses departmental overhead

rates if its activity cost pools were simply built atop departmental or other existing firm

hierarchies. Instead, during what is called first-stage allocation, ABC re-sorts overhead costs into activity cost pools. These pools are built around the firm's value-added activities rather than the firm's existing hierarchies. ABC refers to overhead costs, as they are

found in the financial accounting and budgeting systems, as cost resources.

First-Stage and Second-Stage Allocation

ABC’s first-stage allocation is necessary because it allows cost resources to be correlated

to more than one cost driver, accounting for more of the complexity of overhead cost

responsibility. ABC accounts for this complexity and improves overhead allocation, which in

turn provides a more accurate picture of cost objects’ profitability and improves decision-making

(see, for example, Cagwin and Bouwman 2002; Kennedy and Affleck-Graves 2001; Shields

1995). Here is a brief example. Assume there are two cost resources at a firm: (1) supervisor

wages ($100,000), and (2) maintenance costs ($150,000). After some study, the firm decides that

it performs two activities that add value and that each activity's costs are driven by a cost driver

(cost drivers in parentheses): (I) machining (machine hours) and (II) testing (batches). If the firm

determines that 75% of supervisor wages are consumed by the testing activity and 25% of

supervisor wages are consumed by the machining activity, then $75,000 of supervisor

wages should be assigned to the testing activity cost pool and $25,000 should be assigned to the

machining activity cost pool. If 60% of maintenance costs are consumed by the machining

activity and 40% of maintenance costs are consumed by the testing activity, then $90,000 of



maintenance costs should be assigned to the machining activity cost pool and $60,000 of

maintenance costs should be assigned to the testing activity cost pool.
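To make the arithmetic concrete, here is a minimal sketch of this first-stage allocation in Python with numpy, using the figures from the example above:

```python
import numpy as np

# Cost resources recorded in the financial system:
# supervisor wages and maintenance costs
cost_resources = np.array([100_000, 150_000])

# Percentage weights: rows are cost resources, columns are activities
# (machining, testing); each row sums to 1
pw = np.array([[0.25, 0.75],   # supervisor wages
               [0.60, 0.40]])  # maintenance costs

# First-stage allocation: re-sort cost resources into activity cost pools
activity_cost_pools = cost_resources @ pw
print(activity_cost_pools)  # [115000. 135000.] -> machining, testing
```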

That is a simple example of first-stage allocation. Cost resources of $100,000 and

$150,000 are re-sorted into activity cost pools of $115,000 and $135,000. Each cost resource is a

mixture of overhead costs correlated to cost objects through two separate cost drivers. The

resulting activity cost pools, on the other hand, are supposed to be correlated through only one

cost driver each. This simple example can be expanded across a multitude of cost resources and

cost drivers. After first-stage allocation is complete, ABC allocates costs from activity cost pools

to individual product units or other cost objects using an activity rate, which is equal to the total

activity cost pool divided by the total cost driver. As cost objects actually consume units of each

cost driver, they are charged with overhead costs based on the activity rate associated with that

cost driver. This is called second-stage allocation, and it is remarkably similar to traditional job-

order costing with department-level overhead rates. The main differentiator between the two is ABC's first-stage allocation.

The Cost of ABC

However, ABC is costly to implement and maintain (Kaplan and Anderson 2004; see

Kennett et al. 2007; Cooper and Kaplan 1991; 1988). There are four main sources of this

costliness (the latter two flow from the first two). First, a firm cannot

implement ABC without first deriving the percentage weights that allow it to complete first-stage

allocation and convert cost resources into activity cost pools.3 This requires an extensive hands-

on process that includes interviews, surveys, and expert judgment. Compensating specialized

3 In the example above, these weights are the 75% and 25% that I multiply supervisor wages by and the 60% and 40% that I multiply maintenance costs by.



experts for an initial ABC cost study incurs significant cost. Second, ABC’s initial cost study

will become outdated over time and the percentages that weight how much of each cost resource

belongs to each activity cost pool will change in a dynamic production environment. This means

the initial cost study requires regular follow-up studies that similarly involve expensive, labor-

intensive effort.

Third, because the initial ABC cost study and its follow-on studies require substantial labor-

intensive effort, they are prone to human error and erroneous subjective judgments in the

production, collection, or analysis of the cost study’s often-qualitative information. The fourth

cost that can be incurred by ABC systems is latency. That is, an absorption costing system

inevitably takes time to update its approximation of relationships between overhead costs and

cost objects. ABC creates additional opportunity for this latency because a larger array of

relationships must be updated, and those updates can require potentially constrained expert

judgment that is not required in a traditional absorption costing system.

The Algebra of ABC

In this sub-section, I describe ABC mathematically. Equation 1 expresses second-stage

allocation. ABC assumes that the amount of overhead each cost object is responsible for, i.e. the

OHA matrix in Equation 1 (OHA stands for “Overhead Allocation”), varies based on how many

units of each cost driver the cost object consumes, out of m cost drivers, i.e. the ACDC matrix

(which stands for “Actual Cost Driver Consumption”). Note that the OHA matrix is an n × 1

array of costs, one for each of the n cost resources. The sum of all elements in the OHA matrix would

be the total overhead allocation that belongs to the cost object in question. The AR matrix (which

stands for “Activity Rate”) represents the activity rates for m cost drivers across n cost resources.



$[AR]_{n \times m} * [ACDC]_{m \times 1} = [OHA]_{n \times 1}$   [1]

The dimensions of the AR matrix suggest that activity cost pools are mere flow-through

devices for the relationship between cost resources and cost driver consumption. The AR matrix

is an n × m matrix, with a value for every cost driver (in the columns) per cost resource (in the

rows). The AR matrix represents how many dollars from each cost resource are incurred when a

unit of each cost driver is consumed, and each column sums to an activity cost pool’s activity

rate. When I break down the AR matrix further (which gives Equations 2 and 3 below), it should

be clearer how this matrix incorporates ABC’s first-stage allocation.

$[AR]_{n \times m} = [TCR]_{n \times n} * [PW]_{n \times m} * [TCD]_{m \times m}$   [2]

$[TCR]_{n \times n} * [PW]_{n \times m} * [TCD]_{m \times m} * [ACDC]_{m \times 1} = [OHA]_{n \times 1}$   [3]

The TCR matrix (which stands for “Total Cost Resources”) is an n × n diagonal matrix

that represents n cost resource totals. As a diagonal matrix, all numbers are zero other than the

top-left to bottom-right diagonal. On that diagonal are total cost resource numbers, each column

representing a cost resource. Similarly, the TCD matrix (which stands for “Total Cost Drivers”)

is an m × m diagonal matrix that represents m cost driver totals. Like TCR, the TCD matrix has

zeroes except for the diagonal values, which reflect total cost driver figures. Each column,

effectively, stands for a cost driver. Importantly, the diagonal values are actually the inverse of the total cost driver (i.e. $1/\text{total cost driver}$). This allows the AR matrix to be a rate of cost resources over cost

drivers. The PW matrix in Equations 2 and 3 stands for “Percentage Weights” and is an n × m

matrix that carries the all-important percentage weights that are obtained at great expense during

ABC’s initial cost study. PW’s elements represent how much each cost resource (i.e. the

diagonal figures in TCR) is driven by each cost driver (i.e. the diagonal figures in TCD). Each

row in the PW matrix sums to 1, since 100% of each columnar total value in the TCR matrix is



divided into activity rates from the cost drivers in TCD. PW is the central component of first-

stage allocation and the primary reason an ABC system is more accurate in its overhead cost

allocations than traditional job-order costing.

The matrices shown in Equation 3, including how they are ordered, map closely to the

steps of ABC as traditionally expressed. The appendix provides an example of this. As shown in

that appendix, the product of [TCR] * [PW] yields a matrix with columns that sum to activity

cost pools. The product of [TCR] * [PW] * [TCD] yields a matrix with columns that sum to

activity rates (i.e. the AR matrix). Although the TCR and TCD matrices are important for

practical calculations of ABC, in a theoretical sense they merely scale the PW matrix by the

magnitude of cost resources and cost drivers in a given firm at a given time. For simplicity, I can

assume TCR and TCD are identity matrices (i.e. the diagonal values are all 1) and re-state

Equation 3 as Equation 4 below.

$[PW]_{n \times m} * [ACDC]_{m \times 1} = [OHA]_{n \times 1}$   [4]

This re-arrangement highlights the PW matrix, which contains the correlations between

consumption of cost drivers and responsibility for overhead costs. The PW matrix in this

equation embeds the most important information from first-stage allocation into the second-stage

allocation, allowing the two steps to be combined into a single simple equation (which is useful

for when I feed the whole process into a machine learning algorithm). Finding the correct values

for this matrix is the principal purpose behind ABC’s expensive cost studies.
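As an illustration of these equations, the following numpy sketch computes Equations 2 through 4 end to end. The specific totals, weights, and dimensions are hypothetical, chosen only to make the matrix shapes visible:

```python
import numpy as np

# Hypothetical dimensions: n = 3 cost resources, m = 2 cost drivers
tcr = np.diag([100_000.0, 150_000.0, 80_000.0])  # TCR: total cost resources (n x n)
pw = np.array([[0.25, 0.75],
               [0.60, 0.40],
               [0.50, 0.50]])                    # PW: percentage weights (n x m)
tcd = np.diag([1 / 10_000, 1 / 500])             # TCD: inverses of total cost drivers (m x m)
acdc = np.array([[12.0],
                 [2.0]])                         # ACDC: actual cost driver consumption (m x 1)

ar = tcr @ pw @ tcd  # Equation 2: activity rates (n x m)
oha = ar @ acdc      # Equations 1 and 3: overhead allocation (n x 1)

# Equation 4: with TCR and TCD as identity matrices, OHA reduces to PW @ ACDC
oha_simplified = pw @ acdc
```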

3. Hypothesis Development

Adapting ABC to a Neural Network



Neural networks, which consist of layers of nodes that each receive, hold, and transmit information to the next layer (see Schmidhuber 2015 for additional background), are an important machine learning tool that seems particularly appropriate

for MLABC. A neural network should easily handle the input and output as well as unique data

characteristics that traditional ABC may fail to account for (i.e. longitudinal correlations and

interactions). To adapt ABC to a neural network (so as to test MLABC), I start from Equation 4.

Neural network input is a cost object’s consumption of a set of cost drivers, i.e. the ACDC

matrix. These are what the firm can observe and directly trace to cost objects during the period.

The neural network’s output is the allocated overhead costs that the cost object is supposed to be

responsible for, i.e. the OHA matrix. The neural network trains with these inputs and outputs

using previous data about actual cost driver consumption and actual overhead costs. This training

allows the neural network to backpropagate and learn estimated values for the PW matrix.4

Since the actual overhead cost can only be observed in the aggregate (i.e. the firm knows

how much overall overhead it incurs, but it does not know how much overhead belongs to each

cost object), applying a neural network to this problem assumes the PW matrix for aggregate

numbers is roughly the same as the PW matrix for individual cost objects. This seems like a fair

assumption. At the very least it seems comparable to similar assumptions made by ABC, which

also uses aggregate global information gathered during the initial ABC cost study to guide local

overhead allocations.5

4 And, in practice, these values would also be scaled by total cost drivers and total cost resources.
5 Furthermore, there may be more room for MLABC to relax this assumption and account for local differences in the correlations between cost resources and cost drivers. In a human-led ABC system, creating local variations in a PW matrix could quickly get too costly or intractable. Such local variations could be effectively costless for an MLABC system, which should excel at routinized number-crunching.



Longitudinal Correlations and Interactions Between Cost Drivers

MLABC can almost costlessly create new input nodes for prior periods’ cost driver

consumption, whereas I expect that this could quickly become too ungainly or expensive under

an ABC system. This can be important when prior period cost drivers are correlated with current

period cost resources. Below are several examples of cost drivers and cost resources that could

be longitudinally correlated.

 Seasonal costs: a toy line’s consumption of a sales-related cost driver in December

could be correlated with cost resources like sales returns, inventory obsolescence, and

selling costs during the first few months of the next year.

 Quality costs: a batch’s consumption of a cost driver related to preventive quality

controls could be correlated with structural failure and warranty costs in later periods.

 Hiring costs: a project’s consumption of a cost driver related to employee recruitment

could be correlated with future period cost resources related to re-work, waste, obsolescence, and other learning curve costs.

 Training costs: a sales team’s consumption of a training-related cost driver one month

could be correlated with customer turnover-related cost resources in later months.

 Information technology costs: a region’s consumption of a cost driver related to

server usage could be correlated with later periods’ cost resources related to efficient

production and logistics management.

 Transportation costs: a product line’s consumption of a distance-related cost driver

this period could be correlated with later periods' spoilage, inventory shrink, or sales returns cost resources.



Traditional ABC does not account for possible interactions among cost drivers either.

That is, it does not account for how the level of consumption of one cost driver might moderate

the rate at which overhead costs are incurred when a second cost driver is consumed. Here are

some examples where this could be the case.

 If a team incurs a lot of inspection hours for a product (a cost driver), it likely

decreases the rate at which batches (another cost driver) are correlated with re-work

costs (a cost resource).

 Increased training hours (a cost driver) likely decrease the rate at which direct labor

hours (another cost driver) are correlated with supervisor overtime costs (a cost

resource).

 An inefficient increase in kiln heat (a cost driver) likely increases the rate at which

kiln hours (another cost driver) are correlated with warranty costs (a cost resource).

 Faster ticket resolution (a cost driver) likely decreases the rate at which machine

hours (another cost driver) are correlated with machine downtime costs (a cost

resource).

Assuming longitudinal correlations and interactions exist in a firm, I expect MLABC can

make more use of the firm’s cost driver and cost resource data because MLABC can more easily

account for these types of relationships. This expectation leads to H1, formalized below.

H1: If MLABC and ABC both observe a set of cost drivers that have correlations with
cost resources that include longitudinal and interaction relationships, then MLABC is
more accurate than ABC.

Unique Cost Drivers



An ABC cost study can reveal one or more cost drivers that were not previously known to be correlated with cost resources. MLABC, if it is to avoid ABC's

costliness, omits ABC’s initial and ongoing cost studies. Therefore MLABC remains effectively

ignorant of ABC’s unique cost drivers. How much this ABC advantage offsets the MLABC

advantage hypothesized in H1 depends on how informative the ABC-unique cost drivers are. The

next step of conceptually testing MLABC is to qualitatively examine the point at which

MLABC’s advantage from H1 is overcome by ABC’s ability to access unique cost drivers.

However, I must first discuss the possibility that MLABC also has unique cost drivers. These are

likely cost drivers that are known or suspected to be correlated with cost resources but that ABC

cannot tractably incorporate into its calculations. MLABC could incorporate cost drivers that

constitute a complex array of data, incurring little to no marginal cost from adding dozens,

hundreds, or even thousands of inputs into its calculations.6 ABC, on the other hand, must

engage in a costly process to study each new cost driver and input, a process that also bears the

risk of human error and increased latency. Therefore some cost drivers known prior to an ABC

cost study might be usable by MLABC but unusable by ABC.

For example, email server activity and email metadata could be strongly correlated with

overhead costs. Emails sent past the end of the regular workday from the engineering department, for example, could be correlated with overtime costs or maintenance costs because they could

signal a critical breakdown in an important machine. Long emails with negative sentiment could

be correlated with higher employee turnover costs. MLABC can incorporate cost drivers such as

these, while ABC cannot. Another example is market data. MLABC could receive market inputs

6 Increasing the number of inputs also increases MLABC's neural network complexity, which in turn increases how much training data is required to form an effective estimation of the PW matrix. However, if a firm is in a position to add many inputs, it is also likely to have a relatively large amount of extant training data available.



in real-time. These inputs, in turn, might help the firm estimate how much obsolescence or other

inventory holding costs individual cost objects incur. Continuous geographical inputs, such as

GPS coordinates for drivers, could also be added to an MLABC system in real-time.

As argued earlier, ABC can observe one or more unique cost drivers. But MLABC may

also observe one or more unique cost drivers and, as hypothesized in H1, it gains more

information from the cost drivers that MLABC and ABC have in common. It seems unlikely that

ABC is as accurate as MLABC unless ABC’s unique cost drivers account for a substantial

amount of the variance in cost resources. And, if that is the case, it brings into question why

those cost drivers are ABC-unique, since ABC-unique cost drivers are those that are not known

to be correlated with cost resources before an ABC cost study. One would expect that if a cost

driver accounts for a substantial amount of variance, it would already be strongly suspected as

being correlated with cost resources.7

H2: If MLABC and ABC both observe a set of cost drivers that have correlations with
cost resources that include longitudinal and interaction relationships and if ABC observes
a unique cost driver and MLABC observes a unique cost driver, then MLABC and ABC
are equally accurate when the ABC-unique cost drivers account for a substantial
proportion of the variance in cost resources.

MLABC’s Potential for Active Experimentation

Passive learning, i.e. simply analyzing data as it comes, can improve and refine one’s

predictions. But learning might be accelerated if the data that comes is designed to be optimally

informative, especially about those things one knows the least about. MLABC offers an

opportunity to increase the rate at which the costing system learns, specifically through

exogenous shocks to those least-understood cost drivers to make the new data richer and more

7 H2 uses the word "substantial," which is an ambiguous adjective. I address this immediately following H3.



informative. Many archival academic studies (which, like a costing system in practice, are trying

to extract underlying relationships from messy real-world data) take advantage of exogenous

shocks, i.e. events that are not correlated with the underlying relationships they are studying.

These exogenous shocks can provide a useful counterfactual for the data and widen its range.

Once the effect of the shock is accounted for, this can mathematically clarify the relationship of

interest.8

My third hypothesis suggests that MLABC can identify which cost drivers it knows the

least about and then use this information to suggest exogenous cost driver shocks that could, in

turn, make future data sets more informative to the MLABC system. This is active

experimentation, and it could improve the MLABC system’s accuracy more quickly than

through passive learning. In this hypothesis, I am no longer comparing MLABC to ABC directly,

but rather I am comparing an MLABC system with active experimentation to an MLABC system

without active experimentation.

H3: If an MLABC system can actively experiment with cost drivers in its data set, it will
be more accurate than another MLABC system that cannot actively experiment.

H3, like H2, is phrased somewhat ambiguously. This is intentional. Labro (2015),

speaking of numerical experiment research, highlights the importance of not losing the “forest

for the trees” (101). There are several levers that change the shape of the decision space within

my numerical experiments, and it would be easy to get overly fixated on one or more of those

levers’ role within the simulated environment. This paper’s main contribution comes from

examining if theoretical space exists for MLABC given relatively reasonable assumptions.

8 By "exogenous shock", I mean an MLABC-driven, top-down directive for more (or less) cost driver consumption in the target period.



It should also be noted that numerical experiments, by nature, are “highly iterative”

(Leitner and Wall 2015, 111). I test both H2 and H3 by iteratively ratcheting up a variable of

interest within my numerical experiment model. This ratcheting is done with the objective of

finding either a non-significant (for H2) or significant (for H3) difference between two costing

systems. I do not test or provide all costing system results for all levels at which they could be

ratcheted, since that is not the point of these hypotheses and doing so would likely give undue

salience to the simulated decision space.

4. Method

Data Sets and Experimental Design: Numerical Experiments 1-3

To complete the three numerical experiments that test the three hypotheses from Section

3, I simulate three data sets: one for each numerical experiment. The data sets for Numerical

Experiments 1 and 2 include 1,000 observations, while the data set for Numerical Experiment 3

includes 2,000 observations (as described below). I base my data simulation process on my

Section 2 discussion and the matrix algebra that summarizes how ABC approaches cost drivers

and cost resources. Equation 5, below, is a modified version of Equation 4 from Section 2.

$[PW]_{n \times m} * [ACDC]_{m \times 1} + [Ε]_{n \times 1} = [OHA]_{n \times 1}$   [5]

The PW matrix, which I manually create, is constant for all observations in a data set.

The three PW matrices are non-uniform, allowing richness in the cost-driver-to-cost-resource

correlations. I generally follow the pattern of each cost resource being predominantly but not

exclusively driven by a different cost driver. The Ε matrix adds noise to the actual cost resources

incurred and makes the data sets more realistic. Each element in the Ε matrix is the absolute

value of a random draw from a normal distribution (mean = 0, standard deviation = 0.5). The



elements of the ACDC matrix generally start from a randomly drawn number, αm, from a

uniform distribution (0 ≤ αm < 1). This is supposed to represent actual cost driver consumption.

Numerical Experiment 1 uses three cost drivers that drive how much of three cost

resources each cost object incurs. This numerical experiment has a 2 × 1 design with the

exogenously manipulated factor being which costing system I use to analyze the data and

estimate the overhead cost of cost objects: ABC or MLABC. In accordance with H1, the first

cost driver (i.e. the first element in the ACDC matrix) has a longitudinal correlation with cost

resources. To accomplish this, I re-calculate the α1 from this element of the matrix to weight it

with the immediately prior observation, i.e. α1,t-1 (the first observation in the dataset simply

draws a random αt-1 from the same uniform distribution as αm, between 0 and 1).

$0.5\,\alpha_{1,t-1} + 0.5\,\alpha_{1,t} = \alpha_{1,t}$   [6]

The next two elements in the ACDC matrix retain their initial draws, i.e. α2 and α3. The

fourth element of the ACDC matrix is the product of α2, α3, and -1. This is the interaction term.9

As one cost driver is incurred (α2 or α3), the cost resources incurred per unit of the other cost

driver decrease. The interaction term is left unweighted. The ACDC matrix is a 4 × 1 matrix

(i.e. m = 4) to account for the interaction term’s effect on cost resources incurred. The three cost

resources make n = 3.
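A minimal sketch of this data-generating process is below. The PW values are hypothetical, and I assume equal 0.5 weights in Equation 6 with the lag applied to the prior observation's raw draw:

```python
import numpy as np

rng = np.random.default_rng()
n, n_obs = 3, 1_000  # 3 cost resources, 1,000 observations

# Hypothetical PW matrix (n x 4): each cost resource predominantly but not
# exclusively driven by a different cost driver; the 4th column weights
# the interaction term
pw = np.array([[0.6, 0.2, 0.1, 0.1],
               [0.2, 0.6, 0.1, 0.1],
               [0.1, 0.2, 0.6, 0.1]])

acdc_all, oha_all = [], []
a1_prev = rng.uniform()  # random alpha_{1,t-1} for the first observation
for _ in range(n_obs):
    a1, a2, a3 = rng.uniform(size=3)
    a1_weighted = 0.5 * a1_prev + 0.5 * a1  # Equation 6 (weights assumed)
    acdc = np.array([a1_weighted, a2, a3, -1 * a2 * a3])  # 4th element: interaction
    epsilon = np.abs(rng.normal(0, 0.5, size=n))          # noise, |N(0, 0.5)|
    oha_all.append(pw @ acdc + epsilon)                   # Equation 5
    acdc_all.append(acdc)
    a1_prev = a1
```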

Similar to Numerical Experiment 1, Numerical Experiment 2 has a 2 × 1 design that

compares the two costing systems of interest, ABC and MLABC. The data set for Numerical

Experiment 2 is similar to the data set for Numerical Experiment 1, but I add two more cost

drivers, i.e. m = 6. One of the new cost drivers is only observable by ABC, and one is only

observable by MLABC. These two new cost drivers simply keep their initial αm draws in the

9 I have arbitrarily chosen a negative interaction. It could also be the case that increases in one cost driver will increase the slope of another cost driver's effect on overhead costs.



ACDC matrix (i.e. α4 and α5). To test H2, I vary how much weight ABC’s unique cost driver has

in driving actual overhead costs. ABC’s unique cost driver has a weight of w, while all other cost

driver weights are generally patterned after the PW matrix weights from Numerical Experiment

1, but now each is multiplied by 1 – w. My objective in this experiment (as part of testing H2) is

to increase w until I find a w at which ABC and MLABC produce similarly accurate results.

Numerical Experiment 3 uses the same PW matrix as Numerical Experiment 2, with w

equal to whatever level achieves parity between ABC and MLABC in Numerical Experiment 2,

i.e. w*. Although Numerical Experiment 3 uses a 2 × 1 design like the first two numerical

experiments, the manipulated factor only compares performance of two different types of

MLABC, i.e. the control condition and the learning condition. For the control condition,

MLABC analyzes a data set that is formed using procedures like those of Numerical Experiment

2. The only difference being that it has double the observations: 2,000 observations. In the

learning condition, however, I train an MLABC neural network on the first 1,000 observations of

the data set (which are simulated following Numerical Experiment 2’s procedures) and then

determine which cost driver the neural network understands the least. To determine which cost

driver is least understood, I run a multiple linear regression with all cost drivers visible to

MLABC as predictors and a squared error term of the neural network’s estimates as the

dependent variable.10 The largest coefficient in this regression is the cost driver that is most

correlated with MLABC errors in the first 1,000 observations, thus MLABC knows the least

about it.

This least-understood cost driver is then exogenously shocked during the simulation of

the remaining 1,000 observations. This “shock” occurs before cost resources for that observation

10 The "squared error term" is the square of the difference between a cost object's true cost and the MLABC estimate of that cost object's cost.



are computed. For half of the remaining 1,000 observations the least-understood cost driver is

multiplied by 1 – s and for the other half, the least-understood cost driver is multiplied by 1 + s

(where s is a number between 0 and 1 that I manipulate). I test H3 by statistically testing whether

these shocks make the data set significantly more informative. For consistency, I use

observations that are not exogenously shocked when I form dependent variables in Numerical

Experiment 3 (see later sub-section on dependent variables). This design ensures that differences

between the control and learning conditions are not simply due to the learning condition being

tested on data that is altered or mean-shifted to a place within the decision space where stronger

differentials can be observed. The two conditions train on the same number of observations and

are tested on observations that are simulated the same way. The learning condition simply has a

chance to isolate a least-understood cost driver and widen the range of cost driver consumption

over which it observes that cost driver.
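A sketch of this procedure follows. I use sklearn's LinearRegression here as a stand-in for the multiple linear regression described above, and the placeholder arrays stand in for the first 1,000 observations and the trained network's squared errors:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng()

# Placeholders: cost driver consumption visible to MLABC for the first
# 1,000 observations, and the squared error of the trained network's
# cost estimates on those observations
X_train = rng.uniform(size=(1_000, 5))
sq_err = rng.uniform(size=1_000)

# The driver with the largest coefficient is most correlated with the
# network's errors, i.e. the least-understood cost driver
reg = LinearRegression().fit(X_train, sq_err)
least_understood = int(np.argmax(reg.coef_))

# Exogenously shock that driver in the next 1,000 simulated observations:
# half multiplied by (1 - s), half by (1 + s)
s = 0.4  # one of the tested shock magnitudes
X_next = rng.uniform(size=(1_000, 5))
shocks = np.where(np.arange(1_000) % 2 == 0, 1 - s, 1 + s)
X_next[:, least_understood] *= shocks
```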

ABC Proxy and Job-Order Costing Proxy

For Numerical Experiment 1 and Numerical Experiment 2, I create a proxy for ABC

based on the general procedures of ABC, with some modification. As discussed in Section 3, the

most important quantitative contribution of ABC is estimating the PW matrix. ABC limits the

PW matrix to linear, current-period, main effect correlations. To proxy for ABC, then, I conduct

a series of ordinary least squares multiple linear regressions, conducting one such regression per

cost resource and using all ABC-observable cost drivers as predictors in each regression. I

constrain the intercept to zero, since there are no fixed costs in my simulation. I restrict the

regression data to the first 90% of observations in the data set, so the regression’s predictions of



the PW matrix are not informed by the last 10% of the data set, which is the portion of the data

set I use to test the accuracy of costing systems in my hypotheses.

This proxy for ABC retains the key characteristic of ABC as it pertains to this study:

examining available evidence in search of correlations between cost drivers and cost resources.

To further test this proxy, I also create a proxy for job-order costing in the Numerical

Experiment 1 environment, and I test whether my ABC proxy outperforms my job-order costing

proxy, as in practice. The job-order costing proxy simply takes the average total overhead costs

(i.e. the sum of all cost resources) and divides it by the average total for the first cost driver in the

ACDC matrix (i.e. α1,t). I test the accuracy of this job-order costing system, and compare its

accuracy to that of ABC, in Section 5.
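The following sketch shows both proxies under these rules, with placeholder arrays standing in for the simulated data. The zero-intercept OLS is implemented via numpy's least squares on the driver columns without a constant term:

```python
import numpy as np

rng = np.random.default_rng()
drivers = rng.uniform(size=(1_000, 3))  # ABC-observable cost driver consumption
true_pw = rng.uniform(size=(3, 3))      # placeholder "true" weights
resources = drivers @ true_pw.T         # placeholder cost resource amounts

# ABC proxy: one zero-intercept OLS regression per cost resource, fit on
# the first 90% of observations; the stacked coefficients estimate PW
split = int(0.9 * len(drivers))
pw_hat = np.linalg.lstsq(drivers[:split], resources[:split], rcond=None)[0].T

# Estimated overhead for the held-out 10% (Equation 4 with estimated PW)
abc_estimates = drivers[split:] @ pw_hat.T

# Job-order costing proxy: average total overhead divided by the average
# of the first cost driver, applied as a single plantwide rate
jo_rate = resources[:split].sum(axis=1).mean() / drivers[:split, 0].mean()
jo_estimates = jo_rate * drivers[split:, 0]
```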

MLABC: Neural Network Design

To create the MLABC neural network, I use Python (version 3.7), importing the numpy

and keras modules. I form a deep neural network (with three hidden layers) using the keras

sequential model and dense layers. I find slightly more accurate results when I use wider hidden layers (I expand the hidden layers to have twice as many nodes as the input layer).

The output layer has three nodes, corresponding to three cost resources being allocated to cost

objects (i.e. the OHA matrix introduced in Section 2). Each node uses a linear activation

function, meaning it simply passes along the sum of all the inputs it receives. This activation

function fits with the objective of this neural network: estimation (rather than classification).

For each numerical experiment, I train the neural network on the first 90% of the data set.

Then I test its accuracy on the last 10% of the data set, which the neural network has not

formally trained on. I use a batch size of 100, 50 epochs, and the stochastic gradient descent



optimizer (Amari 1993). In short, I use relatively standard hyperparameters for the neural

network so it will backpropagate and learn. To facilitate MLABC’s ability to interpret

longitudinal correlations and interactions, I input the prior period’s α1,t-1 and the interaction term

as separate input nodes.
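A minimal keras sketch of this network is below. The ReLU activations in the hidden layers and the mean squared error loss shown here are illustrative choices rather than prescriptions; the text above specifies a linear activation only for the output layer:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

rng = np.random.default_rng()
n_inputs = 5   # cost drivers visible to MLABC, including the lagged
               # alpha_{1,t-1} and the interaction term as separate nodes
n_outputs = 3  # cost resources allocated to the cost object (OHA)

# Placeholder data standing in for the simulated data set
X = rng.uniform(size=(1_000, n_inputs))
Y = rng.uniform(size=(1_000, n_outputs))

model = Sequential([
    Dense(2 * n_inputs, activation="relu", input_dim=n_inputs),  # hidden 1
    Dense(2 * n_inputs, activation="relu"),                      # hidden 2
    Dense(2 * n_inputs, activation="relu"),                      # hidden 3
    Dense(n_outputs, activation="linear"),  # linear output: estimation
])
model.compile(optimizer="sgd", loss="mse")  # stochastic gradient descent

# Train on the first 90% of the data; hold out the last 10% for testing
model.fit(X[:900], Y[:900], batch_size=100, epochs=50, verbose=0)
oha_estimates = model.predict(X[900:])
```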

For Numerical Experiment 3, I also add columns to the learning condition’s exogenously

shocked data set (using Excel) before feeding that data set to the MLABC neural network. This

additional data includes (1) the amount of that data set’s shock condition (since I ratchet up the

shock condition to test various levels of exogenous shocks) and (2) whether this shock was

positive or negative. Feeding this information into the MLABC neural network seems the most

straightforward way for the neural network to assimilate the information gained from

exogenously shocking the data.

Measuring Costing System Accuracy

I test costing system accuracy by first determining if the true cost of a cost object exceeds

a customer’s price ceiling, such as in a special order decision. If the true cost exceeds the

customer's ceiling, the firm should reject the cost object. I vary that price ceiling in $0.25

increments from $0.25 to $4.00. Each costing system then estimates the cost of these same cost

objects and suggests whether the firm should accept the special order based on the estimated cost

and the customer’s ceiling price. I tally how many times, by ceiling price, each costing system

answers the special order decision question correctly. That is my measure of costing system

accuracy throughout Section 5.
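A sketch of this accuracy tally is below; true_costs and estimated_costs are placeholders for the held-out observations' true and estimated costs:

```python
import numpy as np

rng = np.random.default_rng()
true_costs = rng.uniform(0, 4, size=100)                      # placeholder true costs
estimated_costs = true_costs + rng.normal(0, 0.3, size=100)   # placeholder estimates

ceilings = np.arange(0.25, 4.01, 0.25)  # $0.25 to $4.00 in $0.25 increments

# A decision is correct when the costing system's accept/reject answer
# matches the answer implied by the true cost
accuracy_by_ceiling = [
    int(np.sum((true_costs <= c) == (estimated_costs <= c))) for c in ceilings
]
```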

5. Results



ABC Proxy Validation

My first result is from a validation experiment to test whether my proxy for ABC is, as in practice, more accurate than a job-order costing system. I use the data set for Numerical

Experiment 1 (1,000 observations) and apply the proxy ABC and job-order costing systems as

described in Section 4. These costing systems gather data from the first 900 observations of the

data set and estimate the cost of the last 100 observations. By “gather data” I mean that the ABC

system runs three multiple linear regressions on those first 900 observations while the job-order

costing system averages overhead and cost driver values over those first 900 observations. In

Table 1 Panel A, I show descriptive statistics for this validation experiment.

As expected, the ABC system is as accurate as or more accurate than the job-order costing

system at all price ceilings (i.e. $0.25 to $4.00). The price ceilings are on the left-hand side of Table 1

Panel B, and the data in the other columns represents how many times each costing system

correctly answers the special order question for the last 100 observations of the data set. A paired

t-test shows a significant difference (p < 0.001 one-tailed) between the two results columns.

Notably, ABC’s accuracy advantage is strongest when the price ceiling is relatively low.

[Table 1]
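For reference, the paired t-test here compares the two columns of per-ceiling counts. A minimal scipy sketch, using hypothetical counts in place of Table 1 Panel B's actual figures, is:

```python
import numpy as np
from scipy import stats

# Hypothetical correct-answer counts per price ceiling ($0.25 to $4.00);
# the actual counts appear in Table 1 Panel B
abc_counts = np.array([96, 95, 94, 96, 97, 96, 95, 94,
                       96, 97, 95, 96, 98, 97, 96, 95])
jo_counts = np.array([70, 75, 80, 84, 88, 90, 91, 92,
                      93, 94, 94, 95, 96, 96, 95, 95])

t_stat, p_two_tailed = stats.ttest_rel(abc_counts, jo_counts)
p_one_tailed = p_two_tailed / 2  # directional prediction: ABC more accurate
```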

Numerical Experiment 1 Results

In H1, I hypothesize that MLABC can extract (1) more information from longitudinal

correlations between cost drivers and cost resources and (2) more information from interactions

between cost drivers’ correlations with cost resources. Numerical Experiment 1 tests this by

adapting an MLABC neural network and comparing its accuracy to the accuracy of an ABC

proxy. I use the same data set, ABC estimates, and ABC accuracy results shown in Table 1. For



convenience these are re-printed in both panels of Table 2. Because of the semi-stochastic nature

of neural networks, I create and train three MLABC neural networks, gather accuracy data from

each of these three networks, and then average their accuracy data. Each neural network is

created and trained using the parameters and procedures described in Section 4. Throughout my

tests of H1, H2, and H3, I simply refer to the average of these three MLABC results as a single

MLABC result. I show results from these neural networks in Table 2 Panel B.

[Table 2]

Supporting H1, I find a significant difference (p < 0.001 one-tailed, paired t-test) between

the accuracy of ABC and the average accuracy of the three MLABC neural networks I train.

MLABC does indeed exhibit an advantage when the data has longitudinal correlations and

interactions. This type of setting, where all cost drivers have characteristics not easily accounted

for by ABC, is an ideal setting for MLABC.

Numerical Experiment 2 Results

H2 predicts that ABC-unique cost drivers will need to account for a substantial amount of

the overall variance in cost resources if ABC is to be as accurate as MLABC. I provide

descriptive statistics for my data sets testing this hypothesis in Table 3 Panel A. I start testing H2

with a data set that gives the ABC-unique cost driver an approximately equal weight to the

MLABC cost driver, i.e. the ABC-unique cost driver uses w = 0.09091 (since the MLABC-

unique cost driver uses (1 – w) * 0.1 to determine its weight). This does not create parity

between the accuracy of ABC and MLABC: MLABC is significantly more accurate than ABC (p

< 0.001 one-tailed, paired t-test) at this weight of ABC’s unique cost driver. These results (as

well as results from the other two w-values I test) are shown in Table 3 Panel B.



[Table 3]

I create a new data set with w set to 0.5, increasing the weight of the ABC-unique cost

driver in determining cost resources and thus increasing the percentage of overall cost resource

variance explained by this cost driver. These results still suggest MLABC is significantly more

accurate than ABC (p = 0.002 one-tailed, paired t-test). Finally, I create a new data set with w set

to 0.8. This implies that the weights assigned to MLABC’s unique cost driver and to the cost

drivers common to both costing systems amount to at most 20% of the variance in cost

resources. This result produces a non-significant t-test (p = 0.36 one-tailed, paired t-test),

suggesting parity between the accuracy of ABC and MLABC. I interpret this finding as

supporting H2, which used admittedly vague wording about ABC’s unique cost driver explaining

“substantial” variance. I had to increase the weight of the ABC-unique cost driver to be more

than eight times its initial explanatory power.

Similar to Numerical Experiment 1, the data sets I create for Numerical Experiment 2 are

inherently designed to favor MLABC. They include (1) longitudinal correlations, (2)

interactions, and (3) an MLABC-unique cost driver. This is the ideal scenario for MLABC to

thrive and provide results comparable to ABC at lower cost. These results do not necessarily

suggest that MLABC will have this level of superiority over ABC if one or more of these

MLABC-favoring factors were removed.

Numerical Experiment 3 Results

To test H3, I need to test for significant differences in accuracy between two conditions:

the control condition and the learning condition. The control condition’s MLABC neural

network is built using Numerical Experiment 2’s rules, w = 0.8, but applied to 2,000 total



observations. For the learning condition, I generally also use Numerical Experiment 2’s rules

with the exceptions described in Section 4. First I train an MLABC neural network on 1,000

observations of data and find that the network’s least-understood cost driver is the MLABC-

unique cost driver added in Numerical Experiment 2 (i.e. this cost driver has the largest and only

statistically significant coefficient predicting squared error of the MLABC neural network). Then

I test H3 with data sets incorporating various levels of exogenous shock within the second 1,000

observations: s = {0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.00}. To be clear, I create a separate learning

condition data set for each s level (see Table 4 Panel B). I show descriptive statistics for these

data sets in Table 4 Panel A.

I test the accuracy of these learning condition MLABC neural networks by comparing the

accuracy of their estimates of overhead cost and the accuracy of the control condition MLABC

network’s estimate of overhead cost. All accuracy figures are based on the final 200 observations

of the data set.11 As with my tests of H1 and H2, I use price ceilings from $0.25 through $4.00 in

$0.25 increments. Results from the control condition and from the learning condition at various

levels of s are shown in Table 4 Panel B, along with indications of significance at various

confidence level thresholds (all p-values are from one-tailed paired t-tests).

[Table 4]

In this main test of H3 (i.e. the “Average (all)” row of Table 4 Panel B), I fail to find

support for H3 (all p-values > 0.18 one-tailed). However, upon examining the individual rows of

Table 4 Panel B, I see that this result is largely driven by poor performance when ceiling prices

are between $3.25 and $4.00. When I ignore these price ceiling rows, almost every level of s

11 Both MLABC conditions are tested on the last 200 observations of the 1,000 observations that are not exogenously shocked. This helps ensure any differences in the two conditions' accuracy are not due to, for example, the exogenous shocks shifting the mean of a cost driver.



produces a result that is statistically significant (i.e. p < 0.05), supporting H3. I show this in the

bottom row of Table 4 Panel B. With this restriction on price ceiling, the average of all learning

condition columns (not tabulated) is also significantly different from the control condition

(difference = 3.20, p = 0.03), supporting H3 overall.

This latter result provides limited support for H3. It seems that active experimentation

may skew the MLABC neural network’s understanding of data at extremes. One might argue

that high price ceilings are particularly important because of what they may reflect about the

firm’s customers and the market in general. Or, on the other hand, one might counter-argue that

higher price ceilings are relatively uncommon. The average true price in the third numerical

experiment’s learning condition data sets is $1.94, and the standard deviation is $1.06. This

means the price ceilings that cause active experimentation trouble (from $3.25 to $4.00) all lie between one and two standard deviations above the mean ((3.25 - 1.94)/1.06 ≈ 1.24 and (4.00 - 1.94)/1.06 ≈ 1.94). Given a normal distribution, values in that band occur at most about 13.6% of the time. I

discuss potential future refinements to active experimentation in Section 6.

6. Discussion

In this paper, I use a design science approach to conceptually test a unique variation of

ABC, i.e. MLABC. I do this with three numerical experiments built around three hypotheses about MLABC's potential to provide ABC-like accuracy at a lower cost. Each

numerical experiment helps establish the existence of theoretical space for this novel

combination of ABC and machine learning. The overall result suggests there is some theoretical

space, with important details that could be improved in practice and/or with additional study.



This paper’s results provide some insight into the practical scenarios in which MLABC is most likely to thrive. MLABC is most likely to equal or exceed the accuracy of

ABC (while costing less than ABC) when (1) the ABC-unique cost drivers account for relatively

little of the variance in cost resources, (2) MLABC-unique cost drivers explain a relatively large

amount of the variance in cost resources, and (3) the cost drivers common to ABC and MLABC

are noticeably driven by longitudinal correlations and interactions. The first of these points

means MLABC is less likely to succeed in an environment with poorly understood production

technology, as this seems like an environment where important cost drivers may be unknown. The

second point may suggest that MLABC works best at firms that rely on one or more of the

principal drivers that are MLABC-unique. The third point above suggests MLABC will do best

in firms where overhead is strongly affected by quality costs or similar long-term oriented or

interacting costs.

Some MLABC-unique cost drivers might incur more cost than the examples in Section 3

while still being a profitable addition to an MLABC system. For example, neuroimaging data

could amount to a complex array of activation and networking inputs that, theoretically, could be

fed into MLABC but not into ABC. The costs of acquiring this data are generally declining (for example, functional near-infrared spectroscopy is relatively affordable and may still have enough granularity to show correlates with cost drivers; see Cui et al. 2011).



Neuroimaging data could measure the firm’s human capital,12 workforce sentiment,13 cause-and-

effect reasoning about the firm’s value streams,14 creativity,15 and loyalty.16

There are some important limitations to this paper. First, my chosen approach (a design

science approach that is focused on a simulated set of artifacts) does not use data from an actual

ABC or MLABC system in practice. Applying this work to practice, like similar simulated or

theoretical works, might face unexpected problems. The assumptions I use might not hold or

practical reality might not match up to the numerical values I have assigned to various aspects of

the simulation. This is a limitation that comes with the territory, as it were, and cannot be fully

avoided. I have attempted to mitigate it by (1) using assumptions and analysis that I believe are

relatively reasonable and (2) not fixating on the precise numerical results of my simulation so as

not to imply they are exactly what would happen in practice. The paper demonstrates the

theoretical space for MLABC, but that theoretical space will inevitably require some translation

before being applied to practice.

12
The neurosemantic technique called brain-reading allows researchers to identify, from training data, the neural fingerprint of a given semantic representation in a person’s brain (Bauer and Just 2019a; 2019b; Mason and Just 2016; see also Damarla
and Just 2013; Pereira et al. 2009). Importantly, the neural fingerprint of these representations might change
predictably over time as experience and knowledge grow (Mason and Just 2015; see also Yoo et al. 2012). In short,
the firm’s proportion of novice workers and expert workers might be neurally measurable.
13
Kassam et al. (2013) use a brain-reading technique to identify the emotion participants are feeling at a given moment. More
negative aggregate emotions might be associated with higher employee turnover costs or longer development times
(and thus higher overhead costs).
14
Satpute et al. (2005) find identifiable neural markers for causality. This could be important, for example, because
employees’ understanding that lower quality now causes poorer customer satisfaction and poorer financial results
later could be correlated with overhead costs.
15
The brain uses two parallel language processing systems: fine semantic processing and coarse semantic processing
(see Knox 2020). Fine semantic processing is intended for consistent and accurate searches for the encoded meaning
of a symbol or concept. Coarse semantic processing is intended for more divergent searches by which a symbol or
concept can be re-interpreted in a novel or creative way (Beeman and Chiarello 1998). These two semantic
processing systems may be neurally distinguishable (see Abraham 2014; Mason and Just 2004).
16
Strombach et al. (2015), among other theory of mind researchers, find evidence that social distance is neurally
detectable. This represents the fundamental difference between “us” versus “them” thinking. Ideally, a firm would
prefer its employees strongly identify with the firm because that might drive pro-social behavior that benefits the
firm (see Koys 2001; Foote and Li-Ping Tang 2008; see also Jha and Jha 2010; Organ 1990).



Furthermore, the MLABC system examined in this paper is novel, but ABC is not novel,

having been around for about forty years. Nor is machine learning novel. Machine learning

appeared in the middle of the twentieth century (although its roots go back to Bayes’ theorem in

1763), was then developed considerably at the end of the twentieth century as computing power

increased, and has since exploded into practical usage in the last two decades. This long history

of practical usage for both of MLABC’s precedents helps mitigate the risk that my simulation

takes place in a theoretical space so far removed from practice that it would be patently

unrealistic. Instead, MLABC is more like a marriage between two technologies that are relatively

well-tested in practice. Although there might be some new practical challenges that arise with the

implementation of MLABC, its parent technologies have already worked out some of the kinks

of practical implementation.

One of the limitations of a machine learning approach can be its need for large amounts

of training data. The backpropagation and updating process of a neural network can require a

significant amount of training data to achieve reasonable accuracy, especially if the modeling task is

complex. However, the task of estimating the PW matrix is not a complex one, especially

compared to tasks such as image classification, one of the more common applications for neural

networks. The first two numerical experiments demonstrate strong evidence of theoretical space

for MLABC using 1,000 observations of cost drivers and cost resources. If these data sets constitute weekly accounting reports from nine sub-units of the firm (e.g., nine departments, nine regions, or nine production lines), then these MLABC systems could have been trained on roughly two years' worth of data (1,000 ÷ 9 ≈ 111 weekly observations per sub-unit). That does not seem like an unrealistic amount of training data

for many firms that might be interested in implementing MLABC. Given the strength of the



results from Numerical Experiment 1, especially, it also seems likely that MLABC could be

economically feasible with even less data.

One important issue with an MLABC system that could warrant further research is

information security. Ensuring information security is a common topic in information systems

research because electronic data is invisible, can be transferred almost instantaneously, and faces

an ever-evolving aggressive threat from those who seek access illegitimately. An MLABC

system would be no different. It should fall under the cybersecurity stewardship of a firm’s chief

information officer and cybersecurity team. Data encryption, access control, and proactive

monitoring and countermeasures would likely be necessary to protect the firm’s all-important

information. That said, the MLABC system trains on data that should be already available in the

financial system: cost resource and cost driver numbers. Therefore, this data is already available

within the framework of the firm’s accounting information system and should already be

protected by the firm’s cybersecurity practices. The primary novel addition MLABC makes to

this data is discerning the PW matrix, which theoretically could provide a competitor with more

information on the strengths and weaknesses of the firm’s production technology than before,

assuming that competitor understood how to advantageously analyze that matrix. At least a

portion of the MLABC system would likely require treatment, cybersecurity-wise, like an

executive decision system that contains strategic data, with stringent limitations as to who can

access it.

The exact specifications of an MLABC neural network are worth exploring and testing in

future practice-oriented and research-oriented efforts. Specifically, neural networks are prone to overfitting when the training data is too homogeneous or the network has too much capacity for the task. The neural network can become so attuned to the training data that it keys in on idiosyncratic



characteristics of the training data that do not translate to the general population, leading to

erroneous predictions when new data is considered. Several techniques can help with this, such

as dropout, controlling the backpropagation learning rate, additional training data, and changes to

the neural network’s overall design (some network designs could be more prone to overfitting).

Also, I did not have to worry about standardizing or normalizing my data, since my data sets

were simulated, and thus I could force the data to be on approximately the same scale from

inception. Practical implementation could require the development of additional techniques to

address standardizing and/or normalizing the data while still getting output that is meaningful for

decision making.
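
As a minimal sketch of these countermeasures, assuming a TensorFlow/Keras implementation (this paper does not specify a framework, and the layer sizes, dropout rates, and learning rate below are illustrative choices rather than my experiments' specification), dropout, a controlled learning rate, input standardization, and a validation split might look like this:

```python
import numpy as np
import tensorflow as tf

# Illustrative stand-in data: five cost drivers predicting total overhead.
rng = np.random.default_rng(0)
X = rng.random((1000, 5))
y = X @ rng.random(5) + rng.normal(0, 0.1, 1000)

# Standardize inputs; my simulated data was built on a common scale, but
# real accounting data generally would not be.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(5,)),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dropout(0.2),   # dropout to curb overfitting
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),       # estimated overhead cost
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # controlled rate
              loss='mse')

# A held-out validation split helps reveal overfitting during training.
model.fit(X_std, y, epochs=50, validation_split=0.2, verbose=0)
```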

Although I technically find support for H3, I must limit the range of price ceilings to do so. While the price ceilings I exclude are not the most frequently occurring ones, assuming a normal distribution, they would not be rare either. I leave it to future researchers to examine whether

active experimentation can be realized in a more profitable way. It may be that more continuous

exogenous shocks that vary in their size or shocks to more than one cost driver would help active

experimentation be more effective at the edges. Or perhaps changing the technical specifications

of how the neural network examines shocked data could allow it to make better use of that data.

Data transformations and alternative statistical techniques, such as logarithmic transformations,

quadratic functions, or spline fitting, might also help maintain accuracy throughout the relevant

window of price ceilings or similar analogous dependent variables.



References

Abraham, A. (2014). Creative thinking as orchestrated by semantic processing vs. cognitive
control brain networks. Frontiers in Human Neuroscience, 8, 95.

Amari, S. I. (1993). Backpropagation and stochastic gradient descent method. Neurocomputing,
5(4-5), 185-196.

Appelbaum, D., & Nehmer, R. A. (2017). Using drones in internal and external audits: An
exploratory framework. Journal of Emerging Technologies in Accounting, 14(1), 99-113.

Babad, Y. M., & Balachandran, B. V. (1993). Cost driver optimization in activity-based costing.
The Accounting Review, 68(3), 563-575.

Balakrishnan, R., & Penno, M. (2014). Causality in the context of analytical models and
numerical experiments. Accounting, Organizations and Society, 39(7), 531-534.

Bauer, A. J., & Just, M. A. (2019a). Brain reading and behavioral methods provide
complementary perspectives on the representation of concepts. NeuroImage, 186, 794-805.

Bauer, A. J., & Just, M. A. (2019b). Neural representations of concept knowledge. In de
Zubicaray & Schiller (Eds.), The Oxford Handbook of Neurolinguistics.

Beeman, M. E., & Chiarello, C. E. (1998). Right hemisphere language comprehension:
Perspectives from cognitive neuroscience. Lawrence Erlbaum Associates Inc., Mahwah, NJ.

Cagwin, D., & Bouwman, M. J. (2002). The association between activity-based costing and
improvement in financial performance. Management Accounting Research, 13(1), 1-39.

Chapelle, O., Haffner, P., & Vapnik, V. N. (1999). Support vector machines for histogram-based
image classification. IEEE Transactions on Neural Networks, 10(5), 1055-1064.

Chou, C. C., Chang, C. J., Chin, C. L., & Chiang, W. T. (2018). Measuring the consistency of
quantitative and qualitative information in financial reports: A design science approach.
Journal of Emerging Technologies in Accounting, 15(2), 93-109.

Cooper, R., & Kaplan, R. S. (1991). Profit priorities from activity-based costing. Harvard
Business Review, 69(3), 130-135.

Cooper, R., & Kaplan, R. S. (1988). Measure costs right: make the right decisions. Harvard
Business Review, 66(5), 96-103.

Cui, X., Bray, S., Bryant, D. M., Glover, G. H., & Reiss, A. L. (2011). A quantitative
comparison of NIRS and fMRI across multiple cognitive tasks. NeuroImage, 54(4), 2808-
2821.



Damarla, S. R., & Just, M. A. (2013). Decoding the representation of numerical values from
brain activation patterns. Human Brain Mapping, 34(10), 2624-2634.

Foote, D. A., & Li-Ping Tang, T. (2008). Job satisfaction and organizational citizenship behavior
(OCB): Does team commitment make a difference in self-directed teams? Management
Decision, 46(6), 933-947.

Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design science in information systems
research. MIS Quarterly, 28(1), 75-105.

Ittner, C. D., Larcker, D. F., & Randall, T. (1997). The activity-based cost hierarchy, production
policies and firm profitability. Journal of Management Accounting Research, 9, 143.

Jha, S., & Jha, S. (2010). Determinants of organizational citizenship behaviour: A review of
literature. Journal of Management & Public Policy, 1(2).

Kaplan, R. S., & Anderson, S. R. (2004). Tool kit: Time-driven activity-based costing. Harvard
Business Review, 82(11).

Kassam, K. S., Markey, A. R., Cherkassky, V. L., Loewenstein, G., & Just, M. A. (2013).
Identifying emotions on the basis of neural activation. PloS One, 8(6), e66032.

Kennedy, T., & Affleck-Graves, J. (2001). The impact of activity-based costing techniques on
firm performance. Journal of Management Accounting Research, 13(1), 19-45.

Kennett, D. L., Durler, M. G., & Downs, A. (2007). Activity-based costing in large US cities:
Costs & benefits. The Journal of Government Financial Management, 56(1), 20.

Koys, D. J. (2001). The effects of employee satisfaction, organizational citizenship behavior, and
turnover on organizational effectiveness: A unit‐level, longitudinal study. Personnel
Psychology, 54(1), 101-114.

Labro, E. (2015). Using simulation methods in accounting research. Journal of Management
Control, 26(2-3), 99-104.

Leitner, S., & Wall, F. (2015). Simulation-based research in management accounting and
control: an illustrative overview. Journal of Management Control, 26(2-3), 105-129.

Lombardi, D. R., & Dull, R. B. (2016). The development of AudEx: An audit data assessment
system. Journal of Emerging Technologies in Accounting, 13(1), 37-52.

Mason, R. A., & Just, M. A. (2004). How the brain processes causal inferences in text: A
theoretical account of generation and integration component processes utilizing both cerebral
hemispheres. Psychological Science, 15(1), 1-7.



Mason, R. A., & Just, M. A. (2015). Physics instruction induces changes in neural knowledge
representation during successive stages of learning. NeuroImage, 111, 36-48.

Mason, R. A., & Just, M. A. (2016). Neural representations of physics concepts. Psychological
Science, 27(6), 904-913.

Michie, D., Spiegelhalter, D. J., & Taylor, C. C. (Eds.). (1994). Machine learning, neural and
statistical classification. Ellis Horwood.

Muehlmann, B. W., Chiu, V., & Liu, Q. (2015). Emerging technologies research in accounting:
JETA's first decade. Journal of Emerging Technologies in Accounting, 12(1), 17-50.

O’Leary, D. E. (2009). The impact of Gartner’s maturity curve, adoption curve, strategic
technologies on information systems research, with applications to artificial intelligence,
ERP, BPM, and RFID. Journal of Emerging Technologies in Accounting, 6(1), 45-66.

Organ, D. W. (1990). The motivational basis of organizational citizenship behavior. Research in
Organizational Behavior, 12(1), 43-72.

Pereira, F., Mitchell, T., & Botvinick, M. (2009). Machine learning classifiers and fMRI: a
tutorial overview. NeuroImage, 45(1), S199-S209.

Satpute, A. B., Fenker, D. B., Waldmann, M. R., Tabibnia, G., Holyoak, K. J., & Lieberman, M.
D. (2005). An fMRI study of causal judgments. European Journal of Neuroscience, 22(5),
1233-1238.

Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61,
85-117.

Sedbrook, T. A. (2012). Modeling the REA enterprise ontology with a domain specific language.
Journal of Emerging Technologies in Accounting, 9(1), 47-70.

Shields, M. D. (1995). An empirical analysis of firms’ implementation experiences with activity-
based costing. Journal of Management Accounting Research, 7(1), 148-165.

Strombach, T., Weber, B., Hangebrauk, Z., Kenning, P., Karipidis, I. I., Tobler, P. N., &
Kalenscher, T. (2015). Social discounting involves modulation of neural value signals by
temporoparietal junction. Proceedings of the National Academy of Sciences, 112(5), 1619-
1624.

Yoo, J. J., Hinds, O., Ofen, N., Thompson, T. W., Whitfield-Gabrieli, S., Triantafyllou, C., &
Gabrieli, J. D. (2012). When the brain is prepared to learn: enhancing human learning using
real-time fMRI. NeuroImage, 59(1), 846-852.



Appendix: Example of First-Stage Allocation, Traditional and Matrix Method
Fact Pattern
Cost Resources: A= $1,000, B = $2,000, C = $3,000, D = $4,000, E = $5,000
Cost Drivers: I = 10 units, II = 20 units, III = 40 units, IV = 50 units
Percentage weights (check figure, each row sums to 100%):
I II III IV
A 10% 0% 90% 0%
B 30% 45% 0% 25%
C 30% 25% 0% 45%
D 20% 50% 0% 30%
E 30% 25% 20% 25%

Traditional ABC
Step 1: Form activity cost pools around cost drivers
Activity Cost Pool I (in thousands)
= ($1)(10%) + ($2)(30%) + ($3)(30%) + ($4)(20%) + ($5)(30%)
= $3.9
= $3,900

Activity Cost Pool II (in thousands)
= ($1)(0%) + ($2)(45%) + ($3)(25%) + ($4)(50%) + ($5)(25%)
= $4.9
= $4,900

Activity Cost Pool III (in thousands)
= ($1)(90%) + ($2)(0%) + ($3)(0%) + ($4)(0%) + ($5)(20%)
= $1.9
= $1,900

Activity Cost Pool IV (in thousands)
= ($1)(0%) + ($2)(25%) + ($3)(45%) + ($4)(30%) + ($5)(25%)
= $4.3
= $4,300

Step 2: Calculate activity rates from total activity cost pools and total cost drivers
Activity Rate I
= $3,900 / 10 units
= $390/unit

Activity Rate II
= $4,900 / 20 units
= $245/unit

Activity Rate III
= $1,900 / 40 units
= $47.50/unit

Activity Rate IV
= $4,300 / 50 units
= $86/unit

Matrix Algebra Approach to ABC

Step 1: Define matrices

TCR (total cost resources, placed on a diagonal so the product conforms):

      | $1,000      0      0      0      0 |
      |      0 $2,000      0      0      0 |
TCR = |      0      0 $3,000      0      0 |
      |      0      0      0 $4,000      0 |
      |      0      0      0      0 $5,000 |

PW (percentage weights; rows are cost resources A-E, columns are cost drivers I-IV):

     | 10%  0% 90%  0% |
     | 30% 45%  0% 25% |
PW = | 30% 25%  0% 45% |
     | 20% 50%  0% 30% |
     | 30% 25% 20% 25% |

TCD (reciprocals of total cost driver units, placed on a diagonal so the
multiplication divides each column by its driver’s total units):

      | 1/10    0    0    0 |
TCD = |    0 1/20    0    0 |
      |    0    0 1/40    0 |
      |    0    0    0 1/50 |

Step 2: Multiply matrices (see Equation 3; AR = TCR * PW * TCD)

TCR × PW =

| $100    $0      $900    $0     |
| $600    $900    $0      $500   |
| $900    $750    $0      $1,350 |
| $800    $2,000  $0      $1,200 |
| $1,500  $1,250  $1,000  $1,250 |

AR = (TCR × PW) × TCD =

| $10/unit   $0/unit      $22.50/unit  $0/unit  |
| $60/unit   $45/unit     $0/unit      $10/unit |
| $90/unit   $37.50/unit  $0/unit      $27/unit |
| $80/unit   $100/unit    $0/unit      $24/unit |
| $150/unit  $62.50/unit  $25/unit     $25/unit |

Note: columns in this matrix sum to the activity rates above (see Traditional ABC): $390, $245, $47.50, and $86 per unit.
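
For readers who prefer code, the following is a short sketch of the matrix method in NumPy (the choice of tool is mine, not the appendix’s); the inputs reproduce the fact pattern above, so the column sums should match the traditional ABC activity rates of $390, $245, $47.50, and $86 per unit:

```python
import numpy as np

TCR = np.diag([1000.0, 2000.0, 3000.0, 4000.0, 5000.0])  # cost resources A-E
PW = np.array([[0.10, 0.00, 0.90, 0.00],                  # percentage weights
               [0.30, 0.45, 0.00, 0.25],
               [0.30, 0.25, 0.00, 0.45],
               [0.20, 0.50, 0.00, 0.30],
               [0.30, 0.25, 0.20, 0.25]])
TCD = np.diag(1 / np.array([10.0, 20.0, 40.0, 50.0]))     # reciprocal driver units

AR = TCR @ PW @ TCD      # 5x4 matrix of per-unit rates by cost resource
print(AR.sum(axis=0))    # column sums: [390.  245.   47.5  86. ]
```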



Table 1: ABC Proxy Validation Experiment
Panel A: ABC Proxy Validation Experiment Data Set Descriptive Statistics
n Mean Median SD
Cost Driver 1 (longitudinal) 1,000 0.501 0.498 0.284
Cost Driver 2 (interaction) 1,000 0.509 0.511 0.288
Cost Driver 3 (interaction) 1,000 0.515 0.523 0.285
Cost Resource 1 1,000 0.638 0.610 0.377
Cost Resource 2 1,000 0.677 0.618 0.358
Cost Resource 3 1,000 0.649 0.583 0.342
Cost Resource Total 1,000 1.964 1.929 0.652
Job-order Estimates 100 2.092 2.354 1.173
ABC Estimates 100 2.261 2.187 0.812

Panel B: ABC Proxy Validation Experiment Results


Accuracy: correct decisions out of 100
(difference from job-order in parentheses)
Ceiling Prices:   Job-Order   ABC
$0.25 94 100 (6)
$0.50 87 99 (12)
$0.75 80 95 (15)
$1.00 76 91 (15)
$1.25 65 79 (14)
$1.50 62 71 (9)
$1.75 53 56 (3)
$2.00 47 51 (4)
$2.25 44 45 (1)
$2.50 49 50 (1)
$2.75 54 61 (7)
$3.00 69 75 (6)
$3.25 77 80 (3)
$3.50 89 91 (2)
$3.75 93 98 (5)
$4.00 100 100 (0)
Average 71.19 77.63 (6.44)***

* Significantly different from Job-Order at p = 0.10; one-tailed, paired t-test.


** Significantly different from Job-Order at p = 0.05; one-tailed, paired t-test.
*** Significantly different from Job-Order at p = 0.01; one-tailed, paired t-test.



Table 2: Numerical Experiment 1
Panel A: Numerical Experiment 1 Data Set Descriptive Statistics (including Table 1 repeats)
n Mean Median SD
Cost Driver 1 (longitudinal) 1,000 0.501 0.498 0.284
Cost Driver 2 (interaction) 1,000 0.509 0.511 0.288
Cost Driver 3 (interaction) 1,000 0.515 0.523 0.285
Cost Resource 1 1,000 0.638 0.610 0.377
Cost Resource 2 1,000 0.677 0.618 0.358
Cost Resource 3 1,000 0.649 0.583 0.342
Cost Resource Total 1,000 1.964 1.929 0.652
ABC Estimates 100 2.261 2.187 0.812
MLABC Estimates 100 1.998 2.014 0.298

Panel B: Numerical Experiment 1 Results


Accuracy: correct decisions out of 100
(difference from ABC in parentheses)
Ceiling Prices:   ABC   MLABC
$0.25 100 100 (0)
$0.50 99 99 (0)
$0.75 95 99 (4)
$1.00 91 96 (5)
$1.25 79 89 (10)
$1.50 71 82 (11)
$1.75 56 70.7 (14.7)
$2.00 51 68 (17)
$2.25 45 67.7 (22.7)
$2.50 50 82.3 (32.3)
$2.75 61 86 (25)
$3.00 75 94 (19)
$3.25 80 96 (16)
$3.50 91 99 (8)
$3.75 98 99 (1)
$4.00 100 100 (0)
Average 77.63 89.23 (11.6)***

* Significantly different from ABC at p = 0.10; one-tailed, paired t-test.


** Significantly different from ABC at p = 0.05; one-tailed, paired t-test.
*** Significantly different from ABC at p = 0.01; one-tailed, paired t-test.



Table 3: Numerical Experiment 2
Panel A: Numerical Experiment 2 Data Set Descriptive Statistics
Average statistics across w-values
n Mean Median SD
Cost Driver 1 (longitudinal) 1,000 0.506 0.503 0.290
Cost Driver 2 (interaction) 1,000 0.495 0.495 0.287
Cost Driver 3 (interaction) 1,000 0.507 0.506 0.290
Cost Driver 4 (MLABC-unique) 1,000 0.502 0.509 0.288
Cost Driver 5 (ABC-unique) 1,000 0.497 0.498 0.288
Cost Resource 1 1,000 0.645 0.623 0.403
Cost Resource 2 1,000 0.655 0.624 0.378
Cost Resource 3 1,000 0.641 0.606 0.372
Cost Resource Total 1,000 1.942 1.949 0.847
ABC Estimates 100 2.403 2.455 0.991
MLABC Estimates 100 1.936 1.984 0.389

Panel B: Numerical Experiment 2 Results


Accuracy: correct decisions out of 100 (difference from ABC in parentheses)
Ceiling Prices:   ABC (w=0.0909)   MLABC (w=0.0909)   ABC (w=0.50)   MLABC (w=0.50)   ABC (w=0.80)   MLABC (w=0.80)
$0.25 100 100 (0) 100 100 (0) 88 88 (0)
$0.50 99 100 (1) 99 100 (1) 89 86.7 (-2.3)
$0.75 96 97 (1) 96 97 (1) 92 84.7 (-7.3)
$1.00 88 92 (4) 88 92 (4) 91 83.7 (-7.3)
$1.25 78 85.3 (7.3) 78 85.3 (7.3) 85 79.3 (-5.7)
$1.50 67 74.3 (7.3) 67 74.3 (7.3) 81 77.7 (-3.3)
$1.75 54 64.3 (10.3) 54 64.3 (10.3) 78 66.7 (-11.3)
$2.00 52 75.3 (23.3) 52 75.3 (23.3) 73 61 (-12)
$2.25 44 81.7 (37.7) 44 81.7 (37.7) 73 59.3 (-13.7)
$2.50 50 82.3 (32.3) 53 83.7 (30.7) 68 69 (1)
$2.75 62 90 (28) 62 90 (28) 71 77 (6)
$3.00 70 95 (25) 70 95 (25) 75 85 (10)
$3.25 82 96 (14) 82 96 (14) 75 91 (16)
$3.50 91 97 (6) 91 97 (6) 85 95 (10)
$3.75 93 98 (5) 93 98 (5) 91 96 (5)
$4.00 97 100 (3) 97 100 (3) 94 96 (2)
Average   76.44   89.3 (12.8)***   76.63   89.4 (12.7)***   81.81   81 (-0.8)
* Significantly different from ABC at p = 0.10; one-tailed, paired t-test.
** Significantly different from ABC at p = 0.05; one-tailed, paired t-test.
*** Significantly different from ABC at p = 0.01; one-tailed, paired t-test.



Table 4: Numerical Experiment 3
Panel A: Numerical Experiment 3 Data Set Descriptive Statistics
Average statistics across s-values (except “control” row)
n Mean Median SD
Cost Driver 1 (longitudinal) 2,000 0.497 0.500 0.288
Cost Driver 2 (interaction) 2,000 0.500 0.498 0.289
Cost Driver 3 (interaction) 2,000 0.499 0.503 0.289
Cost Driver 4 (MLABC-unique) 2,000 0.489 0.441 0.350
Cost Driver 5 (ABC-unique) 2,000 0.503 0.506 0.290
Cost Resource 1 2,000 0.643 0.629 0.436
Cost Resource 2 2,000 0.645 0.629 0.424
Cost Resource 3 2,000 0.658 0.640 0.428
Cost Resource Total 2,000 1.946 1.949 1.056
MLABC Estimates (control) 200 1.822 1.807 0.503
MLABC Estimates (learning) 200 1.928 1.998 0.498



Panel B: Numerical Experiment 3 Results
Accuracy: correct decisions out of 200 (difference from control condition in parentheses)
Ceiling Prices:   Control   Learning (s=0.05)   Learning (s=0.10)   Learning (s=0.20)   Learning (s=0.40)   Learning (s=0.60)   Learning (s=0.80)   Learning (s=1.00)
$0.25 186 187 (1) 187 (1) 187 (1) 187 (1) 187 (1) 187 (1) 187 (1)
$0.50 177 185 (8) 185 (8) 185 (8) 185 (8) 185 (8) 185 (8) 185 (8)
$0.75 172 179 (7) 179 (7) 178 (6) 178 (6) 178 (6) 178 (6) 178 (6)
$1.00 167 173 (6) 174 (7) 176 (9) 175 (8) 176 (9) 172 (5) 175 (8)
$1.25 161 161 (0) 160 (-1) 161 (0) 161 (0) 161 (0) 161 (0) 160 (-1)
$1.50 151 148 (-3) 145 (-6) 143 (-8) 149 (-2) 141 (-10) 148 (-3) 144 (-7)
$1.75 136 138 (2) 136 (0) 138 (2) 139 (3) 137 (1) 135 (-1) 142 (6)
$2.00 126 125 (-1) 125 (-1) 129 (3) 127 (1) 127 (1) 126 (0) 125 (-1)
$2.25 127 130 (3) 128 (1) 131 (4) 130 (3) 130 (3) 130 (3) 127 (0)
$2.50 122 135 (13) 135 (13) 135 (13) 135 (13) 135 (13) 137 (15) 134 (12)
$2.75 146 144 (-2) 146 (0) 146 (0) 147 (1) 144 (-2) 142 (-4) 145 (-1)
$3.00 162 167 (5) 167 (5) 167 (5) 167 (5) 167 (5) 167 (5) 167 (5)
$3.25 181 174 (-7) 174 (-7) 174 (-7) 174 (-7) 174 (-7) 174 (-7) 174 (-7)
$3.50 189 184 (-5) 184 (-5) 184 (-5) 184 (-5) 184 (-5) 184 (-5) 184 (-5)
$3.75 196 188 (-8) 188 (-8) 188 (-8) 188 (-8) 188 (-8) 188 (-8) 188 (-8)
$4.00 198 193 (-5) 193 (-5) 193 (-5) 193 (-5) 193 (-5) 193 (-5) 193 (-5)
Average (all)   162.31   163.19 (0.88)   162.88 (0.56)   163.44 (1.13)   163.69 (1.38)   162.94 (0.63)   162.94 (0.63)   163.00 (0.69)
Average (≤$3)   152.75   156.00 (3.25)**   155.58 (2.83)**   156.33 (3.58)**   156.67 (3.92)***   155.67 (2.92)*   155.67 (2.92)**   155.75 (3.00)**
* Significantly different from control at p = 0.10; one-tailed, paired t-test.
** Significantly different from control at p = 0.05; one-tailed, paired t-test.
*** Significantly different from control at p = 0.01; one-tailed, paired t-test.
